WorldWideScience

Sample records for leave-one-out prediction error

  1. Leave-one-out prediction error of systolic arterial pressure time series under paced breathing

    CERN Document Server

    Ancona, N; Marinazzo, D; Nitti, L; Pellicoro, M; Pinna, G D; Stramaglia, S

    2004-01-01

In this paper we show that different physiological states and pathological conditions may be characterized in terms of the predictability of time series signals from the underlying biological system. In particular, we consider systolic arterial pressure time series from healthy subjects and Chronic Heart Failure patients undergoing paced respiration. We model the time series by the regularized least squares approach and quantify predictability by the leave-one-out error. We find that the entrainment mechanism associated with paced breathing, which renders the arterial blood pressure signal more regular and thus more predictable, is less effective in patients, and that this effect correlates with the severity of the heart failure. The leave-one-out error separates controls from patients and, when all orders of nonlinearity are taken into account, surviving patients from those in whom cardiac death occurred.
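
For a regularized least squares model of this kind, the leave-one-out error does not require refitting the model once per sample: it has an exact closed form through the hat matrix. Below is a minimal sketch on a synthetic series; the data, embedding order, and regularization strength are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a pressure time series: embed it so that each
# sample x_i is a window of past values and y_i is the next value.
series = np.sin(0.3 * np.arange(220)) + 0.1 * rng.standard_normal(220)
order = 5
X = np.column_stack([series[i:i + len(series) - order] for i in range(order)])
y = series[order:]

lam = 1e-2  # regularization strength (hypothetical choice)

# Regularized least squares: w = (X'X + lam*I)^{-1} X'y
A = X.T @ X + lam * np.eye(order)
w = np.linalg.solve(A, X.T @ y)

# Hat matrix H = X (X'X + lam*I)^{-1} X'; the LOO residual has the exact
# closed form e_i / (1 - H_ii), so no model needs to be refit.
H = X @ np.linalg.solve(A, X.T)
resid = y - X @ w
loo_resid = resid / (1.0 - np.diag(H))
loo_error = np.mean(loo_resid ** 2)
print(loo_error)
```

The closed form is what makes the leave-one-out error cheap enough to use as a routine predictability index.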

  2. Issues in Predictive Discriminant Analysis: Using and Interpreting the Leave-One-Out Jackknife Method and the Improvement-Over-Change "I" Index Effect Size.

    Science.gov (United States)

    Hwang, Dae-Yeop

Prediction of group membership is the goal of predictive discriminant analysis (PDA), and the accuracy of group classification is its focus. The purpose of this paper is to provide an overview of how PDA works and how it can be used to answer a variety of research questions. The paper explains what PDA is and why it is important, and it…

  3. Ensemble Kalman filter regularization using leave-one-out data cross-validation

    KAUST Repository

    Rayo Schiappacasse, Lautaro Jerónimo

    2012-09-19

    In this work, the classical leave-one-out cross-validation method for selecting a regularization parameter for the Tikhonov problem is implemented within the EnKF framework. Following the original concept, the regularization parameter is selected such that it minimizes the predictive error. Some ideas about the implementation, suitability and conceptual interest of the method are discussed. Finally, what will be called the data cross-validation regularized EnKF (dCVr-EnKF) is implemented in a 2D 2-phase synthetic oil reservoir experiment and the results analyzed.

  4. An optimized Leave One Out approach to efficiently identify outliers

    Science.gov (United States)

    Biagi, L.; Caldera, S.; Perego, D.

    2012-04-01

Least squares (LS) is a well-established and very popular statistical toolbox in geomatics. In particular, LS is routinely applied to adjust geodetic networks, both for classical surveys and for modern GNSS permanent networks, at the local and the global spatial scale. The linearized functional model between the observables and a vector of unknown parameters is given. A vector of N observations and its a priori covariance is available. Typically, the observation vector can be decomposed into n subvectors, internally correlated but reciprocally uncorrelated. This happens, for example, when double differences are built from undifferenced observations and are processed to estimate the network coordinates of a GNSS session. Note that when all the observations are independent, n=N: this is, for example, the case of the adjustment of a levelling network. LS provides estimates of the parameters, the observables, the residuals and the a posteriori variance. The testing of the initial hypotheses, the rejection of outliers and the estimation of accuracies and reliabilities can be performed at different levels of significance and power. However, LS is not robust: the a posteriori estimate of the variance can be biased by a single unmodelled outlier in the observations. In some cases, the unmodelled bias is spread across all the residuals and its identification is difficult. A possible solution to this problem is given by the so-called Leave One Out (LOO) approach. A particular subvector can be excluded from the adjustment, whose results are then used to check the residuals of the excluded subvector. Clearly, this check is more robust, because a bias in the subvector does not affect the adjustment results. The process can be iterated over all the subvectors. LOO is robust but can be very slow, since n adjustments are performed. An optimized LOO algorithm has been studied: the usual LS adjustment on all the observations is performed to obtain a 'batch' result. The…
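
The brute-force LOO check described above (before the paper's optimization, which avoids the n full re-adjustments) can be sketched as follows; the straight-line "network", noise level, and injected outlier are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy adjustment: a straight-line fit standing in for a small geodetic network.
n = 30
X = np.column_stack([np.ones(n), np.linspace(0, 10, n)])
y = X @ np.array([2.0, 0.5]) + 0.05 * rng.standard_normal(n)
y[17] += 1.0  # inject one gross error (outlier)

def loo_residuals(X, y):
    """Predict each observation from an adjustment that excludes it."""
    out = np.empty(len(y))
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        out[i] = y[i] - X[i] @ w
    return out

r = loo_residuals(X, y)
suspect = int(np.argmax(np.abs(r)))
print(suspect)
```

Because the suspect observation never influences the adjustment used to check it, its LOO residual exposes the gross error even when ordinary residuals have smeared it across the network.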

  5. Reliability analysis on resonance for low-pressure compressor rotor blade based on least squares support vector machine with leave-one-out cross-validation

    Directory of Open Access Journals (Sweden)

    Haifeng Gao

    2015-04-01

Full Text Available This article analyzes the resonant reliability of a low-pressure compressor rotor blade at a rotating speed of 6150.0 r/min, with the aim of improving the computational efficiency of reliability analysis. The study applies least squares support vector machine to predict the natural frequencies of the rotor blade considered. To build a more stable and reliable least squares support vector machine model, leave-one-out cross-validation is introduced to search for its optimal parameters. Least squares support vector machine with leave-one-out cross-validation is then used to analyze the resonant reliability. Additionally, the modal analysis of the rotor blade at 6150.0 r/min is treated as a tandem system to simplify the analysis and design process, and the randomness of the factors influencing the frequencies, such as material properties, structural dimensions and operating conditions, is taken into consideration. A back-propagation neural network, trained and tested on the same sets as the least squares support vector machine with leave-one-out cross-validation, is used for comparison. Finally, the statistical results show that the proposed approach is effective and feasible and can be applied to structural reliability analysis.
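
A minimal sketch of the core procedure, leave-one-out selection of LS-SVM hyperparameters, is given below. The data, kernel widths, and parameter grid are illustrative assumptions; the dual system follows the standard LS-SVM regression formulation (Suykens), not necessarily the exact variant used in the article.

```python
import numpy as np

rng = np.random.default_rng(3)

# Surrogate for frequency prediction: inputs are normalized design factors,
# the target is a smooth response with small scatter.
X = rng.uniform(-1, 1, size=(25, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.standard_normal(25)

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    # Standard LS-SVM KKT system: [[0, 1'], [1, K + I/gamma]] [b; a] = [0; y]
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]          # bias b, dual weights alpha

def lssvm_predict(Xtr, b, alpha, sigma, Xte):
    return rbf(Xte, Xtr, sigma) @ alpha + b

def loo_mse(gamma, sigma):
    errs = []
    for i in range(len(y)):
        m = np.arange(len(y)) != i
        b, a = lssvm_fit(X[m], y[m], gamma, sigma)
        errs.append(y[i] - lssvm_predict(X[m], b, a, sigma, X[i:i + 1])[0])
    return float(np.mean(np.square(errs)))

grid = [(g, s) for g in (1, 10, 100) for s in (0.3, 1.0, 3.0)]
gamma, sigma = min(grid, key=lambda p: loo_mse(*p))
print(gamma, sigma)
```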

  6. A leave-one-out cross-validation SAS macro for the identification of markers associated with survival.

    Science.gov (United States)

    Rushing, Christel; Bulusu, Anuradha; Hurwitz, Herbert I; Nixon, Andrew B; Pang, Herbert

    2015-02-01

    A proper internal validation is necessary for the development of a reliable and reproducible prognostic model for external validation. Variable selection is an important step for building prognostic models. However, not many existing approaches couple the ability to specify the number of covariates in the model with a cross-validation algorithm. We describe a user-friendly SAS macro that implements a score selection method and a leave-one-out cross-validation approach. We discuss the method and applications behind this algorithm, as well as details of the SAS macro.

  7. A Characterization of Prediction Errors

    OpenAIRE

    Meek, Christopher

    2016-01-01

Understanding prediction errors and determining how to fix them is critical to building effective predictive systems. In this paper, we delineate four types of prediction errors and demonstrate that these four types characterize all prediction errors. In addition, we describe potential remedies and tools that can be used to reduce the uncertainty when trying to determine the source of a prediction error and when trying to take action to remove it.

  8. A generalization error estimate for nonlinear systems

    DEFF Research Database (Denmark)

    Larsen, Jan

    1992-01-01

… models of linear and simple neural network systems. Within the linear system, GEN is compared to the final prediction error criterion and the leave-one-out cross-validation technique. It was found that the GEN estimate of the true generalization error is less biased on the average. It is concluded…

  9. Prediction of discretization error using the error transport equation

    Science.gov (United States)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
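
For reference, the Richardson-extrapolation baseline mentioned above can be sketched in a few lines for a second-order central difference; the function, step size, and refinement ratio are illustrative choices, not the study's flow problem.

```python
import numpy as np

# Second-order central difference for f'(x0), with f(x) = sin(x).
f, x0 = np.sin, 1.0

def dfdx(h):
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

h, p, r = 0.1, 2, 2  # coarse step, formula order, grid refinement ratio
coarse, fine = dfdx(h), dfdx(h / r)

# Richardson estimate of the discretization error on the fine grid:
# err_fine ~ (fine - coarse) / (r**p - 1)
err_est = (fine - coarse) / (r ** p - 1)
err_true = np.cos(x0) - fine  # exact derivative of sin is cos
print(err_est, err_true)
```

This is the two-grid benchmark the ETE approach tries to match using the solution on a single grid.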

  10. MPC-Relevant Prediction-Error Identification

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

A prediction-error-method tailored for model based predictive control is presented. The prediction-error method studied is based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model…

  11. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a realization of a continuous-discrete multivariate stochastic transfer function model. The proposed prediction-error methods are demonstrated for a SISO system parameterized by the transfer functions with time delays of a continuous-discrete-time linear stochastic system. The simulations for this case suggest… computational resources. The identification method is suitable for predictive control.

  12. Interactions of timing and prediction error learning.

    Science.gov (United States)

    Kirkpatrick, Kimberly

    2014-01-01

Timing and prediction error learning have historically been treated as independent processes, but growing evidence indicates that they are not orthogonal. Timing emerges at the earliest time point at which conditioned responses are observed, and temporal variables modulate prediction error learning in both simple conditioning and cue competition paradigms. In addition, prediction errors, through changes in reward magnitude or value, alter the timing of behavior. Thus, there appears to be a bi-directional interaction between timing and prediction error learning. Modern theories have attempted to integrate the two processes with mixed success. A neurocomputational approach to theory development is espoused, which draws on neurobiological evidence to guide and constrain computational model development. Heuristics for future model development are presented with the goal of sparking new approaches to theory development in the timing and prediction error fields.

  13. Spontaneous prediction error generation in schizophrenia.

    Directory of Open Access Journals (Sweden)

    Yuichi Yamashita

Goal-directed human behavior is enabled by hierarchically-organized neural systems that process executive commands associated with higher brain areas in response to sensory and motor signals from lower brain areas. Psychiatric diseases and psychotic conditions are postulated to involve disturbances in these hierarchical network interactions, but the mechanism for how aberrant disease signals are generated in networks, and a systems-level framework linking disease signals to specific psychiatric symptoms remains undetermined. In this study, we show that neural networks containing schizophrenia-like deficits can spontaneously generate uncompensated error signals with properties that explain psychiatric disease symptoms, including fictive perception, altered sense of self, and unpredictable behavior. To distinguish dysfunction at the behavioral versus network level, we monitored the interactive behavior of a humanoid robot driven by the network. Mild perturbations in network connectivity resulted in the spontaneous appearance of uncompensated prediction errors and altered interactions within the network without external changes in behavior, correlating to the fictive sensations and agency experienced by episodic disease patients. In contrast, more severe deficits resulted in unstable network dynamics resulting in overt changes in behavior similar to those observed in chronic disease patients. These findings demonstrate that prediction error disequilibrium may represent an intrinsic property of schizophrenic brain networks reporting the severity and variability of disease symptoms. Moreover, these results support a systems-level model for psychiatric disease that features the spontaneous generation of maladaptive signals in hierarchical neural networks.

  14. Working memory load strengthens reward prediction errors.

    Science.gov (United States)

    Collins, Anne G E; Ciullo, Brittany; Frank, Michael J; Badre, David

    2017-03-20

Reinforcement learning in simple instrumental tasks is usually modeled as a monolithic process in which reward prediction errors are used to update expected values of choice options. This modeling ignores the different contributions of the different memory and decision-making systems thought to contribute even to simple learning. In an fMRI experiment, we asked how working memory and incremental reinforcement learning processes interact to guide human learning. Working memory load was manipulated by varying the number of stimuli to be learned across blocks. Behavioral results and computational modeling confirmed that learning was best explained as a mixture of two mechanisms: a fast, capacity-limited, and delay-sensitive working memory process together with slower reinforcement learning. Model-based analysis of fMRI data showed that striatum and lateral prefrontal cortex were sensitive to reward prediction error, as shown previously, but critically, these signals were reduced when the learning problem was within the capacity of working memory. The degree of this neural interaction related to individual differences in the use of working memory to guide behavioral learning. These results indicate that the two systems do not process information independently, but rather interact during learning. SIGNIFICANCE STATEMENT: Reinforcement learning theory has been remarkably productive at improving our understanding of instrumental learning as well as dopaminergic and striatal network function across many mammalian species. However, this neural network is only one contributor to human learning, and other mechanisms, such as prefrontal cortex working memory, also play a key role. Our results show in addition that these other players interact with the dopaminergic RL system, interfering with its key computation of reward prediction errors.

  15. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    Science.gov (United States)

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…

  16. Relative Effects of Trajectory Prediction Errors on the AAC Autoresolver

    Science.gov (United States)

    Lauderdale, Todd

    2011-01-01

    Trajectory prediction is fundamental to automated separation assurance. Every missed alert, false alert and loss of separation can be traced to one or more errors in trajectory prediction. These errors are a product of many different sources including wind prediction errors, inferred pilot intent errors, surveillance errors, navigation errors and aircraft weight estimation errors. This study analyzes the impact of six different types of errors on the performance of an automated separation assurance system composed of a geometric conflict detection algorithm and the Advanced Airspace Concept Autoresolver resolution algorithm. Results show that, of the error sources considered in this study, top-of-descent errors were the leading contributor to missed alerts and failed resolution maneuvers. Descent-speed errors were another significant contributor, as were cruise-speed errors in certain situations. The results further suggest that increasing horizontal detection and resolution standards are not effective strategies for mitigating these types of error sources.

  17. How prediction errors shape perception, attention and motivation

    Directory of Open Access Journals (Sweden)

    Hanneke EM Den Ouden

    2012-12-01

Prediction errors are a central notion in theoretical models of reinforcement learning, perceptual inference, decision-making and cognition, and prediction error signals have been reported across a wide range of brain regions and experimental paradigms. Here, we will make an attempt to see the forest for the trees, considering the commonalities and differences of reported prediction error signals in light of recent suggestions that the computation of prediction errors forms a fundamental mode of brain function. We discuss where different types of prediction errors are encoded, how they are generated, and the different functional roles they fulfil. We suggest that while encoding of prediction errors is a common computation across brain regions, the content and function of these error signals can be very different, and are determined by the afferent and efferent connections within the neural circuitry in which they arise.

  18. Prediction and simulation errors in parameter estimation for nonlinear systems

    Science.gov (United States)

    Aguirre, Luis A.; Barbosa, Bruno H. G.; Braga, Antônio P.

    2010-11-01

This article compares the pros and cons of using prediction error and simulation error to define cost functions for parameter estimation in the context of nonlinear system identification. To avoid being influenced by estimators of the least squares family (e.g. prediction error methods), and in order to be able to solve non-convex optimisation problems (e.g. minimisation of some norm of the free-run simulation error), evolutionary algorithms were used. Simulated examples which include polynomial, rational and neural network models are discussed. Our results, obtained using different model classes, show that, in general, the use of simulation error is preferable to prediction error. An interesting exception to this rule seems to be the equation error case when the model structure includes the true model. In the case of error-in-variables, although parameter estimation is biased in both cases, the algorithm based on simulation error is more robust.
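
The distinction between the two cost functions can be illustrated on a first-order system with a known input; the model, noise level, and grid search below are illustrative simplifications (the article uses evolutionary algorithms and richer model classes).

```python
import numpy as np

rng = np.random.default_rng(4)

# First-order system y(k) = a*y(k-1) + u(k-1) + e(k) with known input u.
a_true, n = 0.8, 400
u = rng.standard_normal(n)
e = 0.1 * rng.standard_normal(n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = a_true * y[k - 1] + u[k - 1] + e[k]

# One-step prediction error estimate: least squares on the equation
# y(k) - u(k-1) = a*y(k-1) + e(k), using the measured past output.
a_pe = (y[:-1] @ (y[1:] - u[:-1])) / (y[:-1] @ y[:-1])

# Free-run simulation error estimate: simulate the model from u alone,
# then score the whole simulated trajectory against the data.
def sim_cost(a):
    ys = np.zeros(n)
    for k in range(1, n):
        ys[k] = a * ys[k - 1] + u[k - 1]
    return np.sum((y - ys) ** 2)

grid = np.linspace(0.5, 0.99, 200)
a_se = grid[np.argmin([sim_cost(a) for a in grid])]
print(a_pe, a_se)
```

The prediction-error cost is quadratic in the parameter and solvable in closed form, while the simulation-error cost is generally non-convex, which is why derivative-free search is used here (and evolutionary algorithms in the article).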

  19. Prediction with measurement errors in finite populations.

    Science.gov (United States)

    Singer, Julio M; Stanek, Edward J; Lencina, Viviana B; González, Luz Mery; Li, Wenjun; Martino, Silvina San

    2012-02-01

We address the problem of selecting the best linear unbiased predictor (BLUP) of the latent value (e.g., serum glucose fasting level) of sample subjects with heteroskedastic measurement errors. Using a simple example, we compare the usual mixed model BLUP to a similar predictor based on a mixed model framed in a finite population (FPMM) setup with two sources of variability, the first of which corresponds to simple random sampling and the second, to heteroskedastic measurement errors. Under this last approach, we show that when measurement errors are subject-specific, the BLUP shrinkage constants are based on a pooled measurement error variance, as opposed to the individual ones generally considered for the usual mixed model BLUP. In contrast, when the heteroskedastic measurement errors are measurement condition-specific, the FPMM BLUP involves different shrinkage constants. We also show that in this setup, when measurement errors are subject-specific, the usual mixed model predictor is biased but has a smaller mean squared error than the FPMM BLUP, which points to some difficulties in the interpretation of such predictors.

  20. Critical evidence for the prediction error theory in associative learning.

    Science.gov (United States)

    Terao, Kanta; Matsumoto, Yukihisa; Mizunami, Makoto

    2015-03-10

    In associative learning in mammals, it is widely accepted that the discrepancy, or error, between actual and predicted reward determines whether learning occurs. Complete evidence for the prediction error theory, however, has not been obtained in any learning systems: Prediction error theory stems from the finding of a blocking phenomenon, but blocking can also be accounted for by other theories, such as the attentional theory. We demonstrated blocking in classical conditioning in crickets and obtained evidence to reject the attentional theory. To obtain further evidence supporting the prediction error theory and rejecting alternative theories, we constructed a neural model to match the prediction error theory, by modifying our previous model of learning in crickets, and we tested a prediction from the model: the model predicts that pharmacological intervention of octopaminergic transmission during appetitive conditioning impairs learning but not formation of reward prediction itself, and it thus predicts no learning in subsequent training. We observed such an "auto-blocking", which could be accounted for by the prediction error theory but not by other competitive theories to account for blocking. This study unambiguously demonstrates validity of the prediction error theory in associative learning.
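
The blocking phenomenon that motivates the prediction error theory falls directly out of the classic Rescorla-Wagner delta rule, sketched below; the learning rate and trial counts are arbitrary illustrative choices.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """trials: list of (set_of_cues, rewarded). Returns cue -> strength."""
    V = {}
    for cues, rewarded in trials:
        pred = sum(V.get(c, 0.0) for c in cues)
        delta = (lam if rewarded else 0.0) - pred     # prediction error
        for c in cues:
            V[c] = V.get(c, 0.0) + alpha * delta      # shared error term
    return V

# Phase 1: cue A alone predicts reward; Phase 2: compound AB rewarded.
trials = [({"A"}, True)] * 30 + [({"A", "B"}, True)] * 30
V = rescorla_wagner(trials)

# Control: the same number of compound trials without Phase 1 pretraining.
V_ctrl = rescorla_wagner([({"A", "B"}, True)] * 30)
print(V["B"], V_ctrl["B"])  # blocked B stays far weaker than control B
```

Because A already predicts the reward after Phase 1, the prediction error in Phase 2 is near zero and B acquires almost no strength: blocking emerges from the error term alone, with no attentional mechanism needed.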

  1. Dopamine neurons share common response function for reward prediction error.

    Science.gov (United States)

    Eshel, Neir; Tian, Ju; Bukwich, Michael; Uchida, Naoshige

    2016-03-01

    Dopamine neurons are thought to signal reward prediction error, or the difference between actual and predicted reward. How dopamine neurons jointly encode this information, however, remains unclear. One possibility is that different neurons specialize in different aspects of prediction error; another is that each neuron calculates prediction error in the same way. We recorded from optogenetically identified dopamine neurons in the lateral ventral tegmental area (VTA) while mice performed classical conditioning tasks. Our tasks allowed us to determine the full prediction error functions of dopamine neurons and compare them to each other. We found marked homogeneity among individual dopamine neurons: their responses to both unexpected and expected rewards followed the same function, just scaled up or down. As a result, we were able to describe both individual and population responses using just two parameters. Such uniformity ensures robust information coding, allowing each dopamine neuron to contribute fully to the prediction error signal.

  2. Visuomotor adaptation needs a validation of prediction error by feedback error

    Directory of Open Access Journals (Sweden)

    Valérie eGaveau

    2014-11-01

The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In the ‘terminal feedback error’ condition, subjects were allowed to view their hand only at movement end, simultaneously with viewing of the target. In the ‘movement prediction error’ condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the ‘terminal feedback error’ condition only. As long as subjects remained unaware of the optical deviation and self-assigned their pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are…

  3. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  4. A causal link between prediction errors, dopamine neurons and learning.

    Science.gov (United States)

    Steinberg, Elizabeth E; Keiflin, Ronald; Boivin, Josiah R; Witten, Ilana B; Deisseroth, Karl; Janak, Patricia H

    2013-07-01

    Situations in which rewards are unexpectedly obtained or withheld represent opportunities for new learning. Often, this learning includes identifying cues that predict reward availability. Unexpected rewards strongly activate midbrain dopamine neurons. This phasic signal is proposed to support learning about antecedent cues by signaling discrepancies between actual and expected outcomes, termed a reward prediction error. However, it is unknown whether dopamine neuron prediction error signaling and cue-reward learning are causally linked. To test this hypothesis, we manipulated dopamine neuron activity in rats in two behavioral procedures, associative blocking and extinction, that illustrate the essential function of prediction errors in learning. We observed that optogenetic activation of dopamine neurons concurrent with reward delivery, mimicking a prediction error, was sufficient to cause long-lasting increases in cue-elicited reward-seeking behavior. Our findings establish a causal role for temporally precise dopamine neuron signaling in cue-reward learning, bridging a critical gap between experimental evidence and influential theoretical frameworks.

  5. Temporal prediction errors modulate cingulate-insular coupling.

    Science.gov (United States)

    Limongi, Roberto; Sutherland, Steven C; Zhu, Jian; Young, Michael E; Habib, Reza

    2013-05-01

Prediction error (i.e., the difference between the expected and the actual event's outcome) mediates adaptive behavior. Activity in the anterior mid-cingulate cortex (aMCC) and in the anterior insula (aINS) is associated with the commission of prediction errors under uncertainty. We propose a dynamic causal model of effective connectivity (i.e., neuronal coupling) between the aMCC, the aINS, and the striatum in which the task context drives activity in the aINS and the temporal prediction errors modulate extrinsic cingulate-insular connections. With functional magnetic resonance imaging, we scanned 15 participants while they performed a temporal prediction task. They observed visual animations and predicted when a stationary ball began moving after being contacted by another moving ball. To induce uncertainty-driven prediction errors, we introduced spatial gaps and temporal delays between the balls. Classical and Bayesian fMRI analyses provided evidence to support that the aMCC-aINS system along with the striatum responds not only when humans predict whether a dynamic event occurs but also when it occurs. Our results reveal that the insula is the entry port of a three-region pathway involved in the processing of temporal predictions. Moreover, prediction errors, rather than attentional demands, task difficulty, or task duration, exert an influence in the aMCC-aINS system. Prediction errors debilitate the effect of the aMCC on the aINS. Finally, our computational model provides a way forward to characterize the physiological parallel of temporal prediction errors elicited in dynamic tasks.

  6. Standard Errors of Prediction for the Vineland Adaptive Behavior Scales.

    Science.gov (United States)

    Atkinson, Leslie

    1990-01-01

Offers standard errors of prediction and confidence intervals for the Vineland Adaptive Behavior Scales (VABS) that help in deciding whether variation in obtained scores of a scale administered to the same person more than once is a result of measurement error or reflects actual change in the examinee's functional level. Presented values were…

  7. PREDICTING THE BOILING POINT OF PCDD/Fs BY THE QSPR METHOD BASED ON THE MOLECULAR DISTANCE-EDGE VECTOR INDEX

    Directory of Open Access Journals (Sweden)

    Long Jiao

    2015-05-01

Full Text Available The quantitative structure property relationship (QSPR) for the boiling point (Tb) of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and Tb was modeled by using multivariate linear regression (MLR) and artificial neural network (ANN) methods, respectively. Leave-one-out cross validation and external validation were carried out to assess the prediction performance of the models developed. For the MLR method, the prediction root mean square relative error (RMSRE) of leave-one-out cross validation and external validation was 1.77 and 1.23, respectively. For the ANN method, the prediction RMSRE of leave-one-out cross validation and external validation was 1.65 and 1.16, respectively. A quantitative relationship between the MDEV index and Tb of PCDD/Fs was demonstrated. Both MLR and ANN are practicable for modeling this relationship. The MLR model and ANN model developed can be used to predict the Tb of PCDD/Fs. Thus, the Tb of each PCDD/F was predicted by the developed models.
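The validation scheme described above can be sketched in a few lines: leave-one-out cross-validation of a linear model, scored by root mean square relative error (RMSRE). The data, the helper `loo_rmsre`, and the percent convention below are assumptions for illustration, not the PCDD/F dataset or the paper's descriptors.

```python
import numpy as np

# Leave-one-out cross-validation of a simple linear regression, scored
# by RMSRE (here expressed in percent). Synthetic data, for illustration.
def loo_rmsre(X, y):
    n = len(y)
    sq_rel_errs = []
    for i in range(n):
        mask = np.arange(n) != i                       # hold out sample i
        A = np.column_stack([np.ones(mask.sum()), X[mask]])
        beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
        pred = np.concatenate([[1.0], np.atleast_1d(X[i])]) @ beta
        sq_rel_errs.append(((pred - y[i]) / y[i]) ** 2)
    return 100.0 * np.sqrt(np.mean(sq_rel_errs))       # percent RMSRE

rng = np.random.default_rng(0)
X = rng.uniform(1, 10, size=50)
y = 300 + 20 * X + rng.normal(0, 1, size=50)           # near-linear response
rmsre = loo_rmsre(X, y)
```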

  8. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix

    National Research Council Canada - National Science Library

    John B Holmes; Ken G Dodds; Michael A Lee

    2017-01-01

    .... While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix...

  9. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors.

    Directory of Open Access Journals (Sweden)

    Peter R Murphy

Full Text Available Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES.

  10. Prediction error, ketamine and psychosis: An updated model.

    Science.gov (United States)

    Corlett, Philip R; Honey, Garry D; Fletcher, Paul C

    2016-11-01

    In 2007, we proposed an explanation of delusion formation as aberrant prediction error-driven associative learning. Further, we argued that the NMDA receptor antagonist ketamine provided a good model for this process. Subsequently, we validated the model in patients with psychosis, relating aberrant prediction error signals to delusion severity. During the ensuing period, we have developed these ideas, drawing on the simple principle that brains build a model of the world and refine it by minimising prediction errors, as well as using it to guide perceptual inferences. While previously we focused on the prediction error signal per se, an updated view takes into account its precision, as well as the precision of prior expectations. With this expanded perspective, we see several possible routes to psychotic symptoms - which may explain the heterogeneity of psychotic illness, as well as the fact that other drugs, with different pharmacological actions, can produce psychotomimetic effects. In this article, we review the basic principles of this model and highlight specific ways in which prediction errors can be perturbed, in particular considering the reliability and uncertainty of predictions. The expanded model explains hallucinations as perturbations of the uncertainty mediated balance between expectation and prediction error. Here, expectations dominate and create perceptions by suppressing or ignoring actual inputs. Negative symptoms may arise due to poor reliability of predictions in service of action. By mapping from biology to belief and perception, the account proffers new explanations of psychosis. However, challenges remain. We attempt to address some of these concerns and suggest future directions, incorporating other symptoms into the model, building towards better understanding of psychosis. © The Author(s) 2016.

  11. Continuous-Discrete Time Prediction-Error Identification Relevant for Linear Model Predictive Control

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

A prediction-error method tailored for model-based predictive control is presented. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model. The linear discrete-time stochastic state space model is realized from a continuous-discrete-time linear stochastic system specified using transfer functions with time-delays. It is argued that the prediction-error criterion should be selected such that it is compatible with the objective function of the predictive controller in which the model…

  12. Arithmetic and local circuitry underlying dopamine prediction errors.

    Science.gov (United States)

    Eshel, Neir; Bukwich, Michael; Rao, Vinod; Hemmelder, Vivian; Tian, Ju; Uchida, Naoshige

    2015-09-10

    Dopamine neurons are thought to facilitate learning by comparing actual and expected reward. Despite two decades of investigation, little is known about how this comparison is made. To determine how dopamine neurons calculate prediction error, we combined optogenetic manipulations with extracellular recordings in the ventral tegmental area while mice engaged in classical conditioning. Here we demonstrate, by manipulating the temporal expectation of reward, that dopamine neurons perform subtraction, a computation that is ideal for reinforcement learning but rarely observed in the brain. Furthermore, selectively exciting and inhibiting neighbouring GABA (γ-aminobutyric acid) neurons in the ventral tegmental area reveals that these neurons are a source of subtraction: they inhibit dopamine neurons when reward is expected, causally contributing to prediction-error calculations. Finally, bilaterally stimulating ventral tegmental area GABA neurons dramatically reduces anticipatory licking to conditioned odours, consistent with an important role for these neurons in reinforcement learning. Together, our results uncover the arithmetic and local circuitry underlying dopamine prediction errors.

  13. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  14. Evaluating Random Forests for Survival Analysis Using Prediction Error Curves

    Directory of Open Access Journals (Sweden)

    Ulla B. Mogensen

    2012-09-01

    Full Text Available Prediction error curves are increasingly used to assess and compare predictions in survival analysis. This article surveys the R package pec which provides a set of functions for efficient computation of prediction error curves. The software implements inverse probability of censoring weights to deal with right censored data and several variants of cross-validation to deal with the apparent error problem. In principle, all kinds of prediction models can be assessed, and the package readily supports most traditional regression modeling strategies, like Cox regression or additive hazard regression, as well as state of the art machine learning methods such as random forests, a nonparametric method which provides promising alternatives to traditional strategies in low and high-dimensional settings. We show how the functionality of pec can be extended to yet unsupported prediction models. As an example, we implement support for random forest prediction models based on the R packages randomSurvivalForest and party. Using data of the Copenhagen Stroke Study we use pec to compare random forests to a Cox regression model derived from stepwise variable selection.
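The quantity such prediction error curves trace can be illustrated with the time-dependent Brier score. The minimal sketch below omits censoring (and hence the inverse-probability-of-censoring weights that pec applies) and uses made-up event times and survival probabilities; `brier_at` is a hypothetical helper, not a pec function.

```python
import numpy as np

# Brier score at a single time horizon t, without censoring: the squared
# difference between event-free status at t and the model's predicted
# survival probability S(t | x_i), averaged over subjects.
def brier_at(t, event_times, predicted_surv_at_t):
    status = (event_times > t).astype(float)   # 1 = still event-free at t
    return np.mean((status - predicted_surv_at_t) ** 2)

event_times = np.array([2.0, 5.0, 1.0, 8.0, 3.0])
surv_probs = np.array([0.6, 0.7, 0.2, 0.9, 0.4])   # hypothetical S(4 | x_i)
score = brier_at(4.0, event_times, surv_probs)
```

A prediction error curve is simply this score evaluated over a grid of horizons t.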

  15. Neural correlates of error prediction in a complex motor task

    Directory of Open Access Journals (Sweden)

    Lisa Katharina Maurer

    2015-08-01

Full Text Available The goal of the study was to quantify error prediction processes via neural correlates in the electroencephalogram (EEG). Access to such a neural signal will make it possible to gain insights into functional and temporal aspects of error perception in the course of learning. We focused on the error negativity (Ne) or error-related negativity (ERN) as a candidate index for the prediction processes. We used a virtual goal-oriented throwing task where participants used a lever to throw a virtual ball displayed on a computer monitor with the goal of hitting a virtual target as often as possible. After one day of practice with 400 trials, participants performed another 400 trials on a second day with EEG measurement. After error trials (i.e., when the ball missed the target), we found a sharp negative deflection in the EEG peaking 250 ms after ball release (mean amplitude: t = -2.5, df = 20, p = .02) and another broader negative deflection following the first, reaching from about 300 ms after release until unambiguous visual KR (hitting or passing by the target; mean amplitude: t = -7.5, df = 20, p < .001). According to the shape and timing of the two deflections, we assume that the first deflection represents a predictive Ne/ERN (prediction based on efferent commands and proprioceptive feedback), while the second deflection might have arisen from action monitoring.

  16. Controlling motion prediction errors in radiotherapy with relevance vector machines.

    Science.gov (United States)

    Dürichen, Robert; Wissel, Tobias; Schweikard, Achim

    2015-04-01

    Robotic radiotherapy can precisely ablate moving tumors when time latencies have been compensated. Recently, relevance vector machines (RVM), a probabilistic regression technique, outperformed six other prediction algorithms for respiratory compensation. The method has the distinct advantage that each predicted point is assumed to be drawn from a normal distribution. Second-order statistics, the predicted variance, were used to control RVM prediction error during a treatment and to construct hybrid prediction algorithms. First, the duty cycle and the precision were correlated to the variance by interrupting the treatment if the variance exceeds a threshold. Second, two hybrid algorithms based on the variance were developed, one consisting of multiple RVMs (HYB(RVM)) and the other of a combination between a wavelet-based least mean square algorithm (wLMS) and a RVM (HYB(wLMS-RVM)). The variance for different motion traces was analyzed to reveal a characteristic variance pattern which gives insight in what kind of prediction errors can be controlled by the variance. Limiting the variance by a threshold resulted in an increased precision with a decreased duty cycle. All hybrid algorithms showed an increased prediction accuracy compared to using only their individual algorithms. The best hybrid algorithm, HYB(RVM), can decrease the mean RMSE over all 304 motion traces from 0.18 mm for a linear RVM to 0.17 mm. The predicted variance was shown to be an efficient metric to control prediction errors, resulting in a more robust radiotherapy treatment. The hybrid algorithm HYB(RVM) could be translated to clinical practice. It does not require further parameters, can be completely parallelised and easily further extended.
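The variance-gating idea in this record can be illustrated without an actual RVM: assume each prediction carries a predicted variance, and interrupt treatment whenever that variance exceeds a threshold, trading duty cycle for precision. The distributions and threshold below are arbitrary stand-ins, not the paper's motion traces or algorithm.

```python
import numpy as np

# Toy illustration of variance gating: simulate per-point predicted
# variances and errors whose spread actually follows those variances,
# then treat only the points the (hypothetical) model is confident about.
rng = np.random.default_rng(3)
n = 10_000
var = rng.uniform(0.01, 1.0, size=n)      # predicted variance per point
err = rng.normal(scale=np.sqrt(var))      # realized error follows the variance

def gated_rmse(threshold):
    keep = var <= threshold               # gate out low-confidence points
    duty_cycle = keep.mean()              # fraction of time treatment runs
    rmse = np.sqrt(np.mean(err[keep] ** 2))
    return duty_cycle, rmse

dc_all, rmse_all = gated_rmse(np.inf)     # no gating: full duty cycle
dc_gated, rmse_gated = gated_rmse(0.25)   # gating: lower duty cycle, lower RMSE
```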

  17. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance

    NARCIS (Netherlands)

    Hickey, J.M.; Veerkamp, R.F.; Calus, M.P.L.; Mulder, H.A.; Thompson, R.

    2009-01-01

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo

  19. Estimation of prediction error variances via Monte Carlo sampling methods using different formulations of the prediction error variance.

    Science.gov (United States)

    Hickey, John M; Veerkamp, Roel F; Calus, Mario P L; Mulder, Han A; Thompson, Robin

    2009-02-09

    Calculation of the exact prediction error variance covariance matrix is often computationally too demanding, which limits its application in REML algorithms, the calculation of accuracies of estimated breeding values and the control of variance of response to selection. Alternatively Monte Carlo sampling can be used to calculate approximations of the prediction error variance, which converge to the true values if enough samples are used. However, in practical situations the number of samples, which are computationally feasible, is limited. The objective of this study was to compare the convergence rate of different formulations of the prediction error variance calculated using Monte Carlo sampling. Four of these formulations were published, four were corresponding alternative versions, and two were derived as part of this study. The different formulations had different convergence rates and these were shown to depend on the number of samples and on the level of prediction error variance. Four formulations were competitive and these made use of information on either the variance of the estimated breeding value and on the variance of the true breeding value minus the estimated breeding value or on the covariance between the true and estimated breeding values.
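Two of the kinds of formulations compared can be illustrated in the simplest shrinkage setting, where the exact PEV is known in closed form. Everything below (the one-record model y = u + e, the variances, the BLUP-style shrinkage factor, the sample size) is an assumption for illustration, not one of the paper's formulations applied to a real evaluation.

```python
import numpy as np

# Monte Carlo estimates of the prediction error variance (PEV) for the
# shrinkage predictor u_hat = lam * y in the model y = u + e, with
# sigma_u^2 = sigma_e^2 = 1. The exact PEV is 0.5; two formulations,
# Var(u - u_hat) and Var(u) - Var(u_hat), both converge to it.
rng = np.random.default_rng(42)
n = 200_000
sigma_u2, sigma_e2 = 1.0, 1.0
lam = sigma_u2 / (sigma_u2 + sigma_e2)     # shrinkage (regression) factor

u = rng.normal(0.0, np.sqrt(sigma_u2), n)  # true breeding values
e = rng.normal(0.0, np.sqrt(sigma_e2), n)  # residuals
u_hat = lam * (u + e)                      # predicted breeding values

pev_direct = np.var(u - u_hat)             # formulation 1: Var(u - u_hat)
pev_diff = sigma_u2 - np.var(u_hat)        # formulation 2: Var(u) - Var(u_hat)
```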

  20. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
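The contrast between the two statistics is easy to demonstrate numerically: the standard deviation among individuals stays roughly fixed while the standard error of the mean shrinks as 1/sqrt(n). The values below are synthetic leaf-calcium-like numbers, not the sugar maple data.

```python
import numpy as np

# Standard deviation (variation among individuals, the basis of a
# prediction interval) versus standard error of the mean (uncertainty
# in the mean, the basis of a confidence interval). Synthetic data.
rng = np.random.default_rng(1)
leaves = rng.normal(loc=8.0, scale=2.0, size=900)  # individual Ca values

sd = leaves.std(ddof=1)          # uncertainty about any one individual
se = sd / np.sqrt(leaves.size)   # uncertainty about the mean: sd / sqrt(n)

# With n = 900, se is 30x smaller than sd: predicting the mean is far
# more certain than predicting an individual, and the gap grows with n.
```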

  1. Data Quality in Linear Regression Models: Effect of Errors in Test Data and Errors in Training Data on Predictive Accuracy

    Directory of Open Access Journals (Sweden)

    Barbara D. Klein

    1999-01-01

    Full Text Available Although databases used in many organizations have been found to contain errors, little is known about the effect of these errors on predictions made by linear regression models. The paper uses a real-world example, the prediction of the net asset values of mutual funds, to investigate the effect of data quality on linear regression models. The results of two experiments are reported. The first experiment shows that the error rate and magnitude of error in data used in model prediction negatively affect the predictive accuracy of linear regression models. The second experiment shows that the error rate and the magnitude of error in data used to build the model positively affect the predictive accuracy of linear regression models. All findings are statistically significant. The findings have managerial implications for users and builders of linear regression models.

  2. LÉVY-BASED ERROR PREDICTION IN CIRCULAR SYSTEMATIC SAMPLING

    Directory of Open Access Journals (Sweden)

    Kristjana Ýr Jónsdóttir

    2013-06-01

Full Text Available In the present paper, Lévy-based error prediction in circular systematic sampling is developed. A model-based statistical setting as in Hobolth and Jensen (2002) is used, but the assumption that the measurement function is Gaussian is relaxed. The measurement function is represented as a periodic stationary stochastic process X obtained by a kernel smoothing of a Lévy basis. The process X may have an arbitrary covariance function. The distribution of the error predictor, based on measurements in n systematic directions, is derived. Statistical inference is developed for the model parameters in the case where the covariance function follows the celebrated p-order covariance model.

  3. Prediction Error During Functional and Non-Functional Action Sequences

    DEFF Research Database (Denmark)

    Nielbo, Kristoffer Laigaard; Sørensen, Jesper

    2013-01-01

By means of the computational approach, the present study investigates the difference between observation of functional behavior (i.e., actions involving necessary integration of subparts) and non-functional behavior (i.e., actions lacking necessary integration of subparts) in terms of prediction error. Non-functionality in this proximal sense is a feature of many socio-cultural practices, such as those found in religious rituals, private and social, as well as pathological practices, such as the ritualized behavior found among people suffering from Obsessive-Compulsive Disorder (OCD). A recent behavioral study has shown that human subjects segment non-functional behavior in a more fine-grained way than functional behavior. This increase in segmentation rate implies that non-functionality elicits a stronger error signal. To further explore the implications, two computer simulations using simple…

  4. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford, and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the cosmic ray LET (linear energy transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time, Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code, and the effective flux method of Binder, which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  5. Motion Compensation With Prediction Error Using Ezw Wavelet Coefficients

    Directory of Open Access Journals (Sweden)

Gopinath M (M.Tech)

    2016-05-01

Full Text Available The video compression technique is used to represent any video with minimal distortion. Among the compression techniques of image processing, DWT is more significant because of its multi-resolution properties. DCT, as used in video coding, often produces undesirable artifacts. The main objective of video coding is to reduce spatial and temporal redundancies. In this proposed work a new encoder is designed by exploiting the multi-resolution properties of DWT to get the prediction error, using a motion estimation technique to avoid the translation invariance.

  6. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is large (around 10^1 to 10^2, i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  7. Estimating Model Prediction Error: Should You Treat Predictions as Fixed or Random?

    Science.gov (United States)

    Wallach, Daniel; Thorburn, Peter; Asseng, Senthold; Challinor, Andrew J.; Ewert, Frank; Jones, James W.; Rotter, Reimund; Ruane, Alexander

    2016-01-01

Crop models are important tools for impact assessment of climate change, as well as for exploring management options under current climate. It is essential to evaluate the uncertainty associated with predictions of these models. We compare two criteria of prediction error: MSEP_fixed, which evaluates the mean squared error of prediction for a model with fixed structure, parameters and inputs, and MSEP_uncertain(X), which evaluates the mean squared error averaged over the distributions of model structure, inputs and parameters. Comparison of model outputs with data can be used to estimate the former. The latter has a squared bias term, which can be estimated using hindcasts, and a model variance term, which can be estimated from a simulation experiment. The separate contributions to MSEP_uncertain(X) can be estimated using a random effects ANOVA. It is argued that MSEP_uncertain(X) is the more informative uncertainty criterion, because it is specific to each prediction situation.
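The decomposition described here (mean squared error of prediction as squared bias plus model variance) holds as an exact identity for any ensemble of predictions, as the toy sketch below shows. The ensemble values are synthetic and hypothetical, not crop model output.

```python
import numpy as np

# MSEP over an ensemble of model variants splits exactly into a squared
# bias term plus a model variance term: mean((p - t)^2) = (mean(p) - t)^2
# + var(p). Synthetic ensemble with bias 1 and variance 4.
rng = np.random.default_rng(5)
truth = 10.0
ensemble_preds = rng.normal(loc=11.0, scale=2.0, size=100_000)

msep_uncertain = np.mean((ensemble_preds - truth) ** 2)
bias2 = (ensemble_preds.mean() - truth) ** 2   # squared bias term
variance = ensemble_preds.var()                # model variance term
# msep_uncertain equals bias2 + variance up to floating-point error.
```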

  8. An error prediction framework for interferometric SAR data

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Merryman Boncori, John Peter

    2008-01-01

Three of the major error sources in interferometric synthetic aperture radar measurements of terrain elevation and displacement are baseline errors, atmospheric path length errors, and phase unwrapping errors. In many processing schemes, these errors are calibrated out by using ground control points…

  9. Chasing probabilities - Signaling negative and positive prediction errors across domains.

    Science.gov (United States)

    Meder, David; Madsen, Kristoffer H; Hulme, Oliver; Siebner, Hartwig R

    2016-07-01

    Adaptive actions build on internal probabilistic models of possible outcomes that are tuned according to the errors of their predictions when experiencing an actual outcome. Prediction errors (PEs) inform choice behavior across a diversity of outcome domains and dimensions, yet neuroimaging studies have so far only investigated such signals in singular experimental contexts. It is thus unclear whether the neuroanatomical distribution of PE encoding reported previously pertains to computational features that are invariant with respect to outcome valence, sensory domain, or some combination of the two. We acquired functional MRI data while volunteers performed four probabilistic reversal learning tasks which differed in terms of outcome valence (reward-seeking versus punishment-avoidance) and domain (abstract symbols versus facial expressions) of outcomes. We found that ventral striatum and frontopolar cortex coded increasingly positive PEs, whereas dorsal anterior cingulate cortex (dACC) traced increasingly negative PEs, irrespectively of the outcome dimension. Individual reversal behavior was unaffected by context manipulations and was predicted by activity in dACC and right inferior frontal gyrus (IFG). The stronger the response to negative PEs in these areas, the lower was the tendency to reverse choice behavior in response to negative events, suggesting that these regions enforce a rule-based strategy across outcome dimensions. Outcome valence influenced PE-related activity in left amygdala, IFG, and dorsomedial prefrontal cortex, where activity selectively scaled with increasingly positive PEs in the reward-seeking but not punishment-avoidance context, irrespective of sensory domain. Left amygdala displayed an additional influence of sensory domain. In the context of avoiding punishment, amygdala activity increased with increasingly negative PEs, but only for facial stimuli, indicating an integration of outcome valence and sensory domain during probabilistic

  10. Predicting errors from reconfiguration patterns in human brain networks.

    Science.gov (United States)

    Ekman, Matthias; Derrfuss, Jan; Tittgemeyer, Marc; Fiebach, Christian J

    2012-10-09

    Task preparation is a complex cognitive process that implements anticipatory adjustments to facilitate future task performance. Little is known about quantitative network parameters governing this process in humans. Using functional magnetic resonance imaging (fMRI) and functional connectivity measurements, we show that the large-scale topology of the brain network involved in task preparation shows a pattern of dynamic reconfigurations that guides optimal behavior. This network could be decomposed into two distinct topological structures, an error-resilient core acting as a major hub that integrates most of the network's communication and a predominantly sensory periphery showing more flexible network adaptations. During task preparation, core-periphery interactions were dynamically adjusted. Task-relevant visual areas showed a higher topological proximity to the network core and an enhancement in their local centrality and interconnectivity. Failure to reconfigure the network topology was predictive of errors, indicating that anticipatory network reconfigurations are crucial for successful task performance. On the basis of a unique network decoding approach, we also develop a general framework for the identification of characteristic patterns in complex networks, which is applicable to other fields in neuroscience that relate dynamic network properties to behavior.

  11. Adaptive Hammerstein Predistorter Using the Recursive Prediction Error Method

    Institute of Scientific and Technical Information of China (English)

    LI Hui; WANG Desheng; CHEN Zhaowu

    2008-01-01

    The digital baseband predistorter is an effective technique to compensate for the nonlinearity of power amplifiers (PAs) with memory effects. However, most available adaptive predistorters based on direct learning architectures suffer from slow convergence speeds. In this paper, the recursive prediction error method is used to construct an adaptive Hammerstein predistorter based on the direct learning architecture, which is used to linearize the Wiener PA model. The effectiveness of the scheme is demonstrated on a digital video broadcasting-terrestrial system. Simulation results show that the predistorter outperforms previous predistorters based on direct learning architectures in terms of convergence speed and linearization. A similar algorithm can be applied to estimate the Wiener PA model, which will achieve high model accuracy.

  12. Dopamine restores reward prediction errors in old age.

    Science.gov (United States)

    Chowdhury, Rumana; Guitart-Masip, Marc; Lambert, Christian; Dayan, Peter; Huys, Quentin; Düzel, Emrah; Dolan, Raymond J

    2013-05-01

    Senescence affects the ability to utilize information about the likelihood of rewards for optimal decision-making. Using functional magnetic resonance imaging in humans, we found that healthy older adults had an abnormal signature of expected value, resulting in an incomplete reward prediction error (RPE) signal in the nucleus accumbens, a brain region that receives rich input projections from substantia nigra/ventral tegmental area (SN/VTA) dopaminergic neurons. Structural connectivity between SN/VTA and striatum, measured by diffusion tensor imaging, was tightly coupled to inter-individual differences in the expression of this expected reward value signal. The dopamine precursor levodopa (L-DOPA) increased the task-based learning rate and task performance in some older adults to the level of young adults. This drug effect was linked to restoration of a canonical neural RPE. Our results identify a neurochemical signature underlying abnormal reward processing in older adults and indicate that this can be modulated by L-DOPA.

  13. Temporal prediction errors modulate task-switching performance

    Directory of Open Access Journals (Sweden)

    Roberto eLimongi

    2015-08-01

    We have previously shown that temporal prediction errors (PEs, the differences between the expected and the actual stimulus onset times) modulate the effective connectivity between the anterior cingulate cortex and the right anterior insular cortex (rAI), causing the activity of the rAI to decrease. The activity of the rAI is associated with efficient performance under uncertainty (e.g., changing a prepared behavior when a change demand is not expected), which leads to the hypothesis that temporal PEs might disrupt behavior-change performance under uncertainty. This hypothesis has not been tested at a behavioral level. In this work, we evaluated this hypothesis within the context of task switching and concurrent temporal predictions. Our participants performed temporal predictions while observing one moving ball striking a stationary ball which bounced off with a variable temporal gap. Simultaneously, they performed a simple color comparison task. In some trials, a change signal made the participants change their behaviors. Performance accuracy decreased as a function of both the temporal PE and the delay. Explaining these results without appealing to ad-hoc concepts such as executive control is a challenge for cognitive neuroscience. We provide a predictive coding explanation. We hypothesize that exteroceptive and proprioceptive minimization of PEs would converge in a fronto-basal ganglia network which would include the rAI. Both temporal gaps (or uncertainty) and temporal PEs would drive and modulate this network, respectively. Whereas the temporal gaps would drive the activity of the rAI, the temporal PEs would modulate the endogenous excitatory connections of the fronto-striatal network. We conclude that in the context of perceptual uncertainty, the system is not able to minimize perceptual PEs, causing the ongoing behavior to terminate and, in consequence, disrupting task switching.

  14. Social learning through prediction error in the brain

    Science.gov (United States)

    Joiner, Jessica; Piva, Matthew; Turrin, Courtney; Chang, Steve W. C.

    2017-06-01

    Learning about the world is critical to survival and success. In social animals, learning about others is a necessary component of navigating the social world, ultimately contributing to increasing evolutionary fitness. How humans and nonhuman animals represent the internal states and experiences of others has long been a subject of intense interest in the developmental psychology tradition, and, more recently, in studies of learning and decision making involving self and other. In this review, we explore how psychology conceptualizes the process of representing others, and how neuroscience has uncovered correlates of reinforcement learning signals to explore the neural mechanisms underlying social learning from the perspective of representing reward-related information about self and other. In particular, we discuss self-referenced and other-referenced types of reward prediction errors across multiple brain structures that effectively allow reinforcement learning algorithms to mediate social learning. Prediction-based computational principles in the brain may be strikingly conserved between self-referenced and other-referenced information.

  15. Perceptual learning of degraded speech by minimizing prediction error.

    Science.gov (United States)

    Sohoglu, Ediz; Davis, Matthew H

    2016-03-22

    Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.

  16. The Attraction Effect Modulates Reward Prediction Errors and Intertemporal Choices.

    Science.gov (United States)

    Gluth, Sebastian; Hotaling, Jared M; Rieskamp, Jörg

    2017-01-11

    Classical economic theory contends that the utility of a choice option should be independent of other options. This view is challenged by the attraction effect, in which the relative preference between two options is altered by the addition of a third, asymmetrically dominated option. Here, we leveraged the attraction effect in the context of intertemporal choices to test whether both decisions and reward prediction errors (RPE) in the absence of choice violate the independence of irrelevant alternatives principle. We first demonstrate that intertemporal decision making is prone to the attraction effect in humans. In an independent group of participants, we then investigated how this affects the neural and behavioral valuation of outcomes using a novel intertemporal lottery task and fMRI. Participants' behavioral responses (i.e., satisfaction ratings) were modulated systematically by the attraction effect and this modulation was correlated across participants with the respective change of the RPE signal in the nucleus accumbens. Furthermore, we show that, because exponential and hyperbolic discounting models are unable to account for the attraction effect, recently proposed sequential sampling models might be more appropriate to describe intertemporal choices. Our findings demonstrate for the first time that the attraction effect modulates subjective valuation even in the absence of choice. The findings also challenge the prospect of using neuroscientific methods to measure utility in a context-free manner and have important implications for theories of reinforcement learning and delay discounting.

  17. Hierarchical prediction errors in midbrain and septum during social learning

    Science.gov (United States)

    Mathys, Christoph; Weber, Lilian A. E.; Kasper, Lars; Mauer, Jan; Stephan, Klaas E.

    2017-01-01

    Social learning is fundamental to human interactions, yet its computational and physiological mechanisms are not well understood. One prominent open question concerns the role of neuromodulatory transmitters. We combined fMRI, computational modelling and genetics to address this question in two separate samples (N = 35, N = 47). Participants played a game requiring inference on the intentions of an adviser whose motivation to help or mislead changed over time. Our analyses suggest that hierarchically structured belief updates about current advice validity and the adviser’s trustworthiness, respectively, depend on different neuromodulatory systems. Low-level prediction errors (PEs) about advice accuracy not only activated regions known to support ‘theory of mind’, but also the dopaminergic midbrain. Furthermore, PE responses in ventral striatum were influenced by the Met/Val polymorphism of the Catechol-O-Methyltransferase (COMT) gene. By contrast, high-level PEs (‘expected uncertainty’) about the adviser’s fidelity activated the cholinergic septum. These findings, replicated in both samples, have important implications: They suggest that social learning rests on hierarchically related PEs encoded by midbrain and septum activity, respectively, in the same manner as other forms of learning under volatility. Furthermore, these hierarchical PEs may be broadcast by dopaminergic and cholinergic projections to induce plasticity specifically in cortical areas known to represent beliefs about others. PMID: 28119508

  18. Positioning Errors Predicting Method of Strapdown Inertial Navigation Systems Based on PSO-SVM

    Directory of Open Access Journals (Sweden)

    Xunyuan Yin

    2013-01-01

    The strapdown inertial navigation systems (SINS) have been widely used for many vehicles, such as commercial airplanes, Unmanned Aerial Vehicles (UAVs), and other types of aircraft. In order to evaluate the navigation errors precisely and efficiently, a prediction method based on support vector machine (SVM) is proposed for positioning error assessment. Firstly, SINS error models that are used for error calculation are established considering several error sources with respect to inertial units. Secondly, flight paths for simulation are designed. Thirdly, the ε-SVR based prediction method is proposed to predict the positioning errors of navigation systems, and particle swarm optimization (PSO) is used for SVM parameter optimization. Finally, 600 sets of error parameters of SINS are utilized to train the SVM model, which is used for the performance prediction of new navigation systems. By comparing the predicted results with the real errors, the latitudinal prediction accuracy is 92.73%, while the longitudinal prediction accuracy is 91.64%, and PSO is effective in increasing the prediction accuracy compared with a traditional SVM with fixed parameters. The method is also demonstrated to be effective for error prediction over an entire flight process. Moreover, the prediction method can save 75% of calculation time compared with analyses based on error models.
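
The PSO component can be sketched on its own: particles explore the parameter space and are pulled toward both their personal best and the swarm's best positions. The objective below is a toy validation-error surface with illustrative names and values; in the paper's setting it would instead be the prediction error of an SVR model trained at each candidate parameter pair.

```python
# Minimal particle swarm optimization sketch: particles search a 2-D
# parameter space for the minimum of an objective function. The objective
# here is a toy "validation error" surface, not an actual SVM training run.
import random

def pso(objective, bounds, n_particles=20, n_iters=60, w=0.7, c1=1.5, c2=1.5):
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # each particle's best position
    pbest_val = [objective(p) for p in pos]
    g = pbest_val.index(min(pbest_val))
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # swarm's best position
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]),
                                bounds[d][1])
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

random.seed(0)
# Toy validation-error surface whose minimum sits at parameters (3.0, 0.5).
error = lambda p: (p[0] - 3.0) ** 2 + 4.0 * (p[1] - 0.5) ** 2
best, best_val = pso(error, bounds=[(0.0, 10.0), (0.0, 2.0)])
```

In the PSO-SVM setting, `objective` would wrap training and validating an SVR at parameters such as (C, ε), which is far costlier per evaluation but identical in structure.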

  19. Predicting pilot error: testing a new methodology and a multi-methods and analysts approach.

    Science.gov (United States)

    Stanton, Neville A; Salmon, Paul; Harris, Don; Marshall, Andrew; Demagalski, Jason; Young, Mark S; Waldmann, Thomas; Dekker, Sidney

    2009-05-01

    The Human Error Template (HET) is a recently developed methodology for predicting design-induced pilot error. This article describes a validation study undertaken to compare the performance of HET against three contemporary Human Error Identification (HEI) approaches when used to predict pilot errors for an approach and landing task, and also to compare individual analyst predictions against an approach intended to enhance error prediction sensitivity: the multiple analysts and methods approach, whereby predictions made by multiple analysts using a range of HEI techniques are pooled. The findings indicate that, of the four methodologies used in isolation, analysts using the HET methodology offered the most accurate error predictions, and that the multiple analysts and methods approach was more successful in terms of error prediction sensitivity than the other three methods, but not than the HET approach. The results suggest that, when predicting design-induced error, it is appropriate to use a toolkit of different HEI approaches and multiple analysts in order to heighten error prediction sensitivity.

  20. A Comparison of the Backpropagation and Recursive Prediction Error Algorithms for Training Neural Networks.

    OpenAIRE

    1990-01-01

    A new recursive prediction error routine is compared with the backpropagation method of training neural networks. Results based on simulated systems, the prediction of Canadian Lynx data and the modelling of an automotive diesel engine indicate that the recursive prediction error algorithm is far superior to backpropagation.
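
For a model that is linear in its parameters, the simplest recursive prediction-error method is recursive least squares, in which each new one-step prediction error corrects the parameter estimate through a gain vector. The sketch below shows that special case with illustrative data, not the paper's neural-network formulation:

```python
# Recursive least squares as the simplest recursive prediction-error method:
# parameters are corrected by the gain-weighted one-step prediction error.

def rls_fit(data, n_params, lam=1.0, p0=1000.0):
    """Estimate theta for y_t = phi_t . theta from (phi_t, y_t) pairs."""
    theta = [0.0] * n_params
    # P is the (scaled) parameter covariance, initialized large (vague prior).
    P = [[p0 if i == j else 0.0 for j in range(n_params)]
         for i in range(n_params)]
    for phi, y in data:
        # Prediction error for the incoming sample.
        err = y - sum(t * x for t, x in zip(theta, phi))
        Pphi = [sum(P[i][j] * phi[j] for j in range(n_params))
                for i in range(n_params)]
        denom = lam + sum(phi[i] * Pphi[i] for i in range(n_params))
        K = [v / denom for v in Pphi]            # gain vector
        theta = [t + k * err for t, k in zip(theta, K)]
        P = [[(P[i][j] - K[i] * Pphi[j]) / lam for j in range(n_params)]
             for i in range(n_params)]
    return theta

# Recover y = 2*x1 - 3*x2 from noiseless illustrative samples.
samples = [((1.0, 0.0), 2.0), ((0.0, 1.0), -3.0), ((1.0, 1.0), -1.0),
           ((2.0, 1.0), 1.0), ((1.0, 2.0), -4.0)]
theta = rls_fit(samples, n_params=2)
```

The recursive prediction-error algorithm for neural networks generalizes this update by replacing the fixed regressor `phi` with the gradient of the network output with respect to its weights.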

  1. BANKRUPTCY PREDICTION MODEL WITH ZETAc OPTIMAL CUT-OFF SCORE TO CORRECT TYPE I ERRORS

    Directory of Open Access Journals (Sweden)

    Mohamad Iwan

    2005-06-01

    This research has successfully attained the following results: (1) type I error is in fact 59.83 times more costly compared to type II error, (2) 22 ratios distinguish between bankrupt and non-bankrupt groups, (3) 2 financial ratios proved to be effective in predicting bankruptcy, (4) prediction using the ZETAc optimal cut-off score predicts more companies filing for bankruptcy within one year compared to prediction using the Hair et al. optimum cutting score, (5) although prediction using the Hair et al. optimum cutting score is more accurate, prediction using the ZETAc optimal cut-off score proved to be able to minimize cost incurred from classification errors.
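
The cost-asymmetric cut-off idea generalizes beyond this study and is easy to sketch: given classification scores for the two groups and a type I/type II cost ratio, choose the threshold that minimizes expected misclassification cost. The scores and helper below are illustrative, not the paper's data or its ZETAc formulation.

```python
# Illustrative sketch (not the paper's data): choosing a cut-off score that
# minimizes total misclassification cost when type I errors (classifying a
# bankrupt firm as healthy) are far costlier than type II errors.

def optimal_cutoff(bankrupt_scores, healthy_scores, cost_I, cost_II):
    """Return (cutoff, cost); firms scoring below the cutoff are flagged.

    Type I error:  bankrupt firm scores >= cutoff (missed bankruptcy).
    Type II error: healthy firm scores < cutoff (false alarm).
    """
    candidates = sorted(set(bankrupt_scores) | set(healthy_scores))
    best_cut, best_cost = None, float("inf")
    for cut in candidates:
        type_I = sum(s >= cut for s in bankrupt_scores)   # missed bankruptcies
        type_II = sum(s < cut for s in healthy_scores)    # false alarms
        cost = cost_I * type_I + cost_II * type_II
        if cost < best_cost:
            best_cut, best_cost = cut, cost
    return best_cut, best_cost

# Toy scores: bankrupt firms tend to score low, healthy firms high.
bankrupt = [0.1, 0.2, 0.3, 0.35, 0.6]
healthy = [0.4, 0.5, 0.55, 0.7, 0.8, 0.9]

# With a heavily asymmetric cost ratio (cf. the paper's 59.83:1), the
# optimal cutoff shifts upward so that costly type I errors are avoided.
cut_sym, _ = optimal_cutoff(bankrupt, healthy, cost_I=1.0, cost_II=1.0)
cut_asym, _ = optimal_cutoff(bankrupt, healthy, cost_I=59.83, cost_II=1.0)
```

With symmetric costs the cutoff sits where the two toy distributions overlap least; raising the type I cost pushes it above every bankrupt firm's score.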

  2. Brief optogenetic inhibition of dopamine neurons mimics endogenous negative reward prediction errors.

    Science.gov (United States)

    Chang, Chun Yun; Esber, Guillem R; Marrero-Garcia, Yasmin; Yau, Hau-Jie; Bonci, Antonello; Schoenbaum, Geoffrey

    2016-01-01

    Correlative studies have strongly linked phasic changes in dopamine activity with reward prediction error signaling. But causal evidence that these brief changes in firing actually serve as error signals to drive associative learning is more tenuous. Although there is direct evidence that brief increases can substitute for positive prediction errors, there is no comparable evidence that similarly brief pauses can substitute for negative prediction errors. In the absence of such evidence, the effect of increases in firing could reflect novelty or salience, variables also correlated with dopamine activity. Here we provide evidence in support of the proposed linkage, showing in a modified Pavlovian over-expectation task that brief pauses in the firing of dopamine neurons in rat ventral tegmental area at the time of reward are sufficient to mimic the effects of endogenous negative prediction errors. These results support the proposal that brief changes in the firing of dopamine neurons serve as full-fledged bidirectional prediction error signals.

  3. Learning about Expectation Violation from Prediction Error Paradigms – A Meta-Analysis on Brain Processes Following a Prediction Error

    Science.gov (United States)

    D’Astolfo, Lisa; Rief, Winfried

    2017-01-01

    Modifying patients’ expectations by exposing them to expectation violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism for therapeutic success for a variety of different mental disorders. However, clinical observations suggest that patients often maintain their expectations regardless of experiences contradicting their expectations. It remains unclear which information processing mechanisms lead to modification or persistence of patients’ expectations. Insight into the processing could be provided by neuroimaging studies investigating prediction error (PE, i.e., neuronal reactions to non-expected stimuli). Two methods are often used to investigate the PE: (1) paradigms in which participants passively observe PEs ("passive" paradigms) and (2) paradigms which encourage a behavioral adaptation following a PE ("active" paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation violation situation and (2) an enhanced confrontation in which the patient actively challenges his expectation. We used this similarity to gain insight into the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE and paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Due to the involvement of reward assessment and avoidance learning associated with the striatum and the insula, we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, therefore leading to an expectation change. While further research is needed to directly assess

  5. The conditions that promote fear learning: prediction error and Pavlovian fear conditioning.

    Science.gov (United States)

    Li, Susan Shi Yuan; McNally, Gavan P

    2014-02-01

    A key insight of associative learning theory is that learning depends on the actions of prediction error: a discrepancy between the actual and expected outcomes of a conditioning trial. When positive, such error causes increments in associative strength and, when negative, such error causes decrements in associative strength. Prediction error can act directly on fear learning by determining the effectiveness of the aversive unconditioned stimulus or indirectly by determining the effectiveness, or associability, of the conditioned stimulus. Evidence from a variety of experimental preparations in human and non-human animals suggest that discrete neural circuits code for these actions of prediction error during fear learning. Here we review the circuits and brain regions contributing to the neural coding of prediction error during fear learning and highlight areas of research (safety learning, extinction, and reconsolidation) that may profit from this approach to understanding learning.
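
The increments and decrements described here are captured by the Rescorla-Wagner rule; a minimal sketch with illustrative parameter values (not taken from this review):

```python
# Minimal Rescorla-Wagner sketch: associative strength V is updated by the
# prediction error (lambda - V), the gap between actual and expected outcome.
# Positive errors increment V; negative errors (e.g., extinction) decrement it.

def rescorla_wagner(outcomes, alpha=0.3, v0=0.0):
    """Return the trajectory of associative strength V across trials."""
    V = v0
    trajectory = []
    for lam in outcomes:        # lam = 1.0 shock present, 0.0 shock absent
        V += alpha * (lam - V)  # prediction error drives the update
        trajectory.append(V)
    return trajectory

acquisition = rescorla_wagner([1.0] * 10)                     # US every trial
extinction = rescorla_wagner([0.0] * 10, v0=acquisition[-1])  # US omitted
```

As the outcome becomes fully predicted, the error and hence learning shrink toward zero, which is the "conditions that promote fear learning" point made above.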

  6. General approach to error prediction in point registration

    Science.gov (United States)

    Danilchenko, Andrei; Fitzpatrick, J. Michael

    2010-02-01

    A method for the first-order analysis of the point registration problem is presented and validated. The method is a unified approach to the problem that allows for inhomogeneous and anisotropic fiducial localization error (FLE) and arbitrary weighting in the registration algorithm. Cross-covariance matrices are derived both for target registration error (TRE) and for weighted fiducial registration error (FRE). Furthermore, it is shown that for ideal weighting, in which the weighting matrix for each fiducial equals the inverse of the square root of the cross covariance of the two-space FLE for that fiducial, fluctuations of FRE and TRE are independent. These results are validated by comparison with previously published expressions for special cases and by simulation and shown to be correct. Furthermore, simulations for randomly generated fiducial positions and FLEs are presented which show that the correlation between FRE and TRE is negligible, and hence that quality-of-fit measures, such as FRE, are unreliable estimators of registration accuracy, i.e., TRE, and should be avoided.
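
The headline claim, that FRE and TRE are essentially uncorrelated, can be checked with a toy Monte Carlo run. The sketch below uses 2D rigid registration with identical isotropic FLE (a special case of the paper's general framework); the geometry and noise level are illustrative.

```python
# Illustrative simulation (2D, isotropic FLE, unweighted registration):
# across trials, FRE barely correlates with TRE, so a good-looking fit at
# the fiducials says little about accuracy at the target.
import math
import random

def rigid_register(moving, fixed):
    """Least-squares 2D rigid transform mapping `moving` onto `fixed`."""
    n = len(moving)
    cmx = sum(p[0] for p in moving) / n; cmy = sum(p[1] for p in moving) / n
    cfx = sum(p[0] for p in fixed) / n;  cfy = sum(p[1] for p in fixed) / n
    s_cos = s_sin = 0.0
    for (mx, my), (fx, fy) in zip(moving, fixed):
        px, py = mx - cmx, my - cmy
        qx, qy = fx - cfx, fy - cfy
        s_cos += px * qx + py * qy
        s_sin += px * qy - py * qx
    theta = math.atan2(s_sin, s_cos)        # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    tx = cfx - (c * cmx - s * cmy)          # optimal translation
    ty = cfy - (s * cmx + c * cmy)
    return lambda p: (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

random.seed(1)
fiducials = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
target = (5.0, 2.0)
sigma = 0.5  # fiducial localization error (std dev per axis)

fres, tres = [], []
for _ in range(2000):
    noisy = [(x + random.gauss(0, sigma), y + random.gauss(0, sigma))
             for x, y in fiducials]
    T = rigid_register(noisy, fiducials)
    fre2 = sum((T(p)[0] - f[0]) ** 2 + (T(p)[1] - f[1]) ** 2
               for p, f in zip(noisy, fiducials)) / len(fiducials)
    tx, ty = T(target)
    fres.append(math.sqrt(fre2))                       # per-trial RMS FRE
    tres.append(math.hypot(tx - target[0], ty - target[1]))  # per-trial TRE

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

r = corr(fres, tres)  # near zero, per the independence result above
```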

  7. Improved prediction error filters for adaptive feedback cancellation in hearing aids

    DEFF Research Database (Denmark)

    Ngo, Kim; van Waterschoot, Toon; Christensen, Mads Græsbøll;

    2013-01-01

    and the loudspeaker signal caused by the closed signal loop, in particular when the near-end signal is spectrally colored as is the case for a speech signal. This paper adopts a prediction-error method (PEM)-based approach to AFC, which is based on the use of decorrelating prediction error filters (PEFs). We propose...

  8. A Case Study of the Error Growth and Predictability of a Meiyu Frontal Heavy Precipitation Event

    Institute of Scientific and Technical Information of China (English)

    LUO Yu; ZHANG Li-feng

    2011-01-01

    The Advanced Regional Eta-coordinate Model (AREM) is used to explore the predictability of a heavy rainfall event along the Meiyu front in China during 3-4 July 2003. Based on the sensitivity of precipitation prediction to initial data sources and initial uncertainties in different variables, the evolution of error growth and the associated mechanism are described and discussed in detail in this paper. The results indicate that the smaller-amplitude initial error presents a faster growth rate and its growth is characterized by a transition from localized growth to widespread expansion of error. Such modality of the error growth is closely related to the evolution of the precipitation episode, and consequently remarkable forecast divergence is found near the rainband, indicating that the rainfall area is a sensitive region for error growth. The initial error in the rainband contributes significantly to the forecast divergence, and its amplification and propagation are largely determined by the initial moisture distribution. The moisture condition also affects the error growth on smaller scales and the subsequent upscale error cascade. In addition, the error growth defined by an energy norm reveals that large error energy collocates well with the strong latent heating, implying that the occurrence of precipitation and error growth share the same energy source: the latent heat. This may impose an intrinsic predictability limit on the prediction of heavy precipitation.

  9. Curiosity and reward: Valence predicts choice and information prediction errors enhance learning.

    Science.gov (United States)

    Marvin, Caroline B; Shohamy, Daphna

    2016-03-01

    Curiosity drives many of our daily pursuits and interactions; yet, we know surprisingly little about how it works. Here, we harness an idea implied in many conceptualizations of curiosity: that information has value in and of itself. Reframing curiosity as the motivation to obtain reward-where the reward is information-allows one to leverage major advances in theoretical and computational mechanisms of reward-motivated learning. We provide new evidence supporting 2 predictions that emerge from this framework. First, we find an asymmetric effect of positive versus negative information, with positive information enhancing both curiosity and long-term memory for information. Second, we find that it is not the absolute value of information that drives learning but, rather, the gap between the reward expected and reward received, an "information prediction error." These results support the idea that information functions as a reward, much like money or food, guiding choices and driving learning in systematic ways.

  10. Frontal theta links prediction errors to behavioral adaptation in reinforcement learning.

    Science.gov (United States)

    Cavanagh, James F; Frank, Michael J; Klein, Theresa J; Allen, John J B

    2010-02-15

    Investigations into action monitoring have consistently detailed a frontocentral voltage deflection in the event-related potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the feedback-related negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single-trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single-trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Mediofrontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single-trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations, with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice.
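
The single-trial reward prediction errors supplied by a Q-learning fit of this kind are simple to compute; a minimal sketch with illustrative parameter values (not the fitted values from this study):

```python
# Minimal Q-learning sketch: per-trial reward prediction errors (RPEs) of
# the kind regressed against single-trial theta power. Parameters are
# illustrative, not fitted.

def q_learning_rpes(choices, rewards, n_actions=2, alpha=0.2):
    """Return the per-trial RPE sequence delta_t = r_t - Q(a_t)."""
    Q = [0.0] * n_actions
    rpes = []
    for a, r in zip(choices, rewards):
        delta = r - Q[a]          # reward prediction error
        Q[a] += alpha * delta     # value update
        rpes.append(delta)
    return rpes

# A run where action 0 is rewarded on every trial: RPEs start large and
# positive, then shrink as Q(0) converges toward the reward value.
rpes = q_learning_rpes(choices=[0] * 8, rewards=[1.0] * 8)
```

Each `delta` in `rpes` is the trial-by-trial quantity whose sign and magnitude the study relates to medial and lateral frontal theta.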

  11. Predictive vegetation modeling for conservation: impact of error propagation from digital elevation data.

    Science.gov (United States)

    Van Niel, Kimberly P; Austin, Mike P

    2007-01-01

    The effect of digital elevation model (DEM) error on environmental variables, and subsequently on predictive habitat models, has not been explored. Based on an error analysis of a DEM, multiple error realizations of the DEM were created and used to develop both direct and indirect environmental variables for input to predictive habitat models. The study explores the effects of DEM error and the resultant uncertainty of results on typical steps in the modeling procedure for prediction of vegetation species presence/absence. Results indicate that all of these steps and results, including the statistical significance of environmental variables, shapes of species response curves in generalized additive models (GAMs), stepwise model selection, coefficients and standard errors for generalized linear models (GLMs), prediction accuracy (Cohen's kappa and AUC), and spatial extent of predictions, were greatly affected by this type of error. Error in the DEM can affect the reliability of interpretations of model results and level of accuracy in predictions, as well as the spatial extent of the predictions. We suggest that the sensitivity of DEM-derived environmental variables to error in the DEM should be considered before including them in the modeling processes.

  12. Error criteria for cross validation in the context of chaotic time series prediction.

    Science.gov (United States)

    Lim, Teck Por; Puthusserypady, Sadasivan

    2006-03-01

    The prediction of a chaotic time series over a long horizon is commonly done by iterating one-step-ahead prediction. Prediction can be implemented using machine learning methods, such as radial basis function networks. Typically, cross validation is used to select prediction models based on mean squared error. The bias-variance dilemma dictates that there is an inevitable tradeoff between bias and variance. However, invariants of chaotic systems are unchanged by linear transformations; thus, the bias component may be irrelevant to model selection in the context of chaotic time series prediction. Hence, the use of error variance for model selection, instead of mean squared error, is examined. Clipping is introduced, as a simple way to stabilize iterated predictions. It is shown that using the error variance for model selection, in combination with clipping, may result in better models.
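A minimal sketch of the two proposals — ranking candidate models by the variance of their held-out errors rather than by MSE, and clipping iterated predictions to the observed data range — assuming candidates are represented by their held-out predictions (names are illustrative):

```python
import numpy as np

def cv_error_stats(predictions, targets):
    """Mean squared error and error variance over held-out data."""
    err = np.asarray(predictions) - np.asarray(targets)
    return np.mean(err**2), np.var(err)

def select_by_error_variance(candidates, targets):
    """Pick the candidate whose held-out errors have the smallest
    variance, ignoring the bias component."""
    return min(candidates, key=lambda p: cv_error_stats(p, targets)[1])

def clipped_iterate(step, x0, n, lo, hi):
    """Iterate a one-step-ahead predictor, clipping every prediction to the
    observed data range so the trajectory cannot run away."""
    xs = [float(x0)]
    for _ in range(n):
        xs.append(float(np.clip(step(xs[-1]), lo, hi)))
    return xs
```

Note that a constantly biased but consistent predictor has zero error variance and wins this ranking, which is exactly the point: a constant offset is a linear transformation that leaves the chaotic invariants unchanged.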

  13. Prediction Error Associated with the Perceptual Segmentation of Naturalistic Events

    Science.gov (United States)

    Zacks, Jeffrey M.; Kurby, Christopher A.; Eisenberg, Michelle L.; Haroutunian, Nayiri

    2011-01-01

    Predicting the near future is important for survival and plays a central role in theories of perception, language processing, and learning. Prediction failures may be particularly important for initiating the updating of perceptual and memory systems and, thus, for the subjective experience of events. Here, we asked observers to make predictions…

  14. Surprised at all the entropy: hippocampal, caudate and midbrain contributions to learning from prediction errors.

    Directory of Open Access Journals (Sweden)

    Anne-Marike Schiffer

    Full Text Available Influential concepts in neuroscientific research cast the brain as a predictive machine that revises its predictions when they are violated by sensory input. This relates to the predictive coding account of perception, but also to learning. Learning from prediction errors has been suggested to take place in the hippocampal memory system as well as in the basal ganglia. The present fMRI study used an action-observation paradigm to investigate the contributions of the hippocampus, caudate nucleus and midbrain dopaminergic system to different types of learning: learning in the absence of prediction errors, learning from prediction errors, and responding to the accumulation of prediction errors in unpredictable stimulus configurations. We conducted analyses of the regions of interest's BOLD responses to these different types of learning, implementing a bootstrapping procedure to correct for false positives. We found both the caudate nucleus and the hippocampus to be activated by perceptual prediction errors. The hippocampal responses seemed to relate to the associative mismatch between a stored representation and current sensory input. Moreover, its response was significantly influenced by the average information, or Shannon entropy, of the stimulus material. In accordance with earlier results, the habenula was activated by perceptual prediction errors. Lastly, we found that the substantia nigra was activated by the novelty of sensory input. In sum, we established that the midbrain dopaminergic system, the hippocampus, and the caudate nucleus were to different degrees significantly involved in the three different types of learning: acquisition of new information, learning from prediction errors and responding to unpredictable stimulus developments. We relate learning from perceptual prediction errors to the concept of predictive coding and related information theoretic accounts.

  15. PASS-GP: Predictive active set selection for Gaussian processes

    DEFF Research Database (Denmark)

    Henao, Ricardo; Winther, Ole

    2010-01-01

    We propose a new approximation method for Gaussian process (GP) learning for large data sets that combines inline active set selection with hyperparameter optimization. The predictive probability of the label is used for ranking the data points. We use the leave-one-out predictive probability available in GPs to make a common ranking for both active and inactive points, allowing points to be removed again from the active set. This is important for keeping the complexity down and at the same time focusing on points close to the decision boundary. We lend both theoretical and empirical support to the active set selection strategy and marginal likelihood optimization on the active set. We make extensive tests on the USPS and MNIST digit classification databases with and without incorporating invariances, demonstrating that we can get state-of-the-art results (e.g. 0.86% error on MNIST) with reasonable…
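The leave-one-out predictive probability that drives the ranking has a closed form in GP regression, computable from a single inverse of the kernel matrix (the standard identity from Rasmussen and Williams, ch. 5); this sketch covers the regression case, whereas the paper treats classification, and the function name is illustrative:

```python
import numpy as np

def loo_log_predictive(K, y):
    """Leave-one-out predictive log-probabilities for GP regression from a
    single inverse of the kernel matrix. Points with low values are poorly
    predicted by the rest of the data."""
    Kinv = np.linalg.inv(K)
    alpha = Kinv @ y
    var = 1.0 / np.diag(Kinv)          # LOO predictive variance
    resid = alpha * var                # y_i minus the LOO predictive mean
    return -0.5 * (np.log(2 * np.pi * var) + resid**2 / var)
```

Ranking by these values surfaces the points the current model explains worst, which is the kind of common ranking over active and inactive points the method exploits.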

  16. A Generalized Process Model of Human Action Selection and Error and its Application to Error Prediction

    Science.gov (United States)

    2014-07-01

    …Macmillan & Creelman, 2005). This is a quite high degree of discriminability and it means that when the decision model predicts a probability of… ROC analysis. Pattern Recognition Letters, 27(8), 861-874. Macmillan, N. A., & Creelman, C. D. (2005). Detection…

  17. Comparison of Transmission Error Predictions with Noise Measurements for Several Spur and Helical Gears

    Science.gov (United States)

    Houser, Donald R.; Oswald, Fred B.; Valco, Mark J.; Drago, Raymond J.; Lenski, Joseph W., Jr.

    1994-01-01

    Measured sound power data from eight different spur, single and double helical gear designs are compared with predictions of transmission error by the Load Distribution Program. The sound power data was taken from the recent Army-funded Advanced Rotorcraft Transmission project. Tests were conducted in the NASA gear noise rig. Results of both test data and transmission error predictions are made for each harmonic of mesh frequency at several operating conditions. In general, the transmission error predictions compare favorably with the measured noise levels.

  19. A wavelet-based approach to assessing timing errors in hydrologic predictions

    Science.gov (United States)

    Liu, Yuqiong; Brown, James; Demargne, Julie; Seo, Dong-Jun

    2011-02-01

    Streamflow predictions typically contain errors in both the timing and the magnitude of peak flows. These two types of error often originate from different sources (e.g. rainfall-runoff modeling vs. routing) and hence may have different implications and ramifications for both model diagnosis and decision support. Thus, where possible and relevant, they should be distinguished and separated in model evaluation and forecast verification applications. Distinct information on timing errors in hydrologic prediction could lead to more targeted model improvements in a diagnostic evaluation context, as well as better-informed decisions in many practical applications, such as flood prediction, water supply forecasting, river regulation, navigation, and engineering design. However, information on timing errors in hydrologic predictions is rarely evaluated or provided. In this paper, we discuss the importance of assessing and quantifying timing error in hydrologic predictions and present a new approach, which is based on the cross wavelet transform (XWT) technique. The XWT technique transforms the time series of predictions and corresponding observations into a two-dimensional time-scale space and provides information on scale- and time-dependent timing differences between the two time series. The results for synthetic timing errors (both constant and time-varying) indicate that the XWT-based approach can estimate timing errors in streamflow predictions with reasonable reliability. The approach is then employed to analyze the timing errors in real streamflow simulations for a number of headwater basins in the US state of Texas. The resulting timing error estimates were consistent with the physiographic and climatic characteristics of these basins. A simple post-factum timing adjustment based on these estimates led to considerably improved agreement between streamflow observations and simulations, further illustrating the potential for using the XWT-based approach for
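A single-scale sketch of the idea, assuming a complex Morlet wavelet and a hand-picked scale (a full XWT analysis scans many scales and assesses statistical significance; function and parameter names here are illustrative):

```python
import numpy as np

def timing_error(obs, sim, scale, dt=1.0, w0=6.0):
    """Scale-dependent timing error between observed and simulated series,
    from the phase of their cross-wavelet spectrum at a single scale using
    an explicit complex Morlet wavelet. Positive values mean the simulation
    lags the observation."""
    t = np.arange(-4 * scale, 4 * scale + dt, dt)
    wavelet = np.exp(1j * w0 * t / scale) * np.exp(-t**2 / (2 * scale**2))
    Wo = np.convolve(obs, wavelet, mode="same")   # wavelet transform of obs
    Ws = np.convolve(sim, wavelet, mode="same")   # wavelet transform of sim
    phase = np.angle(np.mean(Wo * np.conj(Ws)))   # mean cross-spectrum phase
    period = 2 * np.pi * scale / w0               # approximate Fourier period
    return phase * period / (2 * np.pi)           # phase angle -> time shift
```

For a clean periodic signal delayed by a known number of steps, the recovered shift matches the delay when the scale is chosen near the signal's period.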

  20. Putamen Activation Represents an Intrinsic Positive Prediction Error Signal for Visual Search in Repeated Configurations.

    Science.gov (United States)

    Sommer, Susanne; Pollmann, Stefan

    2016-01-01

    We investigated fMRI responses to visual search targets appearing at locations that were predicted by the search context. Based on previous work in visual category learning we expected an intrinsic reward prediction error signal in the putamen whenever the target appeared at a location that was predicted with some degree of uncertainty. Comparing target appearance at locations predicted with 50% probability to either locations predicted with 100% probability or unpredicted locations, increased activation was observed in left posterior putamen and adjacent left posterior insula. Thus, our hypothesis of an intrinsic prediction error-like signal was confirmed. This extends the observation of intrinsic prediction error-like signals, driven by intrinsic rather than extrinsic reward, to memory-driven visual search.

  2. The effect of retrospective sampling on estimates of prediction error for multifactor dimensionality reduction.

    Science.gov (United States)

    Winham, Stacey J; Motsinger-Reif, Alison A

    2011-01-01

    The standard in genetic association studies of complex diseases is replication and validation of positive results, with an emphasis on assessing the predictive value of associations. In response to this need, a number of analytical approaches have been developed to identify predictive models that account for complex genetic etiologies. Multifactor Dimensionality Reduction (MDR) is a commonly used, highly successful method designed to evaluate potential gene-gene interactions. MDR relies on classification error in a cross-validation framework to rank and evaluate potentially predictive models. Previous work has demonstrated the high power of MDR, but has not considered the accuracy and variance of the MDR prediction error estimate. Currently, we evaluate the bias and variance of the MDR error estimate as both a retrospective and prospective estimator and show that MDR can both underestimate and overestimate error. We argue that a prospective error estimate is necessary if MDR models are used for prediction, and propose a bootstrap resampling estimate, integrating population prevalence, to accurately estimate prospective error. We demonstrate that this bootstrap estimate is preferable for prediction to the error estimate currently produced by MDR. While demonstrated with MDR, the proposed estimation is applicable to all data-mining methods that use similar estimates.
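The proposed prevalence-weighted bootstrap can be sketched as follows, assuming a fitted classifier's predictions on a balanced case/control sample (function name and interface are illustrative, not MDR's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

def prospective_error(y_true, y_pred, prevalence, n_boot=1000):
    """Bootstrap estimate of prospective misclassification error from a
    retrospectively sampled (e.g. balanced case/control) data set: the
    class-conditional error rates are re-weighted by population prevalence."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    cases = np.where(y_true == 1)[0]
    controls = np.where(y_true == 0)[0]
    est = np.empty(n_boot)
    for b in range(n_boot):
        ca = rng.choice(cases, len(cases))       # resample within each class
        co = rng.choice(controls, len(controls))
        err_case = np.mean(y_pred[ca] != y_true[ca])
        err_ctrl = np.mean(y_pred[co] != y_true[co])
        est[b] = prevalence * err_case + (1 - prevalence) * err_ctrl
    return est.mean(), np.percentile(est, [2.5, 97.5])
```

With a rare disease, the prospective error is dominated by the control error rate even when the balanced-sample error looks very different.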

  3. STATISTICAL CHARACTERISTICS INVESTIGATION OF PREDICTION ERRORS FOR INTERFEROMETRIC SIGNAL IN THE ALGORITHM OF NONLINEAR KALMAN FILTERING

    Directory of Open Access Journals (Sweden)

    E. L. Dmitrieva

    2016-05-01

    Full Text Available Basic peculiarities of a nonlinear Kalman filtering algorithm applied to the processing of interferometric signals are considered. Analytical estimates determining the statistical characteristics of signal-value prediction errors were obtained, and the error histograms were analyzed under variations of different parameters of the interferometric signal. The signal prediction procedure was modeled both with known fixed parameters and with variable signal parameters in the nonlinear Kalman filtering algorithm. Numerical estimates of prediction errors for interferometric signal values were obtained by forming and analyzing the error histograms under the influence of additive noise and random variations of the amplitude and frequency of the interferometric signal. The nonlinear Kalman filter is shown to handle signals with randomly varying parameters; however, it does not directly account for the linearization error of the harmonic function representing the interferometric signal, which is a source of filtering error. The main drawback of linear prediction is the non-Gaussian statistics of the prediction errors, including cases of random deviations of signal amplitude and/or frequency. When implementing stochastic filtering of interferometric signals, it is reasonable to use prediction procedures based on local statistics of the signal, with its parameters taken into account.
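A minimal sketch of the signal-prediction step, assuming a harmonic signal model z_k = B + A*cos(phi_k) + noise with the cosine linearized at each step (an extended Kalman filter; all parameters are illustrative and the paper's filter and signal model may differ in detail):

```python
import numpy as np

def ekf_track(z, A=1.0, B=0.0, f0=0.05, q=1e-4, r=0.05):
    """Extended Kalman filter tracking phase and normalized frequency of a
    harmonic (interferometric-like) signal z_k = B + A*cos(phi_k) + noise.
    Returns the one-step prediction errors (innovations)."""
    x = np.array([0.0, f0])                        # state: [phase, frequency]
    P = np.diag([0.5, 1e-4])                       # confident about f0, not phase
    F = np.array([[1.0, 2 * np.pi], [0.0, 1.0]])
    Q = q * np.eye(2)
    innov = np.empty(len(z))
    for k, zk in enumerate(z):
        x = F @ x                                  # predict the phase forward
        P = F @ P @ F.T + Q
        pred = B + A * np.cos(x[0])                # predicted signal value
        innov[k] = zk - pred                       # the prediction error
        H = np.array([[-A * np.sin(x[0]), 0.0]])   # linearization of cos
        S = (H @ P @ H.T).item() + r
        K = (P @ H.T / S).ravel()
        x = x + K * innov[k]
        P = P - np.outer(K, H @ P)
    return innov
```

Collecting the innovations into a histogram is exactly the kind of prediction-error analysis the record describes; the linearization of the cosine is the error source the abstract points out.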

  4. Error-likelihood prediction in the medial frontal cortex: A critical evaluation

    NARCIS (Netherlands)

    Nieuwenhuis, S.; Schweizer, T.S.; Mars, R.B.; Botvinick, M.M.; Hajcak, G.

    2007-01-01

    A recent study has proposed that posterior regions of the medial frontal cortex (pMFC) learn to predict the likelihood of errors occurring in a given task context. A key prediction of the error-likelihood (EL) hypothesis is that the pMFC should exhibit enhanced activity to cues that are predictive of h

  5. A predictive model for dimensional errors in fused deposition modeling

    DEFF Research Database (Denmark)

    Stolfi, A.

    2015-01-01

    This work concerns the effect of deposition angle (a) and layer thickness (L) on the dimensional performance of FDM parts using a predictive model based on the geometrical description of the FDM filament profile. An experimental validation over the whole a range from 0° to 177° at 3° steps and two...

  6. A Foundation for the Accurate Prediction of the Soft Error Vulnerability of Scientific Applications

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; de Supinski, B; Schulz, M

    2009-02-13

    Understanding the soft error vulnerability of supercomputer applications is critical as these systems are using ever larger numbers of devices that have decreasing feature sizes and, thus, increasing frequency of soft errors. As many large-scale parallel scientific applications use BLAS and LAPACK linear algebra routines, the soft error vulnerability of these methods constitutes a large fraction of the applications' overall vulnerability. This paper analyzes the vulnerability of these routines to soft errors by characterizing how their outputs are affected by injected errors and by evaluating several techniques for predicting how errors propagate from the input to the output of each routine. The resulting error profiles can be used to understand the fault vulnerability of full applications that use these routines.
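The injection methodology can be sketched as follows: flip one bit of one input element and record the relative deviation of the routine's output (here a plain matrix multiply stands in for a BLAS routine, and restricting flips to mantissa bits is a simplifying assumption to keep values finite):

```python
import numpy as np

rng = np.random.default_rng(2)

def flip_bit(x, bit):
    """Flip one bit in the binary representation of a float64 value
    (a simple single-event-upset model)."""
    u = np.frombuffer(np.float64(x).tobytes(), dtype=np.uint64)[0]
    u ^= np.uint64(1) << np.uint64(bit)
    return np.frombuffer(np.uint64(u).tobytes(), dtype=np.float64)[0]

def injection_profile(n=32, trials=200):
    """Empirical error profile of a matrix multiply: relative output
    deviation caused by one bit flip in one input element per trial."""
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    ref = A @ B
    devs = np.empty(trials)
    for k in range(trials):
        Ac = A.copy()
        i, j = rng.integers(n, size=2)
        Ac[i, j] = flip_bit(Ac[i, j], int(rng.integers(52)))  # mantissa bits
        devs[k] = np.abs(Ac @ B - ref).max() / np.abs(ref).max()
    return devs
```

The distribution of `devs` over many trials is the kind of error profile the paper uses to predict propagation from input to output.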

  7. Period, epoch and prediction errors of ephemeris from continuous sets of timing measurements

    CERN Document Server

    Deeg, Hans J

    2015-01-01

    Space missions such as Kepler and CoRoT have led to large numbers of eclipse or transit measurements in nearly continuous time series. This paper shows how to obtain the period error in such measurements from a basic linear least-squares fit, and how to correctly derive the timing error in the prediction of future transit or eclipse events. Assuming strict periodicity, a formula for the period error of such time series is derived: sigma_P = sigma_T (12/(N^3 - N))^0.5, where sigma_P is the period error, sigma_T the timing error of a single measurement and N the number of measurements. Relative to the iterative method for period error estimation by Mighell & Plavchan (2013), this much simpler formula leads to smaller period errors, whose correctness has been verified through simulations. For the prediction of times of future periodic events, the usual linear ephemeris, in which the epoch error is quoted for the first time measurement, is prone to overestimation of the error of that prediction. This may be avoided...
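The quoted formula is easy to check by simulation, assuming strictly periodic events with Gaussian timing noise and a degree-1 least-squares fit (parameter values below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

def period_error_sim(P=1.0, sigma_T=0.01, N=100, trials=2000):
    """Compare the empirical scatter of least-squares fitted periods with
    the closed-form sigma_P = sigma_T * (12/(N^3 - N))**0.5."""
    n = np.arange(N)
    fitted = np.empty(trials)
    for k in range(trials):
        times = n * P + rng.normal(0.0, sigma_T, N)   # noisy event timings
        fitted[k] = np.polyfit(n, times, 1)[0]        # slope = fitted period
    return fitted.std(), sigma_T * np.sqrt(12.0 / (N**3 - N))
```

The empirical standard deviation of the fitted periods agrees with the formula to within Monte Carlo noise, which also follows from the ordinary-least-squares slope variance with sum((n - mean(n))**2) = (N^3 - N)/12.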

  8. Analysts forecast error : A robust prediction model and its short term trading

    NARCIS (Netherlands)

    Boudt, Kris; de Goeij, Peter; Thewissen, James; Van Campenhout, Geert

    2015-01-01

    We examine the profitability of implementing a short term trading strategy based on predicting the error in analysts' earnings per share forecasts using publicly available information. Since large earnings surprises may lead to extreme values in the forecast error series that disrupt their smooth au

  9. Low Frequency Predictive Skill Despite Structural Instability and Model Error

    Science.gov (United States)

    2014-09-30

    …issue in contemporary applied mathematics is the development of simpler dynamical models for a reduced subset of variables in complex high… In this article I developed a new practical framework of creating a stochastically parameterized reduced model for slow variables of complex… …suitable coarse-grained variables is a necessary but not sufficient condition for this predictive skill, and 4 elementary examples are given here…

  10. Artificial neural network implementation of a near-ideal error prediction controller

    Science.gov (United States)

    Mcvey, Eugene S.; Taylor, Lynore Denise

    1992-01-01

    A theory has been developed at the University of Virginia which explains the effects of including an ideal predictor in the forward loop of a linear error-sampled system. It has been shown that the presence of this ideal predictor tends to stabilize the class of systems considered. A prediction controller is merely a system which anticipates a signal or part of a signal before it actually occurs. It is understood that an exact prediction controller is physically unrealizable. However, in systems where the input tends to be repetitive or limited (i.e., not random), near-ideal prediction is possible. In order for the controller to act as a stability compensator, the predictor must be designed in a way that allows it to learn the expected error response of the system. In this way, an unstable system will become stable by including the predicted error in the system transfer function. Previous and current prediction controllers include pattern recognition developments and fast-time simulation which are applicable to the analysis of linear sampled-data systems. The use of pattern recognition techniques, along with a template matching scheme, has been proposed as one realizable type of near-ideal prediction. Since many, if not most, systems are repeatedly subjected to similar inputs, it was proposed that an adaptive mechanism be used to 'learn' the correct predicted error response. Once the system has learned the response of all the expected inputs, it is necessary only to recognize the type of input with a template matching mechanism and then to use the correct predicted error to drive the system. Suggested here is an alternate approach to the realization of a near-ideal error prediction controller, one designed using Neural Networks. Neural Networks are good at recognizing patterns such as system responses, and the back-propagation architecture makes use of a template matching scheme. In using this type of error prediction, it is assumed that the system error

  11. Stimulus-dependent adjustment of reward prediction error in the midbrain.

    Directory of Open Access Journals (Sweden)

    Hiromasa Takemura

    Full Text Available Previous reports have described that neural activities in midbrain dopamine areas are sensitive to unexpected reward delivery and omission. These activities are correlated with reward prediction error in reinforcement learning models, the difference between predicted reward values and the obtained reward outcome. These findings suggest that the reward prediction error signal in the brain updates reward prediction through stimulus-reward experiences. It remains unknown, however, how sensory processing of reward-predicting stimuli contributes to the computation of reward prediction error. To elucidate this issue, we examined the relation between stimulus discriminability of the reward-predicting stimuli and the reward prediction error signal in the brain using functional magnetic resonance imaging (fMRI. Before main experiments, subjects learned an association between the orientation of a perceptually salient (high-contrast Gabor patch and a juice reward. The subjects were then presented with lower-contrast Gabor patch stimuli to predict a reward. We calculated the correlation between fMRI signals and reward prediction error in two reinforcement learning models: a model including the modulation of reward prediction by stimulus discriminability and a model excluding this modulation. Results showed that fMRI signals in the midbrain are more highly correlated with reward prediction error in the model that includes stimulus discriminability than in the model that excludes stimulus discriminability. No regions showed higher correlation with the model that excludes stimulus discriminability. Moreover, results show that the difference in correlation between the two models was significant from the first session of the experiment, suggesting that the reward computation in the midbrain was modulated based on stimulus discriminability before learning a new contingency between perceptually ambiguous stimuli and a reward. These results suggest that the human

  12. All That Glitters … Dissociating Attention and Outcome Expectancy From Prediction Errors Signals

    OpenAIRE

    Roesch, Matthew R.; Calu, Donna J; Esber, Guillem R.; Schoenbaum, Geoffrey

    2010-01-01

    Initially reported in dopamine neurons, neural correlates of prediction errors have now been shown in a variety of areas, including orbitofrontal cortex, ventral striatum, and amygdala. Yet changes in neural activity to an outcome or cues that precede it can reflect other processes. We review the recent literature and show that although activity in dopamine neurons appears to signal prediction errors, similar activity in orbitofrontal cortex, basolateral amygdala, and ventral striatum does no...

  13. Dopamine reward prediction-error signalling: a two-component response

    Science.gov (United States)

    Schultz, Wolfram

    2017-01-01

    Environmental stimuli and objects, including rewards, are often processed sequentially in the brain. Recent work suggests that the phasic dopamine reward prediction-error response follows a similar sequential pattern. An initial brief, unselective and highly sensitive increase in activity unspecifically detects a wide range of environmental stimuli, then quickly evolves into the main response component, which reflects subjective reward value and utility. This temporal evolution allows the dopamine reward prediction-error signal to optimally combine speed and accuracy. PMID:26865020

  14. Estimation of Mechanical Signals in Induction Motors using the Recursive Prediction Error Method

    DEFF Research Database (Denmark)

    Børsting, H.; Knudsen, Morten; Rasmussen, Henrik;

    1993-01-01

    Sensor feedback of mechanical quantities for control applications in induction motors is troublesome and relatively expensive. In this paper a recursive prediction error (RPE) method has successfully been used to estimate the angular rotor speed…

  15. The Dopamine Prediction Error: Contributions to Associative Models of Reward Learning

    Science.gov (United States)

    Nasser, Helen M.; Calu, Donna J.; Schoenbaum, Geoffrey; Sharpe, Melissa J.

    2017-01-01

    Phasic activity of midbrain dopamine neurons is currently thought to encapsulate the prediction-error signal described in Sutton and Barto’s (1981) model-free reinforcement learning algorithm. This phasic signal is thought to contain information about the quantitative value of reward, which transfers to the reward-predictive cue after learning. This is argued to endow the reward-predictive cue with the value inherent in the reward, motivating behavior toward cues signaling the presence of reward. Yet theoretical and empirical research has implicated prediction-error signaling in learning that extends far beyond a transfer of quantitative value to a reward-predictive cue. Here, we review the research which demonstrates the complexity of how dopaminergic prediction errors facilitate learning. After briefly discussing the literature demonstrating that phasic dopaminergic signals can act in the manner described by Sutton and Barto (1981), we consider how these signals may also influence attentional processing across multiple attentional systems in distinct brain circuits. Then, we discuss how prediction errors encode and promote the development of context-specific associations between cues and rewards. Finally, we consider recent evidence that shows dopaminergic activity contains information about causal relationships between cues and rewards that reflect information garnered from rich associative models of the world that can be adapted in the absence of direct experience. In discussing this research we hope to support the expansion of how dopaminergic prediction errors are thought to contribute to the learning process beyond the traditional concept of transferring quantitative value. PMID:28275359
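The classic transfer of the prediction-error signal from reward to cue can be illustrated with a toy TD(0) simulation (a deliberately minimal stand-in for the Sutton and Barto model; parameter values are arbitrary):

```python
import numpy as np

def td_cue_transfer(n_trials=200, alpha=0.1, gamma=1.0):
    """Minimal TD(0) simulation of a cue reliably followed by reward: the
    prediction error at reward time shrinks across trials while the error
    at cue onset grows, i.e. the signal 'transfers' to the cue."""
    V = 0.0                              # learned value of the cue state
    delta_cue = np.empty(n_trials)
    delta_rew = np.empty(n_trials)
    for t in range(n_trials):
        delta_cue[t] = gamma * V - 0.0   # unexpected cue against baseline 0
        delta_rew[t] = 1.0 - V           # reward of 1 against the prediction
        V += alpha * delta_rew[t]        # value update
    return delta_cue, delta_rew
```

The review's point is that dopaminergic prediction errors carry more than this scalar value transfer, but the toy shows the baseline behavior the richer accounts build on.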

  16. An MEG signature corresponding to an axiomatic model of reward prediction error.

    Science.gov (United States)

    Talmi, Deborah; Fuentemilla, Lluis; Litvak, Vladimir; Duzel, Emrah; Dolan, Raymond J

    2012-01-01

    Optimal decision-making is guided by evaluating the outcomes of previous decisions. Prediction errors are theoretical teaching signals which integrate two features of an outcome: its inherent value and prior expectation of its occurrence. To uncover the magnetic signature of prediction errors in the human brain we acquired magnetoencephalographic (MEG) data while participants performed a gambling task. Our primary objective was to use formal criteria, based upon an axiomatic model (Caplin and Dean, 2008a), to determine the presence and timing profile of MEG signals that express prediction errors. We report analyses at the sensor level, implemented in SPM8, time locked to outcome onset. We identified, for the first time, a MEG signature of prediction error, which emerged approximately 320 ms after an outcome and expressed as an interaction between outcome valence and probability. This signal followed earlier, separate signals for outcome valence and probability, which emerged approximately 200 ms after an outcome. Strikingly, the time course of the prediction error signal, as well as the early valence signal, resembled the Feedback-Related Negativity (FRN). In simultaneously acquired EEG data we obtained a robust FRN, but the win and loss signals that comprised this difference wave did not comply with the axiomatic model. Our findings motivate an explicit examination of the critical issue of timing embodied in computational models of prediction errors as seen in human electrophysiological data.

  17. Prediction error in reinforcement learning: a meta-analysis of neuroimaging studies.

    Science.gov (United States)

    Garrison, Jane; Erdeniz, Burak; Done, John

    2013-08-01

    Activation likelihood estimation (ALE) meta-analyses were used to examine the neural correlates of prediction error in reinforcement learning. The findings are interpreted in the light of current computational models of learning and action selection. In this context, particular consideration is given to the comparison of activation patterns from studies using instrumental and Pavlovian conditioning, and where reinforcement involved rewarding or punishing feedback. The striatum was the key brain area encoding for prediction error, with activity encompassing dorsal and ventral regions for instrumental and Pavlovian reinforcement alike, a finding which challenges the functional separation of the striatum into a dorsal 'actor' and a ventral 'critic'. Prediction error activity was further observed in diverse areas of predominantly anterior cerebral cortex including medial prefrontal cortex and anterior cingulate cortex. Distinct patterns of prediction error activity were found for studies using rewarding and aversive reinforcers; reward prediction errors were observed primarily in the striatum while aversive prediction errors were found more widely including insula and habenula.

  18. The impact of experimental measurement errors on long-term viscoelastic predictions. [of structural materials

    Science.gov (United States)

    Tuttle, M. E.; Brinson, H. F.

    1986-01-01

    The impact of slight errors in measured viscoelastic parameters on subsequent long-term viscoelastic predictions is numerically evaluated using the Schapery nonlinear viscoelastic model. Of the seven Schapery parameters, the results indicated that long-term predictions were most sensitive to errors in the power law parameter n. Although errors in the other parameters were significant as well, errors in n dominated all other factors at long times. The process of selecting an appropriate short-term test cycle so as to ensure an accurate long-term prediction was considered, and a short-term test cycle was selected using material properties typical for T300/5208 graphite-epoxy at 149 C. The process of selection is described, and its individual steps are itemized.

  19. A Fully Bayesian Approach to Improved Calibration and Prediction of Groundwater Models With Structure Error

    Science.gov (United States)

    Xu, T.; Valocchi, A. J.

    2014-12-01

    Effective water resource management typically relies on numerical models to analyse groundwater flow and solute transport processes. These models are usually subject to model structure error due to simplification and/or misrepresentation of the real system. As a result, the model outputs may systematically deviate from measurements, thus violating a key assumption for traditional regression-based calibration and uncertainty analysis. On the other hand, model structure error induced bias can be described statistically in an inductive, data-driven way based on historical model-to-measurement misfit. We adopt a fully Bayesian approach that integrates a Gaussian process error model to account for model structure error to the calibration, prediction and uncertainty analysis of groundwater models. The posterior distributions of parameters of the groundwater model and the Gaussian process error model are jointly inferred using DREAM, an efficient Markov chain Monte Carlo sampler. We test the usefulness of the fully Bayesian approach towards a synthetic case study of surface-ground water interaction under changing pumping conditions. We first illustrate through this example that traditional least squares regression without accounting for model structure error yields biased parameter estimates due to parameter compensation as well as biased predictions. In contrast, the Bayesian approach gives less biased parameter estimates. Moreover, the integration of a Gaussian process error model significantly reduces predictive bias and leads to prediction intervals that are more consistent with observations. The results highlight the importance of explicit treatment of model structure error especially in circumstances where subsequent decision-making and risk analysis require accurate prediction and uncertainty quantification. In addition, the data-driven error modelling approach is capable of extracting more information from observation data than using a groundwater model alone.

  20. Inadvertent interchange of electrocardiogram limb lead connections: analysis of predicted consequences part II: double interconnection errors.

    Science.gov (United States)

    Rowlands, Derek J

    2012-01-01

    Limb lead connection errors are known to be very common in clinical practice. The consequences of all possible single limb lead interconnection errors were analyzed in an earlier publication (J Electrocardiology 2008;41:84-90). With a single limb lead interconnection error, 6 combinations of limb lead connections are possible. Two of these combinations give rise to records in which the limb lead morphology is uninterpretable. Such records show a "flat line" in lead II or III. Three of the errors give rise to records that are fully interpretable once the specific interconnection error has been identified (although one of the errors cannot reliably be recognized in the absence of a previous record for comparison). One of the errors produces no change in the electrocardiogram recording. In all cases, the precordial leads are interpretable, although there are very minor changes in the voltages. This communication predicts the changes in limb lead appearances consequent upon all possible double limb lead interchanges and illustrates these with records electively taken with such double interconnection errors. There are only 3 possible double limb lead interconnection errors. In 2 of the possible combinations, interpretation of the limb leads is impossible, and each of these errors gives rise to a flat line in lead I. In the third combination, the record is fully interpretable once the abnormality has been identified. In all 3 types, the precordial leads are interpretable, although there are very minor changes in the voltages.

  1. Association of Elevated Reward Prediction Error Response With Weight Gain in Adolescent Anorexia Nervosa.

    Science.gov (United States)

    DeGuzman, Marisa; Shott, Megan E; Yang, Tony T; Riederer, Justin; Frank, Guido K W

    2017-06-01

    Anorexia nervosa is a psychiatric disorder of unknown etiology. Understanding associations between behavior and neurobiology is important in treatment development. Using a novel monetary reward task during functional magnetic resonance brain imaging, the authors tested how brain reward learning in adolescent anorexia nervosa changes with weight restoration. Female adolescents with anorexia nervosa (N=21; mean age, 16.4 years [SD=1.9]) underwent functional MRI (fMRI) before and after treatment; similarly, healthy female control adolescents (N=21; mean age, 15.2 years [SD=2.4]) underwent fMRI on two occasions. Brain function was tested using the reward prediction error construct, a computational model for reward receipt and omission related to motivation and neural dopamine responsiveness. Compared with the control group, the anorexia nervosa group exhibited greater brain response 1) for prediction error regression within the caudate, ventral caudate/nucleus accumbens, and anterior and posterior insula, 2) to unexpected reward receipt in the anterior and posterior insula, and 3) to unexpected reward omission in the caudate body. Prediction error and unexpected reward omission response tended to normalize with treatment, while unexpected reward receipt response remained significantly elevated. Greater caudate prediction error response when underweight was associated with lower weight gain during treatment. Punishment sensitivity correlated positively with ventral caudate prediction error response. Reward system responsiveness is elevated in adolescent anorexia nervosa when underweight and after weight restoration. Heightened prediction error activity in brain reward regions may represent a phenotype of adolescent anorexia nervosa that does not respond well to treatment. Prediction error response could be a neurobiological marker of illness severity that can indicate individual treatment needs.

  2. Cortical delta activity reflects reward prediction error and related behavioral adjustments, but at different times.

    Science.gov (United States)

    Cavanagh, James F

    2015-04-15

    Recent work has suggested that reward prediction errors elicit a positive voltage deflection in the scalp-recorded electroencephalogram (EEG); an event sometimes termed a reward positivity. However, a strong test of this proposed relationship remains to be defined. Other important questions remain unaddressed, such as the role of the reward positivity in predicting future behavioral adjustments that maximize reward. To answer these questions, a three-armed bandit task was used to investigate the role of positive prediction errors during trial-by-trial exploration and task-set based exploitation. The feedback-locked reward positivity was characterized by delta band activities, and these related EEG features scaled with the degree of a computationally derived positive prediction error. However, these phenomena were also dissociated: the computational model predicted exploitative action selection and related response time speeding, whereas the feedback-locked EEG features did not. Compellingly, delta band dynamics time-locked to the subsequent bandit (the P3) successfully predicted these behaviors. These bandit-locked findings included an enhanced parietal to motor cortex delta phase lag that correlated with the degree of response time speeding, suggesting a mechanistic role for delta band activities in motivating action selection. This dissociation in feedback- vs. bandit-locked EEG signals is interpreted as a differentiation between hierarchically distinct types of prediction error, yielding novel predictions about these dissociable delta band phenomena during reinforcement learning and decision making.

  3. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    Science.gov (United States)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.

  4. WAsP prediction errors due to site orography [Wind Atlas Analysis and Application Program]

    Energy Technology Data Exchange (ETDEWEB)

    Bowen, A.J.; Mortensen, N.G.

    2004-12-01

    The influence of rugged terrain on the prediction accuracy of the Wind Atlas Analysis and Application Program (WAsP) is investigated using a case study of field measurements taken in rugged terrain. The parameters that could cause substantial errors in a prediction are identified and discussed. In particular, the effects from extreme orography are investigated. A suitable performance indicator is developed which predicts the sign and approximate magnitude of such errors due to orography. This procedure allows the user to assess the consequences of using WAsP outside its operating envelope and could provide a means of correction for rugged terrain effects. (au)

  5. Drivers of coupled model ENSO error dynamics and the spring predictability barrier

    Science.gov (United States)

    Larson, Sarah M.; Kirtman, Ben P.

    2017-06-01

    Despite recent improvements in ENSO simulations, ENSO predictions ultimately remain limited by error growth and model inadequacies. Determining the accompanying dynamical processes that drive the growth of certain types of errors may help the community better recognize which error sources provide an intrinsic limit to predictability. This study applies a dynamical analysis to previously developed CCSM4 error ensemble experiments that have been used to model noise-driven error growth. Analysis reveals that ENSO-independent error growth is instigated via a coupled instability mechanism. Daily error fields indicate that persistent stochastic zonal wind stress perturbations (τx′) near the equatorial dateline activate the coupled instability, first driving local SST and anomalous zonal current changes that then induce upwelling anomalies and a clear thermocline response. In particular, March presents a window of opportunity for stochastic τx′ to impose a lasting influence on the evolution of eastern Pacific SST through December, suggesting that stochastic τx′ is an important contributor to the spring predictability barrier. Stochastic winds occurring in other months only temporarily affect eastern Pacific SST for 2-3 months. Comparison of a control simulation with an ENSO cycle and the ENSO-independent error ensemble experiments reveals that once the instability is initiated, the subsequent error growth is modulated via an ENSO-like mechanism, namely the seasonal strength of the Bjerknes feedback. Furthermore, unlike ENSO events that exhibit growth through the fall, the growth of ENSO-independent SST errors terminates once the seasonal strength of the Bjerknes feedback weakens in fall. Results imply that the heat content supplied by the subsurface precursor preceding the onset of an ENSO event is paramount to maintaining the growth of the instability (or event) through fall.

  7. High Capacity Reversible Watermarking for Audio by Histogram Shifting and Predicted Error Expansion

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2014-01-01

    Full Text Available Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control capability.
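The prediction-error-expansion mechanism this abstract builds on can be shown in a minimal form. This sketch is a simplification: it predicts each sample from its previous neighbour (the paper instead optimizes prediction coefficients with differential evolution) and omits the histogram shifting, overflow handling, and capacity control the scheme adds.

```python
def pee_embed(samples, bits):
    """Hide one bit per sample in a 1-D integer signal by expanding the
    prediction error e = x[i] - x[i-1] into 2*e + bit (first sample untouched)."""
    s = list(samples)
    out = [s[0]]
    for i, b in zip(range(1, len(s)), bits):
        e = s[i] - s[i - 1]                 # predict from previous original sample
        out.append(s[i - 1] + 2 * e + b)    # expanded error carries the bit
    out.extend(s[len(bits) + 1:])           # tail beyond the payload is unchanged
    return out

def pee_extract(marked, n_bits):
    """Recover the payload and losslessly restore the original samples."""
    bits, restored = [], [marked[0]]
    for i in range(1, n_bits + 1):
        e2 = marked[i] - restored[i - 1]    # expanded prediction error
        bits.append(e2 & 1)                 # embedded bit is the LSB
        restored.append(restored[i - 1] + (e2 >> 1))
    restored.extend(marked[n_bits + 1:])
    return bits, restored

marked = pee_embed([10, 12, 11, 13], [1, 0])
bits, restored = pee_extract(marked, 2)     # bits == [1, 0]; restored == original
```

Because extraction restores each sample before it is used as the next predictor, the recovery is exactly lossless, which is the defining property of reversible watermarking.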

  8. Temporal Prediction Errors Affect Short-Term Memory Scanning Response Time.

    Science.gov (United States)

    Limongi, Roberto; Silva, Angélica M

    2016-11-01

    The Sternberg short-term memory scanning task has been used to unveil cognitive operations involved in time perception. Participants produce time intervals during the task, and the researcher explores how task performance affects interval production - where time estimation error is the dependent variable of interest. The perspective of predictive behavior regards time estimation error as a temporal prediction error (PE), an independent variable that controls cognition, behavior, and learning. Based on this perspective, we investigated whether temporal PEs affect short-term memory scanning. Participants performed temporal predictions while they maintained information in memory. Model inference revealed that PEs affected memory scanning response time independently of the memory-set size effect. We discuss the results within the context of formal and mechanistic models of short-term memory scanning and predictive coding, a Bayes-based theory of brain function. We state the hypothesis that our finding could be associated with weak frontostriatal connections and weak striatal activity.

  9. High capacity reversible watermarking for audio by histogram shifting and predicted error expansion.

    Science.gov (United States)

    Wang, Fei; Xie, Zhaoxin; Chen, Zuo

    2014-01-01

    Being reversible, the watermarking information embedded in audio signals can be extracted while the original audio data achieve lossless recovery. Currently, the few reversible audio watermarking algorithms are confronted with the following problems: relatively low SNR (signal-to-noise ratio) of the embedded audio; a large amount of auxiliary embedded location information; and the absence of accurate capacity control. In this paper, we present a novel reversible audio watermarking scheme based on improved prediction error expansion and histogram shifting. First, we use a differential evolution algorithm to optimize the prediction coefficients and then apply prediction error expansion to output the stego data. Second, in order to reduce the length of the location map, we introduce a histogram shifting scheme. Meanwhile, the prediction error modification threshold for a given embedding capacity can be computed by the proposed scheme. Experiments show that this algorithm improves the SNR of the embedded audio signals and the embedding capacity, drastically reduces the location map length, and enhances capacity control capability.

  10. Analogue correction method of errors and its application to numerical weather prediction

    Institute of Scientific and Technical Information of China (English)

    Gao Li; Ren Hong-Li; Li Jian-Ping; Chou Ji-Fan

    2006-01-01

    In this paper, an analogue correction method of errors (ACE) based on a complicated atmospheric model is further developed and applied to numerical weather prediction (NWP). The analysis shows that the ACE can effectively reduce model errors by combining the statistical analogue method with the dynamical model, so that the information contained in a wealth of historical data is utilized in the current complicated NWP model. Furthermore, in the ACE, the differences of the similarities between different historical analogues and the current initial state are taken as the weights for estimating model errors. The results of daily, ten-day and monthly prediction experiments on a complicated T63 atmospheric model show that the performance of the ACE, which corrects model errors based on the estimated errors of 4 historical analogue predictions, is not only better than that of a scheme that introduces only the error correction of each single analogue prediction, but is also better than that of the T63 model itself.
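The weighting step described above, estimating the current model error as a similarity-weighted combination of historical analogue errors, can be sketched as follows. The inverse-distance weights and the toy numbers are assumptions for illustration; the paper defines its own similarity measure on atmospheric initial states.

```python
import numpy as np

def analogue_error_correction(raw_forecast, current_state, hist_states, hist_errors):
    """Correct a model forecast using similarity-weighted historical model errors.

    hist_states : (n, d) initial states of n historical analogue forecasts
    hist_errors : (n,)   forecast-minus-truth errors of those analogue forecasts
    """
    dist = np.linalg.norm(hist_states - current_state, axis=1)
    w = 1.0 / (dist + 1e-9)        # closer analogues receive larger weights
    w /= w.sum()                   # normalize so the weights sum to one
    return raw_forecast - w @ hist_errors

# Toy check: if the model always over-forecasts by 0.5, the analogues reveal it.
rng = np.random.default_rng(1)
hist_states = rng.standard_normal((4, 3))
hist_errors = np.full(4, 0.5)
corrected = analogue_error_correction(2.0, rng.standard_normal(3),
                                      hist_states, hist_errors)  # -> 1.5
```

With a constant historical error, any normalized weighting removes the bias exactly; with state-dependent errors, the similarity weights decide how much each analogue contributes.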

  11. Quantifying the Effect of Lidar Turbulence Error on Wind Power Prediction

    Energy Technology Data Exchange (ETDEWEB)

    Newman, Jennifer F.; Clifton, Andrew

    2016-01-01

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers.In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. 
Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST.

  12. Using lexical variables to predict picture-naming errors in jargon aphasia

    Directory of Open Access Journals (Sweden)

    Catherine Godbold

    2015-04-01

    Full Text Available Introduction Individuals with jargon aphasia produce fluent output which often comprises high proportions of non-word errors (e.g., maf for dog). Research has been devoted to identifying the underlying mechanisms behind such output. Some accounts posit a reduced flow of spreading activation between levels in the lexical network (e.g., Robson et al., 2003). If activation level differences across the lexical network are a cause of non-word outputs, we would predict improved performance when target items reflect an increased flow of activation between levels (e.g., more frequently-used words are often represented by higher resting levels of activation). This research investigates the effect of lexical properties of targets (e.g., frequency, imageability) on accuracy, error type (real word vs. non-word) and target-error overlap of non-word errors in a picture naming task by individuals with jargon aphasia. Method Participants were 17 individuals with Wernicke’s aphasia, who produced a high proportion of non-word errors (>20% of errors) on the Philadelphia Naming Test (PNT; Roach et al., 1996). The data were retrieved from the Moss Aphasic Psycholinguistic Database Project (MAPPD; Mirman et al., 2010). We used a series of mixed models to test whether lexical variables predicted accuracy, error type (real word vs. non-word) and target-error overlap for the PNT data. As lexical variables tend to be highly correlated, we performed a principal components analysis to reduce the variables into five components representing variables associated with phonology (length, phonotactic probability, neighbourhood density and neighbourhood frequency), semantics (imageability and concreteness), usage (frequency and age-of-acquisition), name agreement and visual complexity. Results and Discussion Table 1 shows the components that made a significant contribution to each model. Individuals with jargon aphasia produced more correct responses and fewer non-word errors relative to

  13. Predictor-based error correction method in short-term climate prediction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    In terms of the basic idea of combining dynamical and statistical methods in short-term climate prediction, a new prediction method of predictor-based error correction (PREC) is put forward in order to effectively use statistical experience in dynamical prediction. Analyses show that the PREC can reasonably utilize the significant correlations between predictors and model prediction errors, and correct prediction errors by establishing a statistical prediction model. Moreover, the PREC is further applied to cross-validation experiments of dynamical seasonal prediction on the operational atmosphere-ocean coupled general circulation model of the China Meteorological Administration/National Climate Center, by selecting the sea surface temperature index in the Niño3 region as the physical predictor that represents the prevailing ENSO-cycle mode of interannual variability in the climate system. The prediction results for summer mean circulation and total precipitation show that the PREC can improve predictive skill to some extent. Thus the PREC provides a new approach for improving short-term climate prediction.
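The PREC idea, regress historical model errors on a physical predictor, then subtract the statistically estimated error from the dynamical forecast, reduces to a small least-squares fit. The linear form and the toy SST-index numbers below are assumptions for illustration.

```python
import numpy as np

def fit_prec(predictor_hist, error_hist):
    """Least-squares fit of historical model errors to a physical predictor."""
    A = np.column_stack([predictor_hist, np.ones_like(predictor_hist)])
    coef, *_ = np.linalg.lstsq(A, error_hist, rcond=None)
    return coef                          # (slope, intercept)

def prec_correct(raw_forecast, predictor_now, coef):
    """Subtract the statistically predicted error from the dynamical forecast."""
    return raw_forecast - (coef[0] * predictor_now + coef[1])

# Toy hindcast set: model error depends linearly on an SST index
sst = np.array([-1.2, -0.5, 0.0, 0.4, 1.1])
err = 2.0 * sst + 1.0                    # perfectly predictable errors
coef = fit_prec(sst, err)
corrected = prec_correct(10.0, 0.8, coef)   # removes 2*0.8 + 1 = 2.6
```

In practice the error-predictor correlation is imperfect, so the correction removes only the error component that the predictor explains.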

  14. Multiscale error analysis, correction, and predictive uncertainty estimation in a flood forecasting system

    Science.gov (United States)

    Bogner, K.; Pappenberger, F.

    2011-07-01

    River discharge predictions often show errors that degrade the quality of forecasts. Three different methods of error correction are compared, namely, an autoregressive model with and without exogenous input (ARX and AR, respectively), and a method based on wavelet transforms. For the wavelet method, a Vector-Autoregressive model with exogenous input (VARX) is simultaneously fitted for the different levels of wavelet decomposition; after predicting the next time steps for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The error correction methods are combined with the Hydrological Uncertainty Processor (HUP) in order to estimate the predictive conditional distribution. For three stations along the Danube catchment, and using output from the European Flood Alert System (EFAS), we demonstrate that the method based on wavelets outperforms simpler methods and uncorrected predictions with respect to mean absolute error, Nash-Sutcliffe efficiency coefficient (and its decomposed performance criteria), informativeness score, and in particular forecast reliability. The wavelet approach efficiently accounts for forecast errors with scale properties of unknown source and statistical structure.
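Of the three correction methods compared above, the AR baseline is the simplest to sketch: fit an autoregressive coefficient to past forecast errors and use it to predict, and remove, the next error. This is only the AR(1) baseline, not the paper's wavelet/VARX method, and the geometric toy series is invented for illustration.

```python
import numpy as np

def ar1_error_correction(forecasts, observations):
    """Fit an AR(1) model to past forecast errors and predict the next error."""
    err = observations - forecasts
    # Least-squares AR(1) coefficient from lagged error pairs
    phi = err[:-1] @ err[1:] / (err[:-1] @ err[:-1])
    return phi, phi * err[-1]   # add the predicted error to the next raw forecast

# Toy series whose errors decay geometrically (an exact AR(1) with phi = 0.8)
obs = 0.8 ** np.arange(10)
fc = np.zeros(10)
phi, next_err = ar1_error_correction(fc, obs)
```

The wavelet approach in the paper applies the same autoregressive idea per decomposition scale, which is why it can capture error structure that a single AR model misses.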

  15. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    Science.gov (United States)

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward

  16. Model structural uncertainty quantification and hydrologic parameter and prediction error analysis using airborne electromagnetic data

    DEFF Research Database (Denmark)

    Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen

    Model structure, or the spatial arrangement of subsurface lithological units, is fundamental to the hydrological behavior of Earth systems. Knowledge of geological model structure is critically important in order to make informed hydrological predictions and management decisions. Model structure … indicator simulation, we produce many realizations of model structure that are consistent with observed datasets and prior knowledge. Given estimates of model structural uncertainty, we incorporate hydrologic observations to evaluate the hydrologic parameter or prediction errors that occur when … is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne …

  17. Real time remaining useful life prediction based on nonlinear Wiener based degradation processes with measurement errors

    Institute of Scientific and Technical Information of China (English)

    唐圣金; 郭晓松; 于传强; 周志杰; 周召发; 张邦成

    2014-01-01

    Real time remaining useful life (RUL) prediction based on condition monitoring is an essential part of condition based maintenance (CBM). Current methods for real time RUL prediction of nonlinear degradation processes do not consider measurement error, and their forecasting uncertainty is large. Therefore, an approximate analytical RUL distribution in closed form for a nonlinear Wiener based degradation process with measurement errors was proposed. The maximum likelihood estimation approach was used to estimate the unknown fixed parameters in the proposed model. When newly observed data become available, the random parameter is updated by the Bayesian method to make the estimation adapt to the item’s individual characteristics and reduce the uncertainty of the estimation. The simulation results show that considering measurement errors in the degradation process can significantly improve the accuracy of real time RUL prediction.
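The setting above can be sketched with a simulated Wiener degradation path observed through measurement noise. All constants are invented, and the naive mean-increment drift estimator and point RUL stand in for the paper's maximum likelihood estimation and full analytical RUL distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, mu, sigma, tau = 1.0, 0.5, 0.1, 0.05   # drift, diffusion, measurement-noise sd
n = 50
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
x = np.cumsum(increments)                   # latent Wiener degradation path
y = x + tau * rng.standard_normal(n)        # observed with measurement error

# Naive drift estimate from increments of the noisy observations
mu_hat = np.mean(np.diff(y)) / dt

# Mean remaining useful life to a failure threshold (point estimate only;
# the paper derives the full RUL distribution and updates it with Bayes)
threshold = 40.0
rul = (threshold - y[-1]) / mu_hat
```

Ignoring the measurement noise term `tau` inflates the apparent increment variance, which is exactly the effect the paper's measurement-error model is designed to remove.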

  18. Lossless compression of hyperspectral images based on the prediction error block

    Science.gov (United States)

    Li, Yongjun; Li, Yunsong; Song, Juan; Liu, Weijia; Li, Jiaojiao

    2016-05-01

    A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed, which is used to compress spaceborne hyperspectral data effectively. In order to make full use of the intra-frame and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar coset based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of the whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme offers both high compression performance and low encoder and decoder complexity, making it suitable for on-board compression of hyperspectral images.
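The rate advantage of per-block allocation can be seen with a toy calculation: when the bitrate is set by the maximum prediction error, confining a large error to one block keeps the rest of the frame cheap. This sketch shows only that allocation argument under an assumed sign-plus-magnitude code; the coset coding of s-DSC is omitted.

```python
import numpy as np

def bits_for(max_err):
    """Magnitude bits for the largest |prediction error|, plus a sign bit."""
    return int(np.ceil(np.log2(max_err + 1))) + 1

def block_bitrate(errors, block=8):
    """Average bits/sample when each block's rate is set by its own max error."""
    total = 0
    for i in range(0, len(errors), block):
        chunk = np.abs(errors[i:i + block])
        total += bits_for(chunk.max()) * len(chunk)
    return total / len(errors)

# One outlier forces the whole frame to 8 bits/sample, but only one block pays:
e = np.array([1] * 15 + [127])
frame_rate = bits_for(np.abs(e).max())   # 8 bits/sample for every sample
block_rate = block_bitrate(e)            # (2*8 + 8*8) / 16 = 5 bits/sample
```

The smaller the blocks, the more tightly the rate tracks the local error statistics, at the cost of more per-block side information.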

  19. Cognitive strategies regulate fictive, but not reward prediction error signals in a sequential investment task.

    Science.gov (United States)

    Gu, Xiaosi; Kirk, Ulrich; Lohrenz, Terry M; Montague, P Read

    2014-08-01

    Computational models of reward processing suggest that foregone or fictive outcomes serve as important information sources for learning and augment those generated by experienced rewards (e.g. reward prediction errors). An outstanding question is how these learning signals interact with top-down cognitive influences, such as cognitive reappraisal strategies. Using a sequential investment task and functional magnetic resonance imaging, we show that the reappraisal strategy selectively attenuates the influence of fictive, but not reward prediction error signals on investment behavior; such behavioral effect is accompanied by changes in neural activity and connectivity in the anterior insular cortex, a brain region thought to integrate subjective feelings with high-order cognition. Furthermore, individuals differ in the extent to which their behaviors are driven by fictive errors versus reward prediction errors, and the reappraisal strategy interacts with such individual differences; a finding also accompanied by distinct underlying neural mechanisms. These findings suggest that the variable interaction of cognitive strategies with two important classes of computational learning signals (fictive, reward prediction error) represent one contributing substrate for the variable capacity of individuals to control their behavior based on foregone rewards. These findings also expose important possibilities for understanding the lack of control in addiction based on possibly foregone rewarding outcomes.

  20. Choice modulates the neural dynamics of prediction error processing during rewarded learning.

    Science.gov (United States)

    Peterson, David A; Lotz, Daniel T; Halgren, Eric; Sejnowski, Terrence J; Poizner, Howard

    2011-01-15

    Our ability to selectively engage with our environment enables us to guide our learning and to take advantage of its benefits. When facing multiple possible actions, our choices are a critical aspect of learning. In the case of learning from rewarding feedback, there has been substantial theoretical and empirical progress in elucidating the associated behavioral and neural processes, predominantly in terms of a reward prediction error, a measure of the discrepancy between actual versus expected reward. Nevertheless, the distinct influence of choice on prediction error processing and its neural dynamics remains relatively unexplored. In this study we used a novel paradigm to determine how choice influences prediction error processing and to examine whether there are correspondingly distinct neural dynamics. We recorded scalp electroencephalogram while healthy adults were administered a rewarded learning task in which choice trials were intermingled with control trials involving the same stimuli, motor responses, and probabilistic rewards. We used a temporal difference learning model of subjects' trial-by-trial choices to infer subjects' image valuations and corresponding prediction errors. As expected, choices were associated with lower overall prediction error magnitudes, most notably over the course of learning the stimulus-reward contingencies. Choices also induced a higher-amplitude relative positivity in the frontocentral event-related potential about 200 ms after reward signal onset that was negatively correlated with the differential effect of choice on the prediction error. Thus choice influences the neural dynamics associated with how reward signals are processed during learning. Behavioral, computational, and neurobiological models of rewarded learning should therefore accommodate a distinct influence for choice during rewarded learning.
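    The temporal difference model used to infer trial-by-trial valuations in studies like this reduces, in its simplest form, to a delta-rule update driven by the reward prediction error. A toy sketch under assumed parameters (a two-option task with softmax choice; the reward probabilities, learning rate, and inverse temperature below are illustrative, not the study's fitted values):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical two-option task: option 0 pays reward with p=0.8, option 1 with p=0.2.
p_reward = [0.8, 0.2]
alpha, beta = 0.1, 3.0      # learning rate and softmax inverse temperature
V = np.zeros(2)             # learned option values

for t in range(500):
    probs = np.exp(beta * V) / np.exp(beta * V).sum()   # softmax choice rule
    choice = rng.choice(2, p=probs)
    reward = float(rng.random() < p_reward[choice])
    delta = reward - V[choice]      # reward prediction error
    V[choice] += alpha * delta      # delta-rule (TD) update

print(np.round(V, 2))   # values drift toward the underlying reward probabilities
```

    Fitting `alpha` and `beta` to a subject's actual choices, then reading out `delta` on each trial, is how per-trial prediction error regressors are typically obtained for the ERP analysis.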

  1. Predictive error detection in pianists: A combined ERP and motion capture study

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-09-01

    Full Text Available Performing a piece of music involves the interplay of several cognitive and motor processes and requires extensive training to achieve a high skill level. However, even professional musicians commit errors occasionally. Previous event-related potential (ERP) studies have investigated the neurophysiological correlates of pitch errors during piano performance, and reported a pre-error negativity occurring approximately 70-100 ms before the error had been committed and become audible. It was assumed that this pre-error negativity reflects predictive control processes that compare predicted consequences with actual consequences of one's own actions. However, in previous investigations, correct and incorrect pitch events were confounded by their different tempi. In addition, no data about the underlying movements were available. In the present study, we exploratively recorded the ERPs and 3D movement data of pianists' fingers simultaneously while they performed fingering exercises from memory. Results showed a pre-error negativity for incorrect keystrokes when both correct and incorrect keystrokes were performed with comparable tempi. Interestingly, even correct notes immediately preceding erroneous keystrokes elicited a very similar negativity. In addition, we explored the possibility of computing ERPs time-locked to a kinematic landmark in the finger motion trajectories, defined by when a finger makes initial contact with the key surface, that is, at the onset of tactile feedback. Results suggest that incorrect notes elicited a small difference after the onset of tactile feedback, whereas correct notes preceding incorrect ones elicited negativity before the onset of tactile feedback. The results tentatively suggest that tactile feedback plays an important role in error monitoring during piano performance, because the comparison between predicted and actual sensory (tactile) feedback may provide the information necessary for the detection of an

  2. Neural Activities Underlying the Feedback Express Salience Prediction Errors for Appetitive and Aversive Stimuli

    Science.gov (United States)

    Gu, Yan; Hu, Xueping; Pan, Weigang; Yang, Chun; Wang, Lijun; Li, Yiyuan; Chen, Antao

    2016-01-01

    Feedback information is essential for us to adapt appropriately to the environment. The feedback-related negativity (FRN), a frontocentral negative deflection after the delivery of feedback, has been found to be larger for outcomes that are worse than expected, and it reflects a reward prediction error derived from the midbrain dopaminergic projections to the anterior cingulate cortex (ACC), as stated in reinforcement learning theory. In contrast, the predicted response-outcome (PRO) model claims that the neural activity in the medial prefrontal cortex (mPFC), especially the ACC, is sensitive to the violation of expectancy, irrespective of the valence of feedback. Additionally, increasing evidence has demonstrated significant activity in the striatum, anterior insula and occipital lobe for unexpected outcomes independently of their valence. Thus, the neural mechanism of feedback processing remains under dispute. Here, we investigated feedback with monetary reward and electrical pain shock in one task via functional magnetic resonance imaging. The results revealed significant prediction-error-related activity in the bilateral fusiform gyrus, right middle frontal gyrus and left cingulate gyrus for both money and pain. This implies that some regions underlying feedback processing may signal a salience prediction error rather than a reward prediction error. PMID:27694920

  3. What Kind of Initial Errors Cause the Severest Prediction Uncertainty of El Niño in the Zebiak-Cane Model

    Institute of Scientific and Technical Information of China (English)

    XU Hui; DUAN Wansuo

    2008-01-01

    With the Zebiak-Cane (ZC) model, the initial error that has the largest effect on ENSO prediction is explored by conditional nonlinear optimal perturbation (CNOP). The results demonstrate that CNOP-type errors cause the largest prediction error of ENSO in the ZC model. By analyzing the behavior of CNOP-type errors, we find that for the normal states and the relatively weak El Niño events in the ZC model, the predictions tend to yield false alarms due to the uncertainties caused by CNOP. For the relatively strong El Niño events, the ZC model largely underestimates their intensities. Also, our results suggest that the error growth of El Niño in the ZC model depends on the phases of both the annual cycle and ENSO. The condition during northern spring and summer is most favorable for the error growth, so ENSO predictions bestriding these two seasons may be the most difficult. A linear singular vector (LSV) approach is also used to estimate the error growth of ENSO, but it underestimates the prediction uncertainties of ENSO in the ZC model. This result indicates that different initial errors cause prediction errors of different amplitudes even though they have the same magnitude, with CNOP yielding the severest prediction uncertainty. That is to say, the prediction skill of ENSO is closely related to the type of initial error. This finding illustrates a theoretical basis for data assimilation: it is expected that a data assimilation method can filter out the initial errors related to CNOP and improve the ENSO forecast skill.
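    The idea behind CNOP, and why it can exceed linear singular-vector estimates, can be sketched on a toy nonlinear model: among all initial perturbations within a given norm bound, find the one whose fully nonlinear evolution departs farthest from the reference forecast. Everything below (the double-well dynamics, constraint radius, and brute-force search) is an illustrative assumption, not the ZC setup:

```python
import numpy as np

def evolve(x, steps=60, dt=0.1):
    """Toy nonlinear model (double-well dynamics), a stand-in for the ZC model."""
    for _ in range(steps):
        x = x + dt * (x - x**3)
    return x

x0 = 0.05            # reference initial state near the unstable equilibrium
sigma = 0.1          # norm constraint on the initial perturbation
ref = evolve(x0)

# CNOP idea: among all perturbations with |delta| <= sigma, pick the one whose
# fully nonlinear evolution departs farthest from the reference forecast.
deltas = np.linspace(-sigma, sigma, 2001)
growth = np.abs(evolve(x0 + deltas) - ref)
cnop = deltas[np.argmax(growth)]

# Linear (tangent-model) estimate, analogous to the LSV approach: scale the
# growth of an infinitesimal error up to the constraint radius.
eps = 1e-6
linear_est = sigma * abs(evolve(x0 + eps) - ref) / eps

print(cnop, round(growth.max(), 2), round(linear_est, 4))
```

    The CNOP here sits on the constraint boundary and pushes the state across the basin boundary, producing order-one forecast error, while the linearized estimate, blind to the regime change, predicts almost no growth. This mirrors the abstract's finding that LSV underestimates the prediction uncertainty.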

  4. Lateral habenula neurons signal errors in the prediction of reward information.

    Science.gov (United States)

    Bromberg-Martin, Ethan S; Hikosaka, Okihide

    2011-08-21

    Humans and animals have the ability to predict future events, which they cultivate by continuously searching their environment for sources of predictive information. However, little is known about the neural systems that motivate this behavior. We hypothesized that information-seeking is assigned value by the same circuits that support reward-seeking, such that neural signals encoding reward prediction errors (RPEs) include analogous information prediction errors (IPEs). To test this, we recorded from neurons in the lateral habenula, a nucleus that encodes RPEs, while monkeys chose between cues that provided different chances to view information about upcoming rewards. We found that a subpopulation of lateral habenula neurons transmitted signals resembling IPEs, responding when reward information was unexpectedly cued, delivered or denied. These signals evaluated information sources reliably, even when the monkey's decisions did not. These neurons could provide a common instructive signal for reward-seeking and information-seeking behavior.

  5. Effect of Measurement Errors on Predicted Cosmological Constraints from Shear Peak Statistics with LSST

    CERN Document Server

    Bard, D; Chang, C; May, M; Kahn, S M; AlSayyad, Y; Ahmad, Z; Bankert, J; Connolly, A; Gibson, R R; Gilmore, K; Grace, E; Haiman, Z; Hannel, M; Huffenberger, K M; Jernigan, J G; Jones, L; Krughoff, S; Lorenz, S; Marshall, S; Meert, A; Nagarajan, S; Peng, E; Peterson, J; Rasmussen, A P; Shmakova, M; Sylvestre, N; Todd, N; Young, M

    2013-01-01

    The statistics of peak counts in reconstructed shear maps contain information beyond the power spectrum, and can improve cosmological constraints from measurements of the power spectrum alone if systematic errors can be controlled. We study the effect of galaxy shape measurement errors on predicted cosmological constraints from the statistics of shear peak counts with the Large Synoptic Survey Telescope (LSST). We use the LSST image simulator in combination with cosmological N-body simulations to model realistic shear maps for different cosmological models. We include both galaxy shape noise and, for the first time, measurement errors on galaxy shapes. We find that the measurement errors considered have relatively little impact on the constraining power of shear peak counts for LSST.

  6. Comparisons of Two Ensemble Mean Methods in Measuring the Average Error Growth and the Predictability

    Institute of Scientific and Technical Information of China (English)

    Ding Ruiqiang; Li Jianping

    2011-01-01

    In this paper, taking the Lorenz system as an example, we compare the influences of the arithmetic mean and the geometric mean on measuring the global and local average error growth. The results show that the geometric mean error (GME) has a smoother growth than the arithmetic mean error (AME) for the global average error growth, and the GME is directly related to the maximal Lyapunov exponent, but the AME is not, as already noted by Krishnamurthy in 1993. Besides this, the GME is shown to be more appropriate than the AME in measuring the mean error growth in terms of the probability distribution of errors. The physical meanings of the saturation levels of the AME and the GME are also shown to be different. However, there is no obvious difference between the local average error growth with the arithmetic mean and the geometric mean, indicating that the choice of the AME or the GME has no influence on the measure of local average predictability.
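    The AME/GME distinction is easy to reproduce numerically. A sketch with a perturbed twin ensemble of the Lorenz-63 system (the forward-Euler integrator, ensemble size, and perturbation scale are illustrative choices): the log of the geometric mean error grows near-linearly with a slope close to the largest Lyapunov exponent (about 0.9 for Lorenz-63), while by the AM-GM inequality the arithmetic mean always sits above the geometric mean.

```python
import numpy as np

def lorenz_step(state, dt=0.01, s=10.0, r=28.0, b=8/3):
    # one forward-Euler step of the Lorenz-63 equations (crude but adequate here)
    x, y, z = state[..., 0], state[..., 1], state[..., 2]
    dxyz = np.stack([s * (y - x), x * (r - z) - y, x * y - b * z], axis=-1)
    return state + dt * dxyz

rng = np.random.default_rng(2)
base = np.array([1.0, 1.0, 1.0]) + rng.normal(0, 1.0, size=(200, 3))
for _ in range(500):                  # spin-up onto the attractor
    base = lorenz_step(base)
pert = base + rng.normal(0, 1e-6, size=base.shape)  # perturbed twin ensemble

ame, gme = [], []
for _ in range(1500):
    base, pert = lorenz_step(base), lorenz_step(pert)
    err = np.linalg.norm(base - pert, axis=-1)
    ame.append(err.mean())                      # arithmetic mean error
    gme.append(np.exp(np.log(err).mean()))      # geometric mean error

# log-slope of the GME over the linear-growth phase ~ largest Lyapunov exponent
slope = (np.log(gme[500]) - np.log(gme[100])) / (400 * 0.01)
print(round(slope, 2))
```

    Plotting `ame` and `gme` against time shows the smoother growth of the geometric mean that the abstract describes.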

  7. Solar cycle full-shape predictions: a global error evaluation for cycle 24

    CERN Document Server

    Sello, Stefano

    2016-01-01

    There are many proposed prediction methods for solar cycle behavior. In a previous paper we updated the full-shape curve prediction of the current solar cycle 24 using a non-linear dynamics method and compared the results with the predictions collected by the NOAA/SEC prediction panel, using observed data up to October 2010. The aim of the present paper is to give a quantitative evaluation, a posteriori, of the performance of these prediction methods using a specific global error, updated on a monthly basis, which measures the global performance on the predicted shape (both amplitude and phase) of the solar cycle. We also suggest the use of a percent cycle similarity degree, to better evaluate the predicted shape of the solar cycle curve.

  8. Correction for Measurement Error from Genotyping-by-Sequencing in Genomic Variance and Genomic Prediction Models

    DEFF Research Database (Denmark)

    Ashraf, Bilal; Janss, Luc; Jensen, Just

    Genotyping-by-sequencing (GBSeq) is becoming a cost-effective genotyping platform for species without available SNP arrays. GBSeq involves sequencing short reads from restriction sites covering a limited part of the genome (e.g., 5-10%) with low sequencing depth per individual (e.g., 5-10X per....... In the current work we show how the correction for measurement error in GBSeq can also be applied in whole genome genomic variance and genomic prediction models. Bayesian whole-genome random regression models are proposed to allow implementation of large-scale SNP-based models with a per-SNP correction...... for measurement error. We show correct retrieval of genomic explained variance, and improved genomic prediction when accounting for the measurement error in GBSeq data...

  9. The effect of prediction error correlation on optimal sensor placement in structural dynamics

    Science.gov (United States)

    Papadimitriou, Costas; Lombaert, Geert

    2012-04-01

    The problem of estimating the optimal sensor locations for parameter estimation in structural dynamics is revisited. The effect of spatially correlated prediction errors on the optimal sensor placement is investigated. The information entropy is used as a performance measure of the sensor configuration. The optimal sensor location is formulated as an optimization problem involving discrete-valued variables, which is solved using computationally efficient sequential sensor placement algorithms. Asymptotic estimates for the information entropy are used to develop useful properties that provide insight into the dependence of the information entropy on the number and location of sensors. A theoretical analysis shows that the spatial correlation length of the prediction errors controls the minimum distance between the sensors and should be taken into account when designing optimal sensor locations with potential sensor distances up to the order of the characteristic length of the dynamic problem considered. Implementation issues for modal identification and structure-related model parameter estimation are addressed. Theoretical and computational developments are illustrated by designing the optimal sensor configurations for a continuous beam model, a discrete chain-like stiffness-mass model and a finite element model of a footbridge in Wetteren (Belgium). Results point out the crucial effect the spatial correlation of the prediction errors has on the design of optimal sensor locations for structural dynamics applications, revealing simultaneously potential inadequacies of spatially uncorrelated prediction error models.
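    A common concrete form of such sequential sensor placement is greedy forward selection maximizing the information content of the measured modes, here the log-determinant of the Fisher information matrix, which for i.i.d. prediction errors is equivalent to minimizing the information entropy measure. The sketch below is an illustrative assumption, not the paper's algorithm: the first three mode shapes of a simply supported beam stand in for the structural model, and sensors are added one at a time.

```python
import numpy as np

# Hypothetical stand-in model: first three mode shapes of a simply supported
# beam sampled at 20 candidate sensor locations; Phi[i, j] = mode j at point i.
n_loc, n_modes = 20, 3
x = np.linspace(0, 1, n_loc)
Phi = np.column_stack([np.sin((j + 1) * np.pi * x) for j in range(n_modes)])

def logdet_fim(rows):
    """Log-det of the Fisher information for the chosen sensor rows (assuming
    i.i.d. prediction errors); larger means lower information entropy."""
    A = Phi[rows]
    return np.linalg.slogdet(A.T @ A + 1e-9 * np.eye(n_modes))[1]

# Greedy forward (sequential) sensor placement: at each step add the candidate
# location that yields the largest information gain.
chosen = []
for _ in range(n_modes):
    best = max((i for i in range(n_loc) if i not in chosen),
               key=lambda i: logdet_fim(chosen + [i]))
    chosen.append(best)

print(sorted(chosen))
```

    With spatially correlated prediction errors, the criterion would use a full error covariance C, replacing A.T @ A with A.T @ inv(C) @ A; nearby sensors then share information, which is why a minimum inter-sensor distance emerges in the paper's analysis.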

  10. Principal components analysis of reward prediction errors in a reinforcement learning task.

    Science.gov (United States)

    Sambrook, Thomas D; Goslin, Jeremy

    2016-01-01

    Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as feedback-related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive RPEs and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive RPEs and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlying components. This revealed a single RPE encoding component responsive to the size of positive RPEs, peaking at ~330 ms, and occupying the delta frequency band. Other components responsive to unsigned prediction error size were shown, but no component sensitive to negative RPE size was found.
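    The separation logic, overlapping components disentangled by PCA on the trial-by-time matrix, can be sketched on synthetic data. The two Gaussian-shaped waveforms, their latencies, and the noise level below are illustrative assumptions standing in for a signed-RPE component and an unsigned salience component:

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.linspace(0, 0.6, 120)                 # 0-600 ms epoch, 120 samples

# Two hypothetical overlapping waveforms: a signed-RPE component near 330 ms
# and an unsigned "salience" component near 250 ms.
c_rpe = np.exp(-((t - 0.33) / 0.04) ** 2)
c_sal = np.exp(-((t - 0.25) / 0.05) ** 2)

n_trials = 300
rpe = rng.uniform(-1, 1, n_trials)           # signed prediction error per trial
X = (np.outer(rpe, c_rpe)                    # amplitude tracks the signed RPE
     + np.outer(np.abs(rpe), c_sal)          # amplitude tracks |RPE| (salience)
     + rng.normal(0, 0.1, (n_trials, len(t))))

# PCA via SVD of the mean-centered trial x time matrix.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T                       # trial loadings on PC1 and PC2

# One leading component's trial scores track the signed RPE, separating it
# from the overlapping salience component.
r = [abs(np.corrcoef(scores[:, k], rpe)[0, 1]) for k in range(2)]
print([round(v, 2) for v in r])
```

    Because the signed and unsigned amplitudes are uncorrelated across trials, PCA can pull apart components whose waveforms overlap heavily in time, which is exactly what simple windowed averages cannot do.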

  11. EEG Error Prediction as a Solution for Combining the Advantages of Retrieval Practice and Errorless Learning.

    Science.gov (United States)

    Riley, Ellyn A; McFarland, Dennis J

    2017-01-01

    Given the frequency of naming errors in aphasia, a common aim of speech and language rehabilitation is the improvement of naming. Based on evidence of significant word recall improvements in patients with memory impairments, errorless learning methods have been successfully applied to naming therapy in aphasia; however, other evidence suggests that although errorless learning can lead to better performance during treatment sessions, retrieval practice may be the key to lasting improvements. Task performance may vary with brain state (e.g., level of arousal, degree of task focus), and changes in brain state can be detected using EEG. With the ultimate goal of designing a system that monitors patient brain state in real time during therapy, we sought to determine whether errors could be predicted using spectral features obtained from an analysis of EEG. Thus, this study aimed to investigate the use of individual EEG responses to predict error production in aphasia. Eight participants with aphasia each completed 900 object-naming trials across three sessions while EEG was recorded and response accuracy scored for each trial. Analysis of the EEG response for seven of the eight participants showed significant correlations between EEG features and response accuracy (correct vs. incorrect) and error correction (correct, self-corrected, incorrect). Furthermore, upon combining the training data for the first two sessions, the model generalized to predict accuracy for performance in the third session for seven participants when accuracy was used as a predictor, and for five participants when error correction category was used as a predictor. With such ability to predict errors during therapy, it may be possible to use this information to intervene with errorless learning strategies only when necessary, thereby allowing patients to benefit from both the high within-session success of errorless learning as well as the longer-term improvements associated with retrieval practice.
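    The pipeline implied above, extracting spectral features from each trial's EEG and using them to predict response accuracy, training on early sessions and testing on a held-out one, can be sketched as follows. The simulated data (a stronger 10 Hz rhythm on error trials), the band choice, and the threshold classifier are all illustrative assumptions, not the study's actual features or model:

```python
import numpy as np

rng = np.random.default_rng(5)
fs, n_trials, n_samp = 250, 300, 500            # 2 s epochs at 250 Hz

# Hypothetical premise (mirroring the study's logic): alpha power is elevated
# on trials that end in a naming error.
labels = rng.integers(0, 2, n_trials)           # 1 = error trial
t = np.arange(n_samp) / fs
alpha_amp = 1.0 + 0.8 * labels                  # stronger 10 Hz rhythm on errors
phase = rng.uniform(0, 2 * np.pi, (n_trials, 1))
eeg = (alpha_amp[:, None] * np.sin(2 * np.pi * 10 * t + phase)
       + rng.normal(0, 1.0, (n_trials, n_samp)))

# Spectral feature: log power in the 8-12 Hz band from the FFT.
freqs = np.fft.rfftfreq(n_samp, 1 / fs)
psd = np.abs(np.fft.rfft(eeg, axis=1)) ** 2
band = (freqs >= 8) & (freqs <= 12)
feat = np.log(psd[:, band].mean(axis=1))

# "Train on two sessions, predict the third": threshold learned on the first
# 200 trials, evaluated on the held-out 100.
thr = 0.5 * (feat[:200][labels[:200] == 1].mean()
             + feat[:200][labels[:200] == 0].mean())
pred = (feat[200:] > thr).astype(int)
acc = (pred == labels[200:]).mean()
print(round(acc, 2))
```

    A real-time version of this idea is what would let a therapy system switch to errorless strategies only when an error is predicted.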

  12. Early adversity disrupts the adult use of aversive prediction errors to reduce fear in uncertainty

    Directory of Open Access Journals (Sweden)

    Kristina M Wright

    2015-08-01

    Full Text Available Early life adversity increases anxiety in adult rodents and primates, and increases the risk for developing post-traumatic stress disorder (PTSD) in humans. We hypothesized that early adversity impairs the use of learning signals – negative, aversive prediction errors – to reduce fear in uncertainty. To test this hypothesis, we gave adolescent rats a battery of adverse experiences and then assessed adult performance in probabilistic Pavlovian fear conditioning and fear extinction. Rats were confronted with three cues associated with different probabilities of foot shock: one cue never predicted shock, another cue predicted shock with uncertainty, and a final cue always predicted shock. Control rats initially acquired fear to all cues, but rapidly reduced fear to the non-predictive and uncertain cues. Early adversity rats were slower to reduce fear to the non-predictive cue and never fully reduced fear to the uncertain cue. In extinction, all cues were presented in the absence of shock. Fear to the uncertain cue in discrimination, but not early adversity itself, predicted the reduction of fear in extinction. These results demonstrate that early adversity impairs the use of negative, aversive prediction errors to reduce fear, especially in situations of uncertainty.

  13. Wavelet based error correction and predictive uncertainty of a hydrological forecasting system

    Science.gov (United States)

    Bogner, Konrad; Pappenberger, Florian; Thielen, Jutta; de Roo, Ad

    2010-05-01

    River discharge predictions most often show errors with scaling properties of unknown source and statistical structure that degrade the quality of forecasts. This is especially true for lead-time ranges greater than a few days. Since the European Flood Alert System (EFAS) provides discharge forecasts up to ten days ahead, it is necessary to take these scaling properties into consideration. For example, the error that occurs in spring, caused by long-lasting snowmelt processes, spans a far larger range of scales than the error that appears during the summer period and is caused by convective rain fields of short duration. The wavelet decomposition is an excellent way to provide the detailed model error at different levels in order to estimate the (unobserved) state variables more precisely. A Vector-AutoRegressive model with eXogenous input (VARX) is fitted for the different levels of wavelet decomposition simultaneously and, after predicting the next time steps ahead for each scale, a reconstruction formula is applied to transform the predictions in the wavelet domain back to the original time domain. The Bayesian Uncertainty Processor (BUP) developed by Krzysztofowicz is an efficient method to estimate the full predictive uncertainty, which is derived by integrating the hydrological model uncertainty and the meteorological input uncertainty. A hydrological uncertainty processor has been applied to the error-corrected discharge series first, in order to derive the predictive conditional distribution under the hypothesis that there is no input uncertainty. The uncertainty of the forecasted meteorological input forcing the hydrological model is derived from the combination of deterministic weather forecasts and ensemble prediction systems (EPS), and the Input Processor maps this input uncertainty into the output uncertainty under the hypothesis that there is no hydrological uncertainty. The main objective of this Bayesian forecasting system
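    The decompose-model-reconstruct loop described above can be sketched with a single-level Haar transform and a per-scale AR(1) forecast (the abstract's VARX fitted across all levels is simplified here to independent AR(1) models; the synthetic error series, slow snowmelt-like trend plus fast noise, is an illustrative assumption):

```python
import numpy as np

def haar_decompose(x):
    """One level of the Haar wavelet transform: approximation + detail."""
    x = x[: len(x) // 2 * 2]
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def haar_reconstruct(a, d):
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def ar1_forecast(series):
    """Fit y_t = phi * y_{t-1} by least squares; forecast one step ahead."""
    y0, y1 = series[:-1], series[1:]
    phi = (y0 @ y1) / (y0 @ y0)
    return phi * series[-1]

# Hypothetical model-error series: slow (snowmelt-like) trend plus fast noise.
rng = np.random.default_rng(6)
n = 256
err = np.sin(np.arange(n) / 20.0) + 0.1 * rng.normal(size=n)

a, d = haar_decompose(err)
assert np.allclose(haar_reconstruct(a, d), err)   # the transform round-trips

# Forecast each scale separately, then map the per-scale forecasts back to
# the time domain (the reconstruction step of the abstract).
a_next, d_next = ar1_forecast(a), ar1_forecast(d)
next_pair = haar_reconstruct(np.array([a_next]), np.array([d_next]))
print(np.round(next_pair, 2))
```

    The slow approximation coefficients carry most of the predictable structure, so modeling each scale with its own dynamics, as the VARX does, lets long-memory and short-memory error components be forecast with appropriately different persistence.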

  14. Predicting a multi-parametric probability map of active tumor extent using random forests.

    Science.gov (United States)

    Prior, Fred W; Fouke, Sarah J; Benzinger, Tammie; Boyd, Alicia; Chicoine, Michael; Cholleti, Sharath; Kelsey, Matthew; Keogh, Bart; Kim, Lauren; Milchenko, Mikhail; Politte, David G; Tyree, Stephen; Weinberger, Kilian; Marcus, Daniel

    2013-01-01

    Glioblastoma multiforme is highly infiltrative, making precise delineation of tumor margin difficult. Multimodality or multi-parametric MR imaging sequences promise an advantage over anatomic sequences such as post-contrast enhancement as methods for determining the spatial extent of tumor involvement. In considering multi-parametric imaging sequences, however, manual image segmentation and classification is time-consuming and prone to error. As a preliminary step toward integration of multi-parametric imaging into clinical assessments of primary brain tumors, we propose a machine-learning based multi-parametric approach that uses radiologist-generated labels to train a classifier that is able to classify tissue on a voxel-wise basis and automatically generate a tumor segmentation. A random forests classifier was trained using a leave-one-out experimental paradigm. A simple linear classifier was also trained for comparison. The random forests classifier accurately predicted radiologist-generated segmentations and tumor extent.

  15. Predicting a Multi-Parametric Probability Map of Active Tumor Extent Using Random Forests*

    Science.gov (United States)

    Prior, Fred W.; Fouke, Sarah J.; Benzinger, Tammie; Boyd, Alicia; Chicoine, Michael; Cholleti, Sharath; Kelsey, Matthew; Keogh, Bart; Kim, Lauren; Milchenko, Mikhail; Politte, David G.; Tyree, Stephen; Weinberger, Kilian; Marcus, Daniel

    2014-01-01

    Glioblastoma multiforme is highly infiltrative, making precise delineation of tumor margin difficult. Multimodality or multi-parametric MR imaging sequences promise an advantage over anatomic sequences such as post-contrast enhancement as methods for determining the spatial extent of tumor involvement. In considering multi-parametric imaging sequences, however, manual image segmentation and classification is time-consuming and prone to error. As a preliminary step toward integration of multi-parametric imaging into clinical assessments of primary brain tumors, we propose a machine-learning based multi-parametric approach that uses radiologist-generated labels to train a classifier that is able to classify tissue on a voxel-wise basis and automatically generate a tumor segmentation. A random forests classifier was trained using a leave-one-out experimental paradigm. A simple linear classifier was also trained for comparison. The random forests classifier accurately predicted radiologist-generated segmentations and tumor extent. PMID:24111225
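    The evaluation scheme above, a random forest scored with a leave-one-out paradigm, can be sketched in miniature. To stay self-contained, the forest below is a toy ensemble of bootstrapped decision stumps on synthetic "voxel" features; the feature count, stump depth, and data are all illustrative assumptions, not the paper's classifier:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical voxel features (stand-ins for multi-parametric MR intensities)
# with labels: 1 = tumor, 0 = normal tissue. Feature 0 is most informative.
n, n_feat = 60, 3
X = rng.normal(size=(n, n_feat))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.normal(size=n) > 0).astype(int)

def fit_stump(X, y, rng):
    """Depth-1 tree on a random feature of a bootstrap resample."""
    idx = rng.integers(0, len(y), len(y))     # bootstrap resample
    f = int(rng.integers(0, X.shape[1]))      # random feature choice
    thr = np.median(X[idx, f])
    hi = X[idx, f] > thr
    above = y[idx][hi].mean() if hi.any() else 0.5   # class rate above threshold
    return f, thr, int(above >= 0.5)

def forest_predict(stumps, x):
    votes = [pos if x[f] > thr else 1 - pos for f, thr, pos in stumps]
    return int(np.mean(votes) >= 0.5)

# Leave-one-out paradigm: retrain the forest n times, holding out one sample.
correct = 0
for i in range(n):
    mask = np.arange(n) != i
    stumps = [fit_stump(X[mask], y[mask], rng) for _ in range(50)]
    correct += forest_predict(stumps, X[i]) == y[i]

print(correct / n)
```

    Leave-one-out is attractive in small clinical cohorts like this one because every case contributes to both training and an unbiased test, at the cost of retraining the model once per case.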

  16. Prediction of the result in race walking using regularized regression models

    Directory of Open Access Journals (Sweden)

    Krzysztof Przednowek

    2013-04-01

    Full Text Available The following paper presents the use of regularized linear models as tools to optimize the training process. The models were calculated using data collected from race walkers' training events. They predict the outcome of a 3 km race following a prescribed training plan. The material comprised a total of 122 training patterns from 21 athletes. The methods of analysis include the classical OLS regression model, ridge regression, LASSO regression and elastic net regression. In order to compare the methods and choose the best one, leave-one-out cross-validation was used. All models were calculated using the R language with additional packages. The best model was obtained by the LASSO method, which generates an error of about 26 seconds. The method also simplified the structure of the model by eliminating 5 out of 18 predictors.
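    Leave-one-out selection among regularized models can be sketched as below. Ridge regression stands in because it has a closed form (the paper's winning model was LASSO, which needs an iterative solver); the data dimensions echo the study (122 patterns, 18 predictors) but the data themselves, the noise level, and the λ grid are fabricated for illustration:

```python
import numpy as np

rng = np.random.default_rng(8)

# Fabricated stand-in data with the study's dimensions: 122 training patterns,
# 18 predictors, target = 3 km result in seconds; only a few predictors matter.
n, p = 122, 18
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:5] = [40.0, -25.0, 15.0, 10.0, -8.0]
y = 1500 + X @ beta_true + rng.normal(0, 25, n)   # ~25 s irreducible noise

def ridge_fit(X, y, lam):
    """Closed-form ridge regression with an unpenalized intercept."""
    xm, ym = X.mean(axis=0), y.mean()
    Xc = X - xm
    b = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ (y - ym))
    return b, ym - xm @ b

def loo_rmse(X, y, lam):
    """Leave-one-out prediction error for one regularization setting."""
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i
        b, b0 = ridge_fit(X[mask], y[mask], lam)
        errs.append(y[i] - (b0 + X[i] @ b))
    return float(np.sqrt(np.mean(np.square(errs))))

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
scores = {lam: loo_rmse(X, y, lam) for lam in lams}
best = min(scores, key=scores.get)
print(best, round(scores[best], 1))
```

    The selected model's LOO RMSE estimates the out-of-sample error in seconds, the same kind of figure as the paper's roughly 26-second result; swapping the ridge fit for a LASSO solver would additionally zero out the irrelevant predictors.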

  17. Working Memory Capacity Predicts Selection and Identification Errors in Visual Search.

    Science.gov (United States)

    Peltier, Chad; Becker, Mark W

    2016-11-17

    As public safety relies on the ability of professionals, such as radiologists and baggage screeners, to detect rare targets, it could be useful to identify predictors of visual search performance. Schwark, Sandry, and Dolgov found that working memory capacity (WMC) predicts hit rate and reaction time in low prevalence searches. This link was attributed to higher WMC individuals exhibiting a higher quitting threshold and increasing the probability of finding the target before terminating search in low prevalence search. These conclusions were limited based on the methods; without eye tracking, the researchers could not differentiate between an increase in accuracy due to fewer identification errors (failing to identify a fixated target), selection errors (failing to fixate a target), or a combination of both. Here, we measure WMC and correlate it with reaction time and accuracy in a visual search task. We replicate the finding that WMC predicts reaction time and hit rate. However, our analysis shows that it does so through both a reduction in selection and identification errors. The correlation between WMC and selection errors is attributable to increased quitting thresholds in those with high WMC. The correlation between WMC and identification errors is less clear, though potentially attributable to increased item inspection times in those with higher WMC. In addition, unlike Schwark and coworkers, we find that these WMC effects are fairly consistent across prevalence rates rather than being specific to low-prevalence searches.

  18. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik;

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two...... provide probabilistic predictions of wastewater discharge in a similarly reliable way, both for periods ranging from a few hours up to more than 1 week ahead of time. The EBD produces more accurate predictions on long horizons but relies on computationally heavy MCMC routines for parameter inferences......

  19. Predicting diagnostic error in radiology via eye-tracking and image analytics: Preliminary investigation in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie; Tourassi, Georgia D. [Biomedical Science and Engineering Center, Oak Ridge National Laboratory, Oak Ridge, Tennessee 37831 (United States); Pinto, Frank [School of Engineering, Science, and Technology, Virginia State University, Petersburg, Virginia 23806 (United States); Morin-Ducote, Garnetta; Hudson, Kathleen B. [Department of Radiology, University of Tennessee Medical Center at Knoxville, Knoxville, Tennessee 37920 (United States)

    2013-10-15

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists’ gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels.Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from four Radiology residents and two breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS images features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated.Results: Machine learning can be used to predict diagnostic error by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model [area under the ROC curve (AUC) = 0.792 ± 0.030]. Personalized user modeling was far more accurate for the more experienced readers (AUC = 0.837 ± 0.029) than for the less experienced ones (AUC = 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features.Conclusions: Diagnostic errors in mammography can be predicted to a good extent by leveraging the radiologists’ gaze behavior and image content.

  20. Predicting diagnostic error in Radiology via eye-tracking and image analytics: Application in mammography

    Energy Technology Data Exchange (ETDEWEB)

    Voisin, Sophie [ORNL; Pinto, Frank M [ORNL; Morin-Ducote, Garnetta [University of Tennessee, Knoxville (UTK); Hudson, Kathy [University of Tennessee, Knoxville (UTK); Tourassi, Georgia [ORNL

    2013-01-01

    Purpose: The primary aim of the present study was to test the feasibility of predicting diagnostic errors in mammography by merging radiologists' gaze behavior and image characteristics. A secondary aim was to investigate group-based and personalized predictive models for radiologists of variable experience levels. Methods: The study was performed for the clinical task of assessing the likelihood of malignancy of mammographic masses. Eye-tracking data and diagnostic decisions for 40 cases were acquired from 4 Radiology residents and 2 breast imaging experts as part of an IRB-approved pilot study. Gaze behavior features were extracted from the eye-tracking data. Computer-generated and BIRADS image features were extracted from the images. Finally, machine learning algorithms were used to merge gaze and image features for predicting human error. Feature selection was thoroughly explored to determine the relative contribution of the various features. Group-based and personalized user modeling was also investigated. Results: Diagnostic error can be predicted reliably by merging gaze behavior characteristics from the radiologist and textural characteristics from the image under review. Leveraging data collected from multiple readers produced a reasonable group model (AUC=0.79). Personalized user modeling was far more accurate for the more experienced readers (average AUC of 0.837 ± 0.029) than for the less experienced ones (average AUC of 0.667 ± 0.099). The best performing group-based and personalized predictive models involved combinations of both gaze and image features. Conclusions: Diagnostic errors in mammography can be predicted reliably by leveraging the radiologists' gaze behavior and image content.

  1. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    Science.gov (United States)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  2. Per-beam, planar IMRT QA passing rates do not predict clinically relevant patient dose errors

    Energy Technology Data Exchange (ETDEWEB)

    Nelms, Benjamin E.; Zhen Heming; Tome, Wolfgang A. [Canis Lupus LLC and Department of Human Oncology, University of Wisconsin, Merrimac, Wisconsin 53561 (United States); Department of Medical Physics, University of Wisconsin, Madison, Wisconsin 53705 (United States); Departments of Human Oncology, Medical Physics, and Biomedical Engineering, University of Wisconsin, Madison, Wisconsin 53792 (United States)

    2011-02-15

    Purpose: The purpose of this work is to determine the statistical correlation between per-beam, planar IMRT QA passing rates and several clinically relevant, anatomy-based dose errors for per-patient IMRT QA. The intent is to assess the predictive power of a common conventional IMRT QA performance metric, the Gamma passing rate per beam. Methods: Ninety-six unique data sets were created by inducing four types of dose errors in 24 clinical head and neck IMRT plans, each planned with 6 MV Varian 120-leaf MLC linear accelerators using a commercial treatment planning system and step-and-shoot delivery. The error-free beams/plans were used as "simulated measurements" (for generating the IMRT QA dose planes and the anatomy dose metrics) to compare to the corresponding data calculated by the error-induced plans. The degree of the induced errors was tuned to mimic IMRT QA passing rates that are commonly achieved using conventional methods. Results: Analysis of clinical metrics (parotid mean doses, spinal cord max and D1cc, CTV D95, and larynx mean) vs IMRT QA Gamma analysis (3%/3 mm, 2%/2 mm, 1%/1 mm) showed that in all cases, there were only weak to moderate correlations (range of Pearson's r-values: -0.295 to 0.653). Moreover, the moderate correlations actually had positive Pearson's r-values (i.e., clinically relevant metric differences increased with increasing IMRT QA passing rate), indicating that some of the largest anatomy-based dose differences occurred in the cases of high IMRT QA passing rates, which may be called "false negatives." The results also show numerous instances of false positives, or cases where low IMRT QA passing rates do not imply large errors in anatomy dose metrics. In none of the cases was there correlation consistent with high predictive power of planar IMRT passing rates, i.e., in none of the cases did high IMRT QA Gamma passing rates predict low errors in anatomy dose metrics or vice versa.
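
    The statistical test behind this record is a plain Pearson correlation between per-beam passing rates and anatomy-based dose-error metrics. A minimal sketch with hypothetical paired values (the numbers below are illustrative stand-ins, not the study's data):

    ```python
    from statistics import mean

    # Hypothetical per-beam gamma passing rates (%) and a clinically relevant
    # dose-error metric (e.g., % difference in parotid mean dose).
    passing_rate = [99.2, 97.5, 95.1, 98.8, 92.0, 96.3, 94.7, 90.5]
    dose_error = [1.1, 2.3, 0.8, 3.0, 1.9, 0.5, 2.6, 1.4]

    def pearson_r(x, y):
        """Pearson's correlation coefficient, computed from first principles."""
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    r = pearson_r(passing_rate, dose_error)
    print(f"Pearson r = {r:.3f}")
    ```

    A weak |r|, as the record reports, means the passing rate has little power to predict the anatomy dose-metric error.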

  3. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action.

    Science.gov (United States)

    Bissonette, Gregory B; Roesch, Matthew R

    2016-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum.

  4. Neurophysiology of Reward-Guided Behavior: Correlates Related to Predictions, Value, Motivation, Errors, Attention, and Action

    Science.gov (United States)

    Roesch, Matthew R.

    2017-01-01

    Many brain areas are activated by the possibility and receipt of reward. Are all of these brain areas reporting the same information about reward? Or are these signals related to other functions that accompany reward-guided learning and decision-making? Through carefully controlled behavioral studies, it has been shown that reward-related activity can represent reward expectations related to future outcomes, errors in those expectations, motivation, and signals related to goal- and habit-driven behaviors. These dissociations have been accomplished by manipulating the predictability of positively and negatively valued events. Here, we review single neuron recordings in behaving animals that have addressed this issue. We describe data showing that several brain areas, including orbitofrontal cortex, anterior cingulate, and basolateral amygdala signal reward prediction. In addition, anterior cingulate, basolateral amygdala, and dopamine neurons also signal errors in reward prediction, but in different ways. For these areas, we will describe how unexpected manipulations of positive and negative value can dissociate signed from unsigned reward prediction errors. All of these signals feed into striatum to modify signals that motivate behavior in ventral striatum and guide responding via associative encoding in dorsolateral striatum. PMID:26276036

  5. Effects of optimal initial errors on predicting the seasonal reduction of the upstream Kuroshio transport

    Science.gov (United States)

    Zhang, Kun; Wang, Qiang; Mu, Mu; Liang, Peng

    2016-10-01

    With the Regional Ocean Modeling System (ROMS), we realistically simulated the transport variations of the upstream Kuroshio (referring to the Kuroshio from its origin to the south of Taiwan), particularly for the seasonal transport reduction. Then, we investigated the effects of the optimal initial errors estimated by the conditional nonlinear optimal perturbation (CNOP) approach on predicting the seasonal transport reduction. Two transport reduction events (denoted as Event 1 and Event 2) were chosen, and CNOP1 and CNOP2 were obtained for each event. By examining the spatial structures of the two types of CNOPs, we found that the dominant amplitudes are located around (128°E, 17°N) horizontally and in the upper 1000 m vertically. For each event, the two CNOPs caused large prediction errors. Specifically, at the prediction time, CNOP1 (CNOP2) develops into an anticyclonic (cyclonic) eddy-like structure centered around 124°E, leading to the increase (decrease) of the upstream Kuroshio transport. By investigating the time evolution of the CNOPs in Event 1, we found that the eddy-like structures originating east of Luzon gradually grow and simultaneously propagate westward. The eddy-energetic analysis indicated that the errors obtain energy from the background state through barotropic and baroclinic instabilities and that the latter plays a more important role. These results suggest that improving the initial conditions east of Luzon could lead to better prediction of the upstream Kuroshio transport variation.

  6. Pupillary response predicts multiple object tracking load, error rate, and conscientiousness, but not inattentional blindness.

    Science.gov (United States)

    Wright, Timothy J; Boot, Walter R; Morgan, Chelsea S

    2013-09-01

    Research on inattentional blindness (IB) has uncovered few individual difference measures that predict failures to detect an unexpected event. Notably, no clear relationship exists between primary task performance and IB. This is perplexing as better task performance is typically associated with increased effort and should result in fewer spare resources to process the unexpected event. We utilized a psychophysiological measure of effort (pupillary response) to explore whether differences in effort devoted to the primary task (multiple object tracking) are related to IB. Pupillary response was sensitive to tracking load and differences in primary task error rates. Furthermore, pupillary response was a better predictor of conscientiousness than primary task errors; errors were uncorrelated with conscientiousness. Despite being sensitive to task load, individual differences in performance and conscientiousness, pupillary response did not distinguish between those who noticed the unexpected event and those who did not. Results provide converging evidence that effort and primary task engagement may be unrelated to IB.

  7. Influence of precision of emission characteristic parameters on model prediction error of VOCs/formaldehyde from dry building material.

    Directory of Open Access Journals (Sweden)

    Wenjuan Wei

    Full Text Available Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C 0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C 0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C 0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.

  8. Influence of precision of emission characteristic parameters on model prediction error of VOCs/formaldehyde from dry building material.

    Science.gov (United States)

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C 0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C 0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C 0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.

  9. A machine learning approach to the accurate prediction of multi-leaf collimator positional errors

    Science.gov (United States)

    Carlson, Joel N. K.; Park, Jong Min; Park, So-Yeon; In Park, Jong; Choi, Yunseok; Ye, Sung-Joon

    2016-03-01

    Discrepancies between planned and delivered movements of multi-leaf collimators (MLCs) are an important source of errors in dose distributions during radiotherapy. In this work we used machine learning techniques to train models to predict these discrepancies, assessed the accuracy of the model predictions, and examined the impact these errors have on quality assurance (QA) procedures and dosimetry. Predictive leaf motion parameters for the models were calculated from the plan files, such as leaf position and velocity, whether the leaf was moving towards or away from the isocenter of the MLC, and many others. Differences in positions between synchronized DICOM-RT planning files and DynaLog files reported during QA delivery were used as a target response for training of the models. The final model is capable of predicting MLC positions during delivery to a high degree of accuracy. For moving MLC leaves, predicted positions were shown to be significantly closer to delivered positions than were planned positions. By incorporating predicted positions into dose calculations in the TPS, increases were shown in gamma passing rates against measured dose distributions recorded during QA delivery. For instance, head and neck plans with 1%/2 mm gamma criteria had an average increase in passing rate of 4.17% (SD = 1.54%). This indicates that the inclusion of predictions during dose calculation leads to a more realistic representation of plan delivery. To assess impact on the patient, dose volume histograms (DVHs) using delivered positions were calculated for comparison with planned and predicted DVHs. In all cases, predicted dose volumetric parameters were in closer agreement with the delivered parameters than were the planned parameters, particularly for organs at risk on the periphery of the treatment area. By incorporating the predicted positions into the TPS, the treatment planner is given a more realistic view of the dose distribution as it will truly be delivered.

  10. Error-rate estimation in discriminant analysis of non-linear longitudinal data: A comparison of resampling methods.

    Science.gov (United States)

    de la Cruz, Rolando; Fuentes, Claudio; Meza, Cristian; Núñez-Antón, Vicente

    2016-07-08

    Consider longitudinal observations across different subjects such that the underlying distribution is determined by a non-linear mixed-effects model. In this context, we look at the misclassification error rate for allocating future subjects using cross-validation, bootstrap algorithms (parametric bootstrap, leave-one-out, .632 and [Formula: see text]), and bootstrap cross-validation (which combines the first two approaches), and conduct a numerical study to compare the performance of the different methods. The simulation and comparisons in this study are motivated by real observations from a pregnancy study in which one of the main objectives is to predict normal versus abnormal pregnancy outcomes based on information gathered at early stages. Since in this type of studies it is not uncommon to have insufficient data to simultaneously solve the classification problem and estimate the misclassification error rate, we put special attention to situations when only a small sample size is available. We discuss how the misclassification error rate estimates may be affected by the sample size in terms of variability and bias, and examine conditions under which the misclassification error rate estimates perform reasonably well.
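
    The leave-one-out scheme this record compares can be sketched in a few lines. The nearest-class-mean classifier and synthetic one-dimensional data below are illustrative stand-ins, not the study's non-linear mixed-effects model:

    ```python
    import random

    random.seed(0)
    # Two synthetic classes (e.g., normal vs abnormal outcome), 20 subjects each.
    data = [(random.gauss(0.0, 1.0), 0) for _ in range(20)] + \
           [(random.gauss(2.0, 1.0), 1) for _ in range(20)]

    def classify(x, train):
        """Assign x to the class whose training mean is nearest."""
        means = {}
        for label in (0, 1):
            vals = [v for v, lab in train if lab == label]
            means[label] = sum(vals) / len(vals)
        return min(means, key=lambda lab: abs(x - means[lab]))

    # Leave-one-out: hold out each subject in turn, train on the rest,
    # and count misclassifications of the held-out subject.
    errors = sum(
        classify(x, data[:i] + data[i + 1:]) != label
        for i, (x, label) in enumerate(data)
    )
    loo_error = errors / len(data)
    print(f"leave-one-out misclassification error: {loo_error:.3f}")
    ```

    The same loop structure carries over to the bootstrap variants: only the resampling of the training set changes.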

  11. Mindfulness meditation modulates reward prediction errors in the striatum in a passive conditioning task

    Directory of Open Access Journals (Sweden)

    Ulrich eKirk

    2015-02-01

    Full Text Available Reinforcement learning models have demonstrated that phasic activity of dopamine neurons during reward expectation encodes information about the predictability of rewards and cues that predict reward. Evidence indicates that mindfulness-based approaches reduce the reward anticipation signal in the striatum to negative and positive incentives, suggesting the hypothesis that such training influences basic reward processing. Using a passive conditioning task and fMRI in a group of experienced mindfulness meditators and age-matched controls, we tested the hypothesis that mindfulness meditation influences reward and reward prediction error signals. We found diminished positive and negative prediction error-related blood-oxygen level-dependent (BOLD) responses in the putamen in meditators compared with controls. In the meditators, this decrease in striatal BOLD responses to reward prediction was paralleled by increased activity in the posterior insula, a primary interoceptive region. Critically, responses in the putamen during early trials of the conditioning procedure (run 1) were elevated in both meditators and controls. These results provide evidence that experienced mindfulness meditators show attenuated reward prediction signals to valenced stimuli, which may be related to interoceptive processes encoded in the posterior insula.

  12. Integrating a calibrated groundwater flow model with error-correcting data-driven models to improve predictions

    Science.gov (United States)

    Demissie, Yonas K.; Valocchi, Albert J.; Minsker, Barbara S.; Bailey, Barbara A.

    2009-01-01

    Physically-based groundwater models (PBMs), such as MODFLOW, contain numerous parameters which are usually estimated using statistically-based methods, which assume that the underlying error is white noise. However, because of the practical difficulties of representing all the natural subsurface complexity, numerical simulations are often prone to large uncertainties that can result in both random and systematic model error. The systematic errors can be attributed to conceptual, parameter, and measurement uncertainty, and most often it can be difficult to determine their physical cause. In this paper, we have developed a framework to handle systematic error in physically-based groundwater flow model applications that uses error-correcting data-driven models (DDMs) in a complementary fashion. The data-driven models are separately developed to predict the MODFLOW head prediction errors, which were subsequently used to update the head predictions at existing and proposed observation wells. The framework is evaluated using a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study includes structural, parameter, and measurement uncertainties. In terms of bias and prediction uncertainty range, the complementary modeling framework has shown substantial improvements (up to 64% reduction in RMSE and prediction error ranges) over the original MODFLOW model, in both the calibration and the verification periods. Moreover, the spatial and temporal correlations of the prediction errors are significantly reduced, thus resulting in reduced local biases and structures in the model prediction errors.
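
    The complementary idea in this record can be sketched with a toy stand-in: fit a simple data-driven model to the physical model's residual errors, then add the predicted error back to correct the physical prediction. All data below are synthetic, not MODFLOW output:

    ```python
    import random

    random.seed(2)
    x = [i / 10.0 for i in range(50)]  # e.g., distance along a transect

    # "Observed" heads: physical trend + a systematic bias the physical
    # model misses (0.2 * x) + random measurement noise.
    obs = [10.0 - 0.5 * xi + 0.2 * xi + random.gauss(0.0, 0.05) for xi in x]
    pbm = [10.0 - 0.5 * xi for xi in x]        # physically-based model prediction
    resid = [o - p for o, p in zip(obs, pbm)]  # systematic + random error

    # Data-driven model: ordinary least-squares line fit to the residuals.
    n = len(x)
    mx, mr = sum(x) / n, sum(resid) / n
    slope = sum((xi - mx) * (ri - mr) for xi, ri in zip(x, resid)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = mr - slope * mx

    # Corrected prediction = physical prediction + predicted error.
    corrected = [p + intercept + slope * xi for p, xi in zip(pbm, x)]

    rmse = lambda a, b: (sum((u - v) ** 2 for u, v in zip(a, b)) / n) ** 0.5
    print(f"RMSE before: {rmse(obs, pbm):.3f}, after: {rmse(obs, corrected):.3f}")
    ```

    The paper's DDMs are of course richer than a line fit, but the division of labor is the same: the physical model carries the physics, the data-driven model absorbs the systematic residual.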

  13. Prediction of XRF analyzers error for elements on-line assaying using Kalman Filter

    Institute of Scientific and Technical Information of China (English)

    Nakhaei F; Sam A; Mosavi MR; Nakhaei A

    2012-01-01

    Determination of chemical element assays plays an important role in mineral processing operations. This factor is used to control process accuracy, recovery calculation, and plant profitability. The newer assaying methods, including chemical methods, X-ray fluorescence, and atomic absorption spectrometry, are advanced and accurate. However, in some applications, such as on-line assaying, higher accuracy is required. In this paper, an algorithm based on the Kalman Filter is presented to predict on-line XRF errors. This research was carried out on real industrial data collected to evaluate the performance of the presented algorithm. The measurements and analysis for this study were conducted at the Sarcheshmeh Copper Concentrator Plant located in Iran. The quality of the obtained results was very satisfactory: the RMS errors of prediction obtained for Cu and Mo grade assaying errors were less than 0.039 and 0.002 in rougher feed, and less than 0.58 and 0.074 in the final flotation concentrate, respectively. The results indicate that the method is quite effective in reducing on-line XRF measurement errors.
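
    A minimal scalar Kalman filter in the spirit of the approach described above, with a random-walk state model: recursively estimate a slowly drifting assay error from noisy measurements. All noise variances and data are synthetic placeholders, not the Sarcheshmeh plant measurements:

    ```python
    import random

    random.seed(1)
    q, r = 1e-6, 0.05 ** 2   # assumed process and measurement noise variances
    x_hat, p = 0.0, 1.0      # initial state estimate and its variance

    true_error = 0.02        # hidden bias of the (synthetic) XRF analyzer
    estimates = []
    for _ in range(200):
        z = true_error + random.gauss(0.0, 0.05)  # noisy assay-error measurement
        # Predict step (random walk: state carries over, variance grows).
        p += q
        # Update step.
        k = p / (p + r)              # Kalman gain
        x_hat += k * (z - x_hat)     # correct estimate with the innovation
        p *= (1.0 - k)
        estimates.append(x_hat)

    print(f"final error estimate: {estimates[-1]:.4f}")
    ```

    The estimate settles near the hidden bias while individual measurements scatter with standard deviation 0.05, which is the sense in which the filter "reduces" the on-line measurement error.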

  14. Light induced fluorescence for predicting API content in tablets: sampling and error.

    Science.gov (United States)

    Domike, Reuben; Ngai, Samuel; Cooney, Charles L

    2010-05-31

    The use of a light induced fluorescence (LIF) instrument to estimate the total content of fluorescent active pharmaceutical ingredient in a tablet from surface sampling was demonstrated. Different LIF sampling strategies were compared to a total tablet ultraviolet (UV) absorbance test for each tablet. Testing was completed on tablets with triamterene as the active ingredient and on tablets with caffeine as the active ingredient, each with a range of concentrations. The LIF instrument estimated the active ingredient content to within 10% of the total tablet test more than 95% of the time. The largest error amongst all of the tablets tested was 13%. The RMSEP between the techniques was in the range of 4.4-7.9%. A theory of the error associated with the surface sampling was developed and found to accurately predict the experimental error. This theory uses one empirically determined parameter: the deviation of estimations at different locations on the tablet surface. As this empirical parameter can be found rapidly, correct use of this prediction of error may reduce the effort required for calibration and validation studies of non-destructive surface measurement techniques, and thereby rapidly determine appropriate analytical techniques for estimating content uniformity in tablets.
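
    The one-parameter error theory described above is consistent with a standard-error-of-the-mean calculation: the deviation of readings across surface locations predicts the error of their average. A sketch with hypothetical readings (the numbers are illustrative, not the study's data):

    ```python
    from statistics import mean, stdev

    # Hypothetical LIF readings at k locations on one tablet surface,
    # expressed as % of label claim.
    readings = [98.1, 102.4, 99.7, 101.2, 100.5, 97.9]

    k = len(readings)
    s = stdev(readings)              # the one empirical parameter: location-to-location deviation
    predicted_error = s / k ** 0.5   # predicted error of the k-spot average

    print(f"content estimate = {mean(readings):.1f}%, predicted error = {predicted_error:.2f}%")
    ```

    Because s can be measured quickly on a few tablets, the expected accuracy of any sampling strategy (any k) follows without a full validation campaign.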

  15. Neural correlates of error monitoring in adolescents prospectively predict initiation of tobacco use

    Directory of Open Access Journals (Sweden)

    Andrey P. Anokhin

    2015-12-01

    Full Text Available Deficits in self-regulation of behavior can play an important role in the initiation of substance use and progression to regular use and dependence. One of the distinct component processes of self-regulation is error monitoring, i.e. detection of a conflict between the intended and actually executed action. Here we examined whether a neural marker of error monitoring, the Error-Related Negativity (ERN), predicts future initiation of tobacco use. ERN was assessed in a prospective longitudinal sample at ages 12, 14, and 16 using a flanker task. ERN amplitude showed a significant increase with age during adolescence. Reduced ERN amplitude at ages 14 and 16, as well as a slower rate of its developmental changes, significantly predicted initiation of tobacco use by age 18, but not transition to regular tobacco use or initiation of marijuana and alcohol use. The present results suggest that attenuated development of the neural mechanisms of error monitoring during adolescence can increase the risk for initiation of tobacco use. They also suggest that the role of distinct neurocognitive component processes involved in behavioral regulation may be limited to specific stages of addiction.

  16. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-01-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  17. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steven B.

    2013-07-23

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek

  18. Effects of error covariance structure on estimation of model averaging weights and predictive performance

    Science.gov (United States)

    Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.

    2013-09-01

    When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, Cɛ, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek
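
    The model averaging weights discussed in these records are computed from information-criterion differences. The sketch below (with illustrative AIC values, not the study's) shows the mechanism: weights scale as exp(-ΔIC/2), so even a modest gap pushes most of the weight onto the best model, which is why an understated error covariance can collapse the weights to nearly 100%:

    ```python
    import math

    # Hypothetical AIC values for three alternative conceptual models.
    aic = {"model_A": 102.4, "model_B": 103.1, "model_C": 110.9}

    aic_min = min(aic.values())
    # w_k is proportional to exp(-delta_AIC_k / 2): small gaps give balanced
    # weights, large gaps concentrate nearly all weight on the best model.
    raw = {k: math.exp(-(v - aic_min) / 2.0) for k, v in aic.items()}
    total = sum(raw.values())
    weights = {k: v / total for k, v in raw.items()}

    for name, w in weights.items():
        print(f"{name}: weight = {w:.3f}")
    ```

    Inflating the assumed error covariance shrinks the log-likelihood differences between models, and hence the ΔIC gaps, spreading the weights more evenly, which is the effect the iterative two-stage method achieves in a principled way.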

  19. Mediofrontal event-related potentials in response to positive, negative and unsigned prediction errors.

    Science.gov (United States)

    Sambrook, Thomas D; Goslin, Jeremy

    2014-08-01

    Reinforcement learning models make use of reward prediction errors (RPEs), the difference between an expected and obtained reward. There is evidence that the brain computes RPEs, but an outstanding question is whether positive RPEs ("better than expected") and negative RPEs ("worse than expected") are represented in a single integrated system. An electrophysiological component, the feedback-related negativity, has been claimed to encode an RPE, but its relative sensitivity to the utility of positive and negative RPEs remains unclear. This study explored the question by varying the utility of positive and negative RPEs in a design that controlled for other closely related properties of feedback and could distinguish utility from salience. It revealed a mediofrontal sensitivity to utility, for positive RPEs at 275-310 ms and for negative RPEs at 310-390 ms. These effects were preceded and succeeded by a response consistent with an unsigned prediction error, or "salience" coding.
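
    The signed RPE this record discusses is the delta term of standard reinforcement learning updates; a toy Rescorla-Wagner sketch (the learning rate and outcome sequence are illustrative):

    ```python
    alpha = 0.1   # learning rate (illustrative)
    V = 0.0       # expected reward for a cue

    rewards = [1, 1, 1, 0, 1, 1, 0, 1]  # synthetic outcome sequence
    deltas = []
    for r in rewards:
        delta = r - V         # signed RPE: positive = better than expected
        V += alpha * delta    # value update driven by the RPE
        deltas.append(delta)  # unsigned ("salience") PE would be abs(delta)

    print(f"final value estimate: {V:.3f}")
    ```

    A single integrated system would carry delta itself; separate systems for positive and negative RPEs correspond to coding max(delta, 0) and max(-delta, 0) independently.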

  20. Delusions and prediction error: clarifying the roles of behavioural and brain responses.

    Science.gov (United States)

    Corlett, Philip Robert; Fletcher, Paul Charles

    2015-01-01

    Griffiths and colleagues provided a clear and thoughtful review of the prediction error model of delusion formation [Cognitive Neuropsychiatry, 2014 April 4 (Epub ahead of print)]. As well as reviewing the central ideas and concluding that the existing evidence base is broadly supportive of the model, they provide a detailed critique of some of the experiments that we have performed to study it. Though they conclude that the shortcomings that they identify in these experiments do not fundamentally challenge the prediction error model, we nevertheless respond to these criticisms. We begin by providing a more detailed outline of the model itself as there are certain important aspects of it that were not covered in their review. We then respond to their specific criticisms of the empirical evidence. We defend the neuroimaging contrasts that we used to explore this model of psychosis arguing that, while any single contrast entails some ambiguity, our assumptions have been justified by our extensive background work before and since.

  1. The balanced mind: the variability of task-unrelated thoughts predicts error-monitoring

    Directory of Open Access Journals (Sweden)

    Micah eAllen

    2013-11-01

    Full Text Available Self-generated thoughts unrelated to ongoing activities, also known as ‘mind-wandering’, make up a substantial portion of our daily lives. Reports of such task-unrelated thoughts (TUTs) predict both poor performance on demanding cognitive tasks and blood-oxygen-level-dependent (BOLD) activity in the default mode network (DMN). However, recent findings suggest that TUTs and the DMN can also facilitate metacognitive abilities and related behaviors. To further understand these relationships, we examined the influence of subjective intensity, ruminative quality, and variability of mind-wandering on response inhibition and monitoring, using the Error Awareness Task (EAT). We expected to replicate links between TUT and reduced inhibition, and explored whether variance in TUT would predict improved error monitoring, reflecting a capacity to balance between internal and external cognition. By analyzing BOLD responses to subjective probes and the EAT, we dissociated contributions of the DMN, executive, and salience networks to task performance. While both response inhibition and online TUT ratings modulated BOLD activity in the medial prefrontal cortex (mPFC) of the DMN, the former recruited a more dorsal area implying functional segregation. We further found that individual differences in mean TUTs strongly predicted EAT stop accuracy, while TUT variability specifically predicted levels of error awareness. Interestingly, we also observed co-activation of salience and default mode regions during error awareness, supporting a link between monitoring and TUTs. Altogether our results suggest that although TUT is detrimental to task performance, fluctuations in attention between self-generated and external task-related thought are characteristic of individuals with greater metacognitive monitoring capacity. Achieving a balance between internally and externally oriented thought may thus allow individuals to optimize their task performance.

  2. Identification of Nonlinear Rational Systems Using A Prediction-Error Estimation Algorithm

    OpenAIRE

    1987-01-01

    Identification of discrete-time nonlinear stochastic systems which can be represented by a rational input-output model is considered. A prediction-error parameter estimation algorithm is developed and a criterion is derived using results from the theory of hypothesis testing to determine the correct model structure. The identification of a simulated system and a heat exchanger are included to illustrate the algorithms.

  3. The balanced mind: the variability of task-unrelated thoughts predicts error monitoring.

    Science.gov (United States)

    Allen, Micah; Smallwood, Jonathan; Christensen, Joanna; Gramm, Daniel; Rasmussen, Beinta; Jensen, Christian Gaden; Roepstorff, Andreas; Lutz, Antoine

    2013-01-01

    Self-generated thoughts unrelated to ongoing activities, also known as "mind-wandering," make up a substantial portion of our daily lives. Reports of such task-unrelated thoughts (TUTs) predict both poor performance on demanding cognitive tasks and blood-oxygen-level-dependent (BOLD) activity in the default mode network (DMN). However, recent findings suggest that TUTs and the DMN can also facilitate metacognitive abilities and related behaviors. To further understand these relationships, we examined the influence of subjective intensity, ruminative quality, and variability of mind-wandering on response inhibition and monitoring, using the Error Awareness Task (EAT). We expected to replicate links between TUT and reduced inhibition, and explored whether variance in TUT would predict improved error monitoring, reflecting a capacity to balance between internal and external cognition. By analyzing BOLD responses to subjective probes and the EAT, we dissociated contributions of the DMN, executive, and salience networks to task performance. While both response inhibition and online TUT ratings modulated BOLD activity in the medial prefrontal cortex (mPFC) of the DMN, the former recruited a more dorsal area implying functional segregation. We further found that individual differences in mean TUTs strongly predicted EAT stop accuracy, while TUT variability specifically predicted levels of error awareness. Interestingly, we also observed co-activation of salience and default mode regions during error awareness, supporting a link between monitoring and TUTs. Altogether our results suggest that although TUT is detrimental to task performance, fluctuations in attention between self-generated and external task-related thought is a characteristic of individuals with greater metacognitive monitoring capacity. Achieving a balance between internally and externally oriented thought may thus aid individuals in optimizing their task performance.

  4. The balanced mind: the variability of task-unrelated thoughts predicts error monitoring

    Science.gov (United States)

    Allen, Micah; Smallwood, Jonathan; Christensen, Joanna; Gramm, Daniel; Rasmussen, Beinta; Jensen, Christian Gaden; Roepstorff, Andreas; Lutz, Antoine

    2013-01-01

    Self-generated thoughts unrelated to ongoing activities, also known as “mind-wandering,” make up a substantial portion of our daily lives. Reports of such task-unrelated thoughts (TUTs) predict both poor performance on demanding cognitive tasks and blood-oxygen-level-dependent (BOLD) activity in the default mode network (DMN). However, recent findings suggest that TUTs and the DMN can also facilitate metacognitive abilities and related behaviors. To further understand these relationships, we examined the influence of subjective intensity, ruminative quality, and variability of mind-wandering on response inhibition and monitoring, using the Error Awareness Task (EAT). We expected to replicate links between TUT and reduced inhibition, and explored whether variance in TUT would predict improved error monitoring, reflecting a capacity to balance between internal and external cognition. By analyzing BOLD responses to subjective probes and the EAT, we dissociated contributions of the DMN, executive, and salience networks to task performance. While both response inhibition and online TUT ratings modulated BOLD activity in the medial prefrontal cortex (mPFC) of the DMN, the former recruited a more dorsal area implying functional segregation. We further found that individual differences in mean TUTs strongly predicted EAT stop accuracy, while TUT variability specifically predicted levels of error awareness. Interestingly, we also observed co-activation of salience and default mode regions during error awareness, supporting a link between monitoring and TUTs. Altogether our results suggest that although TUT is detrimental to task performance, fluctuations in attention between self-generated and external task-related thought is a characteristic of individuals with greater metacognitive monitoring capacity. Achieving a balance between internally and externally oriented thought may thus aid individuals in optimizing their task performance. PMID:24223545

  5. A Bayesian joint probability post-processor for reducing errors and quantifying uncertainty in monthly streamflow predictions

    OpenAIRE

    P. Pokhrel; Robertson, D E; Q. J. Wang

    2013-01-01

    Hydrologic model predictions are often biased and subject to heteroscedastic errors originating from various sources including data, model structure and parameter calibration. Statistical post-processors are applied to reduce such errors and quantify uncertainty in the predictions. In this study, we investigate the use of a statistical post-processor based on the Bayesian joint probability (BJP) modelling approach to reduce errors and quantify uncertainty in streamflow predictions…

  6. The sensorimotor system minimizes prediction error for object lifting when the object's weight is uncertain.

    Science.gov (United States)

    Brooks, Jack; Thaler, Anne

    2017-08-01

    A reliable mechanism to predict the heaviness of an object is important for manipulating an object under environmental uncertainty. Recently, Cashaback et al. (Cashaback JGA, McGregor HR, Pun HCH, Buckingham G, Gribble PL. J Neurophysiol 117: 260-274, 2017) showed that for object lifting the sensorimotor system uses a strategy that minimizes prediction error when the object's weight is uncertain. Previous research demonstrates that visually guided reaching is similarly optimized. Although this suggests a unified strategy of the sensorimotor system for object manipulation, the selected strategy appears to be task dependent and subject to change in response to the degree of environmental uncertainty. Copyright © 2017 the American Physiological Society.

  7. Using a mesoscale ensemble to predict forecast error and perform targeted observation

    Institute of Scientific and Technical Information of China (English)

    DU Jun; YU Rucong; CUI Chunguang; LI Jun

    2014-01-01

    Using the NCEP short-range ensemble forecast (SREF) system, this work demonstrates two fundamental ongoing evolutions in numerical weather prediction (NWP) enabled by ensemble methodology. One is the shift from the traditional single-value deterministic forecast to a flow-dependent (not statistical) probabilistic forecast that addresses forecast uncertainty. The other is the shift from a one-way observation-prediction system to an interactive two-way observation-prediction system that increases the predictability of a weather system. In the first part, the ability of ensemble spread from NCEP SREF to predict the ensemble-mean forecast error was evaluated over a period of about a month. The results show that the 21-member NCEP SREF predicts forecast error at a level similar to, or even higher than, that of current state-of-the-art NWP models in predicting precipitation; e.g., the spatial correlation between ensemble spread and absolute forecast error reaches 0.5 or higher at 87 h (3.5 d) lead time on average for some meteorological variables. This demonstrates that the current operational ensemble system already has a preliminary capability of predicting forecast error with usable skill, which is a remarkable achievement as of today. Given the good spread-skill relation, the probability derived from the ensemble was also statistically reliable, which is the most important feature a useful probabilistic forecast should have. The second part of this research tested an ensemble-based interactive targeting (E-BIT) method. Unlike mathematically calculated objective approaches, this method is subjective, or human-interactive, based on information from an ensemble of forecasts. A numerical simulation study of eight real atmospheric cases, using a 10-member, bred-vector-based mesoscale ensemble built on the NCEP regional spectral model (RSM, a sub-component of NCEP SREF), was performed to prove the concept of the E-BIT method.
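    The spread-skill relation evaluated in this record is the correlation, across verification points, between ensemble spread and the absolute error of the ensemble mean. A simplified illustration (not the NCEP verification code; the function names are ours):

    ```python
    import math

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
        sy = math.sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    def spread_skill(members_per_point, observations):
        """Correlate ensemble spread (stddev across members) with the
        absolute error of the ensemble mean, point by point."""
        spreads, errors = [], []
        for members, obs in zip(members_per_point, observations):
            n = len(members)
            mean = sum(members) / n
            var = sum((m - mean) ** 2 for m in members) / n
            spreads.append(math.sqrt(var))
            errors.append(abs(mean - obs))
        return pearson(spreads, errors)
    ```

    A high value of this correlation (0.5 or more, in the record's evaluation) indicates that large spread flags points where the ensemble mean is likely to be wrong, i.e. the ensemble predicts its own error.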

  8. Belief about nicotine selectively modulates value and reward prediction error signals in smokers

    Science.gov (United States)

    Gu, Xiaosi; Lohrenz, Terry; Salas, Ramiro; Baldwin, Philip R.; Soltani, Alireza; Kirk, Ulrich; Cinciripini, Paul M.; Montague, P. Read

    2015-01-01

    Little is known about how prior beliefs impact biophysically described processes in the presence of neuroactive drugs, which presents a profound challenge to the understanding of the mechanisms and treatments of addiction. We engineered smokers’ prior beliefs about the presence of nicotine in a cigarette smoked before a functional magnetic resonance imaging session where subjects carried out a sequential choice task. Using a model-based approach, we show that smokers’ beliefs about nicotine specifically modulated learning signals (value and reward prediction error) defined by a computational model of mesolimbic dopamine systems. Belief of “no nicotine in cigarette” (compared with “nicotine in cigarette”) strongly diminished neural responses in the striatum to value and reward prediction errors and reduced the impact of both on smokers’ choices. These effects of belief could not be explained by global changes in visual attention and were specific to value and reward prediction errors. Thus, by modulating the expression of computationally explicit signals important for valuation and choice, beliefs can override the physical presence of a potent neuroactive compound like nicotine. These selective effects of belief demonstrate that belief can modulate model-based parameters important for learning. The implications of these findings may be far ranging because belief-dependent effects on learning signals could impact a host of other behaviors in addiction as well as in other mental health problems. PMID:25605923

  9. Efficient reversible watermarking based on adaptive prediction-error expansion and pixel selection.

    Science.gov (United States)

    Li, Xiaolong; Yang, Bin; Zeng, Tieyong

    2011-12-01

    Prediction-error expansion (PEE) is an important technique of reversible watermarking which can embed large payloads into digital images with low distortion. In this paper, the PEE technique is further investigated and an efficient reversible watermarking scheme is proposed, by incorporating in PEE two new strategies, namely, adaptive embedding and pixel selection. Unlike conventional PEE which embeds data uniformly, we propose to adaptively embed 1 or 2 bits into expandable pixel according to the local complexity. This avoids expanding pixels with large prediction-errors, and thus, it reduces embedding impact by decreasing the maximum modification to pixel values. Meanwhile, adaptive PEE allows very large payload in a single embedding pass, and it improves the capacity limit of conventional PEE. We also propose to select pixels of smooth area for data embedding and leave rough pixels unchanged. In this way, compared with conventional PEE, a more sharply distributed prediction-error histogram is obtained and a better visual quality of watermarked image is observed. With these improvements, our method outperforms conventional PEE. Its superiority over other state-of-the-art methods is also demonstrated experimentally.
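    The core expansion step of PEE is compact: a prediction error e whose magnitude is below a threshold is expanded to 2e + b to carry one payload bit b, and the mapping is exactly invertible. A simplified sketch of that step (it omits the shifting of non-expandable pixels and the adaptive 1-or-2-bit embedding that the paper adds; function names are ours):

    ```python
    def pee_embed(pixel, predicted, bit, threshold=2):
        """Embed one bit by prediction-error expansion: if |e| < threshold
        the pixel is expandable and becomes predicted + 2*e + bit."""
        e = pixel - predicted
        if abs(e) < threshold:
            return predicted + 2 * e + bit, True
        return pixel, False  # non-expandable (real PEE shifts these)

    def pee_extract(marked, predicted):
        """Invert the embedding: e' = 2*e + bit, so bit = e' % 2
        and the original error is e = (e' - bit) // 2."""
        e2 = marked - predicted
        bit = e2 % 2
        return predicted + (e2 - bit) // 2, bit

    marked, ok = pee_embed(101, 100, 1)
    print(marked, pee_extract(marked, 100))  # 103 (101, 1)
    ```

    Small errors (smooth regions) change by little when doubled, which is why the paper's pixel-selection strategy, embedding only where the prediction-error histogram is sharp, lowers distortion.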

  10. Ventral striatal prediction error signaling is associated with dopamine synthesis capacity and fluid intelligence.

    Science.gov (United States)

    Schlagenhauf, Florian; Rapp, Michael A; Huys, Quentin J M; Beck, Anne; Wüstenberg, Torsten; Deserno, Lorenz; Buchholz, Hans-Georg; Kalbitzer, Jan; Buchert, Ralph; Bauer, Michael; Kienast, Thorsten; Cumming, Paul; Plotkin, Michail; Kumakura, Yoshitaka; Grace, Anthony A; Dolan, Raymond J; Heinz, Andreas

    2013-06-01

    Fluid intelligence represents the capacity for flexible problem solving and rapid behavioral adaptation. Rewards drive flexible behavioral adaptation, in part via a teaching signal expressed as reward prediction errors in the ventral striatum, which has been associated with phasic dopamine release in animal studies. We examined a sample of 28 healthy male adults using multimodal imaging and biological parametric mapping with (1) functional magnetic resonance imaging during a reversal learning task and (2) in a subsample of 17 subjects also with positron emission tomography using 6-[18F]fluoro-L-DOPA to assess dopamine synthesis capacity. Fluid intelligence was measured using a battery of nine standard neuropsychological tests. Ventral striatal BOLD correlates of reward prediction errors were positively correlated with fluid intelligence and, in the right ventral striatum, also inversely correlated with dopamine synthesis capacity (FDOPA K_in^app). When exploring aspects of fluid intelligence, we observed that prediction error signaling correlates with complex attention and reasoning. These findings indicate that individual differences in the capacity for flexible problem solving relate to ventral striatal activation during reward-related learning, which in turn proved to be inversely associated with ventral striatal dopamine synthesis capacity.

  11. Belief about nicotine selectively modulates value and reward prediction error signals in smokers.

    Science.gov (United States)

    Gu, Xiaosi; Lohrenz, Terry; Salas, Ramiro; Baldwin, Philip R; Soltani, Alireza; Kirk, Ulrich; Cinciripini, Paul M; Montague, P Read

    2015-02-24

    Little is known about how prior beliefs impact biophysically described processes in the presence of neuroactive drugs, which presents a profound challenge to the understanding of the mechanisms and treatments of addiction. We engineered smokers' prior beliefs about the presence of nicotine in a cigarette smoked before a functional magnetic resonance imaging session where subjects carried out a sequential choice task. Using a model-based approach, we show that smokers' beliefs about nicotine specifically modulated learning signals (value and reward prediction error) defined by a computational model of mesolimbic dopamine systems. Belief of "no nicotine in cigarette" (compared with "nicotine in cigarette") strongly diminished neural responses in the striatum to value and reward prediction errors and reduced the impact of both on smokers' choices. These effects of belief could not be explained by global changes in visual attention and were specific to value and reward prediction errors. Thus, by modulating the expression of computationally explicit signals important for valuation and choice, beliefs can override the physical presence of a potent neuroactive compound like nicotine. These selective effects of belief demonstrate that belief can modulate model-based parameters important for learning. The implications of these findings may be far ranging because belief-dependent effects on learning signals could impact a host of other behaviors in addiction as well as in other mental health problems.

  12. Effects of rapid eye movement sleep deprivation on fear extinction recall and prediction error signaling.

    Science.gov (United States)

    Spoormaker, Victor I; Schröter, Manuel S; Andrade, Kátia C; Dresler, Martin; Kiem, Sara A; Goya-Maldonado, Roberto; Wetter, Thomas C; Holsboer, Florian; Sämann, Philipp G; Czisch, Michael

    2012-10-01

    In a temporal difference learning approach of classical conditioning, a theoretical error signal shifts from outcome deliverance to the onset of the conditioned stimulus. Omission of an expected outcome results in a negative prediction error signal, which is the initial step towards successful extinction and may therefore be relevant for fear extinction recall. As studies in rodents have observed a bidirectional relationship between fear extinction and rapid eye movement (REM) sleep, we aimed to test the hypothesis that REM sleep deprivation impairs recall of fear extinction through prediction error signaling in humans. In a three-day design with polysomnographically controlled REM sleep deprivation, 18 young, healthy subjects performed a fear conditioning, extinction and recall of extinction task with visual stimuli, and mild electrical shocks during combined functional magnetic resonance imaging (fMRI) and skin conductance response (SCR) measurements. Compared to the control group, the REM sleep deprivation group had increased SCR scores to a previously extinguished stimulus at early recall of extinction trials, which was associated with an altered fMRI time-course in the left middle temporal gyrus. Post-hoc contrasts corrected for measures of NREM sleep variability also revealed between-group differences primarily in the temporal lobe. Our results demonstrate altered prediction error signaling during recall of fear extinction after REM sleep deprivation, which may further our understanding of anxiety disorders in which disturbed sleep and impaired fear extinction learning coincide. Moreover, our findings are indicative of REM sleep related plasticity in regions that also show an increase in activity during REM sleep.
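    The omission effect described in this record falls out of the standard temporal-difference delta rule; a minimal sketch (the values are illustrative, not from the study):

    ```python
    def td_error(reward, value_next, value_current, gamma=0.9):
        """Temporal-difference prediction error:
            delta = r + gamma * V(s') - V(s).
        Omitting an expected outcome (reward = 0 with a high V(s))
        yields a negative prediction error, the initial step towards
        extinction."""
        return reward + gamma * value_next - value_current

    # An expected shock (V(s) = 0.8) is omitted at a terminal state:
    print(td_error(reward=0.0, value_next=0.0, value_current=0.8))  # -0.8
    ```

    Repeatedly applying this negative error to update V(s) drives the expectation down, which is the computational account of extinction the record's fMRI contrasts are built on.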

  13. Prediction error and somatosensory insula activation in women recovered from anorexia nervosa.

    Science.gov (United States)

    Frank, Guido K W; Collier, Shaleise; Shott, Megan E; O'Reilly, Randall C

    2016-08-01

    Previous research in patients with anorexia nervosa showed heightened brain response during a taste reward conditioning task and heightened sensitivity to rewarding and punishing stimuli. Here we tested the hypothesis that individuals recovered from anorexia nervosa would also experience greater brain activation during this task as well as higher sensitivity to salient stimuli than controls. Women recovered from restricting-type anorexia nervosa and healthy control women underwent fMRI during application of a prediction error taste reward learning paradigm. Twenty-four women recovered from anorexia nervosa (mean age 30.3 ± 8.1 yr) and 24 control women (mean age 27.4 ± 6.3 yr) took part in this study. The recovered anorexia nervosa group showed greater left posterior insula activation for the prediction error model analysis than the control group (family-wise error- and small volume-corrected). Activation was also greater in women recovered from anorexia nervosa than in controls for unexpected stimulus omission, but not for unexpected receipt. Sensitivity to punishment was elevated in women recovered from anorexia nervosa. This was a cross-sectional study, and the sample size was modest. Anorexia nervosa after recovery is associated with heightened prediction error-related brain response in the posterior insula as well as greater response to unexpected reward stimulus omission. This finding, together with behaviourally increased sensitivity to punishment, could indicate that individuals recovered from anorexia nervosa are particularly responsive to punishment. The posterior insula processes somatosensory stimuli, including unexpected bodily states, and greater response could indicate altered perception or integration of unexpected or maybe unwanted bodily feelings. Whether those findings develop during the ill state or whether they are biological traits requires further study.

  14. Standard error of inverse prediction for dose-response relationship: approximate and exact statistical inference.

    Science.gov (United States)

    Demidenko, Eugene; Williams, Benjamin B; Flood, Ann Barry; Swartz, Harold M

    2013-05-30

    This paper develops a new metric, the standard error of inverse prediction (SEIP), for a dose-response relationship (calibration curve) when dose is estimated from response via inverse regression. SEIP can be viewed as a generalization of the coefficient of variation to the regression problem in which x is predicted from the y-value. We employ nonstandard statistical methods to treat the inverse prediction, which has an infinite mean and variance due to the presence of a normally distributed variable in the denominator. We develop confidence intervals and hypothesis testing for SEIP on the basis of the normal approximation and using the exact statistical inference based on the noncentral t-distribution. We derive the power functions for both approaches and test them via statistical simulations. The theoretical SEIP, as the ratio of the regression standard error to the slope, is viewed as the reciprocal of the signal-to-noise ratio, a popular measure in signal processing. The SEIP, as a figure of merit for inverse prediction, can be used for comparison of calibration curves with different dependent variables and slopes. We illustrate our theory with electron paramagnetic resonance tooth dosimetry for a rapid estimation of the radiation dose received in the event of nuclear terrorism.
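    The theoretical SEIP, the ratio of the regression standard error to the absolute slope, can be computed directly from calibration data; a sketch under the simple straight-line model (the function name is ours, and this covers only the point estimate, not the paper's confidence intervals):

    ```python
    import math

    def seip(xs, ys):
        """Standard error of inverse prediction for a straight-line
        calibration curve: residual standard error of the regression of
        y on x, divided by |slope| (the reciprocal signal-to-noise
        ratio described in the abstract)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        slope = sxy / sxx
        intercept = my - slope * mx
        rss = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
        resid_se = math.sqrt(rss / (n - 2))  # n - 2 df for slope + intercept
        return resid_se / abs(slope)
    ```

    A noise-free calibration line gives SEIP of zero; noisier data or a shallower slope both inflate it, which is what makes it usable for comparing calibration curves with different response variables.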

  15. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  16. Putting Reward in Art: A Tentative Prediction Error Account of Visual Art

    Directory of Open Access Journals (Sweden)

    Sander Van de Cruys

    2011-12-01

    Full Text Available The predictive coding model is increasingly and fruitfully used to explain a wide range of findings in perception. Here we discuss the potential of this model in explaining the mechanisms underlying aesthetic experiences. Traditionally art appreciation has been associated with concepts such as harmony, perceptual fluency, and the so-called good Gestalt. We observe that more often than not great artworks blatantly violate these characteristics. Using the concept of prediction error from the predictive coding approach, we attempt to resolve this contradiction. We argue that artists often destroy predictions that they have first carefully built up in their viewers, and thus highlight the importance of negative affect in aesthetic experience. However, the viewer often succeeds in recovering the predictable pattern, sometimes on a different level. The ensuing rewarding effect is derived from this transition from a state of uncertainty to a state of increased predictability. We illustrate our account with several example paintings and with a discussion of art movements and individual differences in preference. On a more fundamental level, our theorizing leads us to consider the affective implications of prediction confirmation and violation. We compare our proposal to other influential theories on aesthetics and explore its advantages and limitations.

  17. Error consciousness predicts physiological response to an acute psychosocial stressor in men.

    Science.gov (United States)

    Wu, Jianhui; Sun, Xiaofang; Wang, Li; Zhang, Liang; Fernández, Guillén; Yao, Zhuxi

    2017-09-01

    There are substantial individual differences in the response towards an acute stressor. The aim of the current study was to examine how neural activity after an error response during a non-stressful state prospectively predicts the magnitude of the physiological stress response (e.g., cortisol response and heart rate) and the negative affect elicited by a laboratory stress induction procedure in nonclinical participants. Thirty-seven healthy young male adults came to the laboratory for the baseline neurocognitive measurement on the first day, during which they performed a Go/Nogo task with their electroencephalogram recorded. On the second day, they came again to be tested on their stress response using an acute psychosocial stress procedure (i.e., the Trier Social Stress Test, the TSST). Results showed that the amplitude of error positivity (Pe) significantly predicted both the heart rate and cortisol response towards the TSST. Our results suggested that baseline cognitive neural activity reflecting error consciousness could be used as a biological predictor of physiological response to an acute psychological stressor in men. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Error estimates for density-functional theory predictions of surface energy and work function

    Science.gov (United States)

    De Waele, Sam; Lejaeghere, Kurt; Sluydts, Michael; Cottenier, Stefaan

    2016-12-01

    Density-functional theory (DFT) predictions of materials properties are becoming ever more widespread. With increased use comes the demand for estimates of the accuracy of DFT results. In view of the importance of reliable surface properties, this work calculates surface energies and work functions for a large and diverse test set of crystalline solids. They are compared to experimental values by performing a linear regression, which results in a measure of the predictable and material-specific error of the theoretical result. Two of the most prevalent functionals, the local density approximation (LDA) and the Perdew-Burke-Ernzerhof parametrization of the generalized gradient approximation (PBE-GGA), are evaluated and compared. Both LDA and GGA-PBE are found to yield accurate work functions with error bars below 0.3 eV, rivaling the experimental precision. LDA also provides satisfactory estimates for the surface energy with error bars smaller than 10%, but GGA-PBE significantly underestimates the surface energy for materials with a large correlation energy.

  19. Reassessing Domain Architecture Evolution of Metazoan Proteins: Major Impact of Gene Prediction Errors

    Directory of Open Access Journals (Sweden)

    László Patthy

    2011-07-01

    Full Text Available In view of the fact that appearance of novel protein domain architectures (DA) is closely associated with biological innovations, there is a growing interest in the genome-scale reconstruction of the evolutionary history of the domain architectures of multidomain proteins. In such analyses, however, it is usually ignored that a significant proportion of Metazoan sequences analyzed is mispredicted and that this may seriously affect the validity of the conclusions. To estimate the contribution of errors in gene prediction to differences in DA of predicted proteins, we have used the high quality manually curated UniProtKB/Swiss-Prot database as a reference. For genome-scale analysis of domain architectures of predicted proteins we focused on RefSeq, EnsEMBL and NCBI’s GNOMON predicted sequences of Metazoan species with completely sequenced genomes. Comparison of the DA of UniProtKB/Swiss-Prot sequences of worm, fly, zebrafish, frog, chick, mouse, rat and orangutan with those of human Swiss-Prot entries identified relatively few cases where orthologs had different DA, although the percentage with different DA increased with evolutionary distance. In contrast with this, comparison of the DA of human, orangutan, rat, mouse, chicken, frog, zebrafish, worm and fly RefSeq, EnsEMBL and NCBI’s GNOMON predicted protein sequences with those of the corresponding/orthologous human Swiss-Prot entries identified a significantly higher proportion of domain architecture differences than in the case of the comparison of Swiss-Prot entries. Analysis of RefSeq, EnsEMBL and NCBI’s GNOMON predicted protein sequences with DAs different from those of their Swiss-Prot orthologs confirmed that the higher rate of domain architecture differences is due to errors in gene prediction, the majority of which could be corrected with our FixPred protocol.

  20. Modeling workplace contact networks: The effects of organizational structure, architecture, and reporting errors on epidemic predictions.

    Science.gov (United States)

    Potter, Gail E; Smieszek, Timo; Sailer, Kerstin

    2015-09-01

    Face-to-face social contacts are potentially important transmission routes for acute respiratory infections, and understanding the contact network can improve our ability to predict, contain, and control epidemics. Although workplaces are important settings for infectious disease transmission, few studies have collected workplace contact data and estimated workplace contact networks. We use contact diaries, architectural distance measures, and institutional structures to estimate social contact networks within a Swiss research institute. Some contact reports were inconsistent, indicating reporting errors. We adjust for this with a latent variable model, jointly estimating the true (unobserved) network of contacts and duration-specific reporting probabilities. We find that contact probability decreases with distance, and that research group membership, role, and shared projects are strongly predictive of contact patterns. Estimated reporting probabilities were low only for 0-5 min contacts. Adjusting for reporting error changed the estimate of the duration distribution, but did not change the estimates of covariate effects and had little effect on epidemic predictions. Our epidemic simulation study indicates that inclusion of network structure based on architectural and organizational structure data can improve the accuracy of epidemic forecasting models.

  1. Detection of microcalcifications in mammograms using error of prediction and statistical measures

    Science.gov (United States)

    Acha, Begoña; Serrano, Carmen; Rangayyan, Rangaraj M.; Leo Desautels, J. E.

    2009-01-01

    A two-stage method for detecting microcalcifications in mammograms is presented. In the first stage, candidate microcalcification pixels are determined. For this purpose, a 2-D linear prediction error filter is applied, and for those pixels where the prediction error is larger than a threshold, a statistical measure is calculated to determine whether or not they are candidates for microcalcifications. In the second stage, a feature vector is derived for each candidate, and after a classification step using a support vector machine, the final detection is performed. The algorithm is tested with 40 mammographic images, from Screen Test: The Alberta Program for the Early Detection of Breast Cancer, with 50-μm resolution, and the results are evaluated using a free-response receiver operating characteristics curve. Two different analyses are performed: an individual microcalcification detection analysis and a cluster analysis. In the analysis of individual microcalcifications, detection sensitivity values of 0.75 and 0.81 are obtained at 2.6 and 6.2 false positives per image, on average, respectively. The best performance is characterized by a sensitivity of 0.89, a specificity of 0.99, and a positive predictive value of 0.79. In cluster analysis, a sensitivity value of 0.97 is obtained at 1.77 false positives per image, and a value of 0.90 is achieved at 0.94 false positives per image.
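    The first-stage idea (flag pixels whose 2-D linear prediction error exceeds a threshold) can be sketched as follows; the causal neighborhood shape, the least-squares fit, and the threshold value are our assumptions for illustration, not the authors' exact filter:

```python
import numpy as np

def prediction_error_map(img, k=2):
    """Fit one 2-D linear predictor of each pixel from its causal
    neighborhood (rows above, pixels to the left) by least squares,
    and return the map of prediction errors."""
    H, W = img.shape
    feats, targets, coords = [], [], []
    for i in range(k, H):
        for j in range(k, W - k):
            neigh = np.concatenate([
                img[i - k:i, j - k:j + k + 1].ravel(),  # k rows above
                img[i, j - k:j],                        # k pixels to the left
            ])
            feats.append(neigh)
            targets.append(img[i, j])
            coords.append((i, j))
    X, y = np.asarray(feats), np.asarray(targets)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    err = np.zeros_like(img, dtype=float)
    for (i, j), e in zip(coords, y - X @ coef):
        err[i, j] = e
    return err

# a smooth noisy background with one bright speck: the speck cannot be
# predicted from its neighbors, so its prediction error stands out
rng = np.random.default_rng(0)
img = rng.normal(100.0, 2.0, (32, 32))
img[16, 16] += 40.0
candidates = np.abs(prediction_error_map(img)) > 10.0
```

    Pixels that a linear predictor cannot explain from their surroundings, such as microcalcification-like specks, survive the threshold and move on to the second (feature and classification) stage.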

  2. Recursive prediction error methods for online estimation in nonlinear state-space models

    Directory of Open Access Journals (Sweden)

    Dag Ljungquist

    1994-04-01

    Full Text Available Several recursive algorithms for online, combined state and parameter estimation in nonlinear state-space models are discussed in this paper. Well-known algorithms such as the extended Kalman filter and alternative formulations of the recursive prediction error method are included, as well as a new method based on a line-search strategy. A comparison of the algorithms illustrates that they are very similar although the differences can be important for the online tracking capabilities and robustness. Simulation experiments on a simple nonlinear process show that the performance under certain conditions can be improved by including a line-search strategy.
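    As a minimal illustration of the joint state and parameter estimation these algorithms perform, here is an augmented-state extended Kalman filter on a toy scalar model x[k+1] = a*sin(x[k]) + w, y[k] = x[k] + v; the model, noise levels, and tuning are our assumptions, not those of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# simulate x[k+1] = a*sin(x[k]) + w,  y[k] = x[k] + v,  with a = 0.9
a_true, n = 0.9, 500
x, ys = 0.0, []
for _ in range(n):
    x = a_true * np.sin(x) + rng.normal(0.0, 0.3)
    ys.append(x + rng.normal(0.0, 0.1))

def augmented_ekf(ys, q_x=0.09, q_a=1e-5, r=0.01):
    """EKF on the augmented state z = [x, a]; the parameter a is modeled
    as a slow random walk so the filter can track it online."""
    z = np.array([0.0, 0.5])             # initial guesses for x and a
    P = np.eye(2)
    Q = np.diag([q_x, q_a])
    for y in ys:
        xk, ak = z
        F = np.array([[ak * np.cos(xk), np.sin(xk)],   # Jacobian of f
                      [0.0,             1.0]])
        z = np.array([ak * np.sin(xk), ak])            # time update
        P = F @ P @ F.T + Q
        H = np.array([1.0, 0.0])                       # y = x + v
        S = H @ P @ H + r
        K = P @ H / S
        z = z + K * (y - z[0])                         # measurement update
        P = P - np.outer(K, H) @ P
    return z

x_hat, a_hat = augmented_ekf(ys)   # a_hat should approach a_true = 0.9
```

    Augmenting the state with the unknown parameter lets a single recursion track both; the recursive prediction error methods compared in the paper differ mainly in how the gain and search direction are computed.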

  3. Measured and predicted root-mean-square errors in square and triangular antenna mesh facets

    Science.gov (United States)

    Fichter, W. B.

    1989-01-01

    Deflection shapes of square and equilateral triangular facets of two tricot-knit, gold-plated molybdenum wire mesh antenna materials were measured and compared, on the basis of root mean square (rms) differences, with deflection shapes predicted by linear membrane theory, for several cases of biaxial mesh tension. The two mesh materials contained approximately 10 and 16 holes per linear inch, measured diagonally with respect to the course and wale directions. The deflection measurement system employed a non-contact eddy current proximity probe and an electromagnetic distance sensing probe in conjunction with a precision optical level. Despite experimental uncertainties, rms differences between measured and predicted deflection shapes suggest the following conclusions: that replacing flat antenna facets with facets conforming to parabolically curved structural members yields smaller rms surface error; that potential accuracy gains are greater for equilateral triangular facets than for square facets; and that linear membrane theory can be a useful tool in the design of tricot-knit wire mesh antennas.

  4. Addressing Conceptual Model Uncertainty in the Evaluation of Model Prediction Errors

    Science.gov (United States)

    Carrera, J.; Pool, M.

    2014-12-01

    Model predictions are uncertain because of errors in model parameters, future forcing terms, and model concepts. The latter remain the largest and most difficult to assess source of uncertainty in long term model predictions. We first review existing methods to evaluate conceptual model uncertainty. We argue that they are highly sensitive to the ingenuity of the modeler, in the sense that they rely on the modeler's ability to propose alternative model concepts. Worse, we find that the standard practice of stochastic methods leads to poor, potentially biased and often too optimistic, estimation of actual model errors. This is bad news because stochastic methods are purported to properly represent uncertainty. We contend that the problem does not lie in the stochastic approach itself, but in the way it is applied. Specifically, stochastic inversion methodologies, which demand quantitative information, tend to ignore geological understanding, which is conceptually rich. We illustrate some of these problems with the application to the Mar del Plata aquifer, where extensive data are available for nearly a century. Geologically based models, where spatial variability is handled through zonation, yield calibration fits similar to geostatistically based models, but much better predictions. In fact, the appearance of the stochastic T fields is similar to the geologically based models only in areas with high density of data. We take this finding to illustrate the ability of stochastic models to accommodate many data, but also, ironically, their inability to address conceptual model uncertainty. In fact, stochastic model realizations tend to be too close to the "most likely" one (i.e., they do not really realize the full conceptual uncertainty). The second part of the presentation is devoted to arguing that acknowledging model uncertainty may lead to qualitatively different decisions than just working with "most likely" model predictions. Therefore, efforts should concentrate on

  5. Current error vector based prediction control of the section winding permanent magnet linear synchronous motor

    Energy Technology Data Exchange (ETDEWEB)

    Hong Junjie, E-mail: hongjjie@mail.sysu.edu.cn [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China); Li Liyi, E-mail: liliyi@hit.edu.cn [Dept. Electrical Engineering, Harbin Institute of Technology, Harbin 150000 (China); Zong Zhijian; Liu Zhongtu [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China)

    2011-10-15

    Highlights: ► The structure of the permanent magnet linear synchronous motor (SW-PMLSM) is new. ► A new current control method, CEVPC, is employed in this motor. ► The sectional power supply method differs from the others and is effective. ► The performance worsens under voltage and current limitations. - Abstract: To achieve features such as greater thrust density and higher efficiency without reducing thrust stability, this paper proposes a section winding permanent magnet linear synchronous motor (SW-PMLSM), whose iron core is continuous, whereas the winding is divided. The discrete system model of the motor is derived. With the definition of the current error vector and the selection of the value function, the theory of the current error vector based prediction control (CEVPC) for the motor currents is explained clearly. According to the winding section feature, the motion region of the mover is divided into five zones, in which the implementation of the current predictive control method is proposed. Finally, the experimental platform is constructed and experiments are carried out. The results show that the current control has good dynamic response, and the thrust on the mover remains essentially constant.

  6. Prediction-error in the context of real social relationships modulates reward system activity

    Directory of Open Access Journals (Sweden)

    Joshua ePoore

    2012-08-01

    Full Text Available The human reward system is sensitive to both social (e.g., validation) and non-social (e.g., money) rewards and is likely integral for relationship development and reputation building. However, data are sparse on the question of whether implicit social reward processing meaningfully contributes to explicit social representations such as trust and attachment security in pre-existing relationships. This event-related fMRI experiment examined reward system prediction-error activity in response to a potent social reward (social validation) and this activity’s relation to both attachment security and trust in the context of real romantic relationships. During the experiment, participants’ expectations for their romantic partners’ positive regard of them were confirmed (validated) or violated, in either positive or negative directions. Primary analyses were conducted using predefined regions of interest, the locations of which were taken from previously published research. Results indicate that activity in midbrain and striatal reward system regions of interest was modulated by social reward expectation violation in ways consistent with prior research on reward prediction-error. Additionally, activity in the striatum during viewing of disconfirmatory information was associated with both increases in post-scan reports of attachment anxiety and decreases in post-scan trust, a finding that follows directly from representational models of attachment and trust.

  7. A two-dimensional matrix correction for off-axis portal dose prediction errors

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, Daniel W. [Department of Physics, State University of New York at Buffalo, Buffalo, New York 14260 (United States); Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Kumaraswamy, Lalith; Bakhtiari, Mohammad [Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 (United States); Podgorsak, Matthew B. [Department of Radiation Medicine, Roswell Park Cancer Institute, Buffalo, New York 14263 and Department of Physiology and Biophysics, State University of New York at Buffalo, Buffalo, New York 14214 (United States)

    2013-05-15

    Purpose: This study presents a follow-up to a modified calibration procedure for portal dosimetry published by Bailey et al. ['An effective correction algorithm for off-axis portal dosimetry errors,' Med. Phys. 36, 4089-4094 (2009)]. A commercial portal dose prediction system exhibits disagreement of up to 15% (calibrated units) between measured and predicted images as off-axis distance increases. The previous modified calibration procedure accounts for these off-axis effects in most regions of the detecting surface, but is limited by the simplistic assumption of radial symmetry. Methods: We find that a two-dimensional (2D) matrix correction, applied to each calibrated image, accounts for off-axis prediction errors in all regions of the detecting surface, including those still problematic after the radial correction is performed. The correction matrix is calculated by quantitative comparison of predicted and measured images that span the entire detecting surface. The correction matrix was verified for dose-linearity, and its effectiveness was verified on a number of test fields. The 2D correction was employed to retrospectively examine 22 off-axis, asymmetric electronic-compensation breast fields, five intensity-modulated brain fields (moderate-high modulation) manipulated for far off-axis delivery, and 29 intensity-modulated clinical fields of varying complexity in the central portion of the detecting surface. Results: Applying the matrix correction to the off-axis test fields and clinical fields improves predicted vs measured portal dose agreement by up to 15%, producing up to 10% better agreement than the radial correction in some areas of the detecting surface. Gamma evaluation analyses (3 mm, 3% global, 10% dose threshold) of predicted vs measured portal dose images demonstrate pass rate improvement of up to 75% with the matrix correction, producing pass rates that are up to 30% higher than those resulting from the radial correction technique alone.
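    The core of the correction, a pixelwise matrix obtained by comparing measured and predicted calibration images and then applied multiplicatively to new predicted images, can be sketched with synthetic numbers; the real system, image sizes, and calibration fields differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic off-axis bias that grows across the detecting surface
true_bias = 1.0 + 0.15 * np.linspace(0.0, 1.0, 64)[None, :]

# five calibration fields: measured images and slightly noisy predictions
measured = [rng.normal(100.0, 1.0, (64, 64)) * true_bias for _ in range(5)]
predicted = [m / true_bias + rng.normal(0.0, 0.5, (64, 64)) for m in measured]

# 2-D correction matrix: mean pixelwise ratio of measured to predicted
C = np.mean([m / p for m, p in zip(measured, predicted)], axis=0)

# applying the matrix to a predicted image recovers the measured one
corrected = predicted[0] * C
err_raw = np.abs(predicted[0] - measured[0]).mean()
err_corr = np.abs(corrected - measured[0]).mean()
```

    Unlike a radial correction, a full 2-D matrix can absorb off-axis disagreement that is not radially symmetric.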

  8. The feedback-related negativity reflects ‘more or less’ prediction error in appetitive and aversive conditions

    Directory of Open Access Journals (Sweden)

    Rongjun eYu

    2014-05-01

    Full Text Available Humans make predictions and use feedback to update their subsequent predictions. The feedback-related negativity (FRN has been found to be sensitive to negative feedback as well as negative prediction error, such that the FRN is larger for outcomes that are worse than expected. The present study examined prediction errors in both appetitive and aversive conditions. We found that the FRN was more negative for reward omission versus wins and for loss omission versus losses, suggesting that the FRN might classify outcomes in a more-or-less than expected fashion rather than in the better-or-worse than expected dimension. Our findings challenge the previous notion that the FRN only encodes negative feedback and ‘worse than expected’ negative prediction error.

  9. Small-Sample Error Estimation for Bagged Classification Rules

    Science.gov (United States)

    Vu, T. T.; Braga-Neto, U. M.

    2010-12-01

    Application of ensemble classification rules in genomics and proteomics has become increasingly common. However, the problem of error estimation for these classification rules, particularly for bagging under the small-sample settings prevalent in genomics and proteomics, is not well understood. Breiman proposed the "out-of-bag" method for estimating statistics of bagged classifiers, which was subsequently applied by other authors to estimate the classification error. In this paper, we give an explicit definition of the out-of-bag estimator that is intended to remove estimator bias, by formulating carefully how the error count is normalized. We also report the results of an extensive simulation study of bagging of common classification rules, including LDA, 3NN, and CART, applied on both synthetic and real patient data, corresponding to the use of common error estimators such as resubstitution, leave-one-out, cross-validation, basic bootstrap, bootstrap 632, bootstrap 632 plus, bolstering, semi-bolstering, in addition to the out-of-bag estimator. The results from the numerical experiments indicated that the performance of the out-of-bag estimator is very similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically biased. The performance of the other estimators is consistent with their performance with the corresponding single classifiers, as reported in other studies.
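    A minimal sketch of the out-of-bag idea for a bagged 3NN classifier follows. It uses a simple normalization (errors counted over samples that received at least one out-of-bag vote); the bias-removing normalization the paper proposes is more careful:

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=3):
    """Brute-force k-nearest-neighbour majority vote (binary labels)."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    nn = np.argsort(d, axis=1)[:, :k]
    return (ytr[nn].mean(axis=1) > 0.5).astype(int)

def oob_error(X, y, n_bags=50, k=3, seed=0):
    """Out-of-bag error of bagged kNN: each sample is predicted only by
    bags whose bootstrap draw excluded it."""
    rng = np.random.default_rng(seed)
    n = len(y)
    votes, counts = np.zeros(n), np.zeros(n)
    for _ in range(n_bags):
        idx = rng.integers(0, n, n)            # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)  # left-out points
        if len(oob) == 0:
            continue
        votes[oob] += knn_predict(X[idx], y[idx], X[oob], k)
        counts[oob] += 1
    seen = counts > 0
    pred = (votes[seen] / counts[seen] > 0.5).astype(int)
    return np.mean(pred != y[seen])

# two well-separated Gaussian classes (synthetic data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (30, 2)), rng.normal(2.5, 1.0, (30, 2))])
y = np.repeat([0, 1], 30)
err = oob_error(X, y)   # roughly the test error of the bagged classifier
```

    Because each point is scored only by bags that never saw it, the estimate behaves much like leave-one-out, which matches the simulation findings reported above.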

  10. Small-Sample Error Estimation for Bagged Classification Rules

    Directory of Open Access Journals (Sweden)

    Vu TT

    2010-01-01

    Full Text Available Application of ensemble classification rules in genomics and proteomics has become increasingly common. However, the problem of error estimation for these classification rules, particularly for bagging under the small-sample settings prevalent in genomics and proteomics, is not well understood. Breiman proposed the "out-of-bag" method for estimating statistics of bagged classifiers, which was subsequently applied by other authors to estimate the classification error. In this paper, we give an explicit definition of the out-of-bag estimator that is intended to remove estimator bias, by formulating carefully how the error count is normalized. We also report the results of an extensive simulation study of bagging of common classification rules, including LDA, 3NN, and CART, applied on both synthetic and real patient data, corresponding to the use of common error estimators such as resubstitution, leave-one-out, cross-validation, basic bootstrap, bootstrap 632, bootstrap 632 plus, bolstering, semi-bolstering, in addition to the out-of-bag estimator. The results from the numerical experiments indicated that the performance of the out-of-bag estimator is very similar to that of leave-one-out; in particular, the out-of-bag estimator is slightly pessimistically biased. The performance of the other estimators is consistent with their performance with the corresponding single classifiers, as reported in other studies.

  11. Testing alternative uses of electromagnetic data to reduce the prediction error of groundwater models

    Science.gov (United States)

    Kruse Christensen, Nikolaj; Christensen, Steen; Ferre, Ty Paul A.

    2016-05-01

    Although geophysics is being used increasingly, it is often unclear how and when the integration of geophysical data and models can best improve the construction and predictive capability of groundwater models. This paper uses a newly developed HYdrogeophysical TEst-Bench (HYTEB), a collection of geological, groundwater and geophysical modeling and inversion software, to demonstrate alternative uses of electromagnetic (EM) data for groundwater modeling in a hydrogeological environment consisting of various types of glacial deposits, with typical hydraulic conductivities and electrical resistivities, covering impermeable bedrock with low resistivity (clay). The synthetic 3-D reference system is designed so that there is a perfect relationship between hydraulic conductivity and electrical resistivity. For this system it is investigated to what extent groundwater model calibration and, often more importantly, model predictions can be improved by including in the calibration process electrical resistivity estimates obtained from TEM data. In all calibration cases, the hydraulic conductivity field is highly parameterized and the estimation is stabilized by (in most cases) geophysics-based regularization. For the studied system and inversion approaches it is found that resistivities estimated by sequential hydrogeophysical inversion (SHI) or joint hydrogeophysical inversion (JHI) should be used with caution as estimators of hydraulic conductivity or as regularization means for subsequent hydrological inversion. The limited groundwater model improvement obtained by using the geophysical data probably arises mainly from the way these data are used here: the alternative inversion approaches propagate geophysical estimation errors into the hydrologic model parameters. It was expected that JHI would compensate for this, but the hydrologic data were apparently insufficient to secure such compensation. With respect to reducing model prediction error, it depends on the type

  12. Triangle network motifs predict complexes by complementing high-error interactomes with structural information

    Directory of Open Access Journals (Sweden)

    Labudde Dirk

    2009-06-01

    Full Text Available Abstract Background Many high-throughput studies produce protein-protein interaction networks (PPINs) with many errors and missing information. Even for genome-wide approaches, there is often a low overlap between PPINs produced by different studies. Second-level neighbors separated by two protein-protein interactions (PPIs) were previously used for predicting protein function and finding complexes in high-error PPINs. We retrieve second-level neighbors in PPINs, and complement these with structural domain-domain interactions (SDDIs) representing binding evidence on proteins, forming PPI-SDDI-PPI triangles. Results We find low overlap between PPINs, SDDIs and known complexes, all well below 10%. We evaluate the overlap of PPI-SDDI-PPI triangles with known complexes from the Munich Information center for Protein Sequences (MIPS). PPI-SDDI-PPI triangles have ~20 times higher overlap with MIPS complexes than using second-level neighbors in PPINs without SDDIs. The biological interpretation for triangles is that a SDDI causes two proteins to be observed with common interaction partners in high-throughput experiments. The relatively few SDDIs overlapping with PPINs are part of highly connected SDDI components, and are more likely to be detected in experimental studies. We demonstrate the utility of PPI-SDDI-PPI triangles by reconstructing myosin-actin processes in the nucleus, cytoplasm, and cytoskeleton, which were not obvious in the original PPIN. Using other complementary datatypes in place of SDDIs to form triangles, such as PubMed co-occurrences or threading information, results in a similar ability to find protein complexes. Conclusion Given high-error PPINs with missing information, triangles of mixed datatypes are a promising direction for finding protein complexes. Integrating PPINs with SDDIs improves finding complexes. Structural SDDIs partially explain the high functional similarity of second-level neighbors in PPINs. We estimate that

  13. When theory and biology differ: The relationship between reward prediction errors and expectancy.

    Science.gov (United States)

    Williams, Chad C; Hassall, Cameron D; Trska, Robert; Holroyd, Clay B; Krigolson, Olave E

    2017-09-18

    Comparisons between expectations and outcomes are critical for learning. Termed prediction errors, the violations of expectancy that occur when outcomes differ from expectations are used to modify value and shape behaviour. In the present study, we examined how a wide range of expectancy violations impacted neural signals associated with feedback processing. Participants performed a time estimation task in which they had to guess the duration of one second while their electroencephalogram was recorded. In a key manipulation, we varied task difficulty across the experiment to create a range of different feedback expectancies - reward feedback was either very expected, expected, 50/50, unexpected, or very unexpected. As predicted, the amplitude of the reward positivity, a component of the human event-related brain potential associated with feedback processing, scaled inversely with expectancy (e.g., unexpected feedback yielded a larger reward positivity than expected feedback). Interestingly, the scaling of the reward positivity to outcome expectancy was not linear as would be predicted by some theoretical models. Specifically, we found that the amplitude of the reward positivity was about equivalent for very expected and expected feedback, and for very unexpected and unexpected feedback. As such, our results demonstrate a sigmoidal relationship between reward expectancy and the amplitude of the reward positivity, with interesting implications for theories of reinforcement learning. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Harsh parenting and fearfulness in toddlerhood interact to predict amplitudes of preschool error-related negativity

    Directory of Open Access Journals (Sweden)

    Rebecca J. Brooker

    2014-07-01

    Full Text Available Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children's risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems.

  15. Temporal dynamics of prediction error processing during reward-based decision making.

    Science.gov (United States)

    Philiastides, Marios G; Biele, Guido; Vavatzanidis, Niki; Kazzer, Philipp; Heekeren, Hauke R

    2010-10-15

    Adaptive decision making depends on the accurate representation of rewards associated with potential choices. These representations can be acquired with reinforcement learning (RL) mechanisms, which use the prediction error (PE, the difference between expected and received rewards) as a learning signal to update reward expectations. While EEG experiments have highlighted the role of feedback-related potentials during performance monitoring, important questions about the temporal sequence of feedback processing and the specific function of feedback-related potentials during reward-based decision making remain. Here, we hypothesized that feedback processing starts with a qualitative evaluation of outcome valence, which is subsequently complemented by a quantitative representation of PE magnitude. Results of a model-based single-trial analysis of EEG data collected during a reversal learning task showed that around 220 ms after feedback, outcomes are initially evaluated categorically with respect to their valence (positive vs. negative). Around 300 ms, and parallel to the maintained valence evaluation, the brain also represents quantitative information about PE magnitude, thus providing the complete information needed to update reward expectations and to guide adaptive decision making. Importantly, our single-trial EEG analysis based on PEs from an RL model showed that the feedback-related potentials do not merely reflect error awareness, but rather quantitative information crucial for learning reward contingencies.
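    The PE used as a learning signal in such RL models is easy to state; a minimal Rescorla-Wagner-style learner (with illustrative parameters, not those fitted to the EEG data) is:

```python
def rw_prediction_errors(outcomes, alpha=0.3, v0=0.5):
    """Trial-by-trial prediction errors: pe = r - v, then v <- v + alpha*pe."""
    v, pes = v0, []
    for r in outcomes:
        pe = r - v          # positive for better-than-expected outcomes
        v += alpha * pe     # expectation moves toward the outcome
        pes.append(pe)
    return pes

# rewarded, rewarded, omitted, rewarded
pes = rw_prediction_errors([1, 1, 0, 1])
```

    In a model-based single-trial analysis, per-trial PE values like these become regressors for the EEG signal: the sign carries the valence and the magnitude the quantitative information.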

  16. Constraining uncertainty in the prediction of pollutant transport in rivers allowing for measurement error.

    Science.gov (United States)

    Smith, P.; Beven, K.; Blazkova, S.; Merta, L.

    2003-04-01

    This poster outlines a methodology for the estimation of parameters in an Aggregated Dead Zone (ADZ) model of pollutant transport, by use of an example reach of the River Elbe. Both tracer and continuous water quality measurements are analysed to investigate the relationship between discharge and advective time delay. This includes a study of the effects of different error distributions being applied to the measurement of both variables using Monte-Carlo Markov Chain (MCMC) techniques. The derived relationships between discharge and advective time delay can then be incorporated into the formulation of the ADZ model to allow prediction of pollutant transport given uncertainty in the parameter values. The calibration is demonstrated in a hierarchical framework, giving the potential for the selection of appropriate model structures for the change in transport characteristics with discharge in the river. The value of different types and numbers of measurements are assessed within this framework.
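    A generic Metropolis sampler conveys the flavor of such MCMC calibration. Here we fit a synthetic power-law relation tau = a*Q^b between discharge Q and advective time delay tau under lognormal measurement error; this is our illustration, and the ADZ model and error structures explored in the poster are richer:

```python
import numpy as np

rng = np.random.default_rng(4)
Q = rng.uniform(5.0, 50.0, 40)                           # discharge
tau = 100.0 * Q**-0.7 * np.exp(rng.normal(0, 0.05, 40))  # time delay (synthetic)

def log_post(theta):
    """Flat-prior log-posterior for log tau = log a + b log Q + N(0, s^2)."""
    log_a, b, log_s = theta
    resid = np.log(tau) - (log_a + b * np.log(Q))
    s2 = np.exp(2.0 * log_s)
    return -0.5 * np.sum(resid**2) / s2 - len(Q) * log_s

theta = np.array([np.log(50.0), -0.5, np.log(0.1)])      # crude start
lp, chain = log_post(theta), []
for _ in range(5000):
    prop = theta + rng.normal(0.0, [0.05, 0.02, 0.05])   # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:             # Metropolis accept
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
b_hat = np.array(chain[1000:])[:, 1].mean()              # posterior mean of b
```

    The retained chain approximates the joint posterior of (log a, b, log s), from which parameter uncertainty can be propagated into pollutant transport predictions.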

  17. Prediction and standard error estimation for a finite universe total when a stratum is not sampled

    Energy Technology Data Exchange (ETDEWEB)

    Wright, T.

    1994-01-01

    In the context of a universe of trucks operating in the United States in 1990, this paper presents statistical methodology for estimating a finite universe total on a second occasion when a part of the universe is sampled and the remainder of the universe is not sampled. Prediction is used to compensate for the lack of data from the unsampled portion of the universe. The sample is assumed to be a subsample of an earlier sample where stratification is used on both occasions before sample selection. Accounting for births and deaths in the universe between the two points in time, the detailed sampling plan, estimator, standard error, and optimal sample allocation are presented with a focus on the second occasion. If prior auxiliary information is available, the methodology is also applicable to a first occasion.
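    A toy version of the general idea (expansion estimation for the sampled strata plus model-based prediction of the unsampled stratum from prior-occasion information) might look like the following; all numbers are invented, and the paper's estimator, birth/death accounting, and variance formulas are more detailed:

```python
import numpy as np

# stratum sizes on occasion 2; stratum index 2 is NOT sampled
N = np.array([200, 300, 100])
prior_mean = np.array([5.0, 8.0, 12.0])      # occasion-1 stratum means

rng = np.random.default_rng(2)
true_growth = 1.1                            # common mean growth (synthetic)
samples = [rng.normal(prior_mean[h] * true_growth, 1.0, 30) for h in range(2)]

# expansion estimator over the sampled strata
t_sampled = sum(N[h] * samples[h].mean() for h in range(2))

# predict the unsampled stratum: scale its prior-occasion mean by the
# growth estimated from the sampled strata
g_hat = np.mean([samples[h].mean() / prior_mean[h] for h in range(2)])
t_pred = N[2] * g_hat * prior_mean[2]

t_total = t_sampled + t_pred                 # predicted universe total
```

    The prediction step substitutes a model (here, a shared growth ratio) for the missing stratum's data, which is exactly where the extra standard-error term for the unsampled stratum enters.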

  18. EEG oscillatory patterns are associated with error prediction during music performance and are altered in musician's dystonia.

    Science.gov (United States)

    Ruiz, María Herrojo; Strübing, Felix; Jabusch, Hans-Christian; Altenmüller, Eckart

    2011-04-15

    Skilled performance requires the ability to monitor ongoing behavior, detect errors in advance and modify the performance accordingly. The acquisition of fast predictive mechanisms might be possible due to the extensive training characterizing expert performance. Recent EEG studies on piano performance reported a negative event-related potential (ERP) triggered in the ACC 70 ms before performance errors (pitch errors due to incorrect keypress). This ERP component, termed pre-error related negativity (pre-ERN), was assumed to reflect processes of error detection in advance. However, some questions remained to be addressed: (i) Does the electrophysiological marker prior to errors reflect an error signal itself or is it related instead to the implementation of control mechanisms? (ii) Does the posterior frontomedial cortex (pFMC, including ACC) interact with other brain regions to implement control adjustments following motor prediction of an upcoming error? (iii) Can we gain insight into the electrophysiological correlates of error prediction and control by assessing the local neuronal synchronization and phase interaction among neuronal populations? (iv) Finally, are error detection and control mechanisms defective in pianists with musician's dystonia (MD), a focal task-specific dystonia resulting from dysfunction of the basal ganglia-thalamic-frontal circuits? Consequently, we investigated the EEG oscillatory and phase synchronization correlates of error detection and control during piano performances in healthy pianists and in a group of pianists with MD. In healthy pianists, the main outcomes were increased pre-error theta and beta band oscillations over the pFMC and 13-15 Hz phase synchronization between the pFMC and the right lateral prefrontal cortex, which predicted corrective mechanisms. In MD patients, the pattern of phase synchronization appeared in a different frequency band (6-8 Hz) and correlated with the severity of the disorder. The present

  19. Hierarchical prediction errors in midbrain and basal forebrain during sensory learning.

    Science.gov (United States)

    Iglesias, Sandra; Mathys, Christoph; Brodersen, Kay H; Kasper, Lars; Piccirelli, Marco; den Ouden, Hanneke E M; Stephan, Klaas E

    2013-10-16

    In Bayesian brain theories, hierarchically related prediction errors (PEs) play a central role for predicting sensory inputs and inferring their underlying causes, e.g., the probabilistic structure of the environment and its volatility. Notably, PEs at different hierarchical levels may be encoded by different neuromodulatory transmitters. Here, we tested this possibility in computational fMRI studies of audio-visual learning. Using a hierarchical Bayesian model, we found that low-level PEs about visual stimulus outcome were reflected by widespread activity in visual and supramodal areas but also in the midbrain. In contrast, high-level PEs about stimulus probabilities were encoded by the basal forebrain. These findings were replicated in two groups of healthy volunteers. While our fMRI measures do not reveal the exact neuron types activated in midbrain and basal forebrain, they suggest a dichotomy between neuromodulatory systems, linking dopamine to low-level PEs about stimulus outcome and acetylcholine to more abstract PEs about stimulus probabilities.

  20. The Method for Calculating Atmospheric Drag Coefficient Based on the Characteristics of Along-track Error in LEO Orbit Prediction

    Science.gov (United States)

    Wang, H. B.; Zhao, C. Y.; Liu, Z. G.; Zhang, W.

    2016-07-01

    The errors of the atmospheric density model and the drag coefficient are the major factors limiting the accuracy of orbit prediction for LEO (Low Earth Orbit) objects, which adversely affects space missions that need a high-precision orbit. This paper proposes a new method for calculating the drag coefficient based on the divergence laws of the prediction error's along-track component. Firstly, we deduce the expression of the along-track error in LEO orbit prediction, revealing the combined effect of the initial orbit and model errors in the along-track direction. According to this expression, we work out a suitable drag coefficient to adopt in the prediction step on the basis of certain information from the orbit determination step; this limits the growth rate of the along-track error and reduces the largest error in this direction, thereby improving the accuracy of orbit prediction. In order to verify the method's accuracy and success rate in practical orbit prediction, we use the full-arc high-precision position data from the GPS receiver on GRACE-A. The results show that the new method can significantly improve the prediction accuracy by about 45%, achieving a success rate of about 71% and an effective rate of about 86%, with respect to the classical method, which uses the fitted drag coefficient directly from the orbit determination step. Furthermore, the new method shows considerable practical value, because it is effective for low, moderate, and high solar radiation levels, as well as for quiet and moderate geomagnetic activity conditions.

  1. Neural correlates of sensory prediction errors in monkeys: evidence for internal models of voluntary self-motion in the cerebellum.

    Science.gov (United States)

    Cullen, Kathleen E; Brooks, Jessica X

    2015-02-01

    During self-motion, the vestibular system makes essential contributions to postural stability and self-motion perception. To ensure accurate perception and motor control, it is critical to distinguish between vestibular sensory inputs that are the result of externally applied motion (exafference) and those that are the result of our own actions (reafference). Indeed, although the vestibular sensors encode vestibular exafference and reafference with equal fidelity, neurons at the first central stage of sensory processing selectively encode vestibular exafference. The mechanism underlying this reafferent suppression compares the brain's motor-based expectation of sensory feedback with the actual sensory consequences of voluntary self-motion, effectively computing the sensory prediction error (i.e., exafference). It is generally thought that sensory prediction errors are computed in the cerebellum, yet it has been challenging to explicitly demonstrate this. We have recently addressed this question and found that deep cerebellar nuclei neurons explicitly encode sensory prediction errors during self-motion. Importantly, in everyday life, sensory prediction errors occur in response to changes in the effector or world (muscle strength, load, etc.), as well as in response to externally applied sensory stimulation. Accordingly, we hypothesize that altering the relationship between motor commands and the actual movement parameters will result in the updating of the cerebellum-based computation of exafference. If our hypothesis is correct, under these conditions, neuronal responses should initially be increased--consistent with a sudden increase in the sensory prediction error. Then, over time, as the internal model is updated, response modulation should decrease in parallel with a reduction in sensory prediction error, until vestibular reafference is again suppressed. The finding that the internal model predicting the sensory consequences of motor commands adapts for new

  2. Prediction of position estimation errors for 3D target trajectories estimated from cone-beam CT projections

    DEFF Research Database (Denmark)

    Poulsen, Per Rugaard; Cho, Byungchul; Keall, Paul

    2010-01-01

    …The mathematical formalism of the method includes an individualized measure of the position estimation error in terms of an estimated 1D Gaussian distribution for the unresolved target position [2]. The present study investigates how well this 1D Gaussian predicts the actual distribution of position estimation … This finding indicates that individualized root-mean-square errors and 95% confidence intervals can be applied reliably to the estimated target trajectories…

  3. Factors predictive of intravenous fluid administration errors in Australian surgical care wards

    OpenAIRE

    2005-01-01

    Background: Intravenous (IV) fluid administration is an integral component of clinical care. Errors in administration can cause detrimental patient outcomes and increase healthcare costs, although little is known about medication administration errors associated with continuous IV infusions.

  4. A Hybrid Prediction Method of Thermal Extension Error for Boring Machine Based on PCA and LS-SVM

    Directory of Open Access Journals (Sweden)

    Cheng Qiang

    2017-01-01

    Full Text Available The thermal extension error of the boring bar in the z-axis is one of the key factors degrading the machining accuracy of a boring machine, so accurately establishing the relationship between thermal extension length and temperature, and predicting how the thermal error changes, are prerequisites for thermal extension error compensation. In this paper, a method for predicting the thermal extension length of the boring bar in a boring machine is proposed based on principal component analysis (PCA) and a least squares support vector machine (LS-SVM) model. In order to avoid multicollinearity and coupling among the large number of temperature input variables, PCA is first introduced to extract the principal components of the temperature data samples. Then, LS-SVM is used to predict the changing tendency of the thermally induced extension error of the boring bar. Finally, experiments are conducted on a boring machine; the results show that the residual axial thermal elongation error of the boring bar dropped below 5 μm, with a minimum residual error of only 0.5 μm. This method not only improves the efficiency of temperature data acquisition and analysis, but also improves the modeling accuracy and robustness.
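
    The two-stage structure of such a model can be sketched as follows. This is a minimal illustration on synthetic temperature data, with ordinary least squares standing in for the LS-SVM regressor; the sensor setup and all coefficients are invented for the example, not taken from the paper.

```python
import math
import random

random.seed(0)

# Synthetic data: two highly correlated temperature sensors driving a
# thermal extension error (stand-ins for the paper's real measurements).
n = 200
t_base = [random.uniform(20.0, 40.0) for _ in range(n)]
T1 = [t + random.gauss(0, 0.5) for t in t_base]
T2 = [1.2 * t + 3.0 + random.gauss(0, 0.5) for t in t_base]
y = [0.8 * t + random.gauss(0, 0.3) for t in t_base]  # extension error (um)

# --- Stage 1: PCA on the temperature inputs (closed form for 2 variables) ---
def mean(v):
    return sum(v) / len(v)

m1, m2 = mean(T1), mean(T2)
X = [(a - m1, b - m2) for a, b in zip(T1, T2)]
c11 = sum(a * a for a, _ in X) / n
c22 = sum(b * b for _, b in X) / n
c12 = sum(a * b for a, b in X) / n
# Angle of the first eigenvector of the 2x2 covariance matrix.
theta = 0.5 * math.atan2(2 * c12, c11 - c22)
e1 = (math.cos(theta), math.sin(theta))
pc1 = [a * e1[0] + b * e1[1] for a, b in X]  # first principal component scores

# --- Stage 2: regress extension error on PC1 (OLS stand-in for LS-SVM) ---
my = mean(y)
beta = sum(p * (yy - my) for p, yy in zip(pc1, y)) / sum(p * p for p in pc1)

def predict(t1, t2):
    p = (t1 - m1) * e1[0] + (t2 - m2) * e1[1]
    return my + beta * p

residuals = [yy - predict(a, b) for a, b, yy in zip(T1, T2, y)]
rmse = math.sqrt(sum(r * r for r in residuals) / n)
```

    Reducing the correlated sensors to one principal component before regressing is what removes the multicollinearity the abstract refers to; a real implementation would retain several components and replace the OLS step with an LS-SVM.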

  5. Improved model predictive control of resistive wall modes by error field estimator in EXTRAP T2R

    Science.gov (United States)

    Setiadi, A. C.; Brunsell, P. R.; Frassinetti, L.

    2016-12-01

    Many implementations of a model-based approach for toroidal plasma have shown better control performance compared to the conventional type of feedback controller. One prerequisite of model-based control is the availability of a control oriented model. This model can be obtained empirically through a systematic procedure called system identification. Such a model is used in this work to design a model predictive controller to stabilize multiple resistive wall modes in EXTRAP T2R reversed-field pinch. Model predictive control is an advanced control method that can optimize the future behaviour of a system. Furthermore, this paper will discuss an additional use of the empirical model which is to estimate the error field in EXTRAP T2R. Two potential methods are discussed that can estimate the error field. The error field estimator is then combined with the model predictive control and yields better radial magnetic field suppression.

  6. A Novel Prediction Algorithm of DR Position Error Based on Bayesian Regularization Back-propagation Neural Network

    Directory of Open Access Journals (Sweden)

    Li Honglian

    2013-07-01

    Full Text Available Since it is difficult to accurately reckon vehicle position for a vehicle navigation system (VNS) during GPS outages, a novel prediction algorithm for the dead reckoning (DR) position error is put forward, based on a Bayesian regularization back-propagation (BRBP) neural network. DR and GPS position data are first de-noised and compared at different stationary wavelet transformation (SWT) decomposition levels, and DR position error data are acquired after the differences of the SWT coefficients are reconstructed. A neural network mimicking the position error behaviour is trained with the back-propagation algorithm, and its generalization is improved by Bayesian regularization theory. During GPS outages, the established algorithm predicts DR position errors and provides precise positions for the VNS by updating the DR position data with the predicted errors. The simulation results show that the positioning precision of the BRBP algorithm is the best among the presented prediction algorithms, such as simple DR and the adaptive linear network, and no precise mathematical model of the navigation sensors needs to be established.

  7. Toward a better understanding on the role of prediction error on memory processes: From bench to clinic.

    Science.gov (United States)

    Krawczyk, María C; Fernández, Rodrigo S; Pedreira, María E; Boccia, Mariano M

    2017-07-01

    Experimental psychology defines Prediction Error (PE) as a mismatch between expected and current events. It represents a unifier concept within the memory field, as it is the driving force of memory acquisition and updating. Prediction error induces updating of consolidated memories in strength or content by memory reconsolidation. This process has two different neurobiological phases, which involves the destabilization (labilization) of a consolidated memory followed by its restabilization. The aim of this work is to emphasize the functional role of PE on the neurobiology of learning and memory, integrating and discussing different research areas: behavioral, neurobiological, computational and clinical psychiatry. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Inferring reward prediction errors in patients with schizophrenia: a dynamic reward task for reinforcement learning

    Directory of Open Access Journals (Sweden)

    Chia-Tzu Li

    2014-11-01

    Full Text Available Abnormalities in the dopamine system have long been implicated in explanations of reinforcement learning and psychosis. The updated reward prediction error (RPE—a discrepancy between the predicted and actual rewards—is thought to be encoded by dopaminergic neurons. Dysregulation of dopamine systems could alter the appraisal of stimuli and eventually lead to schizophrenia. Accordingly, the measurement of RPE provides a potential behavioral index for the evaluation of brain dopamine activity and psychotic symptoms. Here, we assess two features potentially crucial to the RPE process, namely belief formation and belief perseveration, via a probability learning task and reinforcement-learning modeling. Forty-five patients with schizophrenia [26 high-psychosis and 19 low-psychosis, based on their p1 and p3 scores in the positive-symptom subscales of the Positive and Negative Syndrome Scale (PANSS)] and 24 controls were tested in a feedback-based dynamic reward task for their RPE-related decision making. While task scores across the three groups were similar, matching law analysis revealed that the reward sensitivities of both psychosis groups were lower than that of controls. Trial-by-trial data were further fit with a reinforcement learning model using the Bayesian estimation approach. Model fitting results indicated that both psychosis groups tend to update their reward values more rapidly than controls. Moreover, among the three groups, high-psychosis patients had the lowest degree of choice perseveration. Lumping patients’ data together, we also found that patients’ perseveration appears to be negatively correlated (p = .09, trending towards significance) with their PANSS p1+p3 scores. Our method provides an alternative for investigating reward-related learning and decision making in basic and clinical settings.

  9. Inferring reward prediction errors in patients with schizophrenia: a dynamic reward task for reinforcement learning.

    Science.gov (United States)

    Li, Chia-Tzu; Lai, Wen-Sung; Liu, Chih-Min; Hsu, Yung-Fong

    2014-01-01

    Abnormalities in the dopamine system have long been implicated in explanations of reinforcement learning and psychosis. The updated reward prediction error (RPE)-a discrepancy between the predicted and actual rewards-is thought to be encoded by dopaminergic neurons. Dysregulation of dopamine systems could alter the appraisal of stimuli and eventually lead to schizophrenia. Accordingly, the measurement of RPE provides a potential behavioral index for the evaluation of brain dopamine activity and psychotic symptoms. Here, we assess two features potentially crucial to the RPE process, namely belief formation and belief perseveration, via a probability learning task and reinforcement-learning modeling. Forty-five patients with schizophrenia [26 high-psychosis and 19 low-psychosis, based on their p1 and p3 scores in the positive-symptom subscales of the Positive and Negative Syndrome Scale (PANSS)] and 24 controls were tested in a feedback-based dynamic reward task for their RPE-related decision making. While task scores across the three groups were similar, matching law analysis revealed that the reward sensitivities of both psychosis groups were lower than that of controls. Trial-by-trial data were further fit with a reinforcement learning model using the Bayesian estimation approach. Model fitting results indicated that both psychosis groups tend to update their reward values more rapidly than controls. Moreover, among the three groups, high-psychosis patients had the lowest degree of choice perseveration. Lumping patients' data together, we also found that patients' perseveration appears to be negatively correlated (p = 0.09, trending toward significance) with their PANSS p1 + p3 scores. Our method provides an alternative for investigating reward-related learning and decision making in basic and clinical settings.
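
    The RPE-driven value updating described in the two records above can be sketched with a simple Rescorla-Wagner-style learner. This is an illustrative stand-in, not the authors' fitted model; the learning rates and reward probability are invented for the example. A larger learning rate corresponds to the more rapid value updating the abstract reports for the patient groups.

```python
import random

random.seed(1)

def run_agent(alpha, p_reward=0.8, n_trials=500):
    """Track a single expected value v via RPE updates: delta = r - v."""
    v = 0.0
    for _ in range(n_trials):
        reward = 1.0 if random.random() < p_reward else 0.0
        rpe = reward - v   # reward prediction error
        v += alpha * rpe   # value update scaled by the learning rate
    return v

v_slow = run_agent(alpha=0.05)  # slow, stable updating (control-like)
v_fast = run_agent(alpha=0.6)   # rapid, volatile updating (patient-like)
```

    Both agents converge toward the true reward probability (0.8), but the fast learner's value estimate fluctuates far more from trial to trial, which is the behavioural signature the model-fitting step detects.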

  10. Cognitive reappraisal modulates expected value and prediction error encoding in the ventral striatum.

    Science.gov (United States)

    Staudinger, Markus R; Erk, Susanne; Abler, Birgit; Walter, Henrik

    2009-08-15

    In addiction, loss of prefrontal inhibitory control is believed to contribute to impulsivity. To improve cognitive therapy approaches, it is important to determine whether cognitive control strategies can generally influence reward processing at the neural level. We investigated the effects of one such strategy--namely, reappraisal (distancing from feelings)--on neural reward processing in 16 healthy subjects by utilizing event-related functional magnetic resonance imaging (fMRI). In a monetary incentive delay task, expected reward value (expecting to win 0.50 euro vs. 0.10 euro) and outcome valence (win vs. omission) were varied. An attenuation of expected value and a modulation of prediction error (PE) coding caused by distancing were found in right vs. left ventral striatum (VST) in the expectation vs. outcome period, respectively. Distancing from reward feelings recruited a right hemispheric fronto-parietal network. Moreover, self-reported reappraisal success (decrease of feelings by distancing) showed a trend toward positive correlation with activation in the rostral cingulate zone and the lateral orbitofrontal cortex, both part of the regulation network. Our results expand upon recent findings by showing that cognitive control over reward processing impacts not only the expectation period but also the reward signals in the outcome period. Moreover, increased recruitment of prefrontal reflective subsystems might enhance deliberate control over both reward processing and hedonic experience.

  11. Scaling of Perceptual Errors Can Predict the Shape of Neural Tuning Curves

    Science.gov (United States)

    Shouval, Harel Z.; Agarwal, Animesh; Gavornik, Jeffrey P.

    2014-01-01

    Weber’s law, first characterized in the 19th century, states that errors estimating the magnitude of perceptual stimuli scale linearly with stimulus intensity. This linear relationship is found in most sensory modalities, generalizes to temporal interval estimation, and even applies to some abstract variables. Despite its generality and long experimental history, the neural basis of Weber’s law remains unknown. This work presents a simple theory explaining the conditions under which Weber’s law can result from neural variability and predicts that the tuning curves of neural populations which adhere to Weber’s law will have a log-power form with parameters that depend on spike-count statistics. The prevalence of Weber’s law suggests that it might be optimal in some sense. We examine this possibility, using variational calculus, and show that Weber’s law is optimal only when observed real-world variables exhibit power-law statistics with a specific exponent. Our theory explains how physiology gives rise to the behaviorally characterized Weber’s law and may represent a general governing principle relating perception to neural activity. PMID:23679640
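
    The linear error scaling that defines Weber's law is easy to demonstrate in simulation: multiplicative Gaussian noise yields a standard deviation of estimates that grows in proportion to the true magnitude, i.e. a constant coefficient of variation. This is a toy sketch; the Weber fraction of 0.1 is an arbitrary choice, not a value from the paper.

```python
import random
import statistics

random.seed(2)

weber_fraction = 0.1  # constant coefficient of variation (assumed)

def estimate_sd(magnitude, n=5000):
    """Standard deviation of noisy magnitude estimates under multiplicative noise."""
    samples = [magnitude * (1 + random.gauss(0, weber_fraction)) for _ in range(n)]
    return statistics.stdev(samples)

sd_small = estimate_sd(10.0)
sd_large = estimate_sd(100.0)
ratio = sd_large / sd_small  # Weber's law predicts this tracks 100/10 = 10
```
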

  12. [Prediction of spatial distribution of forest carbon storage in Heilongjiang Province using spatial error model].

    Science.gov (United States)

    Liu, Chang; Li, Feng-Ri; Zhen, Zhen

    2014-10-01

    Abstract: Based on the data from the Chinese National Forest Inventory (CNFI) and Key Ecological Benefit Forest Monitoring plots (5075 in total) in Heilongjiang Province in 2010 and concurrent meteorological data from 59 meteorological stations located in Heilongjiang, Jilin and Inner Mongolia, this paper established a spatial error model (SEM) in GeoDA using carbon storage as the dependent variable and several independent variables, including the diameter of living trees (DBH), number of trees per hectare (TPH), elevation (Elev), slope (Slope), and the product of precipitation and temperature (Rain_Temp). Global Moran's I was computed to describe the overall spatial autocorrelation of model results at different spatial scales. Local Moran's I was calculated at the optimal bandwidth (25 km) to present the spatial distribution of residuals. Intra-block spatial variances were computed to explain the spatial heterogeneity of residuals. Finally, a spatial distribution map of carbon storage in Heilongjiang was visualized based on the predictions. The results showed that the distribution of forest carbon storage in Heilongjiang exhibited spatial effects and was significantly influenced by stand, topographic and meteorological factors, especially average DBH. SEM could handle the spatial autocorrelation and heterogeneity well. There were significant spatial differences in the distribution of forest carbon storage. Carbon storage was mainly distributed in the Zhangguangcai, Xiao Xing'an and Da Xing'an Mountains, where dense forests exist, and rarely in the Songnen Plains, while the Wanda Mountains had moderate carbon storage.
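
    For reference, the spatial error model named above has the standard first-order form sketched below (from the general SEM literature, not reproduced from this paper): here y is carbon storage, X the stand/topographic/meteorological covariates, W the spatial weights matrix, and λ the spatial autoregressive coefficient on the error term.

```latex
y = X\beta + u, \qquad u = \lambda W u + \varepsilon, \qquad \varepsilon \sim N(0,\, \sigma^2 I)
```

    Moving the spatial dependence into the error term u is what lets the SEM absorb the residual autocorrelation that an ordinary regression would leave behind.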

  13. Altered neural reward and loss processing and prediction error signalling in depression.

    Science.gov (United States)

    Ubl, Bettina; Kuehner, Christine; Kirsch, Peter; Ruttorf, Michaela; Diener, Carsten; Flor, Herta

    2015-08-01

    Dysfunctional processing of reward and punishment may play an important role in depression. However, functional magnetic resonance imaging (fMRI) studies have shown heterogeneous results for reward processing in fronto-striatal regions. We examined neural responsivity associated with the processing of reward and loss during anticipation and receipt of incentives and related prediction error (PE) signalling in depressed individuals. Thirty medication-free depressed persons and 28 healthy controls performed an fMRI reward paradigm. Regions of interest analyses focused on neural responses during anticipation and receipt of gains and losses and related PE-signals. Additionally, we assessed the relationship between neural responsivity during gain/loss processing and hedonic capacity. When compared with healthy controls, depressed individuals showed reduced fronto-striatal activity during anticipation of gains and losses. The groups did not significantly differ in response to reward and loss outcomes. In depressed individuals, activity increases in the orbitofrontal cortex and nucleus accumbens during reward anticipation were associated with hedonic capacity. Depressed individuals showed an absence of reward-related PEs but encoded loss-related PEs in the ventral striatum. Depression seems to be linked to blunted responsivity in fronto-striatal regions associated with limited motivational responses for rewards and losses. Alterations in PE encoding might mirror blunted reward- and enhanced loss-related associative learning in depression. © The Author (2015). Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.

  14. On the improvement of neural cryptography using erroneous transmitted information with error prediction.

    Science.gov (United States)

    Allam, Ahmed M; Abbas, Hazem M

    2010-12-01

    Neural cryptography deals with the problem of "key exchange" between two neural networks using the mutual learning concept. The two networks exchange their outputs (in bits) and the key between the two communicating parties is eventually represented in the final learned weights, when the two networks are said to be synchronized. Security of neural synchronization is put at risk if an attacker is capable of synchronizing with any of the two parties during the training process. Therefore, diminishing the probability of such a threat improves the reliability of exchanging the output bits through a public channel. The synchronization with feedback algorithm is one of the existing algorithms that enhances the security of neural cryptography. This paper proposes three new algorithms to enhance the mutual learning process. They mainly depend on disrupting the attacker confidence in the exchanged outputs and input patterns during training. The first algorithm is called "Do not Trust My Partner" (DTMP), which relies on one party sending erroneous output bits, with the other party being capable of predicting and correcting this error. The second algorithm is called "Synchronization with Common Secret Feedback" (SCSFB), where inputs are kept partially secret and the attacker has to train its network on input patterns that are different from the training sets used by the communicating parties. The third algorithm is a hybrid technique combining the features of the DTMP and SCSFB. The proposed approaches are shown to outperform the synchronization with feedback algorithm in the time needed for the parties to synchronize.

  15. A Bayesian joint probability post-processor for reducing errors and quantifying uncertainty in monthly streamflow predictions

    Directory of Open Access Journals (Sweden)

    P. Pokhrel

    2012-10-01

    Full Text Available Hydrological post-processors refer here to statistical models that are applied to hydrological model predictions to further reduce prediction errors and to quantify remaining uncertainty. For streamflow predictions, post-processors are generally applied at daily or sub-daily time scales. For many applications, such as seasonal streamflow forecasting and water resources assessment, monthly volumes of streamflow are of primary interest. While it is possible to aggregate post-processed daily or sub-daily predictions to monthly time scales, the monthly volumes so produced may not have the least errors achievable and may not be reliable in uncertainty distributions. Post-processing directly at the monthly time scale is likely to be more effective. In this study, we investigate the use of a Bayesian joint probability (BJP) modelling approach to directly post-process model predictions of monthly streamflow volumes. We apply the BJP post-processor to 18 catchments located in eastern Australia and demonstrate its effectiveness in reducing prediction errors and quantifying prediction uncertainty.
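
    The core idea of a joint-probability post-processor can be sketched as follows: fit a bivariate normal to paired (raw prediction, observation) data, then issue each corrected forecast as the conditional distribution of the observation given the raw prediction. This is a deliberately simplified stand-in on invented synthetic data; the real BJP post-processor also transforms the data and fits the joint distribution in a Bayesian way.

```python
import math
import random
import statistics

random.seed(3)

# Synthetic hindcast: true monthly volumes and a biased raw model.
truth = [random.gauss(100.0, 20.0) for _ in range(1000)]
raw = [1.3 * t - 10.0 + random.gauss(0, 8.0) for t in truth]

# Fit the bivariate normal (means, sds, correlation) to the paired data.
mu_p, mu_o = statistics.mean(raw), statistics.mean(truth)
sd_p, sd_o = statistics.stdev(raw), statistics.stdev(truth)
rho = (sum((p - mu_p) * (o - mu_o) for p, o in zip(raw, truth))
       / ((len(raw) - 1) * sd_p * sd_o))

def post_process(pred):
    """Return (mean, sd) of the corrected predictive distribution."""
    mean = mu_o + rho * (sd_o / sd_p) * (pred - mu_p)
    sd = sd_o * math.sqrt(1.0 - rho ** 2)
    return mean, sd

# The post-processor removes the bias, shrinking the prediction error,
# and the conditional sd quantifies the remaining uncertainty.
corrected = [post_process(p)[0] for p in raw]
rmse_raw = math.sqrt(sum((p - t) ** 2 for p, t in zip(raw, truth)) / len(raw))
rmse_cor = math.sqrt(sum((c - t) ** 2 for c, t in zip(corrected, truth)) / len(raw))
```
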

  16. Thermal-Induced Errors Prediction and Compensation for a Coordinate Boring Machine Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2014-01-01

    Full Text Available To improve the precision of CNC machine tools, a thermal error model for the motorized spindle was proposed based on time series analysis, considering the length of the cutting tools and the thermal inclination angles, and real-time error compensation was implemented. A five-point method was applied to measure the radial thermal inclinations and axial expansion of the spindle with eddy current sensors, solving the problem that a three-point measurement cannot obtain the radial thermal angle errors. Then the stationarity of the thermal error sequences was determined by the augmented Dickey-Fuller test, and the autocorrelation/partial autocorrelation functions were applied to identify the model pattern. By combining the Yule-Walker equations with information criteria, the order and parameters of the models were solved effectively, which improved the prediction accuracy and generalization ability. The results indicated that the prediction accuracy of the time series model could reach up to 90%. In addition, the maximum axial error decreased from 39.6 μm to 7 μm after error compensation, and the machining accuracy was improved by 89.7%. Moreover, the accuracy improvements in the X and Y directions reached 77.4% and 86%, respectively, which demonstrates that the proposed measurement, modeling, and compensation methods are effective.
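
    The Yule-Walker estimation step mentioned above can be sketched for an AR(2) model: estimate the lag-1 and lag-2 autocorrelations, then solve the 2x2 Yule-Walker system for the coefficients. This is a toy example on simulated data with invented true coefficients, not the paper's spindle measurements.

```python
import random

random.seed(4)

# Simulate a stationary AR(2) series with known coefficients.
true_phi1, true_phi2 = 0.6, 0.3
x = [0.0, 0.0]
for _ in range(20000):
    x.append(true_phi1 * x[-1] + true_phi2 * x[-2] + random.gauss(0, 1.0))
x = x[2000:]  # drop burn-in so the series is effectively stationary

def autocorr(series, lag):
    m = sum(series) / len(series)
    num = sum((series[i] - m) * (series[i - lag] - m)
              for i in range(lag, len(series)))
    den = sum((s - m) ** 2 for s in series)
    return num / den

r1, r2 = autocorr(x, 1), autocorr(x, 2)

# Yule-Walker equations for AR(2):
#   r1 = phi1 + phi2 * r1
#   r2 = phi1 * r1 + phi2
det = 1.0 - r1 * r1
phi1 = (r1 - r1 * r2) / det
phi2 = (r2 - r1 * r1) / det
```

    In practice, the order (here fixed at 2) is chosen with information criteria, as the abstract describes.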

  17. Preschool speech error patterns predict articulation and phonological awareness outcomes in children with histories of speech sound disorders.

    Science.gov (United States)

    Preston, Jonathan L; Hull, Margaret; Edwards, Mary Louise

    2013-05-01

    To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up at age 8;3. The frequency of occurrence of preschool distortion errors, typical substitution and syllable structure errors, and atypical substitution and syllable structure errors was used to predict later speech sound production, PA, and literacy outcomes. Group averages revealed below-average school-age articulation scores and low-average PA but age-appropriate reading and spelling. Preschool speech error patterns were related to school-age outcomes. Children for whom >10% of their speech sound errors were atypical had lower PA and literacy scores at school age than children who produced fewer atypical errors, a pattern consistent with weakly specified phonological representations, leading to long-term PA weaknesses. Preschoolers' distortions may be resistant to change over time, leading to persisting speech sound production problems.

  18. Estimating Prediction Uncertainty from Geographical Information System Raster Processing: A User's Manual for the Raster Error Propagation Tool (REPTool)

    Science.gov (United States)

    Gurdak, Jason J.; Qi, Sharon L.; Geisler, Michael L.

    2009-01-01

    The U.S. Geological Survey Raster Error Propagation Tool (REPTool) is a custom tool for use with the Environmental System Research Institute (ESRI) ArcGIS Desktop application to estimate error propagation and prediction uncertainty in raster processing operations and geospatial modeling. REPTool is designed to introduce concepts of error and uncertainty in geospatial data and modeling and provide users of ArcGIS Desktop a geoprocessing tool and methodology to consider how error affects geospatial model output. Similar to other geoprocessing tools available in ArcGIS Desktop, REPTool can be run from a dialog window, from the ArcMap command line, or from a Python script. REPTool consists of public-domain, Python-based packages that implement Latin Hypercube Sampling within a probabilistic framework to track error propagation in geospatial models and quantitatively estimate the uncertainty of the model output. Users may specify error for each input raster or model coefficient represented in the geospatial model. The error for the input rasters may be specified as either spatially invariant or spatially variable across the spatial domain. Users may specify model output as a distribution of uncertainty for each raster cell. REPTool uses the Relative Variance Contribution method to quantify the relative error contribution from the two primary components in the geospatial model - errors in the model input data and coefficients of the model variables. REPTool is appropriate for many types of geospatial processing operations, modeling applications, and related research questions, including applications that consider spatially invariant or spatially variable error in geospatial data.
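
    The Latin Hypercube Sampling at the heart of this approach can be sketched as follows: each uncertain input's range is split into n equal-probability strata, one value is drawn per stratum, and the strata are shuffled independently per dimension before the samples are propagated through the model. This is a minimal two-input example with an invented toy model and error ranges, not REPTool's actual implementation.

```python
import random
import statistics

random.seed(5)

def lhs(n, dims=2):
    """Latin Hypercube Sample of n points in the unit hypercube."""
    cols = []
    for _ in range(dims):
        strata = [(i + random.random()) / n for i in range(n)]  # one draw per stratum
        random.shuffle(strata)  # independent pairing across dimensions
        cols.append(strata)
    return list(zip(*cols))

def model(coeff, precip):
    # Toy "geospatial model" for one raster cell: recharge = coeff * precip.
    return coeff * precip

n = 1000
samples = []
for u1, u2 in lhs(n):
    coeff = 0.2 + 0.1 * u1        # model-coefficient error: U(0.2, 0.3)
    precip = 800.0 + 200.0 * u2   # input-raster error: U(800, 1000) mm
    samples.append(model(coeff, precip))

# The propagated samples give a distribution of uncertainty for the cell.
mean_out = statistics.mean(samples)
s = sorted(samples)
p05, p95 = s[50], s[950]
```
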

  19. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    Science.gov (United States)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or to the ASR grammar network. However, this approach quickly runs into a trade-off between the coverage of errors and the increase in perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, that achieves both better coverage of errors and smaller perplexity, resulting in a significant improvement in ASR accuracy.

  20. Absorbed in the task: Personality measures predict engagement during task performance as tracked by error negativity and asymmetrical frontal activity.

    Science.gov (United States)

    Tops, Mattie; Boksem, Maarten A S

    2010-12-01

    We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently predicted self-reported persistence. We hypothesized that, during a prolonged monotonous task, absorption would predict initial ERN amplitudes, constraint would delay declines in ERN amplitudes and deterioration of performance, and drive for reward would predict left RFA when a reward could be obtained. Study 2, employing EEG recordings, confirmed our predictions. The results showed that most traits previously related to ERN amplitudes share a relationship with the motivational trait persistence. In addition, trait-context combinations that are likely associated with increased engagement predict larger ERN amplitudes and RFA. Together, these results support the hypothesis that engagement may be a common underlying factor predicting ERN amplitude.

  1. Highly porous thermal protection materials: Modelling and prediction of the methodical experimental errors

    Science.gov (United States)

    Cherepanov, Valery V.; Alifanov, Oleg M.; Morzhukhina, Alena V.; Budnik, Sergey A.

    2016-11-01

    The formation mechanisms and the main factors affecting the systematic error of thermocouples were investigated. Experimental studies and mathematical modelling established that in highly porous heat-resistant materials for aerospace applications, the thermocouple errors are determined by two competing mechanisms, with the errors correlating with the difference between the radiative and conductive heat fluxes. A comparative analysis was carried out, and some features of the methodical error formation related to the distance from the heated surface were established.

  2. A variational method for correcting non-systematic errors in numerical weather prediction

    Institute of Scientific and Technical Information of China (English)

    SHAO AiMei; XI Shuang; QIU ChongJian

    2009-01-01

    A variational method based on previous numerical forecasts is developed to estimate and correct the non-systematic component of numerical weather forecast error. The method assumes that the error depends linearly on some combination of the forecast fields, and three types of forecast combination are applied to identify the forecast error: 1) the forecasts at the ending time, 2) the combination of the initial fields and the forecasts at the ending time, and 3) the combination of the forecasts at the ending time and the tendency of the forecast. The Singular Value Decomposition (SVD) of the covariance matrix between the forecast and the forecast error is used to obtain the inverse mapping from flow space to error space during the training period. The background covariance matrix is thereby reduced to a simple diagonal matrix. The method is tested with a shallow-water equation model by introducing two different model errors. The results of error correction for 6, 24 and 48 h forecasts show that the method is effective in improving the quality of the forecast when the forecast error clearly exceeds the analysis error, and it is optimal when the third type of forecast combination is applied.

  4. Preschool Speech Error Patterns Predict Articulation and Phonological Awareness Outcomes in Children with Histories of Speech Sound Disorders

    Science.gov (United States)

    Preston, Jonathan L.; Hull, Margaret; Edwards, Mary Louise

    2013-01-01

    Purpose: To determine if speech error patterns in preschoolers with speech sound disorders (SSDs) predict articulation and phonological awareness (PA) outcomes almost 4 years later. Method: Twenty-five children with histories of preschool SSDs (and normal receptive language) were tested at an average age of 4;6 (years;months) and were followed up…

  5. Absorbed in the task : Personality measures predict engagement during task performance as tracked by error negativity and asymmetrical frontal activity

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.

    2010-01-01

    We hypothesized that interactions between traits and context predict task engagement, as measured by the amplitude of the error-related negativity (ERN), performance, and relative frontal activity asymmetry (RFA). In Study 1, we found that drive for reward, absorption, and constraint independently p

  6. Assessment of the prediction error in a large-scale application of a dynamic soil acidification model

    NARCIS (Netherlands)

    Kros, J.; Mol-Dijkstra, J.P.; Pebesma, E.J.

    2002-01-01

    The prediction error of a relatively simple soil acidification model (SMART2) was assessed before and after calibration, focussing on the aluminium and nitrate concentrations on a block scale. Although SMART2 is especially developed for application on a national to European scale, it still runs at a

  7. Prediction of human errors by maladaptive changes in event-related brain networks

    NARCIS (Netherlands)

    Eichele, T.; Debener, S.; Calhoun, V.D.; Specht, K.; Engel, A.K.; Hugdahl, K.; Cramon, D.Y. von; Ullsperger, M.

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we

  8. The Pe of Perfectionism Concern Over Mistakes Predicts the Amplitude of a Late Frontal Error Positivity

    NARCIS (Netherlands)

    Tops, Mattie; Koole, Sander L.; Wijers, Albertus A.

    2013-01-01

    The present research investigates the association between concern over mistakes (CoM), a facet of the personality style of perfectionism, and the error positivity (Pe), a response-locked event-related brain potential that relates to error-awareness. Sixteen healthy right-handed female participants p

  9. Prediction Error Representation in Individuals With Generalized Anxiety Disorder During Passive Avoidance.

    Science.gov (United States)

    White, Stuart F; Geraci, Marilla; Lewis, Elizabeth; Leshin, Joseph; Teng, Cindy; Averbeck, Bruno; Meffert, Harma; Ernst, Monique; Blair, James R; Grillon, Christian; Blair, Karina S

    2017-02-01

    Deficits in reinforcement-based decision making have been reported in generalized anxiety disorder. However, the pathophysiology of these deficits is largely unknown; published studies have mainly examined adolescents, and the integrity of core functional processes underpinning decision making remains undetermined. In particular, it is unclear whether the representation of reinforcement prediction error (PE) (the difference between received and expected reinforcement) is disrupted in generalized anxiety disorder. This study addresses these issues in adults with the disorder. Forty-six unmedicated individuals with generalized anxiety disorder and 32 healthy comparison subjects group-matched on IQ, gender, and age performed a passive avoidance task while undergoing functional MRI. Data analyses were performed using a computational modeling approach. Behaviorally, individuals with generalized anxiety disorder showed impaired reinforcement-based decision making. Imaging results revealed that during feedback, individuals with generalized anxiety disorder relative to healthy subjects showed a reduced correlation between PE and activity within the ventromedial prefrontal cortex, ventral striatum, and other structures implicated in decision making. In addition, individuals with generalized anxiety disorder relative to healthy participants showed a reduced correlation between punishment PEs, but not reward PEs, and activity within the left and right lentiform nucleus/putamen. This is the first study to identify computational impairments during decision making in generalized anxiety disorder. PE signaling is significantly disrupted in individuals with the disorder and may lead to their decision-making deficits and excessive worry about everyday problems by disrupting the online updating ("reality check") of the current relationship between the expected values of current response options and the actual received rewards and punishments.
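    The prediction error at the heart of such computational modeling approaches is usually a delta-rule quantity: the received minus the expected reinforcement, used to update the expectation. A minimal sketch (the learning rate, reward sequence, and function name are illustrative assumptions, not the study's fitted model):

```python
def rescorla_wagner(rewards, alpha=0.5, v0=0.0):
    """Delta-rule learning: expected value V is nudged by each prediction error."""
    v, pes = v0, []
    for r in rewards:
        pe = r - v          # prediction error: received minus expected
        pes.append(pe)
        v += alpha * pe     # update expectation toward the outcome
    return v, pes

v, pes = rescorla_wagner([1.0, 1.0, 1.0])
```

    Model-based fMRI analyses of the kind described above correlate trial-by-trial PE sequences like `pes` with the BOLD signal.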

  10. Prediction of rainfall intensity measurement errors using commercial microwave communication links

    Directory of Open Access Journals (Sweden)

    A. Zinevich

    2010-10-01

    Commercial microwave radio links forming cellular communication networks are known to be a valuable instrument for measuring near-surface rainfall. However, operational communication links are more uncertain than dedicated installations, since their geometry and frequencies are optimized for high communication performance rather than for observing rainfall. Quantifying the uncertainties of measurements that are non-optimal in the first place is essential to ensure the usability of the data.

    In this work we address the modeling of instrumental impairments, i.e. signal variability due to antenna wetting, baseline attenuation uncertainty and digital quantization, as well as environmental ones, i.e. variability of the drop size distribution along a link, affecting the accuracy of path-averaged rainfall measurement, and spatial variability of rainfall in the link's neighborhood, affecting the accuracy of rainfall estimation away from the link path. Expressions for the root mean squared error (RMSE) of the estimates of path-averaged and point rainfall have been derived. To verify the RMSE expressions quantitatively, path-averaged measurements from 21 operational communication links in 12 different locations have been compared to records of five nearby rain gauges over three rainstorm events.

    The experiments show that the prediction accuracy is above 90% for temporal accumulations of less than 30 min and decreases for longer accumulation intervals. Spatial variability in the vicinity of the link, baseline attenuation uncertainty and, possibly, suboptimality of the wet antenna attenuation model are the major sources of link-gauge discrepancies. In addition, the dependence of the optimal coefficients of a conventional wet antenna attenuation model on spatial rainfall variability and, accordingly, on link length has been shown.

    The expressions for RMSE of the path-averaged rainfall estimates can be useful for integration of measurements from multiple

  11. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  12. Offline modeling for product quality prediction of mineral processing using modeling error PDF shaping and entropy minimization.

    Science.gov (United States)

    Ding, Jinliang; Chai, Tianyou; Wang, Hong

    2011-03-01

    This paper presents a novel offline modeling approach for product quality prediction in mineral processing, which consists of a number of unit processes in series. The prediction of the product quality of the whole mineral process (i.e., the mixed concentrate grade) plays an important role, and the establishment of its predictive model is a key issue for plantwide optimization. For this purpose, a hybrid modeling approach for mixed concentrate grade prediction is proposed, which consists of a linear model and a nonlinear model. The least-squares support vector machine is adopted to establish the nonlinear model. The inputs of the predictive model are the performance indices of each unit process, while the output is the mixed concentrate grade. In this paper, model parameter selection is transformed into the shape control of the probability density function (PDF) of the modeling error. In this context, both PDF-control-based and minimum-entropy-based model parameter selection approaches are proposed. Indeed, this is the first time that the PDF shape control idea has been used in system modeling, where the key idea is to tune the model parameters so that either the modeling error PDF is controlled to follow a target PDF or the modeling error entropy is minimized. Experimental results using real plant data and a comparison of the two approaches are discussed. The results show the effectiveness of the proposed approaches.

  13. Early error detection predicted by reduced pre-response control process: an ERP topographic mapping study.

    Science.gov (United States)

    Pourtois, Gilles

    2011-01-01

    Advanced ERP topographic mapping techniques were used to study error monitoring functions in human adult participants, and test whether proactive attentional effects during the pre-response time period could later influence early error detection mechanisms (as measured by the ERN component) or not. Participants performed a speeded go/nogo task, and made a substantial number of false alarms that did not differ from correct hits as a function of behavioral speed or actual motor response. While errors clearly elicited an ERN component generated within the dACC following the onset of these incorrect responses, I also found that correct hits were associated with a different sequence of topographic events during the pre-response baseline time-period, relative to errors. A main topographic transition from occipital to posterior parietal regions (including primarily the precuneus) was evidenced for correct hits ~170-150 ms before the response, whereas this topographic change was markedly reduced for errors. The same topographic transition was found for correct hits that were eventually performed slower than either errors or fast (correct) hits, confirming the involvement of this distinctive posterior parietal activity in top-down attentional control rather than motor preparation. Control analyses further ensured that this pre-response topographic effect was not related to differences in stimulus processing. Furthermore, I found a reliable association between the magnitude of the ERN following errors and the duration of this differential precuneus activity during the pre-response baseline, suggesting a functional link between an anticipatory attentional control component subserved by the precuneus and early error detection mechanisms within the dACC. These results suggest reciprocal links between proactive attention control and decision making processes during error monitoring.

  14. An ENSO-Forecast Independent Statistical Model for the Prediction of Annual Atlantic Tropical Cyclone Frequency in April

    Directory of Open Access Journals (Sweden)

    Kenny Xie

    2014-01-01

    Statistical models for preseason prediction of annual Atlantic tropical cyclone (TC) and hurricane counts generally include El Niño/Southern Oscillation (ENSO) forecasts as a predictor. As a result, the predictions from such models are often contaminated by the errors in ENSO forecasts. In this study, it is found that the latent heat flux (LHF) over the Eastern Tropical Pacific (ETP), defined as the region 0°–5°N, 115°–125°W, in spring is negatively correlated with the annual Atlantic TC and hurricane counts. By using stepwise backward elimination regression, it is further shown that the March value of ETP LHF is a better predictor than the spring or summer ENSO index for Atlantic TC counts. Leave-one-out cross-validation indicates that the annual Atlantic TC counts predicted by this ENSO-independent statistical model show a remarkable correlation with the actual TC counts (R=0.72; P value <0.01). For Atlantic hurricanes, the predictions using March ETP LHF and summer (July–September) ENSO indices show only minor differences except in moderate to strong El Niño years. Thus, March ETP LHF is an excellent predictor for seasonal Atlantic TC prediction and a viable alternative to using an ENSO index for Atlantic hurricane prediction.
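    The leave-one-out cross-validation used above to score the TC-count model can be sketched for a generic single-predictor least-squares model; the helper names and toy data are illustrative assumptions, not the study's actual predictors:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a + b * x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loo_cv_mse(xs, ys):
    """Mean squared leave-one-out prediction error of the linear model."""
    errs = []
    for i in range(len(xs)):
        # Refit on all points except i, then predict the held-out point.
        a, b = fit_line(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        errs.append((ys[i] - (a + b * xs[i])) ** 2)
    return sum(errs) / len(errs)

mse = loo_cv_mse([1.0, 2.0, 3.0, 4.0, 5.0], [2.1, 3.9, 6.2, 8.0, 9.9])
```

    Each observation is predicted by a model that never saw it, so the score approximates true out-of-sample error rather than goodness of fit.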

  15. Errors in medication history at hospital admission: prevalence and predicting factors.

    Science.gov (United States)

    Hellström, Lina M; Bondesson, Åsa; Höglund, Peter; Eriksson, Tommy

    2012-04-03

    An accurate medication list at hospital admission is essential for the evaluation and further treatment of patients. The objective of this study was to describe the frequency, type and predictors of errors in medication history, and to evaluate the extent to which standard care corrects these errors. A descriptive study was carried out in two medical wards in a Swedish hospital using Lund Integrated Medicines Management (LIMM)-based medication reconciliation. A clinical pharmacist identified each patient's most accurate pre-admission medication list by conducting a medication reconciliation process shortly after admission. This list was then compared with the patient's medication list in the hospital medical records. Addition or withdrawal of a drug or changes to the dose or dosage form in the hospital medication list were considered medication discrepancies. Medication discrepancies for which no clinical reason could be identified (unintentional changes) were considered medication history errors. The final study population comprised 670 of 818 eligible patients. At least one medication history error was identified by pharmacists conducting medication reconciliations for 313 of these patients (47%; 95% CI 43-51%). The most common medication error was an omitted drug, followed by a wrong dose. Multivariate logistic regression analysis showed that a higher number of drugs at admission (odds ratio [OR] per 1 drug increase = 1.10; 95% CI 1.06-1.14; p medication history errors at admission. The results further indicated that standard care by non-pharmacist ward staff had partly corrected the errors in affected patients by four days after admission, but a considerable proportion of the errors made in the initial medication history at admission remained undetected by standard care (OR for medication errors detected by pharmacists' medication reconciliation carried out on days 4-11 compared to days 0-1 = 0.52; 95% CI 0.30-0.91; p=0.021). Clinical pharmacists conducting

  16. Development of a model to predict partition coefficient of organic pollutants in cloud point extraction process.

    Science.gov (United States)

    Shahmirani, Samareh; Farahani, Ebrahim Vasheghani; Ghasemi, Jahanbakhsh

    2006-01-01

    A quantitative structure-property relationship (QSPR) study has been performed to establish a model relating structural descriptors of 45 organic compounds to their partition coefficients in water-hexadecylpyridinium chloride (CPC) micelles at 298 K using partial least squares (PLS). A total of 510 structural descriptors from six different categories were calculated with the Dragon software. Descriptors with mutual pair correlations above 0.9, or with correlations below 0.1 with the dependent variable, were excluded at an early stage of preprocessing of the structural data matrix. The data set was randomly divided into two groups: a training set (40 molecules) and a test set (5 molecules). In the final model, the 50 structural descriptors most effective on the partition coefficient were retained for model building by the PLS calibration method. The optimum number of latent variables, 5, which spanned 80% of the original variation of the data matrix, was selected using the leave-one-out cross-validation method. The prediction ability of the model was tested by predicting the partition coefficients of five unknown compounds; the mean relative error of prediction was 3.6%. Outliers were treated using leverage and score plots of the first three principal components. The efficiency of the new model was compared with the Abraham model, and the proposed model was found to have better prediction ability.
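    The correlation-based descriptor pre-screening described above (drop descriptors weakly correlated with the response, then drop one of each highly collinear pair) can be sketched as follows; the thresholds, helper names, and toy data are illustrative assumptions, not part of the Dragon or PLS software:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def filter_descriptors(X, y, pair_cut=0.9, target_cut=0.1):
    """Drop descriptors weakly correlated with y, then one of each collinear pair."""
    cols = {j: [row[j] for row in X] for j in range(len(X[0]))}
    keep = [j for j in cols if abs(pearson(cols[j], y)) >= target_cut]
    selected = []
    for j in keep:
        if all(abs(pearson(cols[j], cols[k])) < pair_cut for k in selected):
            selected.append(j)
    return selected

y = [1.0, 2.0, 3.0, 4.0, 5.0]
X = [[1.0, 2.0, 1.0],     # column 0 tracks y; column 1 is 2 * column 0;
     [2.0, 4.0, -1.0],    # column 2 is uncorrelated noise
     [3.0, 6.0, 1.0],
     [4.0, 8.0, -1.0],
     [5.0, 10.0, 1.0]]
selected = filter_descriptors(X, y)
```

    Only the first column survives: the noise column fails the response-correlation cut and the duplicated column fails the pairwise-collinearity cut.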

  17. Design of a predictive targeting error simulator for MRI-guided prostate biopsy.

    Science.gov (United States)

    Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor

    2010-02-23

    Multi-parametric MRI is a new imaging modality superior in quality to Ultrasound (US) which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating this procedure, we introduce a fully interactive and customizable simulator which determines the resulting targeting errors of simulated registrations between prostate volumes given user-provided parameters for organ deformation, segmentation, and targeting. We present the workflow executed by the simulator in detail and discuss the parameters involved. We also present a segmentation error introduction algorithm, based on polar curves and natural cubic spline interpolation, which introduces statistically realistic contouring errors. One simulation, including all I/O and preparation for rendering, takes approximately 1 minute and 40 seconds to complete on a system with 3 GB of RAM and four Intel Core 2 Quad CPUs each with a speed of 2.40 GHz. Preliminary results of our simulation suggest the maximum tolerable segmentation error given the presence of a 5.0 mm wide small tumor is between 4-5 mm. We intend to validate these results via clinical trials as part of our ongoing work.

  18. Unscented predictive variable structure filter for satellite attitude estimation with model errors when using low precision sensors

    Science.gov (United States)

    Cao, Lu; Li, Hengnian

    2016-10-01

    For the satellite attitude estimation problem, serious model errors always exist and hinder the estimation performance of the Attitude Determination and Control System (ADCS), especially for a small satellite with low precision sensors. To deal with this problem, a new algorithm for attitude estimation, referred to as the unscented predictive variable structure filter (UPVSF), is presented. This strategy is based on the variable structure control concept and the unscented transform (UT) sampling method. It can be implemented in real time and estimates the model errors on-line, in order to improve the state estimation precision. In addition, the model errors in this filter are not restricted to Gaussian noises; the filter therefore has the advantage of handling various kinds of model errors or noises. It is anticipated that the UT sampling strategy can further enhance the robustness and accuracy of the novel UPVSF. Numerical simulations show that the proposed UPVSF is more effective and robust in dealing with model errors and low precision sensors than the traditional unscented Kalman filter (UKF).

  19. A neural reward prediction error revealed by a meta-analysis of ERPs using great grand averages.

    Science.gov (United States)

    Sambrook, Thomas D; Goslin, Jeremy

    2015-01-01

    Economic approaches to decision making assume that people attach values to prospective goods and act to maximize their obtained value. Neuroeconomics strives to observe these values directly in the brain. A widely used valuation term in formal learning and decision-making models is the reward prediction error: the value of an outcome relative to its expected value. An influential theory (Holroyd & Coles, 2002) claims that an electrophysiological component, the feedback-related negativity (FRN), codes a reward prediction error in the human brain. Such a component should be sensitive to both the prior likelihood of reward and its magnitude on receipt. A number of studies have found the FRN to be insensitive to reward magnitude, thus questioning the Holroyd and Coles account. However, because of marked inconsistencies in how the FRN is measured, a meaningful synthesis of this evidence is highly problematic. We conducted a meta-analysis of the FRN's response to both reward magnitude and likelihood using a novel method in which published effect sizes were disregarded in favor of direct measurement of the published waveforms themselves, with these waveforms then averaged to produce "great grand averages." Under this standardized measure, the meta-analysis revealed strong effects of magnitude and likelihood on the FRN, consistent with it encoding a reward prediction error. In addition, it revealed strong main effects of reward magnitude and likelihood across much of the waveform, indicating sensitivity to unsigned prediction errors or "salience." The great grand average technique is proposed as a general method for the meta-analysis of event-related potentials (ERPs).
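    The "great grand average" is, at its core, a point-wise mean across the digitized published waveforms; a minimal sketch, assuming the waveforms have already been resampled to a common time base and baseline-aligned (the data below are invented for illustration):

```python
def great_grand_average(waveforms):
    """Point-wise mean across equally sampled, time-locked waveforms."""
    n = len(waveforms)
    return [sum(w[t] for w in waveforms) / n
            for t in range(len(waveforms[0]))]

# Two toy "published" waveforms digitized at the same four latencies (µV).
gga = great_grand_average([[0.0, -2.0, -6.0, -3.0],
                           [0.0, -4.0, -8.0, -5.0]])
```

    Averaging the waveforms themselves, rather than pooling heterogeneous published effect sizes, is what lets a single measurement convention be applied across studies.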

  20. A new strategy to improve the predictive ability of the local lazy regression and its application to the QSAR study of melanin-concentrating hormone receptor 1 antagonists.

    Science.gov (United States)

    Li, Jiazhong; Li, Shuyan; Lei, Beilei; Liu, Huanxiang; Yao, Xiaojun; Liu, Mancang; Gramatica, Paola

    2010-04-15

    In quantitative structure-activity relationship (QSAR) studies, local lazy regression (LLR) can predict the activity of a query molecule by using the information in its local neighborhood, without the need to build QSAR models a priori. When a prediction is required for a query compound, a set of local models including different numbers of nearest neighbors is identified. The leave-one-out cross-validation (LOO-CV) procedure is usually used to assess the prediction ability of each model, and the model giving the lowest LOO-CV error or highest LOO-CV correlation coefficient is chosen as the best model. However, it has been proved that a good statistical value from LOO cross-validation is a necessary, but not sufficient, condition for a model to have high predictive power. In this work, a new strategy is proposed to improve the predictive ability of LLR models and to assess the accuracy of a query prediction. The bandwidth of the k-neighbor value for LLR is optimized by considering the predictive ability of the local models on an external validation set. This approach was applied to the QSAR study of a series of thienopyrimidinone antagonists of melanin-concentrating hormone receptor 1. The results obtained with the new strategy show a clear improvement over the commonly used LOO-CV LLR methods and the traditional global linear model.
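    The neighborhood-size selection at the core of this strategy can be sketched as below, with a plain k-nearest-neighbor mean standing in for the local regression fit and the bandwidth chosen on an external validation set rather than by LOO-CV alone; all names and data are illustrative assumptions:

```python
def knn_predict(xs, ys, q, k):
    """Predict at query q as the mean response of its k nearest training points."""
    order = sorted(range(len(xs)), key=lambda i: abs(xs[i] - q))
    return sum(ys[i] for i in order[:k]) / k

def choose_k(xs, ys, xs_val, ys_val, k_grid):
    """Pick the neighborhood size minimizing squared error on a validation set."""
    def val_err(k):
        return sum((yv - knn_predict(xs, ys, xv, k)) ** 2
                   for xv, yv in zip(xs_val, ys_val))
    return min(k_grid, key=val_err)

xs = [float(i) for i in range(10)]
ys = [x ** 2 for x in xs]
best_k = choose_k(xs, ys, xs_val=[1.4, 4.6], ys_val=[1.96, 21.16], k_grid=[1, 3, 5])
```

    On this smooth, noise-free toy curve the smallest neighborhood wins; with noisy responses the validation set would favor a larger k.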

  1. Disrupted expected value and prediction error signaling in youths with disruptive behavior disorders during a passive avoidance task.

    Science.gov (United States)

    White, Stuart F; Pope, Kayla; Sinclair, Stephen; Fowler, Katherine A; Brislin, Sarah J; Williams, W Craig; Pine, Daniel S; Blair, R James R

    2013-03-01

    Youths with disruptive behavior disorders, including conduct disorder and oppositional defiant disorder, show major impairments in reinforcement-based decision making. However, the neural basis of these difficulties remains poorly understood. This partly reflects previous failures to differentiate responses during decision making and feedback processing and to take advantage of computational model-based functional MRI (fMRI). Participants were 38 community youths ages 10-18 (20 had disruptive behavior disorders, and 18 were healthy comparison youths). Model-based fMRI was used to assess the computational processes involved in decision making and feedback processing in the ventromedial prefrontal cortex, insula, and caudate. Youths with disruptive behavior disorders showed reduced use of expected value information within the ventromedial prefrontal cortex when choosing to respond and within the anterior insula when choosing not to respond. In addition, they showed reduced responsiveness to positive prediction errors and increased responsiveness to negative prediction errors within the caudate during feedback. This study is the first to determine impairments in the use of expected value within the ventromedial prefrontal cortex and insula during choice and in prediction error-signaling within the caudate during feedback in youths with disruptive behavior disorders.

  2. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    Science.gov (United States)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
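    The FTLE underlying these fields measures the largest exponential separation rate of nearby trajectories over a finite time horizon. A self-contained sketch for an analytic saddle flow (the flow field, Euler integrator, step sizes, and function names are illustrative assumptions, not the study's atmospheric model):

```python
import math

def flow_map(x, y, T, dt=0.01):
    """Advect a particle in the steady saddle flow dx/dt = x, dy/dt = -y (Euler)."""
    for _ in range(round(T / dt)):
        x, y = x + dt * x, y - dt * y
    return x, y

def ftle(x, y, T, eps=1e-4):
    """FTLE from finite differences of the flow map around (x, y)."""
    xr, _ = flow_map(x + eps, y, T)
    xl, _ = flow_map(x - eps, y, T)
    _, yu = flow_map(x, y + eps, T)
    _, yd = flow_map(x, y - eps, T)
    # The deformation gradient is diagonal for this flow; its largest singular
    # value gives the maximal stretching of an infinitesimal perturbation.
    stretch = max(abs(xr - xl), abs(yu - yd)) / (2 * eps)
    return math.log(stretch) / T

lam = ftle(0.0, 0.0, T=2.0)   # theoretical value for this flow is 1.0
```

    Ridges of high FTLE computed this way over a grid of initial conditions are the repelling LCS candidates discussed above; integrating backward in time yields the attracting ones.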

  3. The neurobiology of schizotypy: fronto-striatal prediction error signal correlates with delusion-like beliefs in healthy people.

    Science.gov (United States)

    Corlett, P R; Fletcher, P C

    2012-12-01

    Healthy people sometimes report experiences and beliefs that are strikingly similar to the symptoms of psychosis in their bizarreness and the apparent lack of evidence supporting them. An important question is whether this represents merely a superficial resemblance or whether there is a genuine and deep similarity indicating, as some have suggested, a continuum between odd but healthy beliefs and the symptoms of psychotic illness. We sought to shed light on this question by determining whether the neural marker for prediction error, previously shown to be altered in early psychosis, is comparably altered in healthy individuals reporting schizotypal experiences and beliefs. We showed that non-clinical schizotypal experiences were significantly correlated with aberrant frontal and striatal prediction error signal. This correlation related to the distress associated with the beliefs. Given our previous observations that patients with first episode psychosis show altered neural responses to prediction error and that this alteration, in turn, relates to the severity of their delusional ideation, our results provide novel evidence in support of the view that schizotypy relates to psychosis at more than just a superficial descriptive level. However, the picture is a complex one in which the experiences, though associated with altered striatal responding, may provoke distress but may nonetheless be explained away, while an additional alteration in frontal cortical responding may allow the beliefs to become more delusion-like: intrusive and distressing.

  4. Predicting Heats of Explosion of Nitroaromatic Compounds through NBO Charges and 15N NMR Chemical Shifts of Nitro Groups

    Directory of Open Access Journals (Sweden)

    Ricardo Infante-Castillo

    2012-01-01

    This work presents a new quantitative model to predict the heat of explosion of nitroaromatic compounds using the natural bond orbital (NBO) charge and 15N NMR chemical shifts of the nitro groups (15NNitro) as structural parameters. The values of the heat of explosion predicted for 21 nitroaromatic compounds using the model described here were compared with experimental data. The prediction ability of the model was assessed by the leave-one-out cross-validation method. The cross-validation results show that the model is significant and stable and that the predicted accuracy is within 0.146 MJ kg−1, with an overall root mean squared error of prediction (RMSEP) below 0.183 MJ kg−1. Strong correlations were observed between the heat of explosion and the charges (R2 = 0.9533) and 15N NMR chemical shifts (R2 = 0.9531) of the studied compounds. In addition, the dependence of the heat of explosion on the presence of activating or deactivating groups of nitroaromatic explosives was analyzed. All calculations, including optimizations, NBO charges, and 15NNitro NMR chemical shifts analyses, were performed using density functional theory (DFT) and a 6-311+G(2d,p) basis set. Based on these results, this practical quantitative model can be used as a tool in the design and development of highly energetic materials (HEM) based on nitroaromatic compounds.
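    The leave-one-out RMSEP quoted above is obtained by refitting the model n times, each time predicting the single held-out compound. A minimal sketch with synthetic stand-in descriptors (the paper's 21-compound data set is not reproduced here; all numbers are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical descriptors for 21 compounds: an NBO charge and a
# 15N chemical shift (synthetic values, for illustration only).
n = 21
X = np.column_stack([rng.normal(-0.4, 0.05, n), rng.normal(-12.0, 3.0, n)])
y = 4.5 + 3.0 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.1, n)  # "MJ/kg"

def loo_rmsep(X, y):
    """Leave-one-out root mean squared error of prediction for a
    multiple linear regression model (intercept included)."""
    n = len(y)
    resid = np.empty(n)
    for i in range(n):
        keep = np.arange(n) != i
        A = np.column_stack([np.ones(keep.sum()), X[keep]])
        coef, *_ = np.linalg.lstsq(A, y[keep], rcond=None)
        resid[i] = y[i] - np.concatenate([[1.0], X[i]]) @ coef
    return np.sqrt(np.mean(resid**2))

print(round(loo_rmsep(X, y), 3))
```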

  5. Report: Low Frequency Predictive Skill Despite Structural Instability and Model Error

    Science.gov (United States)

    2013-09-30

    Andrew J. Majda, New York University, Courant Institute of Mathematical Sciences, 251 Mercer Street, New York, NY

  6. Prior knowledge is more predictive of error correction than subjective confidence.

    Science.gov (United States)

    Sitzman, Danielle M; Rhodes, Matthew G; Tauber, Sarah K

    2014-01-01

    Previous research has demonstrated that, when given feedback, participants are more likely to correct confidently-held errors, as compared with errors held with lower levels of confidence, a finding termed the hypercorrection effect. Accounts of hypercorrection suggest that confidence modifies attention to feedback; alternatively, hypercorrection may reflect prior domain knowledge, with confidence ratings simply correlated with this prior knowledge. In the present experiments, we attempted to adjudicate among these explanations of the hypercorrection effect. In Experiments 1a and 1b, participants answered general knowledge questions, rated their confidence, and received feedback either immediately after rating their confidence or after a delay of several minutes. Although memory for confidence judgments should have been poorer at a delay, the hypercorrection effect was equivalent for both feedback timings. Experiment 2 showed that hypercorrection remained unchanged even when the delay to feedback was increased. In addition, measures of recall for prior confidence judgments showed that memory for confidence was indeed poorer after a delay. Experiment 3 directly compared estimates of domain knowledge with confidence ratings, showing that such prior knowledge was related to error correction, whereas the unique role of confidence was small. Overall, our results suggest that prior knowledge likely plays a primary role in error correction, while confidence may play a small role or merely serve as a proxy for prior knowledge.

  7. A true order recursive algorithm for two-dimensional mean squared error linear prediction and filtering

    NARCIS (Netherlands)

    Glentis, George-Othon; Slump, Cornelis H.; Hermann, Otto E.

    2000-01-01

    In this paper a novel algorithm is presented for the efficient two-dimensional (2-D), mean squared error (MSE), FIR filtering and system identification. Filter masks of general boundaries are allowed. Efficient order updating recursions are developed by exploiting the spatial shift invariance

  8. Analysis of the Influence of EOP Prediction Error on Autonomous Orbit Determination

    Institute of Scientific and Technical Information of China (English)

    张卫星; 刘万科; 龚晓颖

    2011-01-01

    In autonomous orbit determination, Earth orientation parameters (EOP) uploaded by the ground station are needed to transform between the Conventional Terrestrial System and the Geocentric Celestial Reference System. However, once the satellite navigation system enters autonomous navigation mode, the ground station can no longer upload the latest EOP, and the system must rely on long-term EOP predictions. EOP prediction errors affect the ephemeris produced by the autonomous navigation system and ultimately the positioning accuracy of users. The behavior of EOP prediction errors, and their influence on the autonomously determined orbit ephemeris and on user positioning accuracy, are discussed and analyzed. The results show that EOP prediction error has almost no influence on the radial orbit error or the satellite clock error in long-term (110-day) autonomous orbit determination. It mainly affects the in-plane errors (along-track and cross-track) and the user range error (URE), and these errors exhibit a certain periodicity. In pseudorange positioning, the resulting errors appear mainly in the north-south and east-west directions.

  9. Prediction of human clearance based on animal data and molecular properties.

    Science.gov (United States)

    Huang, Wenkang; Geng, Lv; Deng, Rong; Lu, Shaoyong; Ma, Guangli; Yu, Jianxiu; Zhang, Jian; Liu, Wei; Hou, Tingjun; Lu, Xuefeng

    2015-11-01

    Human clearance is often predicted prior to clinical study from in vivo preclinical data by virtue of interspecies allometric scaling methods. The aims of this study were to determine the important molecular descriptors for the extrapolation of animal data to human clearance and further to build a model to predict human clearance by combination of animal data and the selected molecular descriptors. These important molecular descriptors selected by genetic algorithm (GA) were from five classes: quantum mechanical, shadow indices, E-state keys, molecular properties, and molecular property counts. Although the data set contained many outliers determined by the conventional Mahmood method, the variation of most outliers was reduced significantly by our final support vector machine (SVM) model. The values of cross-validated correlation coefficient and root-mean-squared error (RMSE) for leave-one-out cross-validation (LOOCV) of the final SVM model were 0.783 and 0.305, respectively. Meanwhile, the reliability and consistency of the final model were also validated by an external test set. In conclusion, the SVM model based on the molecular descriptors selected by GA and animal data achieved better prediction performance than the Mahmood method. This approach can be applied as an improved interspecies allometric scaling method in drug research and development.
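    The conventional Mahmood method referenced above is, at its core, simple allometry: animal clearance is regressed on body weight on log-log axes and the fit is extrapolated to a human body weight. A minimal sketch (species, weights, and clearance values are invented for illustration, not taken from the study):

```python
import numpy as np

# Hypothetical preclinical clearance data (mL/min) vs body weight (kg).
weights = np.array([0.02, 0.25, 2.5, 10.0])    # mouse, rat, rabbit, dog
clearance = np.array([1.2, 9.0, 55.0, 170.0])

# Simple allometry: log CL = log a + b log W, fitted by least squares.
b, log_a = np.polyfit(np.log(weights), np.log(clearance), 1)
cl_human = np.exp(log_a) * 70.0 ** b    # extrapolate to a 70 kg human

print(b, cl_human)
```

    The exponent b typically falls near the classical allometric range of 0.6-1.0; the paper's contribution is to correct such extrapolations with GA-selected molecular descriptors in an SVM model.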

  10. Design of a predictive targeting error simulator for MRI-guided prostate biopsy

    OpenAIRE

    Avni, Shachar; Vikal, Siddharth; Fichtinger, Gabor

    2010-01-01

    Multi-parametric MRI is a new imaging modality superior in quality to Ultrasound (US) which is currently used in standard prostate biopsy procedures. Surface-based registration of the pre-operative and intra-operative prostate volumes is a simple alternative to side-step the challenges involved with deformable registration. However, segmentation errors inevitably introduced during prostate contouring spoil the registration and biopsy targeting accuracies. For the crucial purpose of validating...

  11. Prediction errors in learning drug response from gene expression data - influence of labeling, sample size, and machine learning algorithm.

    Science.gov (United States)

    Bayer, Immanuel; Groth, Philip; Schneckener, Sebastian

    2013-01-01

    Model-based prediction is dependent on many choices ranging from the sample collection and prediction endpoint to the choice of algorithm and its parameters. Here we studied the effects of such choices, exemplified by predicting sensitivity (as IC50) of cancer cell lines towards a variety of compounds. For this, we used three independent sample collections and applied several machine learning algorithms for predicting a variety of endpoints for drug response. We compared all possible models for combinations of sample collections, algorithm, drug, and labeling to an identically generated null model. The predictability of treatment effects varies among compounds, i.e. response could be predicted for some but not for all. The choice of sample collection plays a major role towards lowering the prediction error, as does sample size. However, we found that no algorithm was able to consistently outperform the others, and there was no significant difference between regression and two- or three-class predictors in this experimental setting. These results indicate that response-modeling projects should direct efforts mainly towards sample collection and data quality, rather than method adjustment.

  12. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-01-01

    Background Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. Objectives We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Methods Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Results Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Conclusions Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially

  13. Spatial measurement error and correction by spatial SIMEX in linear regression models when using predicted air pollution exposures.

    Science.gov (United States)

    Alexeeff, Stacey E; Carroll, Raymond J; Coull, Brent

    2016-04-01

    Spatial modeling of air pollution exposures is widespread in air pollution epidemiology research as a way to improve exposure assessment. However, there are key sources of exposure model uncertainty when air pollution is modeled, including estimation error and model misspecification. We examine the use of predicted air pollution levels in linear health effect models under a measurement error framework. For the prediction of air pollution exposures, we consider a universal Kriging framework, which may include land-use regression terms in the mean function and a spatial covariance structure for the residuals. We derive the bias induced by estimation error and by model misspecification in the exposure model, and we find that a misspecified exposure model can induce asymptotic bias in the effect estimate of air pollution on health. We propose a new spatial simulation extrapolation (SIMEX) procedure, and we demonstrate that the procedure has good performance in correcting this asymptotic bias. We illustrate spatial SIMEX in a study of air pollution and birthweight in Massachusetts.
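    The SIMEX idea generalized here can be seen in its classical, nonspatial form: repeatedly add extra measurement error scaled by a factor lambda, watch the naive slope estimate shrink further, fit a smooth trend in lambda, and extrapolate back to lambda = -1 (no measurement error). A sketch on simulated data (all quantities illustrative, not the paper's spatial procedure):

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta, sigma_u = 2000, 2.0, 0.8
x = rng.normal(0, 1, n)                  # true exposure
w = x + rng.normal(0, sigma_u, n)        # error-prone measured exposure
y = beta * x + rng.normal(0, 0.5, n)     # health outcome

def slope(w, y):
    return np.polyfit(w, y, 1)[0]

# Simulation step: inflate measurement error by factor sqrt(lambda).
lams = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
est = []
for lam in lams:
    sims = [slope(w + np.sqrt(lam) * sigma_u * rng.normal(0, 1, n), y)
            for _ in range(50)]
    est.append(np.mean(sims))

# Extrapolation step: quadratic in lambda, evaluated at lambda = -1.
coef = np.polyfit(lams, est, 2)
simex = np.polyval(coef, -1.0)
naive = slope(w, y)
print(naive, simex)
```

    With classical error and a unit-variance exposure, the naive slope is attenuated by roughly 1/(1 + sigma_u^2); the quadratic extrapolation recovers most, though not all, of the attenuation.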

  14. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

    Background In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy to present only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and suggest alternative approaches for properly reporting classification accuracy.
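    The size of this selection bias is easy to reproduce: even classifiers that guess at random will include some whose apparent error rate is well below 50% once only the best of many is reported. A small simulation in the spirit of the permuted-label experiments (sample size and dataset count are illustrative assumptions; the 124 classifiers match the abstract):

```python
import numpy as np

rng = np.random.default_rng(7)
n_samples, n_classifiers, n_datasets = 60, 124, 200

# Uninformative setting: every "classifier" guesses labels at random,
# so each one's true error rate is 50%. Reporting only the minimum
# error over 124 such classifiers is strongly optimistic.
min_errors = []
for _ in range(n_datasets):
    labels = rng.integers(0, 2, n_samples)
    preds = rng.integers(0, 2, (n_classifiers, n_samples))
    errors = (preds != labels).mean(axis=1)
    min_errors.append(errors.min())

print(np.mean(min_errors))
```

    The mean minimal error rate lands far below the 50% true error of every individual classifier, of the same order as the 31-41% figures reported above.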

  15. Prediction of scattering cross-section reductions due to plate orthogonality errors in trihedral radar reflectors

    Science.gov (United States)

    Keen, K. M.

    1983-02-01

    A method is developed for the determination of the reduction in scattering cross-section levels due to nonorthogonal alignment of the plates in trihedral radar corner reflectors. This method is based on the technique for finding the effective error at any direction of incidence. The method can be applied to any regular reflector shape and is accurate for any incident ray direction in the reflector main beam zone. It is determined that this method gives good agreement with experimental results for a wide range of reflector sizes, although the analysis is not exact.

  16. The Predictive Value of Subjective Labour Supply Data: A Dynamic Panel Data Model with Measurement Error

    OpenAIRE

    Euwals, Rob

    2002-01-01

    This paper tests the predictive value of subjective labour supply data for adjustments in working hours over time. The idea is that if subjective labour supply data help to predict next year's working hours, such data must contain at least some information on individual labour supply preferences. This informational content can be crucial to identify models of labour supply. Furthermore, it can be crucial to investigate the need for, or, alternatively, the support for laws and collective agree...

  17. Predictions of the arrival time of Coronal Mass Ejections at 1AU: an analysis of the causes of errors

    Directory of Open Access Journals (Sweden)

    M. Owens

    2004-01-01

    Three existing models of Interplanetary Coronal Mass Ejection (ICME) transit between the Sun and the Earth are compared to coronagraph and in situ observations: all three models are found to perform with a similar level of accuracy (i.e. an average error between observed and predicted 1AU transit times of approximately 11 h). To improve long-term space weather prediction, factors influencing CME transit are investigated. Both the removal of the plane-of-sky projection (as suffered by coronagraph-derived speeds of Earth-directed CMEs) and the use of observed values of solar wind speed fail to significantly improve transit time prediction. However, a correlation is found to exist between the late/early arrival of an ICME and the width of the preceding sheath region, suggesting that the error is a geometrical effect that can only be removed by a more accurate determination of a CME trajectory and expansion. The correlation between magnetic field intensity and speed of ejecta at 1AU is also investigated. It is found to be weak in the body of the ICME, but strong in the sheath, if the upstream solar wind conditions are taken into account.

    Key words. Solar physics, astronomy and astrophysics (flares and mass ejections – Interplanetary physics (interplanetary magnetic fields; sources of the solar wind

  18. A simple solution for model comparison in bold imaging: the special case of reward prediction error and reward outcomes.

    Science.gov (United States)

    Erdeniz, Burak; Rohe, Tim; Done, John; Seidler, Rachael D

    2013-01-01

    Conventional neuroimaging techniques provide information about condition-related changes of the BOLD (blood-oxygen-level dependent) signal, indicating only where and when the underlying cognitive processes occur. Recently, with the help of a new approach called "model-based" functional neuroimaging (fMRI), researchers are able to visualize changes in the internal variables of a time varying learning process, such as the reward prediction error or the predicted reward value of a conditional stimulus. However, despite being extremely beneficial to the imaging community in understanding the neural correlates of decision variables, a model-based approach to brain imaging data is also methodologically challenging due to the multicollinearity problem in statistical analysis. There are multiple sources of multicollinearity in functional neuroimaging including investigations of closely related variables and/or experimental designs that do not account for this. The source of multicollinearity discussed in this paper occurs due to correlation between different subjective variables that are calculated very close in time. Here, we review methodological approaches to analyzing such data by discussing the special case of separating the reward prediction error signal from reward outcomes.
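    One standard diagnosis-and-remedy pair for the multicollinearity described above is to measure the correlation between the outcome and prediction-error regressors and, if desired, serially orthogonalize one against the other. A sketch with simulated regressors (the generating model below is an assumption for illustration, not the paper's design):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200
reward = rng.integers(0, 2, n_trials).astype(float)   # reward outcome (0/1)
expected = 0.7 * reward + 0.3 * rng.random(n_trials)  # correlated value signal
rpe = reward - expected                               # prediction-error regressor

# Collinearity diagnostic: correlation between the two parametric regressors.
r = np.corrcoef(reward, rpe)[0, 1]

# One common (and debated) remedy: serially orthogonalize the RPE
# regressor against the outcome regressor (Gram-Schmidt), assigning
# the shared variance to the outcome term.
def orthogonalize(a, b):
    """Remove from a its projection onto b (both mean-centered)."""
    a = a - a.mean()
    b = b - b.mean()
    return a - (a @ b) / (b @ b) * b

rpe_orth = orthogonalize(rpe, reward)
r_after = np.corrcoef(reward, rpe_orth)[0, 1]
print(r, r_after)
```

    Orthogonalization makes the shared-variance assignment explicit rather than eliminating it, which is exactly the kind of modeling choice the review weighs.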

  19. Efficient thermal error prediction in a machine tool using finite element analysis

    Science.gov (United States)

    Mian, Naeem S.; Fletcher, Simon; Longstaff, Andrew P.; Myers, Alan

    2011-08-01

    Thermally induced errors have a major significance on the positional accuracy of a machine tool. Heat generated during the machining process produces thermal gradients that flow through the machine structure causing linear and nonlinear thermal expansions and distortions of associated complex discrete structures, producing deformations that adversely affect structural stability. The heat passes through structural linkages and mechanical joints where interfacial parameters such as the roughness and form of the contacting surfaces affect the thermal resistance and thus the heat transfer coefficients. This paper presents a novel offline technique using finite element analysis (FEA) to simulate the effects of the major internal heat sources such as bearings, motors and belt drives of a small vertical milling machine (VMC) and the effects of ambient temperature pockets that build up during the machine operation. Simplified models of the machine were created offline using FEA software, and evaluated experimental results were applied for offline thermal behaviour simulation of the full machine structure. The FEA-simulated results are in close agreement with the experimental results, ranging from 65% to 90% for a variety of testing regimes, and revealed that a maximum error of 70 µm could be reduced to less than 10 µm.
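    The conduction mechanism described above can be illustrated, far more crudely than the authors' FEA models, with a one-dimensional explicit finite-difference scheme; the geometry, boundary temperatures, and material properties below are invented for illustration:

```python
import numpy as np

# Toy 1D conduction in a 0.5 m steel section: one end held at 40 °C
# (e.g. near a bearing heat source), the other at 20 °C ambient.
# Explicit FTCS scheme; values are illustrative, not from the paper.
alpha = 1.2e-5               # thermal diffusivity of steel, m^2/s
L, nx = 0.5, 51
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha     # stability requires dt <= dx^2 / (2 alpha)
T = np.full(nx, 20.0)
T[0] = 40.0                  # heated end (fixed)

for _ in range(20000):       # march toward steady state
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = 20.0             # ambient end (fixed)

print(T[nx // 2])
```

    At steady state the profile is linear between the two boundary temperatures, so the midpoint approaches 30 °C; the FEA models in the paper resolve the same physics over a full 3D machine structure with contact resistances.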

  20. Temporal and spatial localization of prediction-error signals in the visual brain.

    Science.gov (United States)

    Johnston, Patrick; Robinson, Jonathan; Kokkinakis, Athanasios; Ridgeway, Samuel; Simpson, Michael; Johnson, Sam; Kaufman, Jordy; Young, Andrew W

    2017-04-01

    It has been suggested that the brain pre-empts changes in the environment through generating predictions, although real-time electrophysiological evidence of prediction violations in the domain of visual perception remains elusive. In a series of experiments we showed participants sequences of images that followed a predictable implied sequence or whose final image violated the implied sequence. Through careful design we were able to use the same final image transitions across predictable and unpredictable conditions, ensuring that any differences in neural responses were due only to preceding context and not to the images themselves. EEG and MEG recordings showed that early (N170) and mid-latency (N300) visual evoked potentials were robustly modulated by images that violated the implied sequence across a range of types of image change (expression deformations, rigid rotations and visual field location). This modulation occurred irrespective of stimulus object category. Although the stimuli were static images, MEG source reconstruction of the early latency signal (N/M170) localized expectancy violation signals to brain areas associated with motion perception. Our findings suggest that the N/M170 can index mismatches between predicted and actual visual inputs in a system that predicts trajectories based on ongoing context. More generally we suggest that the N/M170 may reflect a "family" of brain signals generated across widespread regions of the visual brain indexing the resolution of top-down influences and incoming sensory data. This has important implications for understanding the N/M170 and investigating how the brain represents context to generate perceptual predictions.

  1. A Frequency-Domain Adaptive Filter (FDAF) Prediction Error Method (PEM) Framework for Double-Talk-Robust Acoustic Echo Cancellation

    DEFF Research Database (Denmark)

    Gil-Cacho, Jose M.; van Waterschoot, Toon; Moonen, Marc

    2014-01-01

    In this paper, we propose a new framework to tackle the double-talk (DT) problem in acoustic echo cancellation (AEC). It is based on a frequency-domain adaptive filter (FDAF) implementation of the so-called prediction error method adaptive filtering using row operations (PEM-AFROW), leading to the FDAF-PEM-AFROW algorithm. We show that FDAF-PEM-AFROW is by construction related to the best linear unbiased estimate (BLUE) of the echo path. We depart from this framework to show an improvement in performance with respect to other adaptive filters minimizing the BLUE criterion, namely the PEM...
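    The sketch below shows a plain time-domain NLMS echo canceller, a much simpler relative of the FDAF-PEM-AFROW algorithm (no frequency-domain implementation, no PEM prewhitening, no double-talk handling); the echo path and signal statistics are invented:

```python
import numpy as np

rng = np.random.default_rng(5)
N, L = 20000, 32
far_end = rng.normal(0.0, 1.0, N)                          # loudspeaker signal
h = rng.normal(0.0, 1.0, L) * np.exp(-0.2 * np.arange(L))  # toy echo path
mic = np.convolve(far_end, h)[:N] + 1e-3 * rng.normal(0.0, 1.0, N)

w = np.zeros(L)              # adaptive estimate of the echo path
mu, eps = 0.5, 1e-6
err = np.zeros(N)
for n in range(L, N):
    x = far_end[n - L + 1:n + 1][::-1]   # most recent L far-end samples
    e = mic[n] - w @ x                   # residual echo (error signal)
    w += mu * e * x / (x @ x + eps)      # NLMS update
    err[n] = e

early = np.mean(err[L:2000] ** 2)
late = np.mean(err[-2000:] ** 2)
print(early, late)
```

    The residual echo power drops by orders of magnitude as the filter converges; during double-talk, near-end speech would corrupt e and stall or misdirect this update, which is the problem the paper's framework addresses.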

  2. Predictability of large-scale atmospheric motions: Lyapunov exponents and error dynamics.

    Science.gov (United States)

    Vannitsem, Stéphane

    2017-03-01

    The deterministic equations describing the dynamics of the atmosphere (and of the climate system) are known to display the property of sensitivity to initial conditions. In the ergodic theory of chaos, this property is usually quantified by computing the Lyapunov exponents. In this review, these quantifiers computed in a hierarchy of atmospheric models (coupled or not to an ocean) are analyzed, together with their local counterparts known as the local or finite-time Lyapunov exponents. It is shown in particular that the variability of the local Lyapunov exponents (corresponding to the dominant Lyapunov exponent) decreases when the model resolution increases. The dynamics of (finite-amplitude) initial condition errors in these models is also reviewed, and in general found to display a complicated growth far from the asymptotic estimates provided by the Lyapunov exponents. The implications of these results for operational (high resolution) atmospheric and climate modelling are also discussed.
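    The dominant Lyapunov exponent that quantifies this sensitivity can be estimated by averaging the log of the local derivative along an orbit. A textbook sketch on the logistic map (a toy system, not an atmospheric model), whose exponent at r = 4 is known to be ln 2:

```python
import numpy as np

def lyapunov_logistic(r=4.0, n_iter=100_000, x0=0.1234):
    """Largest Lyapunov exponent of the logistic map x -> r x (1 - x),
    estimated by averaging log|f'(x)| = log|r (1 - 2x)| along an orbit.
    For r = 4 the exact value is ln 2."""
    x = x0
    acc = 0.0
    for _ in range(n_iter):
        x = r * x * (1.0 - x)
        # Clamp avoids log(0) in the measure-zero event x hits 0.5.
        acc += np.log(max(abs(r * (1.0 - 2.0 * x)), 1e-300))
    return acc / n_iter

lam = lyapunov_logistic()
print(lam)
```

    A positive exponent means initial errors grow exponentially on average; as the review notes, finite-amplitude error growth in realistic models can depart substantially from this asymptotic rate.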

  3. Electrophysiological correlates of reward prediction error recorded in the human prefrontal cortex

    Science.gov (United States)

    Oya, Hiroyuki; Adolphs, Ralph; Kawasaki, Hiroto; Bechara, Antoine; Damasio, Antonio; Howard, Matthew A.

    2005-01-01

    Lesion and functional imaging studies have shown that the ventromedial prefrontal cortex is critically involved in the avoidance of risky choices. However, detailed descriptions of the mechanisms that underlie the establishment of such behaviors remain elusive, due in part to the spatial and temporal limitations of available research techniques. We investigated this issue by recording directly from prefrontal depth electrodes in a rare neurosurgical patient while he performed the Iowa Gambling Task, and we concurrently measured behavioral, autonomic, and electrophysiological responses. We found a robust alpha-band component of event-related potentials that reflected the mismatch between expected outcomes and actual outcomes in the task, correlating closely with the reward-related error obtained from a reinforcement learning model of the patient's choice behavior. The finding implicates this brain region in the acquisition of choice bias by means of a continuous updating of expectations about reward and punishment. PMID:15928095

  4. Error Quantification and Confidence Assessment of Aerothermal Model Predictions for Hypersonic Aircraft (Preprint)

    Science.gov (United States)

    2013-09-01

    88ABW-2012-1703; Clearance Date: 26 Mar 2012. This paper contains color. The final version of this conference paper was published in the... for Q4 are associated with Eckert's reference temperature method, which is expected to consistently over-predict the true value due to the calorically

  5. Non-gaussian Test Models for Prediction and State Estimation with Model Errors

    Institute of Scientific and Technical Information of China (English)

    Michal BRANICKI; Nan CHEN; Andrew J.MAJDA

    2013-01-01

    Turbulent dynamical systems involve dynamics with both a large dimensional phase space and a large number of positive Lyapunov exponents. Such systems are ubiquitous in applications in contemporary science and engineering where the statistical ensemble prediction and the real time filtering/state estimation are needed despite the underlying complexity of the system. Statistically exactly solvable test models have a crucial role to provide firm mathematical underpinning or new algorithms for vastly more complex scientific phenomena. Here, a class of statistically exactly solvable non-Gaussian test models is introduced, where a generalized Feynman-Kac formulation reduces the exact behavior of conditional statistical moments to the solution to inhomogeneous Fokker-Planck equations modified by linear lower order coupling and source terms. This procedure is applied to a test model with hidden instabilities and is combined with information theory to address two important issues in the contemporary statistical prediction of turbulent dynamical systems: the coarse-grained ensemble prediction in a perfect model and the improvement of long range forecasting in imperfect models. The models discussed here should be useful for many other applications and algorithms for the real time prediction and the state estimation.

  6. Stochastic Residual-Error Analysis For Estimating Hydrologic Model Predictive Uncertainty

    Science.gov (United States)

    A hybrid time series-nonparametric sampling approach, referred to herein as semiparametric, is presented for the estimation of model predictive uncertainty. The methodology is a two-step procedure whereby a distributed hydrologic model is first calibrated, then followed by brute ...

  7. Hidden Markov Model for quantitative prediction of snowfall and analysis of hazardous snowfall events over Indian Himalaya

    Science.gov (United States)

    Joshi, J. C.; Tankeshwar, K.; Srivastava, Sunita

    2017-04-01

    A Hidden Markov Model (HMM) has been developed for prediction of quantitative snowfall in Pir-Panjal and Great Himalayan mountain ranges of Indian Himalaya. The model predicts snowfall for two days in advance using daily recorded nine meteorological variables of past 20 winters from 1992-2012. There are six observations and six states of the model. The most probable observation and state sequence has been computed using Forward and Viterbi algorithms, respectively. Baum-Welch algorithm has been used for optimizing the model parameters. The model has been validated for two winters (2012-2013 and 2013-2014) by computing root mean square error (RMSE), accuracy measures such as percent correct (PC), critical success index (CSI) and Heidke skill score (HSS). The RMSE of the model has also been calculated using leave-one-out cross-validation method. Snowfall predicted by the model during hazardous snowfall events in different parts of the Himalaya matches well with the observed one. The HSS of the model for all the stations implies that the optimized model has better forecasting skill than random forecast for both the days. The RMSE of the optimized model has also been found smaller than the persistence forecast and standard deviation for both the days.
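    The Viterbi step referred to above finds the most probable hidden state sequence by dynamic programming over the trellis of states. A minimal sketch with a hypothetical two-state model (the paper's snowfall model has six states and six observations; the numbers below are invented):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most probable hidden state sequence, computed in the log domain.
    pi: initial probs (S,), A: transition (S, S), B: emission (S, O)."""
    S, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = logd[:, None] + np.log(A)   # cand[i, j]: best score via i -> j
        back[t] = cand.argmax(axis=0)
        logd = cand.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):          # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy 2-state weather model: state 0 = "dry", state 1 = "snowy";
# observation 0 = no precipitation, 1 = precipitation.
pi = np.array([0.6, 0.4])
A = np.array([[0.8, 0.2], [0.3, 0.7]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1, 1], pi, A, B))   # → [0, 0, 1, 1, 1]
```

    The Forward algorithm replaces the max with a sum to score observation sequences, and Baum-Welch iterates expectation and maximization steps over these quantities to fit pi, A and B.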

  9. Subjective and model-estimated reward prediction: association with the feedback-related negativity (FRN) and reward prediction error in a reinforcement learning task.

    Science.gov (United States)

    Ichikawa, Naho; Siegle, Greg J; Dombrovski, Alexandre; Ohira, Hideki

    2010-12-01

    In this study, we examined whether the feedback-related negativity (FRN) is associated with both subjective and objective (model-estimated) reward prediction errors (RPE) per trial in a reinforcement learning task in healthy adults (n=25). The level of RPE was assessed by 1) subjective ratings per trial and by 2) a computational model of reinforcement learning. In the results, model-estimated RPE was highly correlated with subjective RPE (r=.82), and the grand-averaged ERP waves based on the trials with high and low model-estimated RPE showed a significant difference only in the time period of the FRN component, regardless of reward contingency.
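
    The model-estimated RPE in such reinforcement-learning analyses is commonly the Rescorla-Wagner / temporal-difference delta. A minimal sketch (the learning rate and outcome sequence are illustrative assumptions, not the study's fitted values):

```python
def update_value(V, reward, alpha=0.3):
    """One Rescorla-Wagner-style update for a single cue.

    The reward prediction error (RPE) is the gap between the obtained
    outcome and the current expectation V; V then moves a fraction
    alpha of that gap toward the outcome. alpha=0.3 is illustrative.
    """
    rpe = reward - V
    return V + alpha * rpe, rpe

# Illustrative outcome sequence: the expectation climbs toward the mean payoff,
# and the per-trial RPE is what the FRN is hypothesized to track.
V = 0.0
for r in [1.0, 1.0, 0.0, 1.0]:
    V, rpe = update_value(V, r)
```

    Trials with large positive or negative `rpe` values would then be binned (here, by a median split) and their ERP averages compared.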

  10. Observed, predicted, and misclassification error data for observations in the training dataset for nitrate and arsenic concentrations in basin-fill aquifers in the Southwest Principal Aquifers study.

    Data.gov (United States)

    U.S. Geological Survey, Department of the Interior — This product "Observed, predicted, and misclassification error data for observations in the training dataset for nitrate and arsenic concentrations in basin-fill...

  11. Nonlinear Scale Interaction: A possible mechanism of up-scale error transport attributing to the inadequate predictability of Intra-seasonal Oscillations

    Science.gov (United States)

    De, Saumyendu; Sahai, Atul Kumar; Nath Goswami, Bhupendra

    2013-04-01

    One of the fundamental science questions raised by the Year of Tropical Convection (YOTC) group was: under what circumstances, and via what mechanisms, are water vapor, energy and momentum transferred across scales ranging from the meso-scale to the large (planetary) scale (The YOTC Science Plan, 2008)? This study partially addresses that broad question by exploring a probable mechanism of error-energy transfer across scales in relation to predictability studies of intra-seasonal oscillations (ISOs). The predictability of ISOs, which reside in the dominant planetary scales of wavenumbers 1-4, is restricted by the rapid growth and large accumulation of errors in these planetary/ultra-long waves in almost all medium-range forecast models (Baumhefner et al. 1978, Krishnamurti et al. 1990). Understanding the rapid growth and enormous build-up of error is therefore imperative for improving forecasts of ISOs. It is revealed that while the initial errors are largely at small scales, the maximum errors appear in the ultra-long waves (around the tropical convergence zone) within 3-5 days of the forecast. The wavenumber distribution of error with forecast lead time shows that the initial error at small scales has already attained its saturation value within the 6-hr forecast lead, whereas that at ultra-long scales is about two orders of magnitude smaller than its saturation value. Such a large increase of error in the planetary waves cannot be explained simply as growth of the initial error unless error has been transported from smaller scales. Hence, it is proposed that the fast growth of errors in the planetary waves is due to the continuous generation of errors at small scales, attributable to inadequacies in representing physical processes such as the formulation of cumulus clouds in the model, and to the upscale propagation of these errors through scale interactions. Basic systematic error kinetic

  12. Numerical Error Prediction and its applications in CFD using tau-estimation

    OpenAIRE

    2012-01-01

    Nowadays, Computational Fluid Dynamics (CFD) solvers are widely used within industry to model fluid flow phenomena. Several fluid flow model equations have been employed in recent decades to simulate and predict the forces acting, for example, on different aircraft configurations. Computational time and accuracy are strongly dependent on the fluid flow model equation and the spatial dimension of the problem considered. While simple models based on perfect flows, like panel methods or pote...

  13. Surface Accuracy and Pointing Error Prediction of a 32 m Diameter Class Radio Astronomy Telescope

    Science.gov (United States)

    Azankpo, Severin

    2017-03-01

    The African Very-long-baseline interferometry Network (AVN) is a joint project between South Africa and eight partner African countries aimed at establishing a VLBI (Very-Long-Baseline Interferometry) capable network of radio telescopes across the African continent. An existing structure that is earmarked for this project is a 32 m diameter antenna located in Ghana that has become obsolete due to advances in telecommunication. The first phase of the conversion of this Ghana antenna into a radio astronomy telescope is to upgrade the antenna to observe at 5 GHz to 6.7 GHz, and later at 18 GHz, within a required performance tolerance. The surface and pointing accuracies for a radio telescope are much more stringent than those of a telecommunication antenna. The mechanical pointing accuracy of such telescopes is influenced by factors such as mechanical alignment, structural deformation, and servo drive train errors. The current research investigates the numerical simulation of the surface and pointing accuracies of the Ghana 32 m diameter radio astronomy telescope due to its structural deformation, mainly influenced by gravity, wind and thermal loads.

  14. Remaining Useful Life Prediction of Lithium-Ion Batteries Based on the Wiener Process with Measurement Error

    Directory of Open Access Journals (Sweden)

    Shengjin Tang

    2014-01-01

    Full Text Available Remaining useful life (RUL) prediction is central to the prognostics and health management (PHM) of lithium-ion batteries. This paper proposes a novel RUL prediction method for lithium-ion batteries based on the Wiener process with measurement error (WPME). First, we use the truncated normal distribution (TND) based modeling approach for the estimated degradation state and obtain an exact and closed-form RUL distribution by simultaneously considering the measurement uncertainty and the distribution of the estimated drift parameter. Then, the traditional maximum likelihood estimation (MLE) method for population-based parameter estimation is remedied to improve the estimation efficiency. Additionally, we analyze the relationship between the classic MLE method and the combination of the Bayesian updating algorithm and the expectation maximization algorithm for real-time RUL prediction. Interestingly, it is found that the result of the combination algorithm is equal to that of the classic MLE method. Inspired by this observation, a heuristic algorithm for real-time parameter updating is presented. Finally, numerical examples and a case study of lithium-ion batteries are provided to substantiate the superiority of the proposed RUL prediction method.
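
    A Wiener degradation process of the kind used in such RUL work can be sketched by Monte-Carlo simulation of the first passage to a failure threshold. The drift, diffusion and threshold values below are illustrative assumptions, not the paper's battery parameters, and the closed-form (inverse Gaussian) RUL distribution is replaced here by brute-force sampling:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_mean_rul(x0, threshold, mu, sigma, dt=1.0, n_paths=2000, max_steps=500):
    """Mean steps until X_t = x0 + mu*t + sigma*W_t first crosses the threshold.

    In the measurement-error setting, the true state x would only be observed
    through y_k = x_k + eps_k with eps_k ~ N(0, tau^2); here we simulate
    directly from a known state for simplicity.
    """
    ruls = []
    for _ in range(n_paths):
        x, t = x0, 0
        while x < threshold and t < max_steps:
            x += mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += 1
        ruls.append(t)
    return float(np.mean(ruls))

# With drift 0.5 per step toward a threshold of 10, the mean RUL is near
# threshold/mu = 20 steps (Wald's identity), up to discretization overshoot.
mean_rul = simulate_mean_rul(x0=0.0, threshold=10.0, mu=0.5, sigma=0.2)
```

    The closed-form result the paper derives plays the role of this Monte-Carlo estimate, but with the measurement noise and drift-parameter uncertainty folded in analytically.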

  15. The information value of early career productivity in mathematics: a ROC analysis of prediction errors in bibliometrically informed decision making.

    Science.gov (United States)

    Lindahl, Jonas; Danell, Rickard

    2016-01-01

    The aim of this study was to provide a framework to evaluate bibliometric indicators as decision support tools from a decision-making perspective and to examine the information value of early career publication rate as a predictor of future productivity. We used ROC analysis to evaluate a bibliometric indicator as a tool for binary decision making. The dataset consisted of 451 early career researchers in the mathematical sub-field of number theory. We investigated the effect of three different definitions of top-performance groups (top 10, top 25, and top 50 %); the consequences of using different thresholds in the prediction models; and the added prediction value of information on early career research collaboration and publications in prestige journals. We conclude that early career publication rate has information value in all tested decision scenarios, but future performance is more predictable if the definition of the high-performance group is more exclusive. Estimated optimal decision thresholds using the Youden index indicated that the top 10 % decision scenario should use 7 articles, the top 25 % scenario should use 7 articles, and the top 50 % scenario should use 5 articles to minimize prediction errors. A comparative analysis between the decision thresholds provided by the Youden index, which takes consequences into consideration, and a method commonly used in evaluative bibliometrics, which does not take consequences into consideration when determining decision thresholds, indicated that the differences are trivial for the top 25 and top 50 % groups. However, a statistically significant difference between the methods was found for the top 10 % group. Information on early career collaboration and publication strategies did not add any prediction value to the bibliometric indicator publication rate in any of the models. The key contributions of this research are the focus on consequences in terms of prediction errors and the notion of transforming uncertainty
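
    The Youden-index threshold selection described above can be sketched directly: for each candidate publication-count cutoff, compute sensitivity and specificity and maximize J = sensitivity + specificity - 1. The counts and labels below are invented for illustration, not the study's data:

```python
def youden_threshold(scores, labels, thresholds):
    """Pick the cutoff maximizing Youden's J = sensitivity + specificity - 1."""
    best_t, best_j = None, -1.0
    for t in thresholds:
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
        tn = sum(1 for s, y in zip(scores, labels) if s < t and y == 0)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        sens = tp / (tp + fn) if tp + fn else 0.0   # true-positive rate
        spec = tn / (tn + fp) if tn + fp else 0.0   # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Early-career article counts (scores) vs. later top-group membership (labels);
# values invented for illustration.
scores = [2, 3, 9, 5, 8, 1, 7, 10]
labels = [0, 0, 1, 0, 1, 0, 1, 1]
t, j = youden_threshold(scores, labels, thresholds=range(1, 11))
```

    Unlike a cost-sensitive rule, Youden's J weights sensitivity and specificity equally, which is the "consequences" trade-off the study examines.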

  16. A physiologically based pharmacokinetic model to predict the pharmacokinetics of highly protein-bound drugs and the impact of errors in plasma protein binding.

    Science.gov (United States)

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2016-04-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood:plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2, 100% of drugs), peak plasma concentration (Cmax, 100%), area under the plasma concentration-time curve (AUC0-t, 95.4%), clearance (CLh, 95.4%), mean residence time (MRT, 95.4%) and steady-state volume (Vss, 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.
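
    The "2-fold error" criterion used above is conventionally the symmetric fold error: the predicted/observed ratio taken on whichever side exceeds one. A minimal sketch (the clearance values are invented for illustration):

```python
def fold_error(predicted, observed):
    """Symmetric fold error: max(pred/obs, obs/pred); 1.0 means perfect agreement."""
    return max(predicted / observed, observed / predicted)

def fraction_within_2fold(pairs):
    """Share of (predicted, observed) pairs whose fold error is below 2."""
    return sum(1 for p, o in pairs if fold_error(p, o) < 2.0) / len(pairs)

# Hypothetical (predicted, observed) parameter values
pairs = [(1.2, 1.0), (3.0, 1.0), (0.6, 1.0), (0.9, 1.1)]
frac = fraction_within_2fold(pairs)
```

    Reporting the fraction within 2-fold per parameter (t1/2, Cmax, AUC, CLh, MRT, Vss) gives the percentage figures quoted in the abstract.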

  17. Sagittal range of motion of the thoracic spine using inertial tracking device and effect of measurement errors on model predictions.

    Science.gov (United States)

    Hajibozorgi, M; Arjmand, N

    2016-04-11

    Range of motion (ROM) of the thoracic spine has implications in patient discrimination for diagnostic purposes and in biomechanical models for predictions of spinal loads. Few previous studies have reported quite different thoracic ROMs. Total (T1-T12), lower (T5-T12) and upper (T1-T5) thoracic, lumbar (T12-S1), pelvis, and entire trunk (T1) ROMs were measured using an inertial tracking device as asymptomatic subjects flexed forward from their neutral upright position to full forward flexion. Correlations between body height and the ROMs were conducted. An effect of measurement errors of the trunk flexion (T1) on the model-predicted spinal loads was investigated. Mean of peak voluntary total flexion of trunk (T1) was 118.4 ± 13.9°, of which 20.5 ± 6.5° was generated by flexion of the T1 to T12 (thoracic ROM), and the remaining by flexion of the T12 to S1 (lumbar ROM) (50.2 ± 7.0°) and pelvis (47.8 ± 6.9°). Lower thoracic ROM was significantly larger than upper thoracic ROM (14.8 ± 5.4° versus 5.8 ± 3.1°). There were non-significant weak correlations between body height and the ROMs. Contribution of the pelvis to generate the total trunk flexion increased from ~20% to 40% and that of the lumbar decreased from ~60% to 42% as subjects flexed forward from upright to maximal flexion while that of the thoracic spine remained almost constant (~16% to 20%) during the entire movement. Small uncertainties (±5°) in the measurement of trunk flexion angle resulted in considerable errors (~27%) in the model-predicted spinal loads only in activities involving small trunk flexion.

  18. Predictive error dependencies when using pilot points and singular value decomposition in groundwater model calibration

    DEFF Research Database (Denmark)

    Christensen, Steen; Doherty, John

    2008-01-01

    over the model area. Singular value decomposition (SVD) of the (possibly weighted) sensitivity matrix of the pilot point based model produces eigenvectors of which we pick a small number corresponding to significant eigenvalues. Super parameters are defined as factors through which parameter...... conditions near an inflow boundary where data is lacking and which exhibit apparent significant nonlinear behavior. It is shown that inclusion of Tikhonov regularization can stabilize and speed up the parameter estimation process. A method of linearized model analysis of predictive uncertainty...... nonlinear functions. Recommendations concerning the use of pilot points and singular value decomposition in real-world groundwater model calibration are finally given. (c) 2008 Elsevier Ltd. All rights reserved....

  19. From prediction error to incentive salience: mesolimbic computation of reward motivation

    Science.gov (United States)

    Berridge, Kent C.

    2011-01-01

    Reward contains separable psychological components of learning, incentive motivation and pleasure. Most computational models have focused only on the learning component of reward, but the motivational component is equally important in reward circuitry, and even more directly controls behavior. Modeling the motivational component requires recognition of additional control factors besides learning. Here I will discuss how mesocorticolimbic mechanisms generate the motivation component of incentive salience. Incentive salience takes Pavlovian learning and memory as one input and as an equally important input takes neurobiological state factors (e.g., drug states, appetite states, satiety states) that can vary independently of learning. Neurobiological state changes can produce unlearned fluctuations or even reversals in the ability of a previously-learned reward cue to trigger motivation. Such fluctuations in cue-triggered motivation can dramatically depart from all previously learned values about the associated reward outcome. Thus a consequence of the difference between incentive salience and learning can be to decouple cue-triggered motivation of the moment from previously learned values of how good the associated reward has been in the past. Another consequence can be to produce irrationally strong motivation urges that are not justified by any memories of previous reward values (and without distorting associative predictions of future reward value). Such irrationally strong motivation may be especially problematic in addiction. To comprehend these phenomena, future models of mesocorticolimbic reward function should address the neurobiological state factors that participate to control generation of incentive salience. PMID:22487042

  20. Putative extremely high rate of proteome innovation in lancelets might be explained by high rate of gene prediction errors.

    Science.gov (United States)

    Bányai, László; Patthy, László

    2016-08-01

    A recent analysis of the genomes of Chinese and Florida lancelets has concluded that the rate of creation of novel protein domain combinations is orders of magnitude greater in lancelets than in other metazoa and it was suggested that continuous activity of transposable elements in lancelets is responsible for this increased rate of protein innovation. Since morphologically Chinese and Florida lancelets are highly conserved, this finding would contradict the observation that high rates of protein innovation are usually associated with major evolutionary innovations. Here we show that the conclusion that the rate of proteome innovation is exceptionally high in lancelets may be unjustified: the differences observed in domain architectures of orthologous proteins of different amphioxus species probably reflect high rates of gene prediction errors rather than true innovation.

  2. Forensic comparison and matching of fingerprints: using quantitative image measures for estimating error rates through understanding and predicting difficulty.

    Directory of Open Access Journals (Sweden)

    Philip J Kellman

    Full Text Available Latent fingerprint examination is a complex task that, despite advances in image processing, still fundamentally depends on the visual judgments of highly trained human examiners. Fingerprints collected from crime scenes typically contain less information than fingerprints collected under controlled conditions. Specifically, they are often noisy and distorted and may contain only a portion of the total fingerprint area. Expertise in fingerprint comparison, like other forms of perceptual expertise, such as face recognition or aircraft identification, depends on perceptual learning processes that lead to the discovery of features and relations that matter in comparing prints. Relatively little is known about the perceptual processes involved in making comparisons, and even less is known about what characteristics of fingerprint pairs make particular comparisons easy or difficult. We measured expert examiner performance and judgments of difficulty and confidence on a new fingerprint database. We developed a number of quantitative measures of image characteristics and used multiple regression techniques to discover objective predictors of error as well as perceived difficulty and confidence. A number of useful predictors emerged, and these included variables related to image quality metrics, such as intensity and contrast information, as well as measures of information quantity, such as the total fingerprint area. Also included were configural features that fingerprint experts have noted, such as the presence and clarity of global features and fingerprint ridges. Within the constraints of the overall low error rates of experts, a regression model incorporating the derived predictors demonstrated reasonable success in predicting objective difficulty for print pairs, as shown both in goodness of fit measures to the original data set and in a cross validation test. The results indicate the plausibility of using objective image metrics to predict expert

  4. Complex terrain wind resource estimation with the wind-atlas method: Prediction errors using linearized and nonlinear CFD micro-scale models

    DEFF Research Database (Denmark)

    Troen, Ib; Bechmann, Andreas; Kelly, Mark C.

    2014-01-01

    Using the Wind Atlas methodology to predict the average wind speed at one location from measured climatological wind frequency distributions at another nearby location we analyse the relative prediction errors using a linearized flow model (IBZ) and a more physically correct fully non-linear 3D...

  5. Cathode design investigation based on iterative correction of predicted profile errors in electrochemical machining of compressor blades

    Institute of Scientific and Technical Information of China (English)

    Zhu Dong; Liu Cheng; Xu Zhengyang; Liu Jia

    2016-01-01

    Electrochemical machining (ECM) is an effective and economical manufacturing method for machining hard-to-cut metal materials that are often used in the aerospace field. Cathode design is very complicated in ECM and is a core problem influencing machining accuracy, especially for complex profiles such as compressor blades in aero engines. A new cathode design method based on iterative correction of predicted profile errors in blade ECM is proposed in this paper. A mathematical model is first built according to the ECM shaping law, and a simulation is then carried out using ANSYS software. A dynamic forming process is obtained and machining gap distributions at different stages are analyzed. Additionally, the simulation deviation between the prediction profile and model is improved by the new method through correcting the initial cathode profile. Furthermore, validation experiments are conducted using cathodes designed before and after the simulation correction. Machining accuracy for the optimal cathode is improved markedly compared with that for the initial cathode. The experimental results illustrate the suitability of the new method and that it can also be applied to other complex engine components such as diffusers.

  7. Predicting classifier performance with limited training data: applications to computer-aided diagnosis in breast and prostate cancer.

    Directory of Open Access Journals (Sweden)

    Ajay Basavanhally

    Full Text Available Clinical trials increasingly employ medical imaging data in conjunction with supervised classifiers, where the latter require large amounts of training data to accurately model the system. Yet, a classifier selected at the start of the trial based on smaller and more accessible datasets may yield inaccurate and unstable classification performance. In this paper, we aim to address two common concerns in classifier selection for clinical trials: (1) predicting expected classifier performance for large datasets based on error rates calculated from smaller datasets and (2) the selection of appropriate classifiers based on expected performance for larger datasets. We present a framework for comparative evaluation of classifiers using only limited amounts of training data by using random repeated sampling (RRS) in conjunction with a cross-validation sampling strategy. Extrapolated error rates are subsequently validated via comparison with leave-one-out cross-validation performed on a larger dataset. The ability to predict error rates as dataset size increases is demonstrated on both synthetic data as well as three different computational imaging tasks: detecting cancerous image regions in prostate histopathology, differentiating high- and low-grade cancer in breast histopathology, and detecting cancerous metavoxels in prostate magnetic resonance spectroscopy. For each task, the relationships between 3 distinct classifiers (k-nearest neighbor, naive Bayes, Support Vector Machine) are explored. Further quantitative evaluation in terms of interquartile range (IQR) suggests that our approach consistently yields error rates with lower variability (mean IQRs of 0.0070, 0.0127, and 0.0140) than a traditional RRS approach (mean IQRs of 0.0297, 0.0779, and 0.305) that does not employ cross-validation sampling, for all three datasets.
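
    The leave-one-out validation step referenced above can be sketched with a simple 1-nearest-neighbour classifier: each sample is classified by a model trained on all the others, and the misclassification rate is the LOO error. The data and classifier choice here are illustrative only (the paper compares kNN, naive Bayes, and SVM):

```python
def one_nn_predict(train_X, train_y, x):
    """Label of the closest training point (squared Euclidean distance)."""
    dists = [sum((a - b) ** 2 for a, b in zip(row, x)) for row in train_X]
    return train_y[dists.index(min(dists))]

def loo_error(X, y):
    """Leave-one-out error rate: classify each point using all the others."""
    wrong = 0
    for i in range(len(X)):
        train_X = X[:i] + X[i + 1:]
        train_y = y[:i] + y[i + 1:]
        wrong += one_nn_predict(train_X, train_y, X[i]) != y[i]
    return wrong / len(X)

# Two well-separated clusters (invented data): every held-out point's nearest
# neighbour is in its own cluster, so the LOO error should be zero.
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9), (4.9, 5.2)]
y = [0, 0, 0, 1, 1, 1]
err = loo_error(X, y)
```

    The RRS framework in the abstract repeats this kind of evaluation on random subsamples of increasing size and extrapolates the resulting error curve.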

  8. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    Science.gov (United States)

    Arrighi, Pieranna; Bonfiglio, Luca; Minichilli, Fabrizio; Cantore, Nicoletta; Carboncini, Maria Chiara; Piccotti, Emily; Rossi, Bruno; Andre, Paolo

    2016-01-01

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate the EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition, with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations, putatively generated by anterior cingulate cortex activation, are implicated in error processing in semi-naturalistic motor

  9. Comparison of the initial errors most likely to cause a spring predictability barrier for two types of El Niño events

    Science.gov (United States)

    Tian, Ben; Duan, Wansuo

    2016-08-01

    In this paper, the spring predictability barrier (SPB) problem for two types of El Niño events is investigated. This is enabled by tracing the evolution of a conditional nonlinear optimal perturbation (CNOP) that acts as the initial error with the largest negative effect on the El Niño predictions. We show that the CNOP-type errors for central Pacific-El Niño (CP-El Niño) events can be classified into two types: the first are CP-type-1 errors possessing a sea surface temperature anomaly (SSTA) pattern with negative anomalies in the equatorial central western Pacific, positive anomalies in the equatorial eastern Pacific, and accompanied by a thermocline depth anomaly pattern with positive anomalies along the equator. The second are CP-type-2 errors, presenting an SSTA pattern in the central eastern equatorial Pacific, with a dipole structure of negative anomalies in the east and positive anomalies in the west, and a thermocline depth anomaly pattern with a slight deepening along the equator. CP-type-1 errors grow in a manner similar to an eastern Pacific-El Niño (EP-El Niño) event and grow significantly during boreal spring, leading to a significant SPB for the CP-El Niño. CP-type-2 errors initially present as a process similar to a La Niña-like decay, prior to transitioning into a growth phase of an EP-El Niño-like event, but they fail to cause an SPB. For the EP-El Niño events, the CNOP-type errors are also classified into two types: EP-type-1 and EP-type-2 errors. The former is similar to a CP-type-1 error, while the latter presents with an almost opposite pattern. Both EP-type-1 and EP-type-2 errors yield a significant SPB for EP-El Niño events. For both CP- and EP-El Niño, their CNOP-type errors that cause a prominent SPB are concentrated in the central and eastern tropical Pacific. This may indicate that the prediction uncertainties of both types of El Niño events are sensitive to the initial errors in this region. The region may represent a common

  10. Why Don't We Learn to Accurately Forecast Feelings? How Misremembering Our Predictions Blinds Us to Past Forecasting Errors

    Science.gov (United States)

    Meyvis, Tom; Ratner, Rebecca K.; Levav, Jonathan

    2010-01-01

    Why do affective forecasting errors persist in the face of repeated disconfirming evidence? Five studies demonstrate that people misremember their forecasts as consistent with their experience and thus fail to perceive the extent of their forecasting error. As a result, people do not learn from past forecasting errors and fail to adjust subsequent…

  12. Regression-based prediction of net energy expenditure in children performing activities at high altitude.

    Science.gov (United States)

    Sarton-Miller, Isabelle; Holman, Darryl J; Spielvogel, Hilde

    2003-01-01

    We developed a simple, non-invasive, and affordable method for estimating net energy expenditure (EE) in children performing activities at high altitude. A regression-based method predicts net oxygen consumption (VO(2)) from net heart rate (HR) along with several covariates. The method is atypical in that the "net" measures are taken as the difference between exercise and resting VO(2) (DeltaVO(2)) and the difference between exercise and resting HR (DeltaHR); DeltaVO(2) partially corrects for resting metabolic rate and for posture, and DeltaHR controls for inter-individual variation in physiology and for posture. Twenty children between 8 and 13 years of age, born and raised in La Paz, Bolivia (altitude 3,600 m), made up the reference sample. Anthropometric measures were taken, and VO(2) was assessed while the children performed graded exercise tests on a cycle ergometer. A repeated-measures prediction equation was developed, and maximum likelihood estimates of parameters were found from 75 observations on 20 children. The final model included the variables DeltaHR, DeltaHR(2), weight, and sex. The effectiveness of the method was established using leave-one-out cross-validation, yielding a prediction error rate of 0.126 for a mean DeltaVO(2) of 0.693 (SD 0.315). The correlation between the predicted and measured DeltaVO(2) was r = 0.917, suggesting that a useful prediction equation can be produced using paired VO(2) and HR measurements on a relatively small reference sample. The resulting prediction equation can be used for estimating EE from HR in free-living children performing habitual activities in the Bolivian Andes.
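
    The leave-one-out procedure described above can be sketched in a few lines. The data and the one-predictor model below are illustrative stand-ins, not the study's values (the actual equation used DeltaHR, DeltaHR(2), weight, and sex with repeated measures):

```python
# Leave-one-out cross-validation (LOOCV) for a one-predictor linear model.
# Data are synthetic (dHR, dVO2) pairs, roughly linear with noise.

def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

def loocv_error(xs, ys):
    """Mean squared leave-one-out prediction error."""
    sse = 0.0
    for i in range(len(xs)):
        # Fit on all points except i, then predict the held-out point.
        a, b = fit_ols(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        sse += (ys[i] - (a + b * xs[i])) ** 2
    return sse / len(xs)

dhr = [20, 35, 50, 65, 80, 95, 110]                # hypothetical net HR (bpm)
dvo2 = [0.21, 0.34, 0.52, 0.64, 0.83, 0.96, 1.12]  # hypothetical net VO2 (L/min)
cv_mse = loocv_error(dhr, dvo2)
```

    Each point is predicted by a model fitted to the remaining points, so `cv_mse` estimates out-of-sample rather than in-sample error.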

  13. Evolutionary artificial neural network approach for predicting properties of Cu-15Ni-8Sn-0.4Si alloy

    Institute of Scientific and Technical Information of China (English)

    FANG Shan-feng; WANG Ming-pu; WANG Yan-hui; QI Wei-hong; LI Zhou

    2008-01-01

    A novel data mining approach, based on artificial neural network (ANN) using differential evolution (DE) training algorithm, was proposed to model the non-linear relationship between parameters of aging processes and mechanical and electrical properties of Cu-15Ni-8Sn-0.4Si alloy. In order to improve predictive accuracy of ANN model, the leave-one-out cross-validation (LOOCV) technique was adopted to automatically determine the optimal number of neurons of the hidden layer. The forecasting performance of the proposed global optimization algorithm was compared with that of local optimization algorithm. The present calculated results are consistent with the experimental values, which suggests that the proposed evolutionary artificial neural network algorithm is feasible and efficient. Moreover, the experimental results illustrate that the DE training algorithm combined with gradient-based training algorithm achieves better convergence performance and the lowest forecasting errors and is therefore considered to be a promising alternative method to forecast the hardness and electrical conductivity of Cu-15Ni-8Sn-0.4Si alloy.
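
    The LOOCV-based selection of a model-complexity parameter can be illustrated without a neural network. The sketch below picks k for a k-nearest-neighbour regressor in place of the paper's DE-trained ANN; the (ageing time, hardness) pairs are hypothetical:

```python
# LOOCV for hyperparameter selection: choose k for a k-nearest-neighbour
# regressor (a stand-in for choosing the hidden-layer size of an ANN).

def knn_predict(train, x, k):
    """Mean y of the k training points whose x is nearest to x."""
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return sum(y for _, y in nearest) / k

def loocv_sse(data, k):
    """Sum of squared leave-one-out prediction errors for a given k."""
    sse = 0.0
    for i, (x, y) in enumerate(data):
        rest = data[:i] + data[i + 1:]
        sse += (y - knn_predict(rest, x, k)) ** 2
    return sse

# Hypothetical (ageing time, hardness) pairs: hardness rises, then over-ages.
data = [(1, 120), (2, 180), (3, 230), (4, 260), (5, 275), (6, 270), (7, 255)]
best_k = min(range(1, 5), key=lambda k: loocv_sse(data, k))
```

    The complexity value with the smallest leave-one-out error wins, exactly as the paper selects the hidden-layer size.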

  14. Modelling dopaminergic and other processes involved in learning from reward prediction error: Contributions from an individual differences perspective

    Directory of Open Access Journals (Sweden)

    Alan David Pickering

    2014-09-01

    Phasic firing changes of midbrain dopamine neurons have been widely characterised as reflecting a reward prediction error (RPE). Major personality traits (e.g. extraversion) have been linked to inter-individual variations in dopaminergic neurotransmission. Consistent with these two claims, recent research (Smillie, Cooper, & Pickering, 2011; Cooper, Duke, Pickering, & Smillie, 2014) found that extraverts exhibited larger RPEs than introverts, as reflected in feedback-related negativity (FRN) effects in EEG recordings. Using an established, biologically-localised RPE computational model, we successfully simulated dopaminergic cell firing changes which are thought to modulate the FRN. We introduced simulated individual differences into the model: parameters were systematically varied, with stable values for each simulated individual. We explored whether a model parameter might be responsible for the observed covariance between extraversion and the FRN changes in real data, and argued that a parameter is a plausible source of such covariance if parameter variance, across simulated individuals, correlated almost perfectly with the size of the simulated dopaminergic FRN modulation, and created as much variance as possible in this simulated output. Several model parameters met these criteria, while others did not. In particular, variations in the strength of connections carrying excitatory reward drive inputs to midbrain dopaminergic cells were considered plausible candidates, along with variations in a parameter which scales the effects of dopamine cell firing bursts on synaptic modification in ventral striatum. We suggest possible neurotransmitter mechanisms underpinning these model parameters. Finally, the limitations and possible extensions of our approach are discussed.

  15. The effects of methylphenidate on cerebral responses to conflict anticipation and unsigned prediction error in a stop-signal task.

    Science.gov (United States)

    Manza, Peter; Hu, Sien; Ide, Jaime S; Farr, Olivia M; Zhang, Sheng; Leung, Hoi-Chung; Li, Chiang-shan R

    2016-03-01

    To adapt flexibly to a rapidly changing environment, humans must anticipate conflict and respond to surprising, unexpected events. To this end, the brain estimates upcoming conflict on the basis of prior experience and computes unsigned prediction error (UPE). Although much work implicates catecholamines in cognitive control, little is known about how pharmacological manipulation of catecholamines affects the neural processes underlying conflict anticipation and UPE computation. We addressed this issue by imaging 24 healthy young adults who received a 45 mg oral dose of methylphenidate (MPH) and 62 matched controls who did not receive MPH prior to performing the stop-signal task. We used a Bayesian Dynamic Belief Model to make trial-by-trial estimates of conflict and UPE during task performance. Replicating previous research, the control group showed anticipation-related activation in the presupplementary motor area and deactivation in the ventromedial prefrontal cortex and parahippocampal gyrus, as well as UPE-related activations in the dorsal anterior cingulate, insula, and inferior parietal lobule. In group comparison, MPH increased anticipation activity in the bilateral caudate head and decreased UPE activity in each of the aforementioned regions. These findings highlight distinct effects of catecholamines on the neural mechanisms underlying conflict anticipation and UPE, signals critical to learning and adaptive behavior.

  16. Modeling dopaminergic and other processes involved in learning from reward prediction error: contributions from an individual differences perspective.

    Science.gov (United States)

    Pickering, Alan D; Pesola, Francesca

    2014-01-01

    Phasic firing changes of midbrain dopamine neurons have been widely characterized as reflecting a reward prediction error (RPE). Major personality traits (e.g., extraversion) have been linked to inter-individual variations in dopaminergic neurotransmission. Consistent with these two claims, recent research (Smillie et al., 2011; Cooper et al., 2014) found that extraverts exhibited larger RPEs than introverts, as reflected in feedback-related negativity (FRN) effects in EEG recordings. Using an established, biologically-localized RPE computational model, we successfully simulated dopaminergic cell firing changes which are thought to modulate the FRN. We introduced simulated individual differences into the model: parameters were systematically varied, with stable values for each simulated individual. We explored whether a model parameter might be responsible for the observed covariance between extraversion and the FRN changes in real data, and argued that a parameter is a plausible source of such covariance if parameter variance, across simulated individuals, correlated almost perfectly with the size of the simulated dopaminergic FRN modulation, and created as much variance as possible in this simulated output. Several model parameters met these criteria, while others did not. In particular, variations in the strength of connections carrying excitatory reward drive inputs to midbrain dopaminergic cells were considered plausible candidates, along with variations in a parameter which scales the effects of dopamine cell firing bursts on synaptic modification in ventral striatum. We suggest possible neurotransmitter mechanisms underpinning these model parameters. Finally, the limitations and possible extensions of our general approach are discussed.

  17. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part I: Effects of Random Error

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations are evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
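
    The dichotomization-and-scoring step can be sketched as follows; the forecast probabilities and observations below are invented for illustration, not taken from the ARPS-based experiments:

```python
# Dichotomize probabilistic forecasts at a critical threshold, then score
# with percent correct (PC) and the Hanssen-Kuipers discriminant (HKD),
# i.e. hit rate minus false-alarm rate. Assumes both events and
# non-events occur in the observations.

def score(probs, obs, threshold):
    hits = misses = false_alarms = correct_negs = 0
    for p, o in zip(probs, obs):
        forecast = p >= threshold
        if forecast and o:
            hits += 1
        elif not forecast and o:
            misses += 1
        elif forecast:
            false_alarms += 1
        else:
            correct_negs += 1
    n = hits + misses + false_alarms + correct_negs
    pc = (hits + correct_negs) / n
    hkd = (hits / (hits + misses)
           - false_alarms / (false_alarms + correct_negs))
    return pc, hkd

probs = [0.9, 0.8, 0.7, 0.6, 0.4, 0.35, 0.2, 0.1]  # forecast probabilities
obs = [1, 1, 0, 1, 0, 0, 0, 0]                     # contrail observed (1/0)
pc, hkd = score(probs, obs, 0.5)
```

    Re-running `score` with the climatological frequency (`sum(obs) / len(obs)`) as the threshold reproduces the article's second choice of critical probability.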

  18. How to regress and predict in a Bland-Altman plot? Review and contribution based on tolerance intervals and correlated-errors-in-variables models.

    Science.gov (United States)

    Francq, Bernard G; Govaerts, Bernadette

    2016-06-30

    Two main methodologies for assessing equivalence in method-comparison studies are presented separately in the literature. The first one is the well-known and widely applied Bland-Altman approach with its agreement intervals, where two methods are considered interchangeable if their differences are not clinically significant. The second approach is based on errors-in-variables regression in a classical (X,Y) plot and focuses on confidence intervals, whereby two methods are considered equivalent when providing similar measures notwithstanding the random measurement errors. This paper reconciles these two methodologies and shows their similarities and differences using both real data and simulations. A new consistent correlated-errors-in-variables regression is introduced as the errors are shown to be correlated in the Bland-Altman plot. Indeed, the coverage probabilities collapse and the biases soar when this correlation is ignored. Novel tolerance intervals are compared with agreement intervals with or without replicated data, and novel predictive intervals are introduced to predict a single measure in an (X,Y) plot or in a Bland-Altman plot with excellent coverage probabilities. We conclude that the (correlated)-errors-in-variables regressions should not be avoided in method comparison studies, although the Bland-Altman approach is usually applied to avert their complexity. We argue that tolerance or predictive intervals are better alternatives than agreement intervals, and we provide guidelines for practitioners regarding method comparison studies. Copyright © 2016 John Wiley & Sons, Ltd.
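
    For reference, the classical Bland-Altman agreement interval (mean difference ± 1.96 SD of the paired differences) that the article compares against can be computed as below; the paired measurements are hypothetical:

```python
import math

# Bland-Altman agreement interval: mean difference +/- 1.96 * SD of the
# paired differences. The paired measurements below are hypothetical.

def agreement_interval(x, y):
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean = sum(d) / n
    sd = math.sqrt(sum((v - mean) ** 2 for v in d) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd

method_x = [10.1, 12.3, 11.8, 9.9, 13.0, 10.7]
method_y = [10.0, 12.6, 11.5, 10.2, 12.8, 10.9]
lo, hi = agreement_interval(method_x, method_y)
```

    The interval is centred on the mean difference; the article's point is that tolerance or predictive intervals, not this agreement interval, give the better-calibrated coverage.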

  19. An investigation into multi-dimensional prediction models to estimate the pose error of a quadcopter in a CSP plant setting

    Science.gov (United States)

    Lock, Jacobus C.; Smit, Willie J.; Treurnicht, Johann

    2016-05-01

    The Solar Thermal Energy Research Group (STERG) is investigating ways to make heliostats cheaper to reduce the total cost of a concentrating solar power (CSP) plant. One avenue of research is to use unmanned aerial vehicles (UAVs) to automate and assist with the heliostat calibration process. To do this, the pose estimation error of each UAV must be determined and integrated into a calibration procedure. A computer vision (CV) system is used to measure the pose of a quadcopter UAV. However, this CV system contains considerable measurement errors. Since this is a high-dimensional problem, a sophisticated prediction model must be used to estimate the measurement error of the CV system for any given pose measurement vector. This paper attempts to train and validate such a model with the aim of using it to determine the pose error of a quadcopter in a CSP plant setting.

  20. Assessing the predictive performance of risk-based water quality criteria using decision error estimates from receiver operating characteristics (ROC) analysis.

    Science.gov (United States)

    McLaughlin, Douglas B

    2012-10-01

    Field data relating aquatic ecosystem responses with water quality constituents that are potential ecosystem stressors are being used increasingly in the United States in the derivation of water quality criteria to protect aquatic life. In light of this trend, there is a need for transparent quantitative methods to assess the performance of models that predict ecological conditions using a stressor-response relationship, a response variable threshold, and a stressor variable criterion. Analysis of receiver operating characteristics (ROC analysis) has a considerable history of successful use in medical diagnostic, industrial, and other fields for similarly structured decision problems, but its use for informing water quality management decisions involving risk-based environmental criteria is less common. In this article, ROC analysis is used to evaluate predictions of ecological response variable status for 3 water quality stressor-response data sets. Information on error rates is emphasized due in part to their common use in environmental studies to describe uncertainty. One data set is comprised of simulated data, and 2 involve field measurements described previously in the literature. These data sets are also analyzed using linear regression and conditional probability analysis for comparison. Results indicate that of the methods studied, ROC analysis provides the most comprehensive characterization of prediction error rates including false positive, false negative, positive predictive, and negative predictive errors. This information may be used along with other data analysis procedures to set quality objectives for and assess the predictive performance of risk-based criteria to support water quality management decisions.
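
    The four rates emphasized above follow directly from a 2x2 confusion table; the counts below are illustrative, not from the article's data sets:

```python
# The error/performance rates a ROC-based assessment characterizes,
# computed from a 2x2 confusion table (illustrative counts).

def rates(tp, fp, fn, tn):
    return {
        "false_positive_rate": fp / (fp + tn),        # criterion exceeded, no response
        "false_negative_rate": fn / (fn + tp),        # ecological response missed
        "positive_predictive_value": tp / (tp + fp),  # 1 - positive predictive error
        "negative_predictive_value": tn / (tn + fn),  # 1 - negative predictive error
    }

r = rates(tp=40, fp=10, fn=5, tn=45)
```

    Sweeping the stressor criterion and recomputing these rates at each cut-off traces out the ROC curve itself.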

  1. Combining empirical approaches and error modelling to enhance predictive uncertainty estimation in extrapolation for operational flood forecasting. Tests on flood events on the Loire basin, France.

    Science.gov (United States)

    Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles

    2017-04-01

    An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e. prove useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French flood forecasting national and regional services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models has been selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach based on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve the forecasts' reliability for major floods, an attempt is made to combine the operational strength of the empirical statistical analysis with a simple error model. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which has been shown to significantly reduce the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in
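
    The log-sinh transformation of Wang et al. (2012) referred to above can be sketched as follows; the parameter values a and b are illustrative, not those used operationally (in practice they are fitted so that transformed errors are close to homoscedastic):

```python
import math

# Log-sinh transformation (Wang et al., 2012) and its inverse, applied
# to a discharge value. Parameters a and b are illustrative.

def log_sinh(q, a, b):
    """z = (1/b) * ln(sinh(a + b*q))"""
    return math.log(math.sinh(a + b * q)) / b

def inv_log_sinh(z, a, b):
    """Inverse transform: recover q from z."""
    return (math.asinh(math.exp(b * z)) - a) / b

a, b = 0.1, 0.02
q = 250.0                       # a discharge value
z = log_sinh(q, a, b)           # transformed value
q_back = inv_log_sinh(z, a, b)  # round-trips to the original discharge
```

    Error statistics estimated in the transformed space are mapped back through the inverse, which is what lets a single error distribution serve across flood magnitudes.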

  2. Trial-by-Trial Modulation of Associative Memory Formation by Reward Prediction Error and Reward Anticipation as Revealed by a Biologically Plausible Computational Model

    Science.gov (United States)

    Aberg, Kristoffer C.; Müller, Julia; Schwartz, Sophie

    2017-01-01

    Anticipation and delivery of rewards improve memory formation, but little effort has been made to disentangle their respective contributions to memory enhancement. Moreover, it has been suggested that the effects of reward on memory are mediated by dopaminergic influences on hippocampal plasticity. Yet, evidence linking memory improvements to actual reward computations reflected in the activity of the dopaminergic system, i.e., prediction errors and expected values, is scarce and inconclusive. For example, different previous studies reported that the magnitude of prediction errors during a reinforcement learning task was a positive, negative, or non-significant predictor of successfully encoding simultaneously presented images. Individual sensitivities to reward and punishment have been found to influence the activation of the dopaminergic reward system and could therefore help explain these seemingly discrepant results. Here, we used a novel associative memory task combined with computational modeling and showed independent effects of reward-delivery and reward-anticipation on memory. Strikingly, the computational approach revealed positive influences from both reward delivery, as mediated by prediction error magnitude, and reward anticipation, as mediated by magnitude of expected value, even in the absence of behavioral effects when analyzed using standard methods, i.e., by collapsing memory performance across trials within conditions. We additionally measured trait estimates of reward and punishment sensitivity and found that individuals with increased reward (vs. punishment) sensitivity had better memory for associations encoded during positive (vs. negative) prediction errors when tested after 20 min, but a negative trend when tested after 24 h.
In conclusion, modeling trial-by-trial fluctuations in the magnitude of reward, as we did here for prediction errors and expected value computations, provides a comprehensive and biologically plausible description of

  4. EEG Theta Dynamics within Frontal and Parietal Cortices for Error Processing during Reaching Movements in a Prism Adaptation Study Altering Visuo-Motor Predictive Planning.

    Directory of Open Access Journals (Sweden)

    Pieranna Arrighi

    Modulation of frontal midline theta (fmθ) is observed during error commission, but little is known about the role of theta oscillations in correcting motor behaviours. We investigate EEG activity of healthy participants executing a reaching task under variable degrees of prism-induced visuo-motor distortion and visual occlusion of the initial arm trajectory. This task introduces directional errors of different magnitudes. The discrepancy between predicted and actual movement directions (i.e. the error), at the time when visual feedback (hand appearance) became available, elicits a signal that triggers on-line movement correction. Analyses were performed on 25 EEG channels. For each participant, the median value of the angular error of all reaching trials was used to partition the EEG epochs into high- and low-error conditions. We computed event-related spectral perturbations (ERSP) time-locked either to visual feedback or to the onset of movement correction. ERSP time-locked to the onset of visual feedback showed that fmθ increased in the high- but not in the low-error condition with an approximate time lag of 200 ms. Moreover, when single epochs were sorted by the degree of motor error, fmθ started to increase when a certain level of error was exceeded and then scaled with error magnitude. When ERSP were time-locked to the onset of movement correction, the fmθ increase anticipated this event with an approximate time lead of 50 ms. During successive trials, an error reduction was observed which was associated with indices of adaptation (i.e., aftereffects), suggesting the need to explore whether theta oscillations may facilitate learning. To our knowledge, this is the first study where the EEG signal recorded during reaching movements was time-locked to the onset of the error visual feedback. This allowed us to conclude that theta oscillations putatively generated by anterior cingulate cortex activation are implicated in error processing in semi-naturalistic motor

  5. Phasic dopamine as a prediction error of intrinsic and extrinsic reinforcements driving both action acquisition and reward maximization: a simulated robotic study.

    Science.gov (United States)

    Mirolli, Marco; Santucci, Vieri G; Baldassarre, Gianluca

    2013-03-01

    An important issue of recent neuroscientific research is to understand the functional role of the phasic release of dopamine in the striatum, and in particular its relation to reinforcement learning. The literature is split between two alternative hypotheses: one considers phasic dopamine as a reward prediction error similar to the computational TD-error, whose function is to guide an animal to maximize future rewards; the other holds that phasic dopamine is a sensory prediction error signal that lets the animal discover and acquire novel actions. In this paper we propose an original hypothesis that integrates these two contrasting positions: according to our view phasic dopamine represents a TD-like reinforcement prediction error learning signal determined by both unexpected changes in the environment (temporary, intrinsic reinforcements) and biological rewards (permanent, extrinsic reinforcements). Accordingly, dopamine plays the functional role of driving both the discovery and acquisition of novel actions and the maximization of future rewards. To validate our hypothesis we perform a series of experiments with a simulated robotic system that has to learn different skills in order to get rewards. We compare different versions of the system in which we vary the composition of the learning signal. The results show that only the system reinforced by both extrinsic and intrinsic reinforcements is able to reach high performance in sufficiently complex conditions.
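
    The computational TD-error referred to above is the quantity delta = r + gamma * V(s') - V(s). A minimal sketch, with illustrative states and rewards rather than the paper's robotic setup:

```python
# Minimal temporal-difference (TD) update: the prediction error is
# delta = r + gamma * V(s') - V(s). States and rewards are illustrative.

def td_update(v, state, next_state, reward, alpha=0.1, gamma=0.9):
    """Return the TD error and move v[state] toward its target."""
    delta = reward + gamma * v[next_state] - v[state]
    v[state] += alpha * delta
    return delta

v = {"s0": 0.0, "s1": 0.0}
d1 = td_update(v, "s0", "s1", reward=1.0)  # surprising reward: large error
d2 = td_update(v, "s0", "s1", reward=1.0)  # value catches up: error shrinks
```

    The paper's proposal amounts to letting `reward` carry both permanent extrinsic rewards and temporary intrinsic ones triggered by unexpected environmental changes.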

  6. Predicting sex offender recidivism. I. Correcting for item overselection and accuracy overestimation in scale development. II. Sampling error-induced attenuation of predictive validity over base rate information.

    Science.gov (United States)

    Vrieze, Scott I; Grove, William M

    2008-06-01

    The authors demonstrate a statistical bootstrapping method for obtaining unbiased item selection and predictive validity estimates from a scale development sample, using data (N = 256) of Epperson et al. [2003 Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) technical paper: Development, validation, and recommended risk level cut scores. Retrieved November 18, 2006 from Iowa State University Department of Psychology web site: http://www.psychology.iastate.edu/~dle/mnsost_download.htm] from which the Minnesota Sex Offender Screening Tool-Revised (MnSOST-R) was developed. Validity (area under receiver operating characteristic curve) reported by Epperson et al. was .77 with 16 items selected. The present analysis yielded an asymptotically unbiased estimator AUC = .58. The present article also focused on the degree to which sampling error renders estimated cutting scores (appropriate to local [varying] recidivism base rates) nonoptimal, so that the long-run performance (measured by correct fraction, the total proportion of correct classifications) of these estimated cutting scores is poor, when they are applied to their parent populations (having assumed values for AUC and recidivism rate). This was investigated by Monte Carlo simulation over a range of AUC and recidivism rate values. Results indicate that, except for AUC values higher than have ever been cross-validated, in combination with recidivism base rates severalfold higher than the literature average [Hanson and Morton-Bourgon, 2004, Predictors of sexual recidivism: An updated meta-analysis. (User report 2004-02.). Ottawa: Public Safety and Emergency Preparedness Canada], the user of an instrument similar in performance to the MnSOST-R cannot expect to achieve correct fraction performance notably in excess of what is achievable from knowing the population recidivism rate alone.
The authors discuss the legal implications of their findings for procedural and substantive due process in

  7. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and a-Posteriori Error Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Estep, Donald [Colorado State Univ., Fort Collins, CO (United States)

    2015-11-30

    This project addressed the challenge of predictive computational analysis of strongly coupled, highly nonlinear multiphysics systems characterized by multiple physical phenomena that span a large range of length- and time-scales. Specifically, the project was focused on computational estimation of numerical error and sensitivity analysis of computational solutions with respect to variations in parameters and data. In addition, the project investigated the use of accurate computational estimates to guide efficient adaptive discretization. The project developed, analyzed and evaluated new variational adjoint-based techniques for integration, model, and data error estimation/control and sensitivity analysis, in evolutionary multiphysics multiscale simulations.

  8. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    Science.gov (United States)

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  9. Prediction and error growth in the daily forecast of precipitation from the NCEP CFSv2 over the subdivisions of Indian subcontinent

    Indian Academy of Sciences (India)

    Dhruva Kumar Pandey; Shailendra Rai; A K Sahai; S Abhilash; N K Shahi

    2016-02-01

    This study investigates the forecast skill and predictability of various indices of south Asian monsoon as well as the subdivisions of the Indian subcontinent during JJAS season for the time domain of 2001–2013 using NCEP CFSv2 output. It has been observed that the daily mean climatology of precipitation over the land points of India is underestimated in the model forecast as compared to observation. The monthly model bias of precipitation shows the dry bias over the land points of India and also over the Bay of Bengal, whereas the Himalayan and Arabian Sea regions show the wet bias. We have divided the Indian landmass into five subdivisions namely central India, southern India, Western Ghat, northeast and southern Bay of Bengal regions based on the spatial variation of observed mean precipitation in JJAS season. The underestimation over the land points of India during mature phase was originated from the central India, southern Bay of Bengal, southern India and Western Ghat regions. The error growth in June forecast is slower as compared to July forecast in all the regions. The predictability error also grows slowly in June forecast as compared to July forecast in most of the regions. The doubling time of predictability error was estimated to be in the range of 3–5 days for all the regions. Southern India and Western Ghats are more predictable in the July forecast as compared to June forecast, whereas IMR, northeast, central India and southern Bay of Bengal regions have the opposite nature.

  10. Prediction and error growth in the daily forecast of precipitation from the NCEP CFSv2 over the subdivisions of Indian subcontinent

    Science.gov (United States)

    Pandey, Dhruva Kumar; Rai, Shailendra; Sahai, A. K.; Abhilash, S.; Shahi, N. K.

    2016-02-01

    This study investigates the forecast skill and predictability of various indices of south Asian monsoon as well as the subdivisions of the Indian subcontinent during JJAS season for the time domain of 2001-2013 using NCEP CFSv2 output. It has been observed that the daily mean climatology of precipitation over the land points of India is underestimated in the model forecast as compared to observation. The monthly model bias of precipitation shows the dry bias over the land points of India and also over the Bay of Bengal, whereas the Himalayan and Arabian Sea regions show the wet bias. We have divided the Indian landmass into five subdivisions namely central India, southern India, Western Ghat, northeast and southern Bay of Bengal regions based on the spatial variation of observed mean precipitation in JJAS season. The underestimation over the land points of India during mature phase was originated from the central India, southern Bay of Bengal, southern India and Western Ghat regions. The error growth in June forecast is slower as compared to July forecast in all the regions. The predictability error also grows slowly in June forecast as compared to July forecast in most of the regions. The doubling time of predictability error was estimated to be in the range of 3-5 days for all the regions. Southern India and Western Ghats are more predictable in the July forecast as compared to June forecast, whereas IMR, northeast, central India and southern Bay of Bengal regions have the opposite nature.

  11. A method for predicting errors when interacting with finite state systems. How implicit learning shapes the user's knowledge of a system

    Energy Technology Data Exchange (ETDEWEB)

    Javaux, Denis

    2002-02-01

    This paper describes a method for predicting the errors that may appear when human operators or users interact with systems behaving as finite state systems. The method is a generalization of a method used for predicting errors when interacting with autopilot modes on modern, highly computerized airliners [Proc 17th Digital Avionics Sys Conf (DASC) (1998); Proc 10th Int Symp Aviat Psychol (1999)]. A cognitive model based on spreading activation networks is used for predicting the user's model of the system and its impact on the production of errors. The model strongly posits the importance of implicit learning in user-system interaction and its possible detrimental influence on users' knowledge of the system. An experiment conducted with Airbus Industrie and a major European airline on pilots' knowledge of autopilot behavior on the A340-200/300 confirms the model predictions, and in particular the impact of the frequencies with which specific state transitions and contexts are experienced.

  12. Predicting stem borer density in maize using RapidEye data and generalized linear models

    Science.gov (United States)

    Abdel-Rahman, Elfatih M.; Landmann, Tobias; Kyalo, Richard; Ong'amo, George; Mwalusepo, Sizah; Sulieman, Saad; Ru, Bruno Le

    2017-05-01

    Average maize yield in eastern Africa is 2.03 t ha-1, compared with a global average of 6.06 t ha-1, owing to biotic and abiotic constraints. Amongst the biotic production constraints in Africa, stem borers are the most injurious. In eastern Africa, maize yield losses due to stem borers are currently estimated at between 12% and 21% of total production. The objective of the present study was to explore the potential of RapidEye spectral data for assessing stem borer larva densities in maize fields at two study sites in Kenya. RapidEye images were acquired for the Bomet (western Kenya) site on 9 December 2014 and 27 January 2015, and for Machakos (eastern Kenya) on 3 January 2015. Five RapidEye spectral bands as well as 30 spectral vegetation indices (SVIs) were utilized to predict per-field maize stem borer larva densities using generalized linear models (GLMs), assuming Poisson ('Po') and negative binomial ('NB') distributions. Root mean square error (RMSE) and ratio of prediction to deviation (RPD) statistics were used to assess model performance using a leave-one-out cross-validation approach. The zero-inflated NB ('ZINB') models outperformed the 'NB' models, and stem borer larva densities could only be predicted during the mid growing season, in December and early January at the two study sites, respectively (RMSE = 0.69-1.06 and RPD = 8.25-19.57). Overall, all models performed similarly whether all 30 SVIs (non-nested) or only the significant (nested) SVIs were used. The models developed could improve decision making regarding the control of maize stem borers within integrated pest management (IPM) interventions.
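
The evaluation protocol described above, fitting a count-model GLM and scoring it by leave-one-out cross-validation with RMSE and RPD, can be sketched as follows. This is a generic illustration with synthetic data, not the study's code: scikit-learn's `PoissonRegressor` stands in for the GLMs, and the zero-inflated variant is omitted.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import LeaveOneOut

def loo_rmse_rpd(X, y):
    """Leave-one-out RMSE and RPD (SD of observations / RMSE) for a Poisson GLM."""
    preds = np.empty(len(y), dtype=float)
    for train, test in LeaveOneOut().split(X):
        model = PoissonRegressor(alpha=0.0).fit(X[train], y[train])
        preds[test] = model.predict(X[test])
    rmse = np.sqrt(np.mean((y - preds) ** 2))
    rpd = np.std(y, ddof=1) / rmse
    return rmse, rpd

# Toy example: larva counts driven by one synthetic vegetation index.
rng = np.random.default_rng(42)
svi = rng.normal(size=(30, 1))
counts = rng.poisson(np.exp(0.4 + 0.8 * svi[:, 0]))
rmse, rpd = loo_rmse_rpd(svi, counts)
```

Higher RPD indicates predictions that are tight relative to the natural spread of the observations, which is how the abstract's RPD range of 8.25-19.57 should be read.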

  13. Quantitative Structure Property Relations (QSPR) for Predicting Molar Diamagnetic Susceptibilities, χm, of Inorganic Compounds

    Institute of Scientific and Technical Information of China (English)

    MU,Lai-Long; HE,Hong-Mei; FENG,Chang-Jun

    2007-01-01

    For predicting the molar diamagnetic susceptibilities of inorganic compounds, a novel connectivity index ^mG based on the adjacency matrix of molecular graphs and an ionic parameter gi was proposed. The gi is defined as gi = (ni^0.5 - 0.91)^4·xi^0.5/Zi^0.5, where Zi, ni and xi are the valence, the principal quantum number of the outer electron shell, and the electronegativity of atom i, respectively. Good QSPR models for the molar diamagnetic susceptibilities can be constructed from ^0G and ^1G by using the multivariate linear regression (MLR) method and the artificial neural network (NN) method. For the 144 inorganic compounds, the correlation coefficient r, standard error and average absolute deviation are 0.9868, 5.47 cgs and 4.33 cgs for the MLR model, and 0.9885, 5.09 cgs and 4.06 cgs for the NN model. Cross-validation using the leave-one-out method demonstrates that the MLR model is highly reliable from the point of view of statistics. The average absolute deviations of the predicted molar diamagnetic susceptibilities of another 62 inorganic compounds (test set) are 4.72 cgs and 4.06 cgs for the MLR model and NN model. The results show that the current method is more effective than literature methods for estimating the molar diamagnetic susceptibility of an inorganic compound. Both MLR and NN methods can provide acceptable models for the prediction of the molar diamagnetic susceptibilities. The NN model for the molar diamagnetic susceptibilities appears more reliable than the MLR model.
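
The ionic parameter defined in the abstract translates directly into a small function. The example call below uses illustrative values only (Z = 1, n = 3, x = 0.93 are assumptions, not data from the paper):

```python
def ionic_parameter(Z, n, x):
    """Ionic parameter from the abstract:
    g_i = (n_i**0.5 - 0.91)**4 * x_i**0.5 / Z_i**0.5,
    where Z is the valence, n the principal quantum number of the
    outer electron shell, and x the electronegativity of atom i."""
    return (n ** 0.5 - 0.91) ** 4 * x ** 0.5 / Z ** 0.5

# Illustrative call for a hypothetical monovalent ion (assumed values):
g = ionic_parameter(1, 3, 0.93)
```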

  14. Posterior medial frontal cortex activity predicts post-error adaptations in task-related visual and motor areas

    NARCIS (Netherlands)

    Danielmeier, C.; Eichele, T.; Forstmann, B.U.; Tittgemeyer, M.; Ullsperger, M.

    2011-01-01

    As Seneca the Younger put it, "To err is human, but to persist is diabolical." To prevent repetition of errors, human performance monitoring often triggers adaptations such as general slowing and/or attentional focusing. The posterior medial frontal cortex (pMFC) is assumed to monitor performance pr

  15. Population-based Stroke Atlas for outcome prediction: method and preliminary results for ischemic stroke from CT.

    Directory of Open Access Journals (Sweden)

    Wieslaw L Nowinski

    Full Text Available BACKGROUND AND PURPOSE: Knowledge of outcome prediction is important in stroke management. We propose a lesion size and location-driven method for stroke outcome prediction using a Population-based Stroke Atlas (PSA) linking neurological parameters with neuroimaging in a population. The PSA aggregates data from previously treated patients and applies them to currently treated patients. The PSA parameter distribution in the infarct region of a treated patient enables prediction. We introduce a method for PSA calculation, quantify its performance, and use it to illustrate ischemic stroke outcome prediction of the modified Rankin Scale (mRS) and Barthel Index (BI). METHODS: The preliminary PSA was constructed from 128 ischemic stroke cases, calculated for 8 variants (various data aggregation schemes) and 3 case selection variables (infarct volume, NIHSS at admission, and NIHSS at day 7), each in 4 ranges. Outcome prediction for 9 parameters (mRS at the 7th, and mRS and BI at the 30th, 90th, 180th and 360th day) was studied using a leave-one-out approach, requiring 589,824 PSA maps to be analyzed. RESULTS: Outcomes predicted for different PSA variants are statistically equivalent, so the simplest and most efficient variant, aiming at parameter averaging, is employed. This variant allows the PSA to be pre-calculated before prediction. The PSA constrained by infarct volume and NIHSS reduces the average prediction error (absolute difference between the predicted and actual values) by a fraction of 0.796; the use of 3 patient-specific variables further lowers it by 0.538. The PSA-based prediction error for mild and severe outcomes (mRS = 2-5) is 0.5-0.7. Prediction takes about 8 seconds. CONCLUSIONS: PSA-based prediction of individual and group mRS and BI scores over time is feasible, fast and simple, but its clinical usefulness requires further studies. The case selection operation improves PSA predictability. A multiplicity of PSAs can be computed independently for

  16. In silico model for predicting soil organic carbon normalized sorption coefficient (K(OC)) of organic chemicals.

    Science.gov (United States)

    Wang, Ya; Chen, Jingwen; Yang, Xianhai; Lyakurwa, Felichesmi; Li, Xuehua; Qiao, Xianliang

    2015-01-01

    As an in silico method, quantitative structure-activity relationship (QSAR) modeling has been shown to be an efficient way to predict soil organic carbon normalized sorption coefficient (KOC) values. In the present study, a total of 824 logKOC values were used to develop and validate a QSAR model for predicting KOC values. The model statistics, namely an adjusted determination coefficient (R(2)adj) of 0.854, a root mean square error (RMSE) of 0.472, a leave-one-out cross-validation squared correlation coefficient (Q(2)LOO) of 0.850, an external validation coefficient Q(2)ext of 0.761 and an RMSEext of 0.558, indicate satisfactory goodness of fit, robustness and predictive ability. The squared Moriguchi octanol-water partition coefficient (MLOGP2) explained 66.5% of the logKOC variance. The applicability domain of the current model has been extended to emerging pollutants such as polybrominated diphenyl ethers, perfluorochemicals and heterocyclic toxins. The developed model can be used to predict compounds with various functional groups including C=C, -C≡C-, -OH, -O-, -CHO, C=O, -C=O(O), -COOH, -C6H5, -NO2, -NH2, -NH-, N-, -N-N-, -NH-C(O)-NH-, -O-C(O)-NH2, -C(O)-NH2, -X(F, Cl, Br, I), -S-, -SH, -S(O)2-, -OS(O)2-, -NH-S(O)2-, (SR)2PH(OR)2 and Si. Copyright © 2014 Elsevier Ltd. All rights reserved.
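
For an ordinary least squares model like the QSAR fit above, the leave-one-out statistic Q²(LOO) can be computed without refitting n times, using the standard hat-matrix identity for LOO residuals. This is a generic sketch of that computation, not the authors' code:

```python
import numpy as np

def q2_loo(X, y):
    """Leave-one-out cross-validated Q^2 for ordinary least squares.
    Uses the hat-matrix shortcut: the LOO residual equals e_i / (1 - h_ii),
    so Q^2 = 1 - PRESS / SS_tot without n separate refits."""
    Xd = np.column_stack([np.ones(len(y)), X])   # design matrix with intercept
    H = Xd @ np.linalg.pinv(Xd)                  # hat matrix (maps y to fitted y)
    e = y - H @ y                                # ordinary residuals
    press = np.sum((e / (1.0 - np.diag(H))) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / ss_tot
```

A Q²(LOO) near 1 (such as the 0.850 reported above) indicates that each compound is predicted well by a model trained on all the others.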

  17. ResQ: An Approach to Unified Estimation of B-Factor and Residue-Specific Error in Protein Structure Prediction.

    Science.gov (United States)

    Yang, Jianyi; Wang, Yan; Zhang, Yang

    2016-02-22

    Computer-based structure prediction becomes a major tool to provide large-scale structure models for annotating biological function of proteins. Information of residue-level accuracy and thermal mobility (or B-factor), which is critical to decide how biologists utilize the predicted models, is however missed in most structure prediction pipelines. We developed ResQ for unified residue-level model quality and B-factor estimations by combining local structure assembly variations with sequence-based and structure-based profiling. ResQ was tested on 635 non-redundant proteins with structure models generated by I-TASSER, where the average difference between estimated and observed distance errors is 1.4Å for the confidently modeled proteins. ResQ was further tested on structure decoys from CASP9-11 experiments, where the error of local structure quality prediction is consistently lower than or comparable to other state-of-the-art predictors. Finally, ResQ B-factor profile was used to assist molecular replacement, which resulted in successful solutions on several proteins that could not be solved from constant B-factor settings.

  18. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations.

    Science.gov (United States)

    Seoane, Fernando; Abtahi, Shirin; Abtahi, Farhad; Ellegård, Lars; Johannsson, Gudmundur; Bosaeus, Ingvar; Ward, Leigh C

    2015-01-01

    For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications.
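
The Mean Absolute Percentage Error used in the comparison above is straightforward to compute against a criterion measurement (the TBW values in the example call are made-up numbers, purely illustrative):

```python
import numpy as np

def mape(reference, estimate):
    """Mean absolute percentage error of an estimate against a criterion method."""
    reference = np.asarray(reference, dtype=float)
    estimate = np.asarray(estimate, dtype=float)
    return 100.0 * np.mean(np.abs(estimate - reference) / np.abs(reference))

# e.g. TBW in litres from dilution (criterion) vs. a bioimpedance prediction
error_pct = mape([40.0, 35.0, 50.0], [42.0, 34.0, 49.0])
```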

  19. Seasonal prediction of tropical cyclone activity over the north Indian Ocean using three artificial neural networks

    Science.gov (United States)

    Nath, Sankar; Kotal, S. D.; Kundu, P. K.

    2016-12-01

    Three artificial neural network (ANN) methods, namely, multilayer perceptron (MLP), radial basis function (RBF) and generalized regression neural network (GRNN), are utilized to predict the seasonal tropical cyclone (TC) activity over the north Indian Ocean (NIO) during the post-monsoon season (October, November, December). The frequency of TC and large-scale climate variables derived from the NCEP/NCAR reanalysis dataset of resolution 2.5° × 2.5° were analyzed for the period 1971-2013. Data for the years 1971-2002 were used for the development of the models, which were tested with independent sample data for the years 2003-2013. Using correlation analysis, five large-scale climate variables, namely, geopotential height at 500 hPa, relative humidity at 500 hPa, sea-level pressure, and zonal wind at 700 hPa and 200 hPa for the preceding month of September, are selected as potential predictors of the post-monsoon season TC activity. The results reveal that all three ANN methods are able to provide satisfactory forecasts in terms of various metrics, such as root mean square error (RMSE), standard deviation (SD), correlation coefficient (r), bias and index of agreement (d). Additionally, the leave-one-out cross-validation (LOOCV) method is also performed and the forecast skill is evaluated. The results show that the MLP model is superior to the other two models (RBF, GRNN). The MLP model is expected to be very useful to operational forecasters for prediction of TC activity.
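
The verification metrics listed (RMSE, bias, correlation r and the index of agreement d, here taken to be Willmott's d) can be computed from paired observed and forecast series. This is a generic sketch, not tied to the CFS or ANN outputs of the study:

```python
import numpy as np

def forecast_skill(obs, pred):
    """RMSE, bias, Pearson r and Willmott's index of agreement d
    (d = 1 for a perfect forecast, approaching 0 for no skill)."""
    obs = np.asarray(obs, dtype=float)
    pred = np.asarray(pred, dtype=float)
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    bias = np.mean(pred - obs)
    r = np.corrcoef(obs, pred)[0, 1]
    denom = np.sum((np.abs(pred - obs.mean()) + np.abs(obs - obs.mean())) ** 2)
    d = 1.0 - np.sum((pred - obs) ** 2) / denom
    return {"rmse": rmse, "bias": bias, "r": r, "d": d}
```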

  20. A MULTIPLE INTELLIGENT AGENT SYSTEM FOR CREDIT RISK PREDICTION VIA AN OPTIMIZATION OF LOCALIZED GENERALIZATION ERROR WITH DIVERSITY

    Institute of Scientific and Technical Information of China (English)

    Daniel S. YEUNG; Wing W. Y. NG; Aki P. F. CHAN; Patrick P. K. CHAN; Michael FIRTH; Eric C. C. TSANG

    2007-01-01

    Company bankruptcies cost billions of dollars in losses to banks each year. Thus credit risk prediction is a critical part of a bank's loan approval decision process. Traditional financial models for credit risk prediction are no longer adequate for describing today's complex relationship between the financial health and potential bankruptcy of a company. In this work, a multiple classifier system (embedded in a multiple intelligent agent system) is proposed to predict the financial health of a company. In our model, each individual agent (classifier) makes a prediction on the likelihood of credit risk based on only partial information of the company. Each of the agents is an expert, but has limited knowledge (represented by features) about the company. The decisions of all agents are combined together to form a final credit risk prediction. Experiments show that our model outperforms other existing methods using the benchmarking Compustat American Corporations dataset.

  1. Short-term Wind Speed Prediction with Support Vector Machine Based on Prediction Error Correction

    Institute of Scientific and Technical Information of China (English)

    周松林; 茆美琴; 苏建徽

    2012-01-01

    An accurate prediction of wind speed is important for power departments to adjust dispatching plans in time. A support vector machine (SVM) model was established for forecasting wind speed, and a new approach of using prediction error correction to improve prediction accuracy was proposed. An SVM model was first built for a preliminary wind speed prediction; the resulting training and testing errors were then used to construct samples for an error prediction model based on wavelet-support vector machine, which was used to predict the error. Finally, the preliminary prediction values were corrected with the predicted error. Simulation results show that the proposed method can significantly improve prediction accuracy; the method is simple, clear and robust, and can be extended to long-term wind speed prediction, load prediction and other forecasting fields.
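
The two-stage scheme described, a base SVM forecast followed by a second model trained on the base model's errors, can be sketched as below. This is an illustration of the error-correction idea only: plain SVR stands in for both stages, the paper's wavelet decomposition of the error series is omitted, and all data are synthetic.

```python
import numpy as np
from sklearn.svm import SVR

def error_corrected_forecast(X_train, y_train, X_test):
    """Stage 1: a base model predicts the target (e.g. wind speed).
    Stage 2: a second model, trained on the stage-1 residuals, predicts
    the error; the final forecast is base prediction + predicted error."""
    base = SVR().fit(X_train, y_train)
    residuals = y_train - base.predict(X_train)
    err_model = SVR().fit(X_train, residuals)
    return base.predict(X_test) + err_model.predict(X_test)

# Synthetic lagged-feature data (purely illustrative, not wind measurements)
rng = np.random.default_rng(1)
X = rng.uniform(0, 10, size=(60, 2))
y = 0.5 * X[:, 0] + np.sin(X[:, 1])
corrected = error_corrected_forecast(X[:50], y[:50], X[50:])
```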

  2. Predicting Heats of Explosion of Nitroaromatic Compounds through NBO Charges and 15N NMR Chemical Shifts of Nitro Groups

    OpenAIRE

    Ricardo Infante-Castillo; Samuel P. Hernández-Rivera

    2012-01-01

    This work presents a new quantitative model to predict the heat of explosion of nitroaromatic compounds using the natural bond orbital (NBO) charge and 15N NMR chemical shifts of the nitro groups (15NNitro) as structural parameters. The values of the heat of explosion predicted for 21 nitroaromatic compounds using the model described here were compared with experimental data. The prediction ability of the model was assessed by the leave-one-out cross-validation method. The cross-validation re...

  3. A new method for class prediction based on signed-rank algorithms applied to Affymetrix® microarray experiments

    Directory of Open Access Journals (Sweden)

    Vassal Aurélien

    2008-01-01

    Full Text Available Abstract Background The huge amount of data generated by DNA chips is a powerful basis for classifying various pathologies. However, the constant evolution of microarray technology makes it difficult to mix data from different chip types for class prediction of limited sample populations. Affymetrix® technology provides both a quantitative fluorescence signal and a decision (detection call: absent or present) based on signed-rank algorithms applied to several hybridization repeats of each gene, with a per-chip normalization. We developed a new prediction method for class belonging based on the detection call only from recent Affymetrix chip types. Biological data were obtained by hybridization on U133A, U133B and U133Plus 2.0 microarrays of purified normal B cells and cells from three independent groups of multiple myeloma (MM) patients. Results After a call-based data reduction step to filter out non-class-discriminative probe sets, the gene list obtained was reduced to a predictor, with correction for multiple testing, by iterative deletion of probe sets that sequentially improve inter-class comparisons and their significance. The error rate of the method was determined using leave-one-out and 5-fold cross-validation. It was successfully applied to (i) determine a sex predictor with the normal donor group, classifying gender with no error in all patient groups except for male MM samples with a Y chromosome deletion, (ii) predict the immunoglobulin light and heavy chains expressed by the malignant myeloma clones of the validation group and (iii) predict sex, light and heavy chain nature for every new patient. Finally, this method was shown to be powerful when compared to the popular classification method Prediction Analysis of Microarray (PAM). Conclusion This normalization-free method is routinely used for quality control and correction of collection errors in patient reports to clinicians. It can be easily extended to multiple class prediction suitable with

  4. Spectral Entropy Can Predict Changes of Working Memory Performance Reduced by Short-Time Training in the Delayed-Match-to-Sample Task

    Directory of Open Access Journals (Sweden)

    Yin Tian

    2017-08-01

    Full Text Available Spectral entropy, generated by applying the Shannon entropy concept to the power distribution of the Fourier-transformed electroencephalograph (EEG), was utilized to measure the uniformity of the power spectral density underlying the EEG when subjects performed working memory tasks twice, i.e., before and after training. According to Signed Residual Time (SRT) scores based on the trade-off between response speed and accuracy, 20 subjects were divided into two groups, namely high-performance and low-performance groups, to undertake working memory (WM) tasks. We found that spectral entropy derived from the retention period of WM on channel FC4 exhibited a high correlation with SRT scores. To this end, spectral entropy was used in a support vector machine classifier with a linear kernel to differentiate the two groups. Receiver operating characteristic analysis and leave-one-out cross-validation (LOOCV) demonstrated that the averaged classification accuracy (CA) was 90.0% and 92.5% for intra-session and inter-session, respectively, indicating that spectral entropy could be used to distinguish these two WM performance groups successfully. Furthermore, a support vector regression prediction model with a radial basis function kernel and the root mean square error of prediction revealed that spectral entropy could be utilized to predict SRT scores on individual WM performance. After testing the changes in SRT scores and spectral entropy for each subject after short-time training, we found that the SRT scores of 16 of 20 subjects were clearly promoted after training, and for 15 of 20 subjects the SRT scores showed changes consistent with spectral entropy before and after training. The findings reveal that spectral entropy could be a promising indicator for predicting an individual's WM changes with training, and further provide a novel application of WM for brain-computer interfaces.
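
The spectral entropy measure described, Shannon entropy applied to the normalized power spectrum of a signal, can be sketched generically as below (a minimal implementation, not the authors' EEG pipeline; the sine and noise signals are synthetic examples):

```python
import numpy as np

def spectral_entropy(signal):
    """Shannon entropy of the normalized power spectral density,
    scaled by log(number of bins) so the result lies in [0, 1]:
    near 0 for a pure tone, near 1 for spectrally flat (white) noise."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    psd = psd / psd.sum()
    nz = psd[psd > 0]                      # drop zero bins to avoid log(0)
    return float(-np.sum(nz * np.log(nz)) / np.log(len(psd)))

rng = np.random.default_rng(0)
t = np.arange(1024) / 256.0
tone = np.sin(2 * np.pi * 10 * t)          # power concentrated in one bin
noise = rng.normal(size=1024)              # power spread across all bins
```

A flat spectrum maximizes the entropy, so a higher value means the EEG power is spread more uniformly across frequencies.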

  5. Individual Bayesian Information Matrix for Predicting Estimation Error and Shrinkage of Individual Parameters Accounting for Data Below the Limit of Quantification.

    Science.gov (United States)

    Nguyen, Thi Huyen Tram; Nguyen, Thu Thuy; Mentré, France

    2017-06-28

    In mixed models, the relative standard errors (RSE) and shrinkage of individual parameters can be predicted from the individual Bayesian information matrix (MBF). We proposed an approach accounting for data below the limit of quantification (LOQ) in MBF. MBF is the sum of the expectation of the individual Fisher information (MIF), which can be evaluated by first-order linearization, and the inverse of the random effect variance. We expressed the individual information as a weighted sum of the predicted MIF for every possible design composed of measurements above and/or below the LOQ. When evaluating MIF, we derived the likelihood expressed as the product of the likelihood of the observed data and the probability for data to be below the LOQ. The relevance of RSE and shrinkage predicted by MBF in the absence or presence of data below the LOQ was evaluated by simulations, using a pharmacokinetic/viral kinetic model defined by differential equations. Simulations showed good agreement between predicted and observed RSE and shrinkage in the absence or presence of data below the LOQ. We found that RSE and shrinkage increased with sparser designs and with data below the LOQ. The proposed method based on MBF adequately predicted individual RSE and shrinkage, allowing for the evaluation of a large number of scenarios without extensive simulations.

  6. Mean Expected Error in Prediction of Total Body Water: A True Accuracy Comparison between Bioimpedance Spectroscopy and Single Frequency Regression Equations

    Directory of Open Access Journals (Sweden)

    Fernando Seoane

    2015-01-01

    Full Text Available For several decades electrical bioimpedance (EBI) has been used to assess body fluid distribution and body composition. Despite the development of several different approaches for assessing total body water (TBW), it remains uncertain whether bioimpedance spectroscopic (BIS) approaches are more accurate than single frequency regression equations. The main objective of this study was to answer this question by calculating the expected accuracy of a single measurement for different EBI methods. The results of this study showed that all methods produced similarly high correlation and concordance coefficients, indicating good accuracy as a method. Even the limits of agreement produced from the Bland-Altman analysis indicated that the performance of the single frequency, Sun's prediction equations, at population level was close to the performance of both BIS methods; however, when comparing the Mean Absolute Percentage Error value between the single frequency prediction equations and the BIS methods, a significant difference was obtained, indicating slightly better accuracy for the BIS methods. Despite the higher accuracy of BIS methods over 50 kHz prediction equations at both population and individual level, the magnitude of the improvement was small. Such a slight improvement in the accuracy of BIS methods is suggested to be insufficient to warrant their clinical use where the most accurate predictions of TBW are required, for example, when assessing over-fluidic status on dialysis. To reach expected errors below 4-5%, novel and individualized approaches must be developed to improve the accuracy of bioimpedance-based methods for the advent of innovative personalized health monitoring applications.

  7. Influence of accelerometer type and placement on physical activity energy expenditure prediction in manual wheelchair users.

    Directory of Open Access Journals (Sweden)

    Tom Edward Nightingale

    Full Text Available To assess the validity of two accelerometer devices, at two different anatomical locations, for the prediction of physical activity energy expenditure (PAEE) in manual wheelchair users (MWUs). Seventeen MWUs (36 ± 10 yrs, 72 ± 11 kg) completed ten activities: resting, folding clothes, propulsion on a 1% gradient (3, 4, 5, 6 and 7 km·hr-1) and propulsion at 4 km·hr-1 (with an additional 8% body mass, 2% and 3% gradient) on a motorised wheelchair treadmill. GT3X+ and GENEActiv accelerometers were worn on the right wrist (W) and upper arm (UA). Linear regression analysis was conducted between outputs from each accelerometer and criterion PAEE, measured using indirect calorimetry. Subsequent error statistics were calculated for the derived regression equations for all four device/location combinations, using a leave-one-out cross-validation analysis. Accelerometer outputs at each anatomical location were significantly (p < .01) associated with PAEE (GT3X+-UA, r = 0.68; GT3X+-W, r = 0.82; GENEActiv-UA, r = 0.87; GENEActiv-W, r = 0.88). Mean ± SD PAEE estimation errors for all activities combined were 15 ± 45%, 14 ± 50%, 3 ± 25% and 4 ± 26% for GT3X+-UA, GT3X+-W, GENEActiv-UA and GENEActiv-W, respectively. Absolute PAEE estimation errors varied from 19 to 66% for GT3X+-UA, 17 to 122% for GT3X+-W, 15 to 26% for GENEActiv-UA and 17 to 32% for GENEActiv-W. The results indicate that the GENEActiv device worn on either the upper arm or wrist provides the most valid prediction of PAEE in MWUs. Variation in error statistics between the two devices is a result of inherent differences in internal components, on-board filtering processes and outputs of each device.
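    The leave-one-out cross-validation used to derive such error statistics can be sketched as follows. This is a minimal illustration with made-up accelerometer counts and PAEE values, not the study's data; each participant is withheld in turn, a linear regression is fitted to the rest, and the withheld participant's percentage estimation error is recorded.

```python
import numpy as np

# Hypothetical data: accelerometer output (x) and criterion PAEE from
# indirect calorimetry (y) for eight participants; values are illustrative.
x = np.array([120.0, 340.0, 560.0, 610.0, 150.0, 480.0, 700.0, 260.0])
y = np.array([2.1, 4.9, 7.8, 8.6, 2.4, 6.9, 9.8, 3.9])

errors_pct = []
n = len(x)
for i in range(n):
    # Leave participant i out and fit y = a*x + b on the remainder
    mask = np.arange(n) != i
    a, b = np.polyfit(x[mask], y[mask], 1)
    pred = a * x[i] + b
    errors_pct.append(100.0 * (pred - y[i]) / y[i])

errors_pct = np.array(errors_pct)
print(f"mean error = {errors_pct.mean():.1f}%, SD = {errors_pct.std(ddof=1):.1f}%")
```

The mean ± SD of these left-out errors corresponds to the per-device error statistics quoted in the abstract.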

  8. Predictive QSAR modeling of phosphodiesterase 4 inhibitors.

    Science.gov (United States)

    Kovalishyn, Vasyl; Tanchuk, Vsevolod; Charochkina, Larisa; Semenuta, Ivan; Prokopenko, Volodymyr

    2012-02-01

    A series of diverse organic compounds, phosphodiesterase type 4 (PDE-4) inhibitors, have been modeled using a QSAR-based approach. 48 QSAR models were compared by following the same procedure with different combinations of descriptors and machine learning methods. QSAR methodologies used random forests and associative neural networks. The predictive ability of the models was tested through leave-one-out cross-validation, giving a Q² = 0.66-0.78 for regression models and total accuracies Ac=0.85-0.91 for classification models. Predictions for the external evaluation sets obtained accuracies in the range of 0.82-0.88 (for active/inactive classifications) and Q² = 0.62-0.76 for regressions. The method showed itself to be a potential tool for estimation of IC₅₀ of new drug-like candidates at early stages of drug development. Copyright © 2011 Elsevier Inc. All rights reserved.
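    The leave-one-out Q² reported for such regression models is conventionally computed as 1 − PRESS/TSS, where PRESS accumulates the squared errors of the left-out predictions. A minimal sketch on toy data (the simple linear model and the data are illustrative, not the QSAR models of the study):

```python
import numpy as np

def loo_q2(x, y):
    """Leave-one-out Q^2 = 1 - PRESS / TSS for a simple linear model.

    PRESS sums squared errors of each left-out prediction; TSS is the
    total sum of squares about the mean of y.
    """
    n = len(x)
    press = 0.0
    for i in range(n):
        mask = np.arange(n) != i
        a, b = np.polyfit(x[mask], y[mask], 1)   # refit without sample i
        press += (y[i] - (a * x[i] + b)) ** 2
    tss = np.sum((y - y.mean()) ** 2)
    return 1.0 - press / tss

# Nearly linear toy response, so Q^2 should be close to 1
x = np.linspace(0, 10, 20)
y = 0.5 * x + 1.0 + 0.1 * np.sin(x)
q2 = loo_q2(x, y)
print(f"Q2 = {q2:.3f}")
```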

  9. Hemispheric Asymmetries in Striatal Reward Responses Relate to Approach-Avoidance Learning and Encoding of Positive-Negative Prediction Errors in Dopaminergic Midbrain Regions.

    Science.gov (United States)

    Aberg, Kristoffer Carl; Doell, Kimberly C; Schwartz, Sophie

    2015-10-28

    Some individuals are better at learning about rewarding situations, whereas others are inclined to avoid punishments (i.e., enhanced approach or avoidance learning, respectively). In reinforcement learning, action values are increased when outcomes are better than predicted (positive prediction errors [PEs]) and decreased for worse than predicted outcomes (negative PEs). Because actions with high and low values are approached and avoided, respectively, individual differences in the neural encoding of PEs may influence the balance between approach-avoidance learning. Recent correlational approaches also indicate that biases in approach-avoidance learning involve hemispheric asymmetries in dopamine function. However, the computational and neural mechanisms underpinning such learning biases remain unknown. Here we assessed hemispheric reward asymmetry in striatal activity in 34 human participants who performed a task involving rewards and punishments. We show that the relative difference in reward response between hemispheres relates to individual biases in approach-avoidance learning. Moreover, using a computational modeling approach, we demonstrate that better encoding of positive (vs negative) PEs in dopaminergic midbrain regions is associated with better approach (vs avoidance) learning, specifically in participants with larger reward responses in the left (vs right) ventral striatum. Thus, individual dispositions or traits may be determined by neural processes acting to constrain learning about specific aspects of the world.

  10. Analysis of nodalization effects on the prediction error of generalized finite element method used for dynamic modeling of hot water storage tank

    Directory of Open Access Journals (Sweden)

    Wołowicz Marcin

    2015-09-01

    Full Text Available The paper presents a dynamic model of a hot water storage tank. A review of the literature is provided. An analysis of the effects of nodalization on the prediction error of the generalized finite element method (GFEM) is given. The model takes into account eleven parameters, such as flue gas volumetric flow rate to the spiral, inlet water temperature, outlet water flow rate, etc. The boiler is also described by sizing parameters, nozzle parameters and heat loss, including ambient temperature. The model has been validated against existing data, and appropriate laboratory experiments were conducted. A comparison between 1-, 5-, 10- and 50-zone boiler models is presented; plots compare experiment and simulation for the different zone numbers of the boiler model. The reasons for the differences between experiment and simulation are explained.

  11. Refining measurements of lateral channel movement from image time series by quantifying spatial variations in registration error

    Science.gov (United States)

    Lea, Devin M.; Legleiter, Carl J.

    2016-04-01

    Remotely sensed data provides information on river morphology useful for examining channel change at yearly-to-decadal time scales. Although previous studies have emphasized the need to distinguish true geomorphic change from errors associated with image registration, standard metrics for assessing and summarizing these errors, such as the root-mean-square error (RMSE) and 90th percentile of the distribution of ground control point (GCP) error, fail to incorporate the spatial structure of this uncertainty. In this study, we introduce a framework for evaluating whether observations of lateral channel migration along a meandering channel are statistically significant, given the spatial distribution of registration error. An iterative leave-one-out cross-validation approach was used to produce local error metrics for an image time series from Savery Creek, Wyoming, USA, and to evaluate various transformation equations, interpolation methods, and GCP placement strategies. Interpolated error surfaces then were used to create error ellipses representing spatially variable buffers of detectable change. Our results show that, for all five sequential image pairs we examined, spatially distributed estimates of registration error enabled detection of a greater number of statistically significant lateral migration vectors than the spatially uniform RMSE or 90th percentile of GCP error. Conversely, spatially distributed error metrics prevented changes from being mistaken as real in areas of greater registration error. Our results also support the findings of previous studies: second-order polynomial functions on average yield the lowest RMSE, and errors are reduced by placing GCPs on the floodplain rather than on hillslopes. This study highlights the importance of characterizing the spatial distribution of image registration errors in the analysis of channel change.
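    The iterative leave-one-out treatment of ground control points can be sketched as follows: each GCP is withheld in turn, a second-order polynomial transformation is fitted to the remaining GCPs, and the residual at the withheld point yields a spatially indexed local error estimate. The GCP coordinates, underlying mapping, and noise level below are hypothetical.

```python
import numpy as np

# Hypothetical GCPs: (col, row) positions in the image to be registered and
# matching (x, y) map coordinates; values are illustrative only.
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(12, 2))
true_xy = 2.0 * src + 5.0                       # underlying affine mapping
dst = true_xy + rng.normal(0, 0.3, src.shape)   # GCP-picking noise

def design(p):
    """Second-order polynomial terms [1, x, y, x^2, xy, y^2]."""
    x, y = p[:, 0], p[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])

loo_err = []
n = len(src)
for i in range(n):
    mask = np.arange(n) != i
    A = design(src[mask])
    coef, *_ = np.linalg.lstsq(A, dst[mask], rcond=None)  # fit without GCP i
    pred = design(src[i:i+1]) @ coef
    loo_err.append(np.linalg.norm(pred - dst[i]))

loo_err = np.array(loo_err)   # local registration errors, one per GCP
rmse = np.sqrt((loo_err**2).mean())
print(f"RMSE = {rmse:.2f}, max = {loo_err.max():.2f}")
```

Interpolating `loo_err` over the image is what yields the spatially variable error surfaces (and error ellipses) described above, in place of a single global RMSE.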

  12. A predictive model of music preference using pairwise comparisons

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand; Gallego, Javier Saez; Larsen, Jan

    2012-01-01

    Music recommendation is an important aspect of many streaming services and multi-media systems; however, it is typically based on so-called collaborative filtering methods. In this paper we consider the recommendation task from a personal viewpoint and examine to which degree music preference can be elicited and predicted using simple and robust queries such as pairwise comparisons. We propose to model - and in turn predict - the pairwise music preference using a very flexible model based on Gaussian Process priors, for which we describe the required inference. We further propose a specific covariance function and evaluate the predictive performance on a novel dataset. In a recommendation style setting we obtain a leave-one-out accuracy of 74% compared to 50% with random predictions, showing potential for further refinement and evaluation.

  13. A Memory-Based Learning Approach as Compared to Other Data Mining Algorithms for the Prediction of Soil Texture Using Diffuse Reflectance Spectra

    Directory of Open Access Journals (Sweden)

    Asa Gholizadeh

    2016-04-01

    Full Text Available Successful determination of soil texture using reflectance spectroscopy across the Visible and Near-Infrared (VNIR, 400–1200 nm) and Short-Wave-Infrared (SWIR, 1200–2500 nm) ranges depends largely on the selection of a suitable data mining algorithm. The objective of this research was to explore whether the new Memory-Based Learning (MBL) method performs better than the other methods, namely: Partial Least Squares Regression (PLSR), Support Vector Machine Regression (SVMR) and Boosted Regression Trees (BRT). For this purpose, we chose soil texture (contents of clay, silt and sand) as testing attributes. A selected set of soil samples, classified as Technosols, were collected from brown coal mining dumpsites in the Czech Republic (a total of 264 samples). Spectral readings were taken in the laboratory with a fiber optic ASD FieldSpec III Pro FR spectroradiometer. Leave-one-out cross-validation was used to optimize and validate the models. Comparisons were made in terms of the coefficient of determination (R2cv) and the Root Mean Square Error of Prediction of Cross-Validation (RMSEPcv). Predictions of the three soil properties by MBL outperformed the accuracy of the remaining algorithms. We found that MBL performs better than the other three methods by about 10% (largest R2cv and smallest RMSEPcv), followed by SVMR. It should be pointed out that the other methods (PLSR and BRT) still provided reliable results. The study concluded that, for this dataset, reflectance spectroscopy combined with the MBL algorithm is rapid and accurate, offers major efficiency and cost-saving possibilities in other datasets, and can lead to better targeting of management interventions.
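    Memory-based learning predicts each new spectrum from its nearest neighbors in the spectral library. The sketch below uses a simple distance-weighted average of neighbor values as a stand-in (real MBL fits a local multivariate model per neighborhood), with synthetic "spectra" instead of the study's VNIR-SWIR data, and evaluates it with the same leave-one-out RMSEP criterion.

```python
import numpy as np

def mbl_predict(X_lib, y_lib, x_new, k=5):
    """Predict a soil property for one new spectrum from its k nearest
    library spectra, weighting by inverse spectral distance (a minimal
    stand-in for memory-based learning)."""
    d = np.linalg.norm(X_lib - x_new, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + 1e-9)                 # inverse-distance weights
    return np.sum(w * y_lib[idx]) / np.sum(w)

# Toy "spectra": 50 samples x 10 bands; clay content linear in band means
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 10))
y = X.mean(axis=1) * 10.0 + 30.0

# Leave-one-out RMSEP over the library
preds = np.array([mbl_predict(np.delete(X, i, 0), np.delete(y, i), X[i])
                  for i in range(len(X))])
rmsep = np.sqrt(np.mean((preds - y) ** 2))
print(f"RMSEPcv = {rmsep:.2f}")
```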

  14. Factors that affect large subunit ribosomal DNA amplicon sequencing studies of fungal communities: classification method, primer choice, and error.

    Directory of Open Access Journals (Sweden)

    Teresita M Porter

    Full Text Available Nuclear large subunit ribosomal DNA is widely used in fungal phylogenetics and, to an increasing extent, also in amplicon-based environmental sequencing. The relatively short reads produced by next-generation sequencing, however, make primer choice and sequence error important variables for obtaining accurate taxonomic classifications. In this simulation study we tested the performance of three classification methods: 1) a similarity-based method (BLAST + Metagenomic Analyzer, MEGAN); 2) a composition-based method (Ribosomal Database Project naïve Bayesian classifier, NBC); and 3) a phylogeny-based method (Statistical Assignment Package, SAP). We also tested the effects of sequence length, primer choice, and sequence error on classification accuracy and perceived community composition. Using a leave-one-out cross-validation approach, results for classifications to the genus rank were as follows: BLAST + MEGAN had the lowest error rate and was particularly robust to sequence error; SAP accuracy was highest when long LSU query sequences were classified; and NBC runs significantly faster than the other tested methods. All methods performed poorly with the shortest 50-100 bp sequences. Increasing simulated sequence error reduced classification accuracy. Community shifts were detected due to sequence error and primer selection even though there was no change in the underlying community composition. Short read datasets from individual primers, as well as pooled datasets, appear to only approximate the true community composition. We hope this work informs investigators of some of the factors that affect the quality and interpretation of their environmental gene surveys.

  15. The effect of non-Gaussianity on error predictions for the Epoch of Reionization (EoR) 21-cm power spectrum

    CERN Document Server

    Mondal, Rajesh; Majumdar, Suman; Bera, Apurba; Acharyya, Ayan

    2014-01-01

    The EoR 21-cm signal is expected to become increasingly non-Gaussian as reionization proceeds. We have used semi-numerical simulations to study how this affects the error predictions for the EoR 21-cm power spectrum. We expect $SNR=\sqrt{N_k}$ for a Gaussian random field, where $N_k$ is the number of Fourier modes in each $k$ bin. We find that the effect of non-Gaussianity on the $SNR$ does not depend on $k$. Non-Gaussianity is important at high $SNR$, where it imposes an upper limit $[SNR]_l$. It is not possible to achieve $SNR > [SNR]_l$ even if $N_k$ is increased. The value of $[SNR]_l$ falls as reionization proceeds, dropping from $\sim 500$ at $\bar{x}_{\rm HI} = 0.8-0.9$ to $\sim 10$ at $\bar{x}_{\rm HI} = 0.15$. For $SNR \ll [SNR]_l$ we find $SNR = \sqrt{N_k}/A$ with $A \sim 1.5 - 2.5$, roughly consistent with the Gaussian prediction. We present a fitting formula for the $SNR$ as a function of $N_k$, with two parameters $A$ and $[SNR]_l$ that have to be determined using simulations. Our results are r...
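    The two limits quoted above (SNR ≈ √N_k/A at low SNR and saturation at [SNR]_l at high SNR) can be captured by a simple interpolating form. The specific functional form below is an assumption for illustration; the abstract does not reproduce the paper's actual fitting formula.

```python
import numpy as np

def snr_model(N_k, A, snr_l):
    """One plausible interpolation between the two quoted limits:
    SNR ~ sqrt(N_k)/A for small N_k, SNR -> [SNR]_l for large N_k.
    This functional form is assumed, not taken from the paper."""
    return snr_l * np.sqrt(N_k) / np.sqrt(N_k + (A * snr_l) ** 2)

A, snr_l = 2.0, 500.0
# Small-N_k limit: close to sqrt(N_k)/A
print(snr_model(100.0, A, snr_l), np.sqrt(100.0) / A)
# Large-N_k limit: saturates at [SNR]_l regardless of N_k
print(snr_model(1e9, A, snr_l), snr_l)
```

Given simulated (N_k, SNR) pairs, the two parameters A and [SNR]_l would then be obtained by least-squares fitting of such a curve.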

  16. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  17. Temporal Error Concealment Algorithm Based on Macroblock Mode Prediction

    Institute of Scientific and Technical Information of China (English)

    朱冰莲; 刘剑东

    2009-01-01

    Aiming at the problem that compressed video streams are inevitably corrupted by wireless channel errors, which may degrade reconstructed image quality, this paper presents an efficient temporal error concealment method based on an improved border-matching function and the flexible macroblock partition modes of the H.264 video standard. It uses the correlation between a lost macroblock and its surrounding macroblocks to predict the block mode, and estimates the motion vector of the lost macroblock according to that mode. Simulation results under a 3GPP/3GPP2 wireless channel show that, at the same RTP packet loss rate, the proposed method recovers higher-quality images than other approaches.

  18. SU-F-BRB-10: A Statistical Voxel Based Normal Organ Dose Prediction Model for Coplanar and Non-Coplanar Prostate Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Tran, A; Yu, V; Nguyen, D; Woods, K; Low, D; Sheng, K [UCLA, Los Angeles, CA (United States)

    2015-06-15

    Purpose: Knowledge learned from previous plans can be used to guide future treatment planning. Existing knowledge-based treatment planning methods study the correlation between organ geometry and the dose volume histogram (DVH), which is a lossy representation of the complete dose distribution. A statistical voxel dose learning (SVDL) model was developed that includes the complete dose volume information. Its accuracy in predicting volumetric-modulated arc therapy (VMAT) and non-coplanar 4π radiotherapy was quantified. SVDL provided more isotropic dose gradients and may improve knowledge-based planning. Methods: 12 prostate SBRT patients originally treated using two full-arc VMAT techniques were re-planned with 4π using 20 intensity-modulated non-coplanar fields to a prescription dose of 40 Gy. The bladder and rectum voxels were binned based on their distances to the PTV. The dose distribution in each bin was resampled by convolving with a Gaussian kernel, resulting in 1000 data points in each bin that predicted the statistical dose information of a voxel with unknown dose in a new patient, without discarding information that may be collectively important to a particular patient. We used this method to predict the DVHs and mean and max doses in a leave-one-out cross validation (LOOCV) test and compared its performance against lossy estimators including the mean, median, mode, Poisson and Rayleigh statistics of the voxelized dose distributions. Results: SVDL predicted the bladder and rectum doses more accurately than the other estimators, giving mean percentile errors of 13.35–19.46%, 4.81–19.47%, 22.49–28.69%, 23.35–30.5% and 21.05–53.93% for predicting the mean dose, max dose, V20, V35 and V40 of OARs, respectively, in both planning techniques. The prediction errors were generally lower for 4π than VMAT. Conclusion: By employing all dose volume information in the SVDL model, the OAR doses were more accurately predicted. 4π plans are better suited for knowledge-based planning than

  19. Temporal and Spatial prediction of groundwater levels using Artificial Neural Networks, Fuzzy logic and Kriging interpolation.

    Science.gov (United States)

    Tapoglou, Evdokia; Karatzas, George P.; Trichakis, Ioannis C.; Varouchakis, Emmanouil A.

    2014-05-01

    the ANN. Therefore, the neighborhood of each prediction point is the best available. Then, the appropriate variogram is determined by fitting the experimental variogram to a theoretical variogram model. Three models are examined: the linear, the exponential and the power-law. Finally, the hydraulic head change is predicted for every grid cell and for every time step used. All the algorithms used were developed in Visual Basic .NET, while the visualization of the results was performed in MATLAB using the .NET COM Interoperability. The results are evaluated using leave-one-out cross-validation and various performance indicators. The best results were achieved by using ANNs with two hidden layers, consisting of 20 and 15 nodes respectively, and by using the power-law variogram with the fuzzy logic system.
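    The three theoretical variogram models examined (linear, exponential and power-law) and a fit to an experimental variogram can be sketched as follows. The semivariance data are synthetic, and the crude grid-search fit is illustrative; kriging software typically uses weighted least squares for this step.

```python
import numpy as np

# The three theoretical variogram models compared in the study.
def linear_vgm(h, slope):
    return slope * h

def exponential_vgm(h, sill, rng_):
    return sill * (1.0 - np.exp(-h / rng_))

def power_vgm(h, c, exponent):
    return c * h ** exponent

# Toy experimental variogram: semivariance rising toward a sill of ~4
# with an effective range parameter of ~12 (synthetic, noisy data)
h = np.linspace(1, 50, 25)
gamma = (4.0 * (1.0 - np.exp(-h / 12.0))
         + np.random.default_rng(2).normal(0, 0.05, h.size))

# Grid-search fit of the exponential model by mean squared error
best = min(((sill, r, np.mean((exponential_vgm(h, sill, r) - gamma) ** 2))
            for sill in np.linspace(2, 6, 41)
            for r in np.linspace(5, 25, 41)),
           key=lambda t: t[2])
print(f"fitted sill = {best[0]:.2f}, range = {best[1]:.2f}")
```

The fitted variogram then supplies the covariances needed to solve the kriging system at each prediction point.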

  20. Predicting equilibrium vapour pressure isotope effects by using artificial neural networks or multi-linear regression - A quantitative structure property relationship approach.

    Science.gov (United States)

    Parinet, Julien; Julien, Maxime; Nun, Pierrick; Robins, Richard J; Remaud, Gerald; Höhener, Patrick

    2015-09-01

    We aim at predicting the effect of structure and isotopic substitutions on the equilibrium vapour pressure isotope effect of various organic compounds (alcohols, acids, alkanes, alkenes and aromatics) at intermediate temperatures. We explore quantitative structure property relationships by using artificial neural networks (ANN), namely the multi-layer perceptron (MLP), and compare its performance with multi-linear regression (MLR). These approaches are based on the relationship between the molecular structure (organic chain, polar functions, type of functions, type of isotope involved) of the organic compounds and their equilibrium vapour pressure. A data set of 130 equilibrium vapour pressure isotope effects was used: 112 were used in the training set and the remaining 18 were used for the test/validation dataset. Two sets of descriptors were tested: a set with all the descriptors (number of (12)C, (13)C, (16)O, (18)O, (1)H, (2)H, OH functions, OD functions, CO functions, Connolly Solvent Accessible Surface Area (CSA) and temperature) and a reduced set of descriptors. The dependent variable (the output) is the natural logarithm of the ratio of vapour pressures (ln R), expressed as light/heavy as in classical literature. Since the database is rather small, the leave-one-out procedure was used to validate both models. Considering the higher determination coefficients and lower error values, it is concluded that the multi-layer perceptron provided better results than multi-linear regression. The stepwise regression procedure is a useful tool to reduce the number of descriptors. To our knowledge, a Quantitative Structure Property Relationship (QSPR) approach for isotopic studies is novel.

  1. Is it possible to predict long-term success with k-NN? Case study of four market indices (FTSE100, DAX, HANGSENG, NASDAQ)

    Science.gov (United States)

    Shi, Y.; Gorban, A. N.; Y Yang, T.

    2014-03-01

    This case study tests the possibility of predicting the 'success' (or 'winner') components of four stock & shares market indices over a time period of three years, from 02-Jul-2009 to 29-Jun-2012. We compare their performance in two time frames: an initial frame of three months at the beginning (02/06/2009-30/09/2009) and a final three-month frame (02/04/2012-29/06/2012). To label the components, the average price ratio between the two time frames is computed in descending order. The average price ratio is defined as the ratio between the mean prices of the beginning and final time periods. The 'winner' components are the top one third of the total components in this order, meaning the mean price of the final time period is relatively higher than that of the beginning time period. The 'loser' components are the last one third of the total components, as they have higher mean prices in the beginning time period. We analyse whether there is any information about the winner-loser separation in the initial fragments of the daily closing-price log-return time series. Leave-One-Out Cross-Validation with the k-NN algorithm is applied to the daily log-returns of components, using a distance and proximity in the experiment. The error analysis shows that for the HANGSENG and DAX indices there are clear signs of a possibility to evaluate the probability of long-term success. The correlation distance matrix histograms and 2-D/3-D elastic maps generated with ViDaExpert show that the 'winner' components are closer to each other, and the 'winner'/'loser' components are separable on elastic maps for the HANGSENG and DAX indices, while for the negative-possibility indices there is no sign of separation.
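    The correlation-distance k-NN step with leave-one-out cross-validation can be sketched as follows. The toy log-return series below give 'winner' components a shared common factor so that the two classes are separable by correlation distance; the real study used the actual index components and their observed returns.

```python
import numpy as np

# Toy daily log-return series for 10 components over 60 days.
# Winners (label 1) share one common factor, losers (label 0) another,
# so winners correlate strongly with each other.
rng = np.random.default_rng(3)
labels = np.array([1] * 5 + [0] * 5)
f_win = rng.normal(0, 0.01, 60)
f_los = rng.normal(0, 0.01, 60)
noise = rng.normal(0, 0.005, (10, 60))
returns = np.where(labels[:, None] == 1, f_win, f_los) + noise

# Correlation distance between components: d = 1 - r
corr = np.corrcoef(returns)
dist = 1.0 - corr

k = 3
correct = 0
for i in range(len(labels)):
    order = np.argsort(dist[i])
    neighbours = [j for j in order if j != i][:k]   # leave component i out
    vote = labels[neighbours].mean() > 0.5          # majority vote of k-NN
    correct += int(vote == labels[i])
acc = correct / len(labels)
print(f"LOOCV accuracy = {acc:.2f}")
```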

  2. Filtered selection coupled with support vector machines generate a functionally relevant prediction model for colorectal cancer

    Directory of Open Access Journals (Sweden)

    Gabere MN

    2016-06-01

    Full Text Available Musa Nur Gabere,1 Mohamed Aly Hussein,1 Mohammad Azhar Aziz2 1Department of Bioinformatics, King Abdullah International Medical Research Center/King Saud bin Abdulaziz University for Health Sciences, Riyadh, Saudi Arabia; 2Colorectal Cancer Research Program, Department of Medical Genomics, King Abdullah International Medical Research Center, Riyadh, Saudi Arabia Purpose: There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods: In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy-maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four different kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results: The best model, which used 30 genes and the RBF kernel, outperformed other combinations; it had an accuracy of 84% for both ten-fold and leave-one-out cross-validations in discriminating the cancer samples from the normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, and they achieved a prediction accuracy of 95.27% and 91.99%, respectively, compared to other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1

  3. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized differently:
    - Universal vs. individual
    - Human-related vs. system-related
    - Perceptive vs. cognitive
    - Descriptive vs. interpretative vs. decision-related
    Perceptive errors: 1. false positive; 2. false negative (nonidentification or erroneous identification).
    Cognitive errors: knowledge-based; psychological.

  4. Toward a cognitive taxonomy of medical errors.

    OpenAIRE

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of e...

  5. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  6. Medication Errors

    Science.gov (United States)


  7. Fast Fourier Transform-based Support Vector Machine for Subcellular Localization Prediction Using Different Substitution Models

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    There are approximately 10⁹ proteins in a cell. A hotspot in bioinformatics is how to identify a protein's subcellular localization if its sequence is known. In this paper, a method using a fast Fourier transform-based support vector machine is developed to predict the subcellular localization of proteins from their physicochemical properties and structural parameters. The prediction accuracies reached 83% in prokaryotic organisms and 84% in eukaryotic organisms with the substitution model of the c-p-v matrix (c, composition; p, polarity; and v, molecular volume). The overall prediction accuracy was also evaluated using the "leave-one-out" jackknife procedure. The influence of the substitution model on prediction accuracy is also discussed in the work. The source code of the new program is available on request from the authors.
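    The role of the FFT in such a pipeline is to map a variable-length sequence of per-residue physicochemical values to a fixed-length feature vector that an SVM can consume. A minimal sketch of that feature-extraction step (the property values and the coefficient count are illustrative assumptions, not the paper's c-p-v encoding):

```python
import numpy as np

def fft_features(prop_seq, n_coeff=8):
    """Map a variable-length sequence of per-residue property values
    (e.g. polarity along the protein chain) to a fixed-length vector of
    the magnitudes of its lowest-frequency Fourier coefficients.
    The fixed length is what allows proteins of different sizes to feed
    one SVM."""
    spec = np.abs(np.fft.rfft(np.asarray(prop_seq, dtype=float)))
    out = np.zeros(n_coeff)
    out[:min(n_coeff, spec.size)] = spec[:n_coeff]   # zero-pad short spectra
    return out

# Two hypothetical property sequences of different lengths
short = fft_features([0.2, 1.1, -0.5, 0.9, 0.3])
long_ = fft_features([0.2, 1.1, -0.5, 0.9, 0.3, 0.0, 0.7,
                      -1.2, 0.4, 0.8, 0.1, -0.3])
print(short.shape, long_.shape)   # both are length-8 feature vectors
```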

  8. Anxiety and Error Monitoring: Increased Error Sensitivity or Altered Expectations?

    Science.gov (United States)

    Compton, Rebecca J.; Carp, Joshua; Chaddock, Laura; Fineman, Stephanie L.; Quandt, Lorna C.; Ratliff, Jeffrey B.

    2007-01-01

    This study tested the prediction that the error-related negativity (ERN), a physiological measure of error monitoring, would be enhanced in anxious individuals, particularly in conditions with threatening cues. Participants made gender judgments about faces whose expressions were either happy, angry, or neutral. Replicating prior studies, midline…

  9. A new two-dimensional approach to quantitative prediction for collision cross-section of more than 110 singly protonated peptides by a novel molecular electronegativity-interaction vector through quantitative structure-spectrometry relationship studies

    Institute of Scientific and Technical Information of China (English)

    ZHOU Peng; MEI Hu; TIAN Feifei; WANG Jiaona; WU Shirong; LI Zhiliang

    2007-01-01

    Based on two-dimensional topological characters, a novel method called the molecular electronegativity-interaction vector (MEIV) is proposed to parameterize molecular structures. Applying MEIV in quantitative structure-spectrometry relationship studies on ion mobility spectrometry collision cross-sections of 113 singly protonated peptides, three models were strictly obtained, with correlation coefficient r and leave-one-out cross-validation q of 0.983, 0.979, 0.981, 0.979 and 0.980, 0.978, respectively. Thus, the MEIV is confirmed to be potent for structural characterization and property prediction for organic and biological molecules.

  10. Formation Mechanism and Suppression Strategy of Prediction Control Error Applied in a Battery Energy Storage Inverter

    Institute of Scientific and Technical Information of China (English)

    方支剑; 段善旭; 陈天锦; 陈昌松; 刘宝其

    2013-01-01

    In high-power storage systems, the delays introduced by the sampling and computation of digital processors can degrade output power quality and even destabilize the system, because battery energy storage inverters operate at high power and low switching frequency. Predictive control can eliminate the effect of these delays, but the predicted values differ from the actual values under input voltage disturbances and model inaccuracies. Based on a digital predictive control strategy for the battery energy storage inverter, this paper derives the transfer functions relating the prediction error to the DC voltage, the load current and model parameter errors, and then analyzes the influence of the prediction error on the digital control of the inverter. A full-state feedback control strategy based on the integral of the output error and the predicted state is proposed: the outer loop uses the integral of the output voltage error to suppress the prediction error, and the controller uses a backward-difference integral term to eliminate the delay introduced by output voltage feedback. This strategy effectively removes control delays and suppresses the prediction error. Finally, a 30 kW prototype was built; the steady-state output error dropped from 9% to 1%, verifying the correctness of the proposed control strategy.

  11. Optimal Method for GM(1,1) Modeling for Prediction of Deformation Taking Compensation for Model Errors into Account

    Institute of Scientific and Technical Information of China (English)

    高宁; 崔希民; 高彩云

    2012-01-01

    By analyzing the main sources of error in GM(1,1) modeling, a GM(1,1) model with compensation for the model error is established and, through an example, its predictions are compared with those of the conventional GM(1,1), PGM(1,1) and time-varying GM(1,1) models. The results show that the error-compensated GM(1,1) model achieves the highest prediction accuracy.
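
The GM(1,1) grey model referenced above follows a standard recipe: accumulate the series (AGO), fit the development coefficient a and grey input b by least squares over the background values, then difference the fitted accumulated series. A minimal sketch in Python (the function name `gm11` and the plain normal-equation solve are illustrative, not taken from the paper):

```python
import math

def gm11(x0, steps=1):
    """Fit a GM(1,1) grey model to the series x0 and forecast `steps` ahead."""
    n = len(x0)
    x1 = [sum(x0[:i + 1]) for i in range(n)]               # accumulated (AGO) series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]   # background values
    # Least squares for a, b in  x0[k] + a*z[k] = b  (2x2 normal equations)
    m = n - 1
    szz, sz = sum(v * v for v in z), sum(z)
    szy = sum(v * y for v, y in zip(z, x0[1:]))
    sy = sum(x0[1:])
    det = szz * m - sz * sz
    a = (sz * sy - m * szy) / det
    b = (szz * sy - sz * szy) / det
    def x1_hat(k):
        return (x0[0] - b / a) * math.exp(-a * k) + b / a
    # Difference the fitted accumulated series to recover the original scale
    return [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
```

On the geometric toy series [10, 12, 14.4, 17.28] the fit reproduces the data to within about 0.5% and forecasts roughly 20.6 for the next value.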

  12. Reflectance spectroscopy: a tool for predicting the risk of iron chlorosis in soils

    Science.gov (United States)

    Cañasveras, J. C.; Barrón, V.; Del Campillo, M. C.; Viscarra Rossel, R. A.

    2012-04-01

    Chlorosis due to iron (Fe) deficiency is the most important nutritional problem a plant can have in calcareous soils. The most characteristic symptom of Fe chlorosis is interveinal yellowing of the youngest leaves due to a lack of chlorophyll caused by a disorder in Fe nutrition. Fe chlorosis is related to calcium carbonate equivalent (CCE), clay content and Fe extracted with oxalate (Feo). The conventional techniques for determining these and other properties, based on laboratory analysis, are time-consuming and costly. Reflectance spectroscopy (RS) is a rapid, non-destructive, less expensive alternative tool that can be used to enhance or replace conventional methods of soil analysis. The aim of this work was to assess the usefulness of RS for the determination of some properties of Mediterranean soils, including clay content, CCE, Feo, cation exchange capacity (CEC), organic matter (OM) and pHw, with emphasis on those with an especially marked influence on the risk of Fe chlorosis. To this end, we used partial least-squares regression (PLS) to construct calibration models, leave-one-out cross-validation and an independent validation set. Our results testify to the usefulness of qualitative soil interpretations based on the variable importance for projection (VIP) as derived by PLS decomposition. The accuracy of predictions in each of the Vis-NIR, MIR and combined spectral regions differed considerably between properties. The R2adj and root mean square error (RMSE) for the external validation predictions were as follows: 0.83 and 37 mg kg-1 for clay content in the Vis-NIR-MIR range; 0.99 and 25 mg kg-1 for CCE and 0.80 and 0.1 mg kg-1 for Feo in the MIR range; 0.93 and 3 cmolc kg-1 for CEC in the Vis-NIR range; 0.87 and 2 mg kg-1 for OM in the Vis-NIR-MIR range; and 0.61 and 0.2 for pHw in the MIR range. 
These results testify to the potential of RS in the Vis, NIR and MIR ranges for efficient soil analysis, the acquisition of soil information and the assessment of the

  13. A Method of Vehicle Route Prediction Based on Social Network Analysis

    Directory of Open Access Journals (Sweden)

    Ning Ye

    2015-01-01

    Full Text Available A method of vehicle route prediction based on social network analysis is proposed in this paper. Unlike previous work, which mines the driving regularity of individual vehicles to predict upcoming routes, we use the collected past trips of vehicles to build a relationship model between road segments. Firstly, we use graph theory to build an initial road network model and adjust its parameters from the collected data set, then transform the model into a matrix. Secondly, two concepts from social network analysis are introduced to describe the meaning of the matrix, which we process with standard social network analysis software. Thirdly, we design the algorithm of vehicle route prediction based on the above processing results. Finally, we use the leave-one-out approach to verify the efficiency of our algorithm.
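
The leave-one-out verification described above can be sketched generically: hold out each trip, fit on the remaining trips, and score the prediction of the held-out trip's final segment. The predictor below is a deliberately simple most-frequent-successor stand-in, not the paper's social-network-based algorithm:

```python
from collections import Counter

def loo_accuracy(trips, fit, predict):
    """Leave-one-out accuracy: hold out each trip, fit on the rest, then
    try to predict the held-out trip's final segment from its prefix."""
    hits = 0
    for i, trip in enumerate(trips):
        model = fit(trips[:i] + trips[i + 1:])
        if predict(model, trip[:-1]) == trip[-1]:
            hits += 1
    return hits / len(trips)

# Toy stand-in predictor: most frequent successor of the last visited segment.
def fit(trips):
    return Counter((t[j], t[j + 1]) for t in trips for j in range(len(t) - 1))

def predict(successors, prefix):
    last = prefix[-1]
    candidates = [(count, nxt) for (seg, nxt), count in successors.items() if seg == last]
    return max(candidates)[1] if candidates else None
```

Any route predictor exposing the same `fit`/`predict` interface could be dropped into the same harness.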

  14. Prediction of Carbohydrate-Binding Proteins from Sequences Using Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Seizi Someya

    2010-01-01

    Full Text Available Carbohydrate-binding proteins are proteins that can interact with sugar chains but do not modify them. They are involved in many physiological functions, and we have developed a method for predicting them from their amino acid sequences. Our method is based on support vector machines (SVMs). We first clarified the definition of carbohydrate-binding proteins and then constructed positive and negative datasets with which the SVMs were trained. By applying the leave-one-out test to these datasets, our method delivered an area under the receiver operating characteristic (ROC) curve of 0.92. We also examined two amino acid grouping methods that enable effective learning of sequence patterns and evaluated the performance of these methods. When we applied our method in combination with the homology-based prediction method to the annotated human genome database, H-invDB, we found that the true positive rate of prediction was improved.
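
The area under the ROC curve reported by such a leave-one-out test can be computed directly from the held-out decision scores via the Mann-Whitney rank statistic (the probability that a random positive outscores a random negative). A small sketch, independent of any particular SVM library:

```python
def roc_auc(pos_scores, neg_scores):
    """ROC AUC as the probability that a positive example outscores a
    negative one (Mann-Whitney U statistic; ties count as 1/2)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            wins += 1.0 if p > n else 0.5 if p == n else 0.0
    return wins / (len(pos_scores) * len(neg_scores))
```

The quadratic pairwise loop is fine for small leave-one-out score sets; sorting the pooled scores gives the same statistic in O(n log n) for large ones.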

  15. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article covers medication errors: their definition, the scope of the medication error problem, the types of medication errors, their common causes, monitoring, consequences, prevention, and management, presented clearly with accompanying tables.

  17. Decorrelation of the True and Estimated Classifier Errors in High-Dimensional Settings

    Directory of Open Access Journals (Sweden)

    Hua Jianping

    2007-01-01

    Full Text Available The aim of many microarray experiments is to build discriminatory diagnosis and prognosis models. Given the huge number of features and the small number of examples, model validity, which refers to the precision of error estimation, is a critical issue. Previous studies have addressed this issue via the deviation distribution (estimated error minus true error), in particular, the deterioration of cross-validation precision in high-dimensional settings where feature selection is used to mitigate the peaking phenomenon (overfitting). Because classifier design is based upon random samples, both the true and estimated errors are sample-dependent random variables, and one would expect a loss of precision if the estimated and true errors are not well correlated, so that natural questions arise as to the degree of correlation and the manner in which lack of correlation impacts error estimation. We demonstrate the effect of correlation on error precision via a decomposition of the variance of the deviation distribution, observe that the correlation is often severely decreased in high-dimensional settings, and show that the effect of high dimensionality on error estimation tends to result more from its decorrelating effects than from its impact on the variance of the estimated error. We consider the correlation between the true and estimated errors under different experimental conditions using both synthetic and real data, several feature-selection methods, different classification rules, and three commonly used error estimators (leave-one-out cross-validation, k-fold cross-validation, and the .632 bootstrap). Moreover, three scenarios are considered: (1) feature selection, (2) a known feature set, and (3) all features. Only the first is of practical interest; however, the other two are needed for comparison purposes. 
We will observe that the true and estimated errors tend to be much more correlated in the case of a known feature set than with either feature selection

  18. An artificial neural network model for prediction of quality characteristics of apples during convective dehydration

    Directory of Open Access Journals (Sweden)

    Karina Di Scala

    2013-09-01

    Full Text Available In this study, the effects of hot-air drying conditions on color, water holding capacity, and total phenolic content of dried apple were investigated using an artificial neural network as an intelligent modeling system. A genetic algorithm was then used to optimize the drying conditions. Apples were dried at three temperatures (40, 60, and 80 °C) and three air flow rates (0.5, 1, and 1.5 m/s). Applying the leave-one-out cross-validation methodology, simulated and experimental data were in good agreement, with an error < 2.4%. Optimal quality-index values were found at 62.9 °C and 1.0 m/s using the genetic algorithm.

  19. Automatic parametrization of implicit solvent models for the blind prediction of solvation free energies

    CERN Document Server

    Wang, Bao; Wei, Guowei

    2016-01-01

    In this work, a systematic protocol is proposed to automatically parametrize implicit solvent models with polar and nonpolar components. The proposed protocol utilizes the classical Poisson model or the Kohn-Sham density functional theory (KSDFT) based polarizable Poisson model for modeling polar solvation free energies. For the nonpolar component, either the standard model of surface area, molecular volume, and van der Waals interactions, or a model with atomic surface areas and molecular volume is employed. Based on the assumption that similar molecules have similar parametrizations, we develop scoring and ranking algorithms to classify solute molecules. Four sets of radius parameters are combined with four sets of charge force fields to arrive at a total of 16 different parametrizations for the Poisson model. A large database with 668 experimental data is utilized to validate the proposed protocol. The lowest leave-one-out root mean square (RMS) error for the database is 1.33 kcal/mol. Additionally, five s...

  20. Cross-Validation, Bootstrap, and Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Masaaki Tsujitani

    2011-01-01

    Full Text Available This paper considers the applications of resampling methods to support vector machines (SVMs). We take into account leave-one-out cross-validation (CV) when determining the optimum tuning parameters and bootstrap the deviance in order to summarize the measure of goodness-of-fit in SVMs. Leave-one-out CV is also adapted in order to provide estimates of the bias of the excess error in a prediction rule constructed with training samples. We analyze data from a mackerel-egg survey and a liver-disease study.
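
The idea of estimating the bias ("optimism") of an apparent error rate by resampling can be sketched with the bootstrap: refit on resamples and average the gap between the error on the original data and the error on the resample. The nearest-class-mean classifier below is a toy stand-in for the paper's SVMs:

```python
import random

def fit_means(data):
    """Nearest-class-mean classifier: store the mean of each class."""
    sums, counts = {}, {}
    for x, c in data:
        sums[c] = sums.get(c, 0.0) + x
        counts[c] = counts.get(c, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

def err(model, data):
    """Misclassification rate: assign each point to the nearest class mean."""
    wrong = sum(1 for x, c in data
                if min(model, key=lambda k: abs(x - model[k])) != c)
    return wrong / len(data)

def bootstrap_optimism(data, B=200, seed=0):
    """Apparent error plus a bootstrap estimate of its optimism (bias)."""
    rng = random.Random(seed)
    apparent = err(fit_means(data), data)
    opt = 0.0
    for _ in range(B):
        boot = [rng.choice(data) for _ in data]
        model = fit_means(boot)
        opt += err(model, data) - err(model, boot)   # excess-error gap
    return apparent, apparent + opt / B
```

The corrected estimate `apparent + opt / B` is never smaller than the (optimistic) apparent error for well-separated classes, which is the behaviour the bias correction is meant to restore.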

  1. Quantifying error distributions in crowding.

    Science.gov (United States)

    Hanus, Deborah; Vul, Edward

    2013-03-22

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogenous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance.

  2. Enabling Predictive Simulation and UQ of Complex Multiphysics PDE Systems by the Development of Goal-Oriented Variational Sensitivity Analysis and A Posteriori Error Estimation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Ginting, Victor

    2014-03-15

    It was demonstrated that a posteriori analyses in general, and in particular those using adjoint methods, can accurately and efficiently compute numerical error estimates and sensitivities for critical Quantities of Interest (QoIs) that depend on a large number of parameters. Activities included: analysis and implementation of several time integration techniques for solving systems of ODEs as typically obtained from spatial discretization of PDE systems; multirate integration methods for ordinary differential equations; formulation and analysis of an iterative multi-discretization Galerkin finite element method for multi-scale reaction-diffusion equations; investigation of an inexpensive postprocessing technique to estimate the error of finite element solutions of second-order quasi-linear elliptic problems measured in some global metrics; investigation of an application of residual-based a posteriori error estimates to the symmetric interior penalty discontinuous Galerkin method for solving a class of second-order quasi-linear elliptic problems; a posteriori analysis of explicit time integrations for systems of linear ordinary differential equations; derivation of accurate a posteriori goal-oriented error estimates for a user-defined quantity of interest for two classes of first- and second-order IMEX schemes for advection-diffusion-reaction problems; postprocessing of finite element solutions; and a Bayesian framework for uncertainty quantification of porous media flows.
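
One of the activities listed, a posteriori error estimation for explicit time integration, can be illustrated in its simplest form by a step-doubling (Richardson) comparison; this is a generic sketch, not the adjoint-based machinery of the report:

```python
def euler_step(f, t, y, h):
    """One explicit (forward) Euler step for y' = f(t, y)."""
    return y + h * f(t, y)

def step_with_error(f, t, y, h):
    """Advance by h and return an a posteriori local-error estimate:
    compare one full Euler step against two half steps (step doubling)."""
    full = euler_step(f, t, y, h)
    half = euler_step(f, t + h / 2, euler_step(f, t, y, h / 2), h / 2)
    return half, abs(half - full)
```

For y' = y, y(0) = 1 and h = 0.1 the estimate is 0.0025, the right order of magnitude of the true one-step error (about 0.005 for the full step); such estimates are what adaptive step-size controllers act on.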

  3. Toward a cognitive taxonomy of medical errors.

    Science.gov (United States)

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  4. Model-based mean square error estimators for k-nearest neighbour predictions and applications using remotely sensed data for forest inventories

    Science.gov (United States)

    Steen Magnussen; Ronald E. McRoberts; Erkki O. Tomppo

    2009-01-01

    New model-based estimators of the uncertainty of pixel-level and areal k-nearest neighbour (knn) predictions of attribute Y from remotely-sensed ancillary data X are presented. Non-parametric functions predict Y from scalar 'Single Index Model' transformations of X. Variance functions generated...
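
A plain k-nearest-neighbour prediction of an attribute Y from ancillary data X, the baseline whose uncertainty the paper models, can be sketched in one dimension (a Single Index Model transformation would reduce multivariate X to such a scalar):

```python
def knn_predict(x_query, X, Y, k=3):
    """Plain k-nearest-neighbour prediction: average Y over the k training
    points whose (scalar) X value is closest to the query."""
    order = sorted(range(len(X)), key=lambda i: abs(X[i] - x_query))
    return sum(Y[i] for i in order[:k]) / k
```

Pixel-level knn maps would apply this at every pixel; areal estimates then aggregate the pixel predictions.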

  5. Statistics-Based Prediction Analysis for Head and Neck Cancer Tumor Deformation

    Directory of Open Access Journals (Sweden)

    Maryam Azimi

    2012-01-01

    Full Text Available Most of the current radiation therapy planning systems, which are based on pre-treatment Computer Tomography (CT) images, assume that the tumor geometry does not change during the course of treatment. However, tumor geometry is shown to be changing over time. We propose a methodology to monitor and predict daily size changes of head and neck cancer tumors during the entire radiation therapy period. Using collected patients' CT scan data, MATLAB routines are developed to quantify the progressive geometric changes occurring in patients during radiation therapy. Regression analysis is implemented to develop predictive models for tumor size changes throughout the entire period. The generated models are validated using leave-one-out cross-validation. The proposed method will increase the accuracy of therapy and improve patient's safety and quality of life by reducing the number of harmful unnecessary CT scans.

  6. A chemometric approach for prediction of antifungal activity of some benzoxazole derivatives against Candida albicans

    Directory of Open Access Journals (Sweden)

    Podunavac-Kuzmanović Sanja O.

    2012-01-01

    Full Text Available The purpose of the article is to promote and facilitate prediction of antifungal activity of the investigated series of benzoxazoles against Candida albicans. The clinical importance of this investigation is to simplify the design of new antifungal agents against fungi which can cause serious illnesses in humans. Quantitative structure-activity relationship analysis was applied to nineteen benzoxazole derivatives. A multiple linear regression (MLR) procedure was used to model the relationships between the molecular descriptors and the antifungal activity of the benzoxazole derivatives. Two mathematical models have been developed as calibration models for predicting the inhibitory activity of this class of compounds against Candida albicans. The quality of the models was validated by the leave-one-out technique, as well as by the calculation of statistical parameters for the established models. [Project of the Ministry of Science of the Republic of Serbia, Nos. 172012 and 172014]
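
Leave-one-out validation of such a calibration model is commonly summarized by the cross-validated q² = 1 − PRESS/TSS. A sketch for a single-descriptor linear model (the paper's MLR models use several descriptors, but the metric is computed the same way):

```python
def q2_loo(x, y):
    """Cross-validated q^2 = 1 - PRESS/TSS for a one-descriptor linear model."""
    n = len(x)
    press = 0.0
    for i in range(n):
        xs = [x[j] for j in range(n) if j != i]
        ys = [y[j] for j in range(n) if j != i]
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxx = sum((v - mx) ** 2 for v in xs)
        sxy = sum((v - mx) * (w - my) for v, w in zip(xs, ys))
        b = sxy / sxx
        a = my - b * mx
        press += (y[i] - (a + b * x[i])) ** 2     # held-out squared residual
    ybar = sum(y) / n
    tss = sum((v - ybar) ** 2 for v in y)
    return 1.0 - press / tss
```

A q² close to 1 indicates the model predicts held-out compounds nearly as well as it fits the training set; values well below the calibration r² flag overfitting.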

  7. 一种感应电机预测控制的电流静差消除方法%Static current error elimination algorithm for induction motor predictive current control

    Institute of Scientific and Technical Information of China (English)

    金辛海; 张扬; 杨明; 徐殿国

    2015-01-01

    Predictive current control of induction motors can effectively avoid the deterioration of control performance caused by delays in the current loop and improves the dynamic performance of the current. However, owing to measurement errors, parameter variation and other causes, the induction motor model parameters used by the predictive controller may deviate from the actual motor parameters, causing a static current error that lowers system efficiency, prevents output of the nominal torque, and precludes operation in torque control mode. Based on the induction motor model, the influence of predictive-control model parameter error on current control stability is analyzed quantitatively, the mathematical relation between the model parameter error and the static error of the actual feedback current is derived, and an algorithm is proposed to eliminate the static error. The algorithm corrects the predictive control model parameters using the d- and q-axis current feedback, thereby eliminating the static error caused by the controller's motor model parameter errors. Experimental results prove the stability and effectiveness of the proposed method.

  8. Value of dual biometry in the detection and investigation of error in the preoperative prediction of refractive status following cataract surgery.

    LENUS (Irish Health Repository)

    Charalampidou, Sofia

    2012-02-01

    PURPOSE: To report the value of dual biometry in the detection of biometry errors. METHODS: Study 1: retrospective study of 224 consecutive cataract operations. The intraocular lens power calculation was based on immersion biometry. Study 2: immersion biometry was compared with optical coherence biometry (OCB) in terms of axial length, anterior chamber depth, keratometry readings and the recommended lens power to achieve emmetropia. Study 3: prospective study of 61 consecutive cataract operations. Both immersion and OCB were performed, but lens power calculation was based on the latter. RESULTS: Study 1: 115 (86%), 101 (75.4%), 90 (67.2%) and 50 (37.3%) of postoperative spherical equivalents were within +\\/-1.5 dioptres (D), +\\/-1.25 D, +\\/-1 D and +\\/-0.5 D of the target, respectively. Study 2: excellent agreement between axial length readings, anterior chamber depth readings and keratometry readings by immersion biometry and OCB was observed (reflected in a mean bias of -0.065 mm, -0.048 mm and +0.1803 D, respectively, in association with OCB). Agreement between the lens power recommended by each technique to achieve emmetropia was poor (mean bias of +1.16 D in association with OCB), but improved following appropriate modification of lens constants in the Accutome A-scan software (mean bias with OCB = -0.4 D). Study 3: 37 (92.5%) and 23 (57.5%) of operated eyes achieved a postoperative refraction within +\\/-1 D and +\\/-0.5 D of target, respectively. CONCLUSION: Systematic errors in biometry can exist, in the presence of acceptable postoperative refractive results. Dual biometry allows each biometric parameter to be scrutinized in isolation, and identify sources of error that may otherwise go undetected.

  9. Error estimate for Doo-Sabin surfaces

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Based on a general bound on the distance error between a uniform Doo-Sabin surface and its control polyhedron, an exponential error bound independent of the subdivision process is presented in this paper. Using the exponential bound, one can predict the depth of recursive subdivision of the Doo-Sabin surface within any user-specified error tolerance.
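
Given an exponential bound of the form M·λ^k on the control-polyhedron distance after k subdivision steps (with λ treated here as a generic per-step contraction factor, an assumption rather than the paper's derived constant), the required subdivision depth for a tolerance ε follows directly:

```python
import math

def subdivision_depth(m0, lam, tol):
    """Smallest k with m0 * lam**k <= tol, for an error bound that
    contracts by a factor lam (0 < lam < 1) per subdivision step."""
    if m0 <= tol:
        return 0
    return math.ceil(math.log(m0 / tol) / math.log(1.0 / lam))
```

For example, with m0 = 1, lam = 0.25 and tol = 0.01 this gives 4 levels, since 0.25**4 ≈ 0.0039 ≤ 0.01 < 0.25**3.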

  13. Drifting from Slow to “D’oh!” Working Memory Capacity and Mind Wandering Predict Extreme Reaction Times and Executive-Control Errors

    Science.gov (United States)

    McVay, Jennifer C.; Kane, Michael J.

    2012-01-01

    A combined experimental, individual-differences, and thought-sampling study tested the predictions of executive attention (e.g., Engle & Kane, 2004) and coordinative binding (e.g., Oberauer, Süß, Wilhelm, & Sander, 2007) theories of working memory capacity (WMC). We assessed 288 subjects' WMC and their performance and mind-wandering rates during a sustained-attention task; subjects completed either a go/no-go version requiring executive control over habit, or a vigilance version that did not. We further combined the data with those from McVay and Kane (2009) to: (1) gauge the contributions of WMC and attentional lapses to the worst-performance rule and the tail, or τ parameter, of response time (RT) distributions; (2) assess which parameters from a quantitative evidence-accumulation RT model were predicted by WMC and mind-wandering reports; and (3) consider intra-subject RT patterns – particularly, speeding – as potential objective markers of mind wandering. We found that WMC predicted action and thought control in only some conditions, that attentional lapses (indicated by TUT reports and drift-rate variability in evidence accumulation) contributed to τ, performance accuracy, and WMC's association with them, and that mind-wandering experiences were not predicted by trial-to-trial RT changes, and so they cannot always be inferred from objective performance measures. PMID:22004270

  14. Predicting DNA-binding proteins and binding residues by complex structure prediction and application to human proteome.

    Directory of Open Access Journals (Sweden)

    Huiying Zhao

    Full Text Available As more and more protein sequences are uncovered by increasingly inexpensive sequencing techniques, an urgent task is to find their functions. This work presents a highly reliable computational technique for predicting DNA-binding function at the level of protein-DNA complex structures, rather than the low-resolution two-state prediction of DNA binding performed by most existing techniques. The method first predicts the protein-DNA complex structure by utilizing the template-based structure prediction technique HHblits, followed by binding affinity prediction based on a knowledge-based energy function (distance-scaled finite ideal-gas reference state for protein-DNA interactions). A leave-one-out cross-validation of the method based on 179 DNA-binding and 3797 non-binding protein domains achieves a Matthews correlation coefficient (MCC) of 0.77 with high precision (94%) and high sensitivity (65%). We further found 51% sensitivity for 82 newly determined structures of DNA-binding proteins and 56% sensitivity for the human proteome. In addition, the method provides a reasonably accurate prediction of DNA-binding residues in proteins based on predicted DNA-binding complex structures. Its application to the human proteome leads to more than 300 novel DNA-binding proteins; some of these predicted structures were validated by known structures of homologous proteins in APO forms. The method [SPOT-Seq (DNA)] is available as an on-line server at http://sparks-lab.org.
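
The reported precision, sensitivity and Matthews correlation coefficient all derive from the same binary confusion-matrix counts; a small helper (the counts in the usage note are illustrative, not the paper's exact tallies):

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Precision, sensitivity and Matthews correlation coefficient
    from binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return precision, sensitivity, mcc
```

Unlike accuracy, MCC stays informative on a set as imbalanced as 179 binding versus 3797 non-binding domains, which is why it is the headline figure here.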

  15. Feasibility study of depressing wind power prediction error by Smart Park

    Institute of Scientific and Technical Information of China (English)

    张广韬; 吴俊勇; 周彦衡; 苗青

    2013-01-01

    To break through the bottleneck limiting further improvement of wind power prediction accuracy, an idea of using the intensive control of a Smart Park (an aggregated electric-vehicle parking lot) to depress wind power prediction error is proposed. Firstly, a detailed Smart Park simulation model is established and the port power characteristics of the charge/discharge units are analyzed, yielding the relationship between battery stored energy and state of charge (SOC). Based on the real output and prediction data of an actual wind farm, this paper analyzes the relationship between the target of depressing the wind power prediction error and factors such as the size of the Smart Park, the initial state-of-charge distribution, the installed wind power capacity, and the level of wind power forecasting technique. The results show that coordinated operation of wind farms and Smart Parks can depress the wind power prediction error to a great extent, and that it is an economical and feasible way to turn an undispatchable wind farm into a dispatchable energy source.

  16. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    Measurement error modelling is used for investigating the influence of measurement/sampling error on univariate predictions of water content and water-holding capacity (reference measurements) from nuclear magnetic resonance (NMR) relaxations (instrumental) measured on two gadoid fish species. This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x-measurements is illustrated by simulated data and by NMR relaxations measured several times on each fish. The standard error of the physical determination of the reference values is lower than the standard error of the NMR measurements. In this case, lower prediction error is obtained by replicating the instrumental

  17. Molecular Structural Characterization and Quantitative Prediction of Reduced Ion Mobility Constants for Diversified Organic Compounds

    Institute of Scientific and Technical Information of China (English)

    HE Liu; LIANG Gui-Zhao; LI Zhi-Liang

    2008-01-01

    Based on two-dimensional topological structures, a novel molecular electronegativity interaction vector with hybridization (MEHIV) was developed to describe atomic hybridization states in different molecular environments. Five quantitative models by MEHIV characterization and multiple linear regression modeling were successfully established to predict reduced ion mobility constants (K0) of alkanes, aromatic hydrocarbons, fatty alcohols, fatty aldehydes and ketones, and carboxylic esters. The correlation coefficients Rcv by leave-one-out cross-validation are 0.792, 0.787, 0.949, 0.972 and 0.981, respectively, and the standard deviations SDcv are 0.067, 0.086, 0.064, 0.043 and 0.042, respectively. These results suggest that MEHIV is an excellent topological index descriptor with many advantages, such as straightforward physicochemical meaning, high characterization competence, convenient expansibility and easy manipulation.

  18. Holographic quantitative structure-activity relationship for predicting acute toxicity of benzene derivatives to the guppy (Poecilia reticulata)

    Institute of Scientific and Technical Information of China (English)

    HUANG Hong; WANG Xiao-dong; DAI Xuan-li; YU Ya-juan; WANG Lian-sheng

    2004-01-01

    Holographic quantitative structure-activity relationship (HQSAR) is an emerging QSAR technique combining molecular holograms, which encode the frequency of occurrence of various molecular fragment types, with subsequent partial least squares (PLS) regression analysis. In this paper, acute toxicity data to the guppy (Poecilia reticulata) for a series of 56 substituted benzenes, phenols, aromatic amines and nitro-aromatics were modelled, resulting in a model with high predictive ability. The influence of the fragment size and fragment distinction parameters on the quality of the HQSAR model was investigated. The robustness and predictive ability of the model were also validated by a leave-one-out (LOO) cross-validation procedure and an external test data set.

  19. Spatial frequency domain error budget

    Energy Technology Data Exchange (ETDEWEB)

    Hauschildt, H; Krulewich, D

    1998-08-27

    The aim of this paper is to describe a methodology for designing and characterizing machines used to manufacture or inspect parts with spatial-frequency-based specifications. At Lawrence Livermore National Laboratory, one of our responsibilities is to design or select the appropriate machine tools to produce advanced optical and weapons systems. Recently, many of the component tolerances for these systems have been specified in terms of the spatial frequency content of residual errors on the surface. We typically use an error budget as a sensitivity analysis tool to ensure that the parts manufactured by a machine will meet the specified component tolerances. Error budgets provide the formalism whereby we account for all sources of uncertainty in a process and sum them to arrive at a net prediction of how "precisely" a manufactured component can meet a target specification. Using the error budget, we are able to minimize risk during initial stages by ensuring that the machine will produce components that meet specifications before the machine is actually built or purchased. However, the current error budgeting procedure provides no formal mechanism for designing machines that can produce parts with spatial-frequency-based specifications. The output from the current error budgeting procedure is a single number estimating the net worst-case or RMS error on the workpiece. This procedure has limited ability to differentiate between low-spatial-frequency form errors and high-frequency surface-finish errors. Therefore, the current error budgeting procedure can lead us to reject a machine that is adequate or accept a machine that is inadequate. This paper will describe a new error budgeting methodology to aid in the design and characterization of machines used to manufacture or inspect parts with spatial-frequency-based specifications. The output from this new procedure is the continuous spatial frequency content of errors that result on a machined part.
If the machine
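
The frequency-resolved budget described above can be sketched numerically: if each independent error source is represented by a power spectral density (PSD) over spatial frequency, independent sources add in PSD, and integrating over any frequency band gives that band's RMS contribution. All numbers below are hypothetical, not values from the paper:

```python
import numpy as np

# Sketch of a spatial-frequency-resolved error budget (hypothetical numbers).
# Each independent error source is a one-sided PSD over spatial frequency;
# independent sources add in PSD, and a band integral gives the band RMS.
f = np.linspace(0.01, 10.0, 1000)             # spatial frequency, 1/mm
df = f[1] - f[0]

psd_figure = 1e-3 / (1 + (f / 0.1) ** 2)      # low-frequency form error source
psd_finish = 2e-6 * np.ones_like(f)           # high-frequency finish error source
psd_total = psd_figure + psd_finish           # independent sources sum in PSD

def band_rms(psd, f, lo, hi):
    """RMS error contributed by spatial frequencies in [lo, hi)."""
    band = (f >= lo) & (f < hi)
    return np.sqrt(np.sum(psd[band]) * df)    # rectangle-rule band integral

form_rms = band_rms(psd_total, f, 0.01, 1.0)    # "form" band
finish_rms = band_rms(psd_total, f, 1.0, 10.0)  # "finish" band
total_rms = np.sqrt(np.sum(psd_total) * df)     # net RMS over the full band
```

Because the bands partition the frequency axis, the band RMS values combine in quadrature back to the single net RMS number of the traditional budget.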

  20. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge about errors is important for designing safe nursing practice and its framework. This article presents the results of a survey on this topic, including data from a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as the most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. A large proportion of nurses reported having suffered from mental problems after error events. Nurses' focus on medication errors seems to be influenced by current discussions, which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  1. Application of Mouse Embryonic Stem Cell Test to Detect Gender-Specific Effect of Chemicals: A Supplementary Tool for Embryotoxicity Prediction.

    Science.gov (United States)

    Cheng, Wei; Zhou, Ren; Liang, Fan; Wei, Hongying; Feng, Yan; Wang, Yan

    2016-09-19

    Gender effect is an inherent property of chemicals, characterized by variations caused by the chemical-biology interaction. It is widespread, but the shortage of an appropriate model restricts the study of gender-specific effects. The embryonic stem cell test (EST) has been utilized as an alternative test for developmental toxicity. Despite its numerous improvements, mouse embryonic stem cells with an XX karyotype have not been used in the EST, which to date has restricted the ability of the EST to identify gender-specific effects during high-throughput screening (HTS) of chemicals. To address this, the embryonic stem cell (ESC) SP3 line with an XX karyotype was used to establish a "female" model as a complement to the EST. Here, we propose a "double-objects in unison" (DOU)-EST, which consists of male ESCs and female ESCs; a seven-day EST protocol was utilized, and the gender-specific effect of chemicals was determined and discriminated; the replacement of myosin heavy chain (MHC) with myosin light chain (MLC) provided a suitable molecular biomarker in the DOU-EST. New linear discriminant functions were derived to distinguish chemicals into three classes, namely no gender-specific effect, male-susceptive and female-susceptive. For 15 chemicals in the training set, the concordances of the prediction results for no gender effect, male-susceptive and female-susceptive were 86.67%, 86.67% and 93.33%, respectively; the sensitivities were 66.67%, 83.33% and 83.33%, respectively; and the specificities were 91.67%, 88.89% and 100%, respectively. The total accuracy of the DOU-EST was 86.67%. For three chemicals in the test set, one was incorrectly predicted. The possible reason for misclassification may be the absence of a hormone environment in vitro. Leave-one-out cross-validation (LOOCV) indicated a mean error rate of 18.34%. Taken together, these data suggest a good performance of the proposed DOU-EST. Emerging chemicals with undiscovered gender

  2. Prediction

    CERN Document Server

    Sornette, Didier

    2010-01-01

    This chapter first presents a rather personal view of some different aspects of predictability, going in crescendo from simple linear systems to high-dimensional nonlinear systems with stochastic forcing, which exhibit emergent properties such as phase transitions and regime shifts. Then, a detailed correspondence between the phenomenology of earthquakes, financial crashes and epileptic seizures is offered. The presented statistical evidence provides the substance of a general phase diagram for understanding the many facets of the spatio-temporal organization of these systems. A key insight is to organize the evidence and mechanisms in terms of two summarizing measures: (i) amplitude of disorder or heterogeneity in the system and (ii) level of coupling or interaction strength among the system's components. On the basis of the recently identified remarkable correspondence between earthquakes and seizures, we present detailed information on a class of stochastic point processes that has been found to be particu...

  3. Quality Prediction of Multistage Machining Processes Based on Assigned Error Propagation Network

    Institute of Scientific and Technical Information of China (English)

    江平宇; 王岩; 王焕发; 郑镁

    2013-01-01

    Predicting machining quality in real time is the key issue for quality control in multistage machining processes (MMPs). In aircraft manufacturing, the characteristics of irregular large-size geometry, hard-to-machine materials and small-batch processing often lead to insufficient sample data and make machining errors difficult to monitor. To address this, a quality prediction method based on an assigned error propagation network (AEPN) for MMPs is proposed. Quality features (QFs) are introduced into the machining error propagation network (MEPN) to describe the influence relations between nodes in the machining process, yielding the AEPN. Based on key QF nodes, a single-process prediction model (SPPM) is established using a support vector regression machine (SVRM) optimized by the particle swarm optimization (PSO) algorithm. The SPPMs are then merged according to the topology of the AEPN to construct a multi-process prediction model (MPPM). A software platform for machining quality prediction in MMPs was developed, and a landing gear part was used to verify the applicability of the method. The results show that the method can effectively predict machining errors and provides a foundation for the machining process control of special parts from the perspective of MMPs.

  4. Explaining errors in children's questions.

    Science.gov (United States)

    Rowland, Caroline F

    2007-07-01

    The ability to explain the occurrence of errors in children's speech is an essential component of successful theories of language acquisition. The present study tested some generativist and constructivist predictions about error on the questions produced by ten English-learning children between 2 and 5 years of age. The analyses demonstrated that, as predicted by some generativist theories [e.g. Santelmann, L., Berk, S., Austin, J., Somashekar, S. & Lust. B. (2002). Continuity and development in the acquisition of inversion in yes/no questions: dissociating movement and inflection, Journal of Child Language, 29, 813-842], questions with auxiliary DO attracted higher error rates than those with modal auxiliaries. However, in wh-questions, questions with modals and DO attracted equally high error rates, and these findings could not be explained in terms of problems forming questions with why or negated auxiliaries. It was concluded that the data might be better explained in terms of a constructivist account that suggests that entrenched item-based constructions may be protected from error in children's speech, and that errors occur when children resort to other operations to produce questions [e.g. Dabrowska, E. (2000). From formula to schema: the acquisition of English questions. Cognitive Liguistics, 11, 83-102; Rowland, C. F. & Pine, J. M. (2000). Subject-auxiliary inversion errors and wh-question acquisition: What children do know? Journal of Child Language, 27, 157-181; Tomasello, M. (2003). Constructing a language: A usage-based theory of language acquisition. Cambridge, MA: Harvard University Press]. However, further work on constructivist theory development is required to allow researchers to make predictions about the nature of these operations.

  5. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to the core design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error, a challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  6. Within-field and regional-scale accuracies of topsoil organic carbon content prediction from an airborne visible near-infrared hyperspectral image combined with synchronous field spectra for temperate croplands

    Science.gov (United States)

    Vaudour, Emmanuelle; Gilliot, Jean-Marc; Bel, Liliane; Lefevre, Josias; Chehdi, Kacem

    2016-04-01

    calibration models derived either from Kennard-Stone or conditioned Latin Hypercube sampling on smoothed spectra. However, the most generalizable model, leading to the lowest RMSE values of 3.73 g kg-1 at the regional scale and 1.44 g kg-1 at the within-field scale with a low validation bias, was the cross-validated leave-one-out PLSR model constructed with the 28 near-synchronous samples and raw spectra.

  7. Refractive errors in infancy predict reduced performance on the movement assessment battery for children at 3 1/2 and 5 1/2 years.

    Science.gov (United States)

    Atkinson, Janette; Nardini, Marko; Anker, Shirley; Braddick, Oliver; Hughes, Clare; Rae, Sarah

    2005-04-01

    We have previously reported that significant hyperopia at 9 months predicts mild deficits on visuocognitive and visuomotor measures between 2 years and 5 years 6 months. Here we compare the motor skills of children who had been hyperopic in infancy (hyperopic group) with those who had been emmetropic (control group), using the Movement Assessment Battery for Children (Movement ABC). Children were tested at 3 years 6 months (hyperopic group: 47 males, 63 females, mean age 3 y 7 mo, SD 1.6 mo; control group: 61 males, 70 females, mean age 3 y 7 mo, SD 1.2 mo) and at 5 years 6 months (hyperopic group: 43 males, 56 females, mean age 5 y 4 mo, SD 1.7 mo; control group: 51 males, 62 females, mean age 5 y 3 mo, SD 1.6 mo). The hyperopic group performed significantly worse at both ages, overall and on at least one test from each category of motor skill (manual dexterity, balance, and ball skills). Distributions of scores showed that these differences were not due to poor performance by a minority but to a widespread mild deficit in the hyperopic group. This study also provides the first normative data on the Movement ABC for children below 4 years of age, and shows that it provides a useful measure of motor development at this young age.

  8. Prediction error variance and expected response to selection, when selection is based on the best predictor - for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    DEFF Research Database (Denmark)

    Andersen, Anders Holst; Korsgaard, Inge Riis; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found - otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits - and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part
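
For the Gaussian case, the quantities named above are connected by standard quantitative-genetics identities: accuracy of selection r = sqrt(1 - PEV/sigma_A^2), and expected response Delta_G = i * r * sigma_A, where i is the selection intensity. A numerical sketch with hypothetical values (these formulas are the textbook Gaussian case, not the paper's generalisations):

```python
import math

# Standard Gaussian-case identities linking prediction error variance (PEV)
# of an estimated breeding value to accuracy of selection and expected
# response to selection. All numbers are hypothetical.
sigma2_a = 4.0          # additive genetic variance sigma_A^2
pev = 1.0               # prediction error variance of the best predictor
intensity = 1.4         # selection intensity i (set by the selected proportion)

accuracy = math.sqrt(1.0 - pev / sigma2_a)              # r = sqrt(1 - PEV/sigma_A^2)
response = intensity * accuracy * math.sqrt(sigma2_a)   # Delta_G = i * r * sigma_A
```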

  9. Analysis and prediction of inducement combination modes of maritime accidents induced by human errors

    Institute of Scientific and Technical Information of China (English)

    张丽丽; 吕靖; 艾云飞

    2014-01-01

    To better understand the mechanism of maritime accidents induced by human errors, the inducement combination modes are analyzed and predicted based on historical accident data. An inducement classification system for maritime accidents induced by human errors is developed based on the core concepts of the Swiss Cheese Model and the Human Factors Analysis and Classification System (HFACS). The inducement factors are quantified in matrix form. By matrix transformation and clustering analysis, the main inducement combination modes are extracted, and these modes are then predicted with the Bootstrap method. The results may help decision-makers to implement targeted and practicable preventive measures, and improve maritime transportation safety.

  10. Statistical analysis of the wave equation's prediction errors and the Gaussian process modeling approach

    Institute of Scientific and Technical Information of China (English)

    赵宏旭; 吴甦

    2012-01-01

    A wave equation - Gaussian process model was developed to describe complicated wave motion by integrating physical and statistical approaches. The errors between the theoretical solution of the wave equation and the observed data were decomposed into the three parts of a Gaussian process model: the errors caused by external interference and by shifts in the boundary and initial conditions were described by a group of orthogonal basis functions; the errors caused by inadequate model assumptions and the limited convergence of the numerical solution were modeled as a Gaussian process term; and measurement errors were modeled as white noise. The basis functions, used as the model predictors, are intrinsic characteristics of the wave motion and are unaffected by external influences, reflecting the physical mechanism of the wave. The model was validated using experimental data generated from a vibrating string. The results indicate that both the basis functions and the Gaussian process term significantly improve the prediction accuracy.

  11. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  12. Prediction of partition coefficient of some 3-hydroxy pyridine-4-one derivatives using combined partial least square regression and genetic algorithm.

    Science.gov (United States)

    Shahlaei, M; Fassihi, A; Saghaie, L; Zare, A

    2014-01-01

    A quantitative structure-property relationship (QSPR) treatment was applied to a data set consisting of diverse 3-hydroxypyridine-4-one derivatives to relate the logarithm of the octanol:water partition coefficient (denoted log Po/w) to theoretical molecular descriptors. Evaluation of a test set of 6 compounds with the developed partial least squares (PLS) model revealed that this model is reliable, with good predictability. Since the QSPR study was performed on the basis of theoretical descriptors calculated entirely from the molecular structures, the proposed model could potentially provide useful information about the activity of the studied compounds. Various tests and criteria, such as leave-one-out cross-validation, leave-many-out cross-validation, and the criteria suggested by Tropsha, were employed to examine the predictability and robustness of the developed model.
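
Leave-many-out cross-validation of the kind mentioned above is usually summarized by Q^2 = 1 - PRESS/TSS: repeatedly hold out a random group of compounds, refit, and accumulate the prediction error sum of squares. A sketch on synthetic data (not the 3-hydroxypyridine-4-one set; the group size and round count are illustrative):

```python
import numpy as np

# Leave-many-out cross-validation for a linear model, reporting
# Q^2 = 1 - PRESS / TSS. Data and settings are synthetic stand-ins.
rng = np.random.default_rng(7)
n, p = 40, 4
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5, 1.5]) + rng.normal(scale=0.1, size=n)

def q2_leave_many_out(X, y, n_left_out=5, n_rounds=200, seed=0):
    rng = np.random.default_rng(seed)
    press, tss = 0.0, 0.0
    for _ in range(n_rounds):
        test = rng.choice(len(y), size=n_left_out, replace=False)
        train = np.setdiff1d(np.arange(len(y)), test)
        A = np.column_stack([np.ones(train.size), X[train]])   # with intercept
        coef, *_ = np.linalg.lstsq(A, y[train], rcond=None)
        pred = np.column_stack([np.ones(test.size), X[test]]) @ coef
        press += np.sum((y[test] - pred) ** 2)
        tss += np.sum((y[test] - y[train].mean()) ** 2)
    return 1.0 - press / tss

q2 = q2_leave_many_out(X, y)
```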

  13. Classification of Spreadsheet Errors

    OpenAIRE

    Rajalingham, Kamalasen; Chadwick, David R.; Knight, Brian

    2008-01-01

    This paper describes a framework for a systematic classification of spreadsheet errors. This classification or taxonomy of errors is aimed at facilitating analysis and comprehension of the different types of spreadsheet errors. The taxonomy is an outcome of an investigation of the widespread problem of spreadsheet errors and an analysis of specific types of these errors. This paper contains a description of the various elements and categories of the classification and is supported by appropri...

  14. Error Estimates of Theoretical Models: a Guide

    CERN Document Server

    Dobaczewski, J; Reinhard, P -G

    2014-01-01

    This guide offers suggestions/insights on uncertainty quantification of nuclear structure models. We discuss a simple approach to statistical error estimates, strategies to assess systematic errors, and show how to uncover inter-dependencies by correlation analysis. The basic concepts are illustrated through simple examples. By providing theoretical error bars on predicted quantities and using statistical methods to study correlations between observables, theory can significantly enhance the feedback between experiment and nuclear modeling.

  15. Density prediction of selective laser sintering parts based on support vector regression

    Institute of Scientific and Technical Information of China (English)

    蔡从中; 裴军芳; 温玉锋; 朱星键; 肖婷婷

    2009-01-01

    The support vector regression (SVR) approach, combined with particle swarm optimization for parameter optimization, is proposed to establish a model for estimating the density of selective laser sintering parts from the processing parameters, including layer thickness, hatch spacing, laser power, scanning speed, ambient temperature, interval time between layers and scanning mode. A comparison between the prediction results and those from BP neural networks strongly supports that the internal fitting capacity and prediction accuracy of the SVR model are superior to those of BP neural networks for identical training and test samples; the generalization ability of the SVR model can be efficiently improved by increasing the number of training samples. The smallest error is obtained with the leave-one-out cross-validation test of SVR. These results suggest that SVR is an effective and powerful tool for estimating the density of selective laser sintering parts.
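
The particle swarm optimization step used above to tune the SVR hyperparameters can be sketched generically. Here the swarm minimizes a simple quadratic surrogate objective standing in for the SVR cross-validation error; the inertia and acceleration coefficients are conventional defaults, not the authors' settings:

```python
import numpy as np

# Minimal particle swarm optimization (PSO) sketch of the kind used to tune
# SVR hyperparameters. The objective here is a quadratic surrogate; in the
# real workflow it would be the cross-validated SVR error.
def pso(objective, bounds, n_particles=20, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, size=(n_particles, lo.size))
    vel = np.zeros_like(pos)
    best_pos = pos.copy()                                   # per-particle bests
    best_val = np.array([objective(p) for p in pos])
    g = best_pos[best_val.argmin()].copy()                  # global best
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, lo.size))
        vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([objective(p) for p in pos])
        improved = val < best_val
        best_pos[improved], best_val[improved] = pos[improved], val[improved]
        g = best_pos[best_val.argmin()].copy()
    return g, best_val.min()

# Surrogate objective standing in for the SVR cross-validation error.
target = np.array([2.0, -1.0])
best, value = pso(lambda p: np.sum((p - target) ** 2),
                  (np.array([-5.0, -5.0]), np.array([5.0, 5.0])))
```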

  16. Error prediction analysis of astigmatism after intraocular lens implantation with the modified SRK-T formula

    Institute of Scientific and Technical Information of China (English)

    涂云海; 俞阿勇; 宥永胜; 高潮; 吴文灿

    2011-01-01

    Objective To evaluate the feasibility of predicting astigmatism after intraocular lens (IOL) implantation with the modified SRK-T formula, to identify the factors influencing the prediction error, and to study the value of the modified SRK-T formula for toric intraocular lens power calculation. Methods Retrospective case series study. This study included 68 patients (106 eyes) who underwent phacoemulsification between October 2007 and June 2008 at the Eye Hospital of Wenzhou Medical College. The spherical equivalents calculated by SRK-T and modified SRK-T were compared, and the astigmatism predicted by modified SRK-T was compared with subjective refraction by vector analysis. The influencing factors of the modified SRK-T prediction were analyzed with multivariate linear regression. Results The spherical equivalents calculated by modified SRK-T and SRK-T were equal. The factor determining the prediction error in J0 was corneal astigmatism (KS), J0 = -0.108 - 0.102 × KS (P = 0.034); the factors for J45 were axial length (L) and average corneal refraction (K), J45 = 1.797 - 0.019 × L - 0.031 × K (P = 0.009). Conclusion The modified SRK-T formula is a good option for toric intraocular lens power calculation. The influencing factors of the prediction include KS, L and K.
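
Vector analysis of astigmatism, as used above, is commonly carried out in power-vector components (M, J0, J45) in the sense of Thibos et al. The sketch below assumes that convention (sign conventions vary between papers), and the example refractions are hypothetical:

```python
import math

# Convert a sphere/cylinder/axis refraction to power-vector components
# (M, J0, J45), the usual representation for vector analysis of astigmatism
# (Thibos power vectors; sign conventions vary between papers).
def power_vector(sphere, cyl, axis_deg):
    a = math.radians(axis_deg)
    m = sphere + cyl / 2.0                  # spherical equivalent
    j0 = -(cyl / 2.0) * math.cos(2 * a)     # 0/90-degree astigmatism component
    j45 = -(cyl / 2.0) * math.sin(2 * a)    # oblique astigmatism component
    return m, j0, j45

# Prediction error as a component-wise difference between the predicted and
# the achieved refraction (hypothetical example values).
pred = power_vector(-0.50, -1.00, 180)
achieved = power_vector(-0.25, -0.75, 170)
error = tuple(p - q for p, q in zip(pred, achieved))
```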

  17. Summer fire predictability in a Mediterranean environment

    Science.gov (United States)

    Marcos, Raül; Turco, Marco; Bedía, Joaquín; Llasat, Maria Carmen; Provenzale, Antonello

    2015-04-01

    Each year approximately 500,000 hectares burn in Europe. Most of these burn in Mediterranean summer fires that damage the natural environment and cause significant economic losses and loss of life. To allow adequate prevention measures to be prepared in European Mediterranean regions, a better understanding of summer fire predictability is crucial. Climate is a primary driver of the interannual variability of fires in Mediterranean-type ecosystems, controlling fuel flammability and fuel structure [1, 2]. That is, summer fires are linked to current-year climate values (proxies for the climatic factors that affect fuel flammability) and to antecedent climate variables (proxies for the climatic factors influencing fine-fuel availability and connectivity). In our contribution we explore the long-term predictability of wildfires in a Mediterranean region (NE Spain), driving a multiple linear regression model with observed antecedent climate variables and with predicted variables from the ECMWF System-4 seasonal forecast. The approaches are evaluated through a leave-one-out cross-validation over the period 1983-2010. While the ECMWF System-4 proved of limited usefulness due to its limited skill, the model driven with antecedent climate variables alone allowed satisfactory long-term prediction of above-normal fire activity, suggesting the feasibility of successful seasonal prediction of summer fires in Mediterranean-type regions. References: [1] M. Turco, M. C. Llasat, J. von Hardenberg, and A. Provenzale. Impact of climate variability on summer fires in a Mediterranean environment (northeastern Iberian Peninsula). Climatic Change, 116:665-678, 2013. [2] M. Turco, M. C. Llasat, J. von Hardenberg, and A. Provenzale. Climate change impacts on wildfires in a Mediterranean environment. Climatic Change, 125:369-380, 2014.

  18. Correlated measurement error hampers association network inference.

    Science.gov (United States)

    Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B

    2014-09-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. 
Using proper experimental designs that allow
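
The distinction drawn above between direct and indirect associations rests on partial correlations, which can be read off the inverse covariance (precision) matrix. A sketch on a simulated chain X -> Y -> Z (plain noise only; no correlated measurement error is simulated here): X and Z are marginally correlated, but their partial correlation given Y is approximately zero.

```python
import numpy as np

# Marginal vs partial correlation on a simulated chain X -> Y -> Z.
# The partial correlation of X and Z given Y should vanish, exposing
# the X-Z association as indirect.
rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
y = x + 0.5 * rng.normal(size=n)
z = y + 0.5 * rng.normal(size=n)
data = np.column_stack([x, y, z])

def partial_correlations(data):
    """Partial correlations from the inverse covariance (precision) matrix."""
    prec = np.linalg.inv(np.cov(data, rowvar=False))
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    return pcor

marginal_xz = np.corrcoef(x, z)[0, 1]        # large: indirect association
partial_xz = partial_correlations(data)[0, 2]  # near zero given Y
```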

  19. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot into a pre-existing structure, whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process, and learning may be applied to avoiding the errors.

  20. Reducing medication errors.

    Science.gov (United States)

    Nute, Christine

    2014-11-25

    Most nurses are involved in medicines management, which is integral to promoting patient safety. Medicines management is prone to errors, which depending on the error can cause patient injury, increased hospital stay and significant legal expenses. This article describes a new approach to help minimise drug errors within healthcare settings where medications are prescribed, dispensed or administered. The acronym DRAINS, which considers all aspects of medicines management before administration, was devised to reduce medication errors on a cardiothoracic intensive care unit.

  1. Prediction of protein solvent accessibility using fuzzy k-nearest neighbor method.

    Science.gov (United States)

    Sim, Jaehyun; Kim, Seung-Yeon; Lee, Julian

    2005-06-15

    The solvent accessibility of amino acid residues plays an important role in tertiary structure prediction, especially in the absence of significant sequence similarity between a query protein and proteins with known structures. The prediction of solvent accessibility is less accurate than secondary structure prediction, in spite of improvements in recent research. The k-nearest neighbor method, a simple but powerful classification algorithm, had never been applied to the prediction of solvent accessibility, although it has been used frequently for the classification of biological and medical data. We applied the fuzzy k-nearest neighbor method to solvent accessibility prediction, using PSI-BLAST profiles as feature vectors, and achieved high prediction accuracies. With leave-one-out cross-validation on the ASTRAL SCOP reference dataset constructed by sequence clustering, our method achieved 64.1% accuracy for 3-state (buried/intermediate/exposed) prediction (thresholds of 9% for buried/intermediate and 36% for intermediate/exposed) and 86.7, 82.0, 79.0 and 78.5% accuracies for 2-state (buried/exposed) predictions (thresholds of 0, 5, 16 and 25% for buried/exposed, respectively). Our method also showed slightly better accuracies than other methods, by about 2-5%, on the RS126 dataset and on a benchmarking dataset with 229 proteins. Program and datasets are available at http://biocom1.ssu.ac.kr/FKNNacc/. Contact: jul@ssu.ac.kr.
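
A minimal fuzzy k-nearest-neighbor classifier of the kind applied above can be sketched as follows. The membership weighting uses the common 1/d^(2/(m-1)) rule from the fuzzy k-NN literature; the toy two-class data and all parameters are illustrative, not the paper's PSI-BLAST profile features:

```python
import numpy as np

# Minimal fuzzy k-nearest-neighbor sketch: class memberships of a query
# point are distance-weighted votes of its k nearest training points.
# Toy 2-class data; parameters are illustrative only.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0.0, 0.5, size=(20, 2)),    # class 0 cluster
               rng.normal(3.0, 0.5, size=(20, 2))])   # class 1 cluster
labels = np.array([0] * 20 + [1] * 20)

def fuzzy_knn(query, X, labels, k=5, m=2.0, n_classes=2):
    d = np.linalg.norm(X - query, axis=1)
    nn = np.argsort(d)[:k]                             # k nearest neighbors
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))  # fuzzy weights
    memberships = np.array([w[labels[nn] == c].sum() for c in range(n_classes)])
    return memberships / memberships.sum()             # membership per class

mem = fuzzy_knn(np.array([0.2, -0.1]), X, labels)
```

The returned vector is a degree of membership in each class rather than a hard label, which is what distinguishes the fuzzy variant from plain k-NN.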

  2. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...

  3. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  4. Hierarchical error representation in medial prefrontal cortex.

    Science.gov (United States)

    Zarr, Noah; Brown, Joshua W

    2016-01-01

    The medial prefrontal cortex (mPFC) is reliably activated by both performance and prediction errors. Error signals have typically been treated as a scalar, and it is unknown to what extent multiple error signals may co-exist within mPFC. Previous studies have shown that lateral frontal cortex (LFC) is arranged in a hierarchy of abstraction, such that more abstract concepts and rules are represented in more anterior cortical regions. Given the close interaction between lateral and medial prefrontal cortex, we explored the hypothesis that mPFC would be organized along a similar rostro-caudal gradient of abstraction, such that more abstract prediction errors are represented further anterior and more concrete errors further posterior. We show that multiple prediction error signals can be found in mPFC, and furthermore, these are arranged in a rostro-caudal gradient of abstraction which parallels that found in LFC. We used a task that requires a three-level hierarchy of rules to be followed, in which the rules changed without warning at each level of the hierarchy. Task feedback indicated which level of the rule hierarchy changed and led to corresponding prediction error signals in mPFC. Moreover, each identified region of mPFC was preferentially functionally connected to correspondingly anterior regions of LFC. These results suggest the presence of a parallel structure between lateral and medial prefrontal cortex, with the medial regions monitoring and evaluating performance based on rules maintained in the corresponding lateral regions.

  5. Systematic error revisited

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C.

    1996-08-05

    The American National Standards Institute (ANSI) defines systematic error as "an error which remains constant over replicative measurements." It would seem from the ANSI definition that a systematic error is not really an error at all; it is merely a failure to calibrate the measurement system properly, because if the error is constant, why not simply correct for it? Yet systematic errors undoubtedly exist, and they differ in some fundamental way from the kind of errors we call random. Early papers by Eisenhart and by Youden discussed systematic versus random error with regard to measurements in the physical sciences, but not in a fundamental way, and the distinction remains clouded by controversy. The lack of general agreement on definitions has led to a plethora of different and often confusing methods for quantifying the total uncertainty of a measurement that incorporates both its systematic and random errors. Some assert that systematic error should be treated by non-statistical methods. We disagree with this approach; we provide basic definitions based on entropy concepts, and a statistical methodology for combining errors and making statements of total measurement uncertainty. We illustrate our methods with radiometric assay data.

  6. Error handling strategies in multiphase inverse modeling

    Energy Technology Data Exchange (ETDEWEB)

    Finsterle, S.; Zhang, Y.

    2010-12-01

    Parameter estimation by inverse modeling involves the repeated evaluation of a function of residuals. These residuals represent both errors in the model and errors in the data. In practical applications of inverse modeling of multiphase flow and transport, the error structure of the final residuals often significantly deviates from the statistical assumptions that underlie standard maximum likelihood estimation using the least-squares method. Large random or systematic errors are likely to lead to convergence problems, biased parameter estimates, misleading uncertainty measures, or poor predictive capabilities of the calibrated model. The multiphase inverse modeling code iTOUGH2 supports strategies that identify and mitigate the impact of systematic or non-normal error structures. We discuss these approaches and provide an overview of the error handling features implemented in iTOUGH2.

  7. Scoring function to predict solubility mutagenesis

    Directory of Open Access Journals (Sweden)

    Deutsch Christopher

    2010-10-01

    Full Text Available Abstract Background Mutagenesis is commonly used to engineer proteins with desirable properties not present in the wild type (WT protein, such as increased or decreased stability, reactivity, or solubility. Experimentalists often have to choose a small subset of mutations from a large number of candidates to obtain the desired change, and computational techniques are invaluable to make the choices. While several such methods have been proposed to predict stability and reactivity mutagenesis, solubility has not received much attention. Results We use concepts from computational geometry to define a three body scoring function that predicts the change in protein solubility due to mutations. The scoring function captures both sequence and structure information. By exploring the literature, we have assembled a substantial database of 137 single- and multiple-point solubility mutations. Our database is the largest such collection with structural information known so far. We optimize the scoring function using linear programming (LP methods to derive its weights based on training. Starting with default values of 1, we find weights in the range [0,2] so that predictions of increase or decrease in solubility are optimized. We compare the LP method to the standard machine learning techniques of support vector machines (SVM and the Lasso. Using statistics for leave-one-out (LOO, 10-fold, and 3-fold cross validations (CV for training and prediction, we demonstrate that the LP method performs the best overall. For the LOOCV, the LP method has an overall accuracy of 81%. Availability Executables of programs, tables of weights, and datasets of mutants are available from the following web page: http://www.wsu.edu/~kbala/OptSolMut.html.
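To make the cross-validation protocol in this record concrete, here is a minimal leave-one-out loop for a linear scoring function that predicts the sign of a solubility change. Plain least squares stands in for the paper's LP-derived weights, and the feature matrix below is invented for illustration.

```python
import numpy as np

def loo_sign_accuracy(X, y):
    """Leave-one-out accuracy of a least-squares linear scorer.

    For each held-out mutation, weights are refit on the remaining rows of X
    and the sign of the fitted score predicts increase (+1) or decrease (-1)
    in solubility. (Least squares is an illustrative stand-in for the
    LP-optimised weights of the paper.)
    """
    n = len(X)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i               # drop sample i from training
        w, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
        correct += int(np.sign(X[i] @ w) == y[i])
    return correct / n
```

The same loop generalises to 10-fold or 3-fold CV by holding out blocks of indices instead of single rows.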

  8. Predicting radiotherapy outcomes using statistical learning techniques

    Energy Technology Data Exchange (ETDEWEB)

    El Naqa, Issam; Bradley, Jeffrey D; Deasy, Joseph O [Washington University, Saint Louis, MO (United States); Lindsay, Patricia E; Hope, Andrew J [Department of Radiation Oncology, Princess Margaret Hospital, Toronto, ON (Canada)

    2009-09-21

    Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to unseen data. In this work, several types of linear and nonlinear kernels to generate interaction terms and approximate the treatment-response function are evaluated. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for 'generalizability' validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal component analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among
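Leave-one-out testing of kernel models like those above need not refit n times: for any ridge-type smoother, the held-out residuals follow in closed form from the hat matrix, e_{-i} = (y_i - yhat_i) / (1 - H_ii). The sketch below applies this shortcut to kernel ridge regression with an RBF kernel, an illustrative stand-in for the record's SVMs (whose hinge loss admits no such exact identity).

```python
import numpy as np

def rbf_kernel(X, Z, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row sets X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def loo_errors_kernel_ridge(X, y, lam=0.1, gamma=1.0):
    """Closed-form leave-one-out residuals for kernel ridge regression.

    With hat matrix H = K (K + lam*I)^-1, the exact LOO residual is
    (y_i - yhat_i) / (1 - H_ii), so no explicit refitting is needed.
    """
    K = rbf_kernel(X, X, gamma)
    H = K @ np.linalg.inv(K + lam * np.eye(len(X)))
    resid = y - H @ y
    return resid / (1.0 - np.diag(H))
```

The identity can be verified by brute force: retraining on n-1 points and predicting the held-out one gives the same residuals to machine precision.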

  9. Predicting radiotherapy outcomes using statistical learning techniques*

    Science.gov (United States)

    El Naqa, Issam; Bradley, Jeffrey D; Lindsay, Patricia E; Hope, Andrew J; Deasy, Joseph O

    2013-01-01

    Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to unseen data. In this work, several types of linear and nonlinear kernels to generate interaction terms and approximate the treatment-response function are evaluated. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for ‘generalizability’ validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal component analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model

  10. Predicting radiotherapy outcomes using statistical learning techniques

    Science.gov (United States)

    El Naqa, Issam; Bradley, Jeffrey D.; Lindsay, Patricia E.; Hope, Andrew J.; Deasy, Joseph O.

    2009-09-01

    Radiotherapy outcomes are determined by complex interactions between treatment, anatomical and patient-related variables. A common obstacle to building maximally predictive outcome models for clinical practice is the failure to capture the potential complexity of heterogeneous variable interactions and applicability beyond institutional data. We describe a statistical learning methodology that can automatically screen for nonlinear relations among prognostic variables and generalize to unseen data. In this work, several types of linear and nonlinear kernels to generate interaction terms and approximate the treatment-response function are evaluated. Examples of institutional datasets of esophagitis, pneumonitis and xerostomia endpoints were used. Furthermore, an independent RTOG dataset was used for 'generalizability' validation. We formulated the discrimination between risk groups as a supervised learning problem. The distribution of patient groups was initially analyzed using principal component analysis (PCA) to uncover potential nonlinear behavior. The performance of the different methods was evaluated using bivariate correlations and actuarial analysis. Over-fitting was controlled via cross-validation resampling. Our results suggest that a modified support vector machine (SVM) kernel method provided superior performance on leave-one-out testing compared to logistic regression and neural networks in cases where the data exhibited nonlinear behavior on PCA. For instance, in prediction of esophagitis and pneumonitis endpoints, which exhibited nonlinear behavior on PCA, the method provided 21% and 60% improvements, respectively. Furthermore, evaluation on the independent pneumonitis RTOG dataset demonstrated good generalizability beyond institutional data in contrast with other models. This indicates that the prediction of treatment response can be improved by utilizing nonlinear kernel methods for discovering important nonlinear interactions among model

  11. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Directory of Open Access Journals (Sweden)

    Jensen Just

    2002-05-01

    Full Text Available Abstract In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found – otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits – and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model) or a generalised version of heritability plays a central role in these formulas.

  12. Development of a (13)C NMR Chemical Shift Prediction Procedure Using B3LYP/cc-pVDZ and Empirically Derived Systematic Error Correction Terms: A Computational Small Molecule Structure Elucidation Method.

    Science.gov (United States)

    Xin, Dongyue; Sader, C Avery; Chaudhary, Om; Jones, Paul-James; Wagner, Klaus; Tautermann, Christofer S; Yang, Zheng; Busacca, Carl A; Saraceno, Reginaldo A; Fandrick, Keith R; Gonnella, Nina C; Horspool, Keith; Hansen, Gordon; Senanayake, Chris H

    2017-05-19

    An accurate and efficient procedure was developed for performing (13)C NMR chemical shift calculations employing density functional theory with the gauge invariant atomic orbitals (DFT-GIAO). Benchmarking analysis was carried out, incorporating several density functionals and basis sets commonly used for prediction of (13)C NMR chemical shifts, from which the B3LYP/cc-pVDZ level of theory was found to provide accurate results at low computational cost. Statistical analyses from a large data set of (13)C NMR chemical shifts in DMSO are presented with TMS as the calculated reference and with empirical scaling parameters obtained from a linear regression analysis. Systematic errors were observed locally for key functional groups and carbon types, and correction factors were determined. The application of this process and associated correction factors enabled assignment of the correct structures of therapeutically relevant compounds in cases where experimental data yielded inconclusive or ambiguous results. Overall, the use of B3LYP/cc-pVDZ with linear scaling and correction terms affords a powerful and efficient tool for structure elucidation.
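The linear-regression scaling step described above can be sketched in a few lines. The shift values below are invented for illustration; in practice the slope and intercept would come from a large reference set like the paper's DMSO benchmark.

```python
import numpy as np

def scale_shifts(delta_calc, delta_exp_ref, delta_calc_ref):
    """Empirically scale computed 13C chemical shifts.

    Fit delta_calc = m * delta_exp + b on a reference set, then invert the
    fit, delta_scaled = (delta_calc - b) / m, to remove the systematic
    error in new DFT-GIAO predictions.
    """
    m, b = np.polyfit(delta_exp_ref, delta_calc_ref, 1)
    return (np.asarray(delta_calc) - b) / m
```

When the computed shifts are related to experiment by an exactly linear systematic error, the scaling recovers the experimental values; real data would additionally need the per-functional-group correction terms the record describes.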

  13. Prediction error variance and expected response to selection, when selection is based on the best predictor – for Gaussian and threshold characters, traits following a Poisson mixed model and survival traits

    Science.gov (United States)

    Korsgaard, Inge Riis; Andersen, Anders Holst; Jensen, Just

    2002-01-01

    In this paper, we consider selection based on the best predictor of animal additive genetic values in Gaussian linear mixed models, threshold models, Poisson mixed models, and log normal frailty models for survival data (including models with time-dependent covariates with associated fixed or random effects). In the different models, expressions are given (when these can be found – otherwise unbiased estimates are given) for prediction error variance, accuracy of selection and expected response to selection on the additive genetic scale and on the observed scale. The expressions given for non-Gaussian traits are generalisations of the well-known formulas for Gaussian traits – and reflect, for Poisson mixed models and frailty models for survival data, the hierarchical structure of the models. In general the ratio of the additive genetic variance to the total variance in the Gaussian part of the model (heritability on the normally distributed level of the model) or a generalised version of heritability plays a central role in these formulas. PMID:12081800
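For the Gaussian case, the quantities named in this record have familiar closed forms; the following uses standard animal-breeding notation (not copied from the paper) to show the role heritability plays:

```latex
% \sigma^2_A: additive genetic variance, \sigma^2_E: environmental variance,
% PEV: prediction error variance, i: selection intensity
h^2 = \frac{\sigma^2_A}{\sigma^2_A + \sigma^2_E}, \qquad
r = \sqrt{1 - \frac{\mathrm{PEV}}{\sigma^2_A}}, \qquad
R = i \, r \, \sigma_A
```

Here r is the accuracy of selection and R the expected response; the paper's contribution is generalising such expressions to threshold, Poisson and frailty models.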

  14. Embedded wavelet video coding with error concealment

    Science.gov (United States)

    Chang, Pao-Chi; Chen, Hsiao-Ching; Lu, Ta-Te

    2000-04-01

    We present an error-concealed embedded wavelet (ECEW) video coding system for transmission over the Internet or wireless networks. This system consists of two types of frames: intra (I) frames and inter, or predicted (P), frames. Inter frames are constructed from the residual frames formed by variable block-size multiresolution motion estimation (MRME). Motion vectors are compressed by arithmetic coding. The image data of intra frames and residual frames are coded by error-resilient embedded zerotree wavelet (ER-EZW) coding. The ER-EZW coding partitions the wavelet coefficients into several groups and each group is coded independently, so the error propagation resulting from an error is confined to a single group. In conventional EZW coding, any single error may result in a totally undecodable bitstream. To further reduce the error damage, we use error concealment at the decoding end. In intra frames, erroneous wavelet coefficients are replaced by neighbors. In inter frames, erroneous blocks of wavelet coefficients are replaced by data from the previous frame. Simulations show that the performance of ECEW is superior to ECEW without error concealment by approximately 7-8 dB at an error rate of 10^-3 in intra frames. The improvement is still approximately 2-3 dB at a higher error rate of 10^-2 in inter frames.

  15. Median Unbiased Estimation of Bivariate Predictive Regression Models with Heavy-tailed or Heteroscedastic Errors

    Institute of Scientific and Technical Information of China (English)

    朱复康; 王德军

    2007-01-01

    In this paper, we consider median unbiased estimation of bivariate predictive regression models with non-normal, heavy-tailed or heteroscedastic errors. We construct confidence intervals and a median unbiased estimator for the parameter of interest. We show via simulation that the proposed estimator has better predictive potential than the usual least squares estimator. An empirical application to finance is given, and a possible extension of the estimation procedure to cointegration models is also described.

  16. Cosine tuning minimizes motor errors.

    Science.gov (United States)

    Todorov, Emanuel

    2002-06-01

    Cosine tuning is ubiquitous in the motor system, yet a satisfying explanation of its origin is lacking. Here we argue that cosine tuning minimizes expected errors in force production, which makes it a natural choice for activating muscles and neurons in the final stages of motor processing. Our results are based on the empirically observed scaling of neuromotor noise, whose standard deviation is a linear function of the mean. Such scaling predicts a reduction of net force errors when redundant actuators pull in the same direction. We confirm this prediction by comparing forces produced with one versus two hands and generalize it across directions. Under the resulting neuromotor noise model, we prove that the optimal activation profile is a (possibly truncated) cosine, for arbitrary dimensionality of the workspace, distribution of force directions, correlated or uncorrelated noise, with or without a separate cocontraction command. The model predicts a negative force bias, truncated cosine tuning at low muscle cocontraction levels, and misalignment of preferred directions and lines of action for nonuniform muscle distributions. All predictions are supported by experimental data.
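The noise-scaling argument in this record is easy to check numerically: if each actuator's noise standard deviation grows linearly with its mean force, then splitting a load across n independent actuators shrinks the net-force standard deviation by a factor of sqrt(n). The function below is a back-of-the-envelope sketch with an assumed noise coefficient of 0.1.

```python
def net_force_std(n_actuators, total_force=10.0, noise_scale=0.1):
    """Std of the summed force under signal-dependent motor noise.

    Each of n actuators contributes total_force / n on average, with
    independent noise whose std is noise_scale * (its mean force).
    Independent variances add, so net variance is n * (scale * mean)^2.
    """
    per_actuator_mean = total_force / n_actuators
    per_actuator_var = (noise_scale * per_actuator_mean) ** 2
    return (n_actuators * per_actuator_var) ** 0.5
```

For one "hand" versus two: `net_force_std(1)` gives 1.0 and `net_force_std(2)` gives about 0.71, the sqrt(2) reduction the record's one- versus two-hand comparison is testing.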

  17. Cognitive modelling of pilot errors and error recovery in flight management tasks

    NARCIS (Netherlands)

    Lüdtke, A.; Osterloh, J.P.; Mioch, T.; Rister, F.; Looije, R.

    2009-01-01

    This paper presents a cognitive modelling approach to predict pilot errors and error recovery during the interaction with aircraft cockpit systems. The model allows execution of flight procedures in a virtual simulation environment and production of simulation traces. We present traces for the inter

  18. Probabilistic quantum error correction

    CERN Document Server

    Fern, J; Fern, Jesse; Terilla, John

    2002-01-01

    There are well known necessary and sufficient conditions for a quantum code to correct a set of errors. We study weaker conditions under which a quantum code may correct errors with probabilities that may be less than one. We work with stabilizer codes and as an application study how the nine qubit code, the seven qubit code, and the five qubit code perform when there are errors on more than one qubit. As a second application, we discuss the concept of syndrome quality and use it to suggest a way that quantum error correction can be practically improved.

  19. A New MCP Method of Predicting Long-term Wind Speed with Height Error Revision

    Institute of Scientific and Technical Information of China (English)

    刘郁珏; 李军; 胡非; 朱蓉

    2013-01-01

    above can only predict wind speed of a target site at the same altitude. If the target site is much higher or lower than the reference site, the result will be unreliable. So a new MCP method with height error revision is proposed based on data from two wind measurements, including six-layer wind data over one year. The fitted equations of the Weibull parameters k and c as functions of height have been derived. By means of the fitted equations, the relationship between winds at high and low altitude can be formulated, and a method for error reduction is presented. Finally, a set of performance comparisons is carried out. The coefficient of correlation, the mean speed, the wind distribution and the corrected annual energy production are selected as metrics at the target site, and a sample wind turbine power curve is analyzed. The mean and standard deviation of those estimates are used to characterize results. Results indicate that the new MCP method with height error revision works much better than previous ones.

  20. Error processing network dynamics in schizophrenia.

    Science.gov (United States)

    Becerril, Karla E; Repovs, Grega; Barch, Deanna M

    2011-01-15

    Current theories of cognitive dysfunction in schizophrenia emphasize an impairment in the ability of individuals suffering from this disorder to monitor their own performance, and adjust their behavior to changing demands. Detecting an error in performance is a critical component of evaluative functions that allow the flexible adjustment of behavior to optimize outcomes. The dorsal anterior cingulate cortex (dACC) has been repeatedly implicated in error-detection and implementation of error-based behavioral adjustments. However, accurate error-detection and subsequent behavioral adjustments are unlikely to rely on a single brain region. Recent research demonstrates that regions in the anterior insula, inferior parietal lobule, anterior prefrontal cortex, thalamus, and cerebellum also show robust error-related activity, and integrate into a functional network. Despite the relevance of examining brain activity related to the processing of error information and supporting behavioral adjustments in terms of a distributed network, the contribution of regions outside the dACC to error processing remains poorly understood. To address this question, we used functional magnetic resonance imaging to examine error-related responses in 37 individuals with schizophrenia and 32 healthy controls in regions identified in the basic science literature as being involved in error processing, and determined whether their activity was related to behavioral adjustments. Our imaging results support previous findings showing that regions outside the dACC are sensitive to error commission, and demonstrated that abnormalities in brain responses to errors among individuals with schizophrenia extend beyond the dACC to almost all of the regions involved in error-related processing in controls. However, error related responses in the dACC were most predictive of behavioral adjustments in both groups. Moreover, the integration of this network of regions differed between groups, with the

  1. Development of Interpretable Predictive Models for BPH and Prostate Cancer

    Science.gov (United States)

    Bermejo, Pablo; Vivo, Alicia; Tárraga, Pedro J; Rodríguez-Montes, JA

    2015-01-01

    BACKGROUND Traditional methods for deciding whether to recommend a patient for a prostate biopsy are based on cut-off levels of stand-alone markers such as prostate-specific antigen (PSA) or any of its derivatives. However, in the last decade we have seen the increasing use of predictive models that combine, in a non-linear manner, several predictors that are better able to predict prostate cancer (PC), but these fail to help the clinician to distinguish between PC and benign prostate hyperplasia (BPH) patients. We construct two new models that are capable of predicting both PC and BPH. METHODS An observational study was performed on 150 patients with PSA ≥3 ng/mL and age >50 years. We built a decision tree and a logistic regression model, validated with the leave-one-out methodology, in order to predict PC or BPH, or reject both. RESULTS Statistical dependence with PC and BPH was found for prostate volume (P-value < 0.001), PSA (P-value < 0.001), international prostate symptom score (IPSS; P-value < 0.001), digital rectal examination (DRE; P-value < 0.001), age (P-value < 0.002), antecedents (P-value < 0.006), and meat consumption (P-value < 0.08). The two predictive models that were constructed selected a subset of these, namely, volume, PSA, DRE, and IPSS, obtaining an area under the ROC curve (AUC) between 72% and 80% for both PC and BPH prediction. CONCLUSION PSA and volume together help to build predictive models that accurately distinguish among PC, BPH, and patients without any of these pathologies. Our decision tree and logistic regression models outperform the AUC obtained in the compared studies. Using these models as decision support, the number of unnecessary biopsies might be significantly reduced. PMID:25780348

  2. Correction for quadrature errors

    DEFF Research Database (Denmark)

    Netterstrøm, A.; Christensen, Erik Lintz

    1994-01-01

    In high bandwidth radar systems it is necessary to use quadrature devices to convert the signal to/from baseband. Practical problems make it difficult to implement a perfect quadrature system. Channel imbalance and quadrature phase errors in the transmitter and the receiver result in error signal...

  3. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers and language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  4. Prediction of endoplasmic reticulum resident proteins using fragmented amino acid composition and support vector machine

    Directory of Open Access Journals (Sweden)

    Ravindra Kumar

    2017-09-01

    Full Text Available Background The endoplasmic reticulum plays an important role in many cellular processes, including protein synthesis, folding and post-translational processing of newly synthesized proteins. It is also the site for quality control of misfolded proteins and the entry point of extracellular proteins to the secretory pathway. Hence at any given point of time, the endoplasmic reticulum contains two different cohorts of proteins: (i) proteins involved in endoplasmic reticulum-specific functions, which reside in the lumen of the endoplasmic reticulum, called endoplasmic reticulum resident proteins, and (ii) proteins which are in the process of moving to the extracellular space. Thus, endoplasmic reticulum resident proteins must somehow be distinguished from newly synthesized secretory proteins, which pass through the endoplasmic reticulum on their way out of the cell. Only approximately 50% of the proteins used as training data in this study had an endoplasmic reticulum retention signal, which shows that these signals are not necessarily present in all endoplasmic reticulum resident proteins. This also strongly indicates the role of additional factors in the retention of endoplasmic reticulum-specific proteins inside the endoplasmic reticulum. Methods This is a support vector machine based method, where we used different forms of protein features as inputs for the support vector machine to develop the prediction models. During training, the leave-one-out approach of cross-validation was used. Maximum performance was obtained with a combination of amino acid compositions of different parts of the proteins. Results In this study, we report a novel support vector machine based method for predicting endoplasmic reticulum resident proteins, named ERPred. During training we achieved a maximum accuracy of 81.42% with the leave-one-out approach of cross-validation. When evaluated on an independent dataset, ERPred did prediction with a sensitivity of 72.31% and specificity of 83
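A "composition of different parts of the protein" feature can be sketched as follows. The fragment count of 4 and the equal-length split are illustrative assumptions, not the exact ERPred configuration.

```python
from collections import Counter

AA = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard amino acids

def fragment_composition(seq, n_frag=4):
    """Fragmented amino acid composition feature vector.

    Splits the sequence into n_frag roughly equal parts and concatenates
    the 20-dimensional amino acid composition of each part, giving a
    fixed-length 20 * n_frag vector suitable as SVM input.
    """
    feats = []
    length = len(seq)
    bounds = [round(i * length / n_frag) for i in range(n_frag + 1)]
    for a, b in zip(bounds, bounds[1:]):
        part = seq[a:b]
        counts = Counter(part)
        feats.extend(counts.get(aa, 0) / max(len(part), 1) for aa in AA)
    return feats
```

Each fragment's 20 entries sum to 1, so the vector encodes where in the sequence each residue type is concentrated, not just how often it occurs overall.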

  5. ERROR AND ERROR CORRECTION AT ELEMENTARY LEVEL

    Institute of Scientific and Technical Information of China (English)

    1994-01-01

    Introduction Errors are unavoidable in language learning, however, to a great extent, teachers in most middle schools in China regard errors as undesirable, a sign of failure in language learning. Most middle schools are still using the grammar-translation method which aims at encouraging students to read scientific works and enjoy literary works. The other goals of this method are to gain a greater understanding of the first language and to improve the students’ ability to cope with difficult subjects and materials, i.e. to develop the students’ minds. The practical purpose of using this method is to help learners pass the annual entrance examination. "To achieve these goals, the students must first learn grammar and vocabulary,... Grammar is taught deductively by means of long and elaborate explanations... students learn the rules of the language rather than its use." (Tang Lixing, 1983:11-12)

  6. Errors on errors - Estimating cosmological parameter covariance

    CERN Document Server

    Joachimi, Benjamin

    2014-01-01

    Current and forthcoming cosmological data analyses share the challenge of huge datasets alongside increasingly tight requirements on the precision and accuracy of extracted cosmological parameters. The community is becoming increasingly aware that these requirements not only apply to the central values of parameters but, equally important, also to the error bars. Due to non-linear effects in the astrophysics, the instrument, and the analysis pipeline, data covariance matrices are usually not well known a priori and need to be estimated from the data itself, or from suites of large simulations. In either case, the finite number of realisations available to determine data covariances introduces significant biases and additional variance in the errors on cosmological parameters in a standard likelihood analysis. Here, we review recent work on quantifying these biases and additional variances and discuss approaches to remedy these effects.
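
One widely used first-order remedy for the inverse-covariance bias this record reviews is the so-called Hartlap correction factor; the record surveys this family of corrections without committing to one, so the sketch below is illustrative, assuming Gaussian-distributed realisations.

```python
def hartlap_factor(n_realisations, n_data):
    # Debiasing factor for the inverse of a covariance matrix estimated
    # from a finite number of Gaussian realisations:
    #   <C_hat^{-1}> = C^{-1} * (n - 1) / (n - p - 2),
    # so multiplying the sample inverse by (n - p - 2)/(n - 1) removes
    # the bias (Hartlap et al. 2007 convention; requires n > p + 2).
    n, p = n_realisations, n_data
    if n <= p + 2:
        raise ValueError("need more realisations than data points + 2")
    return (n - p - 2) / (n - 1)

# The fewer realisations per data point, the stronger the correction:
print(round(hartlap_factor(1000, 10), 3))  # 0.989: mild correction
print(round(hartlap_factor(50, 40), 3))    # 0.163: severe correction
```

The second case illustrates the regime the record warns about: when the number of simulations barely exceeds the data dimension, the naive inverse covariance is badly biased and parameter error bars inherit large extra variance.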

  7. Quantifying truncation errors in effective field theory

    CERN Document Server

    Furnstahl, R J; Phillips, D R; Wesolowski, S

    2015-01-01

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples, and then focus on the application of chiral EFT to neutron-pr...

  8. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  9. A k-NN algorithm for predicting the oral sub-chronic toxicity in the rat.

    Science.gov (United States)

    Gadaleta, Domenico; Pizzo, Fabiola; Lombardo, Anna; Carotti, Angelo; Escher, Sylvia E; Nicolotti, Orazio; Benfenati, Emilio

    2014-01-01

    Repeated dose toxicity is of the utmost importance to characterize the toxicological profile of a chemical after repeated administration. Its evaluation refers to the Lowest-Observed-(Adverse)-Effect-Level (LO(A)EL) explicitly requested in several regulatory contexts, such as REACH and EC Regulation 1223/2009 on cosmetic products. So far, in vivo tests have been the sole viable option to assess repeated dose toxicity. We report a customized k-Nearest Neighbors approach for predicting sub-chronic oral toxicity in rats. A training set of 254 chemicals was used to derive models whose robustness was challenged through leave-one-out cross-validation. Their predictive power was evaluated on an external dataset comprising 179 chemicals. Despite the intrinsically heterogeneous nature of the data, our models give promising results, with q²≥0.632 and external r²≥0.543. The confidence in prediction was ensured by implementing restrictive user-adjustable rules excluding suspicious chemicals irrespective of the goodness of their prediction. Comparison with the very few LO(A)EL predictive models in the literature indicates that the results of the present analysis can be valuable in prioritizing the safety assessment of chemicals and thus making safe decisions and justifying waiving animal tests according to current regulations concerning chemical safety.
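
The q² statistic quoted in this record comes from exactly such a leave-one-out loop: q² = 1 − PRESS/SS_total, with every sample predicted by a model that never saw it. A minimal sketch with a k-NN regressor on toy one-dimensional descriptors (the actual models use chemical descriptors and applicability-domain rules not shown here):

```python
def knn_predict(train_x, train_y, query, k=2):
    # Mean response of the k nearest training points (squared Euclidean distance).
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, query)), y)
                   for x, y in zip(train_x, train_y))
    neighbours = [y for _, y in dists[:k]]
    return sum(neighbours) / len(neighbours)

def loocv_q2(xs, ys, k=2):
    # Cross-validated q^2 = 1 - PRESS / SS_total, where PRESS sums the
    # squared errors of predictions made with the sample held out.
    press = 0.0
    for i in range(len(xs)):
        pred = knn_predict(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:], xs[i], k)
        press += (ys[i] - pred) ** 2
    mean_y = sum(ys) / len(ys)
    ss_tot = sum((y - mean_y) ** 2 for y in ys)
    return 1.0 - press / ss_tot

# Toy descriptors vs. a monotone response: q^2 near 1 means the model
# generalises; q^2 <= 0 means it does no better than the mean.
xs = [[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]]
ys = [0.1, 1.0, 2.1, 2.9, 4.0, 5.1]
print(loocv_q2(xs, ys, k=2) > 0.632)  # True: passes the usual q^2 threshold
```

The 0.632 threshold the record cites is a common QSAR acceptance criterion for cross-validated q².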

  10. Predicting Ecological Roles in the Rhizosphere using Metabolome and Transportome Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, Peter E.; Collart, Frank R.; Dai, Yang

    2015-09-02

    The ability to obtain complete genome sequences from bacteria in environmental samples, such as soil samples from the rhizosphere, has highlighted the microbial diversity and complexity of environmental communities. However, new algorithms to analyze genome sequence information in the context of community structure are needed to enhance our understanding of the specific ecological roles of these organisms in soil environments. We present a machine learning approach using sequenced Pseudomonad genomes coupled with outputs of metabolic and transportomic computational models for identifying the most predictive molecular mechanisms indicative of a Pseudomonad's ecological role in the rhizosphere: a biofilm, biocontrol agent, promoter of plant growth, or plant pathogen. Computational predictions of ecological niche were highly accurate overall with models trained on transportomic model output being the most accurate (Leave One Out Validation F-scores between 0.82 and 0.89). The strongest predictive molecular mechanism features for rhizosphere ecological niche overlap with many previously reported analyses of Pseudomonad interactions in the rhizosphere, suggesting that this approach successfully informs a system-scale level understanding of how Pseudomonads sense and interact with their environments. The observation that an organism's transportome is highly predictive of its ecological niche is a novel discovery and may have implications for our understanding of microbial ecology. The framework developed here can be generalized to the analysis of any bacteria across a wide range of environments and ecological niches making this approach a powerful tool for providing insights into functional predictions from bacterial genomic data.

  11. Predicting Ecological Roles in the Rhizosphere Using Metabolome and Transportome Modeling.

    Science.gov (United States)

    Larsen, Peter E; Collart, Frank R; Dai, Yang

    2015-01-01

    The ability to obtain complete genome sequences from bacteria in environmental samples, such as soil samples from the rhizosphere, has highlighted the microbial diversity and complexity of environmental communities. However, new algorithms to analyze genome sequence information in the context of community structure are needed to enhance our understanding of the specific ecological roles of these organisms in soil environments. We present a machine learning approach using sequenced Pseudomonad genomes coupled with outputs of metabolic and transportomic computational models for identifying the most predictive molecular mechanisms indicative of a Pseudomonad's ecological role in the rhizosphere: a biofilm, biocontrol agent, promoter of plant growth, or plant pathogen. Computational predictions of ecological niche were highly accurate overall with models trained on transportomic model output being the most accurate (Leave One Out Validation F-scores between 0.82 and 0.89). The strongest predictive molecular mechanism features for rhizosphere ecological niche overlap with many previously reported analyses of Pseudomonad interactions in the rhizosphere, suggesting that this approach successfully informs a system-scale level understanding of how Pseudomonads sense and interact with their environments. The observation that an organism's transportome is highly predictive of its ecological niche is a novel discovery and may have implications for our understanding of microbial ecology. The framework developed here can be generalized to the analysis of any bacteria across a wide range of environments and ecological niches making this approach a powerful tool for providing insights into functional predictions from bacterial genomic data.

  12. Predicting Ecological Roles in the Rhizosphere Using Metabolome and Transportome Modeling.

    Directory of Open Access Journals (Sweden)

    Peter E Larsen

    Full Text Available The ability to obtain complete genome sequences from bacteria in environmental samples, such as soil samples from the rhizosphere, has highlighted the microbial diversity and complexity of environmental communities. However, new algorithms to analyze genome sequence information in the context of community structure are needed to enhance our understanding of the specific ecological roles of these organisms in soil environments. We present a machine learning approach using sequenced Pseudomonad genomes coupled with outputs of metabolic and transportomic computational models for identifying the most predictive molecular mechanisms indicative of a Pseudomonad's ecological role in the rhizosphere: a biofilm, biocontrol agent, promoter of plant growth, or plant pathogen. Computational predictions of ecological niche were highly accurate overall with models trained on transportomic model output being the most accurate (Leave One Out Validation F-scores between 0.82 and 0.89). The strongest predictive molecular mechanism features for rhizosphere ecological niche overlap with many previously reported analyses of Pseudomonad interactions in the rhizosphere, suggesting that this approach successfully informs a system-scale level understanding of how Pseudomonads sense and interact with their environments. The observation that an organism's transportome is highly predictive of its ecological niche is a novel discovery and may have implications for our understanding of microbial ecology. The framework developed here can be generalized to the analysis of any bacteria across a wide range of environments and ecological niches making this approach a powerful tool for providing insights into functional predictions from bacterial genomic data.

  13. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

  14. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of these, 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  15. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of these, 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  16. Errors in neuroradiology.

    Science.gov (United States)

    Caranci, Ferdinando; Tedeschi, Enrico; Leone, Giuseppe; Reginelli, Alfonso; Gatta, Gianluca; Pinto, Antonio; Squillaci, Ettore; Briganti, Francesco; Brunese, Luca

    2015-09-01

    Approximately 4 % of radiologic interpretations in daily practice contain errors, and discrepancies are estimated to occur in 2-20 % of reports. Fortunately, most of these are minor errors or, if serious, are found and corrected with sufficient promptness; diagnostic errors become critical when misinterpretation or misidentification significantly delays medical or surgical treatment. Errors can be summarized into four main categories: observer errors, errors in interpretation, failure to suggest the next appropriate procedure, and failure to communicate in a timely and clinically appropriate manner. The misdiagnosis/misinterpretation rate rises in the emergency setting and early in the learning curve, as during residency. Para-physiological and pathological pitfalls in neuroradiology include calcification and brain stones, pseudofractures, enlargement of subarachnoid or epidural spaces, ventricular system abnormalities, vascular system abnormalities, intracranial lesions or pseudolesions, and finally neuroradiological emergencies. To minimize the possibility of error, it is important to be aware of the various presentations of pathology, obtain clinical information, know current practice guidelines, review after interpreting a diagnostic study, suggest follow-up studies when appropriate, and communicate significant abnormal findings appropriately, in a timely fashion, and directly with the treatment team.

  17. AMS 3.0: prediction of post-translational modifications

    Directory of Open Access Journals (Sweden)

    Plewczynski Dariusz

    2010-04-01

    Full Text Available Abstract Background We present here the recent update of the AMS algorithm for identification of post-translational modification (PTM) sites in proteins based only on sequence information, using an artificial neural network (ANN) method. The query protein sequence is dissected into overlapping short sequence segments. Ten different physicochemical features describe each amino acid; a nine-residue-long segment is therefore represented as a point in a 90-dimensional space. A database of sequence segments with experimentally confirmed post-translational modification sites is used for training a set of ANNs. Results The efficiency of the classification for each type of modification and the prediction power of the method are estimated here using recall (sensitivity) and precision values, the area under the receiver operating characteristic (ROC) curve and leave-one-out tests (LOOCV). Significant differences in performance between differently optimized neural networks are observed, yet the AMS 3.0 tool integrates those heterogeneous classification schemes into a single consensus scheme, which is able to boost the precision and recall values independent of PTM type in comparison with the currently available state-of-the-art methods. Conclusions The standalone version of AMS 3.0 presents an efficient way to identify post-translational modifications for whole proteomes. The training datasets, precompiled binaries for the AMS 3.0 tool and the source code are available at http://code.google.com/p/automotifserver under the Apache 2.0 license scheme.
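
The segment encoding this record describes (nine-residue windows, ten physicochemical features per residue, hence 90-dimensional points) can be sketched directly; the two-feature residue table below is made up to keep the example short, so the toy vectors are 18-dimensional rather than 90.

```python
# Sketch of the segment encoding described above: slide a fixed-length
# window over the sequence and concatenate per-residue physicochemical
# features. With 10 features per residue and 9-residue windows this
# yields the 90-dimensional points the record mentions; the toy table
# below uses only 2 made-up features per residue to stay short.

TOY_FEATURES = {            # residue -> [hydrophobicity, charge] (toy values)
    "A": [1.8, 0.0], "K": [-3.9, 1.0], "D": [-3.5, -1.0],
    "S": [-0.8, 0.0], "G": [-0.4, 0.0],
}

def encode_windows(sequence, window=9, table=TOY_FEATURES):
    # One concatenated feature vector per overlapping window.
    vectors = []
    for start in range(len(sequence) - window + 1):
        vec = []
        for residue in sequence[start:start + window]:
            vec.extend(table[residue])
        vectors.append(vec)
    return vectors

vecs = encode_windows("AKDSGAKDSG")   # length 10 -> 2 overlapping windows of 9
print(len(vecs), len(vecs[0]))        # 2 windows, 9 residues * 2 features = 18 dims
```

Each such vector is then a single training or query point for the ANN ensemble.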

  18. Profiles in patient safety: when an error occurs.

    Science.gov (United States)

    Hobgood, Cherri; Hevia, Armando; Hinchey, Paul

    2004-07-01

    Medical error is now clearly established as one of the most significant problems facing the American health care system. Anecdotal evidence, studies of human cognition, and analysis of high-reliability organizations all predict that despite excellent training, human error is unavoidable. When an error occurs and is recognized, providers have a duty to disclose the error. Yet disclosure of error to patients, families, and hospital colleagues is a difficult and/or threatening process for most physicians. A more thorough understanding of the ethical and social contract between physicians and their patients as well as the professional milieu surrounding an error may improve the likelihood of its disclosure. Key among these is the identification of institutional factors that support disclosure and recognize error as an unavoidable part of the practice of medicine. Using a case-based format, this article focuses on the communication of error with patients, families, and colleagues and grounds error disclosure in the cultural milieu of medical ethics.

  19. A prediction scheme of tropical cyclone frequency based on lasso and random forest

    Science.gov (United States)

    Tan, Jinkai; Liu, Hexiang; Li, Mengya; Wang, Jun

    2017-07-01

    This study proposes a novel prediction scheme for tropical cyclone frequency (TCF) over the Western North Pacific (WNP). We considered large-scale meteorological factors including sea surface temperature, sea level pressure, the Niño-3.4 index, wind shear, vorticity, the subtropical high, and sea ice cover, since the chronic change of these factors in the context of climate change would cause a gradual variation of the annual TCF. Specifically, we focus on the correlation between the year-to-year increments of these factors and TCF. The least absolute shrinkage and selection operator (Lasso) method was used for variable selection and dimension reduction from 11 initial predictors. A prediction model based on random forest (RF) was then established using the training samples (1978-2011) for calibration and the testing samples (2012-2016) for validation. The RF model captures the major variation and trend of TCF in the calibration period, and also fitted the observed TCF well in the validation period, though with some deviations. Leave-one-out cross-validation showed that most of the predicted TCF values are consistent with the observed TCF, with a high correlation coefficient. A comparison between the RF model and a multiple linear regression (MLR) model suggested that RF is more practical and capable of giving reliable TCF predictions over the WNP.

  20. Prediction of drought-resistant genes in Arabidopsis thaliana using SVM-RFE.

    Directory of Open Access Journals (Sweden)

    Yanchun Liang

    Full Text Available BACKGROUND: Identifying genes with essential roles in resisting environmental stress ranks high in agronomic importance. Although massive DNA microarray gene expression data have been generated for plants, current computational approaches underutilize these data for studying genotype-trait relationships. Some advanced gene identification methods have been explored for human diseases, but typically these methods have not been converted into publicly available software tools and cannot be applied to plants for identifying genes with agronomic traits. METHODOLOGY: In this study, we used 22 sets of Arabidopsis thaliana gene expression data from GEO to predict the key genes involved in water tolerance. We applied an SVM-RFE (Support Vector Machine-Recursive Feature Elimination) feature selection method for the prediction. To address small sample sizes, we developed a modified approach for SVM-RFE by using bootstrapping and leave-one-out cross-validation. We also expanded our study to predict genes involved in water susceptibility. CONCLUSIONS: We analyzed the top 10 genes predicted to be involved in water tolerance. Seven of them are connected to known biological processes in drought resistance. We also analyzed the top 100 genes in terms of their biological functions. Our study shows that the SVM-RFE method is a highly promising method in analyzing plant microarray data for studying genotype-phenotype relationships. The software is freely available with source code at http://ccst.jlu.edu.cn/JCSB/RFET/.
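
The elimination loop at the heart of recursive feature elimination can be sketched compactly. Ridge regression stands in for the linear SVM (in true SVM-RFE the SVM's weight vector plays this role) so the example stays dependency-light, and the data are synthetic; this is a sketch of the RFE idea, not the bootstrapped variant the record develops.

```python
import numpy as np

def ridge_weights(X, y, lam=1e-3):
    # Closed-form ridge fit; a linear SVM's weight vector plays this
    # role in true SVM-RFE, ridge is used here to stay dependency-free.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def rfe(X, y, keep=2):
    # Recursive feature elimination: repeatedly refit, then drop the
    # feature whose weight has the smallest magnitude, until `keep` remain.
    active = list(range(X.shape[1]))
    while len(active) > keep:
        w = ridge_weights(X[:, active], y)
        active.pop(int(np.argmin(np.abs(w))))
    return active

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 5 candidate "genes"
y = 3.0 * X[:, 1] - 2.0 * X[:, 4] + 0.1 * rng.normal(size=200)
print(sorted(rfe(X, y, keep=2)))  # recovers the informative columns [1, 4]
```

Refitting after every elimination is what distinguishes RFE from one-shot filtering: a feature's weight is re-evaluated in the context of the survivors.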

  1. Comparative assessment of predictions in ungauged basins – Part 2: Flood and low flow studies

    Directory of Open Access Journals (Sweden)

    J. L. Salinas

    2013-07-01

    Full Text Available The objective of this paper is to assess the performance of methods that predict low flows and flood runoff in ungauged catchments. The aim is to learn from the similarities and differences between catchments in different places, and to interpret the differences in performance in terms of the underlying climate-landscape controls. The assessment is performed at two levels. The Level 1 assessment is a meta-analysis of 14 low flow prediction studies reported in the literature involving 3112 catchments, and 20 flood prediction studies involving 3023 catchments. The Level 2 assessment consists of a more focused and detailed analysis of individual basins from selected studies from Level 1 in terms of how the leave-one-out cross-validation performance depends on climate and catchment characteristics as well as on the regionalisation method. The results indicate that both flood and low flow predictions in ungauged catchments tend to be less accurate in arid than in humid climates and more accurate in large than in small catchments. There is also a tendency towards a somewhat lower performance of regressions than other methods in those studies that apply different methods in the same region, while geostatistical methods tend to perform better than other methods. Of the various flood regionalisation approaches, index methods show significantly lower performance in arid catchments than regression methods or geostatistical methods. For low flow regionalisation, regional regressions are generally better than global regressions.

  2. Prediction of pharmacological and xenobiotic responses to drugs based on time course gene expression profiles.

    Directory of Open Access Journals (Sweden)

    Tao Huang

    Full Text Available More and more people are concerned by the risk of unexpected side effects observed in the later steps of the development of new drugs, either in late clinical development or after marketing approval. In order to reduce the risk of side effects, it is important to look out for possible xenobiotic responses at an early stage. We attempt such a prediction by assuming that similarities in microarray profiles indicate shared mechanisms of action and/or toxicological responses among the chemicals being compared. A large time-course microarray database derived from livers of compound-treated rats, covering thirty-four distinct pharmacological and toxicological responses, was studied. The mRMR (Minimum-Redundancy-Maximum-Relevance) method and IFS (Incremental Feature Selection) were used to select a compact feature set (141 features) to reduce feature dimension and improve prediction performance. With these 141 features, the leave-one-out cross-validation prediction accuracy for first-order response using NNA (Nearest Neighbor Algorithm) was 63.9%. Our method can be used to predict pharmacological and xenobiotic responses of new compounds and accelerate drug development.
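
The IFS step this record pairs with mRMR can be sketched as follows: grow the feature set along a precomputed ranking and keep the prefix with the best leave-one-out accuracy. A 1-nearest-neighbour classifier stands in for the NNA, the ranking is assumed given (mRMR itself is not implemented here), and the data are toy values.

```python
def one_nn_predict(train_x, train_y, query):
    # 1-nearest-neighbour vote: the record's NNA in its simplest form.
    dists = [(sum((a - b) ** 2 for a, b in zip(x, query)), y)
             for x, y in zip(train_x, train_y)]
    return min(dists)[1]

def loocv_accuracy(xs, ys):
    hits = sum(one_nn_predict(xs[:i] + xs[i+1:], ys[:i] + ys[i+1:], xs[i]) == ys[i]
               for i in range(len(xs)))
    return hits / len(xs)

def incremental_feature_selection(xs, ys, ranked_features):
    # IFS: evaluate growing prefixes of the ranking and keep the prefix
    # with the best leave-one-out accuracy.
    best_subset, best_acc = [], -1.0
    for n in range(1, len(ranked_features) + 1):
        subset = ranked_features[:n]
        proj = [[row[j] for j in subset] for row in xs]
        acc = loocv_accuracy(proj, ys)
        if acc > best_acc:
            best_subset, best_acc = subset, acc
    return best_subset, best_acc

# Feature 0 separates the classes; features 1 and 2 are noise.
xs = [[0.0, 5.0, -2.0], [0.1, -4.0, 3.0], [0.2, 2.0, 8.0],
      [1.0, -3.0, 1.0], [1.1, 6.0, -7.0], [1.2, 0.5, 4.0]]
ys = [0, 0, 0, 1, 1, 1]
subset, acc = incremental_feature_selection(xs, ys, ranked_features=[0, 1, 2])
print(subset, acc)  # the informative prefix [0] already scores 1.0
```

Driving the subset size by cross-validated accuracy, rather than fixing it in advance, is how the record arrives at its compact 141-feature set.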

  3. Predicting the risk of toxic blooms of golden alga from cell abundance and environmental covariates

    Science.gov (United States)

    Patino, Reynaldo; VanLandeghem, Matthew M.; Denny, Shawn

    2016-01-01

    Golden alga (Prymnesium parvum) is a toxic haptophyte that has caused considerable ecological damage to marine and inland aquatic ecosystems worldwide. Studies focused primarily on laboratory cultures have indicated that toxicity is poorly correlated with the abundance of golden alga cells. This relationship, however, has not been rigorously evaluated in the field where environmental conditions are much different. The ability to predict toxicity using readily measured environmental variables and golden alga abundance would allow managers rapid assessments of ichthyotoxicity potential without laboratory bioassay confirmation, which requires additional resources to accomplish. To assess the potential utility of these relationships, several a priori models relating lethal levels of golden alga ichthyotoxicity to golden alga abundance and environmental covariates were constructed. Model parameters were estimated using archived data from four river basins in Texas and New Mexico (Colorado, Brazos, Red, Pecos). Model predictive ability was quantified using cross-validation, sensitivity, and specificity, and the relative ranking of environmental covariate models was determined by Akaike Information Criterion values and Akaike weights. Overall, abundance was a generally good predictor of ichthyotoxicity, as leave-one-out cross-validation accuracy of abundance-only models ranged from ∼80% to ∼90%. Environmental covariates improved predictions, especially the ability to predict lethally toxic events (i.e., increased sensitivity), and top-ranked environmental covariate models differed among the four basins. These associations may be useful for monitoring as well as understanding the abiotic factors that influence toxicity during blooms.
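
The AIC ranking and Akaike weights used above to compare covariate models can be sketched in a few lines; the model names and log-likelihoods below are invented purely for illustration, not taken from the study.

```python
import math

def aic(log_likelihood, n_params):
    # AIC = 2k - 2 ln L; smaller is better.
    return 2 * n_params - 2 * log_likelihood

def akaike_weights(aic_values):
    # Akaike weights: relative likelihood of each candidate model,
    # normalised so the weights sum to 1.
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel)
    return [r / total for r in rel]

# Toy comparison: an abundance-only model vs. two covariate models
# (log-likelihoods and covariate names are made up for illustration).
models = {"abundance": aic(-52.0, 2),
          "abundance+salinity": aic(-48.5, 3),
          "abundance+nutrients": aic(-51.5, 3)}
weights = akaike_weights(list(models.values()))
print({m: round(w, 3) for m, w in zip(models, weights)})
```

The weights quantify how much of the evidence each candidate carries, which is what lets the study say the top-ranked covariate model "differed among the four basins" rather than just reporting a single winner.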

  4. Inpatients’ medical prescription errors

    Directory of Open Access Journals (Sweden)

    Aline Melo Santos Silva

    2009-09-01

    Full Text Available Objective: To identify and quantify the most frequent errors in inpatients’ medical prescriptions. Methods: A survey of prescription errors was performed on inpatients’ medical prescriptions from July 2008 to May 2009, for eight hours a day. Results: A total of 3,931 prescriptions was analyzed and 362 (9.2%) prescription errors were found, involving the healthcare team as a whole. Among the 16 types of errors detected, the most frequent were lack of information, such as dose (66 cases, 18.2%) and administration route (26 cases, 7.2%); 45 cases (12.4%) of wrong transcriptions to the information system; 30 cases (8.3%) of duplicate drugs; doses higher than recommended (24 events, 6.6%); and 29 cases (8.0%) of prescriptions indicating an allergy without specifying it. Conclusion: Medication errors are a reality at hospitals. All healthcare professionals are responsible for the identification and prevention of these errors, each one in his/her own area. The pharmacist is an essential professional in the drug therapy process. All hospital organizations need a pharmacist team responsible for medical prescription analyses before preparation, dispensation and administration of drugs to inpatients. This study showed that the pharmacist improves the inpatient’s safety and the success of prescribed therapy.

  5. SVM-based method for protein structural class prediction using secondary structural content and structural information of amino acids.

    Science.gov (United States)

    Mohammad, Tabrez Anwar Shamim; Nagarajaram, Hampapathalu Adimurthy

    2011-08-01

    The knowledge collated from the known protein structures has revealed that proteins are usually folded into the four structural classes: all-α, all-β, α/β and α + β. A number of methods have been proposed to predict a protein's structural class from its primary structure; however, it has been observed that these methods fail or perform poorly in the case of distantly related sequences. In this paper, we propose a new method for protein structural class prediction using a low-homology (twilight-zone) protein sequence dataset. Since protein structural class prediction is a typical classification problem, we have developed a Support Vector Machine (SVM)-based method for protein structural class prediction that uses features derived from the predicted secondary structure and predicted burial information of amino acid residues. The examination of individual features as well as feature combinations revealed that the combination of secondary structural content, secondary structural and solvent accessibility state frequencies of amino acids gave rise to the best leave-one-out cross-validation accuracy of ~81%, which is comparable to the best accuracy reported in the literature so far.

  6. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) a musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions for the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domain during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  7. Smoothing error pitfalls

    Science.gov (United States)

    von Clarmann, T.

    2014-09-01

    The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by a diagnostic quantity called smoothing error. In this paper it is shown that, regardless of the usefulness of the smoothing error as a diagnostic tool in its own right, the concept of the smoothing error as a component of the retrieval error budget is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state; in other words, to characterize the full loss of information with respect to the true atmosphere, the effect of the representation of the atmospheric state on a finite grid also needs to be considered. The idea of a sufficiently fine sampling of this reference atmospheric state is problematic because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help, because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully discuss temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the covariance matrix involved has been evaluated on the comparison grid rather than resulting from interpolation and if the averaging kernel matrices have been evaluated on a grid fine enough to capture all atmospheric variations that the instruments are sensitive to. This is, under the assumptions stated, because the undefined component of the smoothing error, which is the
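
    In the standard notation of optimal-estimation retrievals (an assumption here; the abstract does not reproduce the paper's own symbols), the smoothing error covariance under discussion is usually written as

    ```latex
    % A: averaging kernel matrix; I: identity; S_a: a priori covariance of the true state.
    \mathbf{S}_{\mathrm{s}} = (\mathbf{A} - \mathbf{I})\,\mathbf{S}_{a}\,(\mathbf{A} - \mathbf{I})^{\mathrm{T}}
    ```

    The paper's point is that this quantity characterizes the deviation of the retrieval from the state sampled on a finite grid, not from the true continuous atmosphere.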

  8. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve, and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways in which we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  9. Error Correction in Classroom

    Institute of Scientific and Technical Information of China (English)

    Dr. Grace Zhang

    2000-01-01

    Error correction is an important issue in foreign language acquisition. This paper investigates how students feel about the way in which error correction should take place in a Chinese-as-a-foreign-language classroom, based on large-scale empirical data. The study shows that there is a general consensus that error correction is necessary. In terms of correction strategy, the students preferred a combination of direct and indirect corrections, or direct-only correction. The former choice indicates that students would be happy to take either, so long as the correction gets done. Most students didn't mind peer correction provided it is conducted in a constructive way. More than half of the students would feel uncomfortable if the same error they make in class is corrected consecutively more than three times. Taking these findings into consideration, we may want to encourage peer correction, use a combination of correction strategies (direct only if suitable) and do it in a non-threatening and sensitive way. It is hoped that this study will contribute to the effectiveness of error correction in the Chinese language classroom, and it may also have wider implications for other languages.

  10. Phenome-driven disease genetics prediction toward drug discovery.

    Science.gov (United States)

    Chen, Yang; Li, Li; Zhang, Guo-Qiang; Xu, Rong

    2015-06-15

    Discerning genetic contributions to diseases not only enhances our understanding of disease mechanisms, but also leads to translational opportunities for drug discovery. Recent computational approaches incorporate disease phenotypic similarities to improve the prediction power of disease gene discovery. However, most current studies used only one data source of human disease phenotype. We present an innovative and generic strategy for combining multiple different data sources of human disease phenotype and predicting disease-associated genes from integrated phenotypic and genomic data. To demonstrate our approach, we explored a new phenotype database from biomedical ontologies and constructed the Disease Manifestation Network (DMN). We combined DMN with mimMiner, a widely used phenotype database in disease gene prediction studies. Our approach achieved significantly improved performance over a baseline method, which used only one phenotype data source. In the leave-one-out cross-validation and de novo gene prediction analyses, our approach achieved areas under the curve of 90.7% and 90.3%, which are significantly higher than 84.2% (P disease as an example and ranked the candidate drugs based on the rank of drug targets. Our gene prediction approach prioritized druggable genes that are likely to be associated with Crohn's disease pathogenesis, and our ranking of candidate drugs successfully prioritized the Food and Drug Administration-approved drugs for Crohn's disease. We also found literature evidence to support a number of drugs among the top 200 candidates. In summary, we demonstrated that a novel strategy combining unique disease phenotype data with systems approaches can lead to rapid drug discovery. nlp. edu/public/data/DMN © The Author 2015. Published by Oxford University Press.

  11. QSPR study of supercooled liquid vapour pressures of PBDEs by using molecular distance-edge vector index

    Directory of Open Access Journals (Sweden)

    Jiao Long

    2015-01-01

    Full Text Available The quantitative structure-property relationship (QSPR) for the supercooled liquid vapour pressures (PL) of PBDEs was investigated. The molecular distance-edge vector (MDEV) index was used as the structural descriptor. The quantitative relationship between the MDEV index and lgPL was modeled using multivariate linear regression (MLR) and an artificial neural network (ANN), respectively. Leave-one-out cross validation and k-fold cross validation were carried out to assess the prediction ability of the developed models. For the MLR method, the prediction root mean square relative error (RMSRE) of leave-one-out cross validation and k-fold cross validation is 9.95 and 9.05, respectively. For the ANN method, the prediction RMSRE of leave-one-out cross validation and k-fold cross validation is 8.75 and 8.31, respectively. It is demonstrated that the established models are practicable for predicting the lgPL of PBDEs. The MDEV index is quantitatively related to the lgPL of PBDEs. MLR and L-ANN are practicable for modeling this relationship. Compared with MLR, ANN shows slightly higher prediction accuracy. Subsequently, an MLR model, whose regression equation is lgPL = 0.2868 M11 - 0.8449 M12 - 0.0605, and an ANN model, which is a two-input linear network, were developed. The two models can be used to predict the lgPL of each PBDE.
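
    The leave-one-out RMSRE protocol used above is easy to sketch numerically. The two descriptors and the response below are synthetic stand-ins (not the paper's PBDE data); the two-descriptor linear form merely mirrors the shape of the reported MLR equation:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-descriptor matrix and a synthetic lgPL-like response
    # (kept strictly negative so relative errors are well defined).
    X = rng.uniform(1.0, 5.0, size=(20, 2))
    y = 0.3 * X[:, 0] - 0.8 * X[:, 1] - 5.0 + rng.normal(0, 0.05, 20)

    def loo_rmsre(X, y):
        """Leave-one-out root mean square relative error for ordinary least squares."""
        n = len(y)
        A = np.column_stack([X, np.ones(n)])      # add intercept column
        rel_sq = []
        for i in range(n):
            mask = np.arange(n) != i              # hold out sample i
            coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
            pred = A[i] @ coef
            rel_sq.append(((pred - y[i]) / y[i]) ** 2)
        return np.sqrt(np.mean(rel_sq))

    print(f"LOO RMSRE: {loo_rmsre(X, y):.4f}")
    ```

    The same loop works for any learner; only the fitting step inside changes.
    
    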

  12. Generalization error bounds for stationary autoregressive models

    CERN Document Server

    McDonald, Daniel J; Schervish, Mark

    2011-01-01

    We derive generalization error bounds for stationary univariate autoregressive (AR) models. We show that the stationarity assumption alone lets us treat the estimation of AR models as a regularized kernel regression without the need to further regularize the model arbitrarily. We thereby bound the Rademacher complexity of AR models and apply existing Rademacher complexity results to characterize the predictive risk of AR models. We demonstrate our methods by predicting interest rate movements.

  13. A probabilistic framework to predict protein function from interaction data integrated with semantic knowledge

    Directory of Open Access Journals (Sweden)

    Ramanathan Murali

    2008-09-01

    Full Text Available Abstract Background The functional characterization of newly discovered proteins has been a challenge in the post-genomic era. Protein-protein interactions provide insights into functional analysis because the function of unknown proteins can be postulated on the basis of their interactions with known proteins. Protein-protein interaction data sets have been enriched by high-throughput experimental methods. However, functional analysis using interaction data is limited in accuracy by the presence of experimentally generated false positives and of interactions that lack functional linkage. Results Protein-protein interaction data can be integrated with the functional knowledge existing in the Gene Ontology (GO) database. We apply similarity measures to assess the functional similarity between interacting proteins. We present a probabilistic framework for predicting the functions of unknown proteins based on this functional similarity, and we use leave-one-out cross validation to compare performance. The experimental results demonstrate that our algorithm performs better than other competing methods in terms of prediction accuracy. In particular, it handles the high false positive rates of current interaction data well. Conclusion Experimentally determined protein-protein interactions are too error-prone on their own to uncover the functional associations among proteins. The performance of function prediction for uncharacterized proteins can be enhanced by the integration of the multiple data sources available.

  14. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  15. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience at the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  16. Conically scanning lidar error in complex terrain

    Directory of Open Access Journals (Sweden)

    Ferhat Bingöl

    2009-05-01

    Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
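
    The homogeneity assumption behind the conical scan can be illustrated with a toy velocity-azimuth least-squares fit. The cone angle, wind speed, and the hill-induced vertical-velocity pattern below are invented for illustration; the point is only that flow inhomogeneity over the scan circle aliases into the retrieved horizontal speed:

    ```python
    import numpy as np

    # A conically scanning lidar measures radial speeds
    #   v_r = u*sin(az)*sin(phi) + v*cos(az)*sin(phi) + w*cos(phi)
    # and, assuming homogeneous flow, solves for (u, v, w) by least squares.
    phi = np.deg2rad(30.0)                        # half-cone angle (illustrative)
    az = np.deg2rad(np.arange(0, 360, 3))         # scan azimuths

    def retrieve(vr):
        """Fit (u, v, w) to radial speeds, return horizontal wind speed."""
        A = np.column_stack([np.sin(az) * np.sin(phi),
                             np.cos(az) * np.sin(phi),
                             np.full_like(az, np.cos(phi))])
        (u, v, w), *_ = np.linalg.lstsq(A, vr, rcond=None)
        return np.hypot(u, v)

    u0 = 10.0                                     # true horizontal speed, m/s
    # Homogeneous flow: the fit recovers the speed exactly.
    vr_hom = u0 * np.sin(az) * np.sin(phi)
    # Inhomogeneous flow: a vertical velocity varying over the scan circle
    # (e.g. up/downslope flow over a hill) leaks into the horizontal estimate.
    vr_inhom = vr_hom + 2.0 * np.cos(az) * np.cos(phi)
    print(f"homogeneous:   {retrieve(vr_hom):.2f} m/s")
    print(f"inhomogeneous: {retrieve(vr_inhom):.2f} m/s")
    ```

    The few-percent overestimate in the second case is of the same order as the errors the paper reports for complex terrain.
    
    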

  17. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  18. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations that it commits a category error. Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  19. Logical error rate in the Pauli twirling approximation.

    Science.gov (United States)

    Katabarwa, Amara; Geller, Michael R

    2015-09-30

    Characterizing the performance of error correction protocols is necessary for understanding the operation of potential quantum computers, but this requires physical error models that can be simulated efficiently with classical computers. The Gottesman-Knill theorem guarantees a class of such error models. Of these, one of the simplest is the Pauli twirling approximation (PTA), which is obtained by twirling an arbitrary completely positive error channel over the Pauli basis, resulting in a Pauli channel. In this work, we test the PTA's accuracy at predicting the logical error rate by simulating the 5-qubit code using a 9-qubit circuit with realistic decoherence and unitary gate errors. We find evidence for good agreement with exact simulation, with the PTA overestimating the logical error rate by a factor of 2 to 3. Our results suggest that the PTA is a reliable predictor of the logical error rate, at least for low-distance codes.
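
    The Pauli twirl itself can be sketched for a single qubit: decompose each Kraus operator in the Pauli basis and keep only the diagonal chi-matrix entries, which become the Pauli-channel probabilities. Amplitude damping and the rate gamma below are an illustrative choice, not the paper's 9-qubit model:

    ```python
    import numpy as np

    # Single-qubit Pauli basis.
    I2 = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]])
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    paulis = [I2, X, Y, Z]

    # Kraus operators of an amplitude-damping channel (gamma is illustrative).
    gamma = 0.1
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)

    def pauli_coeffs(K):
        """a_i = Tr(P_i^dag K)/2, so that K = sum_i a_i P_i."""
        return np.array([np.trace(P.conj().T @ K) / 2 for P in paulis])

    # PTA: the twirled channel applies P_i with probability p_i = sum_k |a_i^(k)|^2
    # (the diagonal of the chi matrix in the Pauli basis).
    p = sum(np.abs(pauli_coeffs(K)) ** 2 for K in (K0, K1))
    for name, pi in zip("IXYZ", p.real):
        print(f"p_{name} = {pi:.4f}")
    ```

    The probabilities sum to one because the Kraus operators satisfy the completeness relation; here p_X = p_Y = gamma/4, the known twirl of amplitude damping.
    
    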

  20. Orwell's Instructive Errors

    Science.gov (United States)

    Julian, Liam

    2009-01-01

    In this article, the author talks about George Orwell, his instructive errors, and the manner in which Orwell pierced worthless theory, faced facts and defended decency (with fluctuating success), and largely ignored the tradition of accumulated wisdom that has rendered him a timeless teacher--one whose inadvertent lessons, while infrequently…

  1. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  2. Using simple artificial intelligence methods for predicting amyloidogenesis in antibodies

    Science.gov (United States)

    2010-01-01

    Background All polypeptide backbones have the potential to form amyloid fibrils, which are associated with a number of degenerative disorders. However, the likelihood that amyloidosis would actually occur under physiological conditions depends largely on the amino acid composition of a protein. We explore using a naive Bayesian classifier and a weighted decision tree for predicting the amyloidogenicity of immunoglobulin sequences. Results The average accuracy based on leave-one-out (LOO) cross validation of a Bayesian classifier generated from 143 amyloidogenic sequences is 60.84%. This is consistent with the average accuracy of 61.15% for a holdout test set comprising 103 amyloidogenic and 28 non-amyloidogenic sequences. The LOO cross validation accuracy increases to 81.08% when the training set is augmented by the holdout test set. In comparison, the average classification accuracy for the holdout test set obtained using a decision tree is 78.64%. Non-amyloidogenic sequences are predicted with average LOO cross validation accuracies between 74.05% and 77.24% using the Bayesian classifier, depending on the training set size. The accuracy for the holdout test set was 89%. For the decision tree, the non-amyloidogenic prediction accuracy is 75.00%. Conclusions This exploratory study indicates that both classification methods may be promising in providing straightforward predictions on the amyloidogenicity of a sequence. Nevertheless, the number of available sequences that satisfy the premises of this study are limited, and are consequently smaller than the ideal training set size. Increasing the size of the training set clearly increases the accuracy, and the expansion of the training set to include not only more derivatives, but more alignments, would make the method more sound. The accuracy of the classifiers may also be improved when additional factors, such as structural and physico-chemical data, are considered.
The development of this type of classifier has significant
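
    The LOO evaluation of a Bayesian classifier can be sketched with a hand-rolled Gaussian naive Bayes on synthetic two-class data. The features and class separation below are invented stand-ins, not quantities derived from immunoglobulin sequences:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two synthetic classes ("amyloidogenic" vs "non-amyloidogenic") with
    # different feature means -- toy data for illustrating the protocol.
    X0 = rng.normal(0.0, 1.0, size=(30, 3))
    X1 = rng.normal(1.5, 1.0, size=(30, 3))
    X = np.vstack([X0, X1])
    y = np.array([0] * 30 + [1] * 30)

    def gnb_predict(Xtr, ytr, x):
        """Gaussian naive Bayes: pick the class maximizing log-likelihood + log-prior."""
        scores = []
        for c in (0, 1):
            Xc = Xtr[ytr == c]
            mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
            loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
            scores.append(loglik + np.log(len(Xc) / len(Xtr)))
        return int(np.argmax(scores))

    # Leave-one-out cross validation: train on n-1 samples, test the held-out one.
    n = len(y)
    hits = 0
    for i in range(n):
        mask = np.arange(n) != i
        hits += gnb_predict(X[mask], y[mask], X[i]) == y[i]
    print(f"LOO accuracy: {hits / n:.2%}")
    ```

    With only 60 samples, LOO uses the data efficiently: every sample serves as a test case exactly once.
    
    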

  3. Forecast Combination under Heavy-Tailed Errors

    Directory of Open Access Journals (Sweden)

    Gang Cheng

    2015-11-01

    Full Text Available Forecast combination has been proven to be a very important technique to obtain accurate predictions for various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as simple average, least squares regression or those based on the variance-covariance of the forecasts, may perform very poorly because outliers tend to occur, giving these methods unstable weights and leading to non-robust forecasts. To address this problem, in this paper, we propose two nonparametric forecast combination methods. One is specially proposed for the situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student’s t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behaviors of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds of both methods are developed. They show that the resulting combined forecasts yield near optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than the previous combination methods in the presence of forecast outliers.
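
    The outlier sensitivity of the simple average is easy to demonstrate with heavy-tailed (Cauchy, i.e. Student-t with one degree of freedom) forecast errors. Everything below is a synthetic illustration using the median as a generic robust combiner, not the nonparametric methods the paper proposes:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Five candidate forecasts of a known constant target, each contaminated
    # with heavy-tailed errors -- synthetic data for illustration only.
    T, K = 500, 5
    truth = 10.0
    forecasts = truth + rng.standard_t(df=1, size=(T, K))   # df=1: Cauchy tails

    mean_comb = forecasts.mean(axis=1)          # simple average: dragged by outliers
    median_comb = np.median(forecasts, axis=1)  # median: robust to outliers

    def mae(f):
        return float(np.mean(np.abs(f - truth)))

    print(f"MAE of simple average: {mae(mean_comb):.3f}")
    print(f"MAE of median:         {mae(median_comb):.3f}")
    ```

    With Cauchy errors the average of the candidates inherits the heavy tails (its error does not shrink with K), while the cross-sectional median stays well behaved.
    
    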

  4. Relativity Study on Topological Index of Methylalkane Structures and Chromatographic Retention Index

    Institute of Scientific and Technical Information of China (English)

    Xiang Zheng; Liang Yizeng; Hu Qiannan

    2006-01-01

    The study of quantitative structure-retention index relationships (QSRR) is an important subject in the chromatographic field, and has been used to obtain simple models to explain and predict the chromatographic behavior of various classes of compounds. One hundred twenty-seven topological descriptors of 207 methylalkane structures are calculated and investigated via the quantitative structure-property relationship (QSPR) model in the present paper. The GAPLS method, a variable selection method combining genetic algorithms (GA), back stepwise selection, and partial least squares (PLS), is introduced for variable selection in modeling the quantitative structure-gas chromatographic (GC) retention index (RI) relationship. Seven topological descriptors are finally selected from the 127 topological descriptors by the GAPLS method to build a QSRR model with high regression quality: a squared correlation coefficient (R2) of 0.99998 and a standard deviation (S) of 2.88. The errors of the model are quite close to the experimental errors. The validity of the model is then checked by the leave-one-out cross-validation technique. The results of leave-one-out cross-validation indicate that the built model is reliable and stable with high prediction quality: a squared correlation coefficient of leave-one-out (R2cv) of 0.99997 and a standard deviation of leave-one-out predictions (Scv) of 2.95. A successful interpretation of the complex relationship between the GC RIs of methylalkanes and chemical structure is achieved using the QSPR method. The seven variables in the model are also rationally interpreted, which indicates that the methylalkanes' RI is precisely represented by topological descriptors.

  5. Patient error: a preliminary taxonomy.

    NARCIS (Netherlands)

    Buetow, S.; Kiata, L.; Liew, T.; Kenealy, T.; Dovey, S.; Elwyn, G.

    2009-01-01

    PURPOSE: Current research on errors in health care focuses almost exclusively on system and clinician error. It tends to exclude how patients may create errors that influence their health. We aimed to identify the types of errors that patients can contribute and help manage, especially in primary care.

  6. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
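
    The idea of interval-based error propagation can be made concrete with a toy interval class. This sketch ignores the outward rounding that a real package such as INTLAB must perform, and the formula and measurement ranges below are invented for illustration:

    ```python
    # Minimal interval arithmetic: every operation returns an interval that is
    # guaranteed (up to floating-point rounding) to contain the true result.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)

        def __sub__(self, o):
            return Interval(self.lo - o.hi, self.hi - o.lo)

        def __mul__(self, o):
            p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
            return Interval(min(p), max(p))

        def width(self):
            return self.hi - self.lo

        def __repr__(self):
            return f"[{self.lo:g}, {self.hi:g}]"

    # Propagate roughly +/-1% measurement uncertainty through z = x*y + x - y.
    x = Interval(9.9, 10.1)
    y = Interval(4.95, 5.05)
    z = x * y + x - y
    print("z =", z, " width =", round(z.width(), 4))
    ```

    The final width bounds the worst-case error of the formula over all inputs in the given ranges, with no derivative bookkeeping required.
    
    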

  7. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  8. Rapid mapping of volumetric machine errors using distance measurements

    Energy Technology Data Exchange (ETDEWEB)

    Krulewich, D.A.

    1998-04-01

    This paper describes a relatively inexpensive, fast, and easy-to-execute approach to mapping the volumetric errors of a machine tool, coordinate measuring machine, or robot. An error map is used to characterize a machine or to improve its accuracy by compensating for systematic errors. The method consists of three steps: (1) modeling the relationship between the volumetric error and the current state of the machine; (2) acquiring error data based on distance measurements throughout the work volume; and (3) fitting the error model using the nonlinear equation for the distance. The error model is formulated from the kinematic relationship among the six degrees of freedom of error on each moving axis. Each parametric error is expressed as a function of position, and the errors are combined to predict the error between the functional point and the workpiece, also as a function of position. A series of distances between several fixed base locations and various functional points in the work volume is measured using a Laser Ball Bar (LBB). Each measured distance is a nonlinear function of the commanded location of the machine, the machine error, and the base locations. Using the error model, the nonlinear equation is solved, producing a fit for the error model. Note also that, given approximate distances between each pair of base locations, the exact base locations in the machine coordinate system are determined during the nonlinear fitting procedure. Furthermore, with the use of more than three base locations, bias error in the measuring instrument can be removed. The volumetric errors of a three-axis commercial machining center have been mapped using this procedure. In this study, only errors associated with the nominal position of the machine were considered. Other errors, such as thermally induced and load-induced errors, were not considered, although the mathematical model has the ability to account for them.
Due to the proprietary nature of the projects we are
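
    The distance-based fitting idea can be sketched in a simplified setting. The sketch below reduces the paper's full six-degree-of-freedom kinematic model to per-axis scale errors in 2D with a single base location at the origin, so the "nonlinear" distance equation becomes linear in transformed parameters; every number is hypothetical:

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Each axis has an unknown scale error (a, b): a commanded point (x, y)
    # actually lands at (x*(1+a), y*(1+b)).  Distances to a base at the origin
    # are measured with small noise, then used to recover the scale errors.
    a_true, b_true = 2e-4, -1e-4
    pts = rng.uniform(100.0, 500.0, size=(40, 2))     # commanded positions (mm)
    actual = pts * (1.0 + np.array([a_true, b_true]))
    d_meas = np.linalg.norm(actual, axis=1) + rng.normal(0.0, 1e-4, 40)

    # d^2 = x^2 (1+a)^2 + y^2 (1+b)^2 is linear in u = (1+a)^2 and v = (1+b)^2,
    # so this reduced model can be fit by ordinary least squares.
    A = pts ** 2
    (u, v), *_ = np.linalg.lstsq(A, d_meas ** 2, rcond=None)
    a_est, b_est = np.sqrt(u) - 1.0, np.sqrt(v) - 1.0
    print(f"a: true {a_true:+.1e}, estimated {a_est:+.1e}")
    print(f"b: true {b_true:+.1e}, estimated {b_est:+.1e}")
    ```

    The full method fits angular and straightness errors as well, and solves a genuinely nonlinear system; the principle of recovering parametric errors from redundant distance measurements is the same.
    
    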

  9. Prediction of the softening point of bitumen in production using SVR

    Institute of Scientific and Technical Information of China (English)

    蔡从中; 王桂莲; 裴军芳; 朱星键

    2011-01-01

    According to an experimental dataset on the softening points of 30 bitumen samples under different resistances and temperatures, a support vector regression (SVR) approach, combined with particle swarm optimization (PSO) for parameter optimization, is proposed to conduct leave-one-out cross validation (LOOCV) for modeling and predicting the softening point of bitumen, and its prediction results are compared with those of multivariate linear regression (MLR). The maximum error of 2.1 °C predicted by SVR is much smaller than the 7.9 °C calculated by the MLR model. The statistical results reveal that the root mean square error (RMSE = 0.75 °C), mean absolute error (MAE = 0.32 °C) and mean absolute percentage error (MAPE = 0.28%) achieved by SVR-LOOCV are all smaller than those (RMSE = 3.3 °C, MAE = 2.6 °C and MAPE = 2.34%) calculated via the MLR model. This study suggests that the softening point of bitumen can be forecast in a timely fashion by SVR, providing accurate guidance for the production of high-quality bitumen.
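
    The LOOCV protocol itself is straightforward to sketch. Since the exact SVR/PSO setup is not given in the abstract, the code below uses RBF kernel ridge regression as a stand-in learner on invented "softening point" data; only the leave-one-out loop mirrors the paper's evaluation:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic stand-in: a smooth "softening point" as a function of two
    # scaled inputs (resistance, temperature); 30 samples, all values invented.
    X = rng.uniform(0.0, 1.0, size=(30, 2))
    y = 80 + 15 * X[:, 0] - 10 * X[:, 1] ** 2 + rng.normal(0, 0.3, 30)

    def rbf(A, B, gamma=2.0):
        """RBF kernel matrix between row-sets A and B."""
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    def loo_predictions(X, y, lam=1e-3):
        """LOOCV: refit on n-1 samples, predict the held-out one."""
        n = len(y)
        preds = np.empty(n)
        for i in range(n):
            m = np.arange(n) != i
            ym = y[m].mean()                         # center the response
            K = rbf(X[m], X[m])
            alpha = np.linalg.solve(K + lam * np.eye(n - 1), y[m] - ym)
            preds[i] = (rbf(X[i:i + 1], X[m]) @ alpha)[0] + ym
        return preds

    p = loo_predictions(X, y)
    rmse = np.sqrt(np.mean((p - y) ** 2))
    mae = np.mean(np.abs(p - y))
    print(f"LOOCV RMSE = {rmse:.2f}, MAE = {mae:.2f}")
    ```

    Swapping in an SVR (and a PSO search over its hyperparameters) changes only the fitting step inside the loop.
    
    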

  10. Error bars in experimental biology.

    Science.gov (United States)

    Cumming, Geoff; Fidler, Fiona; Vaux, David L

    2007-04-09

    Error bars commonly appear in figures in publications, but experimental biologists are often unsure how they should be used and interpreted. In this article we illustrate some basic features of error bars and explain how they can help communicate data and assist correct interpretation. Error bars may show confidence intervals, standard errors, standard deviations, or other quantities. Different types of error bars give quite different information, and so figure legends must make clear what error bars represent. We suggest eight simple rules to assist with effective use and interpretation of error bars.
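
    The difference between the quantities an error bar may show is worth seeing numerically. The sample below is synthetic; the point is how far apart SD, SEM, and the 95% CI half-width are for the same data, which is why figure legends must say which is plotted:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # One synthetic sample of n = 25 measurements.
    x = rng.normal(100.0, 15.0, size=25)
    n = len(x)
    sd = x.std(ddof=1)                 # descriptive: spread of the data
    sem = sd / np.sqrt(n)              # inferential: precision of the mean
    ci95 = 1.96 * sem                  # ~95% CI half-width (normal approximation)
    print(f"mean={x.mean():.1f}  SD={sd:.1f}  SEM={sem:.1f}  95% CI=±{ci95:.1f}")
    ```

    SEM bars are a factor of sqrt(n) tighter than SD bars, and 95% CI bars are roughly twice the SEM, so the three convey very different impressions of the same experiment.
    
    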

  11. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  12. Evaluating operating system vulnerability to memory errors.

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Kurt Brian; Bridges, Patrick G. (University of New Mexico); Pedretti, Kevin Thomas Tauke; Mueller, Frank (North Carolina State University); Fiala, David (North Carolina State University); Brightwell, Ronald Brian

    2012-05-01

    Reliability is of great concern to the scalability of extreme-scale systems. Of particular concern are soft errors in main memory, which are a leading cause of failures on current systems and are predicted to be the leading cause on future systems. While great effort has gone into designing algorithms and applications that can continue to make progress in the presence of these errors without restarting, the most critical software running on a node, the operating system (OS), is currently left relatively unprotected. OS resiliency is of particular importance because, though this software typically represents a small footprint of a compute node's physical memory, recent studies show more memory errors in this region of memory than in the remainder of the system. In this paper, we investigate the soft error vulnerability of two operating systems used in current and future high-performance computing systems: Kitten, the lightweight kernel developed at Sandia National Laboratories, and CLE, a high-performance Linux-based operating system developed by Cray. For each of these platforms, we outline major structures and subsystems that are vulnerable to soft errors and describe methods that could be used to reconstruct damaged state. Our results show the Kitten lightweight operating system may be an easier target to harden against memory errors due to its smaller memory footprint, largely deterministic state, and simpler system structure.

  13. Human pluripotent stem cell-derived neural constructs for predicting neural toxicity.

    Science.gov (United States)

    Schwartz, Michael P; Hou, Zhonggang; Propson, Nicholas E; Zhang, Jue; Engstrom, Collin J; Santos Costa, Vitor; Jiang, Peng; Nguyen, Bao Kim; Bolin, Jennifer M; Daly, William; Wang, Yu; Stewart, Ron; Page, C David; Murphy, William L; Thomson, James A

    2015-10-01

    Human pluripotent stem cell-based in vitro models that reflect human physiology have the potential to reduce the number of drug failures in clinical trials and offer a cost-effective approach for assessing chemical safety. Here, human embryonic stem (ES) cell-derived neural progenitor cells, endothelial cells, mesenchymal stem cells, and microglia/macrophage precursors were combined on chemically defined polyethylene glycol hydrogels and cultured in serum-free medium to model cellular interactions within the developing brain. The precursors self-assembled into 3D neural constructs with diverse neuronal and glial populations, interconnected vascular networks, and ramified microglia. Replicate constructs were reproducible