WorldWideScience

Sample records for maximum relative error

  1. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Abstract Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio...... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID, tag cardinality estimation, maximum likelihood, detection error...
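
    The record above is truncated, but the core idea of ML cardinality estimation can be sketched. The following is a minimal illustration, not the paper's algorithm: it assumes each of several independent reader sessions detects every tag with a known probability p, so per-session detection counts are Binomial(N, p), and N is estimated by maximizing the joint likelihood. All names and the observation model are assumptions for illustration.

```python
import math

def log_likelihood(N, counts, p):
    """Log-likelihood of cardinality N for Binomial(N, p) session counts."""
    if N < max(counts):
        return float("-inf")
    ll = 0.0
    for n in counts:
        # log C(N, n) + n log p + (N - n) log(1 - p)
        ll += (math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
               + n * math.log(p) + (N - n) * math.log(1.0 - p))
    return ll

def ml_cardinality(counts, p, n_max=10_000):
    """Grid-search ML estimate of the tag-set cardinality N."""
    return max(range(max(counts), n_max + 1),
               key=lambda N: log_likelihood(N, counts, p))

# Three reader sessions, each tag detected with probability 0.8:
print(ml_cardinality([82, 75, 79], p=0.8))  # estimate near mean(counts)/p ~ 98
```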

  2. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Effect of error in crack length measurement on maximum load fracture toughness of Zr-2.5Nb pressure tube material

    International Nuclear Information System (INIS)

    Bind, A.K.; Sunil, Saurav; Singh, R.N.; Chakravartty, J.K.

    2016-03-01

    Recently it was found that the maximum load toughness (J_max) for Zr-2.5Nb pressure tube material was practically unaffected by error in Δa. To check the sensitivity of J_max to error in Δa measurement, J_max was calculated assuming no crack growth up to the maximum load (P_max) for as-received and hydrogen-charged Zr-2.5Nb pressure tube material. For loads up to P_max, the J values calculated assuming no crack growth (J_NC) were slightly higher than those calculated based on Δa measured using the DCPD technique (J_DCPD). In general, the error in the J calculation was found to increase exponentially with Δa. The error in the J_max calculation increased with an increase in Δa and a decrease in J_max. Based on the deformation theory of J, an analytic criterion was developed to check the insensitivity of J_max to error in Δa. A very good linear relation was found between the J_max calculated based on Δa measured using the DCPD technique and the J_max calculated assuming no crack growth. This relation will be very useful for calculating J_max without measuring crack growth during fracture tests, especially for irradiated material. (author)

  4. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: I. general description

    Energy Technology Data Exchange (ETDEWEB)

    Kaganovich, Igor D.; Massidda, Scott; Startsev, Edward A.; Davidson, Ronald C.; Vay, Jean-Luc; Friedman, Alex

    2012-06-21

    Neutralized drift compression offers an effective means for particle beam pulse compression and current amplification. In neutralized drift compression, a linear longitudinal velocity tilt (head-to-tail gradient) is applied to the non-relativistic beam pulse, so that the beam pulse compresses as it drifts in the focusing section. The beam current can increase by more than a factor of 100 in the longitudinal direction. We have performed an analytical study of how errors in the velocity tilt acquired by the beam in the induction bunching module limit the maximum longitudinal compression. It is found that the compression ratio is determined by the relative errors in the velocity tilt. That is, one-percent errors may limit the compression to a factor of one hundred. However, a part of the beam pulse where the errors are small may compress to much higher values, which are determined by the initial thermal spread of the beam pulse. It is also shown that sharp jumps in the compressed current density profile can be produced due to the overlapping of different parts of the pulse near the focal plane. Examples of errors that vary slowly or rapidly compared to the beam pulse duration are studied. For beam velocity errors given by a cubic function, the compression ratio can be described analytically. In this limit, a significant portion of the beam pulse is located in the broad wings of the pulse and is poorly compressed. The central part of the compressed pulse is determined by the thermal spread. The scaling law for the maximum compression ratio is derived. In addition to a smooth variation in the velocity tilt, fast-changing errors during the pulse may appear in the induction bunching module if the voltage pulse is formed by several pulsed elements. Different parts of the pulse then compress nearly simultaneously at the target and the compressed profile may have many peaks. The maximum compression is a function of both the thermal spread and the velocity errors. The effects of the...
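
    Read as a back-of-the-envelope relation (notation assumed here, not taken from the paper), the stated scaling says the achievable compression is the applied tilt divided by its error:

```latex
C_{\max} \;\approx\; \frac{\Delta v_{\mathrm{tilt}}}{\delta v_{\mathrm{err}}}
        \;=\; \frac{1}{\varepsilon_{\mathrm{rel}}},
\qquad \varepsilon_{\mathrm{rel}} = 10^{-2} \;\Longrightarrow\; C_{\max} \approx 100 ,
```

    with the error-free parts of the pulse instead limited by the initial thermal velocity spread.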

  5. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    International Nuclear Information System (INIS)

    Bokanowski, Olivier; Picarelli, Athena; Zidani, Hasnaa

    2015-01-01

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.
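
    Schematically (this generic form is an assumption for orientation; the paper gives the precise statement), introducing an auxiliary variable y for the running maximum of a cost g(X_s) turns the problem into a PDE in (t, x, y) with an oblique derivative condition on the boundary y = g(x):

```latex
\begin{cases}
-\partial_t v + H\bigl(x, D_x v, D_x^2 v\bigr) = 0, & t \in [0,T),\ y > g(x),\\
-\partial_y v = 0, & t \in [0,T),\ y = g(x),\\
v(T, x, y) = \varphi(y),
\end{cases}
```

    where H is the Hamilton–Jacobi–Bellman operator of the controlled diffusion.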

  6. Dynamic Programming and Error Estimates for Stochastic Control Problems with Maximum Cost

    Energy Technology Data Exchange (ETDEWEB)

    Bokanowski, Olivier, E-mail: boka@math.jussieu.fr [Laboratoire Jacques-Louis Lions, Université Paris-Diderot (Paris 7) UFR de Mathématiques - Bât. Sophie Germain (France); Picarelli, Athena, E-mail: athena.picarelli@inria.fr [Projet Commands, INRIA Saclay & ENSTA ParisTech (France); Zidani, Hasnaa, E-mail: hasnaa.zidani@ensta.fr [Unité de Mathématiques appliquées (UMA), ENSTA ParisTech (France)

    2015-02-15

    This work is concerned with stochastic optimal control for a running maximum cost. A direct approach based on dynamic programming techniques is studied leading to the characterization of the value function as the unique viscosity solution of a second order Hamilton–Jacobi–Bellman (HJB) equation with an oblique derivative boundary condition. A general numerical scheme is proposed and a convergence result is provided. Error estimates are obtained for the semi-Lagrangian scheme. These results can apply to the case of lookback options in finance. Moreover, optimal control problems with maximum cost arise in the characterization of the reachable sets for a system of controlled stochastic differential equations. Some numerical simulations on examples of reachability analysis are included to illustrate our approach.

  7. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
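
    The drift distribution mentioned above can be sketched under a simple assumed channel model (i.i.d. per-bit insertion and deletion probabilities; this is illustrative, not necessarily the paper's model): the net drift after n bits is a lazy random walk, and its pmf, obtained by repeated convolution, can guide the choice of state space limits.

```python
from collections import defaultdict

def drift_pmf(n, p_ins, p_del):
    """pmf of net drift (insertions - deletions) after n transmitted bits."""
    pmf = {0: 1.0}
    step = {+1: p_ins, -1: p_del, 0: 1.0 - p_ins - p_del}
    for _ in range(n):
        nxt = defaultdict(float)
        for drift, pr in pmf.items():
            for s, ps in step.items():
                nxt[drift + s] += pr * ps
        pmf = dict(nxt)
    return pmf

# Probability that the drift stays within +/-3 after 100 bits at 1% rates;
# mass outside the chosen state-space limits indicates likely decoder failure.
pmf = drift_pmf(100, 0.01, 0.01)
print(sum(p for d, p in pmf.items() if abs(d) <= 3))
```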

  8. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  9. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine Orr

    2012-06-01

    Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  10. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that maintenance errors, which have taken place earlier in plant history, may also have an impact on the severity of a disturbance, e.g. if they disable safety related equipment. Especially common cause and other dependent failures of safety systems may significantly contribute to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports which cover two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors, since they are generally the most serious ones. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. A simpler treatment was given to maintenance-related single errors. The results were shown as a distribution of errors among operating states, inter alia as regards the following matters: in what operational state the errors were committed and detected; in what operational and working condition the errors were detected; and what component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  11. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  12. Maximum inflation of the type 1 error rate when sample size and allocation rate are adapted in a pre-planned interim look.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter

    2011-06-30

    We calculate the maximum type 1 error rate of the pre-planned conventional fixed sample size test for comparing the means of independent normal distributions (with common known variance) that can result when sample size and allocation rate to the treatment arms may be modified in an interim analysis. It is assumed that the experimenter fully exploits knowledge of the unblinded interim estimates of the treatment effects in order to maximize the conditional type 1 error rate. The 'worst-case' strategies require knowledge of the unknown common treatment effect under the null hypothesis. Although this is a rather hypothetical scenario, it may be approached in practice when using a standard control treatment for which precise estimates are available from historical data. The maximum inflation of the type 1 error rate is substantially larger than derived by Proschan and Hunsberger (Biometrics 1995; 51:1315-1324) for design modifications applying balanced samples before and after the interim analysis. Corresponding upper limits for the maximum type 1 error rate are calculated for a number of situations arising from practical considerations (e.g. restricting the maximum sample size, not allowing the sample size to decrease, allowing only increases in the sample size of the experimental treatment). The application is discussed for a motivating example. Copyright © 2011 John Wiley & Sons, Ltd.
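
    A small simulation illustrates the phenomenon (the reassessment rule below is an arbitrary assumption chosen to exhibit inflation, not the worst-case rule derived in the paper): a one-sample, one-sided z-test at level 0.025 whose second-stage sample size depends on the unblinded interim z-statistic, analyzed naively as a fixed-sample test.

```python
import numpy as np

rng = np.random.default_rng(1)
n1 = 50
z_crit = 1.959964  # one-sided 2.5% critical value

def adaptive_trial():
    x1 = rng.normal(0.0, 1.0, n1)          # H0 true: mean 0, known sd 1
    z1 = x1.sum() / np.sqrt(n1)
    # Assumed data-driven rule: greatly extend borderline interim results.
    n2 = 400 if 0.0 < z1 < 1.0 else 50
    x2 = rng.normal(0.0, 1.0, n2)
    z = (x1.sum() + x2.sum()) / np.sqrt(n1 + n2)   # naive pooled z-test
    return z > z_crit

trials = 100_000
# Rejection rate lands noticeably above the nominal 0.025.
print(sum(adaptive_trial() for _ in range(trials)) / trials)
```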

  13. Maximum type I error rate inflation from sample size reassessment when investigators are blind to treatment labels.

    Science.gov (United States)

    Żebrowska, Magdalena; Posch, Martin; Magirr, Dominic

    2016-05-30

    Consider a parallel group trial for the comparison of an experimental treatment to a control, where the second-stage sample size may depend on the blinded primary endpoint data as well as on additional blinded data from a secondary endpoint. For the setting of normally distributed endpoints, we demonstrate that this may lead to an inflation of the type I error rate if the null hypothesis holds for the primary but not the secondary endpoint. We derive upper bounds for the inflation of the type I error rate, both for trials that employ random allocation and for those that use block randomization. We illustrate the worst-case sample size reassessment rule in a case study. For both randomization strategies, the maximum type I error rate increases with the effect size in the secondary endpoint and the correlation between endpoints. The maximum inflation increases with smaller block sizes if information on the block size is used in the reassessment rule. Based on our findings, we do not question the well-established use of blinded sample size reassessment methods with nuisance parameter estimates computed from the blinded interim data of the primary endpoint. However, we demonstrate that the type I error rate control of these methods relies on the application of specific, binding, pre-planned and fully algorithmic sample size reassessment rules and does not extend to general or unplanned sample size adjustments based on blinded data. © 2015 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.

  14. Improving Bayesian credibility intervals for classifier error rates using maximum entropy empirical priors.

    Science.gov (United States)

    Gustafsson, Mats G; Wallman, Mikael; Wickenberg Bolin, Ulrika; Göransson, Hanna; Fryknäs, M; Andersson, Claes R; Isaksson, Anders

    2010-06-01

    Successful use of classifiers that learn to make decisions from a set of patient examples requires robust methods for performance estimation. Recently many promising approaches for determination of an upper bound for the error rate of a single classifier have been reported, but the Bayesian credibility interval (CI) obtained from a conventional holdout test still delivers one of the tightest bounds. The conventional Bayesian CI becomes unacceptably large in real world applications where the test set sizes are less than a few hundred. The source of this problem is the fact that the CI is determined exclusively by the result on the test examples. In other words, no information at all is provided by the uniform prior density distribution employed, which reflects a complete lack of prior knowledge about the unknown error rate. Therefore, the aim of the study reported here was to study a maximum entropy (ME) based approach to improved prior knowledge and Bayesian CIs, demonstrating its relevance for biomedical research and clinical practice. It is demonstrated how a refined non-uniform prior density distribution can be obtained by means of the ME principle using empirical results from a few designs and tests using non-overlapping sets of examples. Experimental results show that ME based priors improve the CIs when applied to four quite different simulated and two real world data sets. An empirically derived ME prior seems promising for improving the Bayesian CI for the unknown error rate of a designed classifier. Copyright 2010 Elsevier B.V. All rights reserved.
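
    The baseline the abstract refers to is easy to reproduce. A minimal sketch, assuming k misclassified out of n holdout examples and a Beta prior on the unknown error rate (uniform prior: a = b = 1; an informative, e.g. ME-derived, prior would enter through other values of a and b; the specific values below are made up):

```python
from scipy.stats import beta

def error_rate_ci(k, n, a=1.0, b=1.0, level=0.95):
    """Equal-tailed Bayesian credibility interval for the error rate."""
    tail = (1.0 - level) / 2.0
    posterior = beta(a + k, b + n - k)       # conjugate Beta posterior
    return posterior.ppf(tail), posterior.ppf(1.0 - tail)

# Small holdout sets give wide intervals under the uniform prior ...
print(error_rate_ci(5, 40))                  # roughly (0.06, 0.26)
# ... which a sharper empirical prior can tighten (illustrative values):
print(error_rate_ci(5, 40, a=4.0, b=28.0))
```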

  15. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing; Pang, Chaoyi; Zhou, Xiaofang; Zhang, Xiangliang; Deng, Ke

    2014-01-01

    Given a time series data stream, the generation of error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been effectively designed in a transformed space rather than the time-value space, while desirable optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and that transformed space, and also found OptimalPLR superior in processing efficiency in practice. Extensive empirical evaluation supports and demonstrates the effectiveness and efficiency of our proposed algorithms.

  16. Maximum error-bounded Piecewise Linear Representation for online stream approximation

    KAUST Repository

    Xie, Qing

    2014-04-04

    Given a time series data stream, the generation of error-bounded Piecewise Linear Representation (error-bounded PLR) is to construct a number of consecutive line segments to approximate the stream, such that the approximation error does not exceed a prescribed error bound. In this work, we consider the error bound in the L∞ norm as the approximation criterion, which constrains the approximation error on each corresponding data point, and aim at designing algorithms that generate the minimal number of segments. In the literature, optimal approximation algorithms have been effectively designed in a transformed space rather than the time-value space, while desirable optimal solutions based on the original time domain (i.e., time-value space) are still lacking. In this article, we propose two linear-time algorithms to construct error-bounded PLR for data streams based on the time domain, named OptimalPLR and GreedyPLR, respectively. OptimalPLR is an optimal algorithm that generates the minimal number of line segments for the stream approximation, and GreedyPLR is an alternative solution for the requirements of high efficiency and resource-constrained environments. In order to evaluate the superiority of OptimalPLR, we theoretically analyzed and compared OptimalPLR with the state-of-the-art optimal solution in the transformed space, which also achieves linear complexity. We proved the theoretical equivalence between the time-value space and that transformed space, and also found OptimalPLR superior in processing efficiency in practice. Extensive empirical evaluation supports and demonstrates the effectiveness and efficiency of our proposed algorithms.
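
    A minimal greedy variant conveys the flavor of L∞ error-bounded PLR (an illustrative sketch, not the paper's OptimalPLR or GreedyPLR): each segment's line is forced through the segment's first point and a feasible slope interval is narrowed as points arrive; when it empties, the segment is closed. Every point stays within eps of its segment, though the segment count is not guaranteed minimal.

```python
def greedy_plr(points, eps):
    """points: (t, y) pairs with strictly increasing t.
    Returns segments as (t_start, y_start, slope, t_end)."""
    segments = []
    i, n = 0, len(points)
    while i < n:
        t0, y0 = points[i]
        lo, hi = float("-inf"), float("inf")
        j = i + 1
        while j < n:
            t, y = points[j]
            lo = max(lo, (y - eps - y0) / (t - t0))  # slope lower bound
            hi = min(hi, (y + eps - y0) / (t - t0))  # slope upper bound
            if lo > hi:          # no line through (t0, y0) fits all points
                break
            j += 1
        slope = 0.0 if j == i + 1 else (lo + hi) / 2.0
        segments.append((t0, y0, slope, points[j - 1][0]))
        i = j
    return segments

stream = [(t, 0.1 * t + (0.3 if t % 4 == 0 else -0.2)) for t in range(20)]
print(greedy_plr(stream, eps=0.5))
```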

  17. Improved efficiency of maximum likelihood analysis of time series with temporally correlated errors

    Science.gov (United States)

    Langbein, John

    2017-08-01

    Most time series of geophysical phenomena have temporally correlated errors. From these measurements, various parameters are estimated. For instance, from geodetic measurements of positions, the rates and changes in rates are often estimated and are used to model tectonic processes. Along with the estimates of the size of the parameters, the error in these parameters needs to be assessed. If temporal correlations are not taken into account, or each observation is assumed to be independent, it is likely that any estimate of the error of these parameters will be too low and the estimated value of the parameter will be biased. Inclusion of better estimates of uncertainties is limited by several factors, including selection of the correct model for the background noise and the computational requirements to estimate the parameters of the selected noise model for cases where there are numerous observations. Here, I address the second problem of computational efficiency using maximum likelihood estimates (MLE). Most geophysical time series have background noise processes that can be represented as a combination of white and power-law noise, 1/f^α, with frequency f. With missing data, standard spectral techniques involving FFTs are not appropriate. Instead, time domain techniques involving construction and inversion of large data covariance matrices are employed. Bos et al. (J Geod, 2013. doi: 10.1007/s00190-012-0605-0) demonstrate one technique that substantially increases the efficiency of the MLE methods, yet is only an approximate solution for power-law indices >1.0 since they require the data covariance matrix to be Toeplitz. That restriction can be removed by simply forming a data filter that adds noise processes rather than combining them in quadrature. Consequently, the inversion of the data covariance matrix is simplified yet provides robust results for a wider range of power-law indices.
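
    The brute-force computation that such methods accelerate can be written down directly. A sketch under stated assumptions (power-law covariance built from fractional-integration filter coefficients with d = α/2; deliberately O(n³), written for clarity rather than speed):

```python
import numpy as np

def powerlaw_cov(n, alpha):
    """Covariance of unit-amplitude power-law (1/f^alpha) noise, n samples."""
    d = alpha / 2.0
    h = np.zeros(n)
    h[0] = 1.0
    for k in range(1, n):                 # fractional-integration filter
        h[k] = h[k - 1] * (k - 1 + d) / k
    T = np.zeros((n, n))
    for i in range(n):                    # lower-triangular Toeplitz of h
        T[i, : i + 1] = h[i::-1]
    return T @ T.T

def neg_log_likelihood(params, y):
    """-log L for noise covariance sig_w^2 I + sig_pl^2 * powerlaw_cov."""
    sig_w, sig_pl, alpha = params
    n = len(y)
    C = sig_w**2 * np.eye(n) + sig_pl**2 * powerlaw_cov(n, alpha)
    L = np.linalg.cholesky(C)
    z = np.linalg.solve(L, y)
    logdet = 2.0 * np.log(np.diag(L)).sum()
    return 0.5 * (n * np.log(2.0 * np.pi) + logdet + z @ z)

rng = np.random.default_rng(0)
y = rng.normal(size=200)                  # placeholder data
print(neg_log_likelihood((1.0, 0.5, 1.0), y))   # flicker-like noise, alpha=1
```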

  18. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    Science.gov (United States)

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes (product shortage, calculation errors, and tubing interconnectivity) emerge and appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.

  19. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as a response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that a continuous, asynchronous detection of errors is also possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  20. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin Spüler

    2015-03-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as a response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that a continuous, asynchronous detection of errors is also possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  1. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse; Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  2. Maximum type 1 error rate inflation in multiarmed clinical trials with adaptive interim sample size modifications.

    Science.gov (United States)

    Graf, Alexandra C; Bauer, Peter; Glimm, Ekkehard; Koenig, Franz

    2014-07-01

    Sample size modifications in the interim analyses of an adaptive design can inflate the type 1 error rate, if test statistics and critical boundaries are used in the final analysis as if no modification had been made. While this is already true for designs with an overall change of the sample size in a balanced treatment-control comparison, the inflation can be much larger if, in addition, a modification of allocation ratios is allowed. In this paper, we investigate adaptive designs with several treatment arms compared to a single common control group. Regarding modifications, we consider treatment arm selection as well as modifications of overall sample size and allocation ratios. The inflation is quantified for two approaches: a naive procedure that ignores not only all modifications, but also the multiplicity issue arising from the many-to-one comparison, and a Dunnett procedure that ignores modifications, but adjusts for the initially started multiple treatments. The maximum inflation of the type 1 error rate for such types of design can be calculated by searching for the "worst case" scenarios, i.e., sample size adaptation rules in the interim analysis that lead to the largest conditional type 1 error rate at any point of the sample space. To show the most extreme inflation, we initially assume unconstrained second stage sample size modifications, leading to a large inflation of the type 1 error rate. Furthermore, we investigate the inflation when putting constraints on the second stage sample sizes. It turns out that, for example, fixing the sample size of the control group leads to designs controlling the type 1 error rate. © 2014 The Author. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Event-Related Potentials for Post-Error and Post-Conflict Slowing

    Science.gov (United States)

    Chang, Andrew; Chen, Chien-Chung; Li, Hsin-Hung; Li, Chiang-Shan R.

    2014-01-01

    In a reaction time task, people typically slow down following an error or conflict, phenomena termed post-error slowing (PES) and post-conflict slowing (PCS), respectively. Despite many studies of the cognitive mechanisms, the neural responses of PES and PCS continue to be debated. In this study, we combined high-density array EEG and a stop-signal task to examine event-related potentials of PES and PCS in sixteen young adult participants. The results showed that the amplitude of N2 is greater during PES but not PCS. In contrast, the peak latency of N2 is longer for PCS but not PES. Furthermore, error positivity (Pe) but not error-related negativity (ERN) was greater in the stop error trials preceding PES than in non-PES trials, suggesting that PES is related to participants' awareness of the error. Together, these findings extend earlier work on cognitive control by specifying the neural correlates of PES and PCS in the stop signal task. PMID:24932780

  4. A lower bound on the relative error of mixed-state cloning and related operations

    International Nuclear Information System (INIS)

    Rastegin, A E

    2003-01-01

    We extend the concept of the relative error to mixed-state cloning and related physical operations, in which the ancilla contains some information a priori about the input state. The lower bound on the relative error is obtained. It is shown that this result provides further support for a stronger no-cloning theorem

  5. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element for both immediate and delayed language contexts, and collecting data from university undergraduates, this study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, this paper concludes by creating a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  6. Making related errors facilitates learning, but learners do not know it.

    Science.gov (United States)

    Huelser, Barbie J; Metcalfe, Janet

    2012-05-01

    Producing an error, so long as it is followed by corrective feedback, has been shown to result in better retention of the correct answers than does simply studying the correct answers from the outset. The reasons for this surprising finding, however, have not been investigated. Our hypothesis was that the effect might occur only when the errors produced were related to the targeted correct response. In Experiment 1, participants studied either related or unrelated word pairs, manipulated between participants. Participants either were given the cue and target to study for 5 or 10 s or generated an error in response to the cue for the first 5 s before receiving the correct answer for the final 5 s. When the cues and targets were related, error-generation led to the highest correct retention. However, consistent with the hypothesis, no benefit was derived from generating an error when the cue and target were unrelated. Latent semantic analysis revealed that the errors generated in the related condition were related to the target, whereas they were not related to the target in the unrelated condition. Experiment 2 replicated these findings in a within-participants design. We found, additionally, that people did not know that generating an error enhanced memory, even after they had just completed the task that produced substantial benefits.

  7. Assessing errors related to characteristics of the items measured

    International Nuclear Information System (INIS)

    Liggett, W.

    1980-01-01

    Errors that are related to some intrinsic property of the items measured are often encountered in nuclear material accounting. An example is the error in nondestructive assay measurements caused by uncorrected matrix effects. Nuclear material accounting requires for each materials type one measurement method for which bounds on these errors can be determined. If such a method is available, a second method might be used to reduce costs or to improve precision. If the measurement error for the first method is longer-tailed than Gaussian, then precision might be improved by measuring all items by both methods. 8 refs

  8. Regional compensation for statistical maximum likelihood reconstruction error of PET image pixels

    International Nuclear Information System (INIS)

    Forma, J; Ruotsalainen, U; Niemi, J A

    2013-01-01

    In positron emission tomography (PET), there is an increasing interest in studying not only the regional mean tracer concentration, but also its variation arising from local differences in physiology, the tissue heterogeneity. However, in reconstructed images this physiological variation is shadowed by a large reconstruction error, which is caused by noisy data and the inversion of the tomographic problem. We present a new procedure which can quantify the error variation in regional reconstructed values for a given PET measurement, and reveal the remaining tissue heterogeneity. The error quantification is made by creating and reconstructing noise realizations of virtual sinograms, which are statistically similar to the measured sinogram. Tests with physical phantom data show that the characterization of error variation and the true heterogeneity are possible, despite the existing model error when a real measurement is considered. (paper)

  9. Maximum entropy reconstructions for crystallographic imaging; Cristallographie et reconstruction d'images par maximum d'entropie

    Energy Technology Data Exchange (ETDEWEB)

    Papoular, R

    1997-07-01

    The Fourier Transform is of central importance to Crystallography since it allows the visualization in real space of tridimensional scattering densities pertaining to physical systems from diffraction data (powder or single-crystal diffraction, using x-rays, neutrons, electrons or other probes). In turn, this visualization makes it possible to model and parametrize these systems, the crystal structures of which are eventually refined by Least-Squares techniques (e.g., the Rietveld method in the case of Powder Diffraction). The Maximum Entropy Method (sometimes called MEM or MaxEnt) is a general imaging technique, related to solving ill-conditioned inverse problems. It is ideally suited for tackling underdetermined systems of linear equations (for which the number of variables is much larger than the number of equations). It is already being applied successfully in Astronomy, Radioastronomy and Medical Imaging. The advantages of using Maximum Entropy over conventional Fourier and 'difference Fourier' syntheses stem from the following facts: MaxEnt takes the experimental error bars into account; MaxEnt incorporates prior knowledge (e.g., the positivity of the scattering density in some instances); MaxEnt allows density reconstructions from incompletely phased data, as well as from overlapping Bragg reflections; MaxEnt substantially reduces truncation errors to which conventional experimental Fourier reconstructions are usually prone. The principles of Maximum Entropy imaging as applied to Crystallography are first presented. The method is then illustrated by a detailed example specific to Neutron Diffraction: the search for protons in solids. (author). 17 refs.
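
    The generic constrained-optimization form behind such reconstructions (notation assumed here; the paper's formulation may differ in detail) maximizes the relative entropy of the density against a prior model m, subject to a χ² constraint built from the error bars:

```latex
\max_{\rho \geq 0}\; S[\rho] = -\sum_{j} \rho_j \ln\!\frac{\rho_j}{m_j}
\quad \text{subject to} \quad
\chi^2[\rho] = \sum_{k} \frac{\lvert F_k(\rho) - D_k \rvert^{2}}{\sigma_k^{2}}
\;\leq\; N_{\mathrm{obs}} ,
```

    where F_k(ρ) are structure factors computed from the trial density, D_k the observed data with standard errors σ_k, and the positivity ρ ≥ 0 is one piece of the prior knowledge mentioned above.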

  10. THE EFFECT OF THE STATIC RELATIVE STRENGTH ON THE MAXIMUM RELATIVE RECEIVING OF OXYGEN

    Directory of Open Access Journals (Sweden)

    Abdulla Elezi

    2011-09-01

    Based on research on a sample of 263 students aged 18 years, using a battery of 9 tests for evaluation of static relative strength and the criterion variable of maximum relative oxygen uptake (VO2, ml/kg/min) based on the Astrand test, and on regression analysis to determine the influence of static relative strength on the criterion variable, it can be generally concluded that 2 of the 9 predictor variables have a statistically significant partial effect. In hierarchical order, they are: the variable of static relative leg strength - endurance of the fingers (angle of the lower leg and thigh 90°) (SRL2), whose arithmetic mean is 25.04 seconds, and the variable of static relative strength of the arms and shoulders - push-up endurance on the balance beam (angle of the forearm and upper arm 90°) (SRA2), with an arithmetic mean of 17.75 seconds. Of the predictor variables with a statistically significant influence on the criterion variable, one is from the static relative leg strength (SRL2) and the other is from the static relative strength of the arm and shoulder area (SRA2). From the analysis of these relations we can conclude that the isometric contractions of the four-headed thigh muscle and of the three-headed upper arm muscle are predominantly responsible for the successful execution of actions on a bicycle ergometer, and not for the maximum relative oxygen uptake.

  11. Relating physician's workload with errors during radiation therapy planning.

    Science.gov (United States)

    Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Chera, Bhishamjit S; Marks, Lawrence B

    2014-01-01

    To relate subjective workload (WL) levels to errors for routine clinical tasks. Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using the National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between WL and performance was assessed via ordinal logistic regression. The severity grade of errors increased with increasing WL (P value = .02). As the majority of the higher NASA-TLX scores and the majority of the performance errors were among the residents, our findings are likely most pertinent to radiation oncology centers with training programs. WL levels may be an important factor contributing to errors during radiation therapy planning tasks. Published by Elsevier Inc.

  12. Effects of errors in velocity tilt on maximum longitudinal compression during neutralized drift compression of intense beam pulses: II. Analysis of experimental data of the Neutralized Drift Compression eXperiment-I (NDCX-I)

    International Nuclear Information System (INIS)

    Massidda, Scott; Kaganovich, Igor D.; Startsev, Edward A.; Davidson, Ronald C.; Lidia, Steven M.; Seidl, Peter; Friedman, Alex

    2012-01-01

    Neutralized drift compression offers an effective means for particle beam focusing and current amplification with applications to heavy ion fusion. In the Neutralized Drift Compression eXperiment-I (NDCX-I), a non-relativistic ion beam pulse is passed through an inductive bunching module that produces a longitudinal velocity modulation. Due to the applied velocity tilt, the beam pulse compresses during neutralized drift. The ion beam pulse can be compressed by a factor of more than 100; however, errors in the velocity modulation affect the compression ratio in complex ways. We have performed a study of how the longitudinal compression of a typical NDCX-I ion beam pulse is affected by the initial errors in the acquired velocity modulation. Without any voltage errors, an ideal compression is limited only by the initial energy spread of the ion beam, ΔE_b. In the presence of large voltage errors, δU ≫ ΔE_b, the maximum compression ratio is found to be inversely proportional to the geometric mean of the relative error in velocity modulation and the relative intrinsic energy spread of the beam ions. Although small parts of a beam pulse can achieve high local values of compression ratio, the acquired velocity errors cause these parts to compress at different times, limiting the overall compression of the ion beam pulse.
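
    In symbols, the stated result for the large-error regime reads (notation following the abstract):

```latex
C_{\max} \;\propto\;
\left( \frac{\delta U}{U_b} \cdot \frac{\Delta E_b}{E_b} \right)^{-1/2},
\qquad \delta U \gg \Delta E_b ,
```

    i.e., the maximum compression ratio is inversely proportional to the geometric mean of the relative velocity-modulation error and the relative intrinsic energy spread.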

  13. Refractive error magnitude and variability: Relation to age.

    Science.gov (United States)

    Irving, Elizabeth L; Machan, Carolyn M; Lam, Sharon; Hrynchak, Patricia K; Lillakas, Linda

    2018-03-19

    To investigate mean ocular refraction (MOR) and astigmatism over the human age range, and to compare the severity of refractive error to earlier studies from clinical populations having large age ranges. For this descriptive study, patient age, refractive error and history of surgery affecting refraction were abstracted from the Waterloo Eye Study database (WatES). Average MOR, standard deviation of MOR and astigmatism were assessed in relation to age. Refractive distributions for developmental age groups were determined. MOR standard deviation relative to average MOR was evaluated. Data from earlier clinically based studies with similar age ranges were compared to WatES. Right eye refractive errors were available for 5933 patients with no history of surgery affecting refraction. Average MOR varied with age. Children <1 yr of age were the most hyperopic (+1.79D) and the highest magnitude of myopia was found at 27 yrs (-2.86D). MOR distributions were leptokurtic and negatively skewed. The mode varied with age group. MOR variability increased with increasing myopia. Average astigmatism increased gradually to age 60, after which it increased at a faster rate. By 85+ years it was 1.25D. The J_0 power vector became increasingly negative with age. J_45 power vector values remained close to zero, but variability increased at approximately 70 years. In relation to comparable earlier studies, the WatES data were the most myopic. Mean ocular refraction and refractive error distribution vary with age. The highest magnitude of myopia is found in young adults. Similar to prevalence, the severity of myopia also appears to have increased since 1931. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.
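
    For reference, the power vectors reported above are presumably the standard decomposition of a spherocylindrical refraction S/C × α (Thibos-style notation; stated here as an assumption about the convention used):

```latex
M = S + \frac{C}{2}, \qquad
J_0 = -\frac{C}{2}\cos 2\alpha, \qquad
J_{45} = -\frac{C}{2}\sin 2\alpha ,
```

    so MOR corresponds to the spherical equivalent M, and an increasingly negative J_0 with age reflects a shift toward against-the-rule astigmatism.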

  14. Dysfunctional error-related processing in incarcerated youth with elevated psychopathic traits

    Science.gov (United States)

    Maurer, J. Michael; Steele, Vaughn R.; Cope, Lora M.; Vincent, Gina M.; Stephen, Julia M.; Calhoun, Vince D.; Kiehl, Kent A.

    2016-01-01

    Adult psychopathic offenders show an increased propensity towards violence, impulsivity, and recidivism. A subsample of youth with elevated psychopathic traits represent a particularly severe subgroup characterized by extreme behavioral problems and neurocognitive deficits comparable to their adult counterparts, including perseveration deficits. Here, we investigate response-locked event-related potential (ERP) components (the error-related negativity [ERN/Ne] related to early error-monitoring processing and the error-related positivity [Pe] involved in later error-related processing) in a sample of incarcerated juvenile male offenders (n = 100) who performed a response inhibition Go/NoGo task. Psychopathic traits were assessed using the Hare Psychopathy Checklist: Youth Version (PCL:YV). The ERN/Ne and Pe were analyzed with classic windowed ERP components and principal component analysis (PCA). Using linear regression analyses, PCL:YV scores were unrelated to the ERN/Ne, but were negatively related to Pe mean amplitude. Specifically, the PCL:YV Facet 4 subscale reflecting antisocial traits emerged as a significant predictor of reduced amplitude of a subcomponent underlying the Pe identified with PCA. This is the first evidence to suggest a negative relationship between adolescent psychopathy scores and Pe mean amplitude. PMID:26930170

  15. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    Science.gov (United States)

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the antimicrobial most commonly involved in errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Practical application of the theory of errors in measurement

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made
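
    A standard worked example for the last topic (a textbook relation, assuming independent Gaussian measurement errors with known standard deviation σ): to keep the maximum error of the mean below E with confidence 1 − α, the number of measurements n must satisfy

```latex
n \;\geq\; \left( \frac{z_{1-\alpha/2}\,\sigma}{E} \right)^{2},
\qquad \text{e.g.}\quad
\sigma = 0.2,\ E = 0.1,\ 95\%\ \text{confidence} \;\Rightarrow\;
n \geq (1.96 \times 0.2 / 0.1)^{2} \approx 15.4 \;\Rightarrow\; n = 16 .
```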

  17. [Event-related EEG potentials associated with error detection in psychiatric disorder: literature review].

    Science.gov (United States)

    Balogh, Lívia; Czobor, Pál

    2010-01-01

    Error-related bioelectric signals constitute a special subgroup of event-related potentials. Researchers have identified two evoked potential components to be closely related to error processing, namely error-related negativity (ERN) and error positivity (Pe), and have linked these to specific cognitive functions. In our article, we first give a brief description of these components; then, based on the available literature, we review differences in error-related evoked potentials observed in patients across psychiatric disorders. The PubMed and Medline search engines were used to identify all relevant articles published between 2000 and 2009. For the purpose of the current paper we reviewed publications summarizing results of clinical trials. Patients suffering from schizophrenia, anorexia nervosa or borderline personality disorder exhibited a decrease in the amplitude of error negativity when compared with healthy controls, while in cases of depression and anxiety an increase in the amplitude has been observed. Some of the articles suggest specific personality variables, such as impulsivity, perfectionism, negative emotions or sensitivity to punishment, to underlie these electrophysiological differences. Research in the field of error-related electric activity has only recently come into the focus of psychiatric research, thus the amount of available data is significantly limited. However, since this is a relatively new field of research, the results available at present are noteworthy and promising for future electrophysiological investigations in psychiatric disorders.

  18. Comprehensive analysis of a medication dosing error related to CPOE.

    Science.gov (United States)

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  19. Masked and unmasked error-related potentials during continuous control and feedback

    Science.gov (United States)

    Lopes Dias, Catarina; Sburlea, Andreea I.; Müller-Putz, Gernot R.

    2018-06-01

    The detection of error-related potentials (ErrPs) in tasks with discrete feedback is well established in the brain–computer interface (BCI) field. However, the decoding of ErrPs in tasks with continuous feedback is still in its early stages. Objective. We developed a task in which subjects have continuous control of a cursor's position by means of a joystick. The cursor's position was shown to the participants in two different modalities of continuous feedback: normal and jittered. The jittered feedback was created to mimic the instability that could exist if participants controlled the trajectory directly with brain signals. Approach. This paper studies the electroencephalographic (EEG)-measurable signatures caused by a loss of control over the cursor's trajectory, causing a target miss. Main results. In both feedback modalities, time-locked potentials revealed the typical frontal-central components of error-related potentials. Errors occurring during the jittered feedback (masked errors) were delayed in comparison to errors occurring during normal feedback (unmasked errors). Masked errors displayed lower peak amplitudes than unmasked errors. Time-locked classification analysis allowed a good distinction between correct and error classes (average TPR = 81.8% and average TNR = 96.4%). Time-locked classification analysis between masked error and unmasked error classes revealed results at chance level (average TPR = 60.9% and average TNR = 58.3%). Afterwards, we performed asynchronous detection of ErrPs, combining both masked and unmasked trials. The asynchronous detection of ErrPs in a simulated online scenario resulted in an average TNR of 84.0% and in an average TPR of 64.9%. Significance. The time-locked classification results suggest that the masked and unmasked errors were indistinguishable in terms of classification. The asynchronous classification results suggest that the

  20. Relating faults in diagnostic reasoning with diagnostic errors and patient harm.

    NARCIS (Netherlands)

    Zwaan, L.; Thijs, A.; Wagner, C.; Wal, G. van der; Timmermans, D.R.M.

    2012-01-01

    Purpose: The relationship between faults in diagnostic reasoning, diagnostic errors, and patient harm has hardly been studied. This study examined suboptimal cognitive acts (SCAs; i.e., faults in diagnostic reasoning), related them to the occurrence of diagnostic errors and patient harm, and studied

  1. FEL small signal gain reduction due to phase error of undulator

    International Nuclear Information System (INIS)

    Jia Qika

    2002-01-01

    The effects of undulator phase errors on the Free Electron Laser small signal gain are analyzed and discussed. The gain reduction factor due to the phase error is given analytically for low-gain regimes; it shows that the degradation of the gain is similar to that of the spontaneous radiation, following a simple exponential relation with the square of the rms phase error, and that the linearly varying part of the phase error shifts the position of the maximum gain. The result also shows that Madey's theorem still holds in the presence of phase error. The gain reduction factor due to the phase error for high-gain regimes can also be given in a simple way.
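
    The exponential dependence quoted above can be made concrete with a one-line calculation. The sketch below assumes the commonly used form R ≈ exp(-σ_φ²), with σ_φ the rms phase error in radians; the abstract states only that the relation is exponential in the square of the rms phase error, so the exact argument and prefactor are assumptions here.

```python
import numpy as np

# Illustrative sketch (not the paper's exact expression): relative gain
# reduction modelled as R ~ exp(-sigma_phi^2), the same form usually quoted
# for the spontaneous-radiation reduction factor.
def gain_reduction(sigma_phi_rad):
    return np.exp(-sigma_phi_rad ** 2)

for deg in (5, 10, 20):
    s = np.deg2rad(deg)
    print(f"rms phase error {deg:>2} deg -> relative gain ~ {gain_reduction(s):.3f}")
```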

  2. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    Science.gov (United States)

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall level and source-specific work-related stressors and the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month (P < 0.05); HCPs reporting overall stress were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-report to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors, whereas overall stress revealed a 2-fold higher trend.

  3. Maximum Power Point Tracking in Variable Speed Wind Turbine Based on Permanent Magnet Synchronous Generator Using Maximum Torque Sliding Mode Control Strategy

    Institute of Scientific and Technical Information of China (English)

    Esmaeil Ghaderi; Hossein Tohidi; Behnam Khosrozadeh

    2017-01-01

    The present study was carried out in order to track the maximum power point in a variable speed turbine by minimizing electromechanical torque changes using a sliding mode control strategy. In this strategy, first, the rotor speed is set at an optimal point for different wind speeds. As a result, the tip speed ratio reaches an optimal point, the mechanical power coefficient is maximized, and the wind turbine produces its maximum power and mechanical torque. Then, the maximum mechanical torque is tracked using the electromechanical torque. In this technique, the tracking error integral of the maximum mechanical torque, the error, and the derivative of the error are used as state variables. During changes in wind speed, the sliding mode control is designed to absorb the maximum energy from the wind and minimize the response time of maximum power point tracking (MPPT). In this method, the actual control input signal is formed from a second-order integral operation of the original sliding mode control input signal, which provides control signal integrity, full chattering attenuation, and prevention of large fluctuations in the generator power output. The simulation results, obtained using MATLAB/m-file software, show the effectiveness of the proposed control strategy for wind energy systems based on the permanent magnet synchronous generator (PMSG).
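
    The chattering-attenuation step named in the abstract, a second-order integral of the raw sliding mode input, can be illustrated in a few lines. The sliding variable below is a synthetic stand-in, not the paper's PMSG torque-tracking surface:

```python
import numpy as np

# Toy demonstration of the smoothing effect only: a discontinuous switching
# signal v(t) = sign(s(t)) is integrated twice, yielding a smooth actual
# control input u(t). The sliding variable s(t) is an assumed stand-in.
rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0, 1, dt)
s = np.sin(40 * t) + 0.3 * rng.normal(size=t.size)  # stand-in sliding variable
v = np.sign(s)                    # raw discontinuous (chattering) input
v1 = np.cumsum(v) * dt            # first integral
u = np.cumsum(v1) * dt            # second integral: smooth control input

print("max step in v:", np.max(np.abs(np.diff(v))))  # ~2  -> discontinuous
print("max step in u:", np.max(np.abs(np.diff(u))))  # ~1e-3 -> smooth
```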

  4. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist, and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports, which may represent only a small proportion of the medication errors that actually take place in a hospital; hence, interpretation of the results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes of technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related, and most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  5. Intelligence and Neurophysiological Markers of Error Monitoring Relate to Children's Intellectual Humility.

    Science.gov (United States)

    Danovitch, Judith H; Fisher, Megan; Schroder, Hans; Hambrick, David Z; Moser, Jason

    2017-09-18

    This study explored developmental and individual differences in intellectual humility (IH) among 127 children ages 6-8. IH was operationalized as children's assessment of their knowledge and willingness to delegate scientific questions to experts. Children completed measures of IH, theory of mind, motivational framework, and intelligence, and neurophysiological measures indexing early (error-related negativity [ERN]) and later (error positivity [Pe]) error-monitoring processes related to cognitive control. Children's knowledge self-assessment correlated with question delegation, and older children showed greater IH than younger children. Greater IH was associated with higher intelligence but not with social cognition or motivational framework. ERN related to self-assessment, whereas Pe related to question delegation. Thus, children show separable epistemic and social components of IH that may differentially contribute to metacognition and learning. © 2017 The Authors. Child Development © 2017 Society for Research in Child Development, Inc.

  6. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  7. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Science.gov (United States)

    Esselborn, Saskia; Rudenko, Sergei; Schöne, Tilo

    2018-03-01

    Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr-1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr-1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test orbits calculated at GFZ, the sources of the

  8. Orbit-related sea level errors for TOPEX altimetry at seasonal to decadal timescales

    Directory of Open Access Journals (Sweden)

    S. Esselborn

    2018-03-01

    Full Text Available Interannual to decadal sea level trends are indicators of climate variability and change. A major source of global and regional sea level data is satellite radar altimetry, which relies on precise knowledge of the satellite's orbit. Here, we assess the error budget of the radial orbit component for the TOPEX/Poseidon mission for the period 1993 to 2004 from a set of different orbit solutions. The errors for seasonal, interannual (5-year), and decadal periods are estimated on global and regional scales based on radial orbit differences from three state-of-the-art orbit solutions provided by different research teams: the German Research Centre for Geosciences (GFZ), the Groupe de Recherche de Géodésie Spatiale (GRGS), and the Goddard Space Flight Center (GSFC). The global mean sea level error related to orbit uncertainties is of the order of 1 mm (8 % of the global mean sea level variability) with negligible contributions on the annual and decadal timescales. In contrast, the orbit-related error of the interannual trend is 0.1 mm yr−1 (27 % of the corresponding sea level variability) and might hamper the estimation of an acceleration of the global mean sea level rise. For regional scales, the gridded orbit-related error is up to 11 mm, and for about half the ocean the orbit error accounts for at least 10 % of the observed sea level variability. The seasonal orbit error amounts to 10 % of the observed seasonal sea level signal in the Southern Ocean. At interannual and decadal timescales, the orbit-related trend uncertainties reach regionally more than 1 mm yr−1. The interannual trend errors account for 10 % of the observed sea level signal in the tropical Atlantic and the south-eastern Pacific. For decadal scales, the orbit-related trend errors are prominent in several regions including the South Atlantic, western North Atlantic, central Pacific, South Australian Basin, and the Mediterranean Sea. Based on a set of test

  9. Error-related brain activity predicts cocaine use after treatment at 3-month follow-up.

    Science.gov (United States)

    Marhe, Reshmi; van de Wetering, Ben J M; Franken, Ingmar H A

    2013-04-15

    Relapse after treatment is one of the most important problems in drug dependency. Several studies suggest that lack of cognitive control is one of the causes of relapse. In this study, a relatively new electrophysiologic index of cognitive control, the error-related negativity, is investigated to examine its suitability as a predictor of relapse. The error-related negativity was measured in 57 cocaine-dependent patients during their first week in detoxification treatment. Data from 49 participants were used to predict cocaine use at 3-month follow-up. Cocaine use at follow-up was measured by means of self-reported days of cocaine use in the last month verified by urine screening. A multiple hierarchical regression model was used to examine the predictive value of the error-related negativity while controlling for addiction severity and self-reported craving in the week before treatment. The error-related negativity was the only significant predictor in the model and added 7.4% of explained variance to the control variables, resulting in a total of 33.4% explained variance in the prediction of days of cocaine use at follow-up. A reduced error-related negativity measured during the first week of treatment was associated with more days of cocaine use at 3-month follow-up. Moreover, the error-related negativity was a stronger predictor of recent cocaine use than addiction severity and craving. These results suggest that underactive error-related brain activity might help to identify patients who are at risk of relapse as early as in the first week of detoxification treatment. Copyright © 2013 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.
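
    The hierarchical regression logic is straightforward to reproduce: fit the control-variable model, fit the augmented model including the ERN, and read off the increment in explained variance. A minimal sketch with simulated placeholder data (statsmodels; not the study's dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data: control variables first, then the ERN predictor.
rng = np.random.default_rng(0)
n = 49
df = pd.DataFrame({
    "severity": rng.normal(size=n),   # addiction severity (placeholder)
    "craving":  rng.normal(size=n),   # self-reported craving (placeholder)
    "ern":      rng.normal(size=n),   # error-related negativity (placeholder)
})
df["cocaine_days"] = 2 * df["ern"] + df["severity"] + rng.normal(size=n)

base = sm.OLS(df["cocaine_days"],
              sm.add_constant(df[["severity", "craving"]])).fit()
full = sm.OLS(df["cocaine_days"],
              sm.add_constant(df[["severity", "craving", "ern"]])).fit()
# Delta R^2 is the added explained variance attributable to the ERN.
print(f"Delta R^2 from adding ERN: {full.rsquared - base.rsquared:.3f}")
```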

  10. Error Analysis of Relative Calibration for RCS Measurement on Ground Plane Range

    Directory of Open Access Journals (Sweden)

    Wu Peng-fei

    2012-03-01

    Full Text Available Ground plane range is a kind of outdoor Radar Cross Section (RCS) test range used for static measurement of full-size or scaled targets. Starting from the characteristics of the ground plane range, the impact of the environment on targets and calibrators during calibration in RCS measurements is analyzed. The error of relative calibration produced by the different illumination of the target and the calibrator is studied. The relative calibration technique used on a ground plane range is to place the calibrator on a fixed auxiliary pylon somewhere between the radar and the target under test. By considering the effect of ground reflection and the antenna pattern, the relationship between the magnitude of the echoes and the position of the calibrator is discussed. According to the different distances between the calibrator and the target, the difference between free space and the ground plane range is studied and the error of relative calibration is calculated. Numerical simulation results are presented with useful conclusions. The relative calibration error varies with the position of the calibrator, the frequency and the antenna beam width. In most cases, setting the calibrator close to the target keeps the error under control.

  11. The Hurst Phenomenon in Error Estimates Related to Atmospheric Turbulence

    Science.gov (United States)

    Dias, Nelson Luís; Crivellaro, Bianca Luhm; Chamecki, Marcelo

    2018-05-01

    The Hurst phenomenon is a well-known feature of long-range persistence first observed in hydrological and geophysical time series by E. Hurst in the 1950s. It has also been found in several cases in turbulence time series measured in the wind tunnel, the atmosphere, and in rivers. Here, we conduct a systematic investigation of the value of the Hurst coefficient H in atmospheric surface-layer data, and its impact on the estimation of random errors. We show that usually H > 0.5, which implies the non-existence (in the statistical sense) of the integral time scale. Since the integral time scale is present in the Lumley-Panofsky equation for the estimation of random errors, this has important practical consequences. We estimated H in two principal ways: (1) with an extension of the recently proposed filtering method to estimate the random error (H_p), and (2) with the classical rescaled range introduced by Hurst (H_R). Other estimators were tried but were found less able to capture the statistical behaviour of the large scales of turbulence. Using data from three micrometeorological campaigns we found that both first- and second-order turbulence statistics display the Hurst phenomenon. Usually, H_R is larger than H_p for the same dataset, raising the question that one, or even both, of these estimators may be biased. For the relative error, we found that the errors estimated with the approach adopted by us, which we call the relaxed filtering method and which takes into account the occurrence of the Hurst phenomenon, are larger than both the filtering method and the classical Lumley-Panofsky estimates. Finally, we found that there is no apparent relationship between H and the Obukhov stability parameter. The relative errors, however, do show stability dependence, particularly in the case of the error of the kinematic momentum flux in unstable conditions, and that of the kinematic sensible heat flux in stable conditions.
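
    The classical rescaled-range estimate H_R mentioned above can be sketched compactly: for blocks of increasing size, the range of the cumulative deviations is divided by the block standard deviation, and H is the slope of log(R/S) against log(block size). This is a textbook implementation, not the authors' code:

```python
import numpy as np

# Classical rescaled-range (R/S) estimator of the Hurst coefficient H.
def hurst_rs(x, min_block=8):
    x = np.asarray(x, dtype=float)
    n = x.size
    sizes, rs = [], []
    size = min_block
    while size <= n // 2:
        vals = []
        for i in range(n // size):
            seg = x[i * size:(i + 1) * size]
            dev = np.cumsum(seg - seg.mean())   # cumulative deviations
            r = dev.max() - dev.min()           # range
            s = seg.std(ddof=1)                 # block standard deviation
            if s > 0:
                vals.append(r / s)
        sizes.append(size)
        rs.append(np.mean(vals))
        size *= 2
    # Slope of log(R/S) versus log(block size) estimates H.
    return np.polyfit(np.log(sizes), np.log(rs), 1)[0]

rng = np.random.default_rng(1)
print(f"H for white noise ~ {hurst_rs(rng.normal(size=4096)):.2f}")  # ~0.5
```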

  12. Receiver function estimated by maximum entropy deconvolution

    Institute of Scientific and Technical Information of China (English)

    吴庆举; 田小波; 张乃铃; 李卫平; 曾融生

    2003-01-01

    Maximum entropy deconvolution is presented to estimate the receiver function, with maximum entropy as the rule to determine the auto-correlation and cross-correlation functions. The Toeplitz equation and the Levinson algorithm are used to calculate the iterative formula of the error-predicting filter, and the receiver function is then estimated. During extrapolation, the reflection coefficient is always less than 1, which keeps maximum entropy deconvolution stable. The maximum entropy of the data outside the window increases the resolution of the receiver function. Both synthetic and real seismograms show that maximum entropy deconvolution is an effective method to measure receiver functions in the time domain.
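
    The core numerical step named in the abstract, solving a Toeplitz system for the prediction-error filter, maps directly onto a standard library call; SciPy's solve_toeplitz uses a Levinson-type recursion internally. The synthetic signal and filter order below are illustrative assumptions:

```python
import numpy as np
from scipy.linalg import solve_toeplitz

# Synthetic stand-in signal: white noise coloured by a short FIR kernel.
rng = np.random.default_rng(2)
x = np.convolve(rng.normal(size=2048), [1.0, 0.6, 0.3], mode="full")[:2048]

p = 8                                        # assumed filter order
# Biased autocorrelation estimate at lags 0..n-1.
r = np.correlate(x, x, mode="full")[x.size - 1:] / x.size
# Yule-Walker (Toeplitz) system R a = r solved by Levinson-type recursion.
a = solve_toeplitz(r[:p], r[1:p + 1])        # forward-prediction coefficients
print("prediction coefficients:", np.round(a, 3))
```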

  13. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  14. Maximum entropy deconvolution of low count nuclear medicine images

    International Nuclear Information System (INIS)

    McGrath, D.M.

    1998-12-01

    Maximum entropy is applied to the problem of deconvolving nuclear medicine images, with special consideration for very low count data. The physics of the formation of scintigraphic images is described, illustrating the phenomena which degrade planar estimates of the tracer distribution. Various techniques which are used to restore these images are reviewed, outlining the relative merits of each. The development and theoretical justification of maximum entropy as an image processing technique is discussed. Maximum entropy is then applied to the problem of planar deconvolution, highlighting the question of the choice of error parameters for low count data. A novel iterative version of the algorithm is suggested which allows the errors to be estimated from the predicted Poisson mean values. This method is shown to produce the exact results predicted by combining Poisson statistics and a Bayesian interpretation of the maximum entropy approach. A facility for total count preservation has also been incorporated, leading to improved quantification. In order to evaluate this iterative maximum entropy technique, two comparable methods, Wiener filtering and a novel Bayesian maximum likelihood expectation maximisation technique, were implemented. The comparison of results obtained indicated that this maximum entropy approach may produce equivalent or better measures of image quality than the compared methods, depending upon the accuracy of the system model used. The novel Bayesian maximum likelihood expectation maximisation technique was shown to be preferable over many existing maximum a posteriori methods due to its simplicity of implementation. A single parameter is required to define the Bayesian prior, which suppresses noise in the solution and may reduce the processing time substantially. Finally, maximum entropy deconvolution was applied as a pre-processing step in single photon emission computed tomography reconstruction of low count data. Higher contrast results were

  15. The role of hand of error and stimulus orientation in the relationship between worry and error-related brain activity: Implications for theory and practice.

    Science.gov (United States)

    Lin, Yanli; Moran, Tim P; Schroder, Hans S; Moser, Jason S

    2015-10-01

    Anxious apprehension/worry is associated with exaggerated error monitoring; however, the precise mechanisms underlying this relationship remain unclear. The current study tested the hypothesis that the worry-error monitoring relationship involves left-lateralized linguistic brain activity by examining the relationship between worry and error monitoring, indexed by the error-related negativity (ERN), as a function of hand of error (Experiment 1) and stimulus orientation (Experiment 2). Results revealed that worry was exclusively related to the ERN on right-handed errors committed by the linguistically dominant left hemisphere. Moreover, the right-hand ERN-worry relationship emerged only when stimuli were presented horizontally (known to activate verbal processes) but not vertically. Together, these findings suggest that the worry-ERN relationship involves left hemisphere verbal processing, elucidating a potential mechanism to explain error monitoring abnormalities in anxiety. Implications for theory and practice are discussed. © 2015 Society for Psychophysiological Research.

  16. Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China

    Science.gov (United States)

    Zhao, S.; Zhang, S.; Cheng, W.

    2018-04-01

    Produced from radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types of DEM data. Relative error relates to the surface quality created by DEM data, so it matters for geomorphology and hydrologic applications using DEM data. Taking the Shanxi Plateau of China as the study area, this research evaluated the relative error of typical open global DEM datasets including Shuttle Radar Topography Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in the second version (GDEM-v2) and ALOS World 3D-30m (AW3D) data. Through processing and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among the four typical global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using a distance threshold between 100 m and 500 m. Meanwhile, the horizontal distance between every point pair was computed, so the relative error was achieved using slope values based on the vertical error difference and the horizontal distance of the point pairs. Finally, a false slope ratio (FSR) index was computed by analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were categorically compared for the four DEM datasets under different slope classes. Research results show: overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation error; SRTM1 follows, with values slightly higher than AW3D; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for the two datasets are similar. Considering different slope conditions, all four DEM datasets perform better in flat areas and worse in sloping regions; AW3D has the best performance in all the slope classes, a little better than SRTM1; with slope increasing
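
    The point-pair relative error described above reduces to a simple slope computation: the difference between the two vertical errors of a pair divided by the pair's horizontal distance. A minimal sketch with placeholder values (the 100-500 m pairing threshold follows the abstract):

```python
import numpy as np

# Placeholder (DEM - ICESat/GLA14) vertical errors at the two points of each
# pair, and the pair's horizontal distance; values are illustrative only.
dz1 = np.array([1.2, -0.5, 2.0])     # vertical error at first point (m)
dz2 = np.array([0.4,  0.3, -1.1])    # vertical error at second point (m)
dist = np.array([150., 320., 480.])  # horizontal distance of the pair (m)

# Relative error expressed as a slope-like (dimensionless) quantity.
relative_error = (dz1 - dz2) / dist
print(np.round(relative_error, 5))
```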

  17. Running Records and First Grade English Learners: An Analysis of Language Related Errors

    Science.gov (United States)

    Briceño, Allison; Klein, Adria F.

    2018-01-01

    The purpose of this study was to determine if first-grade English Learners made patterns of language related errors when reading, and if so, to identify those patterns and how teachers coded language related errors when analyzing English Learners' running records. Using research from the fields of both literacy and Second Language Acquisition, we…

  18. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
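
    Following the paper's definitions, each error is bounded by the benchmark's error via BRAE_t = |e_t| / (|e_t| + |e*_t|), the mean gives MBRAE, and unscaling yields UMBRAE = MBRAE / (1 - MBRAE). A minimal sketch; edge-case handling (e.g., both errors zero at the same step) may differ from the published measure:

```python
import numpy as np

def umbrae(y, f, f_bench):
    """Unscaled Mean Bounded Relative Absolute Error (minimal sketch)."""
    e = np.abs(np.asarray(y, float) - np.asarray(f, float))
    e_star = np.abs(np.asarray(y, float) - np.asarray(f_bench, float))
    brae = e / (e + e_star)          # bounded in [0, 1]; 0/0 not handled here
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)

y = [10, 12, 11, 13]
f = [9, 12.5, 11.2, 12.4]            # forecast under evaluation
f_naive = [10, 10, 12, 11]           # benchmark (e.g., naive) forecast
print(f"UMBRAE = {umbrae(y, f, f_naive):.3f}")  # < 1: better than benchmark
```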

  19. Analysis of error functions in speckle shearing interferometry

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2001-01-01

    Electronic Speckle Pattern Shearing Interferometry (ESPSI) or shearography has successfully been used in NDT for slope (∂w/∂x and/or ∂w/∂y) measurement, while strain measurement (∂u/∂x, ∂v/∂y, ∂u/∂y and ∂v/∂x) is still under investigation. This method is well accepted in industrial applications, especially in the aerospace industry. Demand for this method is increasing due to the complexity of the test materials and objects. ESPSI has been applied successfully in NDT only for qualitative measurement, whilst quantitative measurement is the current aim of many manufacturers. Industrial use of such equipment proceeds without considering the errors arising from numerous sources, including wavefront divergence. The majority of commercial systems are operated with diverging object illumination wavefronts without considering the curvature of the object illumination wavefront or the object geometry when calculating the interferometer fringe function and quantifying data. This thesis reports a novel approach to quantified maximum phase change difference analysis for derivative out-of-plane (OOP) and in-plane (IP) cases that propagate from a divergent illumination wavefront compared to collimated illumination. The theoretical maximum phase difference is formulated by means of the dependent variables object distance, illuminated diameter, centre of the illuminated area, camera distance and illumination angle. The relative maximum phase change difference that may contribute to the error in the measurement analysis in this scope of research is defined as the difference between the maximum phase difference value measured with a divergent illumination wavefront and the maximum phase difference value for a collimated illumination wavefront, taken at the edge of the illuminated area. Experimental validation using test objects for derivative out-of-plane and derivative in-plane deformation, using a single illumination wavefront

  20. An investigation of Saudi Arabian MR radiographers' knowledge and confidence in relation to MR image-quality-related errors

    International Nuclear Information System (INIS)

    Alsharif, W.; Davis, M.; McGee, A.; Rainford, L.

    2017-01-01

    Objective: To investigate MR radiographers' current knowledge base and confidence level in relation to quality-related errors within MR images. Method: Thirty-five MR radiographers within 16 MRI departments in the Kingdom of Saudi Arabia (KSA) independently reviewed a prepared set of 25 MR images, naming the error, specifying the error-correction strategy, and scoring how confident they were in recognising the error and suggesting a correction strategy, using a scale of 1–100. The datasets were obtained from MRI departments in the KSA to represent a range of images depicting excellent, acceptable and poor image quality. Results: The findings demonstrated a low level of radiographer knowledge in identifying the types of quality errors and in suggesting appropriate strategies to rectify those errors. Only 20% (n = 7) of the radiographers could correctly name the quality errors in 70% of the dataset, and none of the radiographers correctly specified the error-correction strategy in more than 68% of the MR datasets. The confidence level of the participants in their ability to state the type of image quality error was significantly different (p < 0.001) for those working in different hospital types. Conclusion: The findings of this study suggest a need to establish a national association for MR radiographers to monitor training and the development of postgraduate MRI education in Saudi Arabia, to improve the current status of MR radiographers' knowledge and direct high-quality service delivery. - Highlights: • MR radiographers recognised the existence of image-quality-related errors. • Few MR radiographers were able to correctly identify which image quality errors were shown. • None of the MR radiographers were able to correctly specify the error-correction strategies for the image quality errors. • A low level of knowledge was demonstrated in identifying and rectifying image quality errors.

  1. Error-related negativity and tic history in pediatric obsessive-compulsive disorder.

    Science.gov (United States)

    Hanna, Gregory L; Carrasco, Melisa; Harbin, Shannon M; Nienhuis, Jenna K; LaRosa, Christina E; Chen, Poyu; Fitzgerald, Kate D; Gehring, William J

    2012-09-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes in patients with tic-related OCD, patients with non-tic-related OCD, and healthy controls. The ERN, correct response negativity, and error number were measured during an Eriksen flanker task to assess performance monitoring in 44 youth with a lifetime diagnosis of OCD and 44 matched healthy controls ranging in age from 10 to 19 years. Nine youth with OCD had a lifetime history of tics. ERN amplitude was significantly increased in patients with OCD compared with healthy controls. ERN amplitude was significantly larger in patients with non-tic-related OCD than in patients with tic-related OCD or controls. ERN amplitude had a significant negative correlation with age in healthy controls but not in patients with OCD. Instead, in patients with non-tic-related OCD, ERN amplitude had a significant positive correlation with age at onset of OCD symptoms. ERN amplitude in patients was unrelated to OCD symptom severity, current diagnostic status, or treatment effects. The results provide further evidence of increased error-related brain activity in pediatric OCD. The difference in the ERN between patients with tic-related and those with non-tic-related OCD provides preliminary evidence of a neurobiological difference between these two OCD subtypes. The results indicate the ERN is a trait-like measurement that may serve as a biomarker for non-tic-related OCD. Copyright © 2012 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.

  2. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  3. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder

    Science.gov (United States)

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective: The error-related negativity (ERN) is a negative deflection in the event-related potential after an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relation of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes…

  4. Age-related changes in error processing in young children: A school-based investigation

    Directory of Open Access Journals (Sweden)

    Jennie K. Grammer

    2014-07-01

    Full Text Available Growth in executive functioning (EF) skills plays a role in children's academic success, and the transition to elementary school is an important time for the development of these abilities. Despite this, evidence concerning the development of the ERP components linked to EF, including the error-related negativity (ERN) and the error positivity (Pe), over this period is inconclusive. Data were recorded in a school setting from 3- to 7-year-old children (N = 96, mean age = 5 years 11 months) as they performed a Go/No-Go task. Results revealed the presence of the ERN and Pe on error relative to correct trials at all age levels. Older children showed increased response inhibition, as evidenced by faster, more accurate responses. Although developmental changes in the ERN were not identified, the Pe increased with age. In addition, girls made fewer mistakes and showed elevated Pe amplitudes relative to boys. Based on a representative school-based sample, the findings indicate that the ERN is present in children as young as 3, and that development can be seen in the Pe between ages 3 and 7. Results varied as a function of gender, providing insight into the range of factors associated with developmental changes in the complex relations between behavioral and electrophysiological measures of error processing.

  5. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

    Full Text Available In consideration of the fact that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the Human Factors Analysis and Classification System (HFACS) model. With data from accidents that happened worldwide between 2008 and 2011, the correlation between human-error factors is analyzed quantitatively using grey relational analysis. Research results show that the order of the main factors affecting pilot human error is: preconditions for unsafe acts, unsafe supervision, organizational influences and unsafe acts. The factor related most closely with the second-level indexes and pilot human-error factors is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and the relevancy between second-level indexes, can also be analyzed quantitatively.
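
    The grey relational coefficient underlying this kind of analysis has a standard form, ξ_i(k) = (Δmin + ρΔmax) / (Δ_i(k) + ρΔmax), averaged into a grade per factor. The sketch below is a generic GRA computation with an illustrative data matrix and the conventional ρ = 0.5, not the paper's dataset:

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Generic grey relational analysis: grade of each factor sequence
    against the reference sequence."""
    ref = reference / reference.max()                 # normalise reference
    fac = factors / factors.max(axis=1, keepdims=True)  # normalise factors
    d = np.abs(fac - ref)                             # difference sequences
    d_min, d_max = d.min(), d.max()
    xi = (d_min + rho * d_max) / (d + rho * d_max)    # relational coefficients
    return xi.mean(axis=1)                            # grade per factor

ref = np.array([8., 9., 7., 10.])        # e.g., yearly human-error counts
fac = np.array([[6., 8., 5., 9.],        # candidate factor sequences
                [3., 4., 6., 5.]])
print(np.round(grey_relational_grades(ref, fac), 3))
```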

  6. Working memory capacity and task goals modulate error-related ERPs.

    Science.gov (United States)

    Coleman, James R; Watson, Jason M; Strayer, David L

    2018-03-01

    The present study investigated individual differences in information processing following errant behavior. Participants were initially classified as high or as low working memory capacity using the Operation Span Task. In a subsequent session, they then performed a high congruency version of the flanker task under both speed and accuracy stress. We recorded ERPs and behavioral measures of accuracy and response time in the flanker task with a primary focus on processing following an error. The error-related negativity was larger for the high working memory capacity group than for the low working memory capacity group. The positivity following an error (Pe) was modulated to a greater extent by speed-accuracy instruction for the high working memory capacity group than for the low working memory capacity group. These data help to explicate the neural bases of individual differences in working memory capacity and cognitive control. © 2017 Society for Psychophysiological Research.

  7. Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty

    Science.gov (United States)

    2016-03-01

    Master's thesis by Nicholas M. Chisler, March 2016: "Relating Tropical Cyclone Track Forecast Error Distributions with Measurements of Forecast Uncertainty". Only report-documentation-page fields (title, author, report type and dates) are preserved in this record; no abstract text is available.

  8. Human errors evaluation for muster in emergency situations applying human error probability index (HEPI, in the oil company warehouse in Hamadan City

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Full Text Available Introduction: An emergency situation is one of the factors influencing human error. The aim of this research was to evaluate human error in the emergency situation of fire and explosion at the oil company warehouse in Hamadan city by applying the human error probability index (HEPI). Material and Method: First, the scenario of an emergency situation of fire and explosion at the oil company warehouse was designed, and a maneuver against it was performed. The scaled muster questionnaire for the maneuver was completed in the next stage. Collected data were analyzed to calculate the probability of success for the 18 actions required in an emergency situation, from the starting point of the muster until the last action of reaching the temporary safe shelter. Result: The results showed that the highest probability of error occurrence was related to making the workplace safe (evaluation phase), with 32.4% probability, and the lowest probability of error occurrence was in detecting the alarm (awareness phase), with 1.8% probability. The highest severity of error was in the evaluation phase and the lowest severity of error was in the awareness and recovery phases. The maximum risk level was related to evaluating the exit routes, selecting one route and choosing another exit route, and the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of reaction in the exit phases of an emergency situation, the following actions are recommended, based on the findings of this study: periodic evaluation of the exit phase and modification if necessary, and conducting more maneuvers and analyzing their results along with sufficient feedback to the employees.

  9. Error-Related Negativity and Tic History in Pediatric Obsessive-Compulsive Disorder (OCD)

    Science.gov (United States)

    Hanna, Gregory L.; Carrasco, Melisa; Harbin, Shannon M.; Nienhuis, Jenna K.; LaRosa, Christina E.; Chen, Poyu; Fitzgerald, Kate D.; Gehring, William J.

    2012-01-01

    Objective The error-related negativity (ERN) is a negative deflection in the event-related potential following an incorrect response, which is often increased in patients with obsessive-compulsive disorder (OCD). However, the relationship of the ERN to comorbid tic disorders has not been examined in patients with OCD. This study compared ERN amplitudes in patients with tic-related OCD, patients with non-tic-related OCD, and healthy controls. Method The ERN, correct response negativity, and error number were measured during an Eriksen flanker task to assess performance monitoring in 44 youth with a lifetime diagnosis of OCD and 44 matched healthy controls ranging in age from 10 to 19 years. Nine youth with OCD had a lifetime history of tics. Results ERN amplitude was significantly increased in OCD patients compared to healthy controls. ERN amplitude was significantly larger in patients with non-tic-related OCD than in either patients with tic-related OCD or controls. ERN amplitude had a significant negative correlation with age in healthy controls but not in patients with OCD. Instead, in patients with non-tic-related OCD, ERN amplitude had a significant positive correlation with age at onset of OCD symptoms. ERN amplitude in patients was unrelated to OCD symptom severity, current diagnostic status, or treatment effects. Conclusions The results provide further evidence of increased error-related brain activity in pediatric OCD. The difference in the ERN between patients with tic-related and non-tic-related OCD provides preliminary evidence of a neurobiological difference between these two OCD subtypes. The results indicate the ERN is a trait-like measure that may serve as a biomarker for non-tic-related OCD. PMID:22917203

  10. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which in turn leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging of moving MLC apertures with a digital imager or by analysis of a MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx₀ (Δx₀ ∼ TOL). We quantify fluence errors for two cases: (i) Δx₀ ≫ σ (unrestricted normal distribution) and (ii) Δx₀ ≪ σ (Δx₀-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx₀/ALPO, respectively, where
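
    The RLP error model described above is easy to sample: a zero-mean normal with standard deviation σ, truncated at ±Δx₀. A short sketch with assumed values for σ and Δx₀ (scipy.stats.truncnorm takes its bounds in units of σ):

```python
import numpy as np
from scipy.stats import truncnorm

# Assumed illustrative values, not the paper's: sigma of the underlying
# normal and the cut-off dx0 (dx0 ~ TOL), both in mm.
sigma, dx0 = 0.8, 1.0
a, b = -dx0 / sigma, dx0 / sigma          # truncation bounds in sigma units
errors = truncnorm.rvs(a, b, loc=0.0, scale=sigma, size=100_000, random_state=0)

print(f"max |error| = {np.abs(errors).max():.3f} mm (bounded by dx0 = {dx0} mm)")
print(f"empirical SD = {errors.std():.3f} mm (< sigma due to truncation)")
```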

  11. A Model of Self-Monitoring Blood Glucose Measurement Error.

    Science.gov (United States)

    Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio

    2017-07-01

    A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, like testing in silico insulin therapies. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of SMBG error PDF. The blood glucose range is divided into zones where error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum-likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant SD absolute error; zone 2 with constant SD relative error. Goodness-of-fit tests confirmed that identified PDF models are valid and superior to Gaussian models used so far in the literature. The proposed methodology allows to derive realistic models of SMBG error PDF. These models can be used in several investigations of present interest in the scientific community, for example, to perform in silico clinical trials to compare SMBG-based with nonadjunctive CGM-based insulin treatments.
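
    The zone-wise fit described above amounts to a maximum-likelihood skew-normal fit followed by a goodness-of-fit check. A minimal sketch with simulated placeholder errors rather than One Touch Ultra 2 or Contour Next data:

```python
import numpy as np
from scipy.stats import skewnorm, kstest

# Simulated stand-in for the SMBG errors within one glucose zone; the true
# shape, location and scale parameters below are illustrative assumptions.
rng = np.random.default_rng(3)
zone_errors = skewnorm.rvs(a=3.0, loc=-2.0, scale=6.0, size=500,
                           random_state=rng)

# Maximum-likelihood fit of a skew-normal PDF to the zone's errors.
a_hat, loc_hat, scale_hat = skewnorm.fit(zone_errors)
# Simple goodness-of-fit check (Kolmogorov-Smirnov).
stat, pval = kstest(zone_errors, "skewnorm", args=(a_hat, loc_hat, scale_hat))
print(f"fit: a={a_hat:.2f}, loc={loc_hat:.2f}, scale={scale_hat:.2f}; "
      f"KS p = {pval:.2f}")
```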

  12. Constrained Maximum Likelihood Estimation of Relative Abundances of Protein Conformation in a Heterogeneous Mixture from Small Angle X-Ray Scattering Intensity Measurements

    Science.gov (United States)

    Onuk, A. Emre; Akcakaya, Murat; Bardhan, Jaydeep P.; Erdogmus, Deniz; Brooks, Dana H.; Makowski, Lee

    2015-01-01

    In this paper, we describe a model for maximum likelihood estimation (MLE) of the relative abundances of different conformations of a protein in a heterogeneous mixture from small angle X-ray scattering (SAXS) intensities. To consider cases where the solution includes intermediate or unknown conformations, we develop a subset selection method based on k-means clustering and the Cramér-Rao bound on the mixture coefficient estimation error to find a sparse basis set that represents the space spanned by the measured SAXS intensities of the known conformations of a protein. Then, using the selected basis set and the assumptions on the model for the intensity measurements, we show that the MLE model can be expressed as a constrained convex optimization problem. Employing the adenylate kinase (ADK) protein and its known conformations as an example, and using Monte Carlo simulations, we demonstrate the performance of the proposed estimation scheme. Here, although we use 45 crystallographically determined experimental structures and we could generate many more using, for instance, molecular dynamics calculations, the clustering technique indicates that the data cannot support the determination of relative abundances for more than 5 conformations. The estimation of this maximum number of conformations is intrinsic to the methodology we have used here. PMID:26924916
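
    The constrained estimation step can be sketched as a small optimization: given a basis matrix of SAXS intensity profiles and a measured profile, find nonnegative mixture weights summing to one. A least-squares objective stands in for the paper's full likelihood model here; dimensions and data are simulated placeholders:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
q = 60                                    # number of scattering angles (assumed)
B = rng.random((q, 5))                    # basis profiles, one per conformation
w_true = np.array([0.5, 0.3, 0.2, 0.0, 0.0])
y = B @ w_true + 0.01 * rng.normal(size=q)  # simulated measured intensity

# Least-squares surrogate objective under simplex constraints on the weights.
obj = lambda w: np.sum((B @ w - y) ** 2)
res = minimize(obj, x0=np.full(5, 0.2),
               bounds=[(0.0, 1.0)] * 5,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],
               method="SLSQP")
print("estimated abundances:", np.round(res.x, 3))
```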

  13. Senior High School Students' Errors on the Use of Relative Words

    Science.gov (United States)

    Bao, Xiaoli

    2015-01-01

    Relative clause is one of the most important language points in College English Examination. Teachers have been attaching great importance to the teaching of relative clause, but the outcomes are not satisfactory. Based on Error Analysis theory, this article aims to explore the reasons why senior high school students find it difficult to choose…

  14. Model parameter-related optimal perturbations and their contributions to El Niño prediction errors

    Science.gov (United States)

    Tao, Ling-Jiang; Gao, Chuan; Zhang, Rong-Hua

    2018-04-01

    Errors in initial conditions and model parameters (MPs) are the main sources that limit the accuracy of ENSO predictions. In addition to exploring the initial error-induced prediction errors, model errors are equally important in determining prediction performance. In this paper, the MP-related optimal errors that can cause prominent error growth in ENSO predictions are investigated using an intermediate coupled model (ICM) and a conditional nonlinear optimal perturbation (CNOP) approach. Two MPs related to the Bjerknes feedback are considered in the CNOP analysis: one involves the SST-surface wind coupling (α_τ), and the other involves the thermocline effect on the SST (α_Te). The MP-related optimal perturbations (denoted as CNOP-P) are found uniformly positive and restrained in a small region: the α_τ component is mainly concentrated in the central equatorial Pacific, and the α_Te component is mainly located in the eastern cold tongue region. This kind of CNOP-P enhances the strength of the Bjerknes feedback and induces an El Niño- or La Niña-like error evolution, resulting in an El Niño-like systematic bias in this model. The CNOP-P is also found to play a role in the spring predictability barrier (SPB) for ENSO predictions. Evidently, such error growth is primarily attributed to MP errors in small areas based on the localized distribution of CNOP-P. Further sensitivity experiments firmly indicate that ENSO simulations are sensitive to the representation of SST-surface wind coupling in the central Pacific and to the thermocline effect in the eastern Pacific in the ICM. These results provide guidance and theoretical support for the future improvement in numerical models to reduce the systematic bias and SPB phenomenon in ENSO predictions.

  15. 47 CFR 1.1167 - Error claims related to regulatory fees.

    Science.gov (United States)

    2010-10-01

    Challenges to determinations of an insufficient regulatory fee payment or delinquent fees should be made in writing. A challenge to a determination that a party is delinquent in paying a standard regulatory fee...

  16. Practical, Reliable Error Bars in Quantum Tomography

    OpenAIRE

    Faist, Philippe; Renner, Renato

    2015-01-01

    Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...

  17. Comments on a derivation and application of the 'maximum entropy production' principle

    International Nuclear Information System (INIS)

    Grinstein, G; Linsker, R

    2007-01-01

    We show that (1) an error invalidates the derivation (Dewar 2005 J. Phys. A: Math. Gen. 38 L371) of the maximum entropy production (MaxEP) principle for systems far from equilibrium, for which the constitutive relations are nonlinear; and (2) the claim (Dewar 2003 J. Phys. A: Math. Gen. 36 631) that the phenomenon of 'self-organized criticality' is a consequence of MaxEP for slowly driven systems is unjustified. (comment)

  18. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  19. Statistical evaluation of design-error related accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1980-01-01

    In a recently published paper (Campbell and Ott, 1979), a general methodology was proposed for the statistical evaluation of design-error related accidents. The evaluation aims at an estimate of the combined residual frequency of yet unknown types of accidents lurking in a certain technological system. Here, the original methodology is extended so as to apply to a variety of systems that evolve during the development of large-scale technologies. A special categorization of incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of nuclear power reactor technology, considering serious accidents whose accident progression involves a particular design inadequacy

  20. Maximum Power Point Tracking of Photovoltaic System for Traffic Light Application

    Directory of Open Access Journals (Sweden)

    Riza Muhida

    2013-07-01

    Full Text Available A photovoltaic traffic light system is a significant application of renewable energy. Local authorities develop such systems as an alternative effort to reduce the fees paid to power suppliers whose power comes from conventional energy sources. Since photovoltaic (PV) modules still have relatively low conversion efficiency, a maximum power point tracking (MPPT) control method is applied to the traffic light system. MPPT is intended to capture the maximum power during the daytime in order to charge the battery at the maximum rate, so that the stored power can be used at night or on cloudy days. The MPPT is in practice a DC-DC converter that steps the voltage up or down to reach the maximum power point using Pulse Width Modulation (PWM) control. From experiment, the operating voltage using MPPT was 16.454 V, an error of 2.6% compared with the PV module's maximum power point voltage of 16.9 V. Based on this result, the MPPT control successfully delivers power from the PV module to the battery at close to the maximum rate.
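
    The abstract does not name the specific tracking algorithm, so the sketch below uses perturb-and-observe, the most common MPPT scheme: nudge the converter's PWM duty cycle and keep moving in whichever direction increases the measured PV power. The power model read_pv_power is a hypothetical stand-in for a real current/voltage sensor read.

        import random

        def read_pv_power(duty):
            # Hypothetical stand-in for a sensor read: a unimodal toy P-V
            # curve peaking near duty = 0.55, plus a little measurement noise.
            return max(0.0, 100.0 * (1.0 - (duty - 0.55) ** 2 / 0.09)) + random.gauss(0.0, 0.1)

        def perturb_and_observe(duty=0.5, step=0.01, iterations=200):
            # Nudge the PWM duty cycle; reverse direction whenever the
            # measured power drops, so the loop hovers around the MPP.
            power = read_pv_power(duty)
            direction = 1
            for _ in range(iterations):
                duty = min(max(duty + direction * step, 0.0), 1.0)
                new_power = read_pv_power(duty)
                if new_power < power:
                    direction = -direction
                power = new_power
            return duty

        print(f"duty cycle at the MPP ~ {perturb_and_observe():.3f}")

    In a real charger the loop would run on the converter's control timer, and the step size would trade tracking speed against steady-state ripple around the maximum power point.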

  1. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    Directory of Open Access Journals (Sweden)

    Xi Liu

    2016-09-01

    Full Text Available A new algorithm called the maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool to solve the non-linear state estimation problem. However, the UKF usually performs well only under Gaussian noise. Its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
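
    The core of the MCC reweighting can be shown in a few lines. The sketch below is only the kernel-weight step, not a full MCUKF: a Gaussian kernel of bandwidth sigma maps each measurement residual to a weight in (0, 1], so heavy-tailed outliers are effectively down-weighted in the regression that follows. The bandwidth and residual values are arbitrary choices for illustration.

        import numpy as np

        def correntropy_weights(residuals, sigma=2.0):
            # Gaussian kernel of the MCC cost: residuals far beyond sigma
            # get exponentially small weight, taming impulsive outliers.
            return np.exp(-residuals ** 2 / (2.0 * sigma ** 2))

        resid = np.array([0.3, -0.5, 8.0])   # third residual is impulsive
        print(correntropy_weights(resid))    # -> outlier weight is ~0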

  2. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

    Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes unavoidable state change. Heisenberg discussed a thought experiment of the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation. Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the measurement process.

  3. Task types and error types involved in the human-related unplanned reactor trip events

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2008-01-01

    In this paper, the contributions of the task types and error types involved in the human-related unplanned reactor trip events that have occurred between 1986 and 2006 in Korean nuclear power plants are analysed in order to establish a strategy for reducing the human-related unplanned reactor trips. Classification systems for the task types, error modes, and cognitive functions are developed or adopted from the currently available taxonomies, and the relevant information is extracted from the event reports or judged on the basis of an event description. According to the analyses from this study, the contributions of the task types are as follows: corrective maintenance (25.7%), planned maintenance (22.8%), planned operation (19.8%), periodic preventive maintenance (14.9%), response to a transient (9.9%), and design/manufacturing/installation (6.9%). According to the analysis of the error modes, error modes such as control failure (22.2%), wrong object (18.5%), omission (14.8%), wrong action (11.1%), and inadequate (8.3%) take up about 75% of the total unplanned trip events. The analysis of the cognitive functions involved in the events indicated that the planning function had the highest contribution (46.7%) to the human actions leading to unplanned reactor trips. This analysis concludes that in order to significantly reduce human-induced or human-related unplanned reactor trips, an aid system (in support of maintenance personnel) for evaluating possible (negative) impacts of planned actions or erroneous actions, as well as an appropriate human error prediction technique, should be developed

  4. Task types and error types involved in the human-related unplanned reactor trip events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2008-12-15

    In this paper, the contributions of the task types and error types involved in the human-related unplanned reactor trip events that have occurred between 1986 and 2006 in Korean nuclear power plants are analysed in order to establish a strategy for reducing the human-related unplanned reactor trips. Classification systems for the task types, error modes, and cognitive functions are developed or adopted from the currently available taxonomies, and the relevant information is extracted from the event reports or judged on the basis of an event description. According to the analyses from this study, the contributions of the task types are as follows: corrective maintenance (25.7%), planned maintenance (22.8%), planned operation (19.8%), periodic preventive maintenance (14.9%), response to a transient (9.9%), and design/manufacturing/installation (6.9%). According to the analysis of the error modes, error modes such as control failure (22.2%), wrong object (18.5%), omission (14.8%), wrong action (11.1%), and inadequate (8.3%) take up about 75% of the total unplanned trip events. The analysis of the cognitive functions involved in the events indicated that the planning function had the highest contribution (46.7%) to the human actions leading to unplanned reactor trips. This analysis concludes that in order to significantly reduce human-induced or human-related unplanned reactor trips, an aid system (in support of maintenance personnel) for evaluating possible (negative) impacts of planned actions or erroneous actions, as well as an appropriate human error prediction technique, should be developed.

  5. Maximum entropy method approach to the θ term

    International Nuclear Information System (INIS)

    Imachi, Masahiro; Shinno, Yasuhiko; Yoneyama, Hiroshi

    2004-01-01

    In Monte Carlo simulations of lattice field theory with a θ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution P(Q). This procedure, however, causes a flattening phenomenon of the free energy f(θ), which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of P(Q), which serves as a good example to test whether the MEM can be applied effectively to the θ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother f(θ) than that of the Fourier transform. Among the various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error. (author)

  6. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  7. Reduction of weighing errors caused by tritium decay heating

    International Nuclear Information System (INIS)

    Shaw, J.F.

    1978-01-01

    The deuterium-tritium source gas mixture for laser targets is formulated by weight. Experiments show that the maximum weighing error caused by tritium decay heating is 0.2% for a 104-cm³ mix vessel. Air cooling the vessel reduces the weighing error by 90%

  8. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.

  9. Error-related negativities during spelling judgments expose orthographic knowledge.

    Science.gov (United States)

    Harris, Lindsay N; Perfetti, Charles A; Rickles, Benjamin

    2014-02-01

    In two experiments, we demonstrate that error-related negativities (ERNs) recorded during spelling decisions can expose individual differences in lexical knowledge. The first experiment found that the ERN was elicited during spelling decisions and that its magnitude was correlated with independent measures of subjects' spelling knowledge. In the second experiment, we manipulated the phonology of misspelled stimuli and observed that ERN magnitudes were larger when misspelled words altered the phonology of their correctly spelled counterparts than when they preserved it. Thus, when an error is made in a decision about spelling, the brain processes indexed by the ERN reflect both phonological and orthographic input to the decision process. In both experiments, ERN effect sizes were correlated with assessments of lexical knowledge and reading, including offline spelling ability and spelling-mediated vocabulary knowledge. These results affirm the interdependent nature of orthographic, semantic, and phonological knowledge components while showing that spelling knowledge uniquely influences the ERN during spelling decisions. Finally, the study demonstrates the value of ERNs in exposing individual differences in lexical knowledge. Copyright © 2013 Elsevier Ltd. All rights reserved.

  10. Adaptive Unscented Kalman Filter using Maximum Likelihood Estimation

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Poulsen, Niels Kjølstad; Madsen, Henrik

    2017-01-01

    The purpose of this study is to develop an adaptive unscented Kalman filter (UKF) by tuning the measurement noise covariance. We use the maximum likelihood estimation (MLE) and the covariance matching (CM) method to estimate the noise covariance. The multi-step prediction errors generated...

  11. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory, it is claimed that the theory violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error, a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  12. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    Science.gov (United States)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast the fluctuation values, and thus gives forecast results involving both aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM (1, 1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors obtained in step 2 to obtain the second-order estimation of the relative errors; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
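
    A minimal sketch of steps 1 and 2 (the GM(1,1) trend part only; the Markov correction of the relative errors is omitted), assuming the standard GM(1,1) formulation: fit the whitening equation on the accumulated series by least squares, then difference the fitted accumulation to recover level forecasts. The input series is hypothetical.

        import numpy as np

        def gm11_forecast(x0, steps=1):
            # Fit dx1/dt + a*x1 = b on the accumulated series x1 = cumsum(x0)
            # by least squares, then difference the fitted accumulation.
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)
            z1 = 0.5 * (x1[1:] + x1[:-1])              # background values
            B = np.column_stack([-z1, np.ones_like(z1)])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.concatenate([[x0[0]], np.diff(x1_hat)])

        levels = [5.2, 5.4, 5.1, 5.6, 5.8]             # hypothetical maxima (m)
        print(gm11_forecast(levels, steps=2))          # fitted values + 2 forecasts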

  13. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    Science.gov (United States)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS) assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but is also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
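
    A hedged illustration of the weighting issue described above, using the standard "effective variance" workaround rather than the paper's exact scheme: when both variables are noisy, the weight of each point combines the dependent-variable variance with the model slope times the independent-variable variance, so the fit and the weights must be iterated. All numbers here are synthetic.

        import numpy as np

        def effective_variance_fit(x, y, sx, sy, iters=5):
            # Quadratic fit with noise in both variables: weight each point
            # by 1 / (sy^2 + (df/dx)^2 * sx^2) and iterate, since the slope
            # df/dx itself depends on the current fit.
            c = np.polyfit(x, y, 2)                 # initial ordinary fit
            for _ in range(iters):
                slope = np.polyval(np.polyder(c), x)
                w = 1.0 / (sy ** 2 + (slope * sx) ** 2)
                V = np.vander(x, 3)                 # columns: x^2, x, 1
                c = np.linalg.solve(V.T @ (w[:, None] * V), V.T @ (w * y))
            return c

        rng = np.random.default_rng(4)
        xt = np.linspace(0.0, 10.0, 40)
        x = xt + rng.normal(0.0, 0.1, 40)           # noisy independent variable
        y = 0.05 * xt ** 2 + 0.8 * xt + 1.5 + rng.normal(0.0, 0.2, 40)
        print(effective_variance_fit(x, y, sx=0.1, sy=0.2))  # ~ [0.05, 0.8, 1.5]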

  14. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Spectacular events often involve a combination of component failure and human error. The Rasmussen Report and the German Risk Assessment Study in particular show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigation in this field appears to be a worthwhile effort. (orig.)

  15. Application of the method of maximum likelihood to the determination of cepheid radii

    International Nuclear Information System (INIS)

    Balona, L.A.

    1977-01-01

    A method is described whereby the radius of any pulsating star can be obtained by applying the Principle of Maximum Likelihood. The relative merits of this method and of the usual Baade-Wesselink method are discussed in an Appendix. The new method is applied to 54 well-observed cepheids which include a number of spectroscopic binaries and two W Vir stars. An empirical period-radius relation is constructed and discussed in terms of two recent period-luminosity-colour calibrations. It is shown that the new method gives radii with an error of no more than 10 per cent. (author)

  16. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
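
    The closed-form flavor of this idea is easy to demonstrate. In the toy sketch below (not the paper's application), simulated residuals carry a known experimental error, and the ML estimate of the model's intrinsic error is the excess of the mean squared residual over the experimental variance.

        import numpy as np

        # Simulated residuals (experiment minus theory) with a known
        # experimental error and an unknown intrinsic model error.
        rng = np.random.default_rng(0)
        sigma_exp, sigma_th_true = 0.3, 0.8
        resid = rng.normal(0.0, np.hypot(sigma_exp, sigma_th_true), size=500)

        # ML solution in closed form: the total variance is the mean squared
        # residual; the model error is what remains after subtracting the
        # experimental variance.
        sigma_th = np.sqrt(max(np.mean(resid ** 2) - sigma_exp ** 2, 0.0))
        print(f"estimated model error: {sigma_th:.3f} (true {sigma_th_true})")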

  17. On Selection of the Probability Distribution for Representing the Maximum Annual Wind Speed in East Cairo, Egypt

    International Nuclear Information System (INIS)

    El-Shanshoury, Gh. I.; El-Hemamy, S.T.

    2013-01-01

    The main objective of this paper is to identify an appropriate probability model and the best plotting position formula to represent the maximum annual wind speed in east Cairo. This model can be used to estimate the extreme wind speed and return period at a particular site, as well as to determine the radioactive release distribution in case of an accident at a nuclear power plant. Wind speed probabilities can be estimated by using probability distributions. An accurate determination of the probability distribution for maximum wind speed data is very important for estimating extreme values. The probability plots of the maximum annual wind speed (MAWS) in east Cairo are fitted to six major statistical distributions, namely Gumbel, Weibull, Normal, Log-Normal, Logistic and Log-Logistic, while eight plotting positions of Hosking and Wallis, Hazen, Gringorten, Cunnane, Blom, Filliben, Benard and Weibull are used for determining the exceedance probabilities. A proper probability distribution for representing the MAWS is selected by the statistical test criteria in frequency analysis; therefore, the best plotting position formula for selecting the appropriate probability model representing the MAWS data must be determined. The statistical test criteria, namely the probability plot correlation coefficient (PPCC), the root mean square error (RMSE), the relative root mean square error (RRMSE) and the maximum absolute error (MAE), are used to select the appropriate plotting position and distribution. The data obtained show that the maximum annual wind speed in east Cairo varies from 44.3 km/h to 96.1 km/h over a period of 39 years. The Weibull plotting position combined with the Normal distribution gave the best fit and the most reliable and accurate predictions of the wind speed in the study area, having the highest value of PPCC and the lowest values of RMSE, RRMSE and MAE
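
    A sketch of the selection machinery under stated assumptions: empirical probabilities from the Weibull plotting position, candidate distributions fitted by scipy's maximum-likelihood fit (the paper does not state its fitting method), and the PPCC/RMSE/RRMSE/MAE criteria computed between observed and fitted quantiles. The wind sample is synthetic.

        import numpy as np
        from scipy import stats

        def selection_criteria(sample, dist):
            # Score one candidate distribution: compare observed order
            # statistics with fitted quantiles at Weibull plotting positions.
            x = np.sort(np.asarray(sample, dtype=float))
            n = len(x)
            p = np.arange(1, n + 1) / (n + 1.0)     # Weibull plotting position
            q = dist.ppf(p, *dist.fit(x))           # fitted quantiles
            ppcc = np.corrcoef(x, q)[0, 1]
            rmse = np.sqrt(np.mean((x - q) ** 2))
            rrmse = np.sqrt(np.mean(((x - q) / x) ** 2))
            mae = np.max(np.abs(x - q))             # maximum absolute error
            return ppcc, rmse, rrmse, mae

        winds = np.random.default_rng(1).gumbel(60.0, 8.0, size=39)  # km/h
        for name, dist in [("Gumbel", stats.gumbel_r), ("Normal", stats.norm),
                           ("Log-Normal", stats.lognorm)]:
            print(name, [round(v, 3) for v in selection_criteria(winds, dist)])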

  18. Errores innatos del metabolismo de las purinas y otras enfermedades relacionadas Inborn purine metabolism errors and other related diseases

    Directory of Open Access Journals (Sweden)

    Jiovanna Contreras Roura

    2012-06-01

    growth, recurrent infections, self-mutilation, immunodeficiencies, unexplainable haemolytic anemia, gout-related arthritis, family history, consanguinity and adverse reactions to drugs that are analogues of purines. The study of these diseases generally begins by quantifying uric acid, the final product of purine metabolism in human beings, in serum and urine. Diet and drug consumption are among the pathological, physiological and clinical conditions capable of changing the level of this compound. This review was intended to disseminate information on inborn purine metabolism errors and to facilitate the interpretation of uric acid levels and of the other biochemical markers that make the diagnosis of these diseases possible. Tables are presented relating these diseases to the excretory levels of uric acid and other biochemical markers, the altered enzymes, the clinical symptoms, the mode of inheritance and, in some cases, the suggested treatment. This work supports the view that variations in uric acid levels and the presence of other biochemical markers in urine are important tools for screening some inborn purine metabolism errors, as well as other related pathological conditions.

  19. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  20. Propagation of errors from a null balance terahertz reflectometer to a sample's relative water content

    International Nuclear Information System (INIS)

    Hadjiloucas, S; Walker, G C; Bowen, J W; Zafiropoulos, A

    2009-01-01

    The THz water content index of a sample is defined, and the advantages of using such a metric to estimate a sample's relative water content are discussed. The errors from reflectance measurements performed at two different THz frequencies using a quasi-optical null-balance reflectometer are propagated to the error in the estimated sample water content index.
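
    The abstract does not reproduce the index definition, so the sketch below assumes a hypothetical normalized-difference form W = (R1 - R2)/(R1 + R2) purely to illustrate the propagation step: first-order Gaussian error propagation through the partial derivatives of W with respect to the two reflectances.

        import sympy as sp

        # Hypothetical index W = (R1 - R2) / (R1 + R2); the paper's actual
        # definition is not reproduced in the abstract.
        R1, R2, dR1, dR2 = sp.symbols("R1 R2 dR1 dR2", positive=True)
        W = (R1 - R2) / (R1 + R2)

        # First-order propagation: dW^2 = (dW/dR1)^2 dR1^2 + (dW/dR2)^2 dR2^2
        dW = sp.sqrt(sp.diff(W, R1) ** 2 * dR1 ** 2 + sp.diff(W, R2) ** 2 * dR2 ** 2)
        print(sp.simplify(dW))
        print(dW.subs({R1: 0.42, R2: 0.35, dR1: 0.01, dR2: 0.01}).evalf())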

  1. Error-related negativity varies with the activation of gender stereotypes.

    Science.gov (United States)

    Ma, Qingguo; Shu, Liangchao; Wang, Xiaoyi; Dai, Shenyi; Che, Hongmin

    2008-09-19

    The error-related negativity (ERN) was suggested to reflect the response-performance monitoring process. The purpose of this study is to investigate how the activation of gender stereotypes influences the ERN. Twenty-eight male participants were asked to complete a tool or kitchenware identification task. The prime stimulus is a picture of a male or female face, and the target stimulus is either a kitchen utensil or a hand tool. The ERN amplitude on male-kitchenware trials is significantly larger than that on female-kitchenware trials, which reveals the low-level, automatic activation of gender stereotypes. The ERN elicited in this task has two sources--operation errors and the conflict between the gender stereotype activation and non-prejudice beliefs. The gender stereotype activation may be the key factor leading to this difference in the ERN. In other words, the stereotype activation in this experimental paradigm may be indexed by the ERN.

  2. The content of lexical stimuli and self-reported physiological state modulate error-related negativity amplitude.

    Science.gov (United States)

    Benau, Erik M; Moelter, Stephen T

    2016-09-01

    The Error-Related Negativity (ERN) and Correct-Response Negativity (CRN) are brief event-related potential (ERP) components, elicited after the commission of a response, that are associated with motivation, emotion, and affect. The Error Positivity (Pe) typically appears after the ERN, and corresponds to awareness of having committed an error. Although motivation has long been established as an important factor in the expression and morphology of the ERN, physiological state has rarely been explored as a variable in these investigations. In the present study, we investigated whether self-reported physiological state (SRPS; wakefulness, hunger, or thirst) corresponds with ERN amplitude and type of lexical stimuli. Participants completed a SRPS questionnaire and then completed a speeded Lexical Decision Task with words and pseudowords that were either food-related or neutral. Though similar in frequency and length, food-related stimuli elicited increased accuracy, faster errors, and generated a larger ERN and smaller CRN than neutral words. Self-reported thirst correlated with improved accuracy and smaller ERN and CRN amplitudes. The Pe and Pc (correct positivity) were not impacted by physiological state or by stimulus content. The results indicate that physiological state and manipulations of lexical content may serve as important avenues for future research. Future studies that apply more sensitive measures of physiological and motivational state (e.g., biomarkers for satiety) or direct manipulations of satiety may be a useful technique for future research into response monitoring. Copyright © 2016 Elsevier Inc. All rights reserved.

  3. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    Science.gov (United States)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  4. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606

  5. Software platform for managing the classification of error- related potentials of observers

    Science.gov (United States)

    Asvestas, P.; Ventouras, E.-C.; Kostopoulos, S.; Sidiropoulos, K.; Korfiatis, V.; Korda, A.; Uzunolglu, A.; Karanasiou, I.; Kalatzis, I.; Matsopoulos, G.

    2015-09-01

    Human learning is partly based on observation. Electroencephalographic recordings of subjects who perform acts (actors) or observe actors (observers) contain a negative waveform in the Evoked Potentials (EPs) of the actors that commit errors and of observers who observe the error-committing actors. This waveform is called the Error-Related Negativity (ERN). Its detection has applications in the context of Brain-Computer Interfaces. The present work describes a software system developed for managing EPs of observers, with the aim of classifying them into observations of either correct or incorrect actions. It consists of an integrated platform for the storage, management, processing and classification of EPs recorded during error-observation experiments. The system was developed using C# and the following development tools and frameworks: MySQL, .NET Framework, Entity Framework and Emgu CV, for interfacing with the machine learning library of OpenCV. Up to six features can be computed per EP recording per electrode. The user can select among various feature selection algorithms and then proceed to train one of three types of classifiers: Artificial Neural Networks, Support Vector Machines, or k-nearest neighbour. Next, the classifier can be used to classify any EP curve that has been entered into the database.
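
    The platform itself is C#/Emgu CV; the following is an analogous sketch in Python with scikit-learn showing the last stage of such a pipeline, training and cross-validating one of the three supported classifier types (k-nearest neighbour) on a feature matrix with six features per recording. The features and labels here are random placeholders, so the score should sit near chance.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        # Placeholder feature matrix: one row per evoked-potential recording,
        # six features per recording (the platform computes up to six per
        # electrode); labels 0/1 = observation of a correct/incorrect action.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 6))
        y = rng.integers(0, 2, size=120)

        knn = KNeighborsClassifier(n_neighbors=5)
        print(cross_val_score(knn, X, y, cv=5).mean())  # ~0.5 on random data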

  6. A Relative View on Tracking Error

    NARCIS (Netherlands)

    W.G.P.M. Hallerbach (Winfried); I. Pouchkarev (Igor)

    2005-01-01

    When delegating an investment decision to a professional manager, investors often anchor their mandate to a specific benchmark. The manager's exposure to risk is controlled by means of a tracking error volatility constraint. It depends on market conditions whether this constraint is

  7. Error-related ERP components and individual differences in punishment and reward sensitivity

    NARCIS (Netherlands)

    Boksem, Maarten A. S.; Tops, Mattie; Wester, Anne E.; Meijman, Theo F.; Lorist, Monique M.

    2006-01-01

    Although the focus of the discussion regarding the significance of the error-related negativity (ERN/Ne) has been on the cognitive factors reflected in this component, there is now a growing body of research that describes influences of motivation, affective style and other factors of personality on

  8. INVESTIGATION OF INFLUENCE OF ENCODING FUNCTION COMPLEXITY ON DISTRIBUTION OF ERROR MASKING PROBABILITY

    Directory of Open Access Journals (Sweden)

    A. B. Levina

    2016-03-01

    Full Text Available Error detection codes are mechanisms that enable robust delivery of data over unreliable, error-prone communication channels and devices; error detection codes allow such errors to be detected. There are two classes of error detecting codes - classical codes and security-oriented codes. Classical codes detect a high percentage of errors; however, they have a high probability of missing an error introduced by algebraic manipulation. Security-oriented codes, in turn, are codes with a small Hamming distance and high protection against algebraic manipulation. The probability of error masking is a fundamental parameter of security-oriented codes. A detailed study of this parameter allows the behavior of the error-correcting code to be analyzed in the case of error injection into the encoding device. The complexity of the encoding function also plays an important role in security-oriented codes. Encoding functions with lower computational complexity and a low probability of masking are the best protection of the encoding device against malicious acts. This paper investigates the influence of encoding function complexity on the error masking probability distribution. It will be shown that a more complex encoding function reduces the maximum of the error masking probability. It is also shown that increasing the function complexity changes the error masking probability distribution; in particular, increasing the computational complexity decreases the difference between the maximum and the average value of the error masking probability. Our results have shown that functions of greater complexity have smoothed maxima of the error masking probability, which significantly complicates the analysis of the error-correcting code by an attacker. As a result, in the case of a complex encoding function the probability of algebraic manipulation is reduced. The paper discusses an approach to measuring the error masking
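
    The central quantity is easy to compute exhaustively for tiny codes. The sketch below, with toy 4-bit data words and a single check bit, estimates Q(e) for every nonzero error and compares a linear check (parity) with a nonlinear quadratic check; the maximum masking probability drops from 1 to 1/2, which is the effect the paper studies (the specific encoding functions are illustrative, not the paper's).

        from itertools import product

        def masking_profile(f, k):
            # Q(e) for a systematic code with k data bits and a 1-bit check
            # f(x): the error (ex, ew) is masked on data x iff
            # f(x ^ ex) == f(x) ^ ew. Returns (max Q, average Q).
            xs = range(2 ** k)
            qs = []
            for ex, ew in product(range(2 ** k), range(2)):
                if ex == 0 and ew == 0:
                    continue                      # not an error
                q = sum(f(x ^ ex) == (f(x) ^ ew) for x in xs) / 2 ** k
                qs.append(q)
            return max(qs), sum(qs) / len(qs)

        def linear(x):                            # parity bit: linear check
            return bin(x).count("1") & 1

        def nonlinear(x):                         # quadratic check x1x2 ^ x3x4
            b = [(x >> i) & 1 for i in range(4)]
            return (b[0] & b[1]) ^ (b[2] & b[3])

        for name, f in [("linear", linear), ("nonlinear", nonlinear)]:
            print(name, masking_profile(f, k=4))  # max Q: 1.0 vs 0.5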

  9. Novel relations between the ergodic capacity and the average bit error rate

    KAUST Repository

    Yilmaz, Ferkan

    2011-11-01

    Ergodic capacity and average bit error rate have been widely used to compare the performance of different wireless communication systems. Recent research has accordingly shown the strong impact of designing and implementing wireless technologies on the basis of these two performance indicators. However, to the best of our knowledge, direct links between the two have not been explicitly proposed in the literature so far. In this paper, we propose novel relations between the ergodic capacity and the average bit error rate of an overall communication system that uses binary modulation schemes for signaling over a limited bandwidth and operates over generalized fading channels. More specifically, we show that these two performance measures can be represented in terms of each other, without the need to know the exact end-to-end statistical characterization of the communication channel. We validate the correctness and accuracy of the newly proposed relations and illustrate their usefulness by considering some classical examples. © 2011 IEEE.
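
    The paper's closed-form relations are not reproduced in the abstract; the sketch below merely computes both indicators by Monte Carlo for one classical example (coherent BPSK over Rayleigh fading) so the two quantities being related can be seen side by side.

        import numpy as np
        from scipy.special import erfc

        rng = np.random.default_rng(7)
        gbar = 10.0 ** (10.0 / 10.0)              # average SNR = 10 dB
        g = gbar * rng.exponential(size=200_000)  # Rayleigh fading SNR samples

        capacity = np.mean(np.log2(1.0 + g))          # ergodic capacity, bit/s/Hz
        ber_bpsk = np.mean(0.5 * erfc(np.sqrt(g)))    # average BER, coherent BPSK

        print(f"C = {capacity:.3f} bit/s/Hz, BER = {ber_bpsk:.3e}")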

  10. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure

    Directory of Open Access Journals (Sweden)

    Małgorzata Kossowska

    2018-03-01

    Full Text Available Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  11. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure.

    Science.gov (United States)

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with the reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure).

  12. Religious Fundamentalism Modulates Neural Responses to Error-Related Words: The Role of Motivation Toward Closure

    Science.gov (United States)

    Kossowska, Małgorzata; Szwed, Paulina; Wyczesany, Miroslaw; Czarnek, Gabriela; Wronka, Eligiusz

    2018-01-01

    Examining the relationship between brain activity and religious fundamentalism, this study explores whether fundamentalist religious beliefs increase responses to error-related words among participants intolerant to uncertainty (i.e., high in the need for closure) in comparison to those who have a high degree of toleration for uncertainty (i.e., those who are low in the need for closure). We examine a negative-going event-related brain potential occurring 400 ms after stimulus onset (the N400) due to its well-understood association with the reactions to emotional conflict. Religious fundamentalism and tolerance of uncertainty were measured on self-report measures, and electroencephalographic neural reactivity was recorded as participants were performing an emotional Stroop task. In this task, participants read neutral words and words related to uncertainty, errors, and pondering, while being asked to name the color of the ink with which the word is written. The results confirm that among people who are intolerant of uncertainty (i.e., those high in the need for closure), religious fundamentalism is associated with an increased N400 on error-related words compared with people who tolerate uncertainty well (i.e., those low in the need for closure). PMID:29636709

  13. Evidence for specificity of the impact of punishment on error-related brain activity in high versus low trait anxious individuals.

    Science.gov (United States)

    Meyer, Alexandria; Gawlowska, Magda

    2017-10-01

    A previous study suggests that when participants were punished with a loud noise after committing errors, the error-related negativity (ERN) was enhanced in high trait anxious individuals. The current study sought to extend these findings by examining the ERN in conditions when punishment was related and unrelated to error commission as a function of individual differences in trait anxiety symptoms; further, the current study utilized an electric shock as an aversive unconditioned stimulus. Results confirmed that the ERN was increased when errors were punished among high trait anxious individuals compared to low anxious individuals; this effect was not observed when punishment was unrelated to errors. Findings suggest that the threat-value of errors may underlie the association between certain anxious traits and punishment-related increases in the ERN. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Prevention of prescription errors by computerized, on-line, individual patient related surveillance of drug order entry.

    Science.gov (United States)

    Oliven, A; Zalman, D; Shilankov, Y; Yeshurun, D; Odeh, M

    2002-01-01

    Computerized prescription of drugs is expected to reduce the number of many preventable drug ordering errors. In the present study we evaluated the usefulness of a computerized drug order entry (CDOE) system in reducing prescription errors. A department of internal medicine using a comprehensive CDOE, which also included patient-related drug-laboratory, drug-disease and drug-allergy on-line surveillance, was compared to a similar department in which drug orders were handwritten. CDOE reduced prescription errors to 25-35%. The causes of errors remained similar, and most errors, in both departments, were associated with abnormal renal function and electrolyte balance. Residual errors remaining in the CDOE-using department were due to handwriting on the typed order, failure to enter patients' diseases, and system failures. The use of CDOE was associated with a significant reduction in mean hospital stay and in the number of changes made to the prescription. The findings of this study both quantify the impact of comprehensive CDOE on prescription errors and delineate the causes of the remaining errors.

  15. A Fourier analysis on the maximum acceptable grid size for discrete proton beam dose calculation

    International Nuclear Information System (INIS)

    Li, Haisen S.; Romeijn, H. Edwin; Dempsey, James F.

    2006-01-01

    orientation of the beam with respect to the dose grid was also investigated. The maximum acceptable dose grid size depends on the gradient of the dose profile and, in turn, on the range of the proton beam. In the case where only the phantom scattering was considered and the beam was aligned with the dose grid, grid sizes from 0.4 to 6.8 mm were required for proton beams with ranges from 2 to 30 cm for a 2% error limit at the Bragg peak point. A near-linear relation between the maximum acceptable grid size and the beam range was observed. For this analysis model, the resolution requirement was not significantly related to the orientation of the beam with respect to the grid

  16. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    Science.gov (United States)

    Sosic-Vasic, Zrinka; Ulrich, Martin; Ruchsow, Martin; Vasic, Nenad; Grön, Georg

    2012-01-01

    The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving behavior, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. The present results illustrate that for predicting individual responses to errors the underlying personality traits should be taken into account, and also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  17. The modulating effect of personality traits on neural error monitoring: evidence from event-related FMRI.

    Directory of Open Access Journals (Sweden)

    Zrinka Sosic-Vasic

    Full Text Available The present study investigated the association between traits of the Five Factor Model of Personality (Neuroticism, Extraversion, Openness for Experiences, Agreeableness, and Conscientiousness) and neural correlates of error monitoring obtained from a combined Eriksen-Flanker-Go/NoGo task during event-related functional magnetic resonance imaging in 27 healthy subjects. Individual expressions of personality traits were measured using the NEO-PI-R questionnaire. Conscientiousness correlated positively with error signaling in the left inferior frontal gyrus and adjacent anterior insula (IFG/aI). A second strong positive correlation was observed in the anterior cingulate gyrus (ACC). Neuroticism was negatively correlated with error signaling in the inferior frontal cortex, possibly reflecting the negative inter-correlation between both scales observed on the behavioral level. Under the present statistical thresholds, no significant results were obtained for the remaining scales. Aligning the personality trait of Conscientiousness with task-accomplishment striving behavior, the correlation in the left IFG/aI possibly reflects an inter-individually different involvement whenever task-set-related memory representations are violated by the occurrence of errors. The strong correlations in the ACC may indicate that more conscientious subjects were more strongly affected by these violations of a given task-set, expressed by individually different, negatively valenced signals conveyed by the ACC upon occurrence of an error. The present results illustrate that for predicting individual responses to errors the underlying personality traits should be taken into account, and also lend external validity to the personality trait approach, suggesting that personality constructs reflect more than mere descriptive taxonomies.

  18. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model has been put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution over the whole workspace was discussed, and the position of maximum system error was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position, and accuracy synthesis was conducted using the Monte Carlo method. Taking the error sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability of the maximal volume error being less than 0.05 mm improved from 0.6592 under the old scheme to 0.7021 under the new one; the precision of the system was improved markedly, and the model can be used for the error analysis and accuracy synthesis of complex multi-branch kinematic chain systems and to improve the manufacturing precision of such systems.

  19. An analysis of annual maximum streamflows in Terengganu, Malaysia using TL-moments approach

    Science.gov (United States)

    Ahmad, Ummi Nadiah; Shabri, Ani; Zakaria, Zahrahtul Amani

    2013-02-01

    The TL-moments approach has been used in an analysis to determine the best-fitting distributions to represent the annual series of maximum streamflow data over 12 stations in Terengganu, Malaysia. TL-moments with different trimming values are used to estimate the parameters of the selected distributions, namely the generalized Pareto (GPA), generalized logistic, and generalized extreme value distributions. The influence of the TL-moments on the estimated probability distribution functions is examined by evaluating the relative root mean square error and relative bias of quantile estimates through Monte Carlo simulations. Boxplots are used to show the location of the median and the dispersion of the data, which helps in reaching decisive conclusions. For most of the cases, the results show that TL-moments with the single smallest value trimmed from the conceptual sample, i.e. TL-moments (1,0), combined with the GPA distribution were the most appropriate in the majority of the stations for describing the annual maximum streamflow series in Terengganu, Malaysia.
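
    A sketch of sample TL-moment estimation under the standard Elamir-Seheult definition: the r-th TL-moment with trimming (t1, t2) is a signed combination of expected order statistics, each estimated without bias from the ordered sample. The streamflow sample is synthetic.

        import numpy as np
        from math import comb

        def expected_order_stat(xs, i, m):
            # Unbiased estimate of E[X_{i:m}] from the full ordered sample.
            n = len(xs)
            w = np.array([comb(j, i - 1) * comb(n - 1 - j, m - i)
                          for j in range(n)], dtype=float) / comb(n, m)
            return float(w @ xs)

        def tl_moments(x, t1=1, t2=0, nmom=2):
            # First `nmom` TL-moments with trimming (t1, t2); (1, 0) trims
            # the single smallest conceptual order statistic.
            xs = np.sort(np.asarray(x, dtype=float))
            out = []
            for r in range(1, nmom + 1):
                m = r + t1 + t2
                lam = sum((-1) ** kk * comb(r - 1, kk)
                          * expected_order_stat(xs, r + t1 - kk, m)
                          for kk in range(r)) / r
                out.append(lam)
            return out

        flows = np.random.default_rng(3).gumbel(250.0, 60.0, size=40)
        print(tl_moments(flows, t1=1, t2=0))      # [lambda1, lambda2]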

  20. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Science.gov (United States)

    Kim, Kyungsoo; Lim, Sung-Ho; Lee, Jaeseok; Kang, Won-Seok; Moon, Cheil; Choi, Ji-Woong

    2016-01-01

    Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal to noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of event related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°. PMID:27322267
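
    The exact joint-ML estimators are in the paper; the sketch below is a common coordinate-ascent approximation of the same idea, assuming integer delays and a shared unknown waveform: alternate between averaging the currently aligned trials into a template and re-estimating each trial's delay by cross-correlation against it. Delays are recovered only up to a common offset, so the check removes the median offset first.

        import numpy as np

        def align_trials(trials, max_iter=10, max_lag=20):
            # Alternate: (1) average aligned trials into a template,
            # (2) re-estimate each trial's integer delay against it.
            n_tr, n = trials.shape
            lags = np.zeros(n_tr, dtype=int)
            for _ in range(max_iter):
                aligned = np.array([np.roll(tr, -lg) for tr, lg in zip(trials, lags)])
                template = aligned.mean(axis=0)
                new = np.array([
                    np.argmax([np.dot(np.roll(tr, -lg), template)
                               for lg in range(-max_lag, max_lag + 1)]) - max_lag
                    for tr in trials])
                if np.array_equal(new, lags):
                    break
                lags = new
            return lags, template

        rng = np.random.default_rng(2)
        t = np.arange(256)
        erp = np.exp(-0.5 * ((t - 128) / 8.0) ** 2)     # unknown ERP waveform
        true = rng.integers(-15, 16, size=30)
        trials = np.array([np.roll(erp, lg) + 0.3 * rng.normal(size=256)
                           for lg in true])
        lags, _ = align_trials(trials)
        off = int(np.round(np.median(lags - true)))     # delays known only up
        print(np.mean(np.abs(lags - off - true)))       # to a common offset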

  1. Joint Maximum Likelihood Time Delay Estimation of Unknown Event-Related Potential Signals for EEG Sensor Signal Quality Enhancement

    Directory of Open Access Journals (Sweden)

    Kyungsoo Kim

    2016-06-01

    Full Text Available Electroencephalograms (EEGs) measure a brain signal that contains abundant information about the human brain function and health. For this reason, recent clinical brain research and brain computer interface (BCI) studies use EEG signals in many applications. Due to the significant noise in EEG traces, signal processing to enhance the signal-to-noise power ratio (SNR) is necessary for EEG analysis, especially for non-invasive EEG. A typical method to improve the SNR is averaging many trials of the event-related potential (ERP) signal that represents a brain’s response to a particular stimulus or a task. The averaging, however, is very sensitive to variable delays. In this study, we propose two time delay estimation (TDE) schemes based on a joint maximum likelihood (ML) criterion to compensate the uncertain delays which may be different in each trial. We evaluate the performance for different types of signals such as random, deterministic, and real EEG signals. The results show that the proposed schemes provide better performance than other conventional schemes employing the averaged signal as a reference, e.g., up to 4 dB gain at the expected delay error of 10°.
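
    For orientation, the conventional reference-based scheme that both records compare against can be sketched directly: cross-correlate each trial with the plain trial average to estimate its delay, then re-align before averaging. The ERP template, delays, and noise level below are synthetic; the joint ML estimator proposed in the paper, which avoids the averaged reference, is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(2)
      T, n_trials = 256, 40

      erp = np.exp(-0.5 * ((np.arange(T) - 100) / 12.0) ** 2)  # synthetic ERP template
      true_delays = rng.integers(-20, 21, size=n_trials)
      trials = np.array([np.roll(erp, d) for d in true_delays])
      trials += rng.normal(0.0, 2.0, trials.shape)             # heavy additive noise

      reference = trials.mean(axis=0)          # conventional scheme: plain average
      est = np.empty(n_trials, dtype=int)
      for k in range(n_trials):
          xc = np.correlate(trials[k], reference, mode="full")
          est[k] = np.argmax(xc) - (T - 1)     # lag of trial k relative to reference

      # Re-align the trials by their estimated delays before averaging.
      aligned = np.mean([np.roll(trials[k], -est[k]) for k in range(n_trials)], axis=0)
      print("mean |delay error| (samples):", np.mean(np.abs(est - true_delays)))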

  2. Variable Step Size Maximum Correntropy Criteria Based Adaptive Filtering Algorithm

    Directory of Open Access Journals (Sweden)

    S. Radhika

    2016-04-01

    Full Text Available Maximum correntropy criterion (MCC) based adaptive filters are found to be robust against impulsive interference. This paper proposes a novel MCC-based adaptive filter with a variable step size in order to obtain improved performance in terms of both convergence rate and steady-state error, with robustness against impulsive interference. The optimal variable step size is obtained by minimizing the Mean Square Deviation (MSD) error from one iteration to the next. Simulation results in the context of a highly impulsive system identification scenario show that the proposed algorithm has faster convergence and lower steady-state error than conventional MCC-based adaptive filters.
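
    The core MCC weight update that the paper builds on is compact enough to sketch. The unknown system, noise levels, and the simple decaying step size below are hypothetical stand-ins; the paper's optimal variable step size, chosen to minimize the MSD from one iteration to the next, is not reproduced.

      import numpy as np

      rng = np.random.default_rng(3)
      w_true = np.array([0.6, -0.3, 0.1, 0.05])     # unknown FIR system to identify
      N, L, sigma_k, mu0 = 5000, 4, 2.0, 0.1        # samples, taps, kernel width, step

      x = rng.normal(size=N)
      noise = rng.normal(0.0, 0.05, N)
      hits = rng.random(N) < 0.01                   # 1% impulsive interference
      noise[hits] += rng.normal(0.0, 10.0, hits.sum())

      w = np.zeros(L)
      for n in range(L, N):
          u = x[n - L + 1:n + 1][::-1]              # current regressor
          e = (w_true @ u + noise[n]) - w @ u       # a priori error
          mu = mu0 / (1.0 + n / 2000.0)             # crude decaying step size (stand-in)
          # Gaussian kernel: large (impulsive) errors receive tiny updates
          w += mu * np.exp(-e**2 / (2.0 * sigma_k**2)) * e * u

      print("identified:", np.round(w, 3), "true:", w_true)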

  3. Social Errors in Four Cultures: Evidence about Universal Forms of Social Relations.

    Science.gov (United States)

    Fiske, Alan Page

    1993-01-01

    To test the cross-cultural generality of relational-models theory, 4 studies with 70 adults examined social errors of substitution of persons for Bengali, Korean, Chinese, and Vai (Liberia and Sierra Leone) subjects. In all four cultures, people tend to substitute someone with whom they have the same basic relationship. (SLD)

  4. Terrain Classification on Venus from Maximum-Likelihood Inversion of Parameterized Models of Topography, Gravity, and their Relation

    Science.gov (United States)

    Eggers, G. L.; Lewis, K. W.; Simons, F. J.; Olhede, S.

    2013-12-01

    Venus does not possess a plate-tectonic system like that observed on Earth, and many surface features--such as tesserae and coronae--lack terrestrial equivalents. To understand Venus' tectonics is to understand its lithosphere, requiring a study of topography and gravity, and how they relate. Past studies of topography dealt with mapping and classification of visually observed features, and studies of gravity dealt with inverting the relation between topography and gravity anomalies to recover surface density and elastic thickness in either the space (correlation) or the spectral (admittance, coherence) domain. In the former case, geological features could be delineated but not classified quantitatively. In the latter case, rectangular or circular data windows were used, lacking geological definition. While the estimates of lithospheric strength on this basis were quantitative, they lacked robust error estimates. Here, we remapped the surface into 77 regions visually and qualitatively defined from a combination of Magellan topography, gravity, and radar images. We parameterize the spectral covariance of the observed topography, treating it as a Gaussian process assumed to be stationary over the mapped regions, using a three-parameter isotropic Matern model, and perform maximum-likelihood based inversions for the parameters. We discuss the parameter distribution across the Venusian surface and across terrain types such as coronae, dorsae, tesserae, and their relation with mean elevation and latitudinal position. We find that the three-parameter model, while mathematically established and applicable to Venus topography, is overparameterized, and thus reduce the results to a two-parameter description of the peak spectral variance and the range-to-half-peak variance (as a function of the wavenumber). With the reduction, the clustering of geological region types in two-parameter space becomes promising. Finally, we perform inversions for the JOINT spectral variance of
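
    As a rough, space-domain illustration of the three-parameter Matern family named in the record (variance, range, smoothness), the sketch below evaluates the standard isotropic Matern covariance and the corresponding zero-mean Gaussian-process negative log-likelihood. The coordinates and heights are synthetic stand-ins for Venus topography, and the authors' spectral-domain parameterization is not reproduced.

      import numpy as np
      from scipy.special import gamma, kv

      def matern_cov(d, sigma2, rho, nu):
          """Isotropic Matern covariance at distances d."""
          d = np.asarray(d, dtype=float)
          c = np.full_like(d, sigma2)               # variance at zero distance
          nz = d > 0
          s = np.sqrt(2.0 * nu) * d[nz] / rho
          c[nz] = sigma2 * 2.0 ** (1.0 - nu) / gamma(nu) * s ** nu * kv(nu, s)
          return c

      def neg_log_lik(params, coords, z):
          """Negative log-likelihood of a zero-mean Gaussian process."""
          sigma2, rho, nu = params
          d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
          K = matern_cov(d, sigma2, rho, nu) + 1e-8 * np.eye(len(z))
          _, logdet = np.linalg.slogdet(K)
          return 0.5 * (logdet + z @ np.linalg.solve(K, z) + len(z) * np.log(2 * np.pi))

      rng = np.random.default_rng(4)
      coords = rng.uniform(0.0, 100.0, size=(50, 2))   # synthetic sample locations
      z = rng.normal(size=50)                          # stand-in heights
      print(neg_log_lik((1.0, 20.0, 1.5), coords, z))  # to be minimized over params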

  5. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low-frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. The impact of a brief mindfulness meditation intervention on cognitive control and error-related performance monitoring

    Directory of Open Access Journals (Sweden)

    Michael J Larson

    2013-07-01

    Full Text Available Meditation is associated with positive health behaviors and improved cognitive control. One mechanism for the relationship between meditation and cognitive control is changes in activity of the anterior cingulate cortex-mediated neural pathways. The error-related negativity (ERN) and error positivity (Pe) components of the scalp-recorded event-related potential (ERP) represent cingulate-mediated functions of performance monitoring that may be modulated by mindfulness meditation. We utilized a flanker task, an experimental design, and a brief mindfulness intervention in a sample of 55 healthy non-meditators (n = 28 randomly assigned to the mindfulness group and n = 27 randomly assigned to the control group) to examine autonomic nervous system functions as measured by blood pressure and indices of cognitive control as measured by response times, error rates, post-error slowing, and the ERN and Pe components of the ERP. Systolic blood pressure significantly differentiated groups following the mindfulness intervention and following the flanker task. There were non-significant differences between the mindfulness and control groups for response times, post-error slowing, and error rates on the flanker task. Amplitude and latency of the ERN did not differ between groups; however, amplitude of the Pe was significantly smaller in individuals in the mindfulness group than in the control group. Findings suggest that a brief mindfulness intervention is associated with reduced autonomic arousal and decreased amplitude of the Pe, an ERP associated with error awareness, attention, and motivational salience, but does not alter amplitude of the ERN or behavioral performance. Implications for brief mindfulness interventions and state versus trait affect theories of the ERN are discussed. Future research examining graded levels of mindfulness and tracking error awareness will clarify the relationship between mindfulness and performance monitoring.

  7. Statistical evaluation of design-error related nuclear reactor accidents

    International Nuclear Information System (INIS)

    Ott, K.O.; Marchaterre, J.F.

    1981-01-01

    In this paper, a general methodology for the statistical evaluation of design-error related accidents is proposed that can be applied to a variety of systems that evolve during the development of large-scale technologies. The evaluation aims at an estimate of the combined "residual" frequency of yet unknown types of accidents "lurking" in a certain technological system. A special categorization into incidents and accidents is introduced to define the events that should be jointly analyzed. The resulting formalism is applied to the development of U.S. nuclear power reactor technology, considering serious accidents (category 2 events) that involved, in the accident progression, a particular design inadequacy. 9 refs

  8. Medication prescribing errors in a public teaching hospital in India: A prospective study.

    Directory of Open Access Journals (Sweden)

    Pote S

    2007-03-01

    Full Text Available Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting. Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of the Micromedex Drug-Reax database. Results: Out of 312 patients, only 304 were included in the study. Of the 304 cases, 103 (34%) had at least one error. The total number of errors found was 157. Drug-drug interactions were the most frequently occurring type of error (68.2%), followed by incorrect dosing interval (12%) and dosing errors (9.5%). The medication classes involved most were antimicrobial agents (29.4%), cardiovascular agents (15.4%), GI agents (8.6%) and CNS agents (8.2%). Moderate errors contributed the most (61.8%) to the total errors when compared to major (25.5%) and minor (12.7%) errors. The results showed that the number of errors increases with age and the number of medicines prescribed. Conclusion: The results point to the need to establish medication error reporting at each hospital and to share the data with other hospitals. The role of the clinical pharmacist in this situation appears to be a strong intervention; the clinical pharmacist could initially confine themselves to identification of medication errors.

  9. Accurate and fast methods to estimate the population mutation rate from error prone sequences

    Directory of Open Access Journals (Sweden)

    Miyamoto Michael M

    2009-08-01

    Full Text Available Abstract Background The population mutation rate (θ) remains one of the most fundamental parameters in genetics, ecology, and evolutionary biology. However, its accurate estimation can be seriously compromised when working with error-prone data such as expressed sequence tags, low-coverage draft sequences, and other such unfinished products. This study is premised on the simple idea that a random sequence error due to a chance accident during data collection or recording will be distributed within a population dataset as a singleton (i.e., as a polymorphic site where one sampled sequence exhibits a unique base relative to the common nucleotide of the others). Thus, one can avoid these random errors by ignoring the singletons within a dataset. Results This strategy is implemented under an infinite-sites model that focuses on only the internal branches of the sample genealogy, where a shared polymorphism can arise (i.e., a variable site where each alternative base is represented by at least two sequences). This approach is first used to derive independently the same new Watterson and Tajima estimators of θ, as recently reported by Achaz [1] for error-prone sequences. It is then used to modify the recent, full, maximum-likelihood model of Knudsen and Miyamoto [2], which incorporates various factors for experimental error and design with those for coalescence and mutation. These new methods are all accurate and fast according to evolutionary simulations and analyses of a real complex population dataset for the California sea hare. Conclusion In light of these results, we recommend the use of these three new methods for the determination of θ from error-prone sequences. In particular, we advocate the new maximum-likelihood model as a starting point for the further development of more complex coalescent/mutation models that also account for experimental error and design.
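
    A minimal sketch of the singleton-ignoring idea, in the spirit of the Watterson-type estimator described above, is given below. It assumes a 0/1-coded alignment (ancestral/derived), defines a singleton as any site where exactly one sequence differs from all others, and is illustrative only, not the authors' implementation.

      import numpy as np

      def theta_no_singletons(alignment):
          """Watterson-type estimate of theta that ignores singleton sites.

          Under the infinite-sites model, E[sites with derived count i] = theta/i,
          so excluding counts 1 and n-1 gives
          E[S'] = theta * (a_n - 1 - 1/(n-1)),  with a_n = sum_{i=1}^{n-1} 1/i.
          """
          A = np.asarray(alignment)
          n = A.shape[0]
          counts = A.sum(axis=0)                       # derived count per site
          s_prime = np.sum((counts >= 2) & (counts <= n - 2))
          a_n = np.sum(1.0 / np.arange(1, n))
          return s_prime / (a_n - 1.0 - 1.0 / (n - 1))

      # Toy data: 5 sequences, 8 sites; the singleton-like columns
      # (derived count 1 or n-1) are ignored by the estimator.
      aln = [[0, 0, 0, 1, 0, 1, 0, 0],
             [0, 1, 0, 1, 0, 0, 0, 0],
             [0, 1, 0, 0, 0, 0, 0, 1],
             [0, 1, 0, 1, 0, 0, 0, 0],
             [1, 0, 0, 1, 0, 0, 0, 0]]
      print(theta_no_singletons(aln))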

  10. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than suggested by the ISO 5725 norm. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects actually observable reality. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In such a context, this work motivates a more pragmatic and experimental approach to uncertainty, with respect to the directions of supplement 1 of GUM. (paper)
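
    The gap between the two evaluations is easy to make concrete. In the sketch below, the type B uncertainty follows from applying the maximum entropy principle to a hypothetical spec limit ±a (a uniform distribution, hence u = a/√3 and no correction), while the type A evaluation uses simulated repeated observations whose non-centred mean calls for a correction, mirroring the paper's observation.

      import numpy as np

      # Type B: the spec bounds the offset to +/- a; maximum entropy on a
      # bounded interval gives a uniform distribution, so u_B = a / sqrt(3).
      a = 0.5                       # hypothetical spec limit (e.g., in LSB)
      u_B = a / np.sqrt(3)

      # Type A: repeated offset measurements under varying ambient conditions
      # (invented data, clustered off-centre as the experiments suggest).
      rng = np.random.default_rng(5)
      obs = rng.uniform(0.05, 0.45, size=30)
      correction = -obs.mean()      # non-centred distribution => apply a correction
      u_A = obs.std(ddof=1)

      print(f"type B: u = {u_B:.3f} (no correction assumed)")
      print(f"type A: correction = {correction:+.3f}, u = {u_A:.3f}")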

  11. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suite for automatically developing ultra-reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production-quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer-aided software engineering product in the industry to concentrate on automatically supporting the development of an ultra-reliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  12. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
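
    A stripped-down version of this kind of rate modeling is sketched below on invented data: the maximum likelihood estimate of a constant Poisson error rate with files radiated as the exposure, plus a one-variable least-squares fit standing in for the paper's multiple regression over workload and novelty.

      import numpy as np

      rng = np.random.default_rng(6)

      # Hypothetical monthly data: files radiated and a 1-5 workload score.
      files = rng.integers(20, 200, size=24)
      workload = rng.integers(1, 6, size=24)
      errors = rng.poisson(0.002 * workload * files)   # synthetic error counts

      # Constant-rate Poisson model with exposure `files`: the ML estimate
      # of the rate is simply total errors / total files.
      print(f"ML error rate: {errors.sum() / files.sum():.4f} errors/file")

      # One-variable refinement: least-squares fit of per-file rate vs workload.
      A = np.column_stack([np.ones(len(workload)), workload]).astype(float)
      coef, *_ = np.linalg.lstsq(A, errors / files, rcond=None)
      print(f"rate ~ {coef[0]:.4f} + {coef[1]:.4f} * workload")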

  13. Comparative evaluation of the efficacy of occlusal splints fabricated in centric relation or maximum intercuspation in temporomandibular disorders patients

    Directory of Open Access Journals (Sweden)

    Marcelo Matida Hamata

    2009-02-01

    Full Text Available Fabrication of occlusal splints in centric relation for temporomandibular disorders (TMD) patients is arguable, since this position has been defined for the asymptomatic stomatognathic system. Thus, maximum intercuspation might be employed in patients with occlusal stability, eliminating the need for interocclusal records. This study compared occlusal splints fabricated in centric relation and maximum intercuspation in muscle pain reduction of TMD patients. Twenty patients with TMD of myogenous origin and bruxism were divided into 2 groups treated with splints in maximum intercuspation (I) or centric relation (II). Clinical, electrognathographic and electromyographic examinations were performed before and 3 months after therapy. Data were analyzed by Student's t test. Differences at the 5% level of probability were considered statistically significant. There was a remarkable reduction in pain symptomatology, without statistically significant differences (p>0.05) between the groups. There was mandibular repositioning during therapy, as demonstrated by the change in occlusal contacts on the splints. Electrognathographic examination demonstrated a significant increase in maximum left lateral movement for group I and right lateral movement for group II (p<0.05). There was no statistically significant difference (p>0.05) in the electromyographic activities at rest after utilization of both splints. In conclusion, both occlusal splints were effective for pain control and presented similar action. The results suggest that maximum intercuspation may be used for fabrication of occlusal splints in patients with occlusal stability without large discrepancies between centric relation and maximum intercuspation. Moreover, this technique is simpler and less expensive.

  14. Comparative evaluation of the efficacy of occlusal splints fabricated in centric relation or maximum intercuspation in temporomandibular disorders patients.

    Science.gov (United States)

    Hamata, Marcelo Matida; Zuim, Paulo Renato Junqueira; Garcia, Alicio Rosalino

    2009-01-01

    Fabrication of occlusal splints in centric relation for temporomandibular disorders (TMD) patients is arguable, since this position has been defined for the asymptomatic stomatognathic system. Thus, maximum intercuspation might be employed in patients with occlusal stability, eliminating the need for interocclusal records. This study compared occlusal splints fabricated in centric relation and maximum intercuspation in muscle pain reduction of TMD patients. Twenty patients with TMD of myogenous origin and bruxism were divided into 2 groups treated with splints in maximum intercuspation (I) or centric relation (II). Clinical, electrognathographic and electromyographic examinations were performed before and 3 months after therapy. Data were analyzed by Student's t test. Differences at the 5% level of probability were considered statistically significant. There was a remarkable reduction in pain symptomatology, without statistically significant differences (p>0.05) between the groups. There was mandibular repositioning during therapy, as demonstrated by the change in occlusal contacts on the splints. Electrognathographic examination demonstrated a significant increase in maximum left lateral movement for group I and right lateral movement for group II (p<0.05). There was no statistically significant difference (p>0.05) in the electromyographic activities at rest after utilization of both splints. In conclusion, both occlusal splints were effective for pain control and presented similar action. The results suggest that maximum intercuspation may be used for fabrication of occlusal splints in patients with occlusal stability without large discrepancies between centric relation and maximum intercuspation. Moreover, this technique is simpler and less expensive.

  15. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
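
    For the materials-balance case, first-order propagation reduces to adding the variances of the independent terms. The sketch below applies this to a hypothetical balance MUF = input - output - (ending inventory - beginning inventory); all values and standard deviations are invented.

      import numpy as np

      # Measured terms (kg) and their standard deviations (hypothetical).
      values = {"input": 100.0, "output": 97.2, "inv_begin": 10.0, "inv_end": 11.5}
      sigmas = {"input": 0.4, "output": 0.4, "inv_begin": 0.2, "inv_end": 0.2}

      # MUF is a linear combination with coefficients +/-1, so for
      # independent errors the variances simply add in quadrature.
      muf = values["input"] - values["output"] - (values["inv_end"] - values["inv_begin"])
      sigma_muf = np.sqrt(sum(s ** 2 for s in sigmas.values()))

      print(f"MUF = {muf:.2f} kg, sigma(MUF) = {sigma_muf:.2f} kg")
      print(f"|MUF| below 3 sigma: {abs(muf) < 3 * sigma_muf}")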

  16. Relative Error Model Reduction via Time-Weighted Balanced Stochastic Singular Perturbation

    DEFF Research Database (Denmark)

    Tahavori, Maryamsadat; Shaker, Hamid Reza

    2012-01-01

    A new mixed method for relative error model reduction of linear time invariant (LTI) systems is proposed in this paper. This order reduction technique is mainly based upon the time-weighted balanced stochastic model reduction method and the singular perturbation model reduction technique. Compared … by using the concept and properties of the reciprocal systems. The results are further illustrated by two practical numerical examples: a model of a CD player and a model of the atmospheric storm track.

  17. Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.

    Science.gov (United States)

    Harvey, Ashley R; Carden, Randy L

    2009-08-01

    Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the usage of an mp3 player would result in an increase in not only driving error while operating a driving simulator, but driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. Number of driving errors per course, such as leaving the road, impacts with stationary objects, loss of vehicular control, etc., and anxiety were significantly higher when an iPod was in use. Anxiety scores were unrelated to number of driving errors.

  18. Error studies of Halbach Magnets

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-03-02

    These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]; however, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; then, if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires. By “1e-4 accuracy” it is meant that the FOM defined by √(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R=23mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires do not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slowly converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.

  19. Self-Reported and Observed Punitive Parenting Prospectively Predicts Increased Error-Related Brain Activity in Six-Year-Old Children.

    Science.gov (United States)

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J; Kujawa, Autumn J; Laptook, Rebecca S; Torpey, Dana C; Klein, Daniel N

    2015-07-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission--although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children's ERN approximately 3 years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately 3 years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children's error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to confirm this

  20. Self-reported and observed punitive parenting prospectively predicts increased error-related brain activity in six-year-old children

    Science.gov (United States)

    Meyer, Alexandria; Proudfit, Greg Hajcak; Bufferd, Sara J.; Kujawa, Autumn J.; Laptook, Rebecca S.; Torpey, Dana C.; Klein, Daniel N.

    2017-01-01

    The error-related negativity (ERN) is a negative deflection in the event-related potential (ERP) occurring approximately 50 ms after error commission at fronto-central electrode sites and is thought to reflect the activation of a generic error monitoring system. Several studies have reported an increased ERN in clinically anxious children, and suggest that anxious children are more sensitive to error commission—although the mechanisms underlying this association are not clear. We have previously found that punishing errors results in a larger ERN, an effect that persists after punishment ends. It is possible that learning-related experiences that impact sensitivity to errors may lead to an increased ERN. In particular, punitive parenting might sensitize children to errors and increase their ERN. We tested this possibility in the current study by prospectively examining the relationship between parenting style during early childhood and children’s ERN approximately three years later. Initially, 295 parents and children (approximately 3 years old) participated in a structured observational measure of parenting behavior, and parents completed a self-report measure of parenting style. At a follow-up assessment approximately three years later, the ERN was elicited during a Go/No-Go task, and diagnostic interviews were completed with parents to assess child psychopathology. Results suggested that both observational measures of hostile parenting and self-report measures of authoritarian parenting style uniquely predicted a larger ERN in children 3 years later. We previously reported that children in this sample with anxiety disorders were characterized by an increased ERN. A mediation analysis indicated that ERN magnitude mediated the relationship between harsh parenting and child anxiety disorder. Results suggest that parenting may shape children’s error processing through environmental conditioning and thereby risk for anxiety, although future work is needed to

  1. Differences among Job Positions Related to Communication Errors at Construction Sites

    Science.gov (United States)

    Takahashi, Akiko; Ishida, Toshiro

    In a previous study, we classified the communication errors at construction sites as faulty intention and message pattern, inadequate channel pattern, and faulty comprehension pattern. This study seeks to evaluate the degree of risk of communication errors and to investigate differences among people in various job positions in their perception of communication error risk. Questionnaires based on the previous study were administered to construction workers (n=811; 149 administrators, 208 foremen and 454 workers). Administrators evaluated all patterns of communication error risk equally. However, foremen and workers evaluated communication error risk differently in each pattern. The common contributing factors to all patterns were inadequate arrangements before work and inadequate confirmation. Some factors were common among patterns but other factors were particular to a specific pattern. To help prevent future accidents at construction sites, administrators should understand how people in various job positions perceive communication errors and propose human factors measures to prevent such errors.

  2. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    Science.gov (United States)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    Aiming at the mechanism error caused by joint clearance in the planar 2-DOF five-bar mechanism, the method of treating the clearance of a kinematic pair as an equivalent virtual link is applied. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the clearance of the moving pair on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the joint-space error rotation space, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.

  3. A heteroscedastic measurement error model for method comparison data with replicate measurements.

    Science.gov (United States)

    Nawarathna, Lakshika S; Choudhary, Pankaj K

    2015-03-30

    Measurement error models offer a flexible framework for modeling data collected in studies comparing methods of quantitative measurement. These models generally make two simplifying assumptions: (i) the measurements are homoscedastic, and (ii) the unobservable true values of the methods are linearly related. One or both of these assumptions may be violated in practice. In particular, error variabilities of the methods may depend on the magnitude of measurement, or the true values may be nonlinearly related. Data with these features call for a heteroscedastic measurement error model that allows nonlinear relationships in the true values. We present such a model for the case when the measurements are replicated, discuss its fitting, and explain how to evaluate similarity of measurement methods and agreement between them, which are two common goals of data analysis, under this model. Model fitting involves dealing with lack of a closed form for the likelihood function. We consider estimation methods that approximate either the likelihood or the model to yield approximate maximum likelihood estimates. The fitting methods are evaluated in a simulation study. The proposed methodology is used to analyze a cholesterol dataset. Copyright © 2015 John Wiley & Sons, Ltd.

  4. Maximum-Entropy Inference with a Programmable Annealer

    Science.gov (United States)

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-03-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding the ground state maximises the likelihood that the solution is correct. The maximum entropy solution, on the other hand, takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem, which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition.
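
    The ML-versus-maximum-entropy contrast can be reproduced exactly on a toy instance. The sketch below enumerates all states of a small random Ising model in a field and compares ground-state (maximum likelihood) decoding with the sign of the Boltzmann-weighted spin averages at finite temperature; the couplings, fields, and temperature are arbitrary choices, not the annealer's parameters.

      import itertools
      import numpy as np

      rng = np.random.default_rng(7)
      n = 10
      J = np.triu(rng.normal(0.0, 1.0, (n, n)), 1)   # random couplings (upper triangle)
      h = rng.normal(0.0, 0.5, n)                    # random local fields

      states = np.array(list(itertools.product([-1, 1], repeat=n)))
      E = -np.einsum("si,ij,sj->s", states, J, states) - states @ h

      ml_bits = states[np.argmin(E)]                 # ML decoding: ground state

      T = 1.0                                        # maximum entropy decoding:
      w = np.exp(-(E - E.min()) / T)                 # Boltzmann weights over all states,
      w /= w.sum()
      me_bits = np.sign(w @ states).astype(int)      # then the sign of thermal averages

      print("ML    :", ml_bits)
      print("MaxEnt:", me_bits)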

  5. Logical error rate scaling of the toric code

    International Nuclear Information System (INIS)

    Watson, Fern H E; Barrett, Sean D

    2014-01-01

    To date, a great deal of attention has focused on characterizing the performance of quantum error correcting codes via their thresholds, the maximum correctable physical error rate for a given noise model and decoding strategy. Practical quantum computers will necessarily operate below these thresholds meaning that other performance indicators become important. In this work we consider the scaling of the logical error rate of the toric code and demonstrate how, in turn, this may be used to calculate a key performance indicator. We use a perfect matching decoding algorithm to find the scaling of the logical error rate and find two distinct operating regimes. The first regime admits a universal scaling analysis due to a mapping to a statistical physics model. The second regime characterizes the behaviour in the limit of small physical error rate and can be understood by counting the error configurations leading to the failure of the decoder. We present a conjecture for the ranges of validity of these two regimes and use them to quantify the overhead—the total number of physical qubits required to perform error correction. (paper)

  6. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  7. Maximum relative speeds of living organisms: Why do bacteria perform as fast as ostriches?

    Science.gov (United States)

    Meyer-Vernet, Nicole; Rospars, Jean-Pierre

    2016-12-01

    Self-locomotion is central to animal behaviour and survival. It is generally analysed by focusing on preferred speeds and gaits under particular biological and physical constraints. In the present paper we focus instead on the maximum speed and we study its order-of-magnitude scaling with body size, from bacteria to the largest terrestrial and aquatic organisms. Using data for about 460 species of various taxonomic groups, we find a maximum relative speed of the order of magnitude of ten body lengths per second over a 10^20-fold mass range of running and swimming animals. This result implies a locomotor time scale of the order of one tenth of a second, virtually independent of body size, anatomy and locomotion style, whose ubiquity requires an explanation building on basic properties of motile organisms. From first-principle estimates, we relate this generic time scale to other basic biological properties, using in particular the recent generalisation of the muscle specific tension to molecular motors. Finally, we go a step further by relating this time scale to still more basic quantities, such as environmental conditions on Earth in addition to fundamental physical and chemical constants.
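
    The arithmetic behind the generic time scale is a one-liner: at about ten body lengths per second, tau = L/v is about 0.1 s for any L. The figures below are rough order-of-magnitude values chosen for illustration, not data from the study.

      # Order-of-magnitude check of the ~10 body lengths/second rule: the
      # implied locomotor time scale tau = L/v is ~0.1 s regardless of size.
      organisms = {                   # body length (m), max speed (m/s); rough figures
          "bacterium": (2e-6, 2e-5),
          "ostrich": (2.0, 20.0),
      }
      for name, (L, v) in organisms.items():
          print(f"{name:10s}: {v / L:5.1f} BL/s, tau = {L / v:.2f} s")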

  8. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of x (the outermost region being x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
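
    As an illustration of the technique, though not of this routine's own coefficients or region boundaries, the sketch below implements one classic rational approximation (Abramowitz and Stegun 7.1.26, accurate to roughly 1.5e-7) together with the identity-based switching described above.

      import math

      _P = 0.3275911
      _A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

      def erf(x: float) -> float:
          """Rational approximation of erf (Abramowitz & Stegun 7.1.26)."""
          s = math.copysign(1.0, x)
          x = abs(x)
          t = 1.0 / (1.0 + _P * x)
          poly = sum(a * t ** (i + 1) for i, a in enumerate(_A))
          return s * (1.0 - poly * math.exp(-x * x))

      def erfc(x: float) -> float:
          # For large positive x this subtraction loses significance, which is
          # exactly why production routines compute erfc directly there, as
          # the record's caution notes.
          return 1.0 - erf(x)

      print(erf(1.0), math.erf(1.0))   # compare against the library value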

  9. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Full Text Available Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server, which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds, and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.

  10. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
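
    The effect of dividing the prediction error by the reward standard deviation can be shown with a toy delta-rule learner. Everything below (learning rate, reward distributions, trial counts) is hypothetical, and the reinforcement learning models actually fitted in the study are richer than this sketch.

      import numpy as np

      rng = np.random.default_rng(8)

      def track(rewards, sd, alpha=0.5, scale_by_sd=False):
          """Delta-rule prediction of reward magnitude; optionally the
          prediction error is divided by the distribution's SD."""
          v, preds = 0.0, []
          for r in rewards:
              preds.append(v)
              delta = r - v
              v += alpha * (delta / sd if scale_by_sd else delta)
          return np.array(preds)

      for sd in (5.0, 15.0):
          rewards = rng.normal(50.0, sd, size=1000)
          for scaled in (False, True):
              preds = track(rewards, sd, scale_by_sd=scaled)
              rmse = np.sqrt(np.mean((preds[500:] - 50.0) ** 2))  # accuracy vs true mean
              print(f"SD={sd:4.1f} scaled={scaled!s:5}: prediction RMSE = {rmse:.2f}")

    Scaling makes the asymptotic accuracy much less sensitive to the spread of the rewards, which is the qualitative pattern the study reports for "adapters".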

  11. Did I Do That? Expectancy Effects of Brain Stimulation on Error-related Negativity and Sense of Agency.

    Science.gov (United States)

    Hoogeveen, Suzanne; Schjoedt, Uffe; van Elk, Michiel

    2018-06-19

    This study examines the effects of expected transcranial stimulation on the error(-related) negativity (Ne or ERN) and the sense of agency in participants who perform a cognitive control task. Placebo transcranial direct current stimulation was used to elicit expectations of transcranially induced cognitive improvement or impairment. The improvement/impairment manipulation affected both the Ne/ERN and the sense of agency (i.e., whether participants attributed errors to oneself or to the brain stimulation device): expected improvement increased the ERN in response to errors compared with both the impairment and control conditions. Expected impairment made participants falsely attribute errors to the transcranial stimulation. This decrease in sense of agency was correlated with a reduced ERN amplitude. These results show that expectations about transcranial stimulation impact users' neural response to self-generated errors and the attribution of responsibility, especially when actions lead to negative outcomes. We discuss our findings in relation to predictive processing theory, according to which the effect of prior expectations on the ERN reflects the brain's attempt to generate predictive models of incoming information. By demonstrating that induced expectations about transcranial stimulation can have effects at a neural level, that is, beyond mere demand characteristics, our findings highlight the potential of placebo brain stimulation as a promising tool for research.

  12. Relations between the efficiency, power and dissipation for linear irreversible heat engine at maximum trade-off figure of merit

    Science.gov (United States)

    Iyyappan, I.; Ponmurugan, M.

    2018-03-01

    A trade-off figure of merit (Ω̇) criterion accounts for the best compromise between the useful input energy and the lost input energy of heat devices. When the heat engine is working at the maximum Ω̇ criterion, its efficiency increases significantly over the efficiency at maximum power. We derive the general relations between the power, the efficiency at the maximum Ω̇ criterion, and the minimum dissipation for the linear irreversible heat engine. The efficiency at the maximum Ω̇ criterion has the lower bound …

  13. Error quantification of osteometric data in forensic anthropology.

    Science.gov (United States)

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-04-10

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEM … Forensic Skeletal Material, 3rd edition). Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
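
    The TEM statistic itself is standard and easy to compute. The sketch below implements relative TEM for two sets of repeated measurements with the usual two-trial formula TEM = sqrt(sum(d^2) / 2N), expressed as a percentage of the grand mean; the femur lengths are invented.

      import numpy as np

      def relative_tem(m1, m2):
          """Technical error of measurement for two repeated trials on the
          same specimens, as a percentage of the grand mean."""
          m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
          tem = np.sqrt(np.sum((m1 - m2) ** 2) / (2 * len(m1)))
          return 100.0 * tem / np.concatenate([m1, m2]).mean()

      # Hypothetical repeated measurements of maximum femur length (mm).
      trial1 = [452.0, 448.5, 461.0, 439.0, 455.5]
      trial2 = [452.5, 449.0, 460.0, 439.5, 456.0]
      print(f"relative TEM = {relative_tem(trial1, trial2):.3f}%")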

  14. An error-related negativity potential investigation of response monitoring function in individuals with Internet addiction disorder

    Directory of Open Access Journals (Sweden)

    Zhenhe eZhou

    2013-09-01

    Full Text Available Internet addiction disorder (IAD) is an impulse disorder, or at least related to impulse control disorder. Deficits in executive functioning, including response monitoring, have been proposed as a hallmark feature of impulse control disorders. The error-related negativity (ERN) reflects an individual's ability to monitor behavior. Since IAD belongs to a compulsive-impulsive spectrum disorder, it should theoretically present the response monitoring functional deficit characteristics of disorders such as substance dependence, ADHD or alcohol abuse when tested with an Eriksen flanker task. Up to now, no studies on response monitoring functional deficits in IAD have been reported. The purpose of the present study was to examine whether IAD displays response monitoring functional deficit characteristics in a modified Eriksen flanker task. 23 subjects were recruited as the IAD group. 23 age-, gender- and education-matched healthy persons were recruited as the control group. All participants completed the modified Eriksen flanker task while measured with event-related potentials (ERPs). The IAD group made more total errors than did controls (P < 0.01); reaction times for total error responses in the IAD group were shorter than those of controls (P < 0.01). The mean ERN amplitudes of total error response conditions at frontal electrode sites and at central electrode sites of the IAD group were reduced compared with the control group (all P < 0.01). These results revealed that IAD displays response monitoring functional deficit characteristics and shares ERN characteristics of compulsive-impulsive spectrum disorder.

  15. On the problem of non-zero word error rates for fixed-rate error correction codes in continuous variable quantum key distribution

    International Nuclear Information System (INIS)

    Johnson, Sarah J; Ong, Lawrence; Shirvanimoghaddam, Mahyar; Lance, Andrew M; Symul, Thomas; Ralph, T C

    2017-01-01

    The maximum operational range of continuous variable quantum key distribution protocols has been shown to be improved by employing high-efficiency forward error correction codes. Typically, the secret key rate model for such protocols is modified to account for the non-zero word error rate of such codes. In this paper, we demonstrate that this model is incorrect: firstly, we show by example that fixed-rate error correction codes, as currently defined, can exhibit efficiencies greater than unity. Secondly, we show that using this secret key model combined with greater-than-unity efficiency codes implies that it is possible to achieve a positive secret key over an entanglement breaking channel—an impossible scenario. We then consider the secret key model from a post-selection perspective, and examine the implications for key rate if we constrain the forward error correction codes to operate at low word error rates. (paper)

  16. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  17. Comparative evaluation of the efficacy of occlusal splints fabricated in centric relation or maximum intercuspation in temporomandibular disorders patients

    OpenAIRE

    Hamata, Marcelo Matida; Zuim, Paulo Renato Junqueira; Garcia, Alicio Rosalino

    2009-01-01

    Fabrication of occlusal splints in centric relation for temporomandibular disorders (TMD) patients is arguable, since this position has been defined for asymptomatic stomatognathic system. Thus, maximum intercuspation might be employed in patients with occlusal stability, eliminating the need for interocclusal records. This study compared occlusal splints fabricated in centric relation and maximum intercuspation in muscle pain reduction of TMD patients. Twenty patients with TMD of myogenous o...

  18. Influence of Ephemeris Error on GPS Single Point Positioning Accuracy

    Science.gov (United States)

    Lihua, Ma; Wang, Meng

    2013-09-01

    The Global Positioning System (GPS) user makes use of the navigation message transmitted from GPS satellites to compute its location. Because the receiver uses the satellite's location in position calculations, an ephemeris error, a difference between the expected and actual orbital position of a GPS satellite, reduces user accuracy. The extent of the influence is decided by the precision of the broadcast ephemeris uploaded by the control station. Simulation analysis with the Yuma almanac shows that the maximum positioning error occurs when the ephemeris error is along the line-of-sight (LOS) direction. Meanwhile, the error depends on the geometric relationship between the observer and the satellite constellation at a given time.
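
    The LOS dependence is a first-order projection, which the sketch below makes explicit; the satellite, receiver, and error vectors are illustrative numbers, not almanac values.

      import numpy as np

      # To first order, the pseudorange error caused by an ephemeris error is
      # the projection of the satellite position error onto the line of sight.
      sat = np.array([15_600e3, 7_540e3, 20_140e3])    # broadcast position (m)
      user = np.array([1_917e3, 6_029e3, 785e3])       # receiver position (m)
      eph_err = np.array([2.0, -1.0, 1.5])             # hypothetical error (m)

      los = (sat - user) / np.linalg.norm(sat - user)  # unit line-of-sight vector
      print(f"LOS-projected range error: {eph_err @ los:+.2f} m "
            f"of {np.linalg.norm(eph_err):.2f} m total")

    The projected error, and hence the positioning error, is largest when the ephemeris error vector lies along the line of sight, as the simulation in the record concludes.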

  19. Research on the Method of Noise Error Estimation of Atomic Clocks

    Science.gov (United States)

    Song, H. J.; Dong, S. W.; Li, W.; Zhang, J. H.; Jing, Y. J.

    2017-05-01

    Simulation methods for the different noise types of atomic clocks are given. The frequency flicker noise of an atomic clock is studied using Markov process theory. The method for estimating the maximum interval error of frequency white noise is studied using Wiener process theory. Based on the operation of 9 cesium atomic clocks in the time and frequency reference laboratory of NTSC (National Time Service Center), the noise coefficients of the power-law spectrum model are estimated, and simulations are carried out according to the noise models. Finally, the maximum interval error estimates of the frequency white noise generated by the 9 cesium atomic clocks are obtained.
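
    For the white frequency noise case the clock's time error is a Wiener process, so the maximum interval error can be estimated by direct simulation, as sketched below with an invented noise level; this is a schematic of the approach, not NTSC's procedure.

      import numpy as np

      rng = np.random.default_rng(9)

      # White FM noise: the time error x(t) is a Wiener process whose
      # increments over tau0 have standard deviation sigma_y(1 s) * sqrt(tau0).
      sigma_y, tau0 = 1e-13, 100.0          # hypothetical noise level, step (s)
      days, runs = 30, 500
      steps = int(days * 86400 / tau0)

      max_err = np.empty(runs)
      for i in range(runs):
          x = np.cumsum(sigma_y * np.sqrt(tau0) * rng.normal(size=steps))
          max_err[i] = np.abs(x).max()

      print(f"30-day MIE (95th percentile) ~= {np.quantile(max_err, 0.95) * 1e9:.2f} ns")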

  20. Triaxial Accelerometer Error Coefficients Identification with a Novel Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-01-01

    Full Text Available Artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Triaxial accelerometer error coefficients are relatively unstable with environmental disturbances and aging of the instrument. Therefore, identifying triaxial accelerometer error coefficients accurately and at low cost is of great importance for improving the overall performance of triaxial accelerometer-based strapdown inertial navigation systems (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA), which eliminates the demerits of AFSA (failure to use artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost), is introduced first. In NAFSA, the functional behaviors and overall procedure of AFSA have been improved with some parameter variations. Second, a hybrid accelerometer error coefficient identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for triaxial accelerometer error coefficient identification. Furthermore, the NAFSA-identified coefficients are verified with a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment. The merits of MCS-NAFSA are compared with those of the conventional calibration method and optimal AFSA. Finally, both experimental results demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficient identification.

  1. Optical losses due to tracking error estimation for a low concentrating solar collector

    International Nuclear Information System (INIS)

    Sallaberry, Fabienne; García de Jalón, Alberto; Torres, José-Luis; Pujol-Nadal, Ramón

    2015-01-01

    Highlights: • A solar thermal collector with low concentration and one-axis tracking was tested. • A quasi-dynamic testing procedure for the IAM was defined for the tracking collector. • The match between the concentrator optics and the tracking accuracy was checked. • The maximum and long-term optical losses due to tracking error were calculated. - Abstract: The determination of the accuracy of a solar tracker used in domestic hot water solar collectors is not yet standardized. However, when optical concentration devices are used, it is important to employ a solar tracker whose precision is adequate for the specific optical concentration factor. Otherwise, the concentrator sustains high optical losses due to inadequate focusing of the solar radiation onto its receiver, despite being of good quality. This study focuses on the estimation of long-term optical losses due to the tracking error of a low-temperature collector using low-concentration optics. For this purpose, a testing procedure for the incidence angle modifier on the tracking plane is proposed to determine the acceptance angle of the concentrator, even with different longitudinal incidence angles along the focal line plane. Then, the impact of the maximum tracking error angle on the optical efficiency is determined. Finally, the long-term optical error due to tracking errors is calculated using the design angular tracking error declared by the manufacturer. The maximum tracking error calculated for this collector implies an optical loss of about 8.5%, which is high, but the average long-term optical loss calculated for one year was about 1%, which is reasonable for such collectors used for domestic hot water.
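
    The long-term-loss calculation reduces to a weighted average; the sketch below is our construction, where the acceptance curve and the tracking-error distribution are made-up stand-ins for the measured IAM and the manufacturer's tracking specification.

    ```python
    import numpy as np

    def optical_efficiency(err_deg, acceptance_deg=1.5):
        """Toy acceptance curve: flat inside the acceptance angle, then linear roll-off."""
        e = np.abs(err_deg)
        return np.clip(1.0 - np.maximum(0.0, e - acceptance_deg) / acceptance_deg, 0.0, 1.0)

    rng = np.random.default_rng(1)
    # assumed tracking-error samples over a year (deg), e.g. from the tracker spec
    errors = rng.normal(0.0, 0.5, size=100_000)

    max_loss = 1.0 - optical_efficiency(np.abs(errors).max())
    long_term_loss = 1.0 - optical_efficiency(errors).mean()
    print(f"maximum loss: {max_loss:.1%}, long-term average loss: {long_term_loss:.1%}")
    ```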

  2. An individual differences approach to multiple-target visual search errors: How search errors relate to different characteristics of attention.

    Science.gov (United States)

    Adamo, Stephen H; Cain, Matthew S; Mitroff, Stephen R

    2017-12-01

    A persistent problem in visual search is that searchers are more likely to miss a target if they have already found another in the same display. This phenomenon, the Subsequent Search Miss (SSM) effect, has persisted despite being a known issue for decades. Increasingly, evidence supports a resource depletion account of SSM errors: a previously detected target consumes attentional resources, leaving fewer resources available for the processing of a second target. However, "attention" is broadly defined and is composed of many different characteristics, leaving considerable uncertainty about how attention affects second-target detection. The goal of the current study was to identify which attentional characteristics (i.e., selection, limited capacity, modulation, and vigilance) relate to second-target misses. The current study compared second-target misses to performance on an attentional blink task and a vigilance task, which both have established measures that were used to operationally define each of the four attentional characteristics. Second-target misses in the multiple-target search were correlated with (1) a measure of the time it took for second-target detection to recover from the blink in the attentional blink task (i.e., modulation), and (2) target sensitivity (d') in the vigilance task (i.e., vigilance). Participants with longer recovery and poorer vigilance had more second-target misses in the multiple-target visual search task. The results add further support to a resource depletion account of SSM errors and highlight that worse modulation and poor vigilance reflect a deficit in attentional resources that can account for SSM errors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  3. Online adaptation of a c-VEP Brain-computer Interface (BCI) based on error-related potentials and unsupervised learning.

    Science.gov (United States)

    Spüler, Martin; Rosenstiel, Wolfgang; Bogdan, Martin

    2012-01-01

    The goal of a Brain-Computer Interface (BCI) is to control a computer by pure brain activity. Recently, BCIs based on code-modulated visual evoked potentials (c-VEPs) have shown great potential to establish high-performance communication. In this paper we present a c-VEP BCI that uses online adaptation of the classifier to reduce calibration time and increase performance. We compare two different approaches for online adaptation of the system: an unsupervised method and a method that uses the detection of error-related potentials. Both approaches were tested in an online study, in which an average accuracy of 96% was achieved with adaptation based on error-related potentials. This accuracy corresponds to an average information transfer rate of 144 bit/min, which is the highest bitrate reported so far for a non-invasive BCI. In a free-spelling mode, the subjects were able to write with an average of 21.3 error-free letters per minute, which shows the feasibility of the BCI system in a normal-use scenario. In addition we show that a calibration of the BCI system solely based on the detection of error-related potentials is possible, without knowing the true class labels.

  4. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
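
    For reference, tracking error volatility is conventionally computed as the standard deviation of the active return, portfolio minus benchmark; a minimal sketch with made-up returns:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    r_benchmark = rng.normal(0.0004, 0.01, 252)            # daily benchmark returns
    r_portfolio = r_benchmark + rng.normal(0, 0.002, 252)  # active bets around the benchmark

    active = r_portfolio - r_benchmark
    tev_daily = active.std(ddof=1)
    tev_annual = tev_daily * np.sqrt(252)                  # annualized TEV
    print(f"TEV: {tev_annual:.2%} per year")
    ```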

  5. Electrophysiological Endophenotypes and the Error-Related Negativity (ERN) in Autism Spectrum Disorder: A Family Study

    Science.gov (United States)

    Clawson, Ann; South, Mikle; Baldwin, Scott A.; Larson, Michael J.

    2017-01-01

    We examined the error-related negativity (ERN) as an endophenotype of ASD by comparing the ERN in families of ASD probands to control families. We hypothesized that ASD probands and families would display reduced-amplitude ERN relative to controls. Participants included 148 individuals within 39 families consisting of a mother, father, sibling,…

  6. Enhanced error related negativity amplitude in medication-naïve, comorbidity-free obsessive compulsive disorder.

    Science.gov (United States)

    Nawani, Hema; Narayanaswamy, Janardhanan C; Basavaraju, Shrinivasa; Bose, Anushree; Mahavir Agarwal, Sri; Venkatasubramanian, Ganesan; Janardhan Reddy, Y C

    2018-04-01

    Error monitoring and response inhibition are key cognitive deficits in obsessive-compulsive disorder (OCD). Frontal midline regions such as the cingulate cortex and pre-supplementary motor area are considered critical brain substrates of this deficit. The electrophysiological correlate of this dysfunction is a fronto-central event-related potential (ERP) that occurs after an error, called the error-related negativity (ERN). In this study, we sought to compare ERN parameters between medication-naïve, comorbidity-free subjects with OCD and healthy controls (HC). Age-, sex- and handedness-matched subjects with medication-naïve, comorbidity-free OCD (N = 16) and healthy controls (N = 17) performed a modified version of the flanker task while EEG was acquired for the ERN. EEG signals were recorded from electrodes FCz and Cz. Clinical severity of OCD was assessed using the Yale-Brown Obsessive Compulsive Scale. The subjects with OCD had significantly greater ERN amplitude at Cz and FCz. There were no significant correlations between ERN measures and illness severity measures. Overactive performance monitoring, as evidenced by enhanced ERN amplitude, could be considered a biomarker for OCD. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. Adaptive color halftoning for minimum perceived error using the blue noise mask

    Science.gov (United States)

    Yu, Qing; Parker, Kevin J.

    1997-04-01

    Color halftoning using a conventional screen requires careful selection of screen angles to avoid Moire patterns. An obvious advantage of halftoning using a blue noise mask (BNM) is that no conventional screen angles or Moire patterns are produced. However, the simple strategy of employing the same BNM on all color planes is unacceptable in cases where a small registration error can cause objectionable color shifts. In a previous paper by Yao and Parker, strategies were presented for shifting or inverting the BNM as well as using mutually exclusive BNMs for different color planes. In this paper, the above schemes are studied in CIE-LAB color space in terms of root mean square error and variance for the luminance and chrominance channels respectively. We demonstrate that the dot-on-dot scheme results in minimum chrominance error but maximum luminance error, that the 4-mask scheme results in minimum luminance error but maximum chrominance error, and that the shift scheme falls in between. Based on this study, we propose a new adaptive color halftoning algorithm that takes colorimetric color reproduction into account by applying two mutually exclusive BNMs on two color planes and an adaptive scheme on the other planes to reduce color error. We show that by having one adaptive color channel, we obtain increased flexibility to manipulate the output so as to reduce colorimetric error while permitting customization to specific printing hardware.
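
    A toy sketch of the mask strategies follows (our construction: a random permutation threshold array stands in for a true blue noise mask, and the ink levels are arbitrary). It shows why dot-on-dot maximizes dot overlap between two ink planes while a shifted mask decorrelates them, which is the mechanism behind the luminance/chrominance trade-off described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    # stand-in threshold array in [0, 1); a real BNM has a blue-noise dot arrangement
    mask = rng.permutation(64 * 64).reshape(64, 64) / (64 * 64)

    cyan_level, magenta_level = 0.3, 0.3
    cyan = cyan_level > mask               # dot placed where the level exceeds the threshold
    magenta_same = magenta_level > mask    # dot-on-dot: identical dot pattern
    magenta_shift = magenta_level > np.roll(mask, (17, 23), axis=(0, 1))  # shifted mask

    print("overlap, same mask:   ", (cyan & magenta_same).mean())   # maximal overlap
    print("overlap, shifted mask:", (cyan & magenta_shift).mean())  # ~ product of levels
    ```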

  8. Measuring galaxy cluster masses with CMB lensing using a Maximum Likelihood estimator: statistical and systematic error budgets for future experiments

    Energy Technology Data Exchange (ETDEWEB)

    Raghunathan, Srinivasan; Patil, Sanjaykumar; Bianchini, Federico; Reichardt, Christian L. [School of Physics, University of Melbourne, 313 David Caro building, Swanston St and Tin Alley, Parkville VIC 3010 (Australia); Baxter, Eric J. [Department of Physics and Astronomy, University of Pennsylvania, 209 S. 33rd Street, Philadelphia, PA 19104 (United States); Bleem, Lindsey E. [Argonne National Laboratory, High-Energy Physics Division, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Crawford, Thomas M. [Kavli Institute for Cosmological Physics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States); Holder, Gilbert P. [Department of Astronomy and Department of Physics, University of Illinois, 1002 West Green St., Urbana, IL 61801 (United States); Manzotti, Alessandro, E-mail: srinivasan.raghunathan@unimelb.edu.au, E-mail: s.patil2@student.unimelb.edu.au, E-mail: ebax@sas.upenn.edu, E-mail: federico.bianchini@unimelb.edu.au, E-mail: bleeml@uchicago.edu, E-mail: tcrawfor@kicp.uchicago.edu, E-mail: gholder@illinois.edu, E-mail: manzotti@uchicago.edu, E-mail: christian.reichardt@unimelb.edu.au [Department of Astronomy and Astrophysics, University of Chicago, 5640 South Ellis Avenue, Chicago, IL 60637 (United States)

    2017-08-01

    We develop a Maximum Likelihood estimator (MLE) to measure the masses of galaxy clusters through the impact of gravitational lensing on the temperature and polarization anisotropies of the cosmic microwave background (CMB). We show that, at low noise levels in temperature, this optimal estimator outperforms the standard quadratic estimator by a factor of two. For polarization, we show that the Stokes Q/U maps can be used instead of the traditional E- and B-mode maps without losing information. We test and quantify the bias in the recovered lensing mass for a comprehensive list of potential systematic errors. Using realistic simulations, we examine the cluster mass uncertainties from CMB-cluster lensing as a function of an experiment's beam size and noise level. We predict the cluster mass uncertainties will be 3 - 6% for SPT-3G, AdvACT, and Simons Array experiments with 10,000 clusters and less than 1% for the CMB-S4 experiment with a sample containing 100,000 clusters. The mass constraints from CMB polarization are very sensitive to the experimental beam size and map noise level: for a factor of three reduction in either the beam size or noise level, the lensing signal-to-noise improves by roughly a factor of two.

  9. State estimation bias induced by optimization under uncertainty and error cost asymmetry is likely reflected in perception.

    Science.gov (United States)

    Shimansky, Y P

    2011-05-01

    It is well known from numerous studies that perception can be significantly affected by intended action in many everyday situations, indicating that perception and related decision-making is not a simple, one-way sequence, but a complex iterative cognitive process. However, the underlying functional mechanisms are yet unclear. Based on an optimality approach, a quantitative computational model of one such mechanism has been developed in this study. It is assumed in the model that significant uncertainty about task-related parameters of the environment results in parameter estimation errors and an optimal control system should minimize the cost of such errors in terms of the optimality criterion. It is demonstrated that, if the cost of a parameter estimation error is significantly asymmetrical with respect to error direction, the tendency to minimize error cost creates a systematic deviation of the optimal parameter estimate from its maximum likelihood value. Consequently, optimization of parameter estimate and optimization of control action cannot be performed separately from each other under parameter uncertainty combined with asymmetry of estimation error cost, thus making the certainty equivalence principle non-applicable under those conditions. A hypothesis that not only the action, but also perception itself is biased by the above deviation of parameter estimate is supported by ample experimental evidence. The results provide important insights into the cognitive mechanisms of interaction between sensory perception and planning an action under realistic conditions. Implications for understanding related functional mechanisms of optimal control in the CNS are discussed.
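
    The central effect is easy to reproduce numerically (a toy construction, not the author's model): with a Gaussian distribution over the parameter but an asymmetric quadratic error cost, the cost-minimizing estimate deviates systematically from the maximum likelihood value.

    ```python
    import numpy as np

    mu, sigma = 0.0, 1.0                    # Gaussian uncertainty about the parameter
    samples = np.random.default_rng(4).normal(mu, sigma, 100_000)

    def expected_cost(estimate, k_over=5.0, k_under=1.0):
        """Quadratic cost, but overestimation is penalized 5x more than underestimation."""
        err = estimate - samples            # positive = overestimation
        return np.mean(np.where(err > 0, k_over * err**2, k_under * err**2))

    grid = np.linspace(-2, 2, 401)
    best = grid[np.argmin([expected_cost(g) for g in grid])]
    print(f"ML estimate: {mu:.2f}, cost-optimal estimate: {best:.2f}")  # shifted below the mean
    ```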

  10. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
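
    A 1D toy of the one-step-late (OSL) update idea follows (our simplification, not the authors' implementation; the system matrix, count levels and penalty strength are made up): two longitudinal scans are reconstructed together, each multiplicative EM update penalizing voxel-wise differences from the other time point.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 64
    idx = np.arange(n)
    A = np.maximum(0.0, 1.0 - np.abs(np.subtract.outer(idx, idx)) / 3.0)  # toy blurring system

    truth1 = np.ones(n); truth1[20:30] = 4.0   # scan 1: lesion present
    truth2 = np.ones(n); truth2[20:30] = 2.5   # scan 2: lesion after treatment
    y1 = rng.poisson(50.0 * (A @ truth1))      # Poisson count data
    y2 = rng.poisson(50.0 * (A @ truth2))

    def osl_update(lam, other, y, beta):
        """One ML-EM step with a one-step-late quadratic coupling to the other scan."""
        sens = A.sum(axis=0)                   # A^T 1
        grad_penalty = beta * (lam - other)    # gradient of 0.5*beta*(lam - other)^2
        ratio = A.T @ (y / np.maximum(A @ lam, 1e-9))
        return lam * ratio / np.maximum(sens + grad_penalty, 1e-9)

    lam1 = np.full(n, 50.0)
    lam2 = np.full(n, 50.0)
    for _ in range(200):
        lam1, lam2 = (osl_update(lam1, lam2, y1, beta=0.1),
                      osl_update(lam2, lam1, y2, beta=0.1))  # Jacobi-style joint update
    print("lesion means:", lam1[20:30].mean(), lam2[20:30].mean())  # near 200 and 125
    ```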

  11. (AJST) RELATIVE EFFICIENCY OF NON-PARAMETRIC ERROR ...

    African Journals Online (AJOL)

    NORBERT OPIYO AKECH

    The error rates are estimated on 100 bootstrap samples, a sample of size n being taken with replacement from each initial sample of size n, and are related to the overlap (or optimal error rate) of the populations.

  12. Relative azimuth inversion by way of damped maximum correlation estimates

    Science.gov (United States)

    Ringler, A.T.; Edwards, J.D.; Hutt, C.R.; Shelly, F.

    2012-01-01

    Horizontal seismic data are utilized in a large number of Earth studies. Such work depends on the published orientations of the sensitive axes of seismic sensors relative to true North. These orientations can be estimated using a number of different techniques: SensOrLoc (Sensitivity, Orientation and Location), comparison to synthetics (Ekstrom and Busby, 2008), or by way of magnetic compass. Current methods for finding relative station azimuths are unable to do so with arbitrary precision quickly because of limitations in the algorithms (e.g. grid search methods). Furthermore, in order to determine instrument orientations during station visits, it is critical that any analysis software be easily run on a large number of different computer platforms and the results be obtained quickly while on site. We developed a new technique for estimating relative sensor azimuths by inverting for the orientation with the maximum correlation to a reference instrument, using a non-linear parameter estimation routine. By making use of overlapping windows, we are able to make multiple azimuth estimates, which helps to identify the confidence of our azimuth estimate, even when the signal-to-noise ratio (SNR) is low. Finally, our algorithm has been written as a stand-alone, platform independent, Java software package with a graphical user interface for reading and selecting data segments to be analyzed.
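
    The core estimation idea fits in a few lines (our construction with synthetic signals; the paper's Java implementation adds overlapping windows and damping): rotate the test sensor's horizontal components through candidate azimuths and keep the angle that maximizes correlation with the reference sensor.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(6)
    n = 5000
    ref_n = rng.standard_normal(n)                 # reference north component
    ref_e = rng.standard_normal(n)                 # reference east component
    theta_true = np.deg2rad(12.5)                  # unknown misorientation
    test_n = np.cos(theta_true) * ref_n - np.sin(theta_true) * ref_e
    test_e = np.sin(theta_true) * ref_n + np.cos(theta_true) * ref_e
    test_n += 0.1 * rng.standard_normal(n)         # sensor noise

    def neg_corr(theta):
        rot_n = np.cos(theta) * test_n + np.sin(theta) * test_e  # rotate back by theta
        return -np.corrcoef(rot_n, ref_n)[0, 1]

    res = minimize_scalar(neg_corr, bounds=(-np.pi, np.pi), method="bounded")
    print(f"estimated azimuth: {np.rad2deg(res.x):.2f} deg (true 12.50)")
    ```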

  13. Method for evaluation of risk due to seismic related design and construction errors based on past reactor experience

    International Nuclear Information System (INIS)

    Gonzalez Cuesta, M.; Okrent, D.

    1985-01-01

    This paper proposes a methodology for quantification of risk due to seismic related design and construction errors in nuclear power plants, based on information available on errors discovered in the past. For the purposes of this paper, an error is defined as any event that causes the seismic safety margins of a nuclear power plant to be smaller than implied by current regulatory requirements and industry common practice. Also, the actual reduction in the safety margins caused by the error will be called a deficiency. The method is based on a theoretical model of errors, called a deficiency logic diagram. First, an ultimate cause is present. This ultimate cause is consummated in a specific instance, called the originating error. As originating errors may occur in actions that are applied a number of times, a deficiency generation system may be involved. Quality assurance activities will hopefully identify most of these deficiencies and request their disposition. However, the quality assurance program is not perfect, and some operating plant deficiencies may persist, causing different levels of impact on the plant logic. The paper provides a way of extrapolating information about errors discovered in plants under construction in order to assess the risk due to errors that have not been discovered

  14. Maximum likelihood estimation for integrated diffusion processes

    DEFF Research Database (Denmark)

    Baltazar-Larios, Fernando; Sørensen, Michael

    We propose a method for obtaining maximum likelihood estimates of parameters in diffusion models when the data is a discrete time sample of the integral of the process, while no direct observations of the process itself are available. The data are, moreover, assumed to be contaminated by measurement errors. Integrated volatility is an example of this type of observations. Another example is ice-core data on oxygen isotopes used to investigate paleo-temperatures. The data can be viewed as incomplete observations of a model with a tractable likelihood function. Therefore we propose a simulated EM-algorithm to obtain maximum likelihood estimates of the parameters in the diffusion model. As part of the algorithm, we use a recent simple method for approximate simulation of diffusion bridges. In simulation studies for the Ornstein-Uhlenbeck process and the CIR process the proposed method works well.
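
    The observation scheme is easy to simulate (our construction; the OU parameters, noise level and grids are arbitrary choices): the process itself stays hidden, and only noisy integrals over successive intervals are recorded.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    theta, mu, sigma = 0.5, 0.0, 0.3   # OU parameters: dX = theta*(mu - X) dt + sigma dW
    dt, sub = 1.0, 100                  # observation interval, fine steps per interval
    h = dt / sub
    n_obs, tau = 500, 0.05              # number of observations, measurement-noise sd

    x = mu
    integrals = []
    for _ in range(n_obs):
        acc = 0.0
        for _ in range(sub):            # Euler-Maruyama on the fine grid
            acc += x * h                # accumulate the integral of X over the interval
            x += theta * (mu - x) * h + sigma * np.sqrt(h) * rng.standard_normal()
        integrals.append(acc)

    y = np.array(integrals) + tau * rng.standard_normal(n_obs)  # contaminated data
    print(y[:5])
    ```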

  15. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
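
    For readers unfamiliar with the integrator, a generic sketch follows (a standard harmonic-oscillator example of symplectic Euler, not the paper's control problem): for a Hamiltonian H(q,p) = p²/2 + V(q), the momentum is kicked first and the position then drifts with the updated momentum, which preserves the symplectic structure.

    ```python
    import numpy as np

    def symplectic_euler(q0, p0, grad_V, dt, steps):
        q, p = q0, p0
        traj = [(q, p)]
        for _ in range(steps):
            p = p - dt * grad_V(q)   # kick: p_{n+1} = p_n - dt * dV/dq(q_n)
            q = q + dt * p           # drift: q_{n+1} = q_n + dt * p_{n+1}
            traj.append((q, p))
        return np.array(traj)

    # harmonic oscillator V(q) = q^2/2: energy stays bounded even for long runs
    traj = symplectic_euler(1.0, 0.0, lambda q: q, dt=0.1, steps=1000)
    energy = 0.5 * traj[:, 1]**2 + 0.5 * traj[:, 0]**2
    print(f"energy drift: {energy.max() - energy.min():.4f}")  # small, no secular growth
    ```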

  16. MO-FG-BRA-06: Electromagnetic Beacon Insertion in Lung Cancer Patients and Resultant Surrogacy Errors for Dynamic MLC Tumour Tracking

    Energy Technology Data Exchange (ETDEWEB)

    Hardcastle, N; Booth, J; Caillet, V; Haddad, C; Crasta, C [Royal North Shore Hospital, St. Leonards, NSW (Australia); O’Brien, R; Keall, P [University of Sydney, Sydney, NSW (Australia); Szymura, K [Royal North Shore Hospital, Sydney, NSW (Australia)

    2016-06-15

    Purpose: To assess endo-bronchial electromagnetic beacon insertion and to quantify the geometric accuracy of using beacons as a surrogate for tumour motion in real-time multileaf collimator (MLC) tracking of lung tumours. Methods: The LIGHT SABR trial is a world-first clinical trial in which the MLC leaves move with lung tumours in real time on a standard linear accelerator. Tracking is performed based on implanted electromagnetic beacons (Calypso™, Varian Medical Systems, USA) as a surrogate for tumour motion. Five patients have been treated and have each had three beacons implanted endo-bronchially under fluoroscopic guidance. The centre of mass (C.O.M) has been used to adapt the MLC in real-time. The geometric error in using the beacon C.O.M as a surrogate for tumour motion was quantified by measuring the tumour and beacon C.O.M in all phases of the respiratory cycle of a 4DCT. The surrogacy error was defined as the difference in beacon and tumour C.O.M relative to the reference phase (maximum exhale). Results: All five patients have had three beacons successfully implanted with no migration between simulation and end of treatment. Beacon placement relative to tumour C.O.M varied from 14 to 74 mm and in one patient spanned two lobes. Surrogacy error was measured in each patient on the simulation 4DCT and ranged from 0 to 3 mm. Surrogacy error as measured on 4DCT was subject to artefacts in mid-ventilation phases. Surrogacy error was a function of breathing phase and was typically larger at maximum inhale. Conclusion: Beacon placement and thus surrogacy error is a major component of geometric uncertainty in MLC tracking of lung tumours. Surrogacy error must be measured on each patient and incorporated into margin calculation. Reduction of surrogacy error is limited by airway anatomy; it should nevertheless be taken into consideration when performing beacon insertion and planning. This research is funded by Varian Medical Systems via a collaborative research agreement.
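
    The surrogacy-error definition reduces to a few lines of arithmetic; the sketch below uses toy centre-of-mass trajectories (made-up numbers, not patient data) with maximum exhale as the reference phase.

    ```python
    import numpy as np

    # hypothetical centre-of-mass positions (mm) over 10 breathing phases, (x, y, z)
    tumour = np.array([[0,0,0],[1,0,2],[2,0,5],[3,0,8],[3,0,10],
                       [3,0,9],[2,0,7],[2,0,4],[1,0,2],[0,0,1]], float)
    beacon = np.array([[0,0,0],[1,0,2],[2,0,4],[2,0,7],[3,0,8],
                       [2,0,8],[2,0,6],[1,0,4],[1,0,2],[0,0,1]], float)

    ref = 0                                  # maximum-exhale reference phase
    tumour_disp = tumour - tumour[ref]       # displacement relative to the reference
    beacon_disp = beacon - beacon[ref]
    surrogacy_error = np.linalg.norm(beacon_disp - tumour_disp, axis=1)
    print(surrogacy_error)                   # largest near maximum inhale (mid phases)
    print("max surrogacy error: %.1f mm" % surrogacy_error.max())
    ```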

  17. MO-FG-BRA-06: Electromagnetic Beacon Insertion in Lung Cancer Patients and Resultant Surrogacy Errors for Dynamic MLC Tumour Tracking

    International Nuclear Information System (INIS)

    Hardcastle, N; Booth, J; Caillet, V; Haddad, C; Crasta, C; O’Brien, R; Keall, P; Szymura, K

    2016-01-01

    Purpose: To assess endo-bronchial electromagnetic beacon insertion and to quantify the geometric accuracy of using beacons as a surrogate for tumour motion in real-time multileaf collimator (MLC) tracking of lung tumours. Methods: The LIGHT SABR trial is a world-first clinical trial in which the MLC leaves move with lung tumours in real time on a standard linear accelerator. Tracking is performed based on implanted electromagnetic beacons (Calypso™, Varian Medical Systems, USA) as a surrogate for tumour motion. Five patients have been treated and have each had three beacons implanted endo-bronchially under fluoroscopic guidance. The centre of mass (C.O.M) has been used to adapt the MLC in real-time. The geometric error in using the beacon C.O.M as a surrogate for tumour motion was quantified by measuring the tumour and beacon C.O.M in all phases of the respiratory cycle of a 4DCT. The surrogacy error was defined as the difference in beacon and tumour C.O.M relative to the reference phase (maximum exhale). Results: All five patients have had three beacons successfully implanted with no migration between simulation and end of treatment. Beacon placement relative to tumour C.O.M varied from 14 to 74 mm and in one patient spanned two lobes. Surrogacy error was measured in each patient on the simulation 4DCT and ranged from 0 to 3 mm. Surrogacy error as measured on 4DCT was subject to artefacts in mid-ventilation phases. Surrogacy error was a function of breathing phase and was typically larger at maximum inhale. Conclusion: Beacon placement and thus surrogacy error is a major component of geometric uncertainty in MLC tracking of lung tumours. Surrogacy error must be measured on each patient and incorporated into margin calculation. Reduction of surrogacy error is limited by airway anatomy; it should nevertheless be taken into consideration when performing beacon insertion and planning. This research is funded by Varian Medical Systems via a collaborative research agreement.

  18. A Novel Artificial Fish Swarm Algorithm for Recalibration of Fiber Optic Gyroscope Error Parameters

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-05-01

    Full Text Available The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of a FOG-based strapdown inertial navigation system (SINS) degrades. This research is mainly on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, NAFSA avoids the demerits (e.g., lack of use of the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost) of the standard AFSA during the optimization process. To remove these weak points, the functional behaviors and the overall procedure of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm has been proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulation and experiments and its advantages are compared with those of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.

  19. A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.

    Science.gov (United States)

    Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong

    2015-05-05

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, which is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of a FOG-based strapdown inertial navigation system (SINS) degrades. This research is mainly on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, NAFSA avoids the demerits (e.g., lack of use of the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost) of the standard AFSA during the optimization process. To remove these weak points, the functional behaviors and the overall procedures of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm has been proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulation and experiments and its advantages are compared with those of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.

  20. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  1. PERFORMANCE OF OPPORTUNISTIC SPECTRUM ACCESS WITH SENSING ERROR IN COGNITIVE RADIO AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    N. ARMI

    2012-04-01

    Full Text Available Sensing in opportunistic spectrum access (OSA) is responsible for detecting channel availability through a binary hypothesis test between busy and idle states. If the channel is busy, the secondary user (SU) cannot access it and refrains from data transmission; the SU is allowed access when the primary user (PU) is not using the channel (idle state). However, the channel is sensed over an imperfect communication link: fading, noise and obstacles can cause sensing errors in PU signal detection. A false alarm detects an idle channel as busy, while a missed detection identifies a busy channel as idle. False alarms make the SU refrain from transmission and reduce the number of bits transmitted, whereas missed detections cause the SU to collide with PU transmissions. This paper studies the performance of greedy OSA with sensing errors under a restriction on the maximum collision probability (collision threshold) allowed by the PU network. SU throughput and spectrum capacity are used to evaluate OSA performance and are compared with the error-free case as a function of the number of slots in the greedy approach. The relations between throughput and signal-to-noise ratio (SNR) for different collision probabilities, as well as false detection at different SNRs, are presented. The obtained results show that CR users can gain reward from previous slots both with and without sensing errors, as indicated by the throughput improvement as the slot number increases. However, sensing on an imperfect channel with sensing errors degrades throughput. SU throughput and spectrum capacity improve as the maximum collision probability allowed by the PU network increases; however, due to frequent collisions with the PU, they decrease beyond a certain collision threshold. Computer simulation is used to evaluate and validate this work.
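
    A back-of-the-envelope model of the trade-off (our construction, not the paper's simulator; all probabilities below are assumed values) shows how false alarms waste idle slots and missed detections consume the collision budget.

    ```python
    # simple expected-throughput model for one sensing-then-transmit slot
    p_idle = 0.6            # probability the channel is idle in a slot
    p_fa = 0.1              # false alarm probability (idle sensed as busy)
    p_md = 0.05             # missed detection probability (busy sensed as idle)
    bits_per_slot = 1000

    useful = p_idle * (1 - p_fa) * bits_per_slot   # bits sent in correctly sensed idle slots
    collisions = (1 - p_idle) * p_md               # fraction of slots colliding with the PU

    collision_threshold = 0.04                     # maximum allowed by the PU network
    if collisions <= collision_threshold:
        print(f"throughput: {useful:.0f} bits/slot, collision prob {collisions:.3f}")
    else:
        # the SU must back off (transmit less often) to respect the threshold
        scale = collision_threshold / collisions
        print(f"throughput after back-off: {useful * scale:.0f} bits/slot")
    ```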

  2. Influences of optical-spectrum errors on excess relative intensity noise in a fiber-optic gyroscope

    Science.gov (United States)

    Zheng, Yue; Zhang, Chunxi; Li, Lijing

    2018-03-01

    The excess relative intensity noise (RIN) generated from broadband sources degrades the angular-random-walk performance of a fiber-optic gyroscope dramatically. Many methods have been proposed and managed to suppress the excess RIN. However, the properties of the excess RIN under the influences of different optical errors in the fiber-optic gyroscope have not been systematically investigated. Therefore, it is difficult for the existing RIN-suppression methods to achieve the optimal results in practice. In this work, the influences of different optical-spectrum errors on the power spectral density of the excess RIN are theoretically analyzed. In particular, the properties of the excess RIN affected by the raised-cosine-type ripples in the optical spectrum are elaborately investigated. Experimental measurements of the excess RIN corresponding to different optical-spectrum errors are in good agreement with our theoretical analysis, demonstrating its validity. This work provides a comprehensive understanding of the properties of the excess RIN under the influences of different optical-spectrum errors. Potentially, it can be utilized to optimize the configurations of the existing RIN-suppression methods by accurately evaluating the power spectral density of the excess RIN.

  3. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually on individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient and excessive professional courtesy cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  4. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters.

    Science.gov (United States)

    Song, Jin Woo; Park, Chan Gook

    2018-04-21

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms.
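
    A heavily simplified scalar sketch of the cascade (our construction; the real filters carry full INS error states) conveys the structure: stage 1 estimates the course-angle error from magnetic-minus-inertial course measurements, and stage 2 treats that estimate as a measurement of the IMU heading error.

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n = 500
    true_heading_err = np.cumsum(0.002 * rng.standard_normal(n)) + 2.0  # deg, slowly varying

    def scalar_kf(z, q, r, x0=0.0, p0=10.0):
        """Random-walk Kalman filter: x_k = x_{k-1} + w,  z_k = x_k + v."""
        x, p, out = x0, p0, []
        for zk in z:
            p += q                   # predict
            k = p / (p + r)          # Kalman gain
            x += k * (zk - x)        # update
            p *= (1 - k)
            out.append(x)
        return np.array(out)

    # stage 1: noisy course-error observations from waist-mounted magnetometers
    z_course = true_heading_err + 3.0 * rng.standard_normal(n)
    course_err_est = scalar_kf(z_course, q=1e-4, r=9.0)

    # stage 2: use the stage-1 estimate as the heading-error measurement
    heading_err_est = scalar_kf(course_err_est, q=1e-4, r=0.5)
    print(f"final heading error estimate: {heading_err_est[-1]:.2f} deg "
          f"(true {true_heading_err[-1]:.2f})")
    ```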

  5. The systematic and random errors determination using realtime 3D surface tracking system in breast cancer

    International Nuclear Information System (INIS)

    Kanphet, J; Suriyapee, S; Sanghangthum, T; Kumkhwao, J; Wisetrintong, M; Dumrongkijudom, N

    2016-01-01

    The purpose of this study is to determine patient setup uncertainties in deep inspiration breath-hold (DIBH) radiation therapy for left breast cancer patients using a real-time 3D surface tracking system. Six breast cancer patients treated with 6 MV photon beams from a TrueBeam linear accelerator were selected. Patient setup errors and motion during treatment were observed and calculated for interfraction and intrafraction motions. The systematic and random errors were calculated in the vertical, longitudinal and lateral directions. From 180 images tracked before and during treatment, the maximum systematic errors of interfraction and intrafraction motion were 0.56 mm and 0.23 mm, and the maximum random errors of interfraction and intrafraction motion were 1.18 mm and 0.53 mm, respectively. Interfraction motion was more pronounced than intrafraction motion, while the systematic error had less impact than the random error. In conclusion, the intrafraction motion error from patient setup uncertainty is about half of the interfraction motion error, its smaller impact reflecting the organ-motion stability achieved with DIBH. The systematic error is likewise about half of the random error, because modern linacs can reduce systematic uncertainty effectively, while random errors remain uncontrollable. (paper)
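
    One common population convention (e.g., van Herk-style statistics; the numbers below are toy data, not the study's measurements) computes the systematic error as the spread of per-patient mean errors and the random error as the RMS of per-patient standard deviations.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    # toy setup errors (mm) in one direction: 6 patients x 30 monitored fractions
    errors = rng.normal(loc=rng.normal(0, 0.5, size=(6, 1)), scale=1.0, size=(6, 30))

    patient_means = errors.mean(axis=1)
    patient_sds = errors.std(axis=1, ddof=1)

    systematic = patient_means.std(ddof=1)          # Sigma: inter-patient spread of means
    random_err = np.sqrt((patient_sds**2).mean())   # sigma: RMS of intra-patient SDs
    print(f"systematic error: {systematic:.2f} mm, random error: {random_err:.2f} mm")
    ```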

  6. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process by which a population of agents organized in a communication network names an object. Through pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that (1) learning errors slightly affect the convergence speed but distinctly increase the memory required of each agent during lexicon propagation; (2) the maximum number of different words held by the population increases linearly as the error rate increases; (3) without applying any strategy to eliminate learning errors, there is a threshold of learning error rate beyond which convergence is impaired.
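
    A minimal simulation in the spirit of NGLE follows (our simplification: a fully mixed toy population and a single corrupted-word error model, not the paper's code). A learning error replaces the transmitted word with a brand-new one, which always fails the interaction and inflates the hearer's lexicon.

    ```python
    import random
    random.seed(10)

    N, p_err, max_steps = 200, 0.05, 200_000
    lexicon = [[] for _ in range(N)]          # each agent's word inventory
    word_id = 0

    for step in range(max_steps):
        s, h = random.sample(range(N), 2)     # pick a speaker and a hearer
        if not lexicon[s]:
            word_id += 1
            lexicon[s].append(word_id)        # the speaker invents a new name
        word = random.choice(lexicon[s])
        if random.random() < p_err:           # learning error: the word is corrupted
            word_id += 1
            word = word_id
        if word in lexicon[h]:                # success: both collapse to this word
            lexicon[s] = [word]
            lexicon[h] = [word]
        else:
            lexicon[h].append(word)           # failure: the hearer memorizes the word
        if step % 100 == 0 and all(len(lex) == 1 for lex in lexicon) \
                and len({lex[0] for lex in lexicon}) == 1:
            print(f"consensus after {step} interactions")
            break
    else:
        print("no consensus within the step budget")
    ```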

  7. Estimation of Dynamic Errors in Laser Optoelectronic Dimension Gauges for Geometric Measurement of Details

    Directory of Open Access Journals (Sweden)

    Khasanov Zimfir

    2018-01-01

    Full Text Available The article reviews the capabilities and particularities of an approach to improving the metrological characteristics of fiber-optic pressure sensors (FOPS) based on the estimation of dynamic errors in laser optoelectronic dimension gauges for geometric measurement of details. It is shown that the proposed criteria yield new methods for conjugation of the optoelectronic converters in the dimension gauge for geometric measurements, in order to reduce the speed and volume requirements for the Random Access Memory (RAM) of the video controller which processes the signal. It is found that the lower the relative error, the higher the required interrogation speed of the CCD array. It is shown that the maximum achievable dynamic accuracy of the optoelectronic gauge is thus determined by the following conditions: the parameter stability of the electronic circuits in the CCD array and the microprocessor calculator; the linearity of their characteristics; and the error dynamics and noise in all electronic circuits of the CCD array and microprocessor calculator.

  8. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available The accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table on the workpiece side A΄) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, machining error is experienced. The compensation process consists of detecting the present tool path, analyzing the geometric errors of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created positions into new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
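
    The homogeneous-transformation mechanics can be sketched with a toy two-link chain (invented error values, not the paper's 43-term model): compose the nominal 4x4 link transforms, insert small error perturbations, and read off the TCP deviation.

    ```python
    import numpy as np

    def translation(x, y, z):
        T = np.eye(4); T[:3, 3] = [x, y, z]; return T

    def rot_z(theta):
        c, s = np.cos(theta), np.sin(theta)
        T = np.eye(4); T[:2, :2] = [[c, -s], [s, c]]; return T

    # nominal kinematic chain: two translations and a rotary axis
    nominal = translation(100, 0, 0) @ rot_z(np.deg2rad(30)) @ translation(0, 50, 0)

    # the same chain with small geometric errors (10 um offset, 0.01 deg angular error)
    actual = (translation(100.01, 0, 0) @ rot_z(np.deg2rad(30 + 0.01))
              @ translation(0, 50, 0.005))

    tcp = np.array([0, 0, 200, 1.0])          # tool centre point in the last frame
    deviation = (actual @ tcp - nominal @ tcp)[:3]
    print(f"TCP deviation (mm): {deviation}, norm = {np.linalg.norm(deviation):.4f}")
    ```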

  9. Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation

    Science.gov (United States)

    Prentice, J. S. C.

    2012-01-01

    An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
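
    The five-point scheme and the grid-comparison idea can be sketched compactly (our simplified two-grid version of the multi-grid error-estimation idea; the right-hand side and iteration counts are arbitrary choices).

    ```python
    import numpy as np

    def solve_poisson(n, f, iters=20_000):
        """Jacobi iteration for -laplace(u) = f on the unit square with u = 0 on the boundary."""
        h = 1.0 / (n + 1)
        u = np.zeros((n + 2, n + 2))
        x = np.linspace(0, 1, n + 2)
        F = f(*np.meshgrid(x, x, indexing="ij"))
        for _ in range(iters):
            u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                    + u[1:-1, :-2] + u[1:-1, 2:]
                                    + h * h * F[1:-1, 1:-1])
        return u

    f = lambda x, y: 2 * np.pi**2 * np.sin(np.pi * x) * np.sin(np.pi * y)
    coarse = solve_poisson(31, f)   # 33x33 grid including boundaries
    fine = solve_poisson(63, f)     # 65x65 grid: coarse nodes are every second fine node

    err_indicator = np.abs(fine[::2, ::2] - coarse).max()
    print(f"max difference between grids: {err_indicator:.2e}")
    ```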

  10. Error analysis of the phase-shifting technique when applied to shadow moire

    International Nuclear Information System (INIS)

    Han, Changwoon; Han Bongtae

    2006-01-01

    An exact solution for the intensity distribution of shadow moire fringes produced by a broad spectrum light is presented. A mathematical study quantifies errors in fractional fringe orders determined by the phase-shifting technique, and its validity is corroborated experimentally. The errors vary cyclically as the distance between the reference grating and the specimen increases. The amplitude of the maximum error is approximately 0.017 fringe, which defines the theoretical limit of resolution enhancement offered by the phase-shifting technique

  11. Error evaluation of inelastic response spectrum method for earthquake design

    International Nuclear Information System (INIS)

    Paz, M.; Wong, J.

    1981-01-01

    Two-story, four-story and ten-story shear-building-type frames subjected to earthquake excitation were analyzed at several levels of their yield resistance. These frames were subjected at their base to the motion recorded for the north-south component of the 1940 El Centro earthquake, and to an artificial earthquake that would produce the response spectral charts recommended for design. The frames were first subjected to 25% or 50% of the intensity level of these earthquakes. The resulting maximum relative displacement for each story of the frames was assumed to be the yield resistance for the subsequent analyses at 100% excitation intensity. The frames analyzed were uniform along their height, with stiffness adjusted so that the fundamental period was 0.20 seconds for the two-story frame, 0.40 seconds for the four-story frame and 1.0 second for the ten-story frame. Results of the study provided the following conclusions: (1) The percentage error in floor displacement for linear behavior was less than 10%; (2) The percentage error in floor displacement for inelastic (elastoplastic) behavior could be as high as 100%; (3) In most of the cases analyzed, the error increased with damping in the system; (4) As a general rule, the error increased as the modal yield resistance decreased; (5) The error was lower for the structures subjected to the 1940 El Centro earthquake than for the same structures subjected to an artificial earthquake generated from the response spectra for design. (orig./HP)

  12. Maximum Quantum Entropy Method

    OpenAIRE

    Sim, Jae-Hoon; Han, Myung Joon

    2018-01-01

    The maximum entropy method for analytic continuation is extended by introducing quantum relative entropy. This new method is formulated in terms of matrix-valued functions and is therefore invariant under arbitrary unitary transformations of the input matrix. As a result, the continuation of off-diagonal elements becomes straightforward. Without introducing any further ambiguity, the Bayesian probabilistic interpretation is maintained just as in the conventional maximum entropy method. The applications o...
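
    For reference, the quantum relative entropy the abstract refers to is, in standard notation (our addition, not an equation quoted from the paper):

    ```latex
    % Standard definition of quantum relative entropy: for density matrices
    % \rho and \sigma,
    S(\rho \,\|\, \sigma) = \operatorname{Tr}\!\left[\rho \left(\log \rho - \log \sigma\right)\right],
    % which reduces to the classical relative entropy \sum_i p_i \log(p_i/q_i)
    % when \rho and \sigma commute.
    ```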

  13. Rectangular maximum-volume submatrices and their applications

    KAUST Repository

    Mikhalev, Aleksandr; Oseledets, I.V.

    2017-01-01

    We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.
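
    The definition is easy to state in code (our reading of the abstract; the brute-force search below is only a stand-in for the paper's efficient algorithms): the volume of a tall matrix is the product of its singular values, which reduces to |det(A)| for square matrices.

    ```python
    import numpy as np
    from itertools import combinations

    def rect_volume(A):
        """Volume of a rectangular matrix: sqrt(det(A^T A)) = product of singular values."""
        return np.sqrt(np.linalg.det(A.T @ A))

    rng = np.random.default_rng(11)
    A = rng.standard_normal((6, 3))
    assert np.isclose(rect_volume(A), np.prod(np.linalg.svd(A, compute_uv=False)))

    # brute-force search for the maximum-volume 4x3 row submatrix (a crude
    # illustration; real maxvol-type algorithms avoid enumerating subsets)
    best = max(combinations(range(6), 4), key=lambda idx: rect_volume(A[list(idx)]))
    print("best row set:", best, "volume:", rect_volume(A[list(best)]))
    ```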

  14. Rectangular maximum-volume submatrices and their applications

    KAUST Repository

    Mikhalev, Aleksandr

    2017-10-18

    We introduce a definition of the volume of a general rectangular matrix, which is equivalent to an absolute value of the determinant for square matrices. We generalize results of square maximum-volume submatrices to the rectangular case, show a connection of the rectangular volume with an optimal experimental design and provide estimates for a growth of coefficients and an approximation error in spectral and Chebyshev norms. Three promising applications of such submatrices are presented: recommender systems, finding maximal elements in low-rank matrices and preconditioning of overdetermined linear systems. The code is available online.

  15. EEG-based decoding of error-related brain activity in a real-world driving task

    Science.gov (United States)

    Zhang, H.; Chavarriaga, R.; Khaliliardali, Z.; Gheorghe, L.; Iturrate, I.; Millán, J. d. R.

    2015-12-01

    Objectives. Recent studies have started to explore the implementation of brain-computer interfaces (BCI) as part of driving assistant systems. The current study presents an EEG-based BCI that decodes error-related brain activity. Such information can be used, e.g., to predict driver’s intended turning direction before reaching road intersections. Approach. We executed experiments in a car simulator (N = 22) and a real car (N = 8). While subject was driving, a directional cue was shown before reaching an intersection, and we classified the presence or not of an error-related potentials from EEG to infer whether the cued direction coincided with the subject’s intention. In this protocol, the directional cue can correspond to an estimation of the driving direction provided by a driving assistance system. We analyzed ERPs elicited during normal driving and evaluated the classification performance in both offline and online tests. Results. An average classification accuracy of 0.698 ± 0.065 was obtained in offline experiments in the car simulator, while tests in the real car yielded a performance of 0.682 ± 0.059. The results were significantly higher than chance level for all cases. Online experiments led to equivalent performances in both simulated and real car driving experiments. These results support the feasibility of decoding these signals to help estimating whether the driver’s intention coincides with the advice provided by the driving assistant in a real car. Significance. The study demonstrates a BCI system in real-world driving, extending the work from previous simulated studies. As far as we know, this is the first online study in real car decoding driver’s error-related brain activity. Given the encouraging results, the paradigm could be further improved by using more sophisticated machine learning approaches and possibly be combined with applications in intelligent vehicles.

  16. Temperature and SAR measurement errors in the evaluation of metallic linear structures heating during MRI using fluoroptic® probes

    Energy Technology Data Exchange (ETDEWEB)

    Mattei, E [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Triventi, M [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Calcagnini, G [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Censi, F [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy); Kainz, W [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bassen, H I [Center for Devices and Radiological Health, Food and Drug Administration, Rockville, MD (United States); Bartolini, P [Department of Technologies and Health, Italian National Institute of Health, Rome (Italy)

    2007-03-21

    The purpose of this work is to evaluate the error associated with temperature and SAR measurements using fluoroptic® temperature probes on pacemaker (PM) leads during magnetic resonance imaging (MRI). We performed temperature measurements on pacemaker leads, excited with a 25, 64, and 128 MHz current. The PM lead tip heating was measured with a fluoroptic® thermometer (Luxtron, Model 3100, USA). Different contact configurations between the pigmented portion of the temperature probe and the PM lead tip were investigated to find the contact position minimizing the temperature and SAR underestimation. A computer model was used to estimate the error made by fluoroptic® probes in temperature and SAR measurement. The transversal contact of the pigmented portion of the temperature probe and the PM lead tip minimizes the underestimation for temperature and SAR. This contact position also has the lowest temperature and SAR error. For other contact positions, the maximum temperature error can be as high as -45%, whereas the maximum SAR error can be as high as -54%. MRI heating evaluations with temperature probes should use a contact position minimizing the maximum error, need to be accompanied by a thorough uncertainty budget and the temperature and SAR errors should be specified.

  17. SOERP, Statistics and 2. Order Error Propagation for Function of Random Variables

    International Nuclear Information System (INIS)

    Cox, N. D.; Miller, C. F.

    1985-01-01

    1 - Description of problem or function: SOERP computes second-order error propagation equations for the first four moments of a function of independently distributed random variables. SOERP was written for a rigorous second-order error propagation of any function which may be expanded in a multivariable Taylor series, the input variables being independently distributed. The required input consists of numbers directly related to the partial derivatives of the function, evaluated at the nominal values of the input variables and the central moments of the input variables from the second through the eighth. 2 - Method of solution: The development of equations for computing the propagation of errors begins by expressing the function of random variables in a multivariable Taylor series expansion. The Taylor series expansion is then truncated, and statistical operations are applied to the series in order to obtain equations for the moments (about the origin) of the distribution of the computed value. If the Taylor series is truncated after powers of two, the procedure produces second-order error propagation equations. 3 - Restrictions on the complexity of the problem: The maximum number of component variables allowed is 30. The IBM version will only process one set of input data per run
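
    The general technique can be sketched as follows (our reconstruction of textbook second-order propagation for independent inputs, not the SOERP source; the example function and moments are made up). Truncating the Taylor series after second-order terms yields the approximate mean and variance below; the product example admits an exact check.

    ```python
    import numpy as np

    def second_order_moments(f0, grad, hess, mu2, mu3, mu4):
        """Approximate mean and variance of f(X) for independent inputs X_i.

        f0: f at the nominal point; grad, hess: first/second partials there;
        mu2, mu3, mu4: central moments of each input (arrays of length n).
        """
        d = np.diag(hess)
        mean = f0 + 0.5 * np.sum(d * mu2)
        var = (np.sum(grad**2 * mu2)
               + np.sum(grad * d * mu3)
               + 0.25 * np.sum(d**2 * (mu4 - mu2**2)))
        off = hess - np.diag(d)                          # mixed second partials
        var += 0.5 * np.sum(off**2 * np.outer(mu2, mu2))
        return mean, var

    # example: f(x, y) = x * y with x ~ N(2, 0.1^2), y ~ N(3, 0.2^2)
    mu2 = np.array([0.01, 0.04]); mu3 = np.zeros(2); mu4 = 3 * mu2**2  # Gaussian moments
    grad = np.array([3.0, 2.0])                    # [y, x] at the nominal point
    hess = np.array([[0.0, 1.0], [1.0, 0.0]])      # d2f/dxdy = 1
    mean, var = second_order_moments(6.0, grad, hess, mu2, mu3, mu4)
    print(mean, var)   # exact for a product: mean = 6, var = 9*0.01 + 4*0.04 + 0.01*0.04
    ```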

  18. Diagnostic errors related to acute abdominal pain in the emergency department.

    Science.gov (United States)

    Medford-Davis, Laura; Park, Elizabeth; Shlamovitz, Gil; Suliburk, James; Meyer, Ashley N D; Singh, Hardeep

    2016-04-01

    Diagnostic errors in the emergency department (ED) are harmful and costly. We reviewed a selected high-risk cohort of patients presenting to the ED with abdominal pain to evaluate for possible diagnostic errors and associated process breakdowns. We conducted a retrospective chart review of ED patients >18 years at an urban academic hospital. A computerised 'trigger' algorithm identified patients possibly at high risk for diagnostic errors to facilitate selective record reviews. The trigger determined patients to be at high risk because they: (1) presented to the ED with abdominal pain, and were discharged home and (2) had a return ED visit within 10 days that led to a hospitalisation. Diagnostic errors were defined as missed opportunities to make a correct or timely diagnosis based on the evidence available during the first ED visit, regardless of patient harm, and included errors that involved both ED and non-ED providers. Errors were determined by two independent record reviewers followed by team consensus in cases of disagreement. Diagnostic errors occurred in 35 of 100 high-risk cases. Over two-thirds had breakdowns involving the patient-provider encounter (most commonly history-taking or ordering additional tests) and/or follow-up and tracking of diagnostic information (most commonly follow-up of abnormal test results). The most frequently missed diagnoses were gallbladder pathology (n=10) and urinary infections (n=5). Diagnostic process breakdowns in ED patients with abdominal pain most commonly involved history-taking, ordering insufficient tests in the patient-provider encounter and problems with follow-up of abnormal test results. Published by the BMJ Publishing Group Limited.

  19. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of the defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background; a method is therefore needed to extract medical error factors and reduce the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in both Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared with plain BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy, being able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology, which could automatically identify medical error factors.

  20. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  1. Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1993-01-01

    This paper analyzes the root causes of safety-related software errors in safety-critical, embedded systems. The results show that software errors identified as potentially hazardous to the system tend to be produced by different error mechanisms than non-safety-related software errors. Safety-related software errors are shown to arise most commonly from (1) discrepancies between the documented requirements specifications and the requirements needed for correct functioning of the system and (2) misunderstandings of the software's interface with the rest of the system. The paper uses these results to identify methods by which requirements errors can be prevented. The goal is to reduce safety-related software errors and to enhance the safety of complex, embedded systems.

  2. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.
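
    A small simulation along the lines of the result above: for a suitable penalty, a ridge-type shrinkage estimator of the covariance matrix can beat the sample (maximum likelihood-type) estimator in mean squared error. The convex shrinkage form below is a generic illustration, not necessarily the exact estimator analysed in the record.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    p, n, reps, lam = 10, 25, 500, 0.3
    truth = np.diag(np.linspace(1.0, 2.0, p))   # true covariance (assumed)
    target = np.eye(p)                          # shrinkage target

    err_sample = err_ridge = 0.0
    for _ in range(reps):
        x = rng.multivariate_normal(np.zeros(p), truth, size=n)
        s = np.cov(x, rowvar=False)             # unpenalized sample estimator
        ridge = (1 - lam) * s + lam * target    # convex shrinkage toward I
        err_sample += np.mean((s - truth) ** 2) / reps
        err_ridge += np.mean((ridge - truth) ** 2) / reps

    print(f"sample MSE: {err_sample:.4f}   ridge MSE: {err_ridge:.4f}")
    ```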

  3. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and probe the related error thresholds. Finally we discuss some of the potential uses of this scheme.

  4. Operator- and software-related post-experimental variability and source of error in 2-DE analysis.

    Science.gov (United States)

    Millioni, Renato; Puricelli, Lucia; Sbrignadello, Stefano; Iori, Elisabetta; Murphy, Ellen; Tessari, Paolo

    2012-05-01

    In the field of proteomics, several approaches have been developed for separating proteins and analyzing their differential relative abundance. One of the oldest, yet still widely used, is 2-DE. Despite the continuous advance of new methods, which are less demanding from a technical standpoint, 2-DE is still compelling and has a lot of potential for improvement. The overall variability which affects 2-DE includes biological, experimental, and post-experimental (software-related) variance. It is important to highlight how much of the total variability of this technique is due to post-experimental variability, which, so far, has been largely neglected. In this short review, we have focused on this topic and explained that post-experimental variability and source of error can be further divided into those which are software-dependent and those which are operator-dependent. We discuss these issues in detail, offering suggestions for reducing errors that may affect the quality of results, summarizing the advantages and drawbacks of each approach.

  5. Verification of surface minimum, mean, and maximum temperature forecasts in Calabria for summer 2008

    Directory of Open Access Journals (Sweden)

    S. Federico

    2011-02-01

    Full Text Available Since 2005, one-hour temperature forecasts for the Calabria region (southern Italy), modelled by the Regional Atmospheric Modeling System (RAMS), have been issued by CRATI/ISAC-CNR (Consortium for Research and Application of Innovative Technologies/Institute for Atmospheric and Climate Sciences of the National Research Council) and are available online at http://meteo.crati.it/previsioni.html (every six hours). Beginning in June 2008, the horizontal resolution was enhanced to 2.5 km. In the present paper, forecast skill and accuracy are evaluated out to four days for the 2008 summer season (from 6 June to 30 September, 112 runs). For this purpose, gridded high horizontal resolution forecasts of minimum, mean, and maximum temperatures are evaluated against gridded analyses at the same horizontal resolution (2.5 km).

    Gridded analysis is based on Optimal Interpolation (OI) and uses the RAMS first-day temperature forecast as the background field. Observations from 87 thermometers are used in the analysis system. The analysis error is introduced to quantify the effect of using the RAMS first-day forecast as the background field in the OI analyses and to define the forecast error unambiguously, while spatial interpolation (SI) analysis is considered to quantify the statistics' sensitivity to the verifying analysis and to show the quality of the OI analyses for different background fields.

    Two case studies, the first one with a low (less than the 10th percentile) root mean square error (RMSE) in the OI analysis, the second with the largest RMSE of the whole period in the OI analysis, are discussed to show the forecast performance under two different conditions. Cumulative statistics are used to quantify forecast errors out to four days. Results show that maximum temperature has the largest RMSE, while minimum and mean temperature errors are similar. For the period considered
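
    As a minimal illustration of the verification step described above, the sketch below computes per-lead-day bias and RMSE of a gridded forecast against a gridded analysis; the arrays are synthetic stand-ins for the RAMS forecasts and OI analyses.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    days, ny, nx = 4, 50, 60        # lead days and grid size (synthetic)
    analysis = 20 + 5 * rng.standard_normal((days, ny, nx))
    noise = (1 + 0.3 * np.arange(days))[:, None, None]   # error grows with lead time
    forecast = analysis + noise * rng.standard_normal((days, ny, nx))

    for d in range(days):
        err = forecast[d] - analysis[d]
        print(f"day {d + 1}: bias={err.mean():+.2f} K, RMSE={np.sqrt((err**2).mean()):.2f} K")
    ```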

  6. Measurement Model Specification Error in LISREL Structural Equation Models.

    Science.gov (United States)

    Baldwin, Beatrice; Lomax, Richard

    This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…

  7. Modified Moment, Maximum Likelihood and Percentile Estimators for the Parameters of the Power Function Distribution

    Directory of Open Access Journals (Sweden)

    Azam Zaka

    2014-10-01

    Full Text Available This paper is concerned with the modifications of maximum likelihood, moments and percentile estimators of the two parameter Power function distribution. Sampling behavior of the estimators is indicated by Monte Carlo simulation. For some combinations of parameter values, some of the modified estimators appear better than the traditional maximum likelihood, moments and percentile estimators with respect to bias, mean square error and total deviation.
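
    A bare-bones version of the Monte Carlo comparison described above, for the unmodified estimators only: simulate power function variates X = beta * U^(1/alpha), then compare the bias and mean square error of the maximum likelihood and moment estimators of alpha with beta assumed known. The paper's modified estimators are not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    alpha, beta, n, reps = 2.0, 1.0, 30, 2000
    mle, mom = np.empty(reps), np.empty(reps)
    for r in range(reps):
        x = beta * rng.random(n) ** (1.0 / alpha)   # power function variates
        mle[r] = n / np.sum(np.log(beta / x))       # MLE of alpha, beta known
        xbar = x.mean()
        mom[r] = xbar / (beta - xbar)               # from E[X] = alpha*beta/(alpha+1)
    for name, est in [("MLE", mle), ("moments", mom)]:
        print(f"{name:7s} bias={est.mean() - alpha:+.3f}  MSE={np.mean((est - alpha)**2):.4f}")
    ```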

  8. The Kalman Filter Revisited Using Maximum Relative Entropy

    Directory of Open Access Journals (Sweden)

    Adom Giffin

    2014-02-01

    Full Text Available In 1960, Rudolf E. Kalman created what is known as the Kalman filter, which is a way to estimate unknown variables from noisy measurements. The algorithm follows the logic that if the previous state of the system is known, it could be used as the best guess for the current state. This information is first applied a priori to any measurement by using it in the underlying dynamics of the system. Second, measurements of the unknown variables are taken. These two pieces of information are taken into account to determine the current state of the system. Bayesian inference is specifically designed to accommodate the problem of updating what we think of the world based on partial or uncertain information. In this paper, we present a derivation of the general Bayesian filter, then adapt it for Markov systems. A simple example is shown for pedagogical purposes. We also show that by using the Kalman assumptions or “constraints”, we can arrive at the Kalman filter using the method of maximum (relative) entropy (MrE), which goes beyond Bayesian methods. Finally, we derive a generalized, nonlinear filter using MrE, where the original Kalman Filter is a special case. We further show that the variable relationship can be any function, and thus, approximations, such as the extended Kalman filter, the unscented Kalman filter and other Kalman variants are special cases as well.
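
    For reference, a scalar version of the standard Kalman filter that the paper generalizes: a random-walk state observed in Gaussian noise, with all parameters illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    q, r, steps = 0.05, 1.0, 50          # process / measurement noise variances
    truth, z = [0.0], []
    for _ in range(steps):               # simulate a random-walk state and noisy data
        truth.append(truth[-1] + rng.normal(0, np.sqrt(q)))
        z.append(truth[-1] + rng.normal(0, np.sqrt(r)))

    x, p = 0.0, 1.0                      # prior mean and variance
    for zk in z:
        p += q                           # predict: push the prior through the dynamics
        k = p / (p + r)                  # Kalman gain
        x += k * (zk - x)                # update with the new measurement
        p *= 1 - k
    print(f"final estimate {x:+.3f} vs truth {truth[-1]:+.3f}")
    ```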

  9. On nonstationarity-related errors in modal combination rules of the response spectrum method

    Science.gov (United States)

    Pathak, Shashank; Gupta, Vinay K.

    2017-10-01

    Characterization of seismic hazard via (elastic) design spectra and the estimation of linear peak response of a given structure from this characterization continue to form the basis of earthquake-resistant design philosophy in various codes of practice all over the world. Since the direct use of design spectrum ordinates is a preferred option for the practicing engineers, modal combination rules play central role in the peak response estimation. Most of the available modal combination rules are however based on the assumption that nonstationarity affects the structural response alike at the modal and overall response levels. This study considers those situations where this assumption may cause significant errors in the peak response estimation, and preliminary models are proposed for the estimation of the extents to which nonstationarity affects the modal and total system responses, when the ground acceleration process is assumed to be a stationary process. It is shown through numerical examples in the context of complete-quadratic-combination (CQC) method that the nonstationarity-related errors in the estimation of peak base shear may be significant, when strong-motion duration of the excitation is too small compared to the period of the system and/or the response is distributed comparably in several modes. It is also shown that these errors are reduced marginally with the use of the proposed nonstationarity factor models.
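
    For context, a minimal implementation of the CQC rule discussed above, using the Der Kiureghian correlation coefficient for equal modal damping; the modal peaks and frequencies are invented, and the nonstationarity corrections proposed in the record are not included.

    ```python
    import numpy as np

    def cqc_peak(peaks, omegas, zeta=0.05):
        """Peak response by complete quadratic combination, equal modal damping."""
        peaks = np.asarray(peaks, dtype=float)
        b = np.divide.outer(np.asarray(omegas, float), np.asarray(omegas, float))
        rho = (8 * zeta**2 * (1 + b) * b**1.5
               / ((1 - b**2) ** 2 + 4 * zeta**2 * b * (1 + b) ** 2))
        return float(np.sqrt(peaks @ rho @ peaks))

    # Closely spaced modes, where CQC differs most from the simpler SRSS rule
    peaks, omegas = [1.0, 0.8, 0.3], [10.0, 11.0, 25.0]   # illustrative values
    print("CQC :", round(cqc_peak(peaks, omegas), 3))
    print("SRSS:", round(float(np.sqrt(np.sum(np.square(peaks)))), 3))
    ```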

  10. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection

    Science.gov (United States)

    DeWeber, Jefferson T.; Wagner, Tyler

    2018-01-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30‐day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species’ distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold‐water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid‐century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation

  11. Probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty from maximum temperature metric selection.

    Science.gov (United States)

    DeWeber, Jefferson T; Wagner, Tyler

    2018-06-01

    Predictions of the projected changes in species distributions and potential adaptation action benefits can help guide conservation actions. There is substantial uncertainty in projecting species distributions into an unknown future, however, which can undermine confidence in predictions or misdirect conservation actions if not properly considered. Recent studies have shown that the selection of alternative climate metrics describing very different climatic aspects (e.g., mean air temperature vs. mean precipitation) can be a substantial source of projection uncertainty. It is unclear, however, how much projection uncertainty might stem from selecting among highly correlated, ecologically similar climate metrics (e.g., maximum temperature in July, maximum 30-day temperature) describing the same climatic aspect (e.g., maximum temperatures) known to limit a species' distribution. It is also unclear how projection uncertainty might propagate into predictions of the potential benefits of adaptation actions that might lessen climate change effects. We provide probabilistic measures of climate change vulnerability, adaptation action benefits, and related uncertainty stemming from the selection of four maximum temperature metrics for brook trout (Salvelinus fontinalis), a cold-water salmonid of conservation concern in the eastern United States. Projected losses in suitable stream length varied by as much as 20% among alternative maximum temperature metrics for mid-century climate projections, which was similar to variation among three climate models. Similarly, the regional average predicted increase in brook trout occurrence probability under an adaptation action scenario of full riparian forest restoration varied by as much as .2 among metrics. Our use of Bayesian inference provides probabilistic measures of vulnerability and adaptation action benefits for individual stream reaches that properly address statistical uncertainty and can help guide conservation actions. Our

  12. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  13. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
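
    A minimal sketch of the Symplectic Euler step underlying the error representation above, applied to a separable Hamiltonian; the paper's adaptive time-stepping and error density are not reproduced.

    ```python
    import numpy as np

    def symplectic_euler(dH_dq, dH_dp, q, p, dt, steps):
        """Semi-implicit Euler for a separable Hamiltonian H(q, p):
        update p using the old q, then q using the new p."""
        traj = [(q, p)]
        for _ in range(steps):
            p = p - dt * dH_dq(q)
            q = q + dt * dH_dp(p)
            traj.append((q, p))
        return np.array(traj)

    # Harmonic oscillator H = (p^2 + q^2)/2: the energy error stays bounded
    traj = symplectic_euler(lambda q: q, lambda p: p, q=1.0, p=0.0, dt=0.1, steps=1000)
    energy = 0.5 * (traj[:, 0] ** 2 + traj[:, 1] ** 2)
    print(f"energy oscillates within {energy.min():.4f}..{energy.max():.4f}")
    ```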

  14. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  15. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords : finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  16. Discretization error estimates in maximum norm for convergent splittings of matrices with a monotone preconditioning part

    Czech Academy of Sciences Publication Activity Database

    Axelsson, Owe; Karátson, J.

    2017-01-01

    Roč. 210, January 2017 (2017), s. 155-164 ISSN 0377-0427 Institutional support: RVO:68145535 Keywords : finite difference method * error estimates * matrix splitting * preconditioning Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://www.sciencedirect.com/science/article/pii/S0377042716301492?via%3Dihub

  17. Visual error augmentation enhances learning in three dimensions.

    Science.gov (United States)

    Sharp, Ian; Huang, Felix; Patton, James

    2011-09-02

    Because recent preliminary evidence points to the use of Error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip all subjects quickly returned to baseline within 6 trials.

  18. Visual error augmentation enhances learning in three dimensions

    Directory of Open Access Journals (Sweden)

    Huang Felix

    2011-09-01

    Full Text Available Because recent preliminary evidence points to the use of Error augmentation (EA) for motor learning enhancements, we visually enhanced deviations from a straight line path while subjects practiced a sensorimotor reversal task, similar to laparoscopic surgery. Our study asked 10 healthy subjects in two groups to perform targeted reaching in a simulated virtual reality environment, where the transformation of the hand position matrix was a complete reversal--rotated 180 degrees about an arbitrary axis (hence 2 of the 3 coordinates are reversed). Our data showed that after 500 practice trials, error-augmented-trained subjects reached the desired targets more quickly and with lower error (differences of 0.4 seconds and 0.5 cm Maximum Perpendicular Trajectory deviation) when compared to the control group. Furthermore, the manner in which subjects practiced was influenced by the error augmentation, resulting in more continuous motions for this group and smaller errors. Even with the extreme sensory discordance of a reversal, these data further support that distorted reality can promote more complete adaptation/learning when compared to regular training. Lastly, upon removing the flip all subjects quickly returned to baseline within 6 trials.

  19. Conically scanning lidar error in complex terrain

    Directory of Open Access Journals (Sweden)

    Ferhat Bingöl

    2009-05-01

    Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
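
    A compact illustration of the retrieval that breaks down in complex terrain: under the homogeneous-flow assumption, the radial velocities seen around one scan circle follow a first-harmonic model that can be inverted for the wind vector by least squares. Geometry conventions vary between instruments; the ones below are assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    phi = np.deg2rad(60.0)                       # beam elevation above horizontal
    theta = np.deg2rad(np.arange(0, 360, 10.0))  # scan azimuths
    u, v, w = 8.0, 3.0, 0.2                      # "true" wind components
    vr = (u * np.sin(theta) + v * np.cos(theta)) * np.cos(phi) + w * np.sin(phi)
    vr += rng.normal(0.0, 0.1, vr.size)          # instrument noise

    # Least-squares fit of the first-harmonic model implied by homogeneous flow
    A = np.column_stack([np.sin(theta) * np.cos(phi),
                         np.cos(theta) * np.cos(phi),
                         np.full_like(theta, np.sin(phi))])
    u_hat, v_hat, w_hat = np.linalg.lstsq(A, vr, rcond=None)[0]
    print(f"retrieved u={u_hat:.2f}, v={v_hat:.2f}, w={w_hat:.2f}")
    ```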

  20. Task engagement and the relationships between the error-related negativity, agreeableness, behavioral shame proneness and cortisol

    NARCIS (Netherlands)

    Tops, Mattie; Boksem, Maarten A. S.; Wester, Anne E.; Lorist, Monicque M.; Meijman, Theo F.

    Previous results suggest that both cortisol mobilization and the error-related negativity (ERN/Ne) reflect goal engagement, i.e. the mobilization and allocation of attentional and physiological resources. Personality measures of negative affectivity have been associated both to high cortisol levels

  1. Statistical analysis of lifetime determinations in the presence of large errors

    International Nuclear Information System (INIS)

    Yost, G.P.

    1984-01-01

    The lifetimes of the new particles are very short, and most of the experiments which measure decay times are subject to measurement errors which are not negligible compared with the decay times themselves. Bartlett has analyzed the problem of lifetime estimation if the error on each event is small or zero. For the case of non-negligible measurement errors, σ_i, on each event, we are interested in a few basic questions: How well does maximum likelihood work? That is, (a) are the errors reasonable, (b) is the answer unbiased, and (c) are there other estimators with superior performance? We concentrate on the results of our Monte Carlo investigation for the case in which the experiment is sensitive over all times -∞ < x_i < ∞
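
    A sketch of the maximum likelihood fit in question: an exponential decay of lifetime τ smeared by a per-event Gaussian resolution σ_i has the exponentially-modified-Gaussian density p(t) = (1/τ) exp(σ²/(2τ²) − t/τ) Φ(t/σ − σ/τ), valid over all t, positive and negative. The numbers below are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import norm

    rng = np.random.default_rng(5)
    tau_true, n = 1.0, 400
    sigma = rng.uniform(0.3, 0.8, n)                           # per-event errors
    t = rng.exponential(tau_true, n) + rng.normal(0.0, sigma)  # smeared decay times

    def nll(tau):
        # log of the exponential-decay-times-Gaussian-resolution density
        z = t / sigma - sigma / tau
        logpdf = -np.log(tau) + sigma**2 / (2 * tau**2) - t / tau + norm.logcdf(z)
        return -np.sum(logpdf)

    fit = minimize_scalar(nll, bounds=(0.1, 5.0), method="bounded")
    print(f"tau_hat = {fit.x:.3f} (true {tau_true})")
    ```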

  2. Error Immune Logic for Low-Power Probabilistic Computing

    Directory of Open Access Journals (Sweden)

    Bo Marr

    2010-01-01

    design for the maximum amount of energy savings per a given error rate. Spice simulation results using a commercially available and well-tested 0.25 μm technology are given verifying the ultra-low power, probabilistic full-adder designs. Further, close to 6X energy savings is achieved for a probabilistic full-adder over the deterministic case.

  3. Accuracy requirements for the calculation of gravitational waveforms from coalescing compact binaries in numerical relativity

    International Nuclear Information System (INIS)

    Miller, Mark

    2005-01-01

    I discuss the accuracy requirements on numerical relativity calculations of inspiraling compact object binaries whose extracted gravitational waveforms are to be used as templates for matched filtering signal extraction and physical parameter estimation in modern interferometric gravitational wave detectors. Using a post-Newtonian point particle model for the premerger phase of the binary inspiral, I calculate the maximum allowable errors for the mass and relative velocity and positions of the binary during numerical simulations of the binary inspiral. These maximum allowable errors are compared to the errors of state-of-the-art numerical simulations of multiple-orbit binary neutron star calculations in full general relativity, and are found to be smaller by several orders of magnitude. A post-Newtonian model for the error of these numerical simulations suggests that adaptive mesh refinement coupled with second-order accurate finite difference codes will not be able to robustly obtain the accuracy required for reliable gravitational wave extraction on Terabyte-scale computers. I conclude that higher-order methods (higher-order finite difference methods and/or spectral methods) combined with adaptive mesh refinement and/or multipatch technology will be needed for robustly accurate gravitational wave extraction from numerical relativity calculations of binary coalescence scenarios

  4. Interpreting the change detection error matrix

    NARCIS (Netherlands)

    Oort, van P.A.J.

    2007-01-01

    Two different matrices are commonly reported in assessment of change detection accuracy: (1) single date error matrices and (2) binary change/no change error matrices. The third, less common form of reporting, is the transition error matrix. This paper discusses the relation between these matrices.
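
    A toy example of two of the three reporting forms, built with pandas cross-tabulations on synthetic labels (class names and error rates are invented):

    ```python
    import numpy as np
    import pandas as pd

    classes = np.array(["forest", "crop", "urban"])
    rng = np.random.default_rng(11)
    truth1, truth2 = rng.choice(classes, 300), rng.choice(classes, 300)
    map1 = np.where(rng.random(300) < 0.9, truth1, rng.choice(classes, 300))
    map2 = np.where(rng.random(300) < 0.9, truth2, rng.choice(classes, 300))

    # (2) binary change/no-change error matrix
    print(pd.crosstab(map1 != map2, truth1 != truth2,
                      rownames=["mapped change"], colnames=["true change"]))

    # (3) transition error matrix: categories are "from->to" transitions
    trans_map = [f"{a}->{b}" for a, b in zip(map1, map2)]
    trans_truth = [f"{a}->{b}" for a, b in zip(truth1, truth2)]
    print(pd.crosstab(trans_map, trans_truth).iloc[:3, :3])  # corner, for brevity
    ```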

  5. Calculation of the maximum dpa rate in the CNA-II pressure vessel

    International Nuclear Information System (INIS)

    Mascitti, J. A

    2012-01-01

    The maximum dpa rate was calculated for the reactor in the following state: fresh fuel, no Xenon, a Boron concentration of 15.3 ppm, critical state, its control rods in the criticality position, hot, at full power (2160 MW). It was determined that the maximum dpa rate under such conditions is 3.54(2)x10^12 s^-1 and it is located in the positions corresponding to θ=210° in the azimuthal direction, and z=20 cm and -60 cm respectively in the axial direction, considering the calculation mesh centered at half height of the fuel element (FE) active length. The dpa rate spectrum was determined as well as the contribution to it for 4 energy groups: a thermal group, two epithermal groups and a fast one. The maximum dpa rate considering the photo-neutrons produced by the (γ, n) reaction in the heavy water of the coolant and moderator was 3.93(4)x10^12 s^-1, which is 11% greater than the value obtained without photo-neutrons. This significant difference between the two cases suggests that photo-neutrons in large heavy water reactors such as CNA-II should not be ignored. The maximum dpa rate in the first mm of the reactor pressure vessel was also calculated, yielding a value of 4.22(6)x10^12 s^-1. It should be added that the calculation was carried out with a complete, accurate model of the reactor, with no approximations in the spatial or energy variables. Each value carries, between parentheses, a percentage relative error representing the statistical uncertainty due to the probabilistic Monte Carlo method used to estimate it. More representative values may be obtained with this method if the equilibrium burn-up distribution is used (author)

  6. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    Full Text Available It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
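
    A worked instance of the tumor-volume example: with the common xenograft formula V = (π/6)LW², the linearized relative error is δV/V ≈ δL/L + 2δW/W, so the squared width measurement contributes twice. Numbers below are illustrative.

    ```python
    import math

    L, W = 12.0, 8.0              # caliper length and width, mm
    dL = dW = 0.5                 # assumed measurement error per dimension, mm
    V = math.pi / 6 * L * W**2    # common xenograft volume formula

    rel_worst = dL / L + 2 * dW / W              # linearized worst case
    rel_indep = math.hypot(dL / L, 2 * dW / W)   # independent errors, in quadrature
    print(f"V = {V:.0f} mm^3, worst case ±{100*rel_worst:.1f}%, "
          f"quadrature ±{100*rel_indep:.1f}%")
    ```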

  7. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in four a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  8. Low relative error in consumer-grade GPS units make them ideal for measuring small-scale animal movement patterns

    Directory of Open Access Journals (Sweden)

    Greg A. Breed

    2015-08-01

    Full Text Available Consumer-grade GPS units are a staple of modern field ecology, but the relatively large error radii reported by manufacturers (up to 10 m) ostensibly preclude their utility in measuring fine-scale movement of small animals such as insects. Here we demonstrate that for data collected at fine spatio-temporal scales, these devices can produce exceptionally accurate data on step-length and movement patterns of small animals. With an understanding of the properties of GPS error and how it arises, it is possible, using a simple field protocol, to use consumer-grade GPS units to collect step-length data for the movement of small animals that introduces a median error as small as 11 cm. These small error rates were measured in controlled observations of real butterfly movement. Similar conclusions were reached using a ground-truth test track prepared with a field tape and compass and subsequently measured 20 times using the same methodology as the butterfly tracking. Median error in the ground-truth track was slightly higher than the field data, mostly between 20 and 30 cm, but even for the smallest ground-truth step (70 cm), this is still a signal-to-noise ratio of 3:1, and for steps of 3 m or more, the ratio is greater than 10:1. Such small errors relative to the movements being measured make these inexpensive units useful for measuring insect and other small animal movements on small to intermediate scales with budgets orders of magnitude lower than survey-grade units used in past studies. As an additional advantage, these units are simpler to operate, and insect or other small animal trackways can be collected more quickly than either survey-grade units or more traditional ruler/grid approaches.

  9. Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery

    Directory of Open Access Journals (Sweden)

    Kazuhiro P. Izawa

    2017-12-01

    Full Text Available Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients who were divided into the middle-aged group (<65 years, n = 29) and older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery, and were compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged group and older-aged group, respectively (41.1% vs. 42.1%). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II CR, MPT improved for all cardiac surgery patients.

  10. Age-Related Differences of Maximum Phonation Time in Patients after Cardiac Surgery.

    Science.gov (United States)

    Izawa, Kazuhiro P; Kasahara, Yusuke; Hiraki, Koji; Hirano, Yasuyuki; Watanabe, Satoshi

    2017-12-21

    Background and aims: Maximum phonation time (MPT), which is related to respiratory function, is widely used to evaluate maximum vocal capabilities, because its use is non-invasive, quick, and inexpensive. We aimed to examine differences in MPT by age, following recovery phase II cardiac rehabilitation (CR). Methods: This longitudinal observational study assessed 50 consecutive cardiac patients who were divided into the middle-aged group (<65 years, n = 29) and older-aged group (≥65 years, n = 21). MPTs were measured at 1 and 3 months after cardiac surgery, and were compared. Results: The duration of MPT increased more significantly from month 1 to month 3 in the middle-aged group (19.2 ± 7.8 to 27.1 ± 11.6 s, p < 0.001) than in the older-aged group (12.6 ± 3.5 to 17.9 ± 6.0 s, p < 0.001). However, no statistically significant difference occurred in the % change of MPT from 1 month to 3 months after cardiac surgery between the middle-aged group and older-aged group, respectively (41.1% vs. 42.1%). In addition, there were no significant interactions of MPT in the two groups for 1 versus 3 months (F = 1.65, p = 0.20). Conclusion: Following phase II CR, MPT improved for all cardiac surgery patients.

  11. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
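
    A quick illustration of the interplay described above (not the MERDA moment method itself): when readings are rounded to a grid of width h, the classical Sheppard correction subtracts the rounding variance h²/12 from the sample variance. It works for reasonably fine grouping, consistent with the abstract's remark that four or more cells suffice for the simpler estimate.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    h = 0.5                     # rounding width of the scale display
    sd_true = 0.3               # weighing error comparable to h
    w = np.round(rng.normal(100.0, sd_true, 5000) / h) * h   # rounded readings

    s2 = w.var(ddof=1)
    print(f"raw variance       : {s2:.4f}")
    print(f"Sheppard-corrected : {s2 - h**2 / 12:.4f}  (true {sd_true**2:.4f})")
    ```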

  12. Performance monitoring in the anterior cingulate is not all error related: expectancy deviation and the representation of action-outcome associations.

    Science.gov (United States)

    Oliveira, Flavio T P; McDonald, John J; Goodman, David

    2007-12-01

    Several converging lines of evidence suggest that the anterior cingulate cortex (ACC) is selectively involved in error detection or evaluation of poor performance. Here we challenge this notion by presenting event-related potential (ERP) evidence that the feedback-elicited error-related negativity, an ERP component attributed to the ACC, can be elicited by positive feedback when a person is expecting negative feedback and vice versa. These results suggest that performance monitoring in the ACC is not limited to error processing. We propose that the ACC acts as part of a more general performance-monitoring system that is activated by violations in expectancy. Further, we propose that the common observation of increased ACC activity elicited by negative events could be explained by an overoptimistic bias in generating expectations of performance. These results could shed light into neurobehavioral disorders, such as depression and mania, associated with alterations in performance monitoring and also in judgments of self-related events.

  13. Application of Joint Error Maximal Mutual Compensation to hexapod robots

    DEFF Research Database (Denmark)

    Veryha, Yauheni; Petersen, Henrik Gordon

    2008-01-01

    A good practice to ensure high-positioning accuracy in industrial robots is to use joint error maximum mutual compensation (JEMMC). This paper presents an application of JEMMC for positioning of hexapod robots to improve end-effector positioning accuracy. We developed an algorithm and simulation ...

  14. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  15. SU-F-T-252: An Investigation of Gamma Knife Frame Definition Error When Using a Pre-Planning Workflow

    International Nuclear Information System (INIS)

    Johnson, P

    2016-01-01

    Purpose: To determine causal factors related to high frame definition error when treating GK patients using a pre-planning workflow. Methods: 160 cases were retrospectively reviewed. All patients received treatment using a pre-planning workflow whereby stereotactic coordinates are determined from a CT scan acquired after framing using a fiducial box. The planning software automatically detects the fiducials and compares their location to expected values based on the rigid design of the fiducial system. Any difference is reported as mean and maximum frame definition error. The manufacturer recommends these values be less than 1.0 mm and 1.5 mm. In this study, frame definition error was analyzed in comparison with a variety of factors including which neurosurgeon/oncologist/physicist was involved with the procedure, number of posts used during framing (3 or 4), type of lesion, and which CT scanner was utilized for acquisition. An analysis of variance (ANOVA) approach was used to statistically evaluate the data and determine causal factors related to instances of high frame definition error. Results: Two factors were identified as significant: number of posts (p=0.0003) and CT scanner (p=0.0001). Further analysis showed that one of the four scanners was significantly different from the others. This diagnostic scanner was identified as an older model with localization lasers not tightly calibrated. The average value for maximum frame definition error using this scanner was 1.48 mm (4 posts) and 1.75 mm (3 posts). For the other scanners this value was 1.13 mm (4 posts) and 1.40 mm (3 posts). Conclusion: In utilizing a pre-planning workflow the choice of CT scanner matters. Any scanner utilized for GK should undergo routine QA at a level appropriate for radiation oncology. In terms of 3 vs. 4 posts, it is hypothesized that three posts provide less stability during CT acquisition. This will be tested in future work.

  16. SU-F-T-252: An Investigation of Gamma Knife Frame Definition Error When Using a Pre-Planning Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, P [University of Miami, Miami, FL (United States)

    2016-06-15

    Purpose: To determine causal factors related to high frame definition error when treating GK patients using a pre-planning workflow. Methods: 160 cases were retrospectively reviewed. All patients received treatment using a pre-planning workflow whereby stereotactic coordinates are determined from a CT scan acquired after framing using a fiducial box. The planning software automatically detects the fiducials and compares their location to expected values based on the rigid design of the fiducial system. Any difference is reported as mean and maximum frame definition error. The manufacturer recommends these values be less than 1.0 mm and 1.5 mm. In this study, frame definition error was analyzed in comparison with a variety of factors including which neurosurgeon/oncologist/physicist was involved with the procedure, number of posts used during framing (3 or 4), type of lesion, and which CT scanner was utilized for acquisition. An analysis of variance (ANOVA) approach was used to statistically evaluate the data and determine causal factors related to instances of high frame definition error. Results: Two factors were identified as significant: number of posts (p=0.0003) and CT scanner (p=0.0001). Further analysis showed that one of the four scanners was significantly different from the others. This diagnostic scanner was identified as an older model with localization lasers not tightly calibrated. The average value for maximum frame definition error using this scanner was 1.48 mm (4 posts) and 1.75 mm (3 posts). For the other scanners this value was 1.13 mm (4 posts) and 1.40 mm (3 posts). Conclusion: In utilizing a pre-planning workflow the choice of CT scanner matters. Any scanner utilized for GK should undergo routine QA at a level appropriate for radiation oncology. In terms of 3 vs. 4 posts, it is hypothesized that three posts provide less stability during CT acquisition. This will be tested in future work.

  17. Maximum likelihood window for time delay estimation

    International Nuclear Information System (INIS)

    Lee, Young Sup; Yoon, Dong Jin; Kim, Chi Yup

    2004-01-01

    Time delay estimation for the detection of leak location in underground pipelines is critically important. Because the exact leak location depends upon the precision of the time delay between sensor signals due to leak noise and the speed of elastic waves, research on the estimation of time delay has been one of the key issues in leak locating with the time arrival difference method. In this study, an optimal Maximum Likelihood window is considered to obtain a better estimation of the time delay. This method has been proven in experiments to provide much clearer and more precise peaks in cross-correlation functions of leak signals. The leak location error has been less than 1% of the distance between sensors; for example, the error was not greater than 3 m for 300 m long underground pipelines. Apart from the experiment, an intensive theoretical analysis in terms of signal processing has been described. The improved leak locating with the suggested method is due to the windowing effect in the frequency domain, which offers a weighting of the significant frequencies.
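
    For orientation, the unwindowed baseline that the Maximum Likelihood window improves on: estimate the delay between two leak-noise records from the lag of their cross-correlation peak. The ML window itself is a frequency-domain weighting and is not applied here; the signals are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    fs, n, shift = 1000, 4096, 12            # Hz, samples, true delay (12 ms)
    src = rng.standard_normal(n + shift)     # broadband leak noise
    s2 = src[shift:] + 0.5 * rng.standard_normal(n)   # nearer sensor
    s1 = src[:n] + 0.5 * rng.standard_normal(n)       # farther sensor (delayed copy)

    xc = np.correlate(s1 - s1.mean(), s2 - s2.mean(), mode="full")
    lag = int(np.argmax(xc)) - (n - 1)       # lag of the cross-correlation peak
    print(f"estimated delay {1000 * lag / fs:.1f} ms (true {1000 * shift / fs:.1f} ms)")
    ```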

  18. CREME96 and Related Error Rate Prediction Methods

    Science.gov (United States)

    Adams, James H., Jr.

    2012-01-01

    Predicting the rate of occurrence of single event effects (SEEs) in space requires knowledge of the radiation environment and the response of electronic devices to that environment. Several analytical models have been developed over the past 36 years to predict SEE rates. The first error rate calculations were performed by Binder, Smith and Holman. Bradford and Pickel and Blandford, in their CRIER (Cosmic-Ray-Induced-Error-Rate) analysis code, introduced the basic Rectangular ParallelePiped (RPP) method for error rate calculations. For the radiation environment at the part, both made use of the Cosmic Ray LET (Linear Energy Transfer) spectra calculated by Heinrich for various absorber depths. A more detailed model for the space radiation environment within spacecraft was developed by Adams and co-workers. This model, together with a reformulation of the RPP method published by Pickel and Blandford, was used to create the CREME (Cosmic Ray Effects on Micro-Electronics) code. About the same time Shapiro wrote the CRUP (Cosmic Ray Upset Program) based on the RPP method published by Bradford. It was the first code to specifically take into account charge collection from outside the depletion region due to deformation of the electric field caused by the incident cosmic ray. Other early rate prediction methods and codes include the Single Event Figure of Merit, NOVICE, the Space Radiation code and the effective flux method of Binder which is the basis of the SEFA (Scott Effective Flux Approximation) model. By the early 1990s it was becoming clear that CREME and the other early models needed revision. This revision, CREME96, was completed and released as a WWW-based tool, one of the first of its kind. The revisions in CREME96 included improved environmental models and improved models for calculating single event effects. The need for a revision of CREME also stimulated the development of the CHIME (CRRES/SPACERAD Heavy Ion Model of the Environment) and MACREE (Modeling and

  19. Maximum likelihood estimation of the parameters of nonminimum phase and noncausal ARMA models

    DEFF Research Database (Denmark)

    Rasmussen, Klaus Bolding

    1994-01-01

    The well-known prediction-error-based maximum likelihood (PEML) method can only handle minimum phase ARMA models. This paper presents a new method known as the back-filtering-based maximum likelihood (BFML) method, which can handle nonminimum phase and noncausal ARMA models. The BFML method...... is identical to the PEML method in the case of a minimum phase ARMA model, and it turns out that the BFML method incorporates a noncausal ARMA filter with poles outside the unit circle for estimation of the parameters of a causal, nonminimum phase ARMA model...

  20. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  1. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  2. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing...... of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated...... symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes....

  3. Patient safety incident reports related to traditional Japanese Kampo medicines: medication errors and adverse drug events in a university hospital for a ten-year period.

    Science.gov (United States)

    Shimada, Yutaka; Fujimoto, Makoto; Nogami, Tatsuya; Watari, Hidetoshi; Kitahara, Hideyuki; Misawa, Hiroki; Kimbara, Yoshiyuki

    2017-12-21

    Kampo medicine is traditional Japanese medicine, which originated in ancient traditional Chinese medicine, but was introduced and developed uniquely in Japan. Today, Kampo medicines are integrated into the Japanese national health care system. Incident reporting systems are currently being widely used to collect information about patient safety incidents that occur in hospitals. However, no investigations have been conducted regarding patient safety incident reports related to Kampo medicines. The aim of this study was to survey and analyse incident reports related to Kampo medicines in a Japanese university hospital to improve future patient safety. We selected incident reports related to Kampo medicines filed in Toyama University Hospital from May 2007 to April 2017, and investigated them in terms of medication errors and adverse drug events. Out of 21,324 total incident reports filed in the 10-year survey period, we discovered 108 Kampo medicine-related incident reports. However, five cases were redundantly reported; thus, the number of actual incidents was 103. Of those, 99 incidents were classified as medication errors (77 administration errors, 15 dispensing errors, and 7 prescribing errors), and four were adverse drug events, namely Kampo medicine-induced interstitial pneumonia. The Kampo medicine (crude drug) that was thought to induce interstitial pneumonia in all four cases was Scutellariae Radix, which is consistent with past reports. According to the incident severity classification system recommended by the National University Hospital Council of Japan, of the 99 medication errors, 10 incidents were classified as level 0 (an error occurred, but the patient was not affected) and 89 incidents were level 1 (an error occurred that affected the patient, but did not cause harm). Of the four adverse drug events, two incidents were classified as level 2 (patient was transiently harmed, but required no treatment), and two incidents were level 3b (patient was

  4. Maximum power point tracking

    International Nuclear Information System (INIS)

    Enslin, J.H.R.

    1990-01-01

    A well engineered renewable remote energy system, utilizing the principle of Maximum Power Point Tracking, can be more cost effective, has a higher reliability and can improve the quality of life in remote areas. This paper reports that a high-efficiency power electronic converter, for converting the output voltage of a solar panel, or wind generator, to the required DC battery bus voltage, has been realized. The converter is controlled to track the maximum power point of the input source under varying input and output parameters. Maximum power point tracking for relatively small systems is achieved by maximization of the output current in a battery charging regulator, using an optimized hill-climbing, inexpensive microprocessor based algorithm. Through practical field measurements it is shown that a minimum input source saving of 15% on 3-5 kWh/day systems can easily be achieved. A total cost saving of at least 10-15% on the capital cost of these systems is achievable for relatively small Remote Area Power Supply systems. The advantages at larger temperature variations and larger power rated systems are much higher. Other advantages include optimal sizing and system monitoring and control.
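
    The optimized hill-climbing idea is easy to sketch. Below is a minimal perturb-and-observe loop against a hypothetical PV curve; a real charging regulator would perturb the converter duty cycle and measure battery current instead of evaluating an analytic panel model:

    ```python
    import numpy as np

    def pv_power(v, v_oc=21.0, i_sc=3.5):
        """Toy PV curve: current falls off exponentially near open-circuit voltage."""
        i = i_sc * (1.0 - np.exp((v - v_oc) / 2.0))
        return max(v * i, 0.0)

    def perturb_and_observe(v0=10.0, step=0.2, n_iter=100):
        """Hill-climbing MPPT: keep perturbing in the direction that raised power."""
        v, direction = v0, +1.0
        p_prev = pv_power(v)
        for _ in range(n_iter):
            v += direction * step
            p = pv_power(v)
            if p < p_prev:           # power dropped: reverse the perturbation
                direction = -direction
            p_prev = p
        return v, p_prev

    v_mpp, p_mpp = perturb_and_observe()
    print(f"Operating point: {v_mpp:.2f} V, {p_mpp:.1f} W")
    ```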

  5. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ɛ) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ɛ^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  6. The relative impact of sizing errors on steam generator tube failure probability

    International Nuclear Information System (INIS)

    Cizelj, L.; Dvorsek, T.

    1998-01-01

    The Outside Diameter Stress Corrosion Cracking (ODSCC) at tube support plates is currently the major degradation mechanism affecting the steam generator tubes made of Inconel 600. This caused development and licensing of degradation specific maintenance approaches, which addressed two main failure modes of the degraded piping: tube rupture and excessive leakage through degraded tubes. A methodology aiming at assessing the efficiency of a given set of possible maintenance approaches has already been proposed by the authors. It pointed out better performance of the degradation specific over generic approaches in (1) lower probability of single and multiple steam generator tube rupture (SGTR), (2) lower estimated accidental leak rates and (3) fewer tubes plugged. A sensitivity analysis was also performed pointing out the relative contributions of uncertain input parameters to the tube rupture probabilities. The dominant contribution was assigned to the uncertainties inherent to the regression models used to correlate the defect size and tube burst pressure. The uncertainties, which can be estimated from the in-service inspections, are further analysed in this paper. The defect growth was found to have a significant and, to some extent, unrealistic impact on the probability of single tube rupture. Since the defect growth estimates were based on past inspection records, they strongly depend on the sizing errors. Therefore, an attempt was made to filter out the sizing errors and to arrive at more realistic estimates of the defect growth. The impact of different assumptions regarding sizing errors on the tube rupture probability was studied using a realistic numerical example. The data used are obtained from a series of inspection results from Krsko NPP with 2 Westinghouse D-4 steam generators. The results obtained are considered useful in safety assessment and maintenance of affected steam generators. (author)

  7. Friendship at work and error disclosure

    Directory of Open Access Journals (Sweden)

    Hsiao-Yen Mao

    2017-10-01

    Full Text Available Organizations rely on contextual factors to promote employee disclosure of self-made errors, which induces a resource dilemma (i.e., disclosure entails costing one's own resources to bring others resources) and a friendship dilemma (i.e., disclosure is seemingly easier through friendship, yet the cost of friendship is embedded). This study proposes that friendship at work enhances error disclosure and uses conservation of resources theory as underlying explanation. A three-wave survey collected data from 274 full-time employees with a variety of occupational backgrounds. Empirical results indicated that friendship enhanced error disclosure partially through relational mechanisms of employees’ attitudes toward coworkers (i.e., employee engagement) and of coworkers’ attitudes toward employees (i.e., perceived social worth). Such effects hold when controlling for established predictors of error disclosure. This study expands extant perspectives on employee error and the theoretical lenses used to explain the influence of friendship at work. We propose that, while promoting error disclosure through both contextual and relational approaches, organizations should be vigilant about potential incongruence.

  8. Prediction of human errors by maladaptive changes in event-related brain networks

    NARCIS (Netherlands)

    Eichele, T.; Debener, S.; Calhoun, V.D.; Specht, K.; Engel, A.K.; Hugdahl, K.; Cramon, D.Y. von; Ullsperger, M.

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we

  9. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
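
    A minimal sketch of the root-sum-of-squares budget described above, with hypothetical tolerances and sensitivity weighting factors (the paper tabulates the real weights per configuration and confidence level):

    ```python
    import numpy as np

    # Hypothetical per-source angular tolerances (arcsec, 1-sigma) and sensitivity
    # weighting factors, e.g. as read off mechanical drawings for one configuration.
    tolerances = {"azimuth bearing wobble": 5.0,
                  "elevation axis orthogonality": 8.0,
                  "encoder scale error": 3.0,
                  "mounting plate flatness": 4.0}
    weights = {"azimuth bearing wobble": 1.0,
               "elevation axis orthogonality": 0.7,   # partially correctable
               "encoder scale error": 1.0,
               "mounting plate flatness": 0.5}

    rss = np.sqrt(sum((weights[k] * t) ** 2 for k, t in tolerances.items()))
    print(f"Pointing error budget: {rss:.1f} arcsec (k=1), {2*rss:.1f} arcsec (k=2)")
    ```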

  10. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events, and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. A novel simultaneous streak and framing camera without principle errors

    Science.gov (United States)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access, the complete information from which is far more important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena, has been developed. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction with a framing frequency principle error of zero for the framing record, and a maximum time resolving power of 8 ns with a scanning velocity nonuniformity of 0.136% to -0.277% for the streak record. The test data have verified the performance of the camera quantitatively. This camera, which simultaneously gains frames and streak records with parallax-free and identical time base, is characterized by the plane optical system at oblique incidence, different from a space system, the innovative camera obscura without principle errors, and the high velocity motor driven beryllium-like rotating mirror, made of high strength aluminum alloy with cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high quality pictures of detonation events.

  12. Investigating Medication Errors in Educational Health Centers of Kermanshah

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-08-01

    Full Text Available Background and objectives: Medication errors can be a threat to the safety of patients. Preventing medication errors requires reporting and investigating such errors. The present study was conducted with the purpose of investigating medication errors in educational health centers of Kermanshah. Material and Methods: The present research is an applied, descriptive-analytical study done as a survey. The Error Report form of the Ministry of Health and Medical Education was used for data collection. The population of the study included all the personnel (nurses, doctors, paramedics) of educational health centers of Kermanshah. Among them, those who reported the committed errors were selected as the sample of the study. The data analysis was done using descriptive statistics and the chi-square test in SPSS version 18. Results: The findings of the study showed that most errors were related to not using medication properly, the least number of errors were related to improper dose, and the majority of errors occurred in the morning. The most frequent reason for errors was staff negligence and the least frequent was the lack of knowledge. Conclusion: The health care system should create an environment for detecting and reporting errors by the personnel, recognizing the factors that cause errors, training the personnel, and creating a good working environment and standard workload.

  13. Compact stars with a small electric charge: the limiting radius to mass relation and the maximum mass for incompressible matter

    Energy Technology Data Exchange (ETDEWEB)

    Lemos, Jose P.S.; Lopes, Francisco J.; Quinta, Goncalo [Universidade de Lisboa, UL, Departamento de Fisica, Centro Multidisciplinar de Astrofisica, CENTRA, Instituto Superior Tecnico, IST, Lisbon (Portugal); Zanchin, Vilson T. [Universidade Federal do ABC, Centro de Ciencias Naturais e Humanas, Santo Andre, SP (Brazil)

    2015-02-01

    One of the stiffest equations of state for matter in a compact star is constant energy density and this generates the interior Schwarzschild radius to mass relation and the Misner maximum mass for relativistic compact stars. If dark matter populates the interior of stars, and this matter is supersymmetric or of some other type, some of it possessing a tiny electric charge, there is the possibility that highly compact stars can trap a small but non-negligible electric charge. In this case the radius to mass relation for such compact stars should get modifications. We use an analytical scheme to investigate the limiting radius to mass relation and the maximum mass of relativistic stars made of an incompressible fluid with a small electric charge. The investigation is carried out by using the hydrostatic equilibrium equation, i.e., the Tolman-Oppenheimer-Volkoff (TOV) equation, together with the other equations of structure, with the further hypothesis that the charge distribution is proportional to the energy density. The approach relies on Volkoff and Misner's method to solve the TOV equation. For zero charge one gets the interior Schwarzschild limit, and supposing incompressible boson or fermion matter with constituents with masses of the order of the neutron mass one finds that the maximum mass is the Misner mass. For a small electric charge, our analytical approximating scheme, valid in first order in the star's electric charge, shows that the maximum mass increases relatively to the uncharged case, whereas the minimum possible radius decreases, an expected effect since the new field is repulsive, aiding the pressure to sustain the star against gravitational collapse. (orig.)

  14. Determining and monitoring of maximum permissible power for HWRR-3

    International Nuclear Information System (INIS)

    Jia Zhanli; Xiao Shigang; Jin Huajin; Lu Changshen

    1987-01-01

    The operating power of a reactor is an important parameter to be monitored. This report briefly describes the determination and monitoring of the maximum permissible power for HWRR-3. The calculating method is described, and the results of the calculation and the analysis of errors are also given. On-line calculation and real-time monitoring have been realized at the heavy water reactor. It provides the reactor with real-time and reliable supervision. This makes operation convenient and increases reliability

  15. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  16. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  17. Invariance and variability in interaction error-related potentials and their consequences for classification

    Science.gov (United States)

    Abu-Alqumsan, Mohammad; Kapeller, Christoph; Hintermüller, Christoph; Guger, Christoph; Peer, Angelika

    2017-12-01

    Objective. This paper discusses the invariance and variability in interaction error-related potentials (ErrPs), where a special focus is laid upon the factors of (1) the human mental processing required to assess interface actions (2) time (3) subjects. Approach. Three different experiments were designed as to vary primarily with respect to the mental processes that are necessary to assess whether an interface error has occurred or not. The three experiments were carried out with 11 subjects in a repeated-measures experimental design. To study the effect of time, a subset of the recruited subjects additionally performed the same experiments on different days. Main results. The ErrP variability across the different experiments for the same subjects was found largely attributable to the different mental processing required to assess interface actions. Nonetheless, we found that interaction ErrPs are empirically invariant over time (for the same subject and same interface) and to a lesser extent across subjects (for the same interface). Significance. The obtained results may be used to explain across-study variability of ErrPs, as well as to define guidelines for approaches to the ErrP classifier transferability problem.

  18. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    International Nuclear Information System (INIS)

    Urniezius, Renaldas

    2011-01-01

    The principle of Maximum relative Entropy optimization was analyzed for dead reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise in each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency within the time series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency within the time series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead reckoning localization.

  19. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root-causes and error-modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, working place factors, communication and training practices are the primary root-causes, while omission, transposition, and quantitative mistakes are the most frequent error-modes. Recommendations for domestic research on human performance problems in NPPs are suggested

  20. Practical Insights from Initial Studies Related to Human Error Analysis Project (HEAP)

    International Nuclear Information System (INIS)

    Follesoe, Knut; Kaarstad, Magnhild; Droeivoldsmo, Asgeir; Hollnagel, Erik; Kirwan; Barry

    1996-01-01

    This report presents practical insights made from an analysis of the three initial studies in the Human Error Analysis Project (HEAP), and the first study in the US NRC Staffing Project. These practical insights relate to our understanding of diagnosis in Nuclear Power Plant (NPP) emergency scenarios and, in particular, the factors that influence whether a diagnosis will succeed or fail. The insights reported here focus on three inter-related areas: (1) the diagnostic strategies and styles that have been observed in single operator and team-based studies; (2) the qualitative aspects of the key operator support systems, namely VDU interfaces, alarms, training and procedures, that have affected the outcome of diagnosis; and (3) the overall success rates of diagnosis and the error types that have been observed in the various studies. With respect to diagnosis, certain patterns have emerged from the various studies, depending on whether operators were alone or in teams, and on their familiarity with the process. Some aspects of the interface and alarm systems were found to contribute to diagnostic failures while others supported performance and recovery. Similar results were found for training and experience. Furthermore, the availability of procedures did not preclude the need for some diagnosis. With respect to HRA and PSA, it was possible to record the failure types seen in the studies, and in some cases to give crude estimates of the failure likelihood for certain scenarios. Although these insights are interim in nature, they do show the type of information that can be derived from these studies. More importantly, they clarify aspects of our understanding of diagnosis in NPP emergencies, including implications for risk assessment, operator support systems development, and for research into diagnosis in a broader range of fields than the nuclear power industry. (author)

  1. MODEL PREDICTIVE CONTROL FOR PHOTOVOLTAIC STATION MAXIMUM POWER POINT TRACKING SYSTEM

    Directory of Open Access Journals (Sweden)

    I. Elzein

    2015-01-01

    Full Text Available The purpose of this paper is to present an alternative maximum power point tracking, MPPT, algorithm for a photovoltaic module, PVM, to produce the maximum power, Pmax, using the optimal duty ratio, D, for different types of converters and load matching. We present a state-based approach to the design of the maximum power point tracker for a stand-alone photovoltaic power generation system. The system under consideration consists of a solar array with nonlinear time-varying characteristics, and a step-up converter with an appropriate filter. The proposed algorithm has the advantages of maximizing the efficiency of the power utilization, can be integrated with other MPPT algorithms without affecting the PVM performance, is excellent for real-time applications and is a robust analytical method, different from the traditional MPPT algorithms which are more based on trial and error, or comparisons between present and past states. The procedure to calculate the optimal duty ratio for buck, boost and buck-boost converters, to transfer the maximum power from a PVM to a load, is presented in the paper. Additionally, the existence and uniqueness of the optimal internal impedance, to transfer the maximum power from a photovoltaic module using load matching, is proved.
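
    For the boost case, the load-matching condition has a closed form. A minimal sketch under the assumption of an ideal lossless boost stage in continuous conduction, whose input resistance is R_load(1-D)²; the panel values below are hypothetical:

    ```python
    import math

    def boost_duty_for_mpp(v_mpp, i_mpp, r_load):
        """Duty ratio D matching an ideal boost converter's input resistance,
        R_in = R_load * (1 - D)**2, to the panel's optimum R_mpp = V_mpp / I_mpp."""
        r_mpp = v_mpp / i_mpp
        if r_mpp > r_load:
            raise ValueError("a boost stage can only present R_in <= R_load")
        return 1.0 - math.sqrt(r_mpp / r_load)

    # Hypothetical panel: V_mpp = 17.5 V, I_mpp = 3.2 A, feeding a 24-ohm load.
    d = boost_duty_for_mpp(17.5, 3.2, 24.0)
    print(f"optimal duty ratio: {d:.3f}")   # R_mpp ~ 5.5 ohm -> D ~ 0.52
    ```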

  2. Maximum Power Point Tracking Based on Sliding Mode Control

    Directory of Open Access Journals (Sweden)

    Nimrod Vázquez

    2015-01-01

    Full Text Available Solar panels, which have become a good choice, are used to generate and supply electricity in commercial and residential applications. This generated power starts with the solar cells, which have a complex relationship between solar irradiation, temperature, and output power. For this reason a tracking of the maximum power point is required. Traditionally, this has been done by considering just the current and voltage conditions at the photovoltaic panel; however, temperature also influences the process. In this paper the voltage, current, and temperature in the PV system are considered to be part of a sliding surface for the proposed maximum power point tracking; this means a sliding mode controller is applied. The obtained results gave a good dynamic response, unlike traditional schemes, which are based only on computational algorithms. A traditional MPPT algorithm was added in order to assure a low steady-state error.

  3. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  4. Evaluation of inter-fraction error during prostate radiotherapy

    International Nuclear Information System (INIS)

    Komiyama, Takafumi; Nakamura, Koji; Motoyama, Tsuyoshi; Onishi, Hiroshi; Sano, Naoki

    2008-01-01

    The purpose of this study was to evaluate inter-fraction error (inter-fraction set-up error + inter-fraction internal organ motion) between treatment planning and delivery during radiotherapy for localized prostate cancer. Twenty-three prostate cancer patients underwent image-guided radical irradiation with the CT-linac system. All patients were treated in the supine position. After set-up with external skin markers, using the CT-linac system, pretherapy CT images were obtained and isocenter displacement was measured. The mean displacement of the isocenter was 1.8 mm, 3.3 mm, and 1.7 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The maximum displacement of the isocenter was 7 mm, 12 mm, and 9 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. The mean interquartile range of displacement of the isocenter was 1.8 mm, 3.7 mm, and 2.0 mm in the left-right, ventral-dorsal, and cranial-caudal directions, respectively. In radiotherapy for localized prostate cancer, inter-fraction error was largest in the ventral-dorsal directions. Errors in the ventral-dorsal directions influence both local control and late adverse effects. Our study suggested that set-up with external skin markers alone is not sufficient for radical radiotherapy of localized prostate cancer, and that a system such as the CT-linac is required for correction of inter-fraction error. (author)

  5. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  6. Determining the effect of grain size and maximum induction upon coercive field of electrical steels

    Science.gov (United States)

    Landgraf, Fernando José Gomes; da Silveira, João Ricardo Filipini; Rodrigues-Jr., Daniel

    2011-10-01

    Although theoretical models have already been proposed, experimental data is still lacking to quantify the influence of grain size upon coercivity of electrical steels. Some authors consider a linear inverse proportionality, while others suggest a square root inverse proportionality. Results also differ with regard to the slope of the reciprocal of grain size-coercive field relation for a given material. This paper discusses two aspects of the problem: the maximum induction used for determining coercive force and the possible effect of lurking variables such as the grain size distribution breadth and crystallographic texture. Electrical steel sheets containing 0.7% Si, 0.3% Al and 24 ppm C were cold-rolled and annealed in order to produce different grain sizes (ranging from 20 to 150 μm). Coercive field was measured along the rolling direction and found to depend linearly on reciprocal of grain size with a slope of approximately 0.9 (A/m)mm at 1.0 T induction. A general relation for coercive field as a function of grain size and maximum induction was established, yielding an average absolute error below 4%. Through measurement of B50 and image analysis of micrographs, the effects of crystallographic texture and grain size distribution breadth were qualitatively discussed.
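
    The reported slope can be recovered with an ordinary least-squares fit of coercive field against reciprocal grain size. A minimal sketch with made-up data points chosen to be consistent with the numbers quoted above:

    ```python
    import numpy as np

    # Hypothetical (grain size d in mm, coercive field Hc in A/m) pairs at 1.0 T.
    d = np.array([0.020, 0.040, 0.060, 0.100, 0.150])        # 20-150 um
    hc = np.array([62.0, 40.0, 33.0, 27.0, 24.0])

    # Fit Hc = a + b * (1/d); the slope b is the reported quantity (~0.9 (A/m)mm).
    b, a = np.polyfit(1.0 / d, hc, 1)
    print(f"Hc ~ {a:.1f} + {b:.2f}/d  (slope in (A/m)*mm)")
    ```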

  7. The orthopaedic error index: development and application of a novel national indicator for assessing the relative safety of hospital care using a cross-sectional approach.

    Science.gov (United States)

    Panesar, Sukhmeet S; Netuveli, Gopalakrishnan; Carson-Stevens, Andrew; Javad, Sundas; Patel, Bhavesh; Parry, Gareth; Donaldson, Liam J; Sheikh, Aziz

    2013-11-21

    The Orthopaedic Error Index for hospitals aims to provide the first national assessment of the relative safety of provision of orthopaedic surgery. Cross-sectional study (retrospective analysis of records in a database). The National Reporting and Learning System is the largest national repository of patient-safety incidents in the world with over eight million error reports. It offers a unique opportunity to develop novel approaches to enhancing patient safety, including investigating the relative safety of different healthcare providers and specialties. We extracted all orthopaedic error reports from the system over 1 year (2009-2010). The Orthopaedic Error Index was calculated as a sum of the error propensity and severity. All relevant hospitals offering orthopaedic surgery in England were then ranked by this metric to identify possible outliers that warrant further attention. 155 hospitals reported 48 971 orthopaedic-related patient-safety incidents. The mean Orthopaedic Error Index was 7.09/year (SD 2.72); five hospitals were identified as outliers. Three of these units were specialist tertiary hospitals carrying out complex surgery; the remaining two outlier hospitals had unusually high Orthopaedic Error Indexes: mean 14.46 (SD 0.29) and 15.29 (SD 0.51), respectively. The Orthopaedic Error Index has enabled identification of hospitals that may be putting patients at disproportionate risk of orthopaedic-related iatrogenic harm and which therefore warrant further investigation. It provides the prototype of a summary index of harm to enable surveillance of unsafe care over time across institutions. Further validation and scrutiny of the method will be required to assess its potential to be extended to other hospital specialties in the UK and also internationally to other health systems that have comparable national databases of patient-safety incidents.
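
    A sketch of the outlier-flagging step with hypothetical index values; the paper's exact outlier criterion is not given here, so a robust median/MAD rule is used for illustration:

    ```python
    import numpy as np

    # Hypothetical per-hospital Orthopaedic Error Indexes (the paper defines the
    # index as error propensity + severity); values are made up for illustration.
    hospitals = ["A", "B", "C", "D", "E", "F"]
    oei = np.array([6.8, 7.1, 5.9, 14.5, 15.3, 7.4])

    # Robust outlier rule: flag indexes more than 3 scaled MADs above the median.
    med = np.median(oei)
    mad = 1.4826 * np.median(np.abs(oei - med))
    flagged = [h for h, v in zip(hospitals, oei) if v > med + 3 * mad]
    print(flagged)   # ['D', 'E']
    ```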

  8. Revealing the Maximum Strength in Nanotwinned Copper

    DEFF Research Database (Denmark)

    Lu, L.; Chen, X.; Huang, Xiaoxu

    2009-01-01

    boundary–related processes. We investigated the maximum strength of nanotwinned copper samples with different twin thicknesses. We found that the strength increases with decreasing twin thickness, reaching a maximum at 15 nanometers, followed by a softening at smaller values that is accompanied by enhanced...

  9. 49 CFR 195.406 - Maximum operating pressure.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 3 2010-10-01: Maximum operating pressure, Section 195.406. Transportation, Other Regulations Relating to Transportation (Continued), PIPELINE AND HAZARDOUS... HAZARDOUS LIQUIDS BY PIPELINE, Operation and Maintenance. § 195.406 Maximum operating pressure. (a) Except for...

  10. Two statistics for evaluating parameter identifiability and error reduction

    Science.gov (United States)

    Doherty, John; Hunt, Randall J.

    2009-01-01

    Two statistics are presented that can be used to rank input parameters utilized by a model in terms of their relative identifiability based on a given or possible future calibration dataset. Identifiability is defined here as the capability of model calibration to constrain parameters used by a model. Both statistics require that the sensitivity of each model parameter be calculated for each model output for which there are actual or presumed field measurements. Singular value decomposition (SVD) of the weighted sensitivity matrix is then undertaken to quantify the relation between the parameters and observations that, in turn, allows selection of calibration solution and null spaces spanned by unit orthogonal vectors. The first statistic presented, "parameter identifiability", is quantitatively defined as the direction cosine between a parameter and its projection onto the calibration solution space. This varies between zero and one, with zero indicating complete non-identifiability and one indicating complete identifiability. The second statistic, "relative error reduction", indicates the extent to which the calibration process reduces error in estimation of a parameter from its pre-calibration level where its value must be assigned purely on the basis of prior expert knowledge. This is more sophisticated than identifiability, in that it takes greater account of the noise associated with the calibration dataset. Like identifiability, it has a maximum value of one (which can only be achieved if there is no measurement noise). Conceptually it can fall to zero; and even below zero if a calibration problem is poorly posed. An example, based on a coupled groundwater/surface-water model, is included that demonstrates the utility of the statistics. © 2009 Elsevier B.V.
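
    A minimal numerical sketch of the identifiability statistic: build a (hypothetical) weighted sensitivity matrix, take its SVD, and compute each parameter's direction cosine onto the calibration solution space:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical weighted sensitivity matrix J: 20 observations x 5 parameters.
    # Parameter 4 is made (almost) insensitive so it lands in the null space.
    J = rng.normal(size=(20, 5))
    J[:, 4] *= 1e-6

    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    k = np.sum(s > 1e-3 * s[0])          # dimension of the calibration solution space
    V_sol = Vt[:k].T                     # solution-space basis vectors (columns)

    # Identifiability of parameter i: direction cosine between the unit vector e_i
    # and its projection onto the solution space = norm of row i of V_sol.
    identifiability = np.linalg.norm(V_sol, axis=1)
    print(np.round(identifiability, 3))  # near 1 for params 0-3, near 0 for param 4
    ```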

  11. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
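
    For orientation, a minimal tau-leap simulator for a toy birth-death process (not the error-expansion machinery of the paper): propensities are frozen over each leap, and the firings of each reaction channel are drawn from Poisson distributions:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Tau-leap simulation of a birth-death process: X -> X+1 at rate b,
    # X -> X-1 at rate d*X, with propensities frozen over each leap of length tau.
    def tau_leap(x0=10, b=5.0, d=0.1, tau=0.05, t_end=50.0):
        x, t, path = x0, 0.0, [(0.0, x0)]
        while t < t_end:
            births = rng.poisson(b * tau)
            deaths = rng.poisson(d * x * tau)
            x = max(x + births - deaths, 0)   # clamp: leaps can overshoot zero
            t += tau
            path.append((t, x))
        return path

    path = tau_leap()
    print(f"final state: {path[-1][1]} (stationary mean is b/d = 50)")
    ```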

  12. Efficient algorithms for maximum likelihood decoding in the surface code

    Science.gov (United States)

    Bravyi, Sergey; Suchara, Martin; Vargo, Alexander

    2014-09-01

    We describe two implementations of the optimal error correction algorithm known as the maximum likelihood decoder (MLD) for the two-dimensional surface code with a noiseless syndrome extraction. First, we show how to implement MLD exactly in time O(n²), where n is the number of code qubits. Our implementation uses a reduction from MLD to simulation of matchgate quantum circuits. This reduction however requires a special noise model with independent bit-flip and phase-flip errors. Secondly, we show how to implement MLD approximately for more general noise models using matrix product states (MPS). Our implementation has running time O(nχ³), where χ is a parameter that controls the approximation precision. The key step of our algorithm, borrowed from the density matrix renormalization-group method, is a subroutine for contracting a tensor network on the two-dimensional grid. The subroutine uses MPS with a bond dimension χ to approximate the sequence of tensors arising in the course of contraction. We benchmark the MPS-based decoder against the standard minimum weight matching decoder, observing a significant reduction of the logical error probability for χ ≥ 4.

  13. Direct comparison of phase-sensitive vibrational sum frequency generation with maximum entropy method: case study of water.

    Science.gov (United States)

    de Beer, Alex G F; Samson, Jean-Sebastièn; Hua, Wei; Huang, Zishuai; Chen, Xiangke; Allen, Heather C; Roke, Sylvie

    2011-12-14

    We present a direct comparison of phase sensitive sum-frequency generation experiments with phase reconstruction obtained by the maximum entropy method. We show that both methods lead to the same complex spectrum. Furthermore, we discuss the strengths and weaknesses of each of these methods, analyzing possible sources of experimental and analytical errors. A simulation program for maximum entropy phase reconstruction is available at: http://lbp.epfl.ch/. © 2011 American Institute of Physics

  14. An error taxonomy system for analysis of haemodialysis incidents.

    Science.gov (United States)

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting an error taxonomy system which assumed no specific specialty to haemodialysis situations. Its application was conducted with 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to dialyser, circuit, medication and setting of dialysis condition. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. Error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified to staff human factors, communication, task and organisational factors were found in most dialysis incidents. Device/equipment/materials, medicine and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicine and documents, whereas dialysis technologists made more errors with device/equipment/materials. This error taxonomy system is able to investigate incidents and adverse events occurring in the dialysis setting but is also able to estimate safety-related status of an organisation, such as reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  15. Ergodicity, Maximum Entropy Production, and Steepest Entropy Ascent in the Proofs of Onsager's Reciprocal Relations

    Science.gov (United States)

    Benfenati, Francesco; Beretta, Gian Paolo

    2018-04-01

    We show that to prove the Onsager relations using the microscopic time reversibility one necessarily has to make an ergodic hypothesis, or a hypothesis closely linked to that. This is true in all the proofs of the Onsager relations in the literature: from the original proof by Onsager, to more advanced proofs in the context of linear response theory and the theory of Markov processes, to the proof in the context of the kinetic theory of gases. The only three proofs that do not require any kind of ergodic hypothesis are based on additional hypotheses on the macroscopic evolution: Ziegler's maximum entropy production principle (MEPP), the principle of time reversal invariance of the entropy production, or the steepest entropy ascent principle (SEAP).
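
    For reference, the relations at issue, in standard notation (fluxes J_i, thermodynamic forces X_j, phenomenological coefficients L_ij):

    ```latex
    % Linear force-flux relations and Onsager reciprocity: each flux J_i responds
    % to every thermodynamic force X_j through phenomenological coefficients L_{ij}.
    \[
      J_i = \sum_j L_{ij} X_j , \qquad L_{ij} = L_{ji} .
    \]
    % With a magnetic field B, reciprocity takes the Onsager-Casimir form
    % L_{ij}(B) = L_{ji}(-B).
    ```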

  16. Ac-dc converter firing error detection

    International Nuclear Information System (INIS)

    Gould, O.L.

    1996-01-01

    Each of the twelve Booster Main Magnet Power Supply modules consists of two three-phase, full-wave rectifier bridges in series to provide a 560 VDC maximum output. The harmonic contents of the twelve-pulse ac-dc converter output are multiples of the 60 Hz ac power input, with a predominant 720 Hz signal greater than 14 dB in magnitude above the closest harmonic components at maximum output. The 720 Hz harmonic is typically greater than 20 dB below the 500 VDC output signal under normal operation. Extracting specific harmonics from the rectifier output signal of a 6, 12, or 24 pulse ac-dc converter allows the detection of SCR firing angle errors or complete misfires. A bandpass filter provides the input signal to a frequency-to-voltage converter. Comparing the output of the frequency-to-voltage converter to a reference voltage level provides an indication of the magnitude of the harmonics in the ac-dc converter output signal.
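
    A sketch of the detection idea in software terms (the actual system uses an analog bandpass filter and a frequency-to-voltage converter; sampling rate, ripple level and alarm threshold below are hypothetical):

    ```python
    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 20_000                      # sampling rate in Hz (hypothetical)
    t = np.arange(0, 0.5, 1 / fs)
    rng = np.random.default_rng(3)

    # Hypothetical rectifier output: 500 V DC plus a 720 Hz ripple component.
    # Normally the ripple sits > 20 dB below the DC level; a firing error raises
    # it. Here an abnormally large ripple is injected to trip the alarm.
    v = 500.0 + 50.0 * np.sin(2 * np.pi * 720 * t) + rng.normal(0, 1, t.size)

    # Band-pass around 720 Hz and compare the RMS ripple to an alarm threshold.
    sos = butter(4, [700, 740], btype="bandpass", fs=fs, output="sos")
    ripple = sosfilt(sos, v)
    rms = np.sqrt(np.mean(ripple[fs // 10:] ** 2))   # skip the filter transient
    print(f"720 Hz component: {rms:.1f} V RMS -> {'ALARM' if rms > 10.0 else 'ok'}")
    ```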

  17. Approximation for maximum pressure calculation in containment of PWR reactors

    International Nuclear Information System (INIS)

    Souza, A.L. de

    1989-01-01

    A correlation was developed to estimate the maximum pressure in the dry containment of a PWR following a Loss-of-Coolant Accident (LOCA). The proposed expression is a function of the total energy released to the containment by the primary circuit, of the free volume of the containment building, and of the total surface area of the heat-conducting structures. The results show good agreement with those presented in the Final Safety Analysis Reports (FSAR) of several PWR plants. The errors are on the order of ±12%. (author) [pt

  18. A Gridded Daily Min/Max Temperature Dataset With 0.1° Resolution for the Yangtze River Valley and its Error Estimation

    Science.gov (United States)

    Xiong, Qiufen; Hu, Jianglin

    2013-05-01

    The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into the climatic mean and anomaly component. A spatial interpolation is developed which combines the 3D thin-plate spline scheme for climatological mean and the 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the relationship between the decreases in Min/Max temperature with elevation is robust and reliable on a long time-scale. The characteristics of the anomaly field tend to be related to elevation variation weakly, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybridized interpolation method, a daily Min/Max temperature dataset that covers the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations, which are national reference climatological stations, the basic meteorological observing stations and the ordinary meteorological observing stations in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. The error estimation of the gridded dataset is assessed by examining cross-validation statistics. The results show that the statistics of daily Min/Max temperature interpolation not only have high correlation coefficient (0.99) and interpolation efficiency (0.98), but also the mean bias error is 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C. Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley with realistic, successive gridded data with 0.1° × 0.1° spatial resolution and
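
    A compressed sketch of the hybrid scheme: a 3D thin-plate spline for the elevation-dependent climatological mean plus a single-pass Barnes (Gaussian-weighted) analysis for the daily anomaly. The station data, lapse rate and Barnes length scale below are all made up:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(4)

    # Hypothetical stations: (lon, lat, elevation km), climatological-mean Tmax,
    # and one day's anomaly. The climatology uses elevation; the anomaly does not.
    xyz = np.column_stack([rng.uniform(99, 123, 200), rng.uniform(24, 36, 200),
                           rng.uniform(0, 3, 200)])
    clim = 30.0 - 6.5 * xyz[:, 2] + rng.normal(0, 0.3, 200)   # lapse-rate signal
    anom = rng.normal(0, 1.5, 200)

    # 3D thin-plate spline for the mean field (captures the elevation dependence).
    tps = RBFInterpolator(xyz, clim, kernel="thin_plate_spline")

    def barnes(xy_grid, xy_sta, values, kappa=1.0):
        """Single-pass Barnes analysis: Gaussian-weighted average of anomalies."""
        d2 = ((xy_grid[:, None, :] - xy_sta[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / kappa)
        return (w * values).sum(1) / w.sum(1)

    grid = np.column_stack([np.full(5, 110.0), np.linspace(26, 34, 5),
                            np.linspace(0.2, 1.8, 5)])
    tmax = tps(grid) + barnes(grid[:, :2], xyz[:, :2], anom)
    print(np.round(tmax, 1))
    ```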

  19. Error signals in the subthalamic nucleus are related to post-error slowing in patients with Parkinson's disease

    NARCIS (Netherlands)

    Siegert, S.; Herrojo Ruiz, M.; Brücke, C.; Hueble, J.; Schneider, H.G.; Ullsperger, M.; Kühn, A.A.

    2014-01-01

    Error monitoring is essential for optimizing motor behavior. It has been linked to the medial frontal cortex, in particular to the anterior midcingulate cortex (aMCC). The aMCC subserves its performance-monitoring function in interaction with the basal ganglia (BG) circuits, as has been demonstrated

  20. Neural Modeling of Fuzzy Controllers for Maximum Power Point Tracking in Photovoltaic Energy Systems

    Science.gov (United States)

    Lopez-Guede, Jose Manuel; Ramos-Hernanz, Josean; Altın, Necmi; Ozdemir, Saban; Kurt, Erol; Azkune, Gorka

    2018-06-01

    One field in which electronic materials have an important role is energy generation, especially within the scope of photovoltaic energy. This paper deals with one of the most relevant enabling technologies within that scope, i.e., the algorithms for maximum power point tracking implemented in the direct current to direct current converters, and their modeling through artificial neural networks (ANNs). More specifically, as a proof of concept, we have addressed the problem of modeling a fuzzy logic controller that has shown its performance in previous works, and more specifically the dimensionless duty cycle signal that controls a quadratic boost converter. We achieved a very accurate model since the obtained mean squared error is 3.47 × 10⁻⁶, the maximum error is 16.32 × 10⁻³ and the regression coefficient R is 0.99992, all for the test dataset. This neural implementation has obvious advantages such as a higher fault tolerance and a simpler implementation, dispensing with all the complex elements needed to run a fuzzy controller (fuzzifier, defuzzifier, inference engine and knowledge base) because, ultimately, ANNs are sums and products.
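
    A minimal sketch of the modeling step: fit a small feed-forward network to input-output samples of a controller. The surrogate "fuzzy" duty-cycle function below is invented for illustration; the paper trains on data from its actual fuzzy controller:

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(5)

    # Hypothetical stand-in for the fuzzy controller: maps (PV voltage, current)
    # to a duty cycle in [0, 1]; an arbitrary smooth surrogate function.
    def fuzzy_duty(v, i):
        return 0.5 + 0.3 * np.tanh((17.0 - v) / 4.0) - 0.1 * np.tanh(i - 3.0)

    X = np.column_stack([rng.uniform(5, 25, 5000), rng.uniform(0, 6, 5000)])
    y = fuzzy_duty(X[:, 0], X[:, 1])

    # Train on 4000 samples, evaluate MSE and maximum error on the held-out 1000.
    net = MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=3000,
                       random_state=0).fit(X[:4000], y[:4000])
    err = np.abs(net.predict(X[4000:]) - y[4000:])
    print(f"MSE: {np.mean(err**2):.2e}, max abs error: {err.max():.2e}")
    ```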

  1. Unintentional Pharmaceutical-Related Medication Errors Caused by Laypersons Reported to the Toxicological Information Centre in the Czech Republic.

    Science.gov (United States)

    Urban, Michal; Leššo, Roman; Pelclová, Daniela

    2016-07-01

    The purpose of the article was to study unintentional pharmaceutical-related poisonings committed by laypersons that were reported to the Toxicological Information Centre in the Czech Republic. Identifying frequency, sources, reasons and consequences of the medication errors in laypersons could help to reduce the overall rate of medication errors. Records of medication error enquiries from 2013 to 2014 were extracted from the electronic database, and the following variables were reviewed: drug class, dosage form, dose, age of the subject, cause of the error, time interval from ingestion to the call, symptoms, prognosis at the time of the call and first aid recommended. Of the calls, 1354 met the inclusion criteria. Among them, central nervous system-affecting drugs (23.6%), respiratory drugs (18.5%) and alimentary drugs (16.2%) were the most common drug classes involved in the medication errors. The highest proportion of the patients was in the youngest age subgroup 0-5 year-old (46%). The reasons for the medication errors involved the leaflet misinterpretation and mistaken dose (53.6%), mixing up medications (19.2%), attempting to reduce pain with repeated doses (6.4%), erroneous routes of administration (2.2%), psychiatric/elderly patients (2.7%), others (9.0%) or unknown (6.9%). A high proportion of children among the patients may be due to the fact that children's dosages for many drugs vary by their weight, and more medications come in a variety of concentrations. Most overdoses could be prevented by safer labelling, proper cap closure systems for liquid products and medication reconciliation by both physicians and pharmacists. © 2016 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).

  2. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only the two metrics measure the characteristics of the probability distributions of modeling errors differently, but also the effects of these characteristics on the overall expected error are different. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
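
    The asymmetry is easy to demonstrate numerically: two error distributions with (nearly) the same ABS error can have very different SQ error. A sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    n = 100_000

    # Two hypothetical error distributions with (nearly) equal ABS error but very
    # different SQ error: model B concentrates its errors in rare large misses.
    err_a = rng.normal(0.0, 1.25, n)                        # many moderate errors
    err_b = np.where(rng.random(n) < 0.1,
                     rng.normal(0.0, 12.5, n), 0.0)         # rare large errors

    for name, e in [("A", err_a), ("B", err_b)]:
        print(f"model {name}: MAE = {np.mean(np.abs(e)):.2f}, "
              f"RMSE = {np.sqrt(np.mean(e ** 2)):.2f}")
    # Both MAE ~ 1.0, but RMSE is 1.25 for A and ~ 3.95 for B: substituting the
    # squared metric changes how the same error mass is judged.
    ```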

  3. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  4. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yi; Zhao, Shiguang; Gao, Xin

    2014-01-01

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real-world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
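
    The regularized quantity can be illustrated with a plug-in estimate: the mutual information between discrete classification responses and true labels (a simplification of the entropy-based estimation used in the paper):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import mutual_info_score

    # Train a linear classifier, then estimate I(response; label) from counts;
    # this is the quantity the regularizer pushes upward during learning.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X[:1500], y[:1500])
    responses = clf.predict(X[1500:])

    mi = mutual_info_score(y[1500:], responses)   # in nats
    print(f"I(response; label) ~ {mi:.3f} nats")
    ```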

  5. Temporal dynamics of conflict monitoring and the effects of one or two conflict sources on error-(related) negativity.

    Science.gov (United States)

    Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta

    2010-09-01

    The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which was comprised of two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source) was found, whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  6. Period--luminosity--color relations and pulsation modes of pulsating variable stars

    International Nuclear Information System (INIS)

    Breger, M.; Bregman, J.N.

    1975-01-01

    The periods of delta Scuti, RR Lyrae, dwarf Cepheid, and W Virginis variables have been investigated for their dependence on luminosity, color, mass, and pulsation modes. A maximum-likelihood method, which includes consideration of the observational errors in each coordinate, has been applied to obtain observational period-luminosity-color (P-L-C) relations

  7. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g., the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
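
    A minimal sketch of the estimation step with scikit-learn's Gaussian process regressor, using made-up noisy per-window error-rate estimates (the paper infers these from error-correction data):

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(7)

    # Hypothetical slowly drifting physical error rate, observed through noisy
    # per-window estimates (e.g. from syndrome statistics).
    t = np.linspace(0, 10, 40)[:, None]               # time, arbitrary units
    true_rate = 1e-3 * (1 + 0.5 * np.sin(0.6 * t.ravel()))
    observed = true_rate + rng.normal(0, 5e-5, t.size)

    kernel = RBF(length_scale=2.0) + WhiteKernel(noise_level=2.5e-9,
                                                 noise_level_bounds=(1e-12, 1e-3))
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

    # Predict (extrapolate) the error rate for upcoming correction windows.
    mean, std = gp.predict(np.array([[10.5], [11.0]]), return_std=True)
    print(np.round(mean, 5), np.round(std, 5))
    ```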

  8. On Inertial Body Tracking in the Presence of Model Calibration Errors.

    Science.gov (United States)

    Miezal, Markus; Taetz, Bertram; Bleser, Gabriele

    2016-07-22

    In inertial body tracking, the human body is commonly represented as a biomechanical model consisting of rigid segments with known lengths and connecting joints. The model state is then estimated via sensor fusion methods based on data from attached inertial measurement units (IMUs). This requires the relative poses of the IMUs w.r.t. the segments (the IMU-to-segment calibrations, subsequently called I2S calibrations) to be known. Since calibration methods based on static poses, movements and manual measurements are still the most widely used, potentially large human-induced calibration errors have to be expected. This work compares three newly developed/adapted extended Kalman filter (EKF) and optimization-based sensor fusion methods with an existing EKF-based method w.r.t. their segment orientation estimation accuracy in the presence of model calibration errors, with and without using magnetometer information. While the existing EKF-based method uses a segment-centered kinematic chain biomechanical model and a constant angular acceleration motion model, the newly developed/adapted methods are all based on a free segments model, where each segment is represented with six degrees of freedom in the global frame. Moreover, these methods differ in the assumed motion model (constant angular acceleration, constant angular velocity, inertial data as control input), the state representation (segment-centered, IMU-centered) and the estimation method (EKF, sliding window optimization). In addition to the free segments representation, the optimization-based method also represents each IMU with six degrees of freedom in the global frame. In the evaluation on simulated and real data from a three-segment model (an arm), the optimization-based method showed the smallest mean errors, standard deviations and maximum errors throughout all tests. It also showed the lowest dependency on magnetometer information and motion agility. Moreover, it was insensitive w.r.t. I2S position and

  9. Classification of Error Related Brain Activity in an Auditory Identification Task with Conditions of Varying Complexity

    Science.gov (United States)

    Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.

    2017-11-01

    The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or that mismatches an expected response. Brain activity during the monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs), revealing complex cerebral responses to deviant sensory stimuli. The development of accurate error detection systems is of great importance, both for practical applications and for investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations in the processing of erroneous sensory information at each level of complexity, we employ Support Vector Machine (SVM) classifiers with various learning methods and kernels, using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features, we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers, with mean accuracies of 93% and 91%, respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
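
    A compact sketch of such a pipeline (library choice, feature shapes, and hyperparameters are assumptions): sequential forward selection of ERP time-window features feeding an SVM classifier:

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import SequentialFeatureSelector
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(1)
        X = rng.standard_normal((120, 30))   # 120 trials x 30 windowed ERP features (placeholder data)
        y = rng.integers(0, 2, 120)          # correct (0) vs erroneous (1) responses

        svm = SVC(kernel="rbf", C=1.0)
        sfs = SequentialFeatureSelector(svm, n_features_to_select=8,
                                        direction="forward", cv=5)
        model = make_pipeline(StandardScaler(), sfs, svm)
        print(cross_val_score(model, X, y, cv=5).mean())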

  10. A national physician survey of diagnostic error in paediatrics.

    Science.gov (United States)

    Perrem, Lucy M; Fanshawe, Thomas R; Sharif, Farhana; Plüddemann, Annette; O'Neill, Michael B

    2016-10-01

    This cross-sectional survey explored paediatric physicians' perspectives regarding diagnostic errors. All paediatric consultants and specialist registrars in Ireland were invited to participate in this anonymous online survey. The response rate for the study was 54% (n = 127). Respondents had a median of 9 years' clinical experience (interquartile range (IQR) 4-20 years). A diagnostic error was reported at least monthly by 19 (15.0%) respondents. Consultants reported significantly fewer diagnostic errors than trainees (p = 0.01). Cognitive error was the top-ranked contributing factor to diagnostic error, with incomplete history and examination considered to be the principal cognitive error. Seeking a second opinion and close follow-up of patients to ensure that the diagnosis is correct were the highest-ranked clinician-based solutions to diagnostic error. Inadequate staffing levels and excessive workload were the most highly ranked system-related and situational factors. Increased access to and availability of consultants and experts was the most highly ranked system-based solution to diagnostic error. We found a low level of self-perceived diagnostic error in an experienced group of paediatricians, at variance with the literature and warranting further clarification. The results identify perceptions of the major cognitive, system-related and situational factors contributing to diagnostic error, as well as key preventative strategies. What is Known: • Diagnostic errors are an important source of preventable patient harm and have an estimated incidence of 10-15%. • They are multifactorial in origin and include cognitive, system-related and situational factors. What is New: • We identified a low rate of self-perceived diagnostic error, in contrast to the existing literature. • Incomplete history and examination, inadequate staffing levels and excessive workload are cited as the principal contributing factors to diagnostic error in this study.

  11. The Location-Scale Mixture Exponential Power Distribution: A Bayesian and Maximum Likelihood Approach

    OpenAIRE

    Rahnamaei, Z.; Nematollahi, N.; Farnoosh, R.

    2012-01-01

    We introduce an alternative skew-slash distribution by using the scale mixture of the exponential power distribution. We derive the properties of this distribution and estimate its parameters by Maximum Likelihood and Bayesian methods. In a simulation study we compute these estimators and their mean square errors, and we provide an example on real data to demonstrate the modeling strength of the new distribution.
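
    For the maximum likelihood part, a minimal illustration (the paper's estimator targets the full location-scale mixture; here only the plain exponential power family is fitted): SciPy's gennorm implements the exponential power (generalized normal) distribution, and its fit method returns ML estimates:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(42)
        sample = stats.gennorm.rvs(beta=1.5, loc=0.0, scale=2.0, size=2000,
                                   random_state=rng)

        # Maximum likelihood fit of the exponential power family (shape, location, scale).
        beta_hat, loc_hat, scale_hat = stats.gennorm.fit(sample)
        print(beta_hat, loc_hat, scale_hat)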

  12. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  13. Cultural differences in categorical memory errors persist with age.

    Science.gov (United States)

    Gutchess, Angela; Boduroglu, Aysecan

    2018-01-02

    This cross-sectional experiment examined the influence of aging on cross-cultural differences in memory errors. Previous research revealed that Americans committed more categorical memory errors than Turks; we tested whether the cognitive constraints associated with aging impacted the pattern of memory errors across cultures. Furthermore, older adults are vulnerable to memory errors for semantically related information, and we assessed whether this tendency occurs across cultures. Younger and older adults from the US and Turkey studied word pairs, with some pairs sharing a categorical relationship and some unrelated. Participants then completed a cued recall test, generating the second word of each pair in response to the first. These responses were scored as correct or as different types of errors, including categorical and semantic. The tendency for Americans to commit more categorical memory errors emerged for both younger and older adults. In addition, older adults across cultures committed more memory errors, and these were for semantically related information (including both categorical and other types of semantic errors). Heightened vulnerability to memory errors with age extends across cultural groups, and Americans' proneness to commit categorical memory errors occurs across ages. The findings indicate some robustness in the ways that age and culture influence memory errors.

  14. Technical errors in MR arthrography

    International Nuclear Information System (INIS)

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  16. Measurement Rounding Errors in an Assessment Model of Project Led Engineering Education

    Directory of Open Access Journals (Sweden)

    Francisco Moreira

    2009-11-01

    Full Text Available This paper analyzes the rounding errors that occur in the assessment of an interdisciplinary Project-Led Education (PLE) process implemented in the Integrated Master degree on Industrial Management and Engineering (IME) at University of Minho. PLE is an innovative educational methodology which makes use of active learning, promoting higher levels of motivation and students' autonomy. The assessment model is based on multiple evaluation components with different weights. Each component can be evaluated by several teachers involved in different Project Supporting Courses (PSC). This model can be affected by different types of errors, namely: (1) rounding errors, and (2) non-uniform criteria for rounding the grades. A rigorous analysis of the assessment model was made, and the rounding errors involved in each project component were characterized and measured. This resulted in a global maximum error of 0.308 on the individual student project grade, on a 0 to 100 scale. This analysis intended to improve not only the reliability of the assessment results, but also teachers' awareness of this problem. Recommendations are also made in order to improve the assessment model and reduce the rounding errors as much as possible.
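
    As a hedged illustration of how component-wise rounding accumulates (the weights and grades below are made up, not the IME model's actual values):

        # Worst-case rounding error of a weighted grade: if every component grade is
        # rounded to the nearest integer (error up to 0.5 each), the weighted sum can
        # drift by up to 0.5 * sum(weights) before its own final rounding.
        weights = [0.30, 0.25, 0.25, 0.20]       # hypothetical component weights
        true_grades = [78.4, 66.5, 91.2, 83.6]   # hypothetical raw grades (0-100)

        rounded = [round(g) for g in true_grades]
        exact = sum(w * g for w, g in zip(weights, true_grades))
        approx = sum(w * g for w, g in zip(weights, rounded))

        print(f"exact={exact:.3f} approx={approx:.3f} error={approx - exact:+.3f}")
        print(f"worst-case pre-final-rounding error = ±{0.5 * sum(weights):.3f}")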

  17. Performance Errors in Weight Training and Their Correction.

    Science.gov (United States)

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  18. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. An error propagation analysis is then made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as large as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation error as large as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is smaller when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65% and 85% of the Rn error as air temperature increases from about 20°C to 40°C. (author)
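
    A small sketch of that propagation rule (FAO-56-style expressions are assumed here; the constants are standard textbook values, not taken from the paper):

        import math

        def delta_svp(t_c):
            """Slope of the saturation vapour pressure curve (kPa/°C), FAO-56 form."""
            es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
            return 4098.0 * es / (t_c + 237.3) ** 2

        gamma = 0.066      # psychrometric constant (kPa/°C), a typical value
        rn_error = 26.0    # net radiation error (W m^-2), from the example above

        for t_c in (20.0, 30.0, 40.0):
            w = delta_svp(t_c) / (delta_svp(t_c) + gamma)  # radiation weighting factor
            print(f"T={t_c:4.1f}°C  W={w:.2f}  ETo error ≈ {w * rn_error:.1f} W m^-2")

    With these constants, W rises from roughly 0.69 at 20°C to roughly 0.86 at 40°C, matching the 65-85% range quoted above.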

  19. Novel high accurate sensorless dual-axis solar tracking system controlled by maximum power point tracking unit of photovoltaic systems

    International Nuclear Information System (INIS)

    Fathabadi, Hassan

    2016-01-01

    Highlights: • Novel highly accurate sensorless dual-axis solar tracker. • It has the advantages of both sensor-based and sensorless solar trackers. • It does not have the disadvantages of sensor-based and sensorless solar trackers. • Tracking error of only 0.11°, less than the tracking errors of other trackers. • An increase of 28.8–43.6%, depending on the season, in energy efficiency. - Abstract: In this study, a novel highly accurate sensorless dual-axis solar tracker controlled by the maximum power point tracking unit available in almost all photovoltaic systems is proposed. The maximum power point tracking controller continuously calculates the maximum output power of the photovoltaic module/panel/array, and uses the altitude and azimuth angle deviations to track the sun direction where the greatest value of the maximum output power is extracted. Unlike all other sensorless solar trackers, the proposed solar tracking system is a closed-loop system, meaning it uses the actual direction of the sun at any time to track the sun direction, and this is the contribution of this work. The proposed solar tracker has the advantages of both sensor-based and sensorless dual-axis solar trackers, without their disadvantages. All other sensorless solar trackers are open loop, i.e., they use offline data about the sun's path in the sky estimated from solar map equations, so low accuracy, cloudy skies, and the need for new data at each new location are their problems. A photovoltaic system has been built, and it is experimentally verified that the proposed solar tracking system tracks the sun direction with a tracking error of 0.11°, which is less than the tracking errors of other solar trackers, both sensor-based and sensorless. An increase of 28.8–43.6%, depending on the season, in energy efficiency is the main advantage of utilizing the proposed solar tracking system.

  20. Segmentation error and macular thickness measurements obtained with spectral-domain optical coherence tomography devices in neovascular age-related macular degeneration

    Directory of Open Access Journals (Sweden)

    Moosang Kim

    2013-01-01

    Full Text Available Purpose: To evaluate the frequency and severity of segmentation errors of two spectral-domain optical coherence tomography (SD-OCT) devices and the effect of these errors on central macular thickness (CMT) measurements. Materials and Methods: Twenty-seven eyes of 25 patients with neovascular age-related macular degeneration, examined using the Cirrus HD-OCT and Spectralis HRA + OCT, were retrospectively reviewed. Macular cube 512 × 128 and 5-line raster scans were performed with the Cirrus, and 512 × 25 volume scans with the Spectralis. The frequency and severity of segmentation errors were compared between scans. Results: Segmentation error frequency was 47.4% (baseline), 40.7% (1 month), 40.7% (2 months), and 48.1% (6 months) for the Cirrus, and 59.3%, 62.2%, 57.8%, and 63.7%, respectively, for the Spectralis, differing significantly between devices at all examinations (P < 0.05) except at baseline. The average error score was 1.21 ± 1.65 (baseline), 0.79 ± 1.18 (1 month), 0.74 ± 1.12 (2 months), and 0.96 ± 1.11 (6 months) for the Cirrus, and 1.73 ± 1.50, 1.54 ± 1.35, 1.38 ± 1.40, and 1.49 ± 1.30, respectively, for the Spectralis, differing significantly at 1 month and 2 months (P < 0.02). Automated and manual CMT measurements by the Spectralis were larger than those by the Cirrus. Conclusions: The Cirrus HD-OCT had a lower frequency and severity of segmentation errors than the Spectralis HRA + OCT. SD-OCT error should be considered when evaluating retinal thickness.

  1. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to core design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error, challenge can be considered a flaw in the system. The research utilized Human Error classifications, data triangulation via predictive human error analysis, and expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  2. State-independent error-disturbance trade-off for measurement operators

    International Nuclear Information System (INIS)

    Zhou, S.S.; Wu, Shengjun; Chau, H.F.

    2016-01-01

    In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions – one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.

  3. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety...... and characteristics of verbal communication errors such as handover errors and error during teamwork. Results Raters found description of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13...... (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...

  4. Probabilistic error bounds for reduced order modeling

    Energy Technology Data Exchange (ETDEWEB)

    Abdo, M.G.; Wang, C.; Abdel-Khalik, H.S., E-mail: abdo@purdue.edu, E-mail: wang1730@purdue.edu, E-mail: abdelkhalik@purdue.edu [Purdue Univ., School of Nuclear Engineering, West Lafayette, IN (United States)

    2015-07-01

    Reduced order modeling (ROM) has proven to be an effective tool when repeated execution of reactor analysis codes is required. ROM operates on the assumption that the intrinsic dimensionality of the associated reactor physics models is sufficiently small when compared to the nominal dimensionality of the input and output data streams. By employing a truncation technique with roots in linear algebra matrix decomposition theory, ROM effectively discards all components of the input and output data that have negligible impact on reactor attributes of interest. This manuscript introduces a mathematical approach to quantify the errors resulting from the discarded ROM components. As supported by numerical experiments, the introduced analysis proves that the contribution of the discarded components can be upper-bounded with an overwhelmingly high probability. Conversely, this implies that the ROM algorithm can self-adapt to determine the level of reduction needed such that the maximum resulting reduction error is below a given tolerance limit set by the user. (author)
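
    A toy sketch of the general idea (a generic randomized probe of the truncation error, not the manuscript's specific analysis): truncate an SVD-based ROM and sample the discarded components with random inputs to estimate the reduction error:

        import numpy as np

        rng = np.random.default_rng(0)
        # Synthetic operator with rapidly decaying singular values (low intrinsic dimension).
        A = rng.standard_normal((200, 50)) @ np.diag(np.logspace(0, -6, 50)) \
            @ rng.standard_normal((50, 50))

        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        r = 10                                  # assumed reduction rank
        A_rom = U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

        # Probe the discarded subspace: the largest response of (A - A_rom) over many
        # random unit inputs estimates, with high probability, the worst-case ROM error.
        errs = []
        for _ in range(30):
            x = rng.standard_normal(50)
            x /= np.linalg.norm(x)
            errs.append(np.linalg.norm((A - A_rom) @ x))
        print(f"probed error ~ {max(errs):.3e}, true sigma_(r+1) = {s[r]:.3e}")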

  5. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because we evaluated cognitive errors only, rather than including errors of interpretation. The most common missed finding was a small fracture, particularly of the hands or feet. First-year residents were the most likely to make an error, but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors.

  6. Estimators of the Relations of Equivalence, Tolerance and Preference Based on Pairwise Comparisons with Random Errors

    Directory of Open Access Journals (Sweden)

    Leszek Klukowski

    2012-01-01

    Full Text Available This paper presents a review of the author's results in the area of estimation of the relations of equivalence, tolerance and preference within a finite set, based on multiple, independent (in a stochastic way) pairwise comparisons with random errors, in binary and multivalent forms. These estimators require weaker assumptions than those used in the literature on the subject. Estimates of the relations are obtained from solutions to problems in discrete optimization. They allow the application of both types of comparisons - binary and multivalent (this fact relates to the tolerance and preference relations). The estimates can be verified in a statistical way; in particular, it is possible to verify the type of the relation. The estimates have been applied by the author to problems in forecasting, financial engineering and bio-cybernetics. (original abstract)

  7. Comparison of ETFs' performance related to the tracking error

    Directory of Open Access Journals (Sweden)

    Michaela Dorocáková

    2017-12-01

    Full Text Available With the development of financial markets, there has also been an immediate expansion of the fund industry, a representative form of collective investment. The purpose of index funds is to replicate the returns and risk of the underlying index to the largest possible extent, with tracking error being one of the most closely monitored performance indicators of these passively managed funds. The aim of this paper is to describe several perspectives on indexing, index funds and exchange-traded funds, to explain the issue of tracking error with its examination, and subsequently to compare such funds provided by leading investment management companies with regard to the different methods used for its evaluation. Our research shows that the decisive factors for the occurrence of replication deviation are fund size and the fund's stock consolidation. In addition, performance differences between an exchange-traded fund and its benchmark tend to show signs of seasonality, in the sense of increasing in the last months of a year.
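
    For reference, one common convention for computing tracking error (not necessarily the exact method compared in the paper) is the annualized standard deviation of the return differences between the fund and its benchmark:

        import numpy as np

        rng = np.random.default_rng(7)
        bench = rng.normal(0.0004, 0.01, 252)        # hypothetical daily index returns
        fund = bench + rng.normal(0.0, 0.0008, 252)  # ETF returns with replication noise

        active = fund - bench
        tracking_error = active.std(ddof=1) * np.sqrt(252)  # annualized
        print(f"annualized tracking error = {tracking_error:.4%}")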

  8. Development of relative humidity models by using optimized neural network structures

    Energy Technology Data Exchange (ETDEWEB)

    Martinez-romero, A.; Ortega, J. F.; Juan, J. A.; Tarjuelo, J. M.; Moreno, M. A.

    2010-07-01

    Climate has always played a very important role in life on earth, as well as in human activity and health. The influence of relative humidity (RH) in controlled environments (e.g. industrial processes in agro-food processing, cold storage of foods such as fruits, vegetables and meat, or controls in greenhouses) is very important. Relative humidity is a main factor in agricultural production and crop yield (due to its influence on crop water demand and on the development and distribution of pests and diseases, for example). The main objective of this paper is to estimate RH [maximum (RHmax), average (RHave), and minimum (RHmin)] data in a specific area, applied here to the Region of Castilla-La Mancha (C-LM), from data available at thermo-pluviometric weather stations. Artificial neural networks (ANNs) are used to generate RH from maximum and minimum temperatures and extraterrestrial solar radiation data. Model generation and validation are based on data from the years 2000 to 2008 from 44 complete agroclimatic weather stations. Relative errors are estimated as (1) spatial errors of 11.30%, 6.80% and 10.27% and (2) temporal errors of 10.34%, 6.59% and 9.77% for RHmin, RHmax and RHave, respectively. The use of ANNs is of interest for generating climate parameters from available climate data. For determining the optimal ANN structure for estimating RH values, model calibration and validation are necessary, considering spatial and temporal variability. (Author) 44 refs.
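
    A minimal sketch of such a model (synthetic data and an off-the-shelf network; the paper optimizes its own ANN structures): estimating average RH from maximum/minimum temperature and extraterrestrial radiation:

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(3)
        n = 500
        tmax = rng.uniform(5, 40, n)
        tmin = tmax - rng.uniform(2, 18, n)
        ra = rng.uniform(10, 42, n)   # extraterrestrial radiation (MJ m-2 d-1)
        # Synthetic target: RH falls as the daily temperature range widens.
        rh_ave = np.clip(95 - 2.2 * (tmax - tmin) + rng.normal(0, 4, n), 10, 100)

        X = np.column_stack([tmax, tmin, ra])
        X_tr, X_te, y_tr, y_te = train_test_split(X, rh_ave, random_state=0)

        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(12,), max_iter=3000,
                                           random_state=0))
        model.fit(X_tr, y_tr)
        pred = model.predict(X_te)
        print(f"mean relative error = {np.mean(np.abs(pred - y_te) / y_te):.2%}")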

  9. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners’ Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors they make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in their written production of English. The significance of the study is to identify the errors students make in writing, so that teachers can address them for better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method and was undertaken at an airline education center in Bandung. The results showed that syntax errors are more frequent than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it is advantageous for teachers to know which errors students frequently make in their learning, so that they can offer solutions for better English language learning achievement.   DOI: https://doi.org/10.24071/llt.2015.180205

  10. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  11. Human medial frontal cortex activity predicts learning from errors.

    Science.gov (United States)

    Hester, Robert; Barre, Natalie; Murphy, Kevin; Silk, Tim J; Mattingley, Jason B

    2008-08-01

    Learning from errors is a critical feature of human cognition. It underlies our ability to adapt to changing environmental demands and to tune behavior for optimal performance. The posterior medial frontal cortex (pMFC) has been implicated in the evaluation of errors to control behavior, although it has not previously been shown that activity in this region predicts learning from errors. Using functional magnetic resonance imaging, we examined activity in the pMFC during an associative learning task in which participants had to recall the spatial locations of 2-digit targets and were provided with immediate feedback regarding accuracy. Activity within the pMFC was significantly greater for errors that were subsequently corrected than for errors that were repeated. Moreover, pMFC activity during recall errors predicted future responses (correct vs. incorrect), despite a sizeable interval (on average 70 s) between an error and the next presentation of the same recall probe. Activity within the hippocampus also predicted future performance and correlated with error-feedback-related pMFC activity. A relationship between performance expectations and pMFC activity, in the absence of differing reinforcement value for errors, is consistent with the idea that error-related pMFC activity reflects the extent to which an outcome is "worse than expected."

  12. Optimized Large-scale CMB Likelihood and Quadratic Maximum Likelihood Power Spectrum Estimation

    Science.gov (United States)

    Gjerløw, E.; Colombo, L. P. L.; Eriksen, H. K.; Górski, K. M.; Gruppuso, A.; Jewell, J. B.; Plaszczynski, S.; Wehus, I. K.

    2015-11-01

    We revisit the problem of exact cosmic microwave background (CMB) likelihood and power spectrum estimation with the goal of minimizing computational costs through linear compression. This idea was originally proposed for CMB purposes by Tegmark et al., and here we develop it into a fully functioning computational framework for large-scale polarization analysis, adopting WMAP as a working example. We compare five different linear bases (pixel space, harmonic space, noise covariance eigenvectors, signal-to-noise covariance eigenvectors, and signal-plus-noise covariance eigenvectors) in terms of compression efficiency, and find that the computationally most efficient basis is the signal-to-noise eigenvector basis, which is closely related to the Karhunen-Loeve and Principal Component transforms, in agreement with previous suggestions. For this basis, the information in 6836 unmasked WMAP sky map pixels can be compressed into a smaller set of 3102 modes, with a maximum error increase of any single multipole of 3.8% at ℓ ≤ 32 and a maximum shift in the mean values of a joint distribution of an amplitude-tilt model of 0.006σ. This compression reduces the computational cost of a single likelihood evaluation by a factor of 5, from 38 to 7.5 CPU seconds, and it also results in a more robust likelihood by implicitly regularizing nearly degenerate modes. Finally, we use the same compression framework to formulate a numerically stable and computationally efficient variation of the Quadratic Maximum Likelihood implementation, which requires less than 3 GB of memory and 2 CPU minutes per iteration for ℓ ≤ 32, rendering low-ℓ QML CMB power spectrum analysis fully tractable on a standard laptop.
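
    A compact sketch of signal-to-noise eigenmode compression (generic linear algebra with toy covariances in place of the WMAP ones): solve the generalized eigenproblem S v = λ N v and keep only the high signal-to-noise modes:

        import numpy as np
        from scipy.linalg import eigh

        rng = np.random.default_rng(0)
        n = 100
        L = rng.standard_normal((n, n))
        S = L @ L.T / n                          # toy signal covariance
        N = np.diag(rng.uniform(0.5, 2.0, n))    # toy (diagonal) noise covariance

        # Generalized eigenproblem S v = lambda N v; eigenvalues are per-mode S/N.
        evals, evecs = eigh(S, N)
        keep = evals > 0.1        # discard modes with negligible signal-to-noise
        B = evecs[:, keep]        # compression operator: d_compressed = B.T @ d

        d = rng.multivariate_normal(np.zeros(n), S + N)
        d_comp = B.T @ d
        print(f"kept {keep.sum()} of {n} modes")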

  14. Simulation of maximum light use efficiency for some typical vegetation types in China

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Maximum light use efficiency (εmax) is a key parameter for the estimation of net primary productivity (NPP) derived from remote sensing data, but its value for each vegetation type is still disputed. The εmax for some typical vegetation types in China is simulated using a modified least squares function based on NOAA/AVHRR remote sensing data and field-observed NPP data, with vegetation classification accuracy introduced into the process. A sensitivity analysis of εmax to vegetation classification accuracy is also conducted. The results show that the simulated values of εmax are greater than the value used in the CASA model and less than the values simulated with the BIOME-BGC model, which is consistent with other studies. The relative error of εmax resulting from classification accuracy is -5.5% to 8.0%. This indicates that the simulated values of εmax are reliable and stable.

  15. Maximum Likelihood Time-of-Arrival Estimation of Optical Pulses via Photon-Counting Photodetectors

    Science.gov (United States)

    Erkmen, Baris I.; Moision, Bruce E.

    2010-01-01

    Many optical imaging, ranging, and communications systems rely on the estimation of the arrival time of an optical pulse. Recently, such systems have been increasingly employing photon-counting photodetector technology, which changes the statistics of the observed photocurrent. This requires time-of-arrival estimators to be developed and their performances characterized. The statistics of the output of an ideal photodetector, which are well modeled as a Poisson point process, were considered. An analytical model was developed for the mean-square error of the maximum likelihood (ML) estimator, demonstrating two phenomena that cause deviations from the minimum achievable error at low signal power. An approximation was derived to the threshold at which the ML estimator essentially fails to provide better than a random guess of the pulse arrival time. Comparing the analytic model performance predictions to those obtained via simulations, it was verified that the model accurately predicts the ML performance over all regimes considered. There is little prior art that attempts to understand the fundamental limitations to time-of-arrival estimation from Poisson statistics. This work establishes both a simple mathematical description of the error behavior, and the associated physical processes that yield this behavior. Previous work on mean-square error characterization for ML estimators has predominantly focused on additive Gaussian noise. This work demonstrates that the discrete nature of the Poisson noise process leads to a distinctly different error behavior.
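
    A small simulation sketch of the estimator analyzed here (the pulse shape, rates, and grid search are assumptions): draw photon counts from an inhomogeneous Poisson process and maximize the Poisson log-likelihood over candidate arrival times:

        import numpy as np

        rng = np.random.default_rng(5)
        T, dt = 10.0, 0.001
        t = np.arange(0.0, T, dt)
        true_delay, sigma = 4.2, 0.3
        signal_rate = 50.0   # mean signal photons per pulse
        background = 2.0     # background photons per unit time

        def intensity(delay):
            pulse = np.exp(-0.5 * ((t - delay) / sigma) ** 2)
            return background + signal_rate * pulse / (pulse.sum() * dt)

        # Photon counts per time bin from the inhomogeneous Poisson process.
        counts = rng.poisson(intensity(true_delay) * dt)

        # ML estimate: maximize the Poisson log-likelihood over a grid of delays.
        delays = np.arange(1.0, 9.0, 0.01)
        def loglik(d):
            lam = intensity(d) * dt
            return np.sum(counts * np.log(lam)) - lam.sum()

        ml_delay = delays[np.argmax([loglik(d) for d in delays])]
        print(f"true delay = {true_delay}, ML estimate = {ml_delay:.2f}")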

  16. Forecast errors in IEA-countries' energy consumption

    DEFF Research Database (Denmark)

    Linderoth, Hans

    2002-01-01

    Every year Policy of IEA Countries includes a forecast of the energy consumption in the member countries. Forecasts concerning the years 1985,1990 and 1995 can now be compared to actual values. The second oil crisis resulted in big positive forecast errors. The oil price drop in 1986 did not have...... the small value is often the sum of large positive and negative errors. Almost no significant correlation is found between forecast errors in the 3 years. Correspondingly, no significant correlation coefficient is found between forecasts errors in the 3 main energy sectors. Therefore, a relatively small...

  17. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to

  18. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    Full Text Available As the position sensor, the photoelectric angle encoder affects the accuracy and stability of a telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. Six types of subdivision error sources are first extracted from the mathematical expressions of the subdivision signals. The period length relationships between subdivision signals and subdivision errors are then deduced, and an error compensation algorithm using only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm is applied only to the speed loop of the TCS, and in Model II to both the speed loop and the position loop. In the context of an actual project, the elevation jitter of the telescope is discussed to decide the necessity of DC-type subdivision error compensation. Low-speed elevation performance before and after error compensation is compared, leading to the conclusion that Model II is preferred. In contrast to the original performance, the maximum position error of the elevation with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″, and elevation jitter decreases markedly. This method can compensate the encoder subdivision errors effectively and improve the stability of the TCS.
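
    A generic sketch of the compensation idea (a harmonic least-squares fit is assumed; the paper derives the error terms analytically from the subdivision signals): model the subdivision error as low-order harmonics of the signal period and subtract the fitted error:

        import numpy as np

        rng = np.random.default_rng(4)
        theta = np.linspace(0.0, 2 * np.pi, 2000)   # phase within one signal period
        # Synthetic subdivision error: DC offset plus 1st/2nd harmonics plus noise.
        err = 0.05 + 0.30 * np.sin(theta + 0.4) + 0.12 * np.sin(2 * theta - 1.1) \
            + 0.02 * rng.standard_normal(theta.size)

        # Least-squares fit of a truncated Fourier series to the measured error.
        H = np.column_stack([np.ones_like(theta),
                             np.sin(theta), np.cos(theta),
                             np.sin(2 * theta), np.cos(2 * theta)])
        coef, *_ = np.linalg.lstsq(H, err, rcond=None)
        residual = err - H @ coef

        print(f"max error before: {np.abs(err).max():.3f}, "
              f"after: {np.abs(residual).max():.3f}")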

  19. Error Cost Escalation Through the Project Life Cycle

    Science.gov (United States)

    Stecklein, Jonette M.; Dabney, Jim; Dick, Brandon; Haskins, Bill; Lovell, Randy; Moroney, Gregory

    2004-01-01

    It is well known that the costs to fix errors increase as a project matures, but how fast do those costs build? A study was performed to determine the relative cost of fixing errors discovered during various phases of a project life cycle. This study used three approaches to determine the relative costs: the bottom-up cost method, the total cost breakdown method, and the top-down hypothetical project method. The approaches and results described in this paper presume development of a hardware/software system having project characteristics similar to those used in the development of a large, complex spacecraft, a military aircraft, or a small communications satellite. The results show the degree to which costs escalate as errors are discovered and fixed at later and later phases in the project life cycle. If the cost of fixing a requirements error discovered during the requirements phase is defined to be 1 unit, the cost to fix that error if found during the design phase increases to 3-8 units; at the manufacturing/build phase, the cost to fix the error is 7-16 units; at the integration and test phase, the cost to fix the error becomes 21-78 units; and at the operations phase, the cost to fix the requirements error ranged from 29 units to more than 1500 units.

  20. On the Spatial and Temporal Sampling Errors of Remotely Sensed Precipitation Products

    Directory of Open Access Journals (Sweden)

    Ali Behrangi

    2017-11-01

    Full Text Available Observation with coarse spatial and temporal sampling can cause large errors in quantification of the amount, intensity, and duration of precipitation events. In this study, the errors resulting from temporal and spatial sampling of precipitation events were quantified and examined using the latest version (V4) of the Global Precipitation Measurement (GPM) mission Integrated Multi-satellitE Retrievals for GPM (IMERG), which has been available since spring 2014. Relative mean square error was calculated at 0.1° × 0.1° every 0.5 h between the degraded (temporally and spatially) and original IMERG products. The temporal and spatial degradation was performed by producing three-hour (T3), six-hour (T6), 0.5° × 0.5° (S5), and 1.0° × 1.0° (S10) maps. The results show generally larger errors over land than ocean, especially over mountainous regions. The relative error of T6 is almost 20% larger than T3 over tropical land, but is smaller in higher latitudes. Over land the relative error of T6 is larger than S5 across all latitudes, while T6 has larger relative error than S10 poleward of 20°S–20°N. Similarly, the relative error of T3 exceeds S5 poleward of 20°S–20°N, but does not exceed S10, except in very high latitudes. Similar results are also seen over ocean, but the error ratios are generally less sensitive to seasonal changes. The results also show that the spatial and temporal relative errors are not highly correlated. Overall, lower correlations between the spatial and temporal relative errors are observed over ocean than over land. Quantification of such spatiotemporal effects provides additional insights into evaluation studies, especially when different products are cross-compared at a range of spatiotemporal scales.
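
    A schematic of the degrade-and-compare procedure (synthetic field and toy scales; the study uses half-hourly 0.1° IMERG data):

        import numpy as np

        rng = np.random.default_rng(2)
        # Synthetic "high-resolution" precipitation: time x lat x lon
        P = rng.gamma(shape=0.3, scale=2.0, size=(48, 40, 40))

        def degrade_time(P, factor):
            """Average consecutive time steps, then repeat to restore the shape."""
            coarse = P.reshape(P.shape[0] // factor, factor, *P.shape[1:]).mean(axis=1)
            return np.repeat(coarse, factor, axis=0)

        def degrade_space(P, factor):
            """Block-average in lat/lon, then tile back to the original grid."""
            t, ny, nx = P.shape
            coarse = P.reshape(t, ny // factor, factor,
                               nx // factor, factor).mean(axis=(2, 4))
            return coarse.repeat(factor, axis=1).repeat(factor, axis=2)

        def relative_mse(P_deg, P_ref):
            return ((P_deg - P_ref) ** 2).mean() / (P_ref ** 2).mean()

        print("T-degraded rel. MSE:", relative_mse(degrade_time(P, 6), P))
        print("S-degraded rel. MSE:", relative_mse(degrade_space(P, 5), P))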

  1. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently, Chakraborty proposed a simple technique called the packet combining (PC) scheme, in which errors are corrected at the receiver from the erroneous copies. The PC scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
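
    A toy sketch of the basic packet combining idea these schemes build on (the CRC framing here is an assumption, not the letter's exact scheme): XOR two erroneous copies to locate candidate error bits, then search bit flips at those positions until the check passes:

        import zlib
        from itertools import product

        def crc(bits):
            return zlib.crc32(bytes(bits))

        sent = [1, 0, 1, 1, 0, 0, 1, 0] * 4   # 32 payload bits
        checksum = crc(sent)                   # transmitted alongside the data

        copy1 = sent.copy()
        copy1[2] ^= 1                          # first copy: bit error at position 2
        copy2 = sent.copy()
        copy2[21] ^= 1                         # second copy: bit error at position 21

        # XOR of the copies marks positions where exactly one copy is in error.
        suspect = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]

        # Try flipping subsets of the suspect bits in copy1 until the CRC matches.
        for flips in product([0, 1], repeat=len(suspect)):
            candidate = copy1.copy()
            for pos, f in zip(suspect, flips):
                candidate[pos] ^= f
            if crc(candidate) == checksum:
                print("recovered:", candidate == sent)   # True
                break

    Note that this search finds nothing when both copies are corrupted at the same position, which is exactly the failure mode of the PC scheme described above.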

  2. Dissociable genetic contributions to error processing: a multimodal neuroimaging study.

    Directory of Open Access Journals (Sweden)

    Yigal Agam

    Full Text Available Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN, an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC. While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation.We measured both error markers in a sample of 92 comprised of healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4 C-521T (rs1800955, which has been associated with the ERN and methylenetetrahydrofolate reductase (MTHFR C677T (rs1801133, which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response.We replicated our previous report of a posterior cingulate source of the ERN in healthy participants in the schizophrenia and obsessive-compulsive disorder groups. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not ERN amplitude, but the difference in effects on the two error markers was not significant.DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that

  3. Characteristics of patients making serious inhaler errors with a dry powder inhaler and association with asthma-related events in a primary care setting

    Science.gov (United States)

    Westerik, Janine A. M.; Carter, Victoria; Chrystyn, Henry; Burden, Anne; Thompson, Samantha L.; Ryan, Dermot; Gruffydd-Jones, Kevin; Haughney, John; Roche, Nicolas; Lavorini, Federico; Papi, Alberto; Infantino, Antonio; Roman-Rodriguez, Miguel; Bosnic-Anticevich, Sinthia; Lisspers, Karin; Ställberg, Björn; Henrichsen, Svein Høegh; van der Molen, Thys; Hutton, Catherine; Price, David B.

    2016-01-01

    Abstract Objective: Correct inhaler technique is central to effective delivery of asthma therapy. The study aim was to identify factors associated with serious inhaler technique errors and their prevalence among primary care patients with asthma using the Diskus dry powder inhaler (DPI). Methods: This was a historical, multinational, cross-sectional study (2011–2013) using the iHARP database, an international initiative that includes patient- and healthcare provider-reported questionnaires from eight countries. Patients with asthma were observed for serious inhaler errors by trained healthcare providers as predefined by the iHARP steering committee. Multivariable logistic regression, stepwise reduced, was used to identify clinical characteristics and asthma-related outcomes associated with ≥1 serious errors. Results: Of 3681 patients with asthma, 623 (17%) were using a Diskus (mean [SD] age, 51 [14]; 61% women). A total of 341 (55%) patients made ≥1 serious errors. The most common errors were the failure to exhale before inhalation, insufficient breath-hold at the end of inhalation, and inhalation that was not forceful from the start. Factors significantly associated with ≥1 serious errors included asthma-related hospitalization the previous year (odds ratio [OR] 2.07; 95% confidence interval [CI], 1.26–3.40); obesity (OR 1.75; 1.17–2.63); poor asthma control the previous 4 weeks (OR 1.57; 1.04–2.36); female sex (OR 1.51; 1.08–2.10); and no inhaler technique review during the previous year (OR 1.45; 1.04–2.02). Conclusions: Patients with evidence of poor asthma control should be targeted for a review of their inhaler technique even when using a device thought to have a low error rate. PMID:26810934

  4. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    Science.gov (United States)

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (the equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate three-class task performance. However, it also limits the generality of the resulting model, because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both the MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.

  5. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety...... (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...... incidents. The RCARs rich descriptions of the incidents revealed the organisational factors and needs related to these errors....

  6. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Full Text Available Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequent errors in ultrasound are presented, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper settings of general enhancement, time gain curve or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors are listed, and methods of minimizing their number are discussed, including those related to appropriate examination technique, taking into account data from the case history and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler and elastography. Examples of errors resulting from the technical conditions of the method are presented, along with those dependent on the examiner, which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  7. The estimation of differential counting measurements of positive quantities with relatively large statistical errors

    International Nuclear Information System (INIS)

    Vincent, C.H.

    1982-01-01

    Bayes' principle is applied to the differential counting measurement of a positive quantity in which the statistical errors are not necessarily small in relation to the true value of the quantity. The methods of estimation derived are found to give consistent results and to avoid the anomalous negative estimates sometimes obtained by conventional methods. One of the methods given provides a simple means of deriving the required estimates from conventionally presented results and appears to have wide potential applications. Both methods provide the actual posterior probability distribution of the quantity to be measured. A particularly important potential application is the correction of counts on low radioactivity samples for background. (orig.)
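
    A minimal sketch of the kind of Bayesian treatment described, assuming gross counts that are Poisson-distributed about the source-plus-background expectation and a flat prior on the non-negative source contribution (the paper's exact prior and formulation may differ):

      import numpy as np
      from scipy.stats import poisson

      def posterior_source(n_gross, b, s_max=50.0, n_grid=10001):
          # Posterior of a positive source contribution s given gross counts
          # n_gross ~ Poisson(s + b), known background expectation b, and a
          # flat prior restricted to s >= 0.
          s = np.linspace(0.0, s_max, n_grid)
          like = poisson.pmf(n_gross, s + b)   # likelihood on the grid
          post = like / np.trapz(like, s)      # normalize over s >= 0
          return s, post

      # Low-activity sample: 3 gross counts, expected background of 5 counts.
      s, post = posterior_source(n_gross=3, b=5.0)
      post_mean = np.trapz(s * post, s)
      print(f"conventional estimate: {3 - 5.0:.1f}, posterior mean: {post_mean:.2f}")
      # The conventional background-subtracted estimate is negative (-2.0);
      # the posterior is confined to s >= 0, so its mean stays positive.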

  8. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP) and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 (47%) of the errors, followed by dispensing errors with 13 (29%); seven reports (16%) were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%) to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  9. Common Errors in Ecological Data Sharing

    Directory of Open Access Journals (Sweden)

    Robert B. Cook

    2013-04-01

    Full Text Available Objectives: (1) to identify common errors in data organization and metadata completeness that would preclude a “reader” from being able to interpret and re-use the data for a new purpose; and (2) to develop a set of best practices derived from these common errors that would guide researchers in creating more usable data products that could be readily shared, interpreted, and used. Methods: We used directed qualitative content analysis to assess and categorize data and metadata errors identified by peer reviewers of data papers published in the Ecological Society of America's (ESA) Ecological Archives. Descriptive statistics provided the relative frequency of the errors identified during the peer review process. Results: There were seven overarching error categories: Collection & Organization, Assure, Description, Preserve, Discover, Integrate, and Analyze/Visualize. These categories represent errors researchers regularly make at each stage of the Data Life Cycle. Collection & Organization and Description errors were some of the most common errors, both of which occurred in over 90% of the papers. Conclusions: Publishing data for sharing and reuse is error prone, and each stage of the Data Life Cycle presents opportunities for mistakes. The most common errors occurred when the researcher did not provide adequate metadata to enable others to interpret and potentially re-use the data. Fortunately, there are ways to minimize these mistakes through carefully recording all details about study context, data collection, QA/QC, and analytical procedures from the beginning of a research project and then including this descriptive information in the metadata.

  10. Estimation Methods for Non-Homogeneous Regression - Minimum CRPS vs Maximum Likelihood

    Science.gov (United States)

    Gebetsberger, Manuel; Messner, Jakob W.; Mayr, Georg J.; Zeileis, Achim

    2017-04-01

    Non-homogeneous regression models are widely used to statistically post-process numerical weather prediction models. Such regression models correct for errors in mean and variance and can forecast a full probability distribution. To estimate the corresponding regression coefficients, CRPS minimization has been performed in many meteorological post-processing studies over the last decade. In contrast to maximum likelihood estimation, CRPS minimization is claimed to yield more calibrated forecasts. Theoretically, both scoring rules, used as optimization criteria, should be able to locate the same (unknown) optimum. Discrepancies might result from a wrong distributional assumption about the observed quantity. To address this theoretical concept, this study compares maximum likelihood and minimum CRPS estimation for different distributional assumptions. First, a synthetic case study shows that, for an appropriate distributional assumption, both estimation methods yield similar regression coefficients. The log-likelihood estimator is slightly more efficient. A real-world case study for surface temperature forecasts at different sites in Europe confirms these results but shows that surface temperature does not always follow the classical assumption of a Gaussian distribution. KEYWORDS: ensemble post-processing, maximum likelihood estimation, CRPS minimization, probabilistic temperature forecasting, distributional regression models
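
    The contrast between the two estimators can be reproduced in a few lines. The sketch below fits a Gaussian non-homogeneous regression to synthetic data by minimizing either the negative log-likelihood or the closed-form Gaussian CRPS; the model structure and all parameter values are illustrative assumptions, not taken from the study:

      import numpy as np
      from scipy.stats import norm
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)

      # Synthetic ensemble statistics: m is the ensemble mean, s the spread.
      n = 2000
      m = rng.normal(0, 2, n)
      s = rng.uniform(0.5, 2.0, n)
      obs = 0.5 + 0.9 * m + np.exp(0.1) * s ** 0.4 * rng.normal(0, 1, n)

      def mu_sigma(p):
          a, b, c, d = p
          return a + b * m, np.exp(c + d * np.log(s))

      def nll(p):   # maximum likelihood: minimize negative log-likelihood
          mu, sig = mu_sigma(p)
          return -np.sum(norm.logpdf(obs, mu, sig))

      def crps(p):  # minimum CRPS: closed form for a Gaussian forecast
          mu, sig = mu_sigma(p)
          z = (obs - mu) / sig
          return np.sum(sig * (z * (2 * norm.cdf(z) - 1)
                               + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi)))

      p0 = np.array([0.0, 1.0, 0.0, 0.0])
      print("ML   coefficients:", minimize(nll, p0, method="Nelder-Mead").x.round(3))
      print("CRPS coefficients:", minimize(crps, p0, method="Nelder-Mead").x.round(3))
      # With a correct Gaussian assumption both land near (0.5, 0.9, 0.1, 0.4).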

  11. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  12. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  13. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, Jens (Danmarks Tekniske Hoejskole, Copenhagen)

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  14. An improved approach to reduce partial volume errors in brain SPET

    International Nuclear Information System (INIS)

    Hatton, R.L.; Hatton, B.F.; Michael, G.; Barnden, L.; QUT, Brisbane, QLD; The Queen Elizabeth Hospital, Adelaide, SA

    1999-01-01

    Full text: Limitations in SPET resolution give rise to significant partial volume error (PVE) in small brain structures. We have investigated a previously published method (Muller-Gartner et al., J Cereb Blood Flow Metab 1992;16: 650-658) to correct PVE in grey matter using MRI. An MRI is registered and segmented to obtain a grey matter tissue volume, which is then smoothed to match the resolution of the corresponding SPET. By dividing the original SPET by this correction map, structures can be corrected for PVE on a pixel-by-pixel basis. Since this approach is limited by space-invariant filtering, a modification was made by estimating projections for the segmented MRI and reconstructing these using parameters identical to those of the SPET. The methods were tested on simulated brain scans, reconstructed with the ordered subsets EM algorithm (8, 16, 32, 64 equivalent EM iterations). The new method provided better recovery visually. For 32 EM iterations, recovery coefficients were calculated for grey matter regions. The effects of potential errors in the method were examined. Mean recovery was unchanged with a one-pixel registration error, the maximum error found in most registration programs. Segmentation errors > 2 pixels result in a loss of accuracy for small structures. The method promises to be useful for reducing PVE in brain SPET.
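
    A minimal sketch of the underlying grey matter correction (the space-invariant-filter version; the modification described above, which projects and reconstructs the segmented MRI, is omitted, and all sizes and resolutions are illustrative):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def pve_correct(spet, gm_mask, fwhm_pix, threshold=0.25):
          # Smooth the segmented grey matter mask to the SPET resolution and
          # divide the image by it, pixel by pixel, inside the grey matter.
          sigma = fwhm_pix / 2.355                   # FWHM -> Gaussian sigma
          correction = gaussian_filter(gm_mask.astype(float), sigma)
          corrected = np.zeros_like(spet, dtype=float)
          ok = correction > threshold                # avoid dividing by ~0
          corrected[ok] = spet[ok] / correction[ok]
          return corrected

      # Synthetic 2D example: a small grey matter structure at coarse resolution.
      gm = np.zeros((64, 64)); gm[28:36, 28:36] = 1
      spet = gaussian_filter(gm * 100.0, 8 / 2.355)  # "observed" blurred activity
      recovered = pve_correct(spet, gm, fwhm_pix=8)
      print(spet[32, 32], recovered[32, 32])         # recovery toward the true 100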

  15. Fixturing error measurement and analysis using CMMs

    International Nuclear Information System (INIS)

    Wang, Y; Chen, X; Gindy, N

    2005-01-01

    The influence of the fixture on the errors of a machined surface can be very significant. The surface errors generated during machining can be measured with a coordinate measurement machine (CMM) through the displacements of three coordinate systems on a fixture-workpiece pair relative to the deviation of the machined surface. The surface errors consist of the component movement, the component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study.

  16. The next organizational challenge: finding and addressing diagnostic error.

    Science.gov (United States)

    Graber, Mark L; Trowbridge, Robert; Myers, Jennifer S; Umscheid, Craig A; Strull, William; Kanter, Michael H

    2014-03-01

    Although health care organizations (HCOs) are intensely focused on improving the safety of health care, efforts to date have almost exclusively targeted treatment-related issues. The literature confirms that the approaches HCOs use to identify adverse medical events are not effective in finding diagnostic errors, so the initial challenge is to identify cases of diagnostic error. WHY HEALTH CARE ORGANIZATIONS NEED TO GET INVOLVED: HCOs are preoccupied with many quality- and safety-related operational and clinical issues, including performance measures. The case for paying attention to diagnostic errors, however, is based on the following four points: (1) diagnostic errors are common and harmful, (2) high-quality health care requires high-quality diagnosis, (3) diagnostic errors are costly, and (4) HCOs are well positioned to lead the way in reducing diagnostic error. FINDING DIAGNOSTIC ERRORS: Current approaches to identifying diagnostic errors, such as occurrence screens, incident reports, autopsy, and peer review, were not designed to detect diagnostic issues (or problems of omission in general) and/or rely on voluntary reporting. The realization that the existing tools are inadequate has spurred efforts to identify novel tools that could be used to discover diagnostic errors or breakdowns in the diagnostic process that are associated with errors. Two new approaches are described in case studies: Maine Medical Center's case-finding of diagnostic errors by facilitating direct reports from physicians, and Kaiser Permanente's electronic health record-based reports that detect process breakdowns in the follow-up of abnormal findings. By raising awareness and implementing targeted programs that address diagnostic error, HCOs may begin to play an important role in addressing the problem of diagnostic error.

  17. Analyzing the installation angle error of a SAW torque sensor

    International Nuclear Information System (INIS)

    Fan, Yanping; Ji, Xiaojun; Cai, Ping

    2014-01-01

    When a torque is applied to a shaft, the normal strain oriented at ±45° to the shaft axis is at its maximum, which requires two one-port SAW resonators to be bonded to the shaft at ±45° to the shaft axis. In order to make the SAW torque sensitivity high enough, the installation angle error of the two SAW resonators must be confined within ±5° according to our design requirement. However, few studies have so far addressed the installation angle analysis of SAW torque sensors, and the angle error has usually been obtained by a manual method. Hence, we propose an approximation method to analyze the angle error. First, according to the sensitive mechanism of the SAW device to torque, the SAW torque sensitivity is deduced based on the linear piezoelectric constitutive equation and perturbation theory. Then, when a torque is applied to the tested shaft, the stress condition of two SAW resonators mounted at an angle deviating from ±45° to the shaft axis is analyzed. The angle error is obtained by means of the torque sensitivities of the two orthogonal SAW resonators. Finally, the torque measurement system is constructed and the loading and unloading experiments are performed twice. The torque sensitivities of the two SAW resonators are obtained by applying averaging and the least squares method to the experimental results. Based on the derived angle error estimation function, the angle error is estimated to be about 3.447°, which is close to the actual angle error of 2.915°. The difference between the estimated angle and the actual angle is discussed. The validity of the proposed angle error analysis method is confirmed by the experimental results. (technical design note)

  18. Grammatical Errors Produced by English Majors: The Translation Task

    Science.gov (United States)

    Mohaghegh, Hamid; Zarandi, Fatemeh Mahmoudi; Shariati, Mohammad

    2011-01-01

    This study investigated the frequency of the grammatical errors related to the four categories of preposition, relative pronoun, article, and tense using the translation task. In addition, the frequencies of these grammatical errors in different categories and in each category were examined. The quantitative component of the study further looked…

  19. Correcting a fundamental error in greenhouse gas accounting related to bioenergy

    International Nuclear Information System (INIS)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K.; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; Hove, Sybille van den

    2012-01-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of ‘additional biomass’ – biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy – can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  20. Correcting a fundamental error in greenhouse gas accounting related to bioenergy.

    Science.gov (United States)

    Haberl, Helmut; Sprinz, Detlef; Bonazountas, Marc; Cocco, Pierluigi; Desaubies, Yves; Henze, Mogens; Hertel, Ole; Johnson, Richard K; Kastrup, Ulrike; Laconte, Pierre; Lange, Eckart; Novak, Peter; Paavola, Jouni; Reenberg, Anette; van den Hove, Sybille; Vermeire, Theo; Wadhams, Peter; Searchinger, Timothy

    2012-06-01

    Many international policies encourage a switch from fossil fuels to bioenergy based on the premise that its use would not result in carbon accumulation in the atmosphere. Frequently cited bioenergy goals would at least double the present global human use of plant material, the production of which already requires the dedication of roughly 75% of vegetated lands and more than 70% of water withdrawals. However, burning biomass for energy provision increases the amount of carbon in the air just like burning coal, oil or gas if harvesting the biomass decreases the amount of carbon stored in plants and soils, or reduces carbon sequestration. Neglecting this fact results in an accounting error that could be corrected by considering that only the use of 'additional biomass' - biomass from additional plant growth or biomass that would decompose rapidly if not used for bioenergy - can reduce carbon emissions. Failure to correct this accounting flaw will likely have substantial adverse consequences. The article presents recommendations for correcting greenhouse gas accounts related to bioenergy.

  1. Non-intercepted dose errors in prescribing anti-neoplastic treatment

    DEFF Research Database (Denmark)

    Mattsson, T O; Holm, B; Michelsen, H

    2015-01-01

    BACKGROUND: The incidence of non-intercepted prescription errors and the risk factors involved, including the impact of computerised order entry (CPOE) systems on such errors, are unknown. Our objective was to determine the incidence, type, severity, and related risk factors of non-intercepted pr....... Strategies to prevent future prescription errors could usefully focus on integrated computerised systems that can aid dose calculations and reduce transcription errors between databases....

  2. Trends in mean maximum temperature, mean minimum temperature and mean relative humidity for Lautoka, Fiji during 2003 – 2013

    Directory of Open Access Journals (Sweden)

    Syed S. Ghani

    2017-12-01

    Full Text Available The current work examines the trends in Lautoka's temperature and relative humidity during the period 2003 – 2013, analyzed using recently updated data obtained from the Fiji Meteorological Services (FMS). Four elements are investigated: mean maximum temperature, mean minimum temperature, the diurnal temperature range (DTR), and mean relative humidity. From 2003 to 2013, the annual mean temperature increased by between 0.02 and 0.08 °C. The warming is greater in the minimum temperature than in the maximum temperature, resulting in a decrease of the diurnal temperature range. The statistically significant increase was mostly seen during the summer months of December and January. Mean relative humidity has also increased by 3% to 8%. The bases of abnormal climate conditions were also studied; these were defined through temperature or humidity anomalies in their respective time series. They corroborated the observed findings and showed that the climate in Lautoka became gradually damper and hotter over this period. Although we are only at an initial phase in identifying the probable temperature trends, ecological responses to recent climate change are already clearly noticeable. It is therefore proposed that climate change may be easier to identify in a small island nation like Fiji.

  3. Sensitivity of dose-finding studies to observation errors.

    Science.gov (United States)

    Zohar, Sarah; O'Quigley, John

    2009-11-01

    The purpose of Phase I designs is to estimate the MTD (maximum tolerated dose, in practice a dose with some given acceptable rate of toxicity) while, at the same time, minimizing the number of patients treated at doses too far removed from the MTD. Our purpose here is to investigate the sensitivity of conclusions from dose-finding designs to recording or observation errors. Certain toxicities may go undetected and, conversely, certain non-toxicities may be incorrectly recorded as dose-limiting toxicities. Recording inaccuracies would be expected to have an influence on final and within-trial recommendations, and in this paper we study this question in greater depth. We focus in particular on three designs in current use: the standard '3+3' design, the grouped up-and-down design [M. Gezmu, N. Flournoy, Group up-and-down designs for dose finding. Journal of Statistical Planning and Inference 2006; 136 (6): 1749-1764] and the continual reassessment method (CRM) [J. O'Quigley, M. Pepe, L. Fisher, Continual reassessment method: a practical design for phase 1 clinical trials in cancer. Biometrics 1990; 46 (1): 33-48]. A non-toxicity incorrectly recorded as a toxicity (error of the first kind) has a greater influence in general than the converse (error of the second kind). These results are illustrated via figures which suggest that the standard '3+3' design in particular is sensitive to errors of the second kind. Such errors can have a very important impact on drug development in that, if carried through to the Phase 2 and Phase 3 studies, we can significantly increase the probability of failure to detect efficacy as a result of having delivered an inadequate dose.
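
    The sensitivity of the '3+3' design to both kinds of recording error can be explored with a small simulation. The sketch below implements a simplified reading of the classical '3+3' rules, with e1 the probability that a non-toxicity is recorded as a toxicity and e2 the probability that a true toxicity is missed (the dose-toxicity curve is illustrative, not from the paper):

      import numpy as np

      rng = np.random.default_rng(1)

      def three_plus_three(p_tox, e1=0.0, e2=0.0):
          # One simulated '3+3' trial. p_tox[i] is the true DLT probability at
          # dose i. Returns the recommended dose index, or -1 if even the
          # lowest dose is judged too toxic.
          def observed_dlts(n, p):
              tox = rng.random(n) < p                          # true DLTs
              recorded = np.where(tox, rng.random(n) >= e2,    # missed tox
                                  rng.random(n) < e1)          # spurious tox
              return int(np.sum(recorded))
          d = 0
          while d < len(p_tox):
              dlt = observed_dlts(3, p_tox[d])
              if dlt == 0:
                  d += 1                                       # 0/3: escalate
              elif dlt == 1 and dlt + observed_dlts(3, p_tox[d]) <= 1:
                  d += 1                                       # 1/6: escalate
              else:
                  return d - 1                                 # too toxic: stop
          return len(p_tox) - 1                                # past top dose

      p_tox = [0.05, 0.10, 0.20, 0.35, 0.50]                   # illustrative curve
      for e1, e2 in [(0.0, 0.0), (0.05, 0.0), (0.0, 0.05)]:
          picks = np.array([three_plus_three(p_tox, e1, e2) for _ in range(5000)])
          print(e1, e2, np.bincount(picks + 1, minlength=len(p_tox) + 1) / 5000)

    Comparing the recommendation distributions across the three error settings shows how misrecorded toxicities shift the recommended dose downward and missed toxicities shift it upward.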

  4. System for simultaneously measuring 6DOF geometric motion errors using a polarization maintaining fiber-coupled dual-frequency laser.

    Science.gov (United States)

    Cui, Cunxing; Feng, Qibo; Zhang, Bin; Zhao, Yuqiong

    2016-03-21

    A novel method for simultaneously measuring six degree-of-freedom (6DOF) geometric motion errors is proposed in this paper, and the corresponding measurement instrument is developed. Simultaneous measurement of 6DOF geometric motion errors using a polarization-maintaining fiber-coupled dual-frequency laser is accomplished for the first time to the best of the authors' knowledge. Dual-frequency laser beams that are orthogonally linearly polarized were adopted as the measuring datum. Positioning error measurement was achieved by heterodyne interferometry, and the other 5DOF geometric motion errors were obtained by fiber collimation measurement. A series of experiments was performed to verify the effectiveness of the developed instrument. The experimental results showed that the stability and accuracy of the positioning error measurement are 31.1 nm and 0.5 μm, respectively. For the straightness error measurements, the stability and resolution are 60 and 40 nm, respectively, and the maximum deviation of repeatability is ±0.15 μm in the x direction and ±0.1 μm in the y direction. For pitch and yaw measurements, the stabilities are 0.03″ and 0.04″, the maximum deviations of repeatability are ±0.18″ and ±0.24″, and the accuracies are 0.4″ and 0.35″, respectively. The stability and resolution of the roll measurement are 0.29″ and 0.2″, respectively, and the accuracy is 0.6″.

  5. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
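
    A toy version of this comparison is easy to set up. In the sketch below an observable depends linearly on K systematic parameters and each MC run carries its own statistical noise; both estimators are debiased for the known statistical variance so that only their spread differs (the model and all numbers are illustrative constructions, not taken from the paper):

      import numpy as np

      rng = np.random.default_rng(2)
      K, sens = 5, np.ones(5)                  # K systematics, unit sensitivities
      true_syst = np.sqrt(np.sum(sens ** 2))   # exact combined systematic error

      def mc_run(params, stat_err):
          return sens @ params + rng.normal(0, stat_err)

      def unisim(stat_err):
          # One run per parameter, varied by 1 sigma; shifts added in quadrature,
          # with the known statistical variance of the shifts subtracted.
          base = mc_run(np.zeros(K), stat_err)
          shifts = [mc_run(np.eye(K)[k], stat_err) - base for k in range(K)]
          return np.sqrt(max(np.sum(np.square(shifts)) - 2 * K * stat_err**2, 0.0))

      def multisim(stat_err, m=100):
          # m runs with all parameters varied at once; take the spread,
          # subtracting the known statistical variance.
          res = [mc_run(rng.normal(0, 1, K), stat_err) for _ in range(m)]
          return np.sqrt(max(np.var(res) - stat_err**2, 0.0))

      for stat_err in (0.1, 3.0):              # MC statistical error small vs large
          u = [unisim(stat_err) for _ in range(200)]
          m = [multisim(stat_err) for _ in range(200)]
          print(f"stat={stat_err}: true={true_syst:.2f}  "
                f"unisim={np.mean(u):.2f}+/-{np.std(u):.2f}  "
                f"multisim={np.mean(m):.2f}+/-{np.std(m):.2f}")

    With small MC statistical error the unisim estimate is nearly exact, while with large statistical error the multisim estimate fluctuates far less, in line with the conclusion above.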

  6. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  7. An Improved Minimum Error Interpolator of CNC for General Curves Based on FPGA

    Directory of Open Access Journals (Sweden)

    Jiye HUANG

    2014-05-01

    Full Text Available This paper presents an improved minimum error interpolation algorithm for general curve generation in computer numerical control (CNC). Compared with conventional interpolation algorithms such as the By-Point Comparison method, the Minimum-Error method and the Digital Differential Analyzer (DDA) method, the proposed improved Minimum-Error interpolation algorithm can find a balance between accuracy and efficiency. The new algorithm is applicable to linear, circular, elliptical and parabolic curves. The proposed algorithm is realized on a field programmable gate array (FPGA) with the Verilog HDL language, simulated with the ModelSim software, and finally verified on a two-axis CNC lathe. The algorithm has the following advantages: firstly, the maximum interpolation error is only half of the minimum step-size; and secondly, the computing time is only two clock cycles of the FPGA. Simulations and actual tests have demonstrated the high accuracy and efficiency of the algorithm, which shows that it is highly suited for real-time applications.
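
    The core idea of a minimum-error interpolator can be shown in software (the FPGA implementation makes the same per-step decision in hardware). A minimal Python sketch for a circular arc, choosing at each step the candidate grid point that minimizes |F(x, y)| for the implicit curve F(x, y) = x² + y² − r²; the radius and traversal direction are illustrative:

      def interpolate_circle(r):
          # Step from (0, r) to (r, 0) one axis unit at a time, always choosing
          # the candidate that minimizes |x^2 + y^2 - r^2|, so the deviation
          # from the true curve never exceeds half a step.
          x, y, path = 0, r, [(0, r)]
          while y > 0:
              candidates = [(x + 1, y), (x, y - 1), (x + 1, y - 1)]
              x, y = min(candidates, key=lambda p: abs(p[0]**2 + p[1]**2 - r*r))
              path.append((x, y))
          return path

      for p in interpolate_circle(5):
          print(p)

    The same decision rule applies to any implicit curve F(x, y) = 0, which is why one interpolator can cover linear, circular, elliptical and parabolic segments.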

  8. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  9. The regulation of starch accumulation in Panicum maximum Jacq ...

    African Journals Online (AJOL)

    ... decrease the starch level. These observations are discussed in relation to the photosynthetic characteristics of P. maximum. Keywords: accumulation; botany; carbon assimilation; co2 fixation; growth conditions; mesophyll; metabolites; nitrogen; nitrogen levels; nitrogen supply; panicum maximum; plant physiology; starch; ...

  10. The Pupillary Orienting Response Predicts Adaptive Behavioral Adjustment after Errors.

    Directory of Open Access Journals (Sweden)

    Peter R Murphy

    Full Text Available Reaction time (RT) is commonly observed to slow down after an error. This post-error slowing (PES) has been thought to arise from the strategic adoption of a more cautious response mode following deployment of cognitive control. Recently, an alternative account has suggested that PES results from interference due to an error-evoked orienting response. We investigated whether error-related orienting may in fact be a precursor to adaptive post-error behavioral adjustment when the orienting response resolves before subsequent trial onset. We measured pupil dilation, a prototypical measure of autonomic orienting, during performance of a choice RT task with long inter-stimulus intervals, and found that the trial-by-trial magnitude of the error-evoked pupil response positively predicted both PES magnitude and the likelihood that the following response would be correct. These combined findings suggest that the magnitude of the error-related orienting response predicts an adaptive change of response strategy following errors, and thereby promote a reconciliation of the orienting and adaptive control accounts of PES.

  11. Maximum entropy approach to statistical inference for an ocean acoustic waveguide.

    Science.gov (United States)

    Knobles, D P; Sagers, J D; Koch, R A

    2012-02-01

    A conditional probability distribution suitable for estimating the statistical properties of ocean seabed parameter values inferred from acoustic measurements is derived from a maximum entropy principle. The specification of the expectation value for an error function constrains the maximization of an entropy functional. This constraint determines the sensitivity factor (β) to the error function of the resulting probability distribution, which is a canonical form that provides a conservative estimate of the uncertainty of the parameter values. From the conditional distribution, marginal distributions for individual parameters can be determined from integration over the other parameters. The approach is an alternative to obtaining the posterior probability distribution without an intermediary determination of the likelihood function followed by an application of Bayes' rule. In this paper the expectation value that specifies the constraint is determined from the values of the error function for the model solutions obtained from a sparse number of data samples. The method is applied to ocean acoustic measurements taken on the New Jersey continental shelf. The marginal probability distribution for the values of the sound speed ratio at the surface of the seabed and the source levels of a towed source are examined for different geoacoustic model representations. © 2012 Acoustical Society of America

  12. 2-Step Maximum Likelihood Channel Estimation for Multicode DS-CDMA with Frequency-Domain Equalization

    Science.gov (United States)

    Kojima, Yohei; Takeda, Kazuaki; Adachi, Fumiyuki

    Frequency-domain equalization (FDE) based on the minimum mean square error (MMSE) criterion can provide better downlink bit error rate (BER) performance for direct sequence code division multiple access (DS-CDMA) than conventional rake combining in a frequency-selective fading channel. FDE requires accurate channel estimation. In this paper, we propose a new 2-step maximum likelihood channel estimation (MLCE) for DS-CDMA with FDE in a very slow frequency-selective fading environment. The 1st step uses the conventional pilot-assisted MMSE-CE and the 2nd step carries out the MLCE using decision feedback from the 1st step. The BER performance improvement achieved by 2-step MLCE over pilot-assisted MMSE-CE is confirmed by computer simulation.
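
    The FDE stage itself is compact. Below is a single-user sketch of MMSE frequency-domain equalization; the multicode spreading and the 2-step channel estimator of the paper are omitted, the channel is assumed perfectly known, and all parameters are illustrative:

      import numpy as np

      rng = np.random.default_rng(3)
      N, L, snr_db = 256, 8, 10               # block size, channel taps, chip SNR
      snr = 10 ** (snr_db / 10)

      h = (rng.normal(size=L) + 1j * rng.normal(size=L)) / np.sqrt(2 * L)
      x = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N) / np.sqrt(2)  # QPSK

      # Circular convolution (a cyclic prefix is assumed) plus AWGN.
      H = np.fft.fft(h, N)
      noise = (rng.normal(size=N) + 1j * rng.normal(size=N)) * np.sqrt(1 / (2 * snr))
      r = np.fft.ifft(H * np.fft.fft(x)) + noise

      # MMSE-FDE weights per frequency bin: W(k) = H*(k) / (|H(k)|^2 + 1/SNR).
      W = np.conj(H) / (np.abs(H) ** 2 + 1 / snr)
      x_hat = np.fft.ifft(W * np.fft.fft(r))

      errs = ((np.sign(x_hat.real) != np.sign(x.real)) |
              (np.sign(x_hat.imag) != np.sign(x.imag)))
      print(f"chip error rate after MMSE-FDE: {np.mean(errs):.3f}")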

  13. Effect of asymmetrical transfer coefficients of a non-polarizing beam splitter on the nonlinear error of the polarization interferometer

    Science.gov (United States)

    Zhao, Chen-Guang; Tan, Jiu-Bin; Liu, Tao

    2010-09-01

    The mechanism by which a non-polarizing beam splitter (NPBS) with asymmetrical transfer coefficients causes rotation of the polarization direction is explained in principle, and the nonlinear measurement error caused by the NPBS is analyzed based on Jones matrix theory. Theoretical calculations show that the nonlinear error changes periodically, and that the error period and peak values increase with the deviation between the transmissivities of the p-polarization and s-polarization states. When the transmissivity of the p-polarization is 53% and that of the s-polarization is 48%, the maximum error reaches 2.7 nm. The imperfection of the NPBS is one of the main error sources in a simultaneous phase-shifting polarization interferometer, and its influence cannot be neglected in nanoscale ultra-precision measurement.

  14. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Science.gov (United States)

    Brakenhoff, Timo B; van Smeden, Maarten; Visseren, Frank L J; Groenwold, Rolf H H

    2018-01-01

    With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error from the analysis.
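
    The point is easy to demonstrate by simulation. In the sketch below (all coefficients are illustrative assumptions), classical error in the exposure attenuates the adjusted estimate, while the same error in the confounder leaves residual confounding that, in this setup, inflates it:

      import numpy as np

      rng = np.random.default_rng(4)
      n, true_effect = 100_000, 1.0

      conf = rng.normal(size=n)                   # confounder
      expo = 0.8 * conf + rng.normal(size=n)      # exposure, confounded
      outc = true_effect * expo + 2.0 * conf + rng.normal(size=n)

      def adjusted_effect(x, c, y):
          # Exposure coefficient from OLS of y on [1, x, c].
          X = np.column_stack([np.ones_like(x), x, c])
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      print("no error           :", round(adjusted_effect(expo, conf, outc), 3))
      print("error in exposure  :",
            round(adjusted_effect(expo + rng.normal(size=n), conf, outc), 3))
      print("error in confounder:",
            round(adjusted_effect(expo, conf + rng.normal(size=n), outc), 3))
      # Error in the exposure pulls the estimate below the true value of 1;
      # error in the confounder leaves residual confounding, here pushing it above 1.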

  15. Minimization of the effect of errors in approximate radiation view factors

    International Nuclear Information System (INIS)

    Clarksean, R.; Solbrig, C.

    1993-01-01

    The maximum temperature of irradiated fuel rods in storage containers was investigated taking credit only for radiation heat transfer. Estimating view factors is often easy, but in many references the emphasis is placed on calculating the quadruple integrals exactly. Selecting different view factors in the view factor matrix as independent yields somewhat different view factor matrices. In this study, ten to twenty percent errors in the view factors produced small errors in the temperature, well within the uncertainty due to the surface emissivity uncertainty. However, the enclosure and reciprocity principles must be strictly observed; otherwise large errors in the temperatures and wall heat flux were observed (up to a factor of 3). More than just being an aid for calculating the dependent view factors, satisfying these principles, particularly reciprocity, is more important than the calculation accuracy of the view factors. Comparison to experiment showed that the result of the radiation calculation was definitely conservative, as desired, in spite of the approximations to the view factors.
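
    One simple way to impose both principles on an approximate view factor matrix is to alternate a reciprocity symmetrization with an enclosure row-normalization until both hold. The sketch below illustrates this scheme; it is an illustration under assumed areas and view factors, not the procedure used in the study:

      import numpy as np

      def enforce_view_factors(F, A, iters=200):
          # Alternately impose reciprocity (A_i F_ij = A_j F_ji) and
          # enclosure (rows of F sum to 1) on an approximate matrix F.
          F = F.astype(float).copy()
          for _ in range(iters):
              G = F * A[:, None]                  # G_ij = A_i F_ij
              G = 0.5 * (G + G.T)                 # symmetrize: reciprocity
              F = G / A[:, None]
              F /= F.sum(axis=1, keepdims=True)   # renormalize: enclosure
          return F

      A = np.array([1.0, 2.0, 3.0])               # surface areas
      F = np.array([[0.00, 0.45, 0.65],           # rough view factors, ~10-20% error
                    [0.20, 0.10, 0.60],
                    [0.20, 0.45, 0.30]])
      Fc = enforce_view_factors(F, A)
      G = A[:, None] * Fc
      print("row sums:", Fc.sum(axis=1))                          # enclosure
      print("max reciprocity violation:", np.abs(G - G.T).max())  # reciprocity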

  16. Advancing the research agenda for diagnostic error reduction

    NARCIS (Netherlands)

    Zwaan, L.; Schiff, G.D.; Singh, H.

    2013-01-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research.

  17. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Dansereau Richard M

    2007-01-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  18. A Maximum Likelihood Estimation of Vocal-Tract-Related Filter Characteristics for Single Channel Speech Separation

    Directory of Open Access Journals (Sweden)

    Mohammad H. Radfar

    2006-11-01

    Full Text Available We present a new technique for separating two speech signals from a single recording. The proposed method bridges the gap between underdetermined blind source separation techniques and those techniques that model the human auditory system, that is, computational auditory scene analysis (CASA). For this purpose, we decompose the speech signal into the excitation signal and the vocal-tract-related filter and then estimate the components from the mixed speech using a hybrid model. We first express the probability density function (PDF) of the mixed speech's log spectral vectors in terms of the PDFs of the underlying speech signal's vocal-tract-related filters. Then, the mean vectors of the PDFs of the vocal-tract-related filters are obtained using a maximum likelihood estimator given the mixed signal. Finally, the estimated vocal-tract-related filters along with the extracted fundamental frequencies are used to reconstruct estimates of the individual speech signals. The proposed technique effectively adds vocal-tract-related filter characteristics as a new cue to CASA models using a new grouping technique based on an underdetermined blind source separation. We compare our model with both an underdetermined blind source separation and a CASA method. The experimental results show that our model outperforms both techniques in terms of SNR improvement and the percentage of crosstalk suppression.

  19. Forecast Combination under Heavy-Tailed Errors

    Directory of Open Access Journals (Sweden)

    Gang Cheng

    2015-11-01

    Full Text Available Forecast combination has been proven to be a very important technique for obtaining accurate predictions in various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behavior for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as simple averaging, least squares regression or those based on the variance-covariance of the forecasts, may perform very poorly because outliers tend to occur, giving these methods unstable weights and leading to non-robust forecasts. To address this problem, in this paper, we propose two nonparametric forecast combination methods. One is specially proposed for situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student’s t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behavior of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds for both methods are developed. They show that the resulting combined forecasts yield near-optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than previous combination methods in the presence of forecast outliers.

  20. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for calculating the errors of dry matter values which occur during the approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are also given which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been done. The usefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
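
    As an illustration of the error calculus involved, consider the relative growth rate RGR = (ln W2 − ln W1)/(t2 − t1). Treating the dry matter measurement errors dW1, dW2 as absolute bounds, the error propagates as dRGR = (dW1/W1 + dW2/W2)/(t2 − t1); the sketch below applies this with illustrative values (the formulae for GR, ULR and LAR follow the same pattern):

      import numpy as np

      def rgr_with_error(W1, dW1, W2, dW2, t1, t2):
          # Relative growth rate and its propagated absolute error bound.
          dt = t2 - t1
          rgr = (np.log(W2) - np.log(W1)) / dt
          drgr = (dW1 / W1 + dW2 / W2) / dt
          return rgr, drgr

      rgr, drgr = rgr_with_error(W1=1.2, dW1=0.05, W2=3.4, dW2=0.10, t1=10, t2=24)
      print(f"RGR = {rgr:.4f} +/- {drgr:.4f} g g^-1 day^-1")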

  1. Artificial Intelligence as a Business Forecasting and Error Handling Tool

    OpenAIRE

    Md. Tabrez Quasim; Rupak Chattopadhyay

    2015-01-01

    Any business enterprise must rely heavily on how well it can predict future happenings. To cope with modern global customer demand, technological challenges, market competition etc., any organization is compelled to foresee the future with maximum impact and the least chance of error. The traditional forecasting approaches have some limitations. That is why the business world is adopting modern Artificial Intelligence based forecasting techniques. This paper has tried to presen...

  2. Medication errors of nurses and factors in refusal to report medication errors among nurses in a teaching medical center of iran in 2012.

    Science.gov (United States)

    Mostafaei, Davoud; Barati Marnani, Ahmad; Mosavi Esfahani, Haleh; Estebsari, Fatemeh; Shahzaidi, Shiva; Jamshidi, Ensiyeh; Aghamiri, Seyed Samad

    2014-10-01

    About one third of reported unwanted medication consequences are due to medication errors, resulting in one-fifth of hospital injuries. The aim of this study was to determine the formal and informal medication errors of nurses and the importance of factors behind nurses' refusal to report medication errors. This cross-sectional study was conducted among the nursing staff of Shohada Tajrish Hospital, Tehran, Iran in 2012. The data were gathered through a questionnaire made by the researchers. The questionnaire's face and content validity were confirmed by experts, and test-retest was used to measure its reliability. The data were analyzed by descriptive statistics. We used SPSS for the related statistical analyses. The most important factors in refusal to report medication errors were, respectively: lack of a medication error recording and reporting system in the hospital (3.3%), non-significant error reporting to hospital authorities and lack of appropriate feedback (3.1%), and lack of a clear definition for a medication error (3%). There was both formal and informal reporting of medication errors in this study. Factors pertaining to management in hospitals as well as the fear of the consequences of reporting are two broad categories of factors that keep nurses from reporting their medication errors. In this regard, providing enough education to nurses, boosting job security for nurses, management support and revising the related processes and definitions are some factors that can help decrease medication errors and increase their reporting when they occur.

  3. Reverse Transcription Errors and RNA-DNA Differences at Short Tandem Repeats.

    Science.gov (United States)

    Fungtammasan, Arkarachai; Tomaszkiewicz, Marta; Campos-Sánchez, Rebeca; Eckert, Kristin A; DeGiorgio, Michael; Makova, Kateryna D

    2016-10-01

    Transcript variation has important implications for organismal function in health and disease. Most transcriptome studies focus on assessing variation in gene expression levels and isoform representation. Variation at the level of transcript sequence is caused by RNA editing and transcription errors, and leads to nongenetically encoded transcript variants, or RNA-DNA differences (RDDs). Such variation has been understudied, in part because its detection is obscured by reverse transcription (RT) and sequencing errors. It has only been evaluated for intertranscript base substitution differences. Here, we investigated transcript sequence variation for short tandem repeats (STRs). We developed the first maximum-likelihood estimator (MLE) to infer RT error and RDD rates, taking next generation sequencing error rates into account. Using the MLE, we empirically evaluated RT error and RDD rates for STRs in a large-scale DNA and RNA replicated sequencing experiment conducted in a primate species. The RT error rates increased exponentially with STR length and were biased toward expansions. The RDD rates were approximately 1 order of magnitude lower than the RT error rates. The RT error rates estimated with the MLE from a primate data set were concordant with those estimated with an independent method, barcoded RNA sequencing, from a Caenorhabditis elegans data set. Our results have important implications for medical genomics, as STR allelic variation is associated with >40 diseases. STR nonallelic transcript variation can also contribute to disease phenotype. The MLE and empirical rates presented here can be used to evaluate the probability of disease-associated transcripts arising due to RDD. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  4. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper draws the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  5. Maximum gravitational redshift of white dwarfs

    International Nuclear Information System (INIS)

    Shapiro, S.L.; Teukolsky, S.A.

    1976-01-01

    The stability of uniformly rotating, cold white dwarfs is examined in the framework of the Parametrized Post-Newtonian (PPN) formalism of Will and Nordtvedt. The maximum central density and gravitational redshift of a white dwarf are determined as functions of five of the nine PPN parameters (γ, β, ζ₂, ζ₃, and ζ₄), the total angular momentum J, and the composition of the star. General relativity predicts that the maximum redshift is 571 km s⁻¹ for nonrotating carbon and helium dwarfs, but is lower for stars composed of heavier nuclei. Uniform rotation can increase the maximum redshift to 647 km s⁻¹ for carbon stars (the neutronization limit) and to 893 km s⁻¹ for helium stars (the uniform rotation limit). The redshift distribution of a larger sample of white dwarfs may help determine the composition of their cores.

  6. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
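
    A minimal sketch of the Newton-Raphson positioning step (satellite coordinates, ranges and the clock-bias term are illustrative assumptions; the statistical testing part of the analysis is not reproduced):

      import numpy as np

      def newton_gps(sats, rho, guess=None, iters=10):
          # Newton-Raphson for receiver position (x, y, z) and clock bias b
          # from four pseudoranges: f_i = |x - s_i| + b - rho_i = 0.
          est = np.zeros(4) if guess is None else np.asarray(guess, float)
          for _ in range(iters):
              diff = est[:3] - sats
              d = np.linalg.norm(diff, axis=1)
              resid = d + est[3] - rho
              J = np.hstack([diff / d[:, None], np.ones((len(sats), 1))])
              est = est - np.linalg.solve(J, resid)   # 4 equations, 4 unknowns
          return est

      sats = np.array([[15e6, 0, 21e6], [-15e6, 5e6, 21e6],
                       [0, 15e6, 24e6], [20e6, -20e6, 10e6]], dtype=float)
      truth = np.array([1.2e6, -0.8e6, 0.4e6, 300.0])  # position (m), bias (m)
      rho = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]
      print(newton_gps(sats, rho))                     # recovers the truth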

  7. Error of the slanted edge method for measuring the modulation transfer function of imaging systems.

    Science.gov (United States)

    Xie, Xufen; Fan, Hongda; Wang, Hongyuan; Wang, Zebin; Zou, Nianyu

    2018-03-01

    The slanted edge method is a basic approach for measuring the modulation transfer function (MTF) of imaging systems; however, its measurement accuracy is limited in practice. Theoretical analysis of the slanted edge MTF measurement method performed in this paper reveals that inappropriate edge angles and random noise reduce this accuracy. The error caused by edge angles is analyzed using sampling and reconstruction theory. Furthermore, an error model combining noise and edge angles is proposed. We verify the analyses and the model with respect to (i) the edge angle, (ii) a statistical analysis of the measurement error, (iii) the full width at half-maximum of a point spread function, and (iv) the error model. The experimental results verify the theoretical findings. This research can serve as a reference for applications of the slanted edge MTF measurement method.
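
    For reference, the basic slanted edge pipeline that this analysis concerns can be sketched as follows; a synthetic 5° edge with Gaussian blur and mild noise stands in for a real imaging system, and the bin width, window and strip size are illustrative choices:

      import numpy as np
      from scipy.special import erf

      rng = np.random.default_rng(5)

      # Synthetic slanted edge: 5 degree tilt, Gaussian blur, mild noise.
      H, W, angle, sigma, oversample = 200, 200, np.deg2rad(5.0), 1.2, 4
      y, x = np.mgrid[0:H, 0:W].astype(float)
      dist = (x - W / 2) * np.cos(angle) - (y - H / 2) * np.sin(angle)
      img = 0.5 * (1 + erf(dist / (np.sqrt(2) * sigma))) + rng.normal(0, 0.002, (H, W))

      # Oversampled edge spread function (ESF): bin pixels by edge distance.
      mask = np.abs(dist) < 50                       # keep a well-populated strip
      b = np.round(dist[mask] * oversample).astype(int)
      b -= b.min()
      esf = np.bincount(b, weights=img[mask]) / np.bincount(b)

      lsf = np.gradient(esf) * np.hanning(len(esf))  # differentiate and window
      mtf = np.abs(np.fft.rfft(lsf))
      mtf /= mtf[0]
      freqs = np.fft.rfftfreq(len(lsf), d=1 / oversample)   # cycles per pixel
      print("MTF50 ~", round(float(freqs[np.argmax(mtf < 0.5)]), 3), "cycles/pixel")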

  8. Data on simulated interpersonal touch, individual differences and the error-related negativity

    Directory of Open Access Journals (Sweden)

    Mandy Tjew-A-Sin

    2016-06-01

    Full Text Available The dataset includes data from the electroencephalogram study reported in our paper: ‘Effects of simulated interpersonal touch and trait intrinsic motivation on the error-related negativity’ (doi:10.1016/j.neulet.2016.01.044) (Tjew-A-Sin et al., 2016) [1]. The data was collected at the psychology laboratories at the Vrije Universiteit Amsterdam in 2012 among a Dutch-speaking student sample. The dataset consists of the measures described in the paper, as well as additional (exploratory) measures including the Five-Factor Personality Inventory, the Connectedness to Nature Scale, the Rosenberg Self-esteem Scale and a scale measuring life stress. The data can be used for replication purposes, meta-analyses, and exploratory analyses, as well as cross-cultural comparisons of touch and/or ERN effects. The authors also welcome collaborative research based on re-analyses of the data. The data described is available at a data repository called the DANS archive: http://persistent-identifier.nl/?identifier=urn:nbn:nl:ui:13-tzbk-gg.

  9. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

    Two of the errors common in inverse treatment planning optimization have been investigated. The first is the optimizer convergence error, which appears because of imperfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors, as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimizations the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., those due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.
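
    Both error types can be reproduced with a toy optimizer. The sketch below is illustrative only and unrelated to the planning systems studied: it shows the convergence error shrinking as the stopping tolerance tightens, and a local-minimum error appearing when a non-convex objective is started in the wrong basin.

```python
import numpy as np

def grad_descent(grad, x0, tol, lr=0.05, max_iter=200_000):
    x = float(x0)
    for _ in range(max_iter):
        g = grad(x)
        if abs(g) < tol:            # non-zero stopping criterion
            break
        x -= lr * g
    return x

# Convergence error: convex objective f(x) = x^2 with minimum f* = 0 at x = 0
f = lambda x: x * x
for tol in (1e-1, 1e-3, 1e-6):
    x = grad_descent(lambda x: 2 * x, 5.0, tol)
    print(f"tol={tol:.0e}: convergence error f(x) - f* = {f(x):.2e}")

# Local-minimum error: f(x) = x^4 - 2x^2 + 0.3x has two basins of attraction
f2 = lambda x: x**4 - 2 * x**2 + 0.3 * x
g2 = lambda x: 4 * x**3 - 4 * x + 0.3
for x0 in (+2.0, -2.0):
    x = grad_descent(g2, x0, tol=1e-8)
    print(f"start {x0:+.0f}: minimum at x = {x:+.3f}, f = {f2(x):+.4f}")
```

    Starting from +2 the descent settles in the shallower local minimum, while starting from -2 it finds the global one: the difference between the two objective values is exactly the local minima error in this toy setting.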

  10. The maximum entropy method of moments and Bayesian probability theory

    Science.gov (United States)

    Bretthorst, G. Larry

    2013-08-01

    The problem of density estimation occurs in many disciplines. For example, in MRI it is often necessary to classify the types of tissues in an image. To perform this classification one must first identify the characteristics of the tissues to be classified. These characteristics might be the intensity of a T1-weighted image; in MRI many other types of characteristic weightings (classifiers) may be generated. In a given tissue type there is no single intensity that characterizes the tissue; rather, there is a distribution of intensities. Often these distributions can be characterized by a Gaussian, but just as often they are much more complicated. Either way, estimating the distribution of intensities is an inference problem. In the case of a Gaussian distribution, one must estimate the mean and standard deviation. However, in the non-Gaussian case the shape of the density function itself must be inferred. Three common techniques for estimating density functions are binned histograms [1, 2], kernel density estimation [3, 4], and the maximum entropy method of moments [5, 6]. In the introduction, the maximum entropy method of moments will be reviewed. Some of its problems and conditions under which it fails will be discussed. Then in later sections, the functional form of the maximum entropy method of moments probability distribution will be incorporated into Bayesian probability theory. It will be shown that Bayesian probability theory solves all of the problems with the maximum entropy method of moments. One gets posterior probabilities for the Lagrange multipliers, and, finally, one can put error bars on the resulting estimated density function.
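
    The moment-constrained form is concrete enough for a short sketch: the maximum entropy density is p(x) ∝ exp(−λ₁x − λ₂x² − …), and the Lagrange multipliers can be found by minimizing the convex dual, log Z(λ) + λ·μ. This generic grid-based illustration is not Bretthorst's Bayesian treatment; the grid range and target moments are arbitrary.

```python
import numpy as np
from scipy.optimize import minimize

x = np.linspace(-6.0, 6.0, 2001)
mu = np.array([0.0, 1.0])          # target moments: E[x] = 0, E[x^2] = 1

def dual(lam):
    # Convex dual: log Z(lam) + lam . mu, with p(x) proportional to
    # exp(-lam[0]*x - lam[1]*x**2); its minimizer matches the target moments.
    logq = -(lam[0] * x + lam[1] * x**2)
    return np.log(np.trapz(np.exp(logq), x)) + lam @ mu

res = minimize(dual, x0=np.array([0.0, 0.1]), method="Nelder-Mead")
lam = res.x
p = np.exp(-(lam[0] * x + lam[1] * x**2))
p /= np.trapz(p, x)
print("lambda:", lam)              # ~ (0, 0.5), i.e. the standard normal
print("achieved moments:", np.trapz(p * x, x), np.trapz(p * x**2, x))
```

    With only the first two moments constrained, the recovered density is the standard normal, the textbook maximum entropy result.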

  11. Effect of a health system's medical error disclosure program on gastroenterology-related claims rates and costs.

    Science.gov (United States)

    Adams, Megan A; Elmunzer, B Joseph; Scheiman, James M

    2014-04-01

    In 2001, the University of Michigan Health System (UMHS) implemented a novel medical error disclosure program. This study analyzes the effect of this program on gastroenterology (GI)-related claims and costs. This was a review of claims in the UMHS Risk Management Database (1990-2010), naming a gastroenterologist. Claims were classified according to pre-determined categories. Claims data, including incident date, date of resolution, and total liability dollars, were reviewed. Mean total liability incurred per claim in the pre- and post-implementation eras was compared. Patient encounter data from the Division of Gastroenterology was also reviewed in order to benchmark claims data with changes in clinical volume. There were 238,911 GI encounters in the pre-implementation era and 411,944 in the post-implementation era. A total of 66 encounters resulted in claims: 38 in the pre-implementation era and 28 in the post-implementation era. Of the total number of claims, 15.2% alleged delay in diagnosis/misdiagnosis, 42.4% related to a procedure, and 42.4% involved improper management, treatment, or monitoring. The reduction in the proportion of encounters resulting in claims was statistically significant (P=0.001), as was the reduction in time to claim resolution (1,000 vs. 460 days) (P<0.0001). There was also a reduction in the mean total liability per claim ($167,309 pre vs. $81,107 post, 95% confidence interval: 33682.5-300936.2 pre vs. 1687.8-160526.7 post). Implementation of a novel medical error disclosure program, promoting transparency and quality improvement, not only decreased the number of GI-related claims per patient encounter, but also dramatically shortened the time to claim resolution.

  12. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  13. Estimates of error introduced when one-dimensional inverse heat transfer techniques are applied to multi-dimensional problems

    International Nuclear Information System (INIS)

    Lopez, C.; Koski, J.A.; Razani, A.

    2000-01-01

    A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions to a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that were then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in the error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively.

  14. Evaluation of localization errors for craniospinal axis irradiation delivery using volume modulated arc therapy and proposal of a technique to minimize such errors

    International Nuclear Information System (INIS)

    Myers, Pamela; Stathakis, Sotirios; Mavroidis, Panayiotis; Esquivel, Carlos; Papanikolaou, Niko

    2013-01-01

    Purpose: To dosimetrically evaluate the effects of improper patient positioning in the junction area of a VMAT cranio-spinal axis irradiation technique consisting of one superior and one inferior arc, and to propose a solution to minimize these patient setup errors. Methods: Five (n = 5) cranio-spinal axis irradiation patients were planned with 2 arcs: one superior and one inferior. In order to mimic patient setup errors, the plans were recalculated with the inferior isocenter shifted by 1, 2, 5, and 10 mm superiorly, and 1, 2, 5, and 10 mm inferiorly. The plans were then compared with the corresponding original, non-shifted arc plans on the grounds of target metrics such as conformity number and homogeneity index, as well as several normal tissue dose descriptors. “Gradient-optimized” plans were then created for each patient in an effort to reduce dose discrepancies due to setup errors. Results: Percent differences were calculated in order to compare each of the eight shifted plans with the original non-shifted arc plan, which corresponds to the ideal patient setup. The conformity number was on average lower by 0.9%, 2.7%, 5.8%, and 9.1% for the 1, 2, 5, and 10 mm inferiorly-shifted plans and 0.4%, 0.8%, 2.8%, and 6.0% for the respective superiorly-shifted plans. The homogeneity indices, averaged among the five patients, indicated less homogeneous dose distributions by 0.03%, 0.3%, 1.0%, and 2.8% for the inferior shifts and 0.2%, 1.2%, 6.3%, and 15.3% for the superior shifts. Overall, the mean doses to the organs at risk deviated by less than 2% for the 1, 2, and 5 mm shifted plans. The 10 mm shifted plans, however, showed average percent differences, over all studied organs, from the original plan of up to 5.6%. Using “gradient-optimized” plans, the average dose differences were reduced by 0.2%, 0.5%, 1.2%, and 2.1% for 1, 2, 5, and 10 mm shifts, respectively compared to the originally optimized plans, and the maximum dose differences were

  15. Everyday memory errors in older adults.

    Science.gov (United States)

    Ossher, Lynn; Flegal, Kristin E; Lustig, Cindy

    2013-01-01

    Despite concern about cognitive decline in old age, few studies document the types and frequency of memory errors older adults make in everyday life. In the present study, 105 healthy older adults completed the Everyday Memory Questionnaire (EMQ; Sunderland, Harris, & Baddeley, 1983, Journal of Verbal Learning and Verbal Behavior, 22, 341), indicating what memory errors they had experienced in the last 24 hours, the Memory Self-Efficacy Questionnaire (MSEQ; West, Thorn, & Bagwell, 2003, Psychology and Aging, 18, 111), and other neuropsychological and cognitive tasks. EMQ and MSEQ scores were unrelated and made separate contributions to variance on the Mini Mental State Exam (MMSE; Folstein, Folstein, & McHugh, 1975, Journal of Psychiatric Research, 12, 189), suggesting separate constructs. Tip-of-the-tongue errors were the most commonly reported, and the EMQ Faces/Places and New Things subscales were most strongly related to MMSE. These findings may help training programs target memory errors commonly experienced by older adults, and suggest which types of memory errors could indicate cognitive declines of clinical concern.

  16. Estimation of heading gyrocompass error using a GPS 3DF system: Impact on ADCP measurements

    Directory of Open Access Journals (Sweden)

    Simón Ruiz

    2002-12-01

    Full Text Available Traditionally the horizontal orientation of a ship (heading) has been obtained from a gyrocompass. This instrument is still used on research vessels but has an estimated error of about 2-3 degrees, inducing a systematic error in the cross-track velocity measured by an Acoustic Doppler Current Profiler (ADCP). The three-dimensional positioning system (GPS 3DF) provides an independent heading measurement with accuracy better than 0.1 degree. The Spanish research vessel BIO Hespérides has been operating with this new system since 1996. For the first time on this vessel, the data from this new instrument are used to estimate the gyrocompass error. The methodology we use follows the scheme developed by Griffiths (1994), which compares data from the gyrocompass and the GPS system in order to obtain an interpolated error function. In the present work we apply this methodology to mesoscale surveys performed during the observational phase of the OMEGA project, in the Alboran Sea. The heading-dependent gyrocompass error dominated. Errors in gyrocompass heading of 1.4-3.4 degrees have been found, which give a maximum error in the measured cross-track ADCP velocity of 24 cm s⁻¹.
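
    The size of the effect is easy to reproduce: a heading error Δθ rotates the measured velocity, so a ship moving at speed U acquires a spurious cross-track component of about U·sin(Δθ). The sketch below assumes a 4 m/s ship speed (the 1.4-3.4 degree errors are from the abstract) and also illustrates the kind of interpolated error function used for correction, with invented sampling times.

```python
import numpy as np

# Spurious cross-track velocity from a heading error, at an assumed 4 m/s ship speed
U = 4.0                                   # m/s
for err_deg in (1.4, 2.0, 3.4):
    v_ct = U * np.sin(np.deg2rad(err_deg))
    print(f"heading error {err_deg:.1f} deg -> cross-track error {100*v_ct:.0f} cm/s")

# Interpolated gyrocompass-error function from sparse gyro-minus-GPS differences
t_gps  = np.array([0.0, 600.0, 1200.0, 1800.0])   # s, made-up GPS 3DF sample times
d_head = np.array([1.6, 2.1, 3.2, 2.4])           # deg, gyro minus GPS heading
t_adcp = np.arange(0.0, 1800.0, 120.0)            # ADCP ensemble times
correction = np.interp(t_adcp, t_gps, d_head)     # heading error per ensemble
print(correction)
```

    With these numbers a 3.4 degree heading error yields 24 cm/s of cross-track velocity error, matching the maximum quoted in the record.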

  17. High energy hadron-induced errors in memory chips

    Energy Technology Data Exchange (ETDEWEB)

    Peterson, R.J. [University of Colorado, Boulder, CO (United States)]

    2001-09-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurement of alpha-particle yields from pions on aluminum, as a surrogate for silicon, indicates that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ_SEU = induced errors / (number of sample bits × particles/cm²). We compare σ_SEU values to find relations among results for these beams, and relations to reaction cross sections in order to systematize effects. We have modelled cosmic ray effects upon the components we have studied. (Author)

  18. High energy hadron-induced errors in memory chips

    International Nuclear Information System (INIS)

    Peterson, R.J.

    2001-01-01

    We have measured probabilities for proton, neutron and pion beams from accelerators to induce temporary or soft errors in a wide range of modern 16 Mb and 64 Mb dRAM memory chips, typical of those used in aircraft electronics. Relations among the cross sections for these particles are deduced, and failure rates for aircraft avionics due to cosmic rays are evaluated. Measurement of alpha-particle yields from pions on aluminum, as a surrogate for silicon, indicates that these reaction products are the proximate cause of the charge deposition resulting in errors. Heavy ions can cause damage to solar panels and other components in satellites above the atmosphere, by the heavy ionization trails they leave. However, at the earth's surface or at aircraft altitude it is known that cosmic rays, other than heavy ions, can cause soft errors in memory circuit components. Soft errors are those confusions between ones and zeroes that cause wrong contents to be stored in the memory, but without causing permanent damage to the circuit. As modern aircraft rely increasingly upon computerized and automated systems, these soft errors are important threats to safety. Protons, neutrons and pions resulting from high energy cosmic ray bombardment of the atmosphere pervade our environment. These particles do not induce damage directly by their ionization loss, but rather by reactions in the materials of the microcircuits. We have measured many cross sections for soft error upsets (SEU) in a broad range of commercial 16 Mb and 64 Mb dRAMs with accelerator beams. Here we define σ_SEU = induced errors / (number of sample bits × particles/cm²). We compare σ_SEU values to find relations among results for these beams, and relations to reaction cross sections in order to systematize effects. We have modelled cosmic ray effects upon the components we have studied. (Author)

  19. Error related negativity and multi-source interference task in children with attention deficit hyperactivity disorder-combined type

    Directory of Open Access Journals (Sweden)

    Rosana Huerta-Albarrán

    2015-03-01

    Full Text Available Objective To compare the performance of children with attention deficit hyperactivity disorder-combined (ADHD-C) type with control children in a multi-source interference task (MSIT) evaluated by means of error-related negativity (ERN). Method We studied 12 children with ADHD-C type with a median age of 7 years; control children were age- and gender-matched. Children performed the MSIT with simultaneous recording of ERN. Results We found no differences in MSIT parameters among groups. We found no differences in ERN variables between groups. We found a significant association of ERN amplitude with MSIT in children with ADHD-C type. Some correlations went in the positive direction (frequency of hits and MSIT amplitude) and others in the negative direction (frequency of errors and RT in MSIT). Conclusion Children with ADHD-C type exhibited a significant association between ERN amplitude and MSIT. These results underline the participation of a cingulo-fronto-parietal network and could help in the comprehension of the pathophysiological mechanisms of ADHD.

  20. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years 1979–2008 (long term). Because the short-term hindcasts were conducted with the identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after a few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations, since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  1. SU-E-P-21: Impact of MLC Position Errors On Simultaneous Integrated Boost Intensity-Modulated Radiotherapy for Nasopharyngeal Carcinoma

    Energy Technology Data Exchange (ETDEWEB)

    Chengqiang, L; Yin, Y; Chen, L [Shandong Cancer Hospital and Institute, 440 Jiyan Road, Jinan, 250117 (China)]

    2015-06-15

    Purpose: To investigate the impact of MLC position errors on simultaneous integrated boost intensity-modulated radiotherapy (SIB-IMRT) for patients with nasopharyngeal carcinoma. Methods: To compare the dosimetric differences between the simulated plans and the clinical plans, ten patients with locally advanced NPC treated with SIB-IMRT were enrolled in this study. All plans were calculated with an inverse planning system (Pinnacle3, Philips Medical Systems). Random errors (−2 mm to 2 mm), shift errors (2 mm, 1 mm, and 0.5 mm), and systematic extension/contraction errors (±2 mm, ±1 mm, and ±0.5 mm) of the MLC leaf positions were introduced respectively into the original plans to create the simulated plans. Dosimetric factors were compared between the original and the simulated plans. Results: The dosimetric impact of the random and systematic shift errors of MLC position was insignificant within 2 mm; the maximum changes in D95% of PGTV, PTV1, and PTV2 were −0.92±0.51%, 1.00±0.24%, and 0.62±0.17%, the maximum changes in D0.1cc of the spinal cord and brainstem were 1.90±2.80% and −1.78±1.42%, and the maximum changes in the mean dose of the parotids were 1.36±1.23% and −2.25±2.04%. However, the impact of MLC extension or contraction errors was found to be significant. For 2 mm leaf extension errors, the average changes in D95% of PGTV, PTV1, and PTV2 were 4.31±0.67%, 4.29±0.65%, and 4.79±0.82%; the averaged D0.1cc to the spinal cord and brainstem increased by 7.39±5.25% and 6.32±2.28%; and the averaged mean dose to the left and right parotids increased by 12.75±2.02% and 13.39±2.17%, respectively. Conclusion: The dosimetric effect was insignificant for random MLC leaf position errors up to 2 mm, but dose distributions were highly sensitive to MLC extension or contraction errors. Attention should be paid to anatomic changes in target organs and anatomical structures during the course of treatment, and individualized radiotherapy is recommended to ensure adaptive doses.

  2. Medication errors with the use of allopurinol and colchicine: a retrospective study of a national, anonymous Internet-accessible error reporting system.

    Science.gov (United States)

    Mikuls, Ted R; Curtis, Jeffrey R; Allison, Jeroan J; Hicks, Rodney W; Saag, Kenneth G

    2006-03-01

    To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. We examined data from the MEDMARX database, covering the period from January 1, 1999 through December 31, 2003. For allopurinol and colchicine, we examined error severity, source, type, contributing factors, and healthcare personnel involved in errors, and we detailed errors resulting in patient harm. Causes of error and the frequency of other error characteristics were compared for gout medications versus other musculoskeletal treatments using the chi-square statistic. Gout medication errors occurred in 39% (n = 273) of facilities participating in the MEDMARX program. Reported errors were predominantly from the inpatient hospital setting and related to the use of allopurinol (n = 524), followed by colchicine (n = 315), probenecid (n = 50), and sulfinpyrazone (n = 2). Compared to errors involving other musculoskeletal treatments, allopurinol and colchicine errors were more often ascribed to problems with physician prescribing (7% for other therapies versus 23-39% for allopurinol and colchicine, p < 0.0001) and less often due to problems with drug administration or nursing error (50% vs 23-27%, p < 0.0001). Our results suggest that inappropriate prescribing practices are characteristic of errors occurring with the use of allopurinol and colchicine. Physician prescribing practices are a potential target for quality improvement interventions in gout care.

  3. Density estimation by maximum quantum entropy

    International Nuclear Information System (INIS)

    Silver, R.N.; Wallstrom, T.; Martz, H.F.

    1993-01-01

    A new Bayesian method for non-parametric density estimation is proposed, based on a mathematical analogy to quantum statistical physics. The mathematical procedure is related to maximum entropy methods for inverse problems and image reconstruction. The information divergence enforces global smoothing toward default models, convexity, positivity, extensivity and normalization. The novel feature is the replacement of classical entropy by quantum entropy, so that local smoothing is enforced by constraints on differential operators. The linear response of the estimate is proportional to the covariance. The hyperparameters are estimated by type-II maximum likelihood (evidence). The method is demonstrated on textbook data sets

  4. Electrophysiological correlates of error processing in borderline personality disorder.

    Science.gov (United States)

    Ruchsow, Martin; Walter, Henrik; Buchheim, Anna; Martius, Philipp; Spitzer, Manfred; Kächele, Horst; Grön, Georg; Kiefer, Markus

    2006-05-01

    The electrophysiological correlates of error processing were investigated in patients with borderline personality disorder (BPD) using event-related potentials (ERP). Twelve patients with BPD and 12 healthy controls were additionally rated with the Barratt impulsiveness scale (BIS-10). Participants performed a Go/Nogo task while a 64 channel EEG was recorded. Three ERP components were of special interest: error-related negativity (ERN)/error negativity (Ne), early error positivity (early Pe) reflecting automatic error processing, and the late Pe component which is thought to mirror the awareness of erroneous responses. We found smaller amplitudes of the ERN/Ne in patients with BPD compared to controls. Moreover, significant correlations with the BIS-10 non-planning sub-score could be demonstrated for both the entire group and the patient group. No between-group differences were observed for the early and late Pe components. ERP measures appear to be a suitable tool to study clinical time courses in BPD.

  5. Prescribing errors in a Brazilian neonatal intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Paula Cezar Machado

    2015-12-01

    Full Text Available Pediatric patients, especially those admitted to the neonatal intensive care unit (ICU), are highly vulnerable to medication errors. This study aimed to measure the prescription error rate in a university hospital neonatal ICU and to identify susceptible patients, types of errors, and the medicines involved. The variables related to the medicines prescribed were compared to the Neofax prescription protocol. The study enrolled 150 newborns and analyzed 489 prescription order forms, with 1,491 medication items, corresponding to 46 drugs. The prescription error rate was 43.5%. Errors were found in dosage, intervals, diluents, and infusion time, distributed across 7 therapeutic classes. Errors were more frequent in preterm newborns. Diluent and dosing were the most frequent sources of errors. The therapeutic classes most involved in errors were antimicrobial agents and drugs that act on the nervous and cardiovascular systems.

  6. Approximate maximum parsimony and ancestral maximum likelihood.

    Science.gov (United States)

    Alon, Noga; Chor, Benny; Pardi, Fabio; Rapoport, Anat

    2010-01-01

    We explore the maximum parsimony (MP) and ancestral maximum likelihood (AML) criteria in phylogenetic tree reconstruction. Both problems are NP-hard, so we seek approximate solutions. We formulate the two problems as Steiner tree problems under appropriate distances. The gist of our approach is the succinct characterization of Steiner trees for a small number of leaves for the two distances. This enables the use of known Steiner tree approximation algorithms. The approach leads to a 16/9 approximation ratio for AML and asymptotically to a 1.55 approximation ratio for MP.

  7. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
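
    A linear toy model (a construction for illustration here, not the paper's exact setup) makes the trade-off concrete: each MC run measures the observable's shift with statistical noise, so the unisim variance estimate accumulates one noise term per systematic parameter, while the multisim estimate absorbs it only once.

```python
import numpy as np

rng = np.random.default_rng(42)
n_sys = 10
sens = rng.normal(0.0, 1.0, n_sys)       # linear sensitivity to each systematic
true_var = np.sum(sens**2)               # true total systematic variance
sigma_stat = 2.0                         # MC statistical noise per run (> individual systematics)

# unisim: one run per parameter, shifted by +1 sigma; each run carries MC noise
uni = sens + rng.normal(0.0, sigma_stat, n_sys)
var_unisim = np.sum(uni**2)              # biased upward by ~ n_sys * sigma_stat^2

# multisim: every run draws all parameters from their assumed priors
n_runs = 2000
draws = rng.normal(0.0, 1.0, (n_runs, n_sys))
multi = draws @ sens + rng.normal(0.0, sigma_stat, n_runs)
var_multisim = multi.var(ddof=1)         # biased upward by ~ sigma_stat^2 only

print(f"true {true_var:.2f}  unisim {var_unisim:.2f}  multisim {var_multisim:.2f}")
```

    In this regime (statistical noise larger than any single systematic) the multisim estimate lands much closer to the true total variance, matching the paper's conclusion; with tiny statistical noise the comparison reverses.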

  8. SU-E-T-195: Gantry Angle Dependency of MLC Leaf Position Error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, S; Hong, C; Kim, M; Chung, K; Kim, J; Han, Y; Ahn, S; Chung, S; Shin, E; Shin, J; Kim, H; Kim, D; Choi, D [Department of Radiation Oncology, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)]

    2014-06-01

    Purpose: The aim of this study was to investigate the gantry angle dependency of the multileaf collimator (MLC) leaf position error. Methods: An automatic MLC quality assurance system (AutoMLCQA) was developed to evaluate the gantry angle dependency of the MLC leaf position error using an electronic portal imaging device (EPID). To eliminate the EPID position error due to gantry rotation, we designed a reference marker (RM) that could be inserted into the wedge mount. After setting up the EPID, a reference image was taken of the RM using an open field. Next, an EPID-based picket-fence test (PFT) was performed without the RM. These procedures were repeated at 45° intervals of the gantry angle. A total of eight reference images and PFT image sets were analyzed using in-house software. The average MLC leaf position error was calculated at five pickets (-10, -5, 0, 5, and 10 cm) in accordance with general PFT guidelines. This test was carried out for four linear accelerators. Results: The average MLC leaf position errors were within the set criterion of <1 mm (actual errors ranged from -0.7 to 0.8 mm) for all gantry angles, but significant gantry angle dependency was observed in all machines. The error was smaller at a gantry angle of 0° but increased toward the positive direction with gantry angle increments in the clockwise direction. The error reached a maximum value at a gantry angle of 90° and then gradually decreased until 180°. In the counter-clockwise rotation of the gantry, the same pattern of error was observed but the error increased in the negative direction. Conclusion: The AutoMLCQA system was useful for evaluating the MLC leaf position error at various gantry angles without the EPID position error. The gantry angle dependency should be considered during MLC leaf position error analysis.

  9. Inherent Error in Asynchronous Digital Flight Controls.

    Science.gov (United States)

    1980-02-01

    operation will be eliminated. If T* is close to T, the inherent error (eA) is a small value. Then the deficiency of the basic model, which is de... ...indicate the channel failure. To reduce this deficiency, the new model computes a tolerance value equal to the maximum steady-state sample covariance of the

  10. The Argos-CLS Kalman Filter: Error Structures and State-Space Modelling Relative to Fastloc GPS Data.

    Directory of Open Access Journals (Sweden)

    Andrew D Lowther

    Full Text Available Understanding how an animal utilises its surroundings requires its movements through space to be described accurately. Satellite telemetry is the only means of acquiring movement data for many species; however, data are prone to varying amounts of spatial error. The recent application of state-space models (SSMs) to the location estimation problem has provided a means to incorporate spatial errors when characterising animal movements. The predominant platform for collecting satellite telemetry data on free-ranging animals, Service Argos, recently provided an alternative Doppler location estimation algorithm that is purported to be more accurate and to generate a greater number of locations than its predecessor. We provide a comprehensive assessment of this new estimation process's performance on data from free-ranging animals relative to concurrently collected Fastloc GPS data. Additionally, we test the efficacy of three readily available SSMs in predicting the movement of two focal animals. Raw Argos location estimates generated by the new algorithm were greatly improved compared to the old system. Approximately twice as many Argos locations were derived compared to GPS on the devices used. Root Mean Square Errors (RMSE) for each optimal SSM were less than 4.25 km, with some producing RMSE of less than 2.50 km. Differences in the biological plausibility of the tracks between the two focal animals used to investigate the utility of SSMs highlight the importance of considering animal behaviour in movement studies. The ability to reprocess Argos data collected since 2008 with the new algorithm should permit questions of animal movement to be revisited at a finer resolution.
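
    A minimal state-space treatment of noisy location fixes can be sketched with a constant-velocity Kalman filter. This stands in for the SSMs tested in the record (which are more elaborate, e.g. correlated random walk models); every number below—process and measurement noise, track length—is invented.

```python
import numpy as np

rng = np.random.default_rng(3)
dt, n = 1.0, 300
F = np.eye(4); F[0, 2] = F[1, 3] = dt            # constant-velocity dynamics
H = np.eye(2, 4)                                  # only positions are observed
Q = np.diag([0.01, 0.01, 0.05, 0.05])             # process noise (assumed)
R = np.diag([9.0, 9.0])                           # Argos-like noise, km^2 (assumed)

# Simulate a true track and noisy "Argos" fixes
xt = np.zeros((n, 4))
for k in range(1, n):
    xt[k] = F @ xt[k - 1] + rng.multivariate_normal(np.zeros(4), Q)
z = xt[:, :2] + rng.multivariate_normal(np.zeros(2), R, size=n)

# Standard Kalman filter: predict, then update with each fix
x, P, est = np.zeros(4), np.eye(4) * 100.0, []
for k in range(n):
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)  # Kalman gain
    x = x + K @ (z[k] - H @ x)                    # update with the k-th fix
    P = (np.eye(4) - K @ H) @ P
    est.append(x[:2].copy())

rmse = lambda a, b: float(np.sqrt(((a - b) ** 2).sum(axis=1).mean()))
print("raw fix RMSE :", rmse(z, xt[:, :2]))
print("filtered RMSE:", rmse(np.array(est), xt[:, :2]))
```

    The filtered RMSE comes out well below the raw-fix RMSE, which is the basic point of applying an SSM to telemetry data.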

  11. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    Energy Technology Data Exchange (ETDEWEB)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D [UT MD Anderson Cancer Center, Houston, TX (United States)]

    2015-06-15

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  12. WE-G-BRA-04: Common Errors and Deficiencies in Radiation Oncology Practice

    International Nuclear Information System (INIS)

    Kry, S; Dromgoole, L; Alvarez, P; Lowenstein, J; Molineu, A; Taylor, P; Followill, D

    2015-01-01

    Purpose: Dosimetric errors in radiotherapy dose delivery lead to suboptimal treatments and outcomes. This work reviews the frequency and severity of dosimetric and programmatic errors identified by on-site audits performed by the IROC Houston QA center. Methods: IROC Houston on-site audits evaluate absolute beam calibration, relative dosimetry data compared to the treatment planning system data, and processes such as machine QA. Audits conducted from 2000-present were abstracted for recommendations, including type of recommendation and magnitude of error when applicable. Dosimetric recommendations corresponded to absolute dose errors >3% and relative dosimetry errors >2%. On-site audits of 1020 accelerators at 409 institutions were reviewed. Results: A total of 1280 recommendations were made (average 3.1/institution). The most common recommendation was for inadequate QA procedures per TG-40 and/or TG-142 (82% of institutions) with the most commonly noted deficiency being x-ray and electron off-axis constancy versus gantry angle. Dosimetrically, the most common errors in relative dosimetry were in small-field output factors (59% of institutions), wedge factors (33% of institutions), off-axis factors (21% of institutions), and photon PDD (18% of institutions). Errors in calibration were also problematic: 20% of institutions had an error in electron beam calibration, 8% had an error in photon beam calibration, and 7% had an error in brachytherapy source calibration. Almost all types of data reviewed included errors up to 7% although 20 institutions had errors in excess of 10%, and 5 had errors in excess of 20%. The frequency of electron calibration errors decreased significantly with time, but all other errors show non-significant changes. Conclusion: There are many common and often serious errors made during the establishment and maintenance of a radiotherapy program that can be identified through independent peer review. Physicists should be cautious, particularly

  13. Entropy Error Model of Planar Geometry Features in GIS

    Institute of Scientific and Technical Information of China (English)

    LI Dajun; GUAN Yunlan; GONG Jianya; DU Daosheng

    2003-01-01

    Positional error of line segments is usually described by using the "g-band"; however, its band width depends on the choice of confidence level. In fact, given different confidence levels, a series of concentric bands can be obtained. To overcome the effect of confidence level on the error indicator, by introducing the union entropy theory, we propose an entropy error ellipse index for points, then extend it to line segments and polygons, and establish an entropy error band for line segments and an entropy error donut for polygons. The research shows that the entropy error index can be determined uniquely and is not influenced by confidence level, and that it is suitable for describing the positional uncertainty of planar geometry features.

  14. Human error theory: relevance to nurse management.

    Science.gov (United States)

    Armitage, Gerry

    2009-03-01

    Describe, discuss and critically appraise human error theory and consider its relevance for nurse managers. Healthcare errors are a persistent threat to patient safety. Effective risk management and clinical governance depend on understanding the nature of error. This paper draws upon a wide literature of published works, largely from the fields of cognitive psychology and human factors. Although the content of this paper is pertinent to any healthcare professional, it is written primarily for nurse managers. Error is inevitable. Causation is often attributed to individuals, yet causation in complex environments such as healthcare is predominantly multi-factorial. Individual performance is affected by the tendency to develop prepacked solutions and attention deficits, which can in turn be related to local conditions and systems or latent failures. Blame is often inappropriate. Defences should be constructed in the light of these considerations and to promote error wisdom and organizational resilience. Managing and learning from error is seen as a priority in the British National Health Service (NHS); this can be better achieved with an understanding of the roots, nature and consequences of error. Such an understanding can provide a helpful framework for a range of risk management activities.

  15. A note on estimating errors from the likelihood function

    International Nuclear Information System (INIS)

    Barlow, Roger

    2005-01-01

    The points at which the log likelihood falls by 1/2 from its maximum value are often used to give the 'errors' on a result, i.e. the 68% central confidence interval. The validity of this is examined for two simple cases: a lifetime measurement and a Poisson measurement. Results are compared with the exact Neyman construction and with the simple Bartlett approximation. It is shown that the accuracy of the log likelihood method is poor, and the Bartlett construction explains why it is flawed.
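
    For the Poisson case discussed, the construction is short enough to write out: the interval endpoints are where ln L falls by 1/2 from its maximum at μ̂ = n. A sketch assuming a single Poisson observation:

```python
import numpy as np
from scipy.optimize import brentq

n_obs = 5                                  # observed count
lnL = lambda mu: n_obs * np.log(mu) - mu   # Poisson log likelihood (constants dropped)
lnL_max = lnL(n_obs)                       # maximum is at muhat = n_obs

# Endpoints where the log likelihood has fallen by 1/2 from its maximum
drop = lambda mu: lnL(mu) - (lnL_max - 0.5)
lo = brentq(drop, 1e-9, n_obs)
hi = brentq(drop, n_obs, 20 * n_obs + 20)
print(f"muhat = {n_obs}: log-likelihood interval [{lo:.2f}, {hi:.2f}]")
print(f"naive muhat +/- sqrt(n):   [{n_obs - n_obs**0.5:.2f}, {n_obs + n_obs**0.5:.2f}]")
```

    The interval comes out asymmetric, as expected for small counts; the record's point is that its actual coverage can still differ noticeably from the exact Neyman construction.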

  16. Increased error-related brain activity distinguishes generalized anxiety disorder with and without comorbid major depressive disorder.

    Science.gov (United States)

    Weinberg, Anna; Klein, Daniel N; Hajcak, Greg

    2012-11-01

    Generalized anxiety disorder (GAD) and major depressive disorder (MDD) are so frequently comorbid that some have suggested that the 2 should be collapsed into a single overarching "distress" disorder. Yet there is also increasing evidence that the 2 categories are not redundant. Neurobehavioral markers that differentiate GAD and MDD would be helpful in ongoing efforts to refine classification schemes based on neurobiological measures. The error-related negativity (ERN) may be one such marker. The ERN is an event-related potential component presenting as a negative deflection approximately 50 ms following an erroneous response and reflects activity of the anterior cingulate cortex. There is evidence for an enhanced ERN in individuals with GAD, but the literature in MDD is mixed. The present study measured the ERN in 26 GAD, 23 comorbid GAD and MDD, and 36 control participants, all of whom were female and medication-free. Consistent with previous research, the GAD group was characterized by a larger ERN and an increased difference between error and correct trials than controls. No such enhancement was evident in the comorbid group, suggesting comorbid depression may moderate the relationship between the ERN and anxiety. The present study further suggests that the ERN is a potentially useful neurobiological marker for future studies that consider the pathophysiology of multiple disorders in order to construct or refine neurobiologically based diagnostic phenotypes. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  17. Random measurement error: Why worry? An example of cardiovascular risk factors.

    Directory of Open Access Journals (Sweden)

    Timo B Brakenhoff

    Full Text Available With the increased use of data not originally recorded for research, such as routine care data (or 'big data'), measurement error is bound to become an increasingly relevant problem in medical research. A common view among medical researchers on the influence of random measurement error (i.e. classical measurement error) is that its presence leads to some degree of systematic underestimation of studied exposure-outcome relations (i.e. attenuation of the effect estimate). For the common situation where the analysis involves at least one exposure and one confounder, we demonstrate that the direction of the effect of random measurement error on the estimated exposure-outcome relations can be difficult to anticipate. Using three example studies on cardiovascular risk factors, we illustrate that random measurement error in the exposure and/or confounder can lead to underestimation as well as overestimation of exposure-outcome relations. We therefore advise medical researchers to refrain from making claims about the direction of the effect of measurement error in their manuscripts, unless the appropriate inferential tools are used to study or alleviate the impact of measurement error in the analysis.
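
    The unpredictable direction of the bias is easy to demonstrate by simulation. The sketch below uses a made-up linear data-generating model, not the paper's case studies: classical error in the measured confounder leaves residual confounding that overestimates the exposure effect, while the same error applied to the exposure attenuates it.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 200_000
c = rng.normal(size=n)                       # confounder
x = 0.6 * c + rng.normal(size=n)             # exposure, correlated with confounder
y = 1.0 * x + 1.0 * c + rng.normal(size=n)   # outcome; true exposure effect = 1.0

fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]

for tau in (0.0, 0.5, 1.0):
    # classical error on the confounder -> residual confounding, overestimation
    b_conf = fit(np.column_stack([x, c + rng.normal(0, tau, n)]), y)[0]
    # classical error on the exposure -> regression dilution, underestimation
    b_expo = fit(np.column_stack([x + rng.normal(0, tau, n), c]), y)[0]
    print(f"error sd {tau:.1f}: adjusted effect {b_conf:.3f} (confounder mismeasured), "
          f"{b_expo:.3f} (exposure mismeasured); truth 1.0")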

  18. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    Science.gov (United States)

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to type and the individual and system contributory factors was made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (κ) was computed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate

  19. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  20. A crowdsourcing workflow for extracting chemical-induced disease relations from free text.

    Science.gov (United States)

    Li, Tong Shu; Bravo, Àlex; Furlong, Laura I; Good, Benjamin M; Su, Andrew I

    2016-01-01

    Relations between chemicals and diseases are among the most queried biomedical interactions. Although expert manual curation is the standard method for extracting these relations from the literature, it is expensive and impractical to apply to large numbers of documents, and therefore alternative methods are required. We describe here a crowdsourcing workflow for extracting chemical-induced disease relations from free text as part of the BioCreative V Chemical Disease Relation challenge. Five non-expert workers on the CrowdFlower platform were shown each potential chemical-induced disease relation highlighted in the original source text and asked to make binary judgments about whether the text supported the relation. Worker responses were aggregated through voting, and relations receiving four or more votes were predicted as true. On the official evaluation dataset of 500 PubMed abstracts, the crowd attained a 0.505 F-score (0.475 precision, 0.540 recall), with a maximum theoretical recall of 0.751 due to errors with named entity recognition. The total crowdsourcing cost was $1290.67 ($2.58 per abstract) and took a total of 7 h. A qualitative error analysis revealed that 46.66% of sampled errors were due to task limitations and gold standard errors, indicating that performance can still be improved. All code and results are publicly available at https://github.com/SuLab/crowd_cid_relex. Database URL: https://github.com/SuLab/crowd_cid_relex. © The Author(s) 2016. Published by Oxford University Press.
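
    The aggregation rule itself is tiny; the sketch below scores a handful of invented worker judgments against a toy gold standard using the same four-of-five vote threshold (the relation identifiers are hypothetical).

```python
# Toy votes: five binary worker judgments per candidate relation (invented data)
votes = {
    "rel-001": [1, 1, 1, 1, 0],   # predicted true (4 votes)
    "rel-002": [1, 0, 1, 0, 0],   # predicted false
    "rel-003": [1, 1, 1, 1, 1],   # predicted true
    "rel-004": [0, 0, 1, 1, 1],   # predicted false (only 3 votes)
}
gold = {"rel-001": 1, "rel-002": 0, "rel-003": 1, "rel-004": 1}

pred = {r: int(sum(v) >= 4) for r, v in votes.items()}
tp = sum(1 for r in gold if pred[r] == 1 and gold[r] == 1)
fp = sum(1 for r in gold if pred[r] == 1 and gold[r] == 0)
fn = sum(1 for r in gold if pred[r] == 0 and gold[r] == 1)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
print(f"precision {precision:.2f}  recall {recall:.2f}  F {f1:.2f}")
```

    Raising the vote threshold trades recall for precision, which is the main dial such a workflow exposes.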

  1. A crowdsourcing workflow for extracting chemical-induced disease relations from free text

    Science.gov (United States)

    Li, Tong Shu; Bravo, Àlex; Furlong, Laura I.; Good, Benjamin M.; Su, Andrew I.

    2016-01-01

    Relations between chemicals and diseases are one of the most queried biomedical interactions. Although expert manual curation is the standard method for extracting these relations from the literature, it is expensive and impractical to apply to large numbers of documents, and therefore alternative methods are required. We describe here a crowdsourcing workflow for extracting chemical-induced disease relations from free text as part of the BioCreative V Chemical Disease Relation challenge. Five non-expert workers on the CrowdFlower platform were shown each potential chemical-induced disease relation highlighted in the original source text and asked to make binary judgments about whether the text supported the relation. Worker responses were aggregated through voting, and relations receiving four or more votes were predicted as true. On the official evaluation dataset of 500 PubMed abstracts, the crowd attained a 0.505 F-score (0.475 precision, 0.540 recall), with a maximum theoretical recall of 0.751 due to errors with named entity recognition. The total crowdsourcing cost was $1290.67 ($2.58 per abstract) and took a total of 7 h. A qualitative error analysis revealed that 46.66% of sampled errors were due to task limitations and gold standard errors, indicating that performance can still be improved. All code and results are publicly available at https://github.com/SuLab/crowd_cid_relex Database URL: https://github.com/SuLab/crowd_cid_relex PMID:27087308

  2. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, and the prevention and management of medication errors, with clear tables that are easy to understand.

  3. Implication of spot position error on plan quality and patient safety in pencil-beam-scanning proton therapy

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G. [Division of Medical Physics, Department of Radiation Oncology, Mayo Clinic, Rochester, Minnesota 55905 (United States)]

    2014-08-15

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of the percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1 σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot
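
    The core effect—dose ripple from a shifted spot growing with spot spacing—can be seen in one dimension. The sketch below is a deliberately crude stand-in for the planning-system calculations (1D Gaussian spots with uniform weights), so the percentages differ from the paper's 3D results, but the trend with SS is the same.

```python
import numpy as np

sigma = 4.0                  # spot size, 1 sigma in mm (small-spot case from the abstract)
pe = 1.2                     # single-spot position error in mm
x = np.linspace(-40.0, 40.0, 8001)

def dose(centers):
    # Uniform-weight line of 1D Gaussian spots
    return sum(np.exp(-(x - c) ** 2 / (2 * sigma**2)) for c in centers)

for ss_frac in (0.75, 1.0, 1.25, 1.5):
    ss = ss_frac * sigma
    centers = np.arange(-30.0, 30.0 + 1e-9, ss)
    d0 = dose(centers)
    shifted = centers.copy()
    shifted[len(shifted) // 2] += pe             # perturb the central spot
    pde = 100.0 * np.max(np.abs(dose(shifted) - d0)) / d0.max()
    print(f"SS = {ss_frac:.2f} sigma: max percent dose error = {pde:.1f}%")
```

    Each shifted spot produces the hot/cold pair described in the abstract, and the ripple grows steeply once SS exceeds about 1.25 σ.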

  4. Maximum Entropy Fundamentals

    Directory of Open Access Journals (Sweden)

    F. Topsøe

    2001-09-01

    Full Text Available Abstract: In its modern formulation, the Maximum Entropy Principle was promoted by E.T. Jaynes, starting in the mid-fifties. The principle dictates that one should look for a distribution, consistent with available information, which maximizes the entropy. However, this principle focuses only on distributions and it appears advantageous to bring information theoretical thinking more prominently into play by also focusing on the "observer" and on coding. This view was brought forward by the second named author in the late seventies and is the view we will follow up on here. It leads to the consideration of a certain game, the Code Length Game and, via standard game theoretical thinking, to a principle of Game Theoretical Equilibrium. This principle is more basic than the Maximum Entropy Principle in the sense that the search for one type of optimal strategies in the Code Length Game translates directly into the search for distributions with maximum entropy. In the present paper we offer a self-contained and comprehensive treatment of the fundamentals of both principles mentioned, based on a study of the Code Length Game. Though new concepts and results are presented, the reading should be instructional and accessible to a rather wide audience, at least if certain mathematical details are left aside at a first reading. The most frequently studied instance of entropy maximization pertains to the Mean Energy Model, which involves a moment constraint related to a given function, here taken to represent "energy". This type of application is very well known from the literature, with hundreds of applications pertaining to several different fields, and will also serve here as an important illustration of the theory. But our approach reaches further, especially regarding the study of continuity properties of the entropy function, and this leads to new results which allow a discussion of models with so-called entropy loss. These results have tempted us to speculate over
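
    The Mean Energy Model mentioned above fits in a few lines: over a finite alphabet, maximizing entropy subject to a prescribed mean "energy" yields the Gibbs distribution pᵢ ∝ exp(−βEᵢ), with β chosen to satisfy the constraint. The energy levels and target mean below are arbitrary.

```python
import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])    # energy levels (assumed)
target = 1.2                          # prescribed mean energy, between E.min() and E.max()

def mean_energy(beta):
    w = np.exp(-beta * (E - E.min()))  # shift energies for numerical stability
    return (w @ E) / w.sum()

# mean_energy is monotone in beta, so a sign-change bracket suffices
beta = brentq(lambda b: mean_energy(b) - target, -50.0, 50.0)
p = np.exp(-beta * (E - E.min()))
p /= p.sum()
print(f"beta = {beta:.4f}")
print("maxent distribution:", p, " mean energy:", p @ E)
```

    A target below the uniform mean (here 1.5) forces β > 0, tilting probability toward low energies; a target above it would make β negative.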

  5. Fast maximum likelihood estimation of mutation rates using a birth-death process.

    Science.gov (United States)

    Wu, Xiaowei; Zhu, Hongxiao

    2015-02-07

    Since fluctuation analysis was first introduced by Luria and Delbrück in 1943, it has been widely used to make inferences about spontaneous mutation rates in cultured cells. Under certain model assumptions, the probability distribution of the number of mutants that appear in a fluctuation experiment can be derived explicitly, which provides the basis of mutation rate estimation. It has been shown that, among various existing estimators, the maximum likelihood estimator usually demonstrates desirable properties such as consistency and lower mean squared error. However, its application to real experimental data is often hindered by slow computation of the likelihood due to the recursive form of the mutant-count distribution. We propose a fast maximum likelihood estimator of mutation rates, MLE-BD, based on a birth-death process model with a non-differential growth assumption. Simulation studies demonstrate that, compared with the conventional maximum likelihood estimator derived from the Luria-Delbrück distribution, MLE-BD achieves a substantial improvement in computational speed and is applicable to arbitrarily large numbers of mutants. In addition, it retains good accuracy in point estimation. Published by Elsevier Ltd.
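    For context, the conventional estimator that MLE-BD is benchmarked against maximizes the likelihood under the recursive Luria-Delbrück mutant-count distribution. A rough sketch using what is commonly cited as the Ma-Sandri-Sarkar recursion follows (toy data; the per-count recursion is exactly the slow step the paper's method is designed to avoid):

```python
import numpy as np

# Luria-Delbrueck pmf via the Ma-Sandri-Sarkar recursion (assumed form):
#   p_0 = exp(-m),  p_n = (m/n) * sum_{j<n} p_j / ((n-j)(n-j+1)),
# where m is the expected number of mutations per culture.
def ld_pmf(m, n_max):
    p = np.zeros(n_max + 1)
    p[0] = np.exp(-m)
    for n in range(1, n_max + 1):
        j = np.arange(n)                      # 0 .. n-1
        p[n] = (m / n) * np.sum(p[j] / ((n - j) * (n - j + 1)))
    return p

# Grid-search MLE over m given observed mutant counts (toy data below).
def mle_mutations(counts, m_grid=np.linspace(0.1, 10, 200)):
    n_max = int(max(counts))
    loglik = [np.sum(np.log(ld_pmf(m, n_max)[counts])) for m in m_grid]
    return m_grid[int(np.argmax(loglik))]

counts = np.array([0, 1, 0, 3, 12, 2, 0, 5, 1, 27])  # mutants per culture (invented)
print("MLE of expected mutations per culture:", mle_mutations(counts))
```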

  6. Maximum likelihood estimation for Cox's regression model under nested case-control sampling

    DEFF Research Database (Denmark)

    Scheike, Thomas Harder; Juul, Anders

    2004-01-01

    Nested case-control sampling is designed to reduce the costs of large cohort studies. It is important to estimate the parameters of interest as efficiently as possible. We present a new maximum likelihood estimator (MLE) for nested case-control sampling in the context of Cox's proportional hazards model. The MLE is computed by the EM-algorithm, which is easy to implement in the proportional hazards setting. Standard errors are estimated by a numerical profile likelihood approach based on EM aided differentiation. The work was motivated by a nested case-control study that hypothesized that insulin-like growth factor I was associated with ischemic heart disease. The study was based on a population of 3784 Danes and 231 cases of ischemic heart disease where controls were matched on age and gender. We illustrate the use of the MLE for these data and show how the maximum likelihood framework can be used...

  7. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
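    A minimal sketch of the setting described: XOR-ing the received word with each code word yields one candidate error vector per code word, and minimum-weight (nearest-neighbour) decoding selects the code word whose error vector has the fewest ones. The code below uses an arbitrary toy code, not one from the paper:

```python
import numpy as np

# Toy (0,1)-code with four code words of length 5 (invented for illustration).
code = np.array([[0, 0, 0, 0, 0],
                 [1, 1, 1, 0, 0],
                 [0, 0, 1, 1, 1],
                 [1, 1, 0, 1, 1]])
received = np.array([1, 1, 1, 1, 0])

error_vectors = code ^ received          # XOR: one error vector per code word
weights = error_vectors.sum(axis=1)      # Hamming weights of the error vectors
best = int(np.argmin(weights))           # minimum-weight decoding decision
print("decoded word:", code[best], "error vector:", error_vectors[best])
```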

  8. Larger error signals in major depression are associated with better avoidance learning

    Directory of Open Access Journals (Sweden)

    James F eCavanagh

    2011-11-01

    Full Text Available The medial prefrontal cortex (mPFC is particularly reactive to signals of error, punishment, and conflict in the service of behavioral adaptation and it is consistently implicated in the etiology of Major Depressive Disorder (MDD. This association makes conceptual sense, given that MDD has been associated with hyper-reactivity in neural systems associated with punishment processing. Yet in practice, depression-related variance in measures of mPFC functioning often fails to relate to performance. For example, neuroelectric reflections of mediofrontal error signals are often found to be larger in MDD, but a deficit in post-error performance suggests that these error signals are not being used to rapidly adapt behavior. Thus, it remains unknown if depression-related variance in error signals reflects a meaningful alteration in the use of error or punishment information. However, larger mediofrontal error signals have also been related to another behavioral tendency: increased accuracy in avoidance learning. The integrity of this error-avoidance system remains untested in MDD. In this study, EEG was recorded as 21 symptomatic, drug-free participants with current or past MDD and 24 control participants performed a probabilistic reinforcement learning task. Depressed participants had larger mPFC EEG responses to error feedback than controls. The direct relationship between error signal amplitudes and avoidance learning accuracy was replicated. Crucially, this relationship was stronger in depressed participants for high conflict lose-lose situations, demonstrating a selective alteration of avoidance learning. This investigation provided evidence that larger error signal amplitudes in depression are associated with increased avoidance learning, identifying a candidate mechanistic model for hypersensitivity to negative outcomes in depression.

  9. Error exponents for entanglement concentration

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas

    2003-01-01

    Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases

  10. Relative range error evaluation of terrestrial laser scanners using a plate, a sphere, and a novel dual-sphere-plate target.

    Science.gov (United States)

    Muralikrishnan, Bala; Rachakonda, Prem; Lee, Vincent; Shilling, Meghan; Sawyer, Daniel; Cheok, Geraldine; Cournoyer, Luc

    2017-12-01

    Terrestrial laser scanners (TLS) are a class of 3D imaging systems that produce a 3D point cloud by measuring the range and two angles to a point. The fundamental measurement of a TLS is range. Relative range error is one component of the overall range error of TLS and its estimation is therefore an important aspect in establishing metrological traceability of measurements performed using these systems. Target geometry is an important aspect to consider when realizing the relative range tests. The recently published ASTM E2938-15 mandates the use of a plate target for the relative range tests. While a plate target may reasonably be expected to produce distortion free data even at far distances, the target itself needs careful alignment at each of the relative range test positions. In this paper, we discuss relative range experiments performed using a plate target and then address the advantages and limitations of using a sphere target. We then present a novel dual-sphere-plate target that draws from the advantages of the sphere and the plate without the associated limitations. The spheres in the dual-sphere-plate target are used simply as fiducials to identify a point on the surface of the plate that is common to both the scanner and the reference instrument, thus overcoming the need to carefully align the target.

  11. Error monitoring and empathy: Explorations within a neurophysiological context.

    Science.gov (United States)

    Amiruddin, Azhani; Fueggle, Simone N; Nguyen, An T; Gignac, Gilles E; Clunies-Ross, Karen L; Fox, Allison M

    2017-06-01

    Past literature has proposed that empathy consists of two components: cognitive and affective empathy. Error monitoring mechanisms indexed by the error-related negativity (ERN) have been associated with empathy. Studies have found that a larger ERN is associated with higher levels of empathy. We aimed to expand upon previous work by investigating how error monitoring relates to the independent theoretical domains of cognitive and affective empathy. Study 1 (N = 24) explored the relationship between error monitoring mechanisms and subcomponents of empathy using the Questionnaire of Cognitive and Affective Empathy and found no relationship. Study 2 (N = 38) explored the relationship between the error monitoring mechanisms and overall empathy. Contrary to past findings, there was no evidence to support a relationship between error monitoring mechanisms and scores on empathy measures. A subsequent meta-analysis (Study 3, N = 125) summarizing the relationship across previously published studies together with the two studies reported in the current paper indicated that overall there was no significant association between ERN and empathy and that there was significant heterogeneity across studies. Future investigations exploring the potential variables that may moderate these relationships are discussed. © 2017 Society for Psychophysiological Research.

  12. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their roles in industrial systems are mentioned. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and ''criteria for analyzability'' which must be met by industrial systems so that a systematic analysis can be performed are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  13. Exploring behavioural determinants relating to health professional reporting of medication errors: a qualitative study using the Theoretical Domains Framework.

    Science.gov (United States)

    Alqubaisi, Mai; Tonna, Antonella; Strath, Alison; Stewart, Derek

    2016-07-01

    Effective and efficient medication reporting processes are essential in promoting patient safety. Few qualitative studies have explored reporting of medication errors by health professionals, and none have made reference to behavioural theories. The objective was to describe and understand the behavioural determinants of health professional reporting of medication errors in the United Arab Emirates (UAE). This was a qualitative study comprising face-to-face, semi-structured interviews within three major medical/surgical hospitals of Abu Dhabi, the UAE. Health professionals were sampled purposively in strata of profession and years of experience. The semi-structured interview schedule focused on behavioural determinants around medication error reporting, facilitators, barriers and experiences. The Theoretical Domains Framework (TDF; a framework of theories of behaviour change) was used as a coding framework. Ethical approval was obtained from a UK university and all participating hospital ethics committees. Data saturation was achieved after interviewing ten nurses, ten pharmacists and nine physicians. Whilst it appeared that patient safety and organisational improvement goals and intentions were behavioural determinants which facilitated reporting, there were key determinants which deterred reporting. These included the beliefs of the consequences of reporting (lack of any feedback following reporting and impacting professional reputation, relationships and career progression), emotions (fear and worry) and issues related to the environmental context (time taken to report). These key behavioural determinants which negatively impact error reporting can facilitate the development of an intervention, centring on organisational safety and reporting culture, to enhance reporting effectiveness and efficiency.

  14. Perspectives on Inmate Communication and Interpersonal Relations in the Maximum Security Prison.

    Science.gov (United States)

    Van Voorhis, Patricia; Meussling, Vonne

    In recent years, scholarly and applied inquiry has addressed the importance of interpersonal communication patterns and problems in maximum security institutions for males. As a result of this research, the number of programs designed to improve the interpersonal effectiveness of prison inmates has increased dramatically. Research suggests that…

  15. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
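    The contrast between the two rule classes can be made concrete. The sketch below pits a Rescorla-Wagner-style TER update against a per-cue LER update on repeated compound trials; it is an illustration of the general rule classes compared in the paper, not a reimplementation of the specific fitted models:

```python
import numpy as np

# One trial: x is the binary cue vector, lam the delivered outcome.
# TER drives all weight changes from one shared (total) prediction error;
# LER updates each cue from its own individual prediction error.
def ter_update(w, x, lam, alpha=0.1):
    delta = lam - w @ x                 # total error across the compound
    return w + alpha * delta * x

def ler_update(w, x, lam, alpha=0.1):
    delta = lam - w * x                 # one local error term per cue
    return w + alpha * delta * x

w_ter = np.zeros(2)
w_ler = np.zeros(2)
for _ in range(50):                     # repeated AB+ compound trials
    w_ter = ter_update(w_ter, np.array([1, 1]), 1.0)
    w_ler = ler_update(w_ler, np.array([1, 1]), 1.0)
print("TER weights:", w_ter.round(3))   # cues share the outcome (~0.5 each)
print("LER weights:", w_ler.round(3))   # each cue converges toward 1.0
```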

  16. Errors in fracture diagnoses in the emergency department--characteristics of patients and diurnal variation

    DEFF Research Database (Denmark)

    Hallas, Peter; Ellingsen, Trond

    2006-01-01

    Evaluation of the circumstances related to errors in diagnosis of fractures at an Emergency Department may suggest ways to reduce the incidence of such errors.

  17. Error Tendencies in Processing Student Feedback for Instructional Decision Making.

    Science.gov (United States)

    Schermerhorn, John R., Jr.; And Others

    1985-01-01

    Seeks to assist instructors in recognizing two basic errors that can occur in processing student evaluation data on instructional development efforts; offers a research framework for future investigations of the error tendencies and related issues; and suggests ways in which instructors can confront and manage error tendencies in practice. (MBR)

  18. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributable to human error ranged widely, from 20% to 85%, with an average of 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error, 0∼0.5 cases/reactor-year, has varied little. The proportion attributable to human error has therefore tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets for reducing human error are reported. (K.I.)

  19. Influence of calculation error of total field anomaly in strongly magnetic environments

    Science.gov (United States)

    Yuan, Xiaoyu; Yao, Changli; Zheng, Yuanman; Li, Zelin

    2016-04-01

    An assumption made in many magnetic interpretation techniques is that ΔTact (the total field anomaly, i.e. the measurement given by total field magnetometers after the main geomagnetic field, T0, is removed) can be approximated mathematically by ΔTpro (the projection of the anomalous field vector in the direction of the earth's normal field). In order to meet the demand for high-precision processing of magnetic prospecting, the approximation error E between ΔTact and ΔTpro is studied in this research. Generally speaking, the error E is extremely small when anomalies are no greater than about 0.2 T0. However, the error E may be large in highly magnetic environments, which significantly affects subsequent quantitative inference. Therefore, we investigate the error E through numerical experiments on high-susceptibility bodies. A systematic error analysis was made using a 2-D elliptic cylinder model. The error analysis shows that the magnitude of ΔTact is usually larger than that of ΔTpro. This implies that a theoretical anomaly computed without accounting for the error E overestimates the anomaly associated with the body. It is demonstrated through numerical experiments that the error E is appreciable and should not be ignored. It is also shown that the curves of ΔTpro and the error E have a certain symmetry when the directions of magnetization and the geomagnetic field change. To be more specific, Emax (the maximum of the error E) appears above the center of the magnetic body when the magnetic parameters are fixed. Some other characteristics of the error E are discovered: for instance, the curve of Emax with respect to latitude is symmetrical on both sides of the magnetic equator, and the extremum of Emax is always found in the mid-latitudes. It is also demonstrated that the error E has a great influence on magnetic processing transformations and inversion results. It is concluded that when bodies have high magnetic susceptibilities, the error E can
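    The error E can be checked numerically. Under the standard definitions, the measured anomaly is ΔTact = |T0 + Ta| − |T0|, while interpretation methods use the projection ΔTpro = Ta · t̂0. The sketch below uses illustrative field values (not the paper's model) to show E growing with the relative anomaly strength:

```python
import numpy as np

# Compare dT_act = |T0 + Ta| - |T0| with dT_pro = Ta . t0_hat for an
# anomalous field Ta superposed on a main field T0 (values illustrative,
# roughly mid-latitude magnitudes in nT).
T0 = np.array([0.0, 27000.0, 42000.0])          # main geomagnetic field, nT
t0_hat = T0 / np.linalg.norm(T0)

for scale in (0.05, 0.2, 0.5, 1.0):             # |Ta| relative to |T0|
    Ta = scale * np.linalg.norm(T0) * np.array([0.6, 0.0, 0.8])
    dT_act = np.linalg.norm(T0 + Ta) - np.linalg.norm(T0)
    dT_pro = Ta @ t0_hat
    print(f"|Ta|/|T0| = {scale:.2f}:  E = {dT_act - dT_pro:8.1f} nT")
```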

  20. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and pediatric patients. Of the error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents were most often involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  1. Time-dependent phase error correction using digital waveform synthesis

    Science.gov (United States)

    Doerry, Armin W.; Buskirk, Stephen

    2017-10-10

    The various technologies presented herein relate to correcting a time-dependent phase error generated as part of the formation of a radar waveform. A waveform can be pre-distorted to facilitate correction of an error induced into the waveform by a downstream operation/component in a radar system. For example, the amplifier power droop effect can engender a time-dependent phase error in a waveform as part of a radar signal generating operation. The error can be quantified, and a corresponding complementary distortion can be applied to the waveform to facilitate negation of the error during the subsequent processing of the waveform. A time domain correction can be applied by a phase error correction look up table incorporated into a waveform phase generator.
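    A minimal sketch of the pre-distortion idea (illustrative waveform and error model, not the patented implementation): the complementary phase drawn from a lookup table cancels a known droop-induced phase error when the downstream stage re-applies it.

```python
import numpy as np

# Baseband linear FM chirp (all parameters illustrative).
fs, T = 1e6, 1e-3
t = np.arange(0, T, 1 / fs)
waveform = np.exp(1j * np.pi * 1e9 * t**2)

# Assumed droop-like time-dependent phase error (radians), as would be
# quantified offline and stored in a correction lookup table.
phi_err_lut = 0.4 * (1 - np.exp(-t / 2e-4))

predistorted = waveform * np.exp(-1j * phi_err_lut)  # complementary distortion
received = predistorted * np.exp(1j * phi_err_lut)   # downstream error re-applied

# Residual phase error is at floating-point noise level after cancellation.
print("max residual phase error (rad):",
      np.max(np.abs(np.angle(received * np.conj(waveform)))))
```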

  2. Barriers to Medical Error Reporting for Physicians and Nurses.

    Science.gov (United States)

    Soydemir, Dilek; Seren Intepeler, Seyda; Mert, Hatice

    2017-10-01

    The purpose of the study was to determine what barriers to error reporting exist for physicians and nurses. The study, of descriptive qualitative design, was conducted with physicians and nurses working at a training and research hospital. In-depth interviews were held with eight physicians and 15 nurses, a total of 23 participants. Physicians and nurses do not choose to report medical errors that they experience or witness. When barriers to error reporting were examined, it was seen that there were four main themes involved: fear, the attitude of administration, barriers related to the system, and the employees' perceptions of error. It is important in terms of preventing medical errors to identify the barriers that keep physicians and nurses from reporting errors.

  3. Operator error and emotions. Operator error and emotions - a major cause of human failure

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.; Artiss, W.G.

    2000-01-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  4. Operator error and emotions. Operator error and emotions - a major cause of human failure

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B.K. [Human Factors Practical Incorporated (Canada); Bradley, M. [Univ. of New Brunswick, Saint John, New Brunswick (Canada); Artiss, W.G. [Human Factors Practical (Canada)

    2000-07-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  5. Human error in remote Afterloading Brachytherapy

    International Nuclear Information System (INIS)

    Quinn, M.L.; Callan, J.; Schoenfeld, I.; Serig, D.

    1994-01-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error

  6. Numerical study of the systematic error in Monte Carlo schemes for semiconductors

    Energy Technology Data Exchange (ETDEWEB)

    Muscato, Orazio [Univ. degli Studi di Catania (Italy). Dipt. di Matematica e Informatica; Di Stefano, Vincenza [Univ. degli Studi di Messina (Italy). Dipt. di Matematica; Wagner, Wolfgang [Weierstrass-Institut fuer Angewandte Analysis und Stochastik (WIAS) im Forschungsverbund Berlin e.V. (Germany)

    2008-07-01

    The paper studies the convergence behavior of Monte Carlo schemes for semiconductors. A detailed analysis of the systematic error with respect to numerical parameters is performed. Different sources of systematic error are pointed out and illustrated in a spatially one-dimensional test case. The error with respect to the number of simulation particles occurs during the calculation of the internal electric field. The time step error, which is related to the splitting of transport and electric field calculations, vanishes sufficiently fast. The error due to the approximation of the trajectories of particles depends on the ODE solver used in the algorithm. It is negligible compared to the other sources of time step error, when a second order Runge-Kutta solver is used. The error related to the approximate scattering mechanism is the most significant source of error with respect to the time step. (orig.)

  7. Performance analysis and comparison of an Atkinson cycle coupled to variable temperature heat reservoirs under maximum power and maximum power density conditions

    International Nuclear Information System (INIS)

    Wang, P.-Y.; Hou, S.-S.

    2005-01-01

    In this paper, performance analysis and comparison based on the maximum power and maximum power density conditions have been conducted for an Atkinson cycle coupled to variable temperature heat reservoirs. The Atkinson cycle is internally reversible but externally irreversible, since there is external irreversibility of heat transfer during the processes of constant volume heat addition and constant pressure heat rejection. The study is based purely on classical thermodynamic analysis, and all results and conclusions follow from classical thermodynamics. The power density, defined as the ratio of power output to maximum specific volume in the cycle, is taken as the optimization objective because it considers the effects of engine size as related to investment cost. The results show that an engine design based on maximum power density with constant effectiveness of the hot and cold side heat exchangers, or with a constant inlet temperature ratio of the heat reservoirs, will have smaller size but higher efficiency, compression ratio, expansion ratio and maximum temperature than one based on maximum power. From the viewpoints of engine size and thermal efficiency, an engine design based on maximum power density is better than one based on maximum power conditions. However, due to the higher compression ratio and maximum temperature in the cycle, an engine design based on maximum power density conditions requires tougher materials for engine construction than one based on maximum power conditions.

  8. Prediction-error variance in Bayesian model updating: a comparative study

    Science.gov (United States)

    Asadollahi, Parisa; Li, Jian; Huang, Yong

    2017-04-01

    In Bayesian model updating, the likelihood function is commonly formulated by stochastic embedding, in which the maximum information entropy probability model of the prediction error variances plays an important role; it is a Gaussian distribution subject to the first two moments as constraints. The selection of prediction error variances can be formulated as a model class selection problem, which automatically involves a trade-off between the average data-fit of the model class and the information it extracts from the data. It is therefore critical for robustness in the updating of the structural model, especially in the presence of modeling errors. To date, three ways of treating prediction error variances have been seen in the literature: 1) setting constant values empirically, 2) estimating them based on the goodness-of-fit of the measured data, and 3) updating them as uncertain parameters by applying Bayes' Theorem at the model class level. In this paper, the effect of different strategies for dealing with the prediction error variances on model updating performance is investigated explicitly. A six-story shear building model with six uncertain stiffness parameters is employed as an illustrative example. Transitional Markov Chain Monte Carlo is used to draw samples of the posterior probability density function of the structural model parameters as well as the uncertain prediction variances. The different levels of modeling uncertainty and complexity are modeled through three FE models, including a true model, a model with more complexity, and a model with modeling error. Bayesian updating is performed for the three FE models considering the three aforementioned treatments of the prediction error variances. The effect of the number of measurements on model updating performance is also examined in the study. The results are compared based on model class assessment and indicate that updating the prediction error variances as uncertain parameters at the model

  9. Effects of learning climate and registered nurse staffing on medication errors.

    Science.gov (United States)

    Chang, YunKyung; Mark, Barbara

    2011-01-01

    Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.

  10. Harsh parenting and fearfulness in toddlerhood interact to predict amplitudes of preschool error-related negativity

    Directory of Open Access Journals (Sweden)

    Rebecca J. Brooker

    2014-07-01

    Full Text Available Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children's risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN, an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems.

  11. Development of a methodology for classifying software errors

    Science.gov (United States)

    Gerhart, S. L.

    1976-01-01

    A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.

  12. Stochastic modelling of the monthly average maximum and minimum temperature patterns in India 1981-2015

    Science.gov (United States)

    Narasimha Murthy, K. V.; Saravana, R.; Vijaya Kumar, K.

    2018-04-01

    The paper investigates the stochastic modelling and forecasting of monthly average maximum and minimum temperature patterns through a suitable seasonal autoregressive integrated moving average (SARIMA) model for the period 1981-2015 in India. The variations and distributions of monthly maximum and minimum temperatures are analyzed through box plots and cumulative distribution functions. The time series plot indicates that the maximum temperature series contains sharp peaks in almost all years, while this is not true for the minimum temperature series, so the two series are modelled separately. The candidate SARIMA model was chosen by inspecting the autocorrelation function (ACF), partial autocorrelation function (PACF), and inverse autocorrelation function (IACF) of the logarithmically transformed temperature series. The SARIMA (1, 0, 0) × (0, 1, 1)12 model is selected for the monthly average maximum and minimum temperature series based on minimum Bayesian information criteria. The model parameters are obtained using the maximum-likelihood method with the help of the standard errors of the residuals. The adequacy of the selected model is determined using correlation diagnostic checking through the ACF, PACF, IACF, and p values of the Ljung-Box test statistic of the residuals, and using normal diagnostic checking through the kernel and normal density curves of the histogram and the Q-Q plot. Finally, monthly maximum and minimum temperature patterns of India for the next 3 years are forecast with the help of the selected model.
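    A sketch of fitting the selected model class with a standard library (statsmodels is an assumption here; `temps` is synthetic placeholder data standing in for the 1981-2015 monthly record):

```python
import numpy as np
import statsmodels.api as sm

# SARIMA(1,0,0)x(0,1,1)_12 on a log-transformed monthly temperature series,
# followed by a 3-year (36-month) forecast, mirroring the model class above.
rng = np.random.default_rng(0)
months = np.arange(420)                                   # 35 years of months
temps = 30 + 5 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 0.5, 420)

model = sm.tsa.statespace.SARIMAX(np.log(temps),
                                  order=(1, 0, 0),
                                  seasonal_order=(0, 1, 1, 12))
result = model.fit(disp=False)
forecast = np.exp(result.forecast(steps=36))              # back-transform
print("AIC:", round(result.aic, 1))
print("first forecast year:", forecast[:12].round(2))
```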

  13. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    Science.gov (United States)

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
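    The screening step can be illustrated in a few lines. The note texts below are invented, and the stem-matching regex is one plausible implementation of the keyword search; the example also hints at why the positive predictive value was low, since two of the three hits are false positives:

```python
import re

# Flag notes containing any of the five trigger terms (with inflections);
# in the study, each hit is then manually reviewed to decide whether it
# reports a genuine medical error.
pattern = re.compile(
    r"\b(mistake|error|incorrect|inadvertent|iatrogenic)\w*", re.IGNORECASE)

notes = [
    "Medication was inadvertently continued after discharge.",   # true positive
    "Patient denies chest pain; no errors of metabolism suspected.",  # false positive
    "Trial of error-free transfer technique documented.",         # false positive
]
hits = [n for n in notes if pattern.search(n)]
print(f"{len(hits)}/{len(notes)} notes flagged for manual review")
```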

  14. Joint maximum-likelihood magnitudes of presumed underground nuclear test explosions

    Science.gov (United States)

    Peacock, Sheila; Douglas, Alan; Bowers, David

    2017-08-01

    Body-wave magnitudes (mb) of 606 seismic disturbances caused by presumed underground nuclear test explosions at specific test sites between 1964 and 1996 have been derived from station amplitudes collected by the International Seismological Centre (ISC), by a joint inversion for mb and station-specific magnitude corrections. A maximum-likelihood method was used to reduce the upward bias of network mean magnitudes caused by data censoring, where arrivals at stations that do not report arrivals are assumed to be hidden by the ambient noise at the time. Threshold noise levels at each station were derived from the ISC amplitudes using the method of Kelly and Lacoss, which fits to the observed magnitude-frequency distribution a Gutenberg-Richter exponential decay truncated at low magnitudes by an error function representing the low-magnitude threshold of the station. The joint maximum-likelihood inversion is applied to arrivals from the sites: Semipalatinsk (Kazakhstan) and Novaya Zemlya, former Soviet Union; Singer (Lop Nor), China; Mururoa and Fangataufa, French Polynesia; and Nevada, USA. At sites where eight or more arrivals could be used to derive magnitudes and station terms for 25 or more explosions (Nevada, Semipalatinsk and Mururoa), the resulting magnitudes and station terms were fixed and a second inversion carried out to derive magnitudes for additional explosions with three or more arrivals. 93 more magnitudes were thus derived. During processing for station thresholds, many stations were rejected for sparsity of data, obvious errors in reported amplitude, or great departure of the reported amplitude-frequency distribution from the expected left-truncated exponential decay. Abrupt changes in monthly mean amplitude at a station apparently coincide with changes in recording equipment and/or analysis method at the station.

  15. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
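    A single-band sketch of the described procedure (illustrative numbers): alternative products whose means lie within ±50% of the GPCP base estimate are retained, and the standard deviation s of the retained products is reported as the estimated bias error.

```python
import numpy as np

# One zonal band: GPCP base estimate plus alternative products (mm/day,
# invented values).  Products outside +/-50% of the base estimate are excluded.
gpcp = 2.8
products = np.array([2.5, 3.1, 2.9, 4.6, 1.2])

included = products[np.abs(products - gpcp) <= 0.5 * gpcp]
bias_error = included.std(ddof=1)                 # s, the estimated bias error
relative_error = bias_error / included.mean()     # s/m as used in the abstract
print(f"included = {included}, s = {bias_error:.2f}, s/m = {relative_error:.1%}")
```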

  16. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    Science.gov (United States)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.

  17. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: Dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data, Sparse tensorization methods[2] utilizing node-nested hierarchies, Sampling methods[4] for high-dimensional random variable spaces.

  18. Errors and conflict at the task level and the response level.

    Science.gov (United States)

    Desmet, Charlotte; Fias, Wim; Hartstra, Egbert; Brass, Marcel

    2011-01-26

    In the last decade, research on error and conflict processing has become one of the most influential research areas in the domain of cognitive control. There is now converging evidence that a specific part of the posterior frontomedian cortex (pFMC), the rostral cingulate zone (RCZ), is crucially involved in the processing of errors and conflict. However, error-related research has focused primarily on a specific error type, namely, response errors. The aim of the present study was to investigate whether errors on the task level rely on the same neural and functional mechanisms. Here we report a dissociation of both error types in the pFMC: whereas response errors activate the RCZ, task errors activate the dorsal frontomedian cortex. Although this last region shows an overlap in activation for task and response errors on the group level, a closer inspection of the single-subject data is more in accordance with a functional anatomical dissociation. When investigating brain areas related to conflict on the task and response levels, a clear dissociation was perceived between areas associated with response conflict and with task conflict. Overall, our data support a dissociation between response and task levels of processing in the pFMC. In addition, we provide additional evidence for a dissociation between conflict and errors both at the response level and at the task level.

  19. Analysis of neutron reflectivity data: maximum entropy, Bayesian spectral analysis and speckle holography

    International Nuclear Information System (INIS)

    Sivia, D.S.; Hamilton, W.A.; Smith, G.S.

    1991-01-01

    The analysis of neutron reflectivity data to obtain nuclear scattering length density profiles is akin to the notorious phaseless Fourier problem, well known in many fields such as crystallography. Current methods of analysis culminate in the refinement of a few parameters of a functional model and are often preceded by a long and laborious process of trial and error. We start by discussing the use of maximum entropy for obtaining 'free-form' solutions of the density profile, as an alternative to the trial and error phase when a functional model is not available. Next we consider a Bayesian spectral analysis approach, which is appropriate for optimising the parameters of a simple (but adequate) type of model when the number of parameters is not known. Finally, we suggest a novel experimental procedure, the analogue of astronomical speckle holography, designed to alleviate the ambiguity problems inherent in traditional reflectivity measurements. (orig.)

  20. Effect of telescopic distal extension removable partial dentures on oral health related quality of life and maximum bite force: A preliminary cross over study.

    Science.gov (United States)

    Elsyad, Moustafa Abdou; Mostafa, Aisha Zakaria

    2018-01-01

    This cross-over study aimed to evaluate the effect of telescopic distal extension removable partial dentures on oral health related quality of life and maximum bite force. Materials and methods: Twenty patients with complete maxillary edentulism and partially edentulous mandibles with only the anterior teeth remaining were selected for this cross-over study. All patients received complete maxillary dentures and a mandibular partial removable dental prosthesis (PRDP, control). After 3 months of adaptation, the PRDP was replaced with conventional telescopic partial dentures (TPD) or telescopic partial dentures with cantilevered extensions (TCPD) in a quasi-random method. Oral health related quality of life (OHRQoL) was measured using the OHIP-14 questionnaire, and maximum bite force (MBF) was measured using a bite force transducer. Measurements were performed 3 months after using each of the following prostheses: PRDP, TPD, and TCPD. TCPD showed the lowest OHIP-14 scores (i.e., the highest patient satisfaction with OHRQoL), followed by TPD; PRDP showed the highest OHIP-14 scores (i.e., the lowest patient satisfaction with OHRQoL). TCPD showed the highest MBF (70.7 ± 3.71), followed by TPD (57.4 ± 3.43); the lowest MBF (40.2 ± 2.20) was noted with PRDP. Within the limitations of this study, mandibular telescopic distal extension removable partial dentures with cantilevered extensions were associated with improved oral health related quality of life and maximum bite force compared to telescopic or conventional PRDP. Telescopic distal extension removable prostheses are an esthetic restoration in partially edentulous patients with a free-end saddle. This article describes the addition of cantilevered extensions to this prosthesis. The results showed that telescopic distal extension removable prostheses with cantilevered extensions were associated with improved oral health related quality of life and maximum bite force compared to telescopic or conventional RPDs

  1. A Human Error Analysis Procedure for Identifying Potential Error Modes and Influencing Factors for Test and Maintenance Activities

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2010-01-01

    Periodic or non-periodic test and maintenance (T and M) activities in large, complex systems such as nuclear power plants (NPPs) are essential for sustaining stable and safe operation of the systems. On the other hand, it has also been pointed out that erroneous human actions occurring during T and M activities may incur unplanned reactor trips (RTs) or power derating, make safety-related systems unavailable, or degrade the reliability of components. The contribution of human errors during normal and abnormal activities of NPPs to unplanned RTs is known to be about 20% of the total events. This paper introduces a procedure for predictively analyzing human error potential when maintenance personnel perform T and M tasks based on a work procedure or their work plan. The procedure helps a plant maintenance team prepare for plausible human errors, and it focuses on the recurrent error forms (or modes) in execution-based errors such as wrong object, omission, too little, and wrong action

  2. A model for the statistical description of analytical errors occurring in clinical chemical laboratories with time.

    Science.gov (United States)

    Hyvärinen, A

    1985-01-01

    The main purpose of the present study was to describe the statistical behaviour of daily analytical errors in the dimensions of place and time, providing a statistical basis for realistic estimates of the analytical error, and hence allowing the importance of the error and the relative contributions of its different sources to be re-evaluated. The observation material consists of creatinine and glucose results for control sera measured in daily routine quality control in five laboratories for a period of one year. The observation data were processed and computed by means of an automated data processing system. Graphic representations of time series of daily observations, as well as their means and dispersion limits when grouped over various time intervals, were investigated. For partition of the total variation several two-way analyses of variance were done with laboratory and various time classifications as factors. Pooled sets of observations were tested for normality of distribution and for consistency of variances, and the distribution characteristics of error variation in different categories of place and time were compared. Errors were found from the time series to vary typically between days. Due to irregular fluctuations in general and particular seasonal effects in creatinine, stable estimates of means or of dispersions for errors in individual laboratories could not be easily obtained over short periods of time but only from data sets pooled over long intervals (preferably at least one year). Pooled estimates of proportions of intralaboratory variation were relatively low (less than 33%) when the variation was pooled within days. However, when the variation was pooled over longer intervals this proportion increased considerably, even to a maximum of 89-98% (95-98% in each method category) when an outlying laboratory in glucose was omitted, with a concomitant decrease in the interaction component (representing laboratory-dependent variation with time

  3. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).
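
    A toy Monte Carlo in the spirit of such a transverse error study is sketched below; the thin-lens FODO line, focal length and displacement magnitude are illustrative assumptions, not SNS lattice values.

```python
# Toy Monte Carlo of quad-displacement errors in a thin-lens FODO line.
# A quad displaced by dx kicks the beam centroid by -(x - dx)/f.
import numpy as np

rng = np.random.default_rng(1)
f, L = 2.0, 1.0          # focal length (m) and drift length (m), assumed
sigma_dx = 0.1e-3        # rms quad displacement, 0.1 mm (assumed)
n_cells, n_seeds = 20, 1000

offsets = []
for _ in range(n_seeds):
    x, xp, sign = 0.0, 0.0, 1.0          # centroid position, angle, polarity
    for _ in range(n_cells):
        dx = rng.normal(0.0, sigma_dx)   # random quad misalignment
        xp += -sign * (x - dx) / f       # thin-lens kick about shifted axis
        x += L * xp                      # drift to the next quad
        sign = -sign                     # alternate focusing/defocusing
    offsets.append(abs(x))

print(f"mean |centroid offset| at exit: {np.mean(offsets) * 1e3:.3f} mm")
```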

  4. The Error Reporting in the ATLAS TDAQ system

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2014-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  5. The Error Reporting in the ATLAS TDAQ System

    CERN Document Server

    Kolos, S; The ATLAS collaboration; Papaevgeniou, L

    2015-01-01

    The ATLAS Error Reporting feature, which is used in the TDAQ environment, provides a service that allows experts and shift crew to track and address errors relating to the data taking components and applications. This service, called the Error Reporting Service (ERS), gives software applications the opportunity to collect and send comprehensive data about errors, happening at run-time, to a place where it can be intercepted in real-time by any other system component. Other ATLAS online control and monitoring tools use the Error Reporting service as one of their main inputs to address system problems in a timely manner and to improve the quality of acquired data. The actual destination of the error messages depends solely on the run-time environment, in which the online applications are operating. When applications send information to ERS, depending on the actual configuration the information may end up in a local file, in a database, in distributed middle-ware, which can transport it to an expert system or dis...

  6. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
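
    The probability ellipse mentioned above follows from the eigen-decomposition of the 2x2 position covariance, scaled by a chi-square quantile; a minimal sketch, with an assumed covariance matrix, is:

```python
# 95% probability (error) ellipse from a 2-D position covariance matrix.
import numpy as np
from scipy.stats import chi2

cov = np.array([[4.0, 1.5],     # illustrative covariance (units^2)
                [1.5, 1.0]])
evals, evecs = np.linalg.eigh(cov)           # principal variances and axes
k = np.sqrt(chi2.ppf(0.95, df=2))            # 2-D 95% scale factor (~2.45)
semi_axes = k * np.sqrt(evals)               # semi-minor, semi-major axes
angle = np.degrees(np.arctan2(evecs[1, 1], evecs[0, 1]))  # major-axis tilt

print(f"semi-axes: {semi_axes[1]:.2f}, {semi_axes[0]:.2f}; tilt {angle:.1f} deg")
```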

  7. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    International Nuclear Information System (INIS)

    Kiyko, V V; Kislov, V I; Ofitserov, E N

    2015-01-01

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, when the Hartmann sensor is used as a wavefront sensor the required number of actuators, in the absence of a noise component in the sensor error, turns out to be 1.5 – 2.5 times smaller than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)

  8. Dependence of the compensation error on the error of a sensor and corrector in an adaptive optics phase-conjugating system

    Energy Technology Data Exchange (ETDEWEB)

    Kiyko, V V; Kislov, V I; Ofitserov, E N [A M Prokhorov General Physics Institute, Russian Academy of Sciences, Moscow (Russian Federation)

    2015-08-31

    In the framework of a statistical model of an adaptive optics system (AOS) of phase conjugation, three algorithms based on an integrated mathematical approach are considered, each of them intended for minimisation of one of the following characteristics: the sensor error (in the case of an ideal corrector), the corrector error (in the case of ideal measurements) and the compensation error (with regard to discreteness and measurement noises and to incompleteness of a system of response functions of the corrector actuators). Functional and statistical relationships between the algorithms are studied and a relation is derived to ensure calculation of the mean-square compensation error as a function of the errors of the sensor and corrector with an accuracy better than 10%. Because in adjusting the AOS parameters it is reasonable to proceed from the equality of the sensor and corrector errors, when the Hartmann sensor is used as a wavefront sensor the required number of actuators, in the absence of a noise component in the sensor error, turns out to be 1.5 – 2.5 times smaller than the number of counts, and that difference grows with increasing measurement noise. (adaptive optics)

  9. Maximum-entropy clustering algorithm and its global convergence analysis

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Constructing a batch of differentiable entropy functions to uniformly approximate an objective function by means of the maximum-entropy principle, a new clustering algorithm, called the maximum-entropy clustering algorithm, is proposed based on optimization theory. This algorithm is a soft generalization of the hard C-means algorithm and possesses global convergence. Its relations with other clustering algorithms are discussed.
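
    A compact sketch of a maximum-entropy (soft) clustering update of the kind described follows: memberships are Boltzmann/Gibbs weights over squared distances, and letting the inverse temperature beta grow recovers hard C-means. The data, iteration count and beta value are assumptions.

```python
# Maximum-entropy soft clustering: memberships are Gibbs weights over
# squared distances; beta -> infinity recovers hard C-means.
import numpy as np

def max_entropy_cluster(X, k, beta=5.0, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        w = np.exp(-beta * (d2 - d2.min(axis=1, keepdims=True)))
        w /= w.sum(axis=1, keepdims=True)          # soft memberships
        centers = (w.T @ X) / w.sum(axis=0)[:, None]
    return centers, w

# Three well-separated 2-D blobs as toy data.
X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (50, 2))
               for m in (0.0, 2.0, 4.0)])
centers, _ = max_entropy_cluster(X, k=3)
print(np.round(centers, 2))
```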

  10. Drug dispensing errors in a ward stock system

    DEFF Research Database (Denmark)

    Andersen, Stig Ejdrup

    2010-01-01

    The aim of this study was to determine the frequency of drug dispensing errors in a traditional ward stock system operated by nurses and to investigate the effect of potential contributing factors. This was a descriptive study conducted in a teaching hospital from January 2005 to June 2007. In five ... Multivariable analysis showed that surgical and psychiatric settings were more susceptible to involvement in dispensing errors and that polypharmacy was a risk factor. In this ward stock system, dispensing errors are relatively common, they depend on speciality and are associated with polypharmacy. These results indicate that strategies to reduce dispensing errors should address polypharmacy and focus on high-risk units. This should, however, be substantiated by a future trial.

  11. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)
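
    A minimal stand-in for the 3-input network described (HGI, moisture and O/C ratio as inputs, maximum solid concentration as output) can be sketched with scikit-learn; note that MLPRegressor offers Adam or L-BFGS rather than the paper's Levenberg-Marquardt, and the data below are synthetic, not the 37-coal data set.

```python
# 3-input BP-style network for CWS maximum solid concentration (sketch).
# Inputs: HGI, moisture (%), O/C ratio; output: max solid concentration (%).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 37
X = np.column_stack([rng.uniform(40, 100, n),      # HGI
                     rng.uniform(1, 12, n),        # moisture, %
                     rng.uniform(0.05, 0.25, n)])  # oxygen/carbon ratio
# Synthetic target: concentration rises with HGI, falls with moisture and O/C.
y = 72 + 0.05 * X[:, 0] - 0.8 * X[:, 1] - 30 * X[:, 2] + rng.normal(0, 0.4, n)

net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=5000, random_state=0).fit(X, y)
print(f"mean absolute error: {np.abs(net.predict(X) - y).mean():.2f} %")
```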

  12. Using brain potentials to understand prism adaptation: the error-related negativity and the P300

    Directory of Open Access Journals (Sweden)

    Stephane Joseph Maclean

    2015-06-01

    Prism adaptation (PA) is both a perceptual-motor learning task as well as a promising rehabilitation tool for visuo-spatial neglect (VSN), a spatial attention disorder often experienced after stroke resulting in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual-field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuo-motor responding is shifted to the opposite direction of that initially induced by the prisms. This visuo-motor aftereffect has been used to study visuo-motor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA’s use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an early error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may sub-serve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN.

  13. Using brain potentials to understand prism adaptation: the error-related negativity and the P300.

    Science.gov (United States)

    MacLean, Stephane J; Hassall, Cameron D; Ishigami, Yoko; Krigolson, Olav E; Eskes, Gail A

    2015-01-01

    Prism adaptation (PA) is both a perceptual-motor learning task as well as a promising rehabilitation tool for visuo-spatial neglect (VSN)-a spatial attention disorder often experienced after stroke resulting in slowed and/or inaccurate motor responses to contralesional targets. During PA, individuals are exposed to prism-induced shifts of the visual-field while performing a visuo-guided reaching task. After adaptation, with goggles removed, visuomotor responding is shifted to the opposite direction of that initially induced by the prisms. This visuomotor aftereffect has been used to study visuomotor learning and adaptation and has been applied clinically to reduce VSN severity by improving motor responding to stimuli in contralesional (usually left-sided) space. In order to optimize PA's use for VSN patients, it is important to elucidate the neural and cognitive processes that alter visuomotor function during PA. In the present study, healthy young adults underwent PA while event-related potentials (ERPs) were recorded at the termination of each reach (screen-touch), then binned according to accuracy (hit vs. miss) and phase of exposure block (early, middle, late). Results show that two ERP components were evoked by screen-touch: an error-related negativity (ERN), and a P300. The ERN was consistently evoked on miss trials during adaptation, while the P300 amplitude was largest during the early phase of adaptation for both hit and miss trials. This study provides evidence of two neural signals sensitive to visual feedback during PA that may sub-serve changes in visuomotor responding. Prior ERP research suggests that the ERN reflects an error processing system in medial-frontal cortex, while the P300 is suggested to reflect a system for context updating and learning. Future research is needed to elucidate the role of these ERP components in improving visuomotor responses among individuals with VSN.

  14. Cognitive emotion regulation enhances aversive prediction error activity while reducing emotional responses.

    Science.gov (United States)

    Mulej Bratec, Satja; Xie, Xiyao; Schmid, Gabriele; Doll, Anselm; Schilbach, Leonhard; Zimmer, Claus; Wohlschläger, Afra; Riedl, Valentin; Sorg, Christian

    2015-12-01

    Cognitive emotion regulation is a powerful way of modulating emotional responses. However, despite the vital role of emotions in learning, it is unknown whether the effect of cognitive emotion regulation also extends to the modulation of learning. Computational models indicate prediction error activity, typically observed in the striatum and ventral tegmental area, as a critical neural mechanism involved in associative learning. We used model-based fMRI during aversive conditioning with and without cognitive emotion regulation to test the hypothesis that emotion regulation would affect prediction error-related neural activity in the striatum and ventral tegmental area, reflecting an emotion regulation-related modulation of learning. Our results show that cognitive emotion regulation reduced emotion-related brain activity, but increased prediction error-related activity in a network involving ventral tegmental area, hippocampus, insula and ventral striatum. While the reduction of response activity was related to behavioral measures of emotion regulation success, the enhancement of prediction error-related neural activity was related to learning performance. Furthermore, functional connectivity between the ventral tegmental area and ventrolateral prefrontal cortex, an area involved in regulation, was specifically increased during emotion regulation and likewise related to learning performance. Our data, therefore, provide first-time evidence that beyond reducing emotional responses, cognitive emotion regulation affects learning by enhancing prediction error-related activity, potentially via tegmental dopaminergic pathways. Copyright © 2015 Elsevier Inc. All rights reserved.
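
    The model-based analysis rests on a reward/aversive prediction error; a common temporal-difference form (not necessarily the exact model fitted by the authors) is:

```latex
% Temporal-difference prediction error at trial t, with learning rate alpha
\delta_t = r_t + \gamma V(s_{t+1}) - V(s_t), \qquad
V(s_t) \leftarrow V(s_t) + \alpha\,\delta_t
```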

  15. SU-G-BRB-03: Assessing the Sensitivity and False Positive Rate of the Integrated Quality Monitor (IQM) Large Area Ion Chamber to MLC Positioning Errors

    Energy Technology Data Exchange (ETDEWEB)

    Boehnke, E McKenzie; DeMarco, J; Steers, J; Fraass, B [Cedars-Sinai Medical Center, Los Angeles, CA (United States)

    2016-06-15

    Purpose: To examine both the IQM’s sensitivity and false positive rate to varying MLC errors. By balancing these two characteristics, an optimal tolerance value can be derived. Methods: An un-modified SBRT Liver IMRT plan containing 7 fields was randomly selected as a representative clinical case. The active MLC positions for all fields were perturbed randomly from a square distribution of varying width (±1mm to ±5mm). These unmodified and modified plans were measured multiple times each by the IQM (a large area ion chamber mounted to a TrueBeam linac head). Measurements were analyzed relative to the initial, unmodified measurement. IQM readings are analyzed as a function of control points. In order to examine sensitivity to errors along a field’s delivery, each measured field was divided into 5 groups of control points, and the maximum error in each group was recorded. Since the plans have known errors, we compared how well the IQM is able to differentiate between unmodified and error plans. ROC curves and logistic regression were used to analyze this, independent of thresholds. Results: A likelihood-ratio Chi-square test showed that the IQM could significantly predict whether a plan had MLC errors, with the exception of the beginning and ending control points. Upon further examination, we determined there was ramp-up occurring at the beginning of delivery. Once the linac AFC was tuned, the subsequent measurements (relative to a new baseline) showed significant (p <0.005) abilities to predict MLC errors. Using the area under the curve, we show the IQM’s ability to detect errors increases with increasing MLC error (Spearman’s Rho=0.8056, p<0.0001). The optimal IQM count thresholds from the ROC curves are ±3%, ±2%, and ±7% for the beginning, middle 3, and end segments, respectively. Conclusion: The IQM has proven to be able to detect not only MLC errors, but also differences in beam tuning (ramp-up). Partially supported by the Susan Scott Foundation.
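
    The ROC/logistic pattern of the analysis can be sketched on simulated IQM count deviations; the class separation and noise level below are invented for illustration.

```python
# ROC analysis of an error detector: can relative IQM deviations separate
# unmodified plans from plans with introduced MLC errors?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
has_error = rng.integers(0, 2, n)          # 0 = baseline, 1 = MLC error plan
# Simulated maximum % deviation per plan: errors shift the reading upward.
dev = rng.normal(0, 1.0, n) + 2.5 * has_error

clf = LogisticRegression().fit(dev.reshape(-1, 1), has_error)
scores = clf.predict_proba(dev.reshape(-1, 1))[:, 1]
print(f"AUC = {roc_auc_score(has_error, scores):.3f}")
```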

  16. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  17. The error-related negativity (ERN) is an electrophysiological marker of motor impulsiveness on the Barratt Impulsiveness Scale (BIS-11) during adolescence

    Directory of Open Access Journals (Sweden)

    Jasmine B. Taylor

    2018-04-01

    Objectives: Previous studies have postulated that the error-related negativity (ERN) may reflect individual differences in impulsivity; however, none have used a longitudinal framework or evaluated impulsivity as a multidimensional construct. The current study evaluated whether ERN amplitude, measured in childhood and adolescence, is predictive of impulsiveness during adolescence. Methods: Seventy-five children participated in this study, initially at ages 7–9 years and again at 12–18 years. The interval between testing sessions ranged from 5 to 9 years. The ERN was extracted in response to behavioural errors produced during a modified visual flanker task at both time points (i.e., childhood and adolescence). Participants also completed the Barratt Impulsiveness Scale (a measure that considers impulsiveness to comprise three core sub-traits) during adolescence. Results: At adolescence, the ERN amplitude was significantly larger than during childhood. Additionally, ERN amplitude during adolescence significantly predicted motor impulsiveness at that time point, after controlling for age, gender, and the number of trials included in the ERN. In contrast, ERN amplitude during childhood did not uniquely predict impulsiveness during adolescence. Conclusions: These findings provide preliminary evidence that ERN amplitude is an electrophysiological marker of self-reported motor impulsiveness (i.e., acting without thinking) during adolescence. Keywords: Error-related negativity, ERN, Impulsivity, BIS, Development, Adolescence

  18. Maximum likelihood unit rooting test in the presence GARCH: A new test with increased power

    OpenAIRE

    Cook , Steve

    2008-01-01

    The literature on testing the unit root hypothesis in the presence of GARCH errors is extended. A new test based upon the combination of local-to-unity detrending and joint maximum likelihood estimation of the autoregressive parameter and GARCH process is presented. The finite sample distribution of the test is derived under alternative decisions regarding the deterministic terms employed. Using Monte Carlo simulation, the newly proposed ML t-test is shown to exhibit incre...
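
    The joint maximum likelihood ingredient, estimating the autoregressive parameter together with the GARCH process, can be sketched in a self-contained way; the data-generating values and starting points below are illustrative, and the local-to-unity detrending step is omitted.

```python
# Joint ML estimation of y_t = phi*y_{t-1} + e_t with GARCH(1,1) errors:
# sigma2_t = omega + alpha*e_{t-1}^2 + beta*sigma2_{t-1}.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, phi0 = 1000, 0.95
e = np.zeros(T); s2 = np.full(T, 0.2); y = np.zeros(T)
for t in range(1, T):                         # simulate an AR(1)-GARCH(1,1)
    s2[t] = 0.05 + 0.1 * e[t - 1] ** 2 + 0.8 * s2[t - 1]
    e[t] = np.sqrt(s2[t]) * rng.standard_normal()
    y[t] = phi0 * y[t - 1] + e[t]

def nll(p):
    """Gaussian negative log-likelihood of (phi, omega, alpha, beta)."""
    phi, omega, alpha, beta = p
    eps = y[1:] - phi * y[:-1]
    s2 = np.empty_like(eps); s2[0] = eps.var()
    for t in range(1, len(eps)):
        s2[t] = omega + alpha * eps[t - 1] ** 2 + beta * s2[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi) + np.log(s2) + eps ** 2 / s2)

res = minimize(nll, x0=[0.5, 0.1, 0.1, 0.5], method="L-BFGS-B",
               bounds=[(-1, 1), (1e-6, None), (0, 1), (0, 1)])
print("phi, omega, alpha, beta =", np.round(res.x, 3))
```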

  19. Effective connectivity of visual word recognition and homophone orthographic errors

    Science.gov (United States)

    Guàrdia-Olmos, Joan; Peró-Cebollero, Maribel; Zarabozo-Hurtado, Daniel; González-Garrido, Andrés A.; Gudayol-Ferré, Esteve

    2015-01-01

    The study of orthographic errors in a transparent language like Spanish is an important topic in relation to writing acquisition. The development of neuroimaging techniques, particularly functional magnetic resonance imaging (fMRI), has enabled the study of such relationships between brain areas. The main objective of the present study was to explore the patterns of effective connectivity by processing pseudohomophone orthographic errors among subjects with high and low spelling skills. Two groups of 12 Mexican subjects each, matched by age, were formed based on their results in a series of ad hoc spelling-related out-scanner tests: a high spelling skills (HSS) group and a low spelling skills (LSS) group. During the fMRI session, two experimental tasks were applied (spelling recognition task and visuoperceptual recognition task). Regions of Interest and their signal values were obtained for both tasks. Based on these values, structural equation models (SEMs) were obtained for each group of spelling competence (HSS and LSS) and task through maximum likelihood estimation, and the model with the best fit was chosen in each case. Likewise, dynamic causal models (DCMs) were estimated for all the conditions across tasks and groups. The HSS group’s SEM results suggest that, in the spelling recognition task, the right middle temporal gyrus, and, to a lesser extent, the left parahippocampal gyrus receive most of the significant effects, whereas the DCM results in the visuoperceptual recognition task show less complex effects, though still congruent with the previous results, with several areas playing an important role. In general, these results are consistent with the major findings of partial studies on linguistic activities, but they are the first analyses of statistical effective brain connectivity in transparent languages. PMID:26042070

  20. Simulation model of ANN based maximum power point tracking controller for solar PV system

    Energy Technology Data Exchange (ETDEWEB)

    Rai, Anil K.; Singh, Bhupal [Department of Electrical and Electronics Engineering, Ajay Kumar Garg Engineering College, Ghaziabad 201009 (India); Kaushika, N.D.; Agarwal, Niti [School of Research and Development, Bharati Vidyapeeth College of Engineering, A-4 Paschim Vihar, New Delhi 110063 (India)

    2011-02-15

    In this paper the simulation model of an artificial neural network (ANN) based maximum power point tracking controller has been developed. The controller consists of an ANN tracker and the optimal control unit. The ANN tracker estimates the voltages and currents corresponding to a maximum power delivered by solar PV (photovoltaic) array for variable cell temperature and solar radiation. The cell temperature is considered as a function of ambient air temperature, wind speed and solar radiation. The tracker is trained employing a set of 124 patterns using the back propagation algorithm. The mean square error of the tracker output and target values is set to be of the order of 10^-5 and successful convergence of the learning process takes 1281 epochs. The accuracy of the ANN tracker has been validated by employing different test data sets. The control unit uses the estimates of the ANN tracker to adjust the duty cycle of the chopper to the optimum value needed for maximum power transfer to the specified load. (author)
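
    A minimal stand-in for the described tracker is an MLP mapping (solar radiation, ambient temperature, wind speed) to the maximum-power-point voltage and current; the synthetic training relations below are assumptions, not the authors' 124-pattern data set, although the pattern count is kept for flavour.

```python
# ANN MPP tracker sketch: estimate (Vmp, Imp) from irradiance, ambient
# temperature and wind speed; the duty cycle would then follow from Vmp.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n = 124
G = rng.uniform(200, 1000, n)        # solar radiation, W/m^2
Ta = rng.uniform(10, 40, n)          # ambient air temperature, deg C
w = rng.uniform(0, 10, n)            # wind speed, m/s
Tc = Ta + G * (0.03 - 0.001 * w)     # crude cell-temperature proxy (assumed)
Vmp = 17.5 - 0.08 * (Tc - 25)        # toy MPP voltage relation (assumed)
Imp = 3.5e-3 * G                     # toy MPP current relation (assumed)

X = np.column_stack([G, Ta, w])
Y = np.column_stack([Vmp, Imp])
tracker = MLPRegressor(hidden_layer_sizes=(10,), solver="lbfgs",
                       max_iter=5000, random_state=0).fit(X, Y)
print(tracker.predict([[800.0, 25.0, 3.0]]))   # -> approx [Vmp, Imp]
```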

  1. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    We examine factors affecting the auditor’s willingness to report their own or their peers’ self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  2. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  3. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  4. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.

  5. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    Science.gov (United States)

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has been consistently found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of a variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  6. Medication errors in the Middle East countries: a systematic review of the literature.

    Science.gov (United States)

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20 %) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1 % to 90.5 % for prescribing and from 9.4 % to 80 % for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15 % to 34.8 % of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality.

  7. SU-D-201-01: A Multi-Institutional Study Quantifying the Impact of Simulated Linear Accelerator VMAT Errors for Nasopharynx

    International Nuclear Information System (INIS)

    Pogson, E; Hansen, C; Blake, S; Thwaites, D; Arumugam, S; Juresic, J; Ochoa, C; Yakobi, J; Haman, A; Trtovac, A; Holloway, L

    2016-01-01

    Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with one full arc VMAT by two institutions. Treatment uncertainties (gantry angle and collimator in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MatLab. The mean and maximum dose were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient average results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose; no tolerances were exceeded for one institution, and the second institution's VMAT plans exceeded tolerances only for gantry angle errors of ±5°, affecting the parotids on different sides by 14–18%. PTV1, brainstem and spinal cord tolerances were exceeded for collimator angles of ±5 degrees and for MLC shifts and MLC field-size errors of ±1 and beyond at the first institution. At the second institution, sensitivity to errors was marginally higher for some errors, including the collimator error producing doses exceeding tolerances above ±2 degrees, and marginally lower for others, with tolerances exceeded above MLC shifts of ±2. The largest differences occur with MLC field sizes, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The robustness of VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field-size errors exceed tolerances even at relatively low magnitudes and also produce the largest errors. This was consistent across both departments. The authors

  8. SU-D-201-01: A Multi-Institutional Study Quantifying the Impact of Simulated Linear Accelerator VMAT Errors for Nasopharynx

    Energy Technology Data Exchange (ETDEWEB)

    Pogson, E [Institute of Medical Physics, The University of Sydney, Sydney, NSW (Australia); Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Ingham Institute for Applied Medical Research, Sydney, NSW (Australia); Hansen, C [Laboratory of Radiation Physics, Odense University Hospital, Odense (Denmark); Institute of Clinical Research, University of Southern Denmark, Odense (Denmark); Blake, S; Thwaites, D [Institute of Medical Physics, The University of Sydney, Sydney, NSW (Australia); Arumugam, S; Juresic, J; Ochoa, C; Yakobi, J; Haman, A; Trtovac, A [Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Holloway, L [Institute of Medical Physics, The University of Sydney, Sydney, NSW (Australia); Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW (Australia); Ingham Institute for Applied Medical Research, Sydney, NSW (Australia); South Western Sydney Clinical School, University of New South Wales, Sydney, NSW (Australia); University of Wollongong, Wollongong, NSW (Australia)

    2016-06-15

    Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with one full arc VMAT by two institutions. Treatment uncertainties (gantry angle and collimator in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MatLab. The mean and maximum dose were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient average results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose; no tolerances were exceeded for one institution, and the second institution's VMAT plans exceeded tolerances only for gantry angle errors of ±5°, affecting the parotids on different sides by 14–18%. PTV1, brainstem and spinal cord tolerances were exceeded for collimator angles of ±5 degrees and for MLC shifts and MLC field-size errors of ±1 and beyond at the first institution. At the second institution, sensitivity to errors was marginally higher for some errors, including the collimator error producing doses exceeding tolerances above ±2 degrees, and marginally lower for others, with tolerances exceeded above MLC shifts of ±2. The largest differences occur with MLC field sizes, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The robustness of VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field-size errors exceed tolerances even at relatively low magnitudes and also produce the largest errors. This was consistent across both departments. The authors

  9. 20 CFR 10.806 - How are the maximum fees defined?

    Science.gov (United States)

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  10. correlation between maximum dry density and cohesion of ...

    African Journals Online (AJOL)

    HOD

    investigation on sandy soils to determine the correlation between relative density and compaction test parameters. Using twenty soil samples, they were able to develop correlations between relative density, coefficient of uniformity and maximum dry density. Khafaji [5], using the standard Proctor compaction method, carried out an ...

  11. Subject-verb agreement: Error production by Tourism undergraduate students

    Directory of Open Access Journals (Sweden)

    Ana Paula Correia

    2014-11-01

    The aim of this paper, which is part of more extensive research on verb tense errors, is to investigate the subject-verb agreement errors in the simple present in the texts of a group of Tourism undergraduate students. Based on the concept of interlanguage and following the error analysis model, this descriptive non-experimental study applies qualitative and quantitative procedures. Three types of instruments were used to collect data: a sociolinguistic questionnaire (to define the learners’ profile), the Dialang test (to establish their proficiency level in English), and our own learner corpus (140 texts). Errors were identified and classified by an expert panel in accordance with a verb error taxonomy developed for this study based on the taxonomy established by the Cambridge Learner Corpus. The Markin software was used to code errors in the corpus and the Wordsmith Tools software to analyze the data. Subject-verb agreement errors and their relation to the learners’ proficiency levels are described.

  12. Maximum permissible dose

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    This chapter presents a historic overview of the establishment of radiation guidelines by various national and international agencies. The use of maximum permissible dose and maximum permissible body burden limits to derive working standards is discussed

  13. Using incident reports to inform the prevention of medication administration errors.

    Science.gov (United States)

    Härkänen, Marja; Saano, Susanna; Vehviläinen-Julkunen, Katri

    2017-11-01

    To describe ways of preventing medication administration errors based on reporters' views expressed in medication administration incident reports. Medication administration errors are very common, and nurses play important roles in committing and in preventing such errors. Thus far, incident reporters' perceptions of how to prevent medication administration errors have rarely been analysed. This is a qualitative, descriptive study using an inductive content analysis of the incident reports related to medication administration errors (n = 1012). These free-text descriptions include reporters' views on preventing the reoccurrence of medication administration errors. The data were collected from two hospitals in Finland and pertain to incidents that were reported between 1 January 2013 and 31 December 2014. Reporters' views on preventing medication administration errors were divided into three main categories related to individuals (health professionals), teams and organisations. The following categories related to individuals in preventing medication administration errors were identified: (1) accuracy and preciseness; (2) verification; and (3) following the guidelines, responsibility and attitude towards work. The team categories were as follows: (1) distribution of work; (2) flow of information and cooperation; and (3) documenting and marking the drug information. The categories related to organisation were as follows: (1) work environment; (2) resources; (3) training; (4) guidelines; and (5) development of the work. Health professionals should administer medication with a high moral awareness and an attempt to concentrate on the task. Nonetheless, the system should support health professionals by providing a reasonable work environment and encouraging collaboration among the providers to facilitate the safe administration of medication. Although there are numerous approaches to supporting medication safety, approaches that support the ability of individual health

  14. 25(OH)D3 Levels Relative to Muscle Strength and Maximum Oxygen Uptake in Athletes

    Directory of Open Access Journals (Sweden)

    Książek Anna

    2016-04-01

    Vitamin D is mainly known for its effects on bone and calcium metabolism. The discovery of vitamin D receptors in many extraskeletal cells suggests that it may also play a significant role in other organs and systems. The aim of our study was to assess the relationship between 25(OH)D3 levels, lower limb isokinetic strength and maximum oxygen uptake in well-trained professional football players. We enrolled 43 Polish premier league soccer players. The mean age was 22.7 ± 5.3 years. Our study showed decreased serum 25(OH)D3 levels in 74.4% of the professional players. The results also demonstrated a lack of statistically significant correlation between 25(OH)D3 levels and lower limb muscle strength, with the exception of the peak torque of the left knee extensors at an angular velocity of 150°/s (r = 0.41). No significant correlations were found between hand grip strength and maximum oxygen uptake. Based on our study we concluded that in well-trained professional soccer players there was no correlation between serum levels of 25(OH)D3 and muscle strength or maximum oxygen uptake.

  15. Competition increases binding errors in visual working memory.

    Science.gov (United States)

    Emrich, Stephen M; Ferber, Susanne

    2012-04-20

    When faced with maintaining multiple objects in visual working memory, item information must be bound to the correct object in order to be correctly recalled. Sometimes, however, binding errors occur, and participants report the feature (e.g., color) of an unprobed, non-target item. In the present study, we examine whether the configuration of sample stimuli affects the proportion of these binding errors. The results demonstrate that participants mistakenly report the identity of the unprobed item (i.e., they make a non-target response) when sample items are presented close together in space, suggesting that binding errors can increase independent of increases in memory load. Moreover, the proportion of these non-target responses is linearly related to the distance between sample items, suggesting that these errors are spatially specific. Finally, presenting sample items sequentially decreases non-target responses, suggesting that reducing competition between sample stimuli reduces the number of binding errors. Importantly, these effects all occurred without increases in the amount of error in the memory representation. These results suggest that competition during encoding can account for some of the binding errors made during VWM recall.

  16. Influence of random setup error on dose distribution

    International Nuclear Information System (INIS)

    Zhai Zhenyu

    2008-01-01

    Objective: To investigate the influence of random setup error on dose distribution in radiotherapy and to determine the margin from ITV to PTV. Methods: A random sampling approach was used to simulate field positions in the target coordinate system. The cumulative effect of random setup error was the sum of the dose distributions of all individual treatment fractions. From 100 simulated cumulative effects, the shift of the 90% dose point position was obtained. Margins from ITV to PTV caused by random setup error were chosen at the 95% probability level. Spearman's correlation was used to analyze the influence of each factor. Results: The average shift of the 90% dose point position was 0.62, 1.84, 3.13, 4.78, 6.34 and 8.03 mm for random setup errors of 1, 2, 3, 4, 5 and 6 mm, respectively. Univariate analysis showed that the size of the margin was associated only with the size of the random setup error. Conclusions: The margin from ITV to PTV is 1.2 times the random setup error for head-and-neck cancer and 1.5 times for thoracic and abdominal cancer. Field size, energy and target depth, unlike random setup error, have no relation to the size of the margin. (authors)
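
    The sampling approach can be re-created in miniature: blur a one-dimensional dose edge with random per-fraction shifts, locate the 90% dose point of the cumulative distribution, and take the 95th percentile shift over repeated simulated courses. The profile shape and fraction number below are assumptions.

```python
# Monte Carlo of random setup error: shift of the 90% dose point after
# summing shifted fractions, with the margin read off at 95% probability.
import numpy as np
from scipy.special import erf

x = np.linspace(-30, 30, 2001)                 # position, mm
dose = 0.5 * (1 - erf(x / (4 * np.sqrt(2))))   # field edge, ~4 mm penumbra
x90 = np.interp(0.9, dose[::-1], x[::-1])      # static 90% dose point

def dose90_shift(sigma, fractions=30, seed=None):
    """90% dose point of the cumulative dose after random shifts (mm)."""
    rng = np.random.default_rng(seed)
    shifts = rng.normal(0, sigma, fractions)
    cum = np.mean([np.interp(x, x + s, dose) for s in shifts], axis=0)
    return np.interp(0.9 * cum[0], cum[::-1], x[::-1])

for sigma in (1, 2, 3, 4, 5, 6):
    d = [abs(dose90_shift(sigma, seed=i) - x90) for i in range(100)]
    print(f"sigma = {sigma} mm -> 95th percentile shift "
          f"{np.percentile(d, 95):.2f} mm")
```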

  17. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the X-C linkage model for eccentric shaft grinding is studied. By the inversion method, the contour curve of the wheel envelope is deduced, and its distance from the center of the eccentric circle is shown to be constant. Simulation software for eccentric shaft grinding was developed and the correctness of the model proved. The influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis provides the basis for contour error compensation.
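
    For a plain eccentric (circular) journal, one common form of the X-C linkage follows from a tangency condition: the wheel of radius R must stay tangent to the journal circle of radius r whose centre rotates at eccentricity e. The sketch below is a geometric reconstruction under these assumptions, not necessarily the paper's exact model.

```python
# X-C linkage for grinding an eccentric circular journal: keep the wheel
# tangent to a circle of radius r whose centre sits at eccentricity e.
import numpy as np

def wheel_position(c, e=10.0, r=25.0, R=200.0):
    """Distance X (mm) from workpiece axis to wheel axis at spindle angle c.

    Tangency condition: (X - e*cos c)^2 + (e*sin c)^2 = (R + r)^2.
    """
    return e * np.cos(c) + np.sqrt((R + r) ** 2 - (e * np.sin(c)) ** 2)

c = np.linspace(0, 2 * np.pi, 8, endpoint=False)
for ci, xi in zip(c, wheel_position(c)):
    print(f"C = {np.degrees(ci):6.1f} deg  ->  X = {xi:8.3f} mm")
```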

  18. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  19. PERIOD–COLOR AND AMPLITUDE–COLOR RELATIONS AT MAXIMUM AND MINIMUM LIGHT FOR RR LYRAE STARS IN THE SDSS STRIPE 82 REGION

    Energy Technology Data Exchange (ETDEWEB)

    Ngeow, Chow-Choong [Graduate Institute of Astronomy, National Central University, Jhongli 32001, Taiwan (China); Kanbur, Shashi M.; Schrecengost, Zachariah [Department of Physics, SUNY Oswego, Oswego, NY 13126 (United States); Bhardwaj, Anupam; Singh, Harinder P. [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India)

    2017-01-10

    Investigation of period–color (PC) and amplitude–color (AC) relations at maximum and minimum light can be used to probe the interaction of the hydrogen ionization front (HIF) with the photosphere and the radiation hydrodynamics of the outer envelopes of Cepheids and RR Lyraes. For example, theoretical calculations indicated that such interactions would occur at minimum light for RR Lyrae and result in a flatter PC relation. In the past, the PC and AC relations have been investigated by using either the (V − R)_MACHO or (V − I) colors. In this work, we extend previous work to other bands by analyzing the RR Lyraes in the Sloan Digital Sky Survey Stripe 82 Region. Multi-epoch data are available for RR Lyraes located within the footprint of the Stripe 82 Region in five (ugriz) bands. We present the PC and AC relations at maximum and minimum light in four colors: (u − g)_0, (g − r)_0, (r − i)_0, and (i − z)_0, after they are corrected for extinction. We found that the PC and AC relations for this sample of RR Lyraes show a complex nature in the form of flat, linear or quadratic relations. Furthermore, the PC relations at minimum light for fundamental mode RR Lyrae stars are separated according to the Oosterhoff type, especially in the (g − r)_0 and (r − i)_0 colors. If only considering the results from linear regressions, our results are quantitatively consistent with the theory of HIF-photosphere interaction for both fundamental and first overtone RR Lyraes.

  20. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  1. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  2. ERESYE - an expert system for the evaluation of uncertainties related to systematic experimental errors

    International Nuclear Information System (INIS)

    Martinelli, T.; Panini, G.C.; Amoroso, A.

    1989-11-01

    Information about systematic errors is not given in EXFOR, the database of nuclear experimental measurements: their assessment is left to the ability of the evaluator. A tool is needed which performs this task in a fully automatic way or, at least, gives valuable aid. The expert system ERESYE has been implemented to investigate the feasibility of an automatic evaluation of the systematic errors in the experiments. The features of the project which led to the implementation of the system are presented. (author)

  3. The power and robustness of maximum LOD score statistics.

    Science.gov (United States)

    Yoo, Y J; Mendell, N R

    2008-07-01

    The maximum LOD score statistic is extremely powerful for gene mapping when calculated using the correct genetic parameter value. When the mode of genetic transmission is unknown, the maximum of the LOD scores obtained using several genetic parameter values is reported. This latter statistic requires a higher critical value than the maximum LOD score statistic calculated from a single genetic parameter value. In this paper, we compare the power of maximum LOD scores based on three fixed sets of genetic parameter values with the power of the LOD score obtained after maximizing over the entire range of genetic parameter values. We simulate family data under nine generating models. For generating models with non-zero phenocopy rates, LOD scores maximized over the entire range of genetic parameters yielded greater power than maximum LOD scores for fixed sets of parameter values with zero phenocopy rates. No maximum LOD score was consistently more powerful than the others for generating models with a zero phenocopy rate. The power loss of the LOD score maximized over the entire range of genetic parameters, relative to the maximum LOD score calculated using the correct genetic parameter value, appeared to be robust to the generating models.
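
    For a phase-known, fully informative setting, the maximum LOD statistic reduces to maximizing a binomial likelihood ratio over the recombination fraction theta; a minimal sketch, with assumed recombinant counts, is:

```python
# Maximum LOD score over recombination fraction theta for phase-known,
# fully informative meioses (k recombinants observed out of n).
import numpy as np

def max_lod(k, n):
    theta = np.linspace(0.001, 0.499, 499)        # grid over linkage range
    loglik = k * np.log10(theta) + (n - k) * np.log10(1 - theta)
    null = n * np.log10(0.5)                      # theta = 0.5 (no linkage)
    lod = loglik - null
    i = np.argmax(lod)
    return theta[i], lod[i]

theta_hat, lod = max_lod(k=3, n=20)               # assumed counts
print(f"theta_hat = {theta_hat:.3f}, max LOD = {lod:.2f}")
```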

  4. A modified backpropagation algorithm for training neural networks on data with error bars

    International Nuclear Information System (INIS)

    Gernoth, K.A.; Clark, J.W.

    1994-08-01

    A method is proposed for training multilayer feedforward neural networks on data contaminated with noise. Specifically, we consider the case that the artificial neural system is required to learn a physical mapping when the available values of the target variable are subject to experimental uncertainties, but are characterized by error bars. The proposed method, based on a maximum likelihood criterion for parameter estimation, involves simple modifications of the on-line backpropagation learning algorithm. These include incorporation of the error-bar assignments in a pattern-specific learning rate, together with epochal updating of a new measure of model accuracy that replaces the usual mean-square error. The extended backpropagation algorithm is successfully tested on two problems relevant to the modelling of atomic-mass systematics by neural networks. Provided the underlying mapping is reasonably smooth, neural nets trained with the new procedure are able to learn the true function to a good approximation even in the presence of high levels of Gaussian noise. (author). 26 refs, 2 figs, 5 tabs
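
The inverse-variance weighting idea can be sketched in a few lines of NumPy. This is an illustrative reconstruction under the stated assumptions (error bars entering as a pattern-specific learning rate in on-line backpropagation), not the authors' exact algorithm; the network size, data, and learning rate are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: each target carries an error bar sigma_i.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
sigma = rng.uniform(0.1, 0.5, size=200)                  # error-bar assignments
y = np.sin(np.pi * X[:, 0]) + rng.normal(0.0, sigma)

# One-hidden-layer net trained on-line on the negative Gaussian
# log-likelihood: squared error weighted by 1/sigma_i^2, so noisy
# patterns get a smaller effective (pattern-specific) learning rate.
W1 = rng.normal(0.0, 0.5, (1, 10)); b1 = np.zeros(10)
W2 = rng.normal(0.0, 0.5, (10, 1)); b2 = np.zeros(1)
eta = 5e-4

for epoch in range(300):
    for i in rng.permutation(len(X)):
        h = np.tanh(X[i] @ W1 + b1)                      # forward pass
        pred = (h @ W2 + b2)[0]
        g = (pred - y[i]) / sigma[i] ** 2                # weighted error signal
        gh = g * W2[:, 0] * (1.0 - h ** 2)               # backprop through tanh
        W2 -= eta * g * h[:, None]; b2 -= eta * g
        W1 -= eta * np.outer(X[i], gh); b1 -= eta * gh
```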

  5. Human Errors - A Taxonomy for Describing Human Malfunction in Industrial Installations

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1982-01-01

    This paper describes the definition and the characteristics of human errors. Different types of human behavior are classified, and their relation to different error mechanisms is analyzed. The effect of conditioning factors related to affective, motivating aspects of the work situation as well as physiological factors is also taken into consideration. The taxonomy for event analysis, including human malfunction, is presented. Possibilities for the prediction of human error are discussed. The need for careful studies in actual work situations is expressed. Such studies could provide a better...

  6. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  7. Straight line fitting and predictions: On a marginal likelihood approach to linear regression and errors-in-variables models

    Science.gov (United States)

    Christiansen, Bo

    2015-04-01

    Linear regression methods are without doubt the most used approaches to describe and predict data in the physical sciences. They are often good first order approximations and they are in general easier to apply and interpret than more advanced methods. However, even the properties of univariate regression can lead to debate over the appropriateness of various models as witnessed by the recent discussion about climate reconstruction methods. Before linear regression is applied, important choices have to be made regarding the origins of the noise terms and regarding which of the two variables under consideration should be treated as the independent variable. These decisions are often not easy to make but they may have a considerable impact on the results. We seek to give a unified probabilistic - Bayesian with flat priors - treatment of univariate linear regression and prediction by taking, as starting point, the general errors-in-variables model (Christiansen, J. Clim., 27, 2014-2031, 2014). Other versions of linear regression can be obtained as limits of this model. We derive the likelihood of the model parameters and predictands of the general errors-in-variables model by marginalizing over the nuisance parameters. The resulting likelihood is relatively simple and easy to analyze and calculate. The well known unidentifiability of the errors-in-variables model is manifested as the absence of a well-defined maximum in the likelihood. However, this does not mean that probabilistic inference cannot be made; the marginal likelihoods of model parameters and the predictands have, in general, well-defined maxima. We also include a probabilistic version of classical calibration and show how it is related to the errors-in-variables model. The results are illustrated by an example from the coupling between the lower stratosphere and the troposphere in the Northern Hemisphere winter.
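
A minimal flat-prior version of the errors-in-variables model for a single predictor can be written out explicitly; the notation below is mine and the cited model is more general.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Observed pairs $(x_i, y_i)$ are modelled through latent true values $t_i$:
\begin{align}
  y_i &= \alpha + \beta t_i + \varepsilon_i, &
  \varepsilon_i &\sim \mathcal{N}(0, \sigma_y^2), \\
  x_i &= t_i + \delta_i, &
  \delta_i &\sim \mathcal{N}(0, \sigma_x^2).
\end{align}
With a flat prior on each nuisance parameter $t_i$, marginalization gives
\begin{equation}
  p(y_i \mid x_i, \alpha, \beta, \sigma_x, \sigma_y)
  \propto \frac{1}{\sqrt{\sigma_y^2 + \beta^2 \sigma_x^2}}
  \exp\!\left[ -\frac{(y_i - \alpha - \beta x_i)^2}
                     {2 (\sigma_y^2 + \beta^2 \sigma_x^2)} \right],
\end{equation}
so $\sigma_x$ and $\sigma_y$ enter only through the combination
$\sigma_y^2 + \beta^2 \sigma_x^2$ and cannot be identified separately,
which is one face of the unidentifiability noted above.
\end{document}
```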

  8. Harsh parenting and fearfulness in toddlerhood interact to predict amplitudes of preschool error-related negativity.

    Science.gov (United States)

    Brooker, Rebecca J; Buss, Kristin A

    2014-07-01

    Temperamentally fearful children are at increased risk for the development of anxiety problems relative to less-fearful children. This risk is even greater when early environments include high levels of harsh parenting behaviors. However, the mechanisms by which harsh parenting may impact fearful children's risk for anxiety problems are largely unknown. Recent neuroscience work has suggested that punishment is associated with exaggerated error-related negativity (ERN), an event-related potential linked to performance monitoring, even after the threat of punishment is removed. In the current study, we examined the possibility that harsh parenting interacts with fearfulness, impacting anxiety risk via neural processes of performance monitoring. We found that greater fearfulness and harsher parenting at 2 years of age predicted greater fearfulness and greater ERN amplitudes at age 4. Supporting the role of cognitive processes in this association, greater fearfulness and harsher parenting also predicted less efficient neural processing during preschool. This study provides initial evidence that performance monitoring may be a candidate process by which early parenting interacts with fearfulness to predict risk for anxiety problems. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.

  9. Evidence that UV-inducible error-prone repair is absent in Haemophilus influenzae Rd, with a discussion of the relation to error-prone repair of alkylating-agent damage

    International Nuclear Information System (INIS)

    Kimball, R.F.; Boling, M.E.; Perdue, S.W.

    1977-01-01

    Haemophilus influenzae Rd and its derivatives are mutated either not at all or to only a very small extent by ultraviolet radiation, X-rays, methyl methanesulfonate, and nitrogen mustard, though they are readily mutated by such agents as N-methyl-N'-nitro-N-nitrosoguanidine, ethyl methanesulfonate, and nitrosocarbaryl (NC). In these respects H. influenzae Rd resembles the lexA mutants of Escherichia coli that lack the SOS or reclex UV-inducible error-prone repair system. This similarity is further brought out by the observation that chloramphenicol has little or no effect on post-replication repair after UV irradiation. In E. coli, chloramphenicol has been reported to considerably inhibit post-replication repair in the wild type but not in the lexA mutant. Earlier work has suggested that most or all the mutations induced in H. influenzae by NC result from error-prone repair. Combined treatment with NC and either X-rays or UV shows that the NC error-prone repair system does not produce mutations from the lesions induced by these radiations even while it is producing them from its own lesions. It is concluded that the NC error-prone repair system or systems and the reclex error-prone system are different.

  10. Analyzing the errors of DFT approximations for compressed water systems

    International Nuclear Information System (INIS)

    Alfè, D.; Bartók, A. P.; Csányi, G.; Gillan, M. J.

    2014-01-01

    We report an extensive study of the errors of density functional theory (DFT) approximations for compressed water systems. The approximations studied are based on the widely used PBE and BLYP exchange-correlation functionals, and we characterize their errors before and after correction for 1- and 2-body errors, the corrections being performed using the methods of Gaussian approximation potentials. The errors of the uncorrected and corrected approximations are investigated for two related types of water system: first, the compressed liquid at temperature 420 K and density 1.245 g/cm³, where the experimental pressure is 15 kilobars; second, thermal samples of compressed water clusters from the trimer to the 27-mer. For the liquid, we report four first-principles molecular dynamics simulations, two generated with the uncorrected PBE and BLYP approximations and a further two with their 1- and 2-body corrected counterparts. The errors of the simulations are characterized by comparing with experimental data for the pressure, with neutron-diffraction data for the three radial distribution functions, and with quantum Monte Carlo (QMC) benchmarks for the energies of sets of configurations of the liquid in periodic boundary conditions. The DFT errors of the configuration samples of compressed water clusters are computed using QMC benchmarks. We find that the 2-body and beyond-2-body errors in the liquid are closely related to similar errors exhibited by the clusters. For both the liquid and the clusters, beyond-2-body errors of DFT make a substantial contribution to the overall errors, so that correction for 1- and 2-body errors does not suffice to give a satisfactory description. For BLYP, a recent representation of 3-body energies due to Medders, Babin, and Paesani [J. Chem. Theory Comput. 9, 1103 (2013)] gives a reasonably good way of correcting for beyond-2-body errors, after which the remaining errors are typically 0.5 mE_h ≃ 15 meV/monomer for the liquid and the
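
The 1- and 2-body correction scheme has a simple schematic form; the notation below is mine, and in the study the bracketed reference-minus-DFT differences are supplied by Gaussian approximation potentials fitted to benchmark data.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
With $E^{(1)}(i)$ the energy of monomer $i$ and
$\Delta E^{(2)}(ij) = E(ij) - E^{(1)}(i) - E^{(1)}(j)$ the two-body
interaction energy, the corrected total energy is
\begin{equation}
  E_{\mathrm{corr}} = E_{\mathrm{DFT}}
  + \sum_i \bigl[ E^{(1)}_{\mathrm{ref}}(i) - E^{(1)}_{\mathrm{DFT}}(i) \bigr]
  + \sum_{i<j} \bigl[ \Delta E^{(2)}_{\mathrm{ref}}(ij)
                    - \Delta E^{(2)}_{\mathrm{DFT}}(ij) \bigr],
\end{equation}
so any error that remains after the correction is, by construction,
of beyond-2-body character.
\end{document}
```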

  11. Dosimetric consequences of translational and rotational errors in frame-less image-guided radiosurgery

    Directory of Open Access Journals (Sweden)

    Guckenberger Matthias

    2012-04-01

    Background: To investigate geometric and dosimetric accuracy of frame-less image-guided radiosurgery (IG-RS) for brain metastases. Methods and materials: Single fraction IG-RS was practiced in 72 patients with 98 brain metastases. Patient positioning and immobilization used either double- (n = 71) or single-layer (n = 27) thermoplastic masks. Pre-treatment set-up errors (n = 98) were evaluated with cone-beam CT (CBCT) based image-guidance (IG) and were corrected in six degrees of freedom without an action level. CBCT imaging after treatment measured intra-fractional errors (n = 64). Pre- and post-treatment errors were simulated in the treatment planning system and target coverage and dose conformity were evaluated. Three scenarios of 0 mm, 1 mm and 2 mm GTV-to-PTV (gross tumor volume to planning target volume) safety margins (SM) were simulated. Results: Errors prior to IG were 3.9 mm ± 1.7 mm (3D vector) and the maximum rotational error was 1.7° ± 0.8° on average. The post-treatment 3D error was 0.9 mm ± 0.6 mm. No differences between double- and single-layer masks were observed. Intra-fractional errors were significantly correlated with the total treatment time, with 0.7 mm ± 0.5 mm and 1.2 mm ± 0.7 mm for treatment times ≤ 23 minutes and > 23 minutes (p5% in 14% of the patients. A 1 mm safety margin fully compensated intra-fractional patient motion. Conclusions: IG-RS with online correction of translational errors achieves high geometric and dosimetric accuracy. Intra-fractional errors decrease target coverage and conformity unless compensated with appropriate safety margins.

  12. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy.

    Science.gov (United States)

    Cohen, E A K; Ober, R J

    2013-12-15

    We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variables problem and linear least squares is inappropriate; the correct method is generalized least squares. To allow for point-dependent errors, the equivalence of a generalized maximum likelihood and a heteroscedastic generalized least squares model is established, allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE), believed to be useful especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
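
For the special case the abstract highlights, covariance matrices that are scalar multiples of the identity, generalized least squares reduces to inverse-variance weighted least squares. The following sketch illustrates that reduction for an affine fit; the function name and data are hypothetical.

```python
import numpy as np

def fit_affine_gls(src, dst, variances):
    """Fit dst ~ A @ src + t when control point i has localization
    covariance variances[i] * I; GLS then reduces to weighted least
    squares with inverse-variance weights."""
    w = 1.0 / np.asarray(variances, dtype=float)
    rows, targets, weights = [], [], []
    for (x, y), (u, v), wi in zip(src, dst, w):
        rows.append([x, y, 0.0, 0.0, 1.0, 0.0]); targets.append(u); weights.append(wi)
        rows.append([0.0, 0.0, x, y, 0.0, 1.0]); targets.append(v); weights.append(wi)
    M, b, W = map(np.asarray, (rows, targets, weights))
    sw = np.sqrt(W)                        # row-scaling by sqrt of the weights
    p, *_ = np.linalg.lstsq(M * sw[:, None], b * sw, rcond=None)
    return p[:4].reshape(2, 2), p[4:]      # affine matrix A, translation t

# Hypothetical data: brighter points (more photons) localize better,
# hence smaller variances.
rng = np.random.default_rng(1)
src = rng.uniform(0.0, 10.0, (30, 2))
A_true = np.array([[1.01, 0.02], [-0.015, 0.99]]); t_true = np.array([0.5, -0.3])
var = rng.uniform(0.001, 0.05, 30)
dst = src @ A_true.T + t_true + rng.normal(0.0, 1.0, (30, 2)) * np.sqrt(var)[:, None]
A_hat, t_hat = fit_affine_gls(src, dst, var)
print(A_hat, t_hat)
```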

  13. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that, Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  14. Context Specificity of Post-Error and Post-Conflict Cognitive Control Adjustments

    Science.gov (United States)

    Forster, Sarah E.; Cho, Raymond Y.

    2014-01-01

    There has been accumulating evidence that cognitive control can be adaptively regulated by monitoring for processing conflict as an index of online control demands. However, it is not yet known whether top-down control mechanisms respond to processing conflict in a manner specific to the operative task context or confer a more generalized benefit. While previous studies have examined the taskset-specificity of conflict adaptation effects, yielding inconsistent results, control-related performance adjustments following errors have been largely overlooked. This gap in the literature underscores recent debate as to whether post-error performance represents a strategic, control-mediated mechanism or a nonstrategic consequence of attentional orienting. In the present study, evidence of generalized control following both high conflict correct trials and errors was explored in a task-switching paradigm. Conflict adaptation effects were not found to generalize across tasksets, despite a shared response set. In contrast, post-error slowing effects were found to extend to the inactive taskset and were predictive of enhanced post-error accuracy. In addition, post-error performance adjustments were found to persist for several trials and across multiple task switches, a finding inconsistent with attentional orienting accounts of post-error slowing. These findings indicate that error-related control adjustments confer a generalized performance benefit and suggest dissociable mechanisms of post-conflict and post-error control. PMID:24603900

  15. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Errors of omission are related to the omission of a human action that should have been performed but was not. Errors of commission are related to human actions that should not be performed but in fact are. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that enter an operational task without being part of its normal sequence. The identification of actions that are not supposed to occur depends on the operational context, which can induce or facilitate certain unsafe operator actions depending on the behavior of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of the THERP error-quantification tables. (author)

  16. Evaluation of robustness of maximum likelihood cone-beam CT reconstruction with total variation regularization

    International Nuclear Information System (INIS)

    Stsepankou, D; Arns, A; Hesser, J; Ng, S K; Zygmanski, P

    2012-01-01

    The objective of this paper is to evaluate an iterative maximum likelihood (ML) cone-beam computed tomography (CBCT) reconstruction with total variation (TV) regularization with respect to the robustness of the algorithm due to data inconsistencies. Three different and (for clinical application) typical classes of errors are considered for simulated phantom and measured projection data: quantum noise, defect detector pixels and projection matrix errors. To quantify those errors we apply error measures like mean square error, signal-to-noise ratio, contrast-to-noise ratio and streak indicator. These measures are derived from linear signal theory and generalized and applied for nonlinear signal reconstruction. For quality check, we focus on resolution and CT-number linearity based on a Catphan phantom. All comparisons are made versus the clinical standard, the filtered backprojection algorithm (FBP). In our results, we confirm and substantially extend previous results on iterative reconstruction such as massive undersampling of the number of projections. Errors in the projection matrix parameters of up to 1° in projection angle remain within the tolerance level. Single defect pixels exhibit ring artifacts for each method; however, using defect pixel compensation allows up to 40% defect pixels while still passing the standard clinical quality check. Further, the iterative algorithm is extraordinarily robust in the low photon regime (down to 0.05 mAs) when compared to FBP, allowing for extremely low-dose image acquisitions, a substantial issue when considering daily CBCT imaging for position correction in radiotherapy. We conclude that the ML method studied herein is robust under clinical quality assurance conditions. Consequently, low-dose regime imaging, especially for daily patient localization in radiation therapy, is possible without change of the current hardware of the imaging system. (paper)
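
A toy version of ML reconstruction with TV regularization can be written as gradient descent on the Poisson negative log-likelihood plus a smoothed TV penalty. The sketch below uses a random dense matrix in place of a real CBCT projector and invented step sizes, so it illustrates the objective being minimized, not the authors' solver.

```python
import numpy as np

def tv_gradient(u, eps=1e-8):
    """Gradient of a smoothed isotropic TV penalty, Neumann boundaries."""
    ux = np.diff(u, axis=0, append=u[-1:, :])
    uy = np.diff(u, axis=1, append=u[:, -1:])
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
    px, py = ux / mag, uy / mag
    # The gradient of TV is minus the divergence of the normalized gradient.
    return -(np.diff(px, axis=0, prepend=0) + np.diff(py, axis=1, prepend=0))

def ml_tv_step(u, A, counts, lam, step):
    """One gradient step on sum(Au - counts*log(Au)) + lam*TV(u)."""
    proj = A @ u.ravel() + 1e-12
    grad = (A.T @ (1.0 - counts / proj)).reshape(u.shape) + lam * tv_gradient(u)
    return np.maximum(u - step * grad, 0.0)          # keep attenuation >= 0

# Tiny toy problem; the random dense A stands in for the CBCT projector.
rng = np.random.default_rng(2)
truth = np.zeros((16, 16)); truth[5:11, 5:11] = 1.0
A = rng.uniform(0.0, 1.0, (300, 256))
counts = rng.poisson(50.0 * (A @ truth.ravel())) / 50.0   # scaled Poisson data
u = np.full_like(truth, 0.5)
for _ in range(200):
    u = ml_tv_step(u, A, counts, lam=0.05, step=1e-3)
```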

  17. Using Transformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions

    Science.gov (United States)

    Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.

    2014-12-01

    One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.

  18. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  19. Avoiding Systematic Errors in Isometric Squat-Related Studies without Pre-Familiarization by Using Sufficient Numbers of Trials

    Directory of Open Access Journals (Sweden)

    Pekünlü Ekim

    2014-10-01

    There is no scientific evidence in the literature indicating that maximal isometric strength measures can be assessed within 3 trials. We questioned whether the results of isometric squat-related studies in which maximal isometric squat strength (MISS) testing was performed using limited numbers of trials without pre-familiarization might have included systematic errors, especially those resulting from acute learning effects. Forty resistance-trained male participants performed 8 isometric squat trials without pre-familiarization. The highest measures in the first "n" trials (3 ≤ n ≤ 8) of these 8 squats were regarded as MISS obtained using 6 different MISS test methods featuring different numbers of trials (the Best of n Trials Method [BnT]). When B3T and B8T were paired with other methods, high reliability was found between the paired methods in terms of intraclass correlation coefficients (0.93-0.98) and coefficients of variation (3.4-7.0%). Wilcoxon's signed-rank test indicated that MISS obtained using B3T and B8T were lower (p < 0.001) and higher (p < 0.001), respectively, than those obtained using other methods. The Bland-Altman method revealed a lack of agreement between any of the paired methods. Simulation studies illustrated that increasing the number of trials to 9-10 using a relatively large sample size (i.e., ≥ 24) could be an effective means of obtaining the actual MISS values of the participants. The common use of a limited number of trials in MISS tests without pre-familiarization appears to have no solid scientific base. Our findings suggest that the number of trials should be increased in commonly used MISS tests to avoid learning effect-related systematic errors.
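
The learning-effect bias that motivates the paper is easy to reproduce in a toy simulation: if expected strength rises over the first unfamiliarized trials, the best of only 3 trials systematically underestimates the plateau. All parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
n_participants, n_trials = 40, 10

# Toy learning-effect model: expected strength starts ~15% below the
# plateau and recovers over the first unfamiliarized trials.
true_miss = rng.normal(2000.0, 250.0, n_participants)            # newtons
learning = 1.0 - 0.15 * np.exp(-0.6 * np.arange(n_trials))
scores = true_miss[:, None] * learning + rng.normal(0.0, 40.0,
                                                    (n_participants, n_trials))

for n in (3, 5, 8, 10):
    bnt = scores[:, :n].max(axis=1)            # "best of n trials" estimate
    bias = 100.0 * (bnt - true_miss).mean() / true_miss.mean()
    print(f"B{n}T mean bias vs. true MISS: {bias:+.1f}%")
```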

  20. Post-error action control is neurobehaviorally modulated under conditions of constant speeded response

    Directory of Open Access Journals (Sweden)

    Takahiro eSoshi

    2015-01-01

    Post-error slowing is an error recovery strategy that contributes to action control and occurs after errors in order to prevent future behavioral flaws. Error recovery often malfunctions in clinical populations, but the relationship between behavioral traits and recovery from error is unclear in healthy populations. The present study investigated the relationship between impulsivity and error recovery by simulating a speeded response situation using a Go/No-go paradigm that forced the participants to constantly make accelerated responses prior to stimulus disappearance (stimulus duration: 250 ms). Neural correlates of post-error processing were examined using event-related potentials (ERPs). Impulsivity traits were measured with self-report questionnaires (BIS-11, BIS/BAS). Behavioral results demonstrated that the commission error rate for No-go trials was 15%, but post-error slowing did not take place immediately. Delayed post-error slowing was negatively correlated with error rates and impulsivity traits, showing that response slowing was associated with reduced error rates and changed with impulsivity. Response-locked error ERPs were clearly observed for the error trials. Contrary to previous studies, error ERPs were not significantly related to post-error slowing. Stimulus-locked N2 was negatively correlated with post-error slowing and positively correlated with impulsivity traits at the second post-error Go trial: larger N2 activity was associated with greater post-error slowing and less impulsivity. In summary, under constant speeded conditions, error monitoring was dissociated from post-error action control, and post-error slowing did not occur quickly. Furthermore, post-error slowing and its neural correlate (N2) were modulated by impulsivity traits. These findings suggest that there may be clinical and practical efficacy of maintaining cognitive control of actions during error recovery under common daily environments that frequently evoke
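
Post-error slowing itself is a simple statistic. The sketch below computes the classic trial n+1 contrast on an invented RT series; note the study reports slowing that emerges only a few trials after the error, which would call for a lagged variant of the same comparison.

```python
import numpy as np

def post_error_slowing(rts, correct):
    """Classic PES: mean RT after errors minus mean RT after correct
    responses (trial n+1 contrast)."""
    rts = np.asarray(rts, dtype=float)
    correct = np.asarray(correct, dtype=bool)
    after_error = rts[1:][~correct[:-1]]
    after_correct = rts[1:][correct[:-1]]
    return after_error.mean() - after_correct.mean()

# Hypothetical speeded Go/No-go series: ~15% errors, and the toy
# generator injects 25 ms of slowing on the trial after each error.
rng = np.random.default_rng(4)
correct = rng.random(300) > 0.15
rts = rng.normal(380.0, 40.0, 300)
rts[1:][~correct[:-1]] += 25.0
print(f"PES = {post_error_slowing(rts, correct):.1f} ms")
```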