WorldWideScience

Sample records for posteriori pointwise error

  1. A posteriori error estimates in voice source recovery

    Science.gov (United States)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of recovering the voice source pulse from a segment of a speech signal is considered. A special mathematical model relating these quantities is used for the solution. A variational method for solving the inverse problem of voice source recovery is proposed for a new parametric class of sources, namely piecewise-linear sources (PWL-sources). A technique for a posteriori numerical error estimation of the obtained solutions is also presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problem for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments on speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The most probable error in determining the source-pulse shapes is estimated at about 7-8% for the investigated speech material. It is noted that the a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in speaker recognition applications.
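
    A piecewise-linear source can be illustrated with a handful of breakpoints. The sketch below (Python) shows one hypothetical way such a PWL parameterisation could look; the breakpoint times and amplitudes are invented for illustration and are not the paper's model.

      import numpy as np

      # hypothetical breakpoints of a piecewise-linear (PWL) voice source pulse:
      # times in milliseconds and source amplitudes in arbitrary units
      breakpoint_t = np.array([0.0, 1.5, 3.5, 4.0, 5.0, 8.0])
      breakpoint_v = np.array([0.0, 0.3, 1.0, -0.8, 0.0, 0.0])

      def pwl_source(t):
          """Evaluate the PWL source pulse at times t by linear interpolation
          between the breakpoints (a sketch of the parameterisation only)."""
          return np.interp(t, breakpoint_t, breakpoint_v)

      print(pwl_source(np.linspace(0.0, 8.0, 9)))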

  2. A posteriori error estimates for axisymmetric and nonlinear problems

    Czech Academy of Sciences Publication Activity Database

    Křížek, Michal; Němec, J.; Vejchodský, Tomáš

    2001-01-01

    Roč. 15, - (2001), s. 219-236 ISSN 1019-7168 R&D Projects: GA ČR GA201/01/1200; GA MŠk ME 148 Keywords : weighted Sobolev spaces * a posteriori error estimates * finite elements Subject RIV: BA - General Mathematics Impact factor: 0.886, year: 2001

  3. Complementarity based a posteriori error estimates and their properties

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Roč. 82, č. 10 (2012), s. 2033-2046 ISSN 0378-4754 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : error majorant * a posteriori error estimates * method of hypercircle Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2012 http://www.sciencedirect.com/science/article/pii/S0378475411001509

  4. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin; Sun, Shuyu

    2016-01-01

    We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given.

  5. A posteriori error estimates for finite volume approximations of elliptic equations on general surfaces

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Lili; Tian, Li; Wang, Desheng

    2008-10-31

    In this paper, we present a residual-based a posteriori error estimate for the finite volume discretization of steady convection–diffusion–reaction equations defined on surfaces in R^3, which are often implicitly represented as level sets of smooth functions. Reliability and efficiency of the proposed a posteriori error estimator are rigorously proved. Numerical experiments are also conducted to verify the theoretical results and demonstrate the robustness of the error estimator.

  6. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  7. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  8. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Science.gov (United States)

    2014-04-01

    Integral Role in Soft Tissue Mechanics, K. Troyer, D. Estep, and C. Puttlitz, Acta Biomaterialia 8 (2012), 234-244 • A posteriori analysis of multirate ... 2013, submitted • A posteriori error estimation for the Lax-Wendroff finite difference scheme, J. B. Collins, D. Estep, and S. Tavener, Journal of ... oped over nearly six decades of activity and the major developments form a highly interconnected web. We do not attempt to review the history of

  9. A Posteriori Error Estimates Including Algebraic Error and Stopping Criteria for Iterative Solvers

    Czech Academy of Sciences Publication Activity Database

    Jiránek, P.; Strakoš, Zdeněk; Vohralík, M.

    2010-01-01

    Roč. 32, č. 3 (2010), s. 1567-1590 ISSN 1064-8275 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA ČR(CZ) GP201/09/P464 Institutional research plan: CEZ:AV0Z10300504 Keywords : second-order elliptic partial differential equation * finite volume method * a posteriori error estimates * iterative methods for linear algebraic systems * conjugate gradient method * stopping criteria Subject RIV: BA - General Mathematics Impact factor: 3.016, year: 2010

  10. Adaptive finite element techniques for the Maxwell equations using implicit a posteriori error estimates

    NARCIS (Netherlands)

    Harutyunyan, D.; Izsak, F.; van der Vegt, Jacobus J.W.; Bochev, Mikhail A.

    For the adaptive solution of the Maxwell equations on three-dimensional domains with Nédélec edge finite element methods, we consider an implicit a posteriori error estimation technique. On each element of the tessellation an equation for the error is formulated and solved with a properly chosen

  11. Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling

    KAUST Repository

    Pencheva, Gergina V.; Vohralík, Martin; Wheeler, Mary F.; Wildey, Tim

    2013-01-01

    Lower-order polynomials are used in the subdomains and higher-order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient and one of them is robust with respect to the ratio H/h under an assumption of sufficient regularity of the weak solution.

  12. A Posteriori Error Estimation for Finite Element Methods and Iterative Linear Solvers

    Energy Technology Data Exchange (ETDEWEB)

    Melboe, Hallgeir

    2001-10-01

    This thesis addresses a posteriori error estimation for finite element methods and iterative linear solvers. Adaptive finite element methods have gained much popularity over the last decades due to their ability to produce accurate results with limited computer power. In these methods a posteriori error estimates play an essential role. Not only do they give information about how large the total error is, they also indicate which parts of the computational domain should be given a more sophisticated treatment in order to reduce the error. A posteriori error estimates are traditionally aimed at estimating the global error, but more recently so-called goal-oriented error estimators have attracted considerable interest. The name reflects the fact that they estimate the error in user-defined local quantities. In this thesis the main focus is on global error estimators for highly stretched grids and goal-oriented error estimators for flow problems on regular grids. Numerical methods for partial differential equations, such as finite element methods and other similar techniques, typically result in a linear system of equations that needs to be solved. Usually such systems are solved using some iterative procedure which, due to a finite number of iterations, introduces an additional error. Most such algorithms apply the residual in the stopping criterion, whereas the control of the actual error may be rather poor. A secondary focus in this thesis is on estimating the errors that are introduced during this last part of the solution procedure. The thesis contains new theoretical results regarding the behaviour of some well-known, and a few new, a posteriori error estimators for finite element methods on anisotropic grids. Further, a goal-oriented strategy for the computation of forces in flow problems is devised and investigated. Finally, an approach for estimating the actual errors associated with the iterative solution of linear systems of equations is suggested. (author)
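
    The remark that residual-based stopping criteria may control the actual error rather poorly can be illustrated directly. The sketch below (Python) runs plain conjugate gradients on a small ill-conditioned SPD system and records both the relative residual and the A-norm error at every iteration; the diagonal test matrix is an assumption chosen only to make the gap between the two quantities visible.

      import numpy as np

      def cg_with_error_tracking(A, b, x_exact, tol=1e-8, max_iter=200):
          """Plain conjugate gradients; at every iteration record both the
          relative residual (the usual stopping quantity) and the actual
          A-norm error, to show how differently the two can behave."""
          x = np.zeros_like(b)
          r = b - A @ x
          p = r.copy()
          history = []
          for _ in range(max_iter):
              Ap = A @ p
              alpha = (r @ r) / (p @ Ap)
              x += alpha * p
              r_new = r - alpha * Ap
              err = x - x_exact
              history.append((np.linalg.norm(r_new) / np.linalg.norm(b),
                              np.sqrt(err @ (A @ err))))
              if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
                  break
              p = r_new + (r_new @ r_new) / (r @ r) * p
              r = r_new
          return x, history

      # small ill-conditioned SPD test system (assumed for illustration)
      n = 50
      A = np.diag(np.linspace(1.0, 1.0e4, n))
      x_exact = np.ones(n)
      b = A @ x_exact
      _, hist = cg_with_error_tracking(A, b, x_exact)
      print(hist[0], hist[-1])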

  13. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-12-09

    In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in a two-dimensional fractured porous media. The discrete fracture model is applied to model the fractures by one-dimensional fractures in a two-dimensional domain. We consider Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.

  14. A posteriori error analysis of multiscale operator decomposition methods for multiphysics models

    International Nuclear Information System (INIS)

    Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T

    2008-01-01

    Multiphysics, multiscale models present significant challenges in computing accurate solutions and for estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples

  15. Verification of functional a posteriori error estimates for obstacle problem in 1D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2013-01-01

    Roč. 49, č. 5 (2013), s. 738-754 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.563, year: 2013 http://library.utia.cas.cz/separaty/2014/MTR/valdman-0424082.pdf

  16. Verification of functional a posteriori error estimates for obstacle problem in 2D

    Czech Academy of Sciences Publication Activity Database

    Harasim, P.; Valdman, Jan

    2014-01-01

    Roč. 50, č. 6 (2014), s. 978-1002 ISSN 0023-5954 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:67985556 Keywords : obstacle problem * a posteriori error estimate * finite element method * variational inequalities Subject RIV: BA - General Mathematics Impact factor: 0.541, year: 2014 http://library.utia.cas.cz/separaty/2015/MTR/valdman-0441661.pdf

  17. A review of some a posteriori error estimates for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2010-01-01

    Roč. 80, č. 8 (2010), s. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230

  18. Discontinuous Galerkin methods and a posteriori error analysis for heterogeneous diffusion problems

    International Nuclear Information System (INIS)

    Stephansen, A.F.

    2007-12-01

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  19. Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems

    KAUST Repository

    Asner, Liya; Tavener, Simon; Kay, David

    2012-01-01

    We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.

  20. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
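
    The inexact treatment of the global error problem can be sketched in a few lines. The code below (Python, a minimal sketch) applies a handful of symmetric Gauss-Seidel sweeps to a generic SPD system A e = r standing in for the assembled global hierarchical error problem; the toy matrix is an assumption for illustration, not the paper's discretization.

      import numpy as np

      def sym_gauss_seidel(A, r, sweeps=3):
          """Approximate the solution of A e = r with a few symmetric
          Gauss-Seidel sweeps (forward then backward), standing in for the
          inexact solve of the global error problem."""
          n = len(r)
          e = np.zeros(n)
          for _ in range(sweeps):
              for i in range(n):            # forward sweep
                  s = A[i, :i] @ e[:i] + A[i, i + 1:] @ e[i + 1:]
                  e[i] = (r[i] - s) / A[i, i]
              for i in reversed(range(n)):  # backward sweep
                  s = A[i, :i] @ e[:i] + A[i, i + 1:] @ e[i + 1:]
                  e[i] = (r[i] - s) / A[i, i]
          return e

      # toy SPD system standing in for the assembled error problem
      A = 2 * np.eye(5) - np.eye(5, k=1) - np.eye(5, k=-1)
      r = np.ones(5)
      print(sym_gauss_seidel(A, r))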

  1. An A Posteriori Error Analysis of Mixed Finite Element Galerkin Approximations to Second Order Linear Parabolic Problems

    KAUST Repository

    Memon, Sajid; Nataraj, Neela; Pani, Amiya Kumar

    2012-01-01

    In this article, a posteriori error estimates are derived for mixed finite element Galerkin approximations to second order linear parabolic initial and boundary value problems. Using mixed elliptic reconstructions, a posteriori error estimates in L∞(L2)- and L2(L2)-norms for the solution as well as its flux are proved for the semidiscrete scheme. Finally, based on a backward Euler method, a completely discrete scheme is analyzed and a posteriori error bounds are derived, which improves upon earlier results on a posteriori estimates of mixed finite element approximations to parabolic problems. Results of numerical experiments verifying the efficiency of the estimators have also been provided. © 2012 Society for Industrial and Applied Mathematics.

  2. A Posteriori Error Analysis of Stochastic Differential Equations Using Polynomial Chaos Expansions

    KAUST Repository

    Butler, T.; Dawson, C.; Wildey, T.

    2011-01-01

    We develop computable a posteriori error estimates for linear functionals of a solution to a general nonlinear stochastic differential equation with random model/source parameters. These error estimates are based on a variational analysis applied to stochastic Galerkin methods for forward and adjoint problems. The result is a representation for the error estimate as a polynomial in the random model/source parameter. The advantage of this method is that we use polynomial chaos representations for the forward and adjoint systems to cheaply produce error estimates by simple evaluation of a polynomial. By comparison, the typical method of producing such estimates requires repeated forward/adjoint solves for each new choice of random parameter. We present numerical examples showing that there is excellent agreement between these methods. © 2011 Society for Industrial and Applied Mathematics.
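
    The key computational point above is that, once the forward and adjoint problems have been solved with stochastic Galerkin methods, the error estimate is itself a polynomial in the random parameter, so each new sample costs only a polynomial evaluation. A minimal sketch with made-up Legendre-chaos coefficients standing in for the quantities that would come from the forward/adjoint computation:

      import numpy as np
      from numpy.polynomial import legendre

      # hypothetical chaos coefficients of the error estimate in one uniform
      # random parameter xi in [-1, 1]; in practice they come from the
      # stochastic Galerkin forward and adjoint solutions
      error_coeffs = np.array([1.2e-3, -4.0e-4, 2.5e-4, -6.0e-5])

      def error_estimate(xi):
          """Evaluate the a posteriori error estimate as a Legendre polynomial
          in xi: one polynomial evaluation per sample instead of a fresh
          forward/adjoint solve."""
          return legendre.legval(xi, error_coeffs)

      samples = np.random.uniform(-1.0, 1.0, size=5)
      print([float(error_estimate(x)) for x in samples])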

  3. A FEM approximation of a two-phase obstacle problem and its a posteriori error estimate

    Czech Academy of Sciences Publication Activity Database

    Bozorgnia, F.; Valdman, Jan

    2017-01-01

    Roč. 73, č. 3 (2017), s. 419-432 ISSN 0898-1221 R&D Projects: GA ČR(CZ) GF16-34894L; GA MŠk(CZ) 7AMB16AT015 Institutional support: RVO:67985556 Keywords : A free boundary problem * A posteriori error analysis * Finite element method Subject RIV: BA - General Mathematics OBOR OECD: Applied mathematics Impact factor: 1.531, year: 2016 http://library.utia.cas.cz/separaty/2017/MTR/valdman-0470507.pdf

  4. A posteriori error estimator and AMR for discrete ordinates nodal transport methods

    International Nuclear Information System (INIS)

    Duo, Jose I.; Azmy, Yousry Y.; Zikatanov, Ludmil T.

    2009-01-01

    In the development of high fidelity transport solvers, optimization of the use of available computational resources and access to a tool for assessing quality of the solution are key to the success of large-scale nuclear systems' simulation. In this regard, error control provides the analyst with a confidence level in the numerical solution and enables optimization of resources through Adaptive Mesh Refinement (AMR). In this paper, we derive an a posteriori error estimator based on the nodal solution of the Arbitrarily High Order Transport Method of the Nodal type (AHOT-N). Furthermore, by making assumptions on the regularity of the solution, we represent the error estimator as a function of computable volume and element-edge residuals. The global L2 error norm is proved to be bounded by the estimator. To lighten the computational load, we present a numerical approximation to the aforementioned residuals and split the global norm error estimator into local error indicators. These indicators are used to drive an AMR strategy for the spatial discretization. However, the indicators based on forward solution residuals alone do not bound the cell-wise error. The estimator and AMR strategy are tested in two problems featuring strong heterogeneity and a highly streaming transport regime with strong flux gradients. The results show that the error estimator indeed bounds the global error norms and that the error indicator follows the cell-error's spatial distribution pattern closely. The AMR strategy proves beneficial to optimize resources, primarily by reducing the number of unknowns solved for to achieve prescribed solution accuracy in the global L2 error norm. Likewise, AMR achieves higher accuracy compared to uniform refinement when resolving sharp flux gradients, for the same number of unknowns.
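
    The local error indicators described above drive a standard estimate-mark-refine loop. A schematic sketch of such a loop (Python); the solve, estimate_local and refine callbacks and the fixed-fraction marking rule are assumptions for illustration, not the AHOT-N implementation.

      def adaptive_refinement(solve, estimate_local, refine, mesh,
                              tol=1e-4, frac=0.3, max_iter=10):
          """Generic estimate-mark-refine loop: local error indicators (here
          assumed to come from computable volume and element-edge residuals)
          drive spatial refinement until the global estimate meets the
          tolerance.  solve, estimate_local and refine are user-supplied
          callbacks."""
          for _ in range(max_iter):
              solution = solve(mesh)
              indicators = estimate_local(mesh, solution)   # one value per cell
              global_estimate = sum(eta ** 2 for eta in indicators) ** 0.5
              if global_estimate <= tol:
                  break
              # mark the cells carrying the largest indicators (fixed fraction)
              ranked = sorted(range(len(indicators)),
                              key=lambda i: indicators[i], reverse=True)
              marked = ranked[:max(1, int(frac * len(indicators)))]
              mesh = refine(mesh, marked)
          return mesh, solution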

  5. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    Science.gov (United States)

    Jakeman, J. D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
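
    Two ingredients of the approach above are easy to sketch: correcting the surrogate value at each sample with the adjoint-based error estimates, and comparing the physical discretization error against the stochastic interpolation error to decide what to refine next. The callables below are placeholders for quantities the actual algorithm would compute.

      def enhance_and_balance(surrogate, disc_error, interp_error, samples):
          """Correct sparse-grid surrogate values with adjoint-based estimates of
          the physical discretization error and the stochastic interpolation
          error, and report which error source dominates so the corresponding
          approximation (mesh or sparse grid) can be refined next.  All four
          arguments are illustrative placeholders."""
          corrected = [surrogate(s) + disc_error(s) + interp_error(s) for s in samples]
          total_disc = sum(abs(disc_error(s)) for s in samples)
          total_interp = sum(abs(interp_error(s)) for s in samples)
          refine_next = "physical mesh" if total_disc > total_interp else "sparse grid"
          return corrected, refine_next

      # toy usage with stand-in callables
      values, next_target = enhance_and_balance(lambda s: s ** 2,
                                                lambda s: 1.0e-3 * s,
                                                lambda s: 5.0e-4,
                                                samples=[0.1, 0.5, 0.9])
      print(values, next_target)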

  6. Enhancing adaptive sparse grid approximations and improving refinement strategies using adjoint-based a posteriori error estimates

    International Nuclear Information System (INIS)

    Jakeman, J.D.; Wildey, T.

    2015-01-01

    In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.

  7. Robust a Posteriori Error Control and Adaptivity for Multiscale, Multinumerics, and Mortar Coupling

    KAUST Repository

    Pencheva, Gergina V.

    2013-01-01

    We consider discretizations of a model elliptic problem by means of different numerical methods applied separately in different subdomains, termed multinumerics, coupled using the mortar technique. The grids need not match along the interfaces. We are also interested in the multiscale setting, where the subdomains are partitioned by a mesh of size h, whereas the interfaces are partitioned by a mesh of much coarser size H, and where lower-order polynomials are used in the subdomains and higher-order polynomials are used on the mortar interface mesh. We derive several fully computable a posteriori error estimates which deliver a guaranteed upper bound on the error measured in the energy norm. Our estimates are also locally efficient and one of them is robust with respect to the ratio H/h under an assumption of sufficient regularity of the weak solution. The present approach allows bounding separately and comparing mutually the subdomain and interface errors. A subdomain/interface adaptive refinement strategy is proposed and numerically tested. © 2013 Society for Industrial and Applied Mathematics.

  8. Time-varying block codes for synchronisation errors: maximum a posteriori decoder and practical issues

    Directory of Open Access Journals (Sweden)

    Johann A. Briffa

    2014-06-01

    In this study, the authors consider time-varying block (TVB) codes, which generalise a number of previous synchronisation error-correcting codes. They also consider various practical issues related to maximum a posteriori (MAP) decoding of these codes. Specifically, they give an expression for the expected distribution of drift between transmitter and receiver because of synchronisation errors. They determine an appropriate choice for state space limits based on the drift probability distribution. In turn, they obtain an expression for the decoder complexity under given channel conditions in terms of the state space limits used. For a given state space, they also give a number of optimisations that reduce the algorithm complexity with no further loss of decoder performance. They also show how the MAP decoder can be used in the absence of known frame boundaries, and demonstrate that an appropriate choice of decoder parameters allows the decoder to approach the performance when frame boundaries are known, at the expense of some increase in complexity. Finally, they express some existing constructions as TVB codes, comparing performance with published results and showing that improved performance is possible by taking advantage of the flexibility of TVB codes.
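
    The expected drift distribution and the resulting choice of state space limits can be sketched with a simple lattice random walk. The per-bit insertion/deletion model below is an assumption used for illustration; the paper derives the expression for its own channel model.

      import numpy as np

      def drift_distribution(n_bits, p_ins, p_del, max_drift):
          """Distribution of synchronisation drift (insertions minus deletions)
          after n_bits channel uses, modelled as a simple lattice random walk;
          mass drifting beyond +/- max_drift is discarded and the result is
          renormalised."""
          centre = max_drift
          dist = np.zeros(2 * max_drift + 1)
          dist[centre] = 1.0                       # start with zero drift
          for _ in range(n_bits):
              new = np.zeros_like(dist)
              new[1:] += p_ins * dist[:-1]         # insertion: drift + 1
              new[:-1] += p_del * dist[1:]         # deletion:  drift - 1
              new += (1.0 - p_ins - p_del) * dist  # no synchronisation error
              dist = new
          return dist / dist.sum()

      def state_space_limit(dist, coverage=0.999):
          """Smallest symmetric drift limit covering the requested probability."""
          centre = len(dist) // 2
          for m in range(centre + 1):
              if dist[centre - m:centre + m + 1].sum() >= coverage:
                  return m
          return centre

      d = drift_distribution(n_bits=100, p_ins=0.01, p_del=0.01, max_drift=30)
      print(state_space_limit(d))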

  9. Combined Uncertainty and A-Posteriori Error Bound Estimates for General CFD Calculations: Theory and Software Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    This workshop presentation discusses the design and implementation of numerical methods for the quantification of statistical uncertainty, including a-posteriori error bounds, for output quantities computed using CFD methods. Hydrodynamic realizations often contain numerical error arising from finite-dimensional approximation (e.g. numerical methods using grids, basis functions, particles) and statistical uncertainty arising from incomplete information and/or statistical characterization of model parameters and random fields. The first task at hand is to derive formal error bounds for statistics given realizations containing finite-dimensional numerical error [1]. The error in computed output statistics contains contributions from both realization error and the error resulting from the calculation of statistics integrals using a numerical method. A second task is to devise computable a-posteriori error bounds by numerically approximating all terms arising in the error bound estimates. For the same reason that CFD calculations including error bounds but omitting uncertainty modeling are only of limited value, CFD calculations including uncertainty modeling but omitting error bounds are only of limited value. To gain maximum value from CFD calculations, a general software package for uncertainty quantification with quantified error bounds has been developed at NASA. The package provides implementations for a suite of numerical methods used in uncertainty quantification: dense tensorization basis methods [3] and a subscale recovery variant [1] for non-smooth data; sparse tensorization methods [2] utilizing node-nested hierarchies; and sampling methods [4] for high-dimensional random variable spaces.

  10. A functional-type a posteriori error estimate of approximate solutions for Reissner-Mindlin plates and its implementation

    Science.gov (United States)

    Frolov, Maxim; Chistiakova, Olga

    2017-06-01

    The paper is devoted to a numerical justification of the recent a posteriori error estimate for Reissner-Mindlin plates. This majorant provides a reliable control of the accuracy of any conforming approximate solution of the problem, including solutions obtained with commercial software for mechanical engineering. The estimate is developed on the basis of the functional approach and is applicable to several types of boundary conditions. To verify the approach, numerical examples with mesh refinements are provided.

  11. A-Posteriori Detection of Sensor Infrastructure Errors in Correlated Sensor Data and Business Workflows

    NARCIS (Netherlands)

    Wombacher, Andreas; Rinderle-Ma, Stefanie; Toumani, Farouk; Wolf, Karsten

    Some physical objects are influenced by business workflows and are observed by sensors. Since both sensor infrastructures and business workflows must deal with imprecise information, the correlation of sensor data and business workflow data related to physical objects might be used a-posteriori to

  12. A-Posteriori Detection of Sensor Infrastructure Errors in Correlated Sensor Data and Business Workflows

    NARCIS (Netherlands)

    Wombacher, Andreas

    2011-01-01

    Sensor data can be interpreted as a view on physical objects affected by business processes. Since both sensor infrastructures and business workflows must deal with imprecise information, the correlation of sensor data and business workflow data might be used a-posteriori to determine the source of

  13. Adaptive finite element analysis of incompressible viscous flow using posteriori error estimation and control of node density distribution

    International Nuclear Information System (INIS)

    Yashiki, Taturou; Yagawa, Genki; Okuda, Hiroshi

    1995-01-01

    The adaptive finite element method based on an 'a posteriori error estimation' is known to be a powerful technique for analyzing practical engineering problems, since it excludes the instinctive aspect of mesh subdivision and gives high accuracy with relatively low computational cost. In the adaptive procedure, both the error estimation and the mesh generation according to the error estimator are essential. In this paper, the adaptive procedure is realized by automatic mesh generation based on the control of the node density distribution, which is decided according to the error estimator. The global percentage error, CPU time, the degrees of freedom and the accuracy of the solution of the adaptive procedure are compared with those of the conventional method using regular meshes. Numerical examples such as driven cavity flows at various Reynolds numbers and flows around a cylinder have demonstrated the very high performance of the proposed adaptive procedure. (author)

  14. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper

    2015-01-07

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading order term consisting of an error density that is computable from Symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations.
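
    For a separable Hamiltonian, one Symplectic Euler step updates the momentum with the current state and then the state with the new momentum. The toy sketch below (Python, a harmonic oscillator) only illustrates that integrator; it is not the paper's optimal-control formulation, and the step size and Hamiltonian are assumptions.

      import numpy as np

      def symplectic_euler(dH_dx, dH_dp, x0, p0, dt, n_steps):
          """One common Symplectic Euler variant for a separable Hamiltonian:
          update the momentum p with the current state x, then update x with
          the new p.  Trajectories like these supply the computable error
          density mentioned in the abstract above."""
          x, p = np.asarray(x0, float), np.asarray(p0, float)
          traj = [(x.copy(), p.copy())]
          for _ in range(n_steps):
              p = p - dt * dH_dx(x, p)   # p' = -dH/dx, using the current x
              x = x + dt * dH_dp(x, p)   # x' =  dH/dp, using the updated p
              traj.append((x.copy(), p.copy()))
          return traj

      # toy example: harmonic oscillator H(x, p) = (p**2 + x**2) / 2
      traj = symplectic_euler(lambda x, p: x, lambda x, p: p,
                              x0=[1.0], p0=[0.0], dt=0.1, n_steps=10)
      print(traj[-1])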

  15. Discontinuous Galerkin methods and a posteriori error analysis for heterogeneous diffusion problems; Methodes de Galerkine discontinues et analyse d'erreur a posteriori pour les problemes de diffusion heterogene

    Energy Technology Data Exchange (ETDEWEB)

    Stephansen, A.F

    2007-12-15

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  16. A posteriori error analysis for hydro-mechanical couplings and implementation in Code-Aster; Analyse d'erreur a posteriori pour les couplages hydro-mecaniques et mise en oeuvre dans Code-Aster

    Energy Technology Data Exchange (ETDEWEB)

    Meunier, S

    2007-11-15

    We analyse approximations by finite elements in space and finite differences in time of coupled Hydro-Mechanical (HM) problems related to the quasi-static linear poro-elasticity theory. The physical bases of this theory are briefly restated and an abstract setting is proposed to perform the mathematical study of the stationary and non-stationary versions of the HM problem. For the stationary version, the well-posedness of the continuous and discrete problems is established and the a priori error analysis is performed. Then, we propose the a posteriori error analysis by using two different techniques suited to estimate the displacement error and the pressure error, respectively, both in the H^1_x-norm. The classical properties of reliability and optimality are proved for the associated error estimators. Some numerical experiments using Code-Aster illustrate the theoretical results. For the non-stationary version, we first establish a stability result for the continuous problem. Then, we present an optimal a priori error analysis using elliptic projection techniques. Finally, the a posteriori error analysis is performed by using two different approaches: a direct approach and an elliptic reconstruction approach. The first is suited to estimate the pressure error in the L^2_t(H^1_x)-norm and the second is suited to estimate the displacement error in the L^∞_t(H^1_x)-norm and the pressure error in the L^∞_t(H^1_x)-norm. Numerical experiments using Code-Aster complete the theoretical results. (author)

  17. An A Posteriori Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Peer Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns Symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem.

  18. Combined Uncertainty and A-Posteriori Error Bound Estimates for CFD Calculations: Theory and Implementation

    Science.gov (United States)

    Barth, Timothy J.

    2014-01-01

    Simulation codes often utilize finite-dimensional approximation, resulting in numerical error. Some examples include numerical methods utilizing grids and finite-dimensional basis functions, and particle methods using a finite number of particles. These same simulation codes also often contain sources of uncertainty, for example, uncertain parameters and fields associated with the imposition of initial and boundary data, uncertain physical model parameters such as chemical reaction rates, mixture model parameters, material property parameters, etc.

  19. Reliable methods for computer simulation error control and a posteriori estimates

    CERN Document Server

    Neittaanmäki, P

    2004-01-01

    Recent decades have seen a very rapid success in developing numerical methods based on explicit control over approximation errors. It may be said that nowadays a new direction is forming in numerical analysis, the main goal of which is to develop methods of reliable computations. In general, a reliable numerical method must solve two basic problems: (a) generate a sequence of approximations that converges to a solution and (b) verify the accuracy of these approximations. A computer code for such a method must consist of two respective blocks: solver and checker. In this book, we are chie

  20. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Science.gov (United States)

    2013-06-24

    [OCR-garbled expressions comparing the MFE and GFV quantity-of-interest errors] ... common to both MFE and GFV, are often similar in size. As a gross measure of the effect of geometric progression and of the use of quadrature, we ... their true value, the error in the quantity of interest for MFE or GFV. Tables 1 and 2 show this using coarse and fine forward

  1. Adaptive vibrational configuration interaction (A-VCI): A posteriori error estimation to efficiently compute anharmonic IR spectra

    Science.gov (United States)

    Garnier, Romain; Odunlami, Marc; Le Bris, Vincent; Bégué, Didier; Baraille, Isabelle; Coulaud, Olivier

    2016-05-01

    A new variational algorithm called adaptive vibrational configuration interaction (A-VCI), intended for the resolution of the vibrational Schrödinger equation, was developed. The main advantage of this approach is to efficiently reduce the dimension of the active space generated in the configuration interaction (CI) process. Here, we assume that the Hamiltonian is written as a sum of products of operators. This adaptive algorithm was developed with the use of three correlated conditions, i.e., a suitable starting space, a criterion for convergence, and a procedure to expand the approximate space. The speed of the algorithm was increased with the use of an a posteriori error estimator (residue) to select the most relevant direction in which to expand the space. Two examples have been selected for benchmarking. In the case of H2CO, we mainly study the performance of the A-VCI algorithm: comparison with the variation-perturbation method, choice of the initial space, and residual contributions. For CH3CN, we compare the A-VCI results with a computed reference spectrum using the same potential energy surface and for an active space reduced by about 90%.
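
    The residual-driven expansion of the active space can be sketched on a toy dense matrix. The loop below (Python) solves the eigenproblem in the current active space, forms the residual on the remaining basis functions, and adds the functions carrying the largest residual contributions; the batch size, tolerance and toy Hamiltonian are assumptions, not the A-VCI implementation.

      import numpy as np

      def adaptive_ci(H, start, batch=2, tol=1e-8, max_iter=20):
          """Adaptively select an active space for the lowest eigenpair of a
          (toy, dense) Hamiltonian H: solve in the active space, compute the
          residual on the remaining basis functions, and add those with the
          largest residual contributions."""
          active = list(start)
          for _ in range(max_iter):
              Ha = H[np.ix_(active, active)]
              E, V = np.linalg.eigh(Ha)
              e0, v0 = E[0], V[:, 0]
              # residual of the active-space eigenvector in the full basis
              full_v = np.zeros(H.shape[0])
              full_v[active] = v0
              resid = H @ full_v - e0 * full_v
              resid[active] = 0.0
              if np.linalg.norm(resid) < tol:
                  break
              # expand with the basis functions carrying the largest residual
              new = np.argsort(np.abs(resid))[::-1][:batch]
              active.extend(int(i) for i in new if i not in active)
          return e0, active

      rng = np.random.default_rng(0)
      M = rng.normal(size=(30, 30))
      H = np.diag(np.arange(1.0, 31.0)) + 0.05 * (M + M.T)   # toy Hamiltonian
      print(adaptive_ci(H, start=[0, 1, 2]))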

  2. Analysis of an a posteriori error estimator for the transport equation with SN and discontinuous Galerkin discretizations

    International Nuclear Information System (INIS)

    Fournier, D.; Le Tellier, R.; Suteau, C.

    2011-01-01

    We present an error estimator for the S_N neutron transport equation discretized with an arbitrary high-order discontinuous Galerkin method. As a starting point, the estimator is obtained for conforming Cartesian meshes with a uniform polynomial order for the trial space, and then adapted to deal with non-conforming meshes and a variable polynomial order. Some numerical tests illustrate the properties of the estimator and its limitations. Finally, a simple shielding benchmark is analyzed in order to show the relevance of the estimator in an adaptive process.

  3. Pointwise convergence of Fourier series

    CERN Document Server

    Arias de Reyna, Juan

    2002-01-01

    This book contains a detailed exposition of the Carleson-Hunt theorem following the proof of Carleson: to this day this is the only one giving better bounds. It points out the motivation of every step in the proof. Thus the Carleson-Hunt theorem becomes accessible to any analyst. The book also contains the first detailed exposition of the fine results of Hunt, Sjölin, Soria, etc. on the convergence of Fourier series. Its final chapters present original material. With both Fefferman's proof and the recent one of Lacey and Thiele in print, it becomes more important than ever to understand and compare these two related proofs with that of Carleson and Hunt. These alternative proofs do not yield all the results of the Carleson-Hunt proof. The intention of this monograph is to make Carleson's proof accessible to a wider audience, and to explain its consequences for the pointwise convergence of Fourier series for functions in spaces near ${\cal L}^1$, filling a well-known gap in the literature.

  4. Enhancement of Visual Field Predictions with Pointwise Exponential Regression (PER) and Pointwise Linear Regression (PLR).

    Science.gov (United States)

    Morales, Esteban; de Leon, John Mark S; Abdollahi, Niloufar; Yu, Fei; Nouri-Mahdavi, Kouros; Caprioli, Joseph

    2016-03-01

    The study was conducted to evaluate threshold smoothing algorithms to enhance prediction of the rates of visual field (VF) worsening in glaucoma. We studied 798 patients with primary open-angle glaucoma and 6 or more years of follow-up who underwent 8 or more VF examinations. Thresholds at each VF location for the first 4 years or first half of the follow-up time (whichever was greater) were smoothed with clusters defined by the nearest neighbor (NN), Garway-Heath, Glaucoma Hemifield Test (GHT), and weighting by the correlation of rates at all other VF locations. Thresholds were regressed with a pointwise exponential regression (PER) model and a pointwise linear regression (PLR) model. Smaller root mean square error (RMSE) values of the differences between the observed and the predicted thresholds at the last two follow-ups indicated better model predictions. The mean (SD) follow-up times for the smoothing and prediction phases were 5.3 (1.5) and 10.5 (3.9) years. The mean RMSE values for the PER and PLR models were: unsmoothed data, 6.09 and 6.55; NN, 3.40 and 3.42; Garway-Heath, 3.47 and 3.48; GHT, 3.57 and 3.74; and correlation of rates, 3.59 and 3.64. Smoothed VF data predicted better than unsmoothed data. Nearest neighbor provided the best predictions; PER also predicted consistently more accurately than PLR. Smoothing algorithms should be used when forecasting VF results with PER or PLR. The application of smoothing algorithms to VF data can improve forecasting at VF points to assist in treatment decisions.
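
    The two regression models can be compared at a single VF location with a few lines of Python. The toy threshold series, the log-space fit used for the exponential model, and the train/test split below are assumptions for illustration, not the study's protocol.

      import numpy as np

      def fit_plr(t, y):
          """Pointwise linear regression: threshold = a + b * t."""
          b, a = np.polyfit(t, y, 1)
          return lambda tt: a + b * np.asarray(tt)

      def fit_per(t, y, floor=1e-3):
          """Pointwise exponential regression: threshold = A * exp(r * t),
          fitted here as a line in log-space (a common simplification)."""
          logy = np.log(np.maximum(y, floor))
          r, logA = np.polyfit(t, logy, 1)
          return lambda tt: np.exp(logA + r * np.asarray(tt))

      def rmse(pred, obs):
          return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(obs)) ** 2)))

      # toy series: follow-up times (years) and dB thresholds at one VF location
      t = np.array([0, 1, 2, 3, 4, 5, 6, 7], float)
      y = np.array([30, 29, 28, 26, 25, 23, 22, 20], float)
      t_fit, y_fit = t[:5], y[:5]      # smoothing/fitting phase
      t_test, y_test = t[5:], y[5:]    # last follow-ups used for validation

      plr, per = fit_plr(t_fit, y_fit), fit_per(t_fit, y_fit)
      print("PLR RMSE:", rmse(plr(t_test), y_test))
      print("PER RMSE:", rmse(per(t_test), y_test))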

  5. Adjoint Based A Posteriori Analysis of Multiscale Mortar Discretizations with Multinumerics

    KAUST Repository

    Tavener, Simon; Wildey, Tim

    2013-01-01

    In this paper we derive a posteriori error estimates for linear functionals of the solution to an elliptic problem discretized using a multiscale nonoverlapping domain decomposition method. The error estimates are based on the solution of an appropriately defined adjoint problem.

  6. A Posteriori Finite Element Bounds for Sensitivity Derivatives of Partial-Differential-Equation Outputs. Revised

    Science.gov (United States)

    Lewis, Robert Michael; Patera, Anthony T.; Peraire, Jaume

    1998-01-01

    We present a Neumann-subproblem a posteriori finite element procedure for the efficient and accurate calculation of rigorous, 'constant-free' upper and lower bounds for sensitivity derivatives of functionals of the solutions of partial differential equations. The design motivation for sensitivity derivative error control is discussed; the a posteriori finite element procedure is described; the asymptotic bounding properties and computational complexity of the method are summarized; and illustrative numerical results are presented.

  7. Energy-pointwise discrete ordinates transport methods

    International Nuclear Information System (INIS)

    Williams, M.L.; Asgari, M.; Tashakorri, R.

    1997-01-01

    A very brief description is given of a one-dimensional code, CENTRM, which computes a detailed, space-dependent flux spectrum in a pointwise-energy representation within the resolved resonance range. The code will become a component in the SCALE system to improve computation of self-shielded cross sections, thereby enhancing the accuracy of codes such as KENO. CENTRM uses discrete-ordinates transport theory with an arbitrary angular quadrature order and a Legendre expansion of scattering anisotropy for moderator materials and heavy nuclides. The CENTRM program provides capability to deterministically compute full energy range, space-dependent angular flux spectra, rigorously accounting for resonance fine-structure and scattering anisotropy effects

  8. Pointwise probability reinforcements for robust statistical inference.

    Science.gov (United States)

    Frénay, Benoît; Verleysen, Michel

    2014-02-01

    Statistical inference using machine learning techniques may be difficult with small datasets because of abnormally frequent data (AFDs). AFDs are observations that are much more frequent in the training sample than they should be, with respect to their theoretical probability, and include, e.g., outliers. Estimates of parameters tend to be biased towards models which support such data. This paper proposes to introduce pointwise probability reinforcements (PPRs): the probability of each observation is reinforced by a PPR and a regularisation allows controlling the amount of reinforcement which compensates for AFDs. The proposed solution is very generic, since it can be used to robustify any statistical inference method which can be formulated as a likelihood maximisation. Experiments show that PPRs can be easily used to tackle regression, classification and projection: models are freed from the influence of outliers. Moreover, outliers can be filtered manually since an abnormality degree is obtained for each observation. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Model reduction using a posteriori analysis

    KAUST Repository

    Whiteley, Jonathan P.

    2010-01-01

    Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. © 2010 Elsevier Inc.

  10. Model reduction using a posteriori analysis

    KAUST Repository

    Whiteley, Jonathan P.

    2010-05-01

    Mathematical models in biology and physiology are often represented by large systems of non-linear ordinary differential equations. In many cases, an observed behaviour may be written as a linear functional of the solution of this system of equations. A technique is presented in this study for automatically identifying key terms in the system of equations that are responsible for a given linear functional of the solution. This technique is underpinned by ideas drawn from a posteriori error analysis. This concept has been used in finite element analysis to identify regions of the computational domain and components of the solution where a fine computational mesh should be used to ensure accuracy of the numerical solution. We use this concept to identify regions of the computational domain and components of the solution where accurate representation of the mathematical model is required for accuracy of the functional of interest. The technique presented is demonstrated by application to a model problem, and then to automatically deduce known results from a cell-level cardiac electrophysiology model. © 2010 Elsevier Inc.

  11. Rapid pointwise stabilization of vibrating strings and beams

    Directory of Open Access Journals (Sweden)

    Alia BARHOUMI

    2009-11-01

    Applying a general construction and using earlier results on observability, we prove, under rather general assumptions, rapid pointwise stabilization of vibrating strings and beams.

  12. The undefined function differs from the pointwise undefined function

    OpenAIRE

    Dosch, Walter (Prof.)

    1993-01-01

    The undefined function differs from the pointwise undefined function. - In: Joint Conference on Declarative Programming : Proceedings / Maria I. Sessa ... (eds.). - Salerno : Univ. degli Studi, 1995. - S. 257-268

  13. A priori and a posteriori approaches in human reliability

    International Nuclear Information System (INIS)

    Griffon-Fouco, M.; Gagnolet, P.

    1981-09-01

    The French atomic energy commission (CEA) and the French electric power supplier (EDF) have carried out joint studies on human factors in nuclear safety. This paper deals with these studies, which combine two approaches: - An a posteriori approach, to determine the rate of human errors and their causes: an analysis of incident data banks and an analysis of human errors on a simulator are presented. - An a priori approach, to identify the potential factors of human errors: an analysis of control room design and an analysis of the writing of procedures are presented. The possibility of using these two approaches together to prevent and quantify human errors is discussed.

  14. Adjoint Based A Posteriori Analysis of Multiscale Mortar Discretizations with Multinumerics

    KAUST Repository

    Tavener, Simon

    2013-01-01

    In this paper we derive a posteriori error estimates for linear functionals of the solution to an elliptic problem discretized using a multiscale nonoverlapping domain decomposition method. The error estimates are based on the solution of an appropriately defined adjoint problem. We present a general framework that allows us to consider both primal and mixed formulations of the forward and adjoint problems within each subdomain. The primal subdomains are discretized using either an interior penalty discontinuous Galerkin method or a continuous Galerkin method with weakly imposed Dirichlet conditions. The mixed subdomains are discretized using Raviart-Thomas mixed finite elements. The a posteriori error estimate also accounts for the errors due to adjoint-inconsistent subdomain discretizations. The coupling between the subdomain discretizations is achieved via a mortar space. We show that the numerical discretization error can be broken down into subdomain and mortar components which may be used to drive adaptive refinement. Copyright © by SIAM.

  15. A computer program for the pointwise functions generation

    International Nuclear Information System (INIS)

    Caldeira, Alexandre D.

    1995-01-01

    A computer program developed to generate pointwise functions, by combining tabulated values and/or mathematical expressions, for use as weighting functions for nuclear data is presented. This simple program can be an important tool for researchers involved in group constants generation. (author). 5 refs, 2 figs
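
    The kind of combination described above, tabulated values merged with a mathematical expression to form a pointwise weighting function, can be sketched as follows. The table, the log-log interpolation, and the assumed 1/E shape are all illustrative placeholders, not the program's actual data or algorithm.

      import numpy as np

      # hypothetical tabulated weighting data (energy in eV, weight value)
      table_E = np.array([1.0, 10.0, 100.0, 1.0e3, 1.0e4])
      table_w = np.array([0.2, 0.5, 1.0, 0.8, 0.3])

      def pointwise_weight(E, expr=lambda E: 1.0 / E):
          """Combine a tabulated function (log-log interpolation) with a
          mathematical expression (here an assumed 1/E shape) to produce a
          pointwise weighting function on an arbitrary energy grid."""
          E = np.asarray(E, float)
          tabulated = np.exp(np.interp(np.log(E), np.log(table_E), np.log(table_w)))
          return tabulated * expr(E)

      grid = np.logspace(0, 4, 9)     # pointwise energy grid, 1 eV to 10 keV
      print(pointwise_weight(grid))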

  16. Learning With Mixed Hard/Soft Pointwise Constraints.

    Science.gov (United States)

    Gnecco, Giorgio; Gori, Marco; Melacci, Stefano; Sanguineti, Marcello

    2015-09-01

    A learning paradigm is proposed and investigated, in which the classical framework of learning from examples is enhanced by the introduction of hard pointwise constraints, i.e., constraints imposed on a finite set of examples that cannot be violated. Such constraints arise, e.g., when requiring coherent decisions of classifiers acting on different views of the same pattern. The classical examples of supervised learning, which can be violated at the cost of some penalization (quantified by the choice of a suitable loss function), play the role of soft pointwise constraints. Constrained variational calculus is exploited to derive a representer theorem that provides a description of the functional structure of the optimal solution to the proposed learning paradigm. It is shown that such an optimal solution can be represented in terms of a set of support constraints, which generalize the concept of support vectors and open the doors to a novel learning paradigm, called support constraint machines. The general theory is applied to derive the representation of the optimal solution to the problem of learning from hard linear pointwise constraints combined with soft pointwise constraints induced by supervised examples. In some cases, closed-form optimal solutions are obtained.

  17. A Point-Wise Quantification of Asymmetry Using Deformation Fields

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur; Lanche, Stephanie; Darvann, Tron Andre

    2007-01-01

    of the resulting displacement vectors on the left and right side of the symmetry plane, gives a point-wise measure of asymmetry. The asymmetry measure was applied to the study of Crouzon syndrome using Micro CT scans of genetically modified mice. Crouzon syndrome is characterised by the premature fusion of cranial...

  18. [ETHICAL PRINCIPLES AND A POSTERIORI JUSTIFICATIONS]

    Science.gov (United States)

    Heintz, Monica

    2015-12-01

    It is difficult to conceive that the human being, while being the same everywhere, could be cared for in such different ways in other societies. Anthropologists acknowledge that the diversity of cultures implies a diversity of moral values, and thus that in a multicultural society the individual could draw upon different moral frames to justify the peculiarities of her/his demand for care. But how can we determine which moral frame catalyzes behaviour when all we can record are a posteriori justifications of actions? In most multicultural societies where several moral frames coexist, there is an implicit hierarchy between ethical systems derived from a hierarchy of power, which falsifies these a posteriori justifications. Moreover, anthropologists often fail to acknowledge that individual behaviour does not always reflect individual values, but is more often the result of negotiations between the moral frames available in society and her/his own desires and personal experience. This is certainly due to the difficulty of accounting for a dynamic and complex interplay of moral values that cannot be analysed as a system. The impact of individual experience on the way individuals give or receive care could also be only weakly linked to a moral system, even when this reference comes up explicitly in the a posteriori justifications.

  19. Pointwise Partial Information Decomposition Using the Specificity and Ambiguity Lattices

    Science.gov (United States)

    Finn, Conor; Lizier, Joseph

    2018-04-01

    What are the distinct ways in which a set of predictor variables can provide information about a target variable? When does a variable provide unique information, when do variables share redundant information, and when do variables combine synergistically to provide complementary information? The redundancy lattice from the partial information decomposition of Williams and Beer provided a promising glimpse at the answer to these questions. However, this structure was constructed using a much criticised measure of redundant information, and despite sustained research, no completely satisfactory replacement measure has been proposed. In this paper, we take a different approach, applying the axiomatic derivation of the redundancy lattice to a single realisation from a set of discrete variables. To overcome the difficulty associated with signed pointwise mutual information, we apply this decomposition separately to the unsigned entropic components of pointwise mutual information which we refer to as the specificity and ambiguity. This yields a separate redundancy lattice for each component. Then based upon an operational interpretation of redundancy, we define measures of redundant specificity and ambiguity enabling us to evaluate the partial information atoms in each lattice. These atoms can be recombined to yield the sought-after multivariate information decomposition. We apply this framework to canonical examples from the literature and discuss the results and the various properties of the decomposition. In particular, the pointwise decomposition using specificity and ambiguity satisfies a chain rule over target variables, which provides new insights into the so-called two-bit-copy example.
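    The decomposition described above can be made concrete with a few lines of code. The sketch below is not the authors' implementation; for each realisation (s, t) of a toy pair of binary variables it computes the pointwise mutual information together with the two unsigned entropic components, here taken (an assumption on our part, following one common convention) to be the specificity -log p(s) and the ambiguity -log p(s|t).

```python
import numpy as np

# Toy joint distribution p(s, t) for two binary variables.
p_st = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_s = p_st.sum(axis=1)   # marginal p(s)
p_t = p_st.sum(axis=0)   # marginal p(t)

def pointwise_terms(s, t):
    """Return (pmi, specificity, ambiguity) for a single realisation (s, t)."""
    specificity = -np.log2(p_s[s])                 # -log p(s): how surprising s is
    ambiguity = -np.log2(p_st[s, t] / p_t[t])      # -log p(s|t): what t leaves unresolved
    return specificity - ambiguity, specificity, ambiguity

for s in (0, 1):
    for t in (0, 1):
        pmi, spec, amb = pointwise_terms(s, t)
        print(f"s={s} t={t}: pmi={pmi:+.3f} specificity={spec:.3f} ambiguity={amb:.3f}")
```

    Both components are non-negative, while their difference, the pointwise mutual information, can be negative for misinformative realisations; this is the sign problem the separate specificity and ambiguity lattices are designed to avoid.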

  20. Maximum a posteriori decoder for digital communications

    Science.gov (United States)

    Altes, Richard A. (Inventor)

    1997-01-01

    A system and method for decoding by identification of the most likely phase coded signal corresponding to received data. The present invention has particular application to communication with signals that experience spurious random phase perturbations. The generalized estimator-correlator uses a maximum a posteriori (MAP) estimator to generate phase estimates for correlation with incoming data samples and for correlation with mean phases indicative of unique hypothesized signals. The result is a MAP likelihood statistic for each hypothesized transmission, wherein the highest value statistic identifies the transmitted signal.
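    As a generic illustration of the decision rule behind such a decoder (not the patented estimator-correlator itself), the following sketch selects, among a set of hypothesized signals with known priors, the one with the largest log posterior under an assumed additive Gaussian noise model; the signal values, priors and noise level are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothesized (noise-free) signals, one per row, and their prior probabilities.
signals = np.array([[ 1.0,  1.0, -1.0, -1.0],
                    [ 1.0, -1.0,  1.0, -1.0],
                    [-1.0, -1.0,  1.0,  1.0]])
priors = np.array([0.5, 0.3, 0.2])
sigma = 0.8                                    # assumed noise standard deviation

# Received data: hypothesis 1 corrupted by additive Gaussian noise.
received = signals[1] + sigma * rng.standard_normal(signals.shape[1])

# MAP rule: maximise log prior + log likelihood (up to an additive constant).
log_post = np.log(priors) - np.sum((received - signals) ** 2, axis=1) / (2.0 * sigma ** 2)
print("decided hypothesis:", int(np.argmax(log_post)))
```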

  1. On Pointwise Approximation of Conjugate Functions by Some Hump Matrix Means of Conjugate Fourier Series

    Directory of Open Access Journals (Sweden)

    W. Łenski

    2015-01-01

    The results generalizing some theorems on (N, p_n)(E, γ) summability are shown. The same degrees of pointwise approximation as in earlier papers are obtained under weaker assumptions on the considered functions and the examined summability methods. From the presented pointwise results, the estimation on norm approximation is derived. Some special cases are also formulated as corollaries.

  2. Pointwise convergence and Ascoli theorems for nearness spaces

    Directory of Open Access Journals (Sweden)

    Zhanbo Yang

    2009-04-01

    We first study subspaces and product spaces in the context of nearness spaces and prove that U-N spaces, C-N spaces, PN spaces and totally bounded nearness spaces are nearness hereditary; T-N spaces and compact nearness spaces are N-closed hereditary. We prove that N2 plus compact implies N-closed subsets. We prove that totally bounded, compact and N2 are productive. We generalize the concept of neighborhood systems to nearness spaces and prove that the nearness neighborhood systems are consistent with the existing concepts of neighborhood systems in topological spaces, uniform spaces and proximity spaces, respectively, when considered in the respective sub-categories. We prove that a net of functions is convergent under the pointwise convergent nearness structure if and only if its cross-section at each point is convergent. We also prove two Ascoli-Arzelà-type theorems.

  3. A Posteriori Approach for Community Detection

    Institute of Scientific and Technical Information of China (English)

    Chuan Shi; Zhen-Yu Yan; Xin Pan; Ya-Nan Cai; Bin Wu

    2011-01-01

    Conventional community detection approaches in complex networks are based on the optimization of an a priori decision, i.e., a single quality function designed beforehand. This paper proposes an a posteriori decision approach for community detection. The approach includes two phases: in the search phase, a special multi-objective evolutionary algorithm is designed to search for a set of tradeoff partitions that reveal the community structure at different scales in one run; in the decision phase, three model selection criteria and the Possibility Matrix method are proposed to aid decision makers in selecting the preferable solutions by differentiating the set of optimal solutions according to their qualities. The experiments on five synthetic and real social networks illustrate that, in one run, our method is able to obtain many candidate solutions, which effectively avoids the resolution limit existing in a priori decision approaches. In addition, our method can discover more authentic and comprehensive community structures than those a priori decision approaches.

  4. A Pointwise Dimension Analysis of the Las Campanas Redshift Survey

    Science.gov (United States)

    Best, J. S.

    1999-12-01

    The modern motivation for fractal geometry may best be summed up by this quote of Benoit Mandelbrot: ``Mountains are not cones, clouds are not spheres, coastlines are not circles, and bark is not smooth, nor does lightning travel in a straight line.'' Fractals are, in simplest terms, ``objects which are (approximately) self-similar on all scales.'' The renewed modern interest in fractals has found as one of its applications the study of large-scale structure, giving a quantitative descriptive scheme to ideas that had been expressed qualitatively as early as the 1920s. This paper presents the preliminary results of an analysis of the structure of the Las Campanas Redshift Survey, or LCRS. The LCRS is a survey of approximately 26,000 galaxies (surveyed as six declination slices) that has been studied extensively over the past few years, with an eye towards understanding large-scale structure. For this analysis, I have used the pointwise dimension, an easy-to-apply fractal statistic which has been previously used to study cluster interiors, galactic distributions, and cluster distributions. The present analysis has been performed to serve as a guide for the study of future large redshift surveys. This research has been funded by National Science Foundation grant AST-9808608.
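    For readers unfamiliar with the statistic, the pointwise dimension at a point x is the scaling exponent of the number of neighbours within radius r, D_p(x) = lim_{r→0} log N(x, r) / log r. The sketch below, which is not the code used in the paper, estimates it as the slope of log N versus log r for a synthetic planar point distribution (so the expected answer is close to 2).

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic catalogue: 20000 points spread uniformly in a thin planar slab,
# so the pointwise dimension at interior points should be close to 2.
pts = np.column_stack([rng.uniform(0, 100, 20000),
                       rng.uniform(0, 100, 20000),
                       rng.uniform(0, 1, 20000)])

def pointwise_dimension(points, centre, radii):
    """Estimate D_p(x) as the slope of log N(<r) versus log r."""
    dist = np.linalg.norm(points - centre, axis=1)
    counts = np.array([(dist < r).sum() for r in radii])
    good = counts > 0
    slope, _ = np.polyfit(np.log(radii[good]), np.log(counts[good]), 1)
    return slope

radii = np.logspace(0.3, 1.3, 10)               # probe scales roughly between 2 and 20
centre = np.array([50.0, 50.0, 0.5])
print("estimated pointwise dimension:", pointwise_dimension(pts, centre, radii))
```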

  5. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph; Hoel, Hå kon; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations

  6. Simultaneous maximum a posteriori longitudinal PET image reconstruction

    Science.gov (United States)

    Ellis, Sam; Reader, Andrew J.

    2017-09-01

    Positron emission tomography (PET) is frequently used to monitor functional changes that occur over extended time scales, for example in longitudinal oncology PET protocols that include routine clinical follow-up scans to assess the efficacy of a course of treatment. In these contexts PET datasets are currently reconstructed into images using single-dataset reconstruction methods. Inspired by recently proposed joint PET-MR reconstruction methods, we propose to reconstruct longitudinal datasets simultaneously by using a joint penalty term in order to exploit the high degree of similarity between longitudinal images. We achieved this by penalising voxel-wise differences between pairs of longitudinal PET images in a one-step-late maximum a posteriori (MAP) fashion, resulting in the MAP simultaneous longitudinal reconstruction (SLR) method. The proposed method reduced reconstruction errors and visually improved images relative to standard maximum likelihood expectation-maximisation (ML-EM) in simulated 2D longitudinal brain tumour scans. In reconstructions of split real 3D data with inserted simulated tumours, noise across images reconstructed with MAP-SLR was reduced to levels equivalent to doubling the number of detected counts when using ML-EM. Furthermore, quantification of tumour activities was largely preserved over a variety of longitudinal tumour changes, including changes in size and activity, with larger changes inducing larger biases relative to standard ML-EM reconstructions. Similar improvements were observed for a range of counts levels, demonstrating the robustness of the method when used with a single penalty strength. The results suggest that longitudinal regularisation is a simple but effective method of improving reconstructed PET images without using resolution degrading priors.
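    A heavily simplified 1D sketch of the idea, not the authors' implementation, is given below: two Poisson data sets are reconstructed with a one-step-late MAP-EM update in which the penalty gradient couples the two longitudinal images through their voxel-wise differences. The system matrix, count levels and penalty strength are arbitrary toy choices.

```python
import numpy as np

rng = np.random.default_rng(2)

n_vox, n_bins = 32, 64
A = rng.uniform(0.0, 1.0, (n_bins, n_vox))       # hypothetical system matrix
truth1 = np.ones(n_vox); truth1[10:15] = 4.0     # scan 1 with a "tumour"
truth2 = np.ones(n_vox); truth2[10:15] = 2.5     # scan 2: tumour activity has decreased
y1 = rng.poisson(A @ truth1)                     # measured counts, scan 1
y2 = rng.poisson(A @ truth2)                     # measured counts, scan 2

beta = 0.05                                      # strength of the longitudinal coupling
lam1 = np.ones(n_vox)
lam2 = np.ones(n_vox)
sens = A.sum(axis=0)                             # sensitivity image

for _ in range(100):
    r1 = A.T @ (y1 / np.maximum(A @ lam1, 1e-12))    # EM back-projection ratios
    r2 = A.T @ (y2 / np.maximum(A @ lam2, 1e-12))
    g1 = beta * (lam1 - lam2)                        # one-step-late gradients of the
    g2 = beta * (lam2 - lam1)                        # joint penalty 0.5*sum((lam1-lam2)**2)
    lam1 = lam1 * r1 / np.maximum(sens + g1, 1e-12)
    lam2 = lam2 * r2 / np.maximum(sens + g2, 1e-12)

print("scan-1 tumour mean:", lam1[10:15].mean(), "scan-2 tumour mean:", lam2[10:15].mean())
```

    Setting beta to zero reduces each update to standard ML-EM; increasing it pulls the two reconstructions towards one another, which is the regularising effect discussed in the abstract.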

  7. Stress-induced alterations of left-right electrodermal activity coupling indexed by pointwise transinformation

    Czech Academy of Sciences Publication Activity Database

    Světlák, M.; Bob, P.; Roman, R.; Ježek, S.; Damborská, A.; Chládek, Jan; Shaw, D. J.; Kukleta, M.

    2013-01-01

    Roč. 62, č. 6 (2013), s. 711-719 ISSN 0862-8408 Institutional support: RVO:68081731 Keywords : electrodermal activity * pointwise transinformation * autonomic nervous system * asymmetry * stress Subject RIV: CE - Biochemistry Impact factor: 1.487, year: 2013

  8. ALTERNATE PURSUIT WITH THREE PARTICIPANTS (THE CASE OF POINTWISE MEETING)

    Directory of Open Access Journals (Sweden)

    Viktor Shiryayev

    2016-03-01

    The issues connected with the alternate pursuit of a group of escapees are considered in a number of papers. In papers [1–3] the solution of the problem has been found under the assumption that the next meeting is selected at the initial time (by the program) and the players move along straight lines. In paper [4] the solution of the task using the approach of R. Isaacs is given. In paper [5] the choice opportunities of the next meeting (both program and positional) are considered. The article deals with a simple differential game in the plane between a pursuer P and a coalition of two escapees E={E1,E2}. The movement of all the players is assumed to be inertialess. The speed of the pursuer P exceeds the speed of each of the escapees. The targets, physical abilities and the exact location of each other at any moment of the game are known to all players. The price of the game for the coalition (against the pursuer P) is (minus) the total time spent by the pursuer P on the pointwise meeting with E1 and E2. By a meeting we mean a coincidence of the pursuer's and the escapee's locations. The order of pursuit is supposed to be chosen at the initial time (a program-selected order of meetings). The limit of the security zone of the second escapee has been found. A geometric approach is used in solving the problem. The resulting system of equations is solved numerically by means of computer algebra, in particular using Wolfram Mathematica. After defining the boundary of the second escapee's security zone, one can study the game between the pursuer P and three escapees acting in concert (the first escapee is eliminated from the game).

  9. A Posteriori Restoration of Block Transform-Compressed Data

    Science.gov (United States)

    Brown, R.; Boden, A. F.

    1995-01-01

    The Galileo spacecraft will use lossy data compression for the transmission of its science imagery over the low-bandwidth communication system. The technique chosen for image compression is a block transform technique based on the Integer Cosine Transform, a derivative of the JPEG image compression standard. Considered here are two known a posteriori enhancement techniques, which are adapted.

  10. Design of wave breaking experiments and A-Posteriori Simulations

    NARCIS (Netherlands)

    Kurnia, R.; Kurnia, Ruddy; van Groesen, Embrecht W.C.

    2014-01-01

    This report presents results of 30 wave breaking experiments conducted in the long wave tank of TU Delft, Department of Maritime and Transport Technology (6,7 and 10-12 March 2014), together with simulations performed before the experiment to determine the required wave maker motion and a-posteriori

  11. Design of wave breaking experiments and A-Posteriori Simulations

    NARCIS (Netherlands)

    Kurnia, Ruddy; van Groesen, Embrecht W.C.

    This report presents results of 30 wave breaking experiments conducted in the long wave tank of TU Delft, Department of Maritime and Transport Technology (6,7 and 10-12 March 2014), together with simulations performed before the experiment to determine the required wave maker motion and a-posteriori

  12. A posteriori uncertainty quantification of PIV-based pressure data

    NARCIS (Netherlands)

    Azijli, I.; Sciacchitano, A.; Ragni, D.; Palha Da Silva Clérigo, A.; Dwight, R.P.

    2016-01-01

    A methodology for a posteriori uncertainty quantification of pressure data retrieved from particle image velocimetry (PIV) is proposed. It relies upon the Bayesian framework, where the posterior distribution (probability distribution of the true velocity, given the PIV measurements) is obtained from

  13. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    SUMMARY: A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  14. Pointwise Multipliers on Spaces of Homogeneous Type in the Sense of Coifman and Weiss

    Directory of Open Access Journals (Sweden)

    Yanchang Han

    2014-01-01

    homogeneous type in the sense of Coifman and Weiss, pointwise multipliers of inhomogeneous Besov and Triebel-Lizorkin spaces are obtained. We make no additional assumptions on the quasi-metric or the doubling measure. Hence, the results of this paper extend earlier related results to a more general setting.

  15. Extracting volatility signal using maximum a posteriori estimation

    Science.gov (United States)

    Neto, David

    2016-11-01

    This paper outlines a methodology to estimate a denoised volatility signal for foreign exchange rates using a hidden Markov model (HMM). For this purpose a maximum a posteriori (MAP) estimation is performed. A double exponential prior is used for the state variable (the log-volatility) in order to allow sharp jumps in realizations and then log-returns marginal distributions with heavy tails. We consider two routes to choose the regularization and we compare our MAP estimate to realized volatility measure for three exchange rates.
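    A minimal sketch of this kind of MAP estimate is shown below (not the paper's estimator): the log-volatility is obtained by minimising the Gaussian negative log-likelihood of the returns plus an L1-type penalty on its increments, which is the negative log of a double exponential prior and therefore tolerates sharp jumps. The simulated returns, the regularisation weight and the smoothing of the absolute value are assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)

# Simulated log-returns with a single volatility jump (a stand-in for FX data).
T = 300
true_logvar = np.where(np.arange(T) < 150, np.log(0.01 ** 2), np.log(0.03 ** 2))
r = rng.standard_normal(T) * np.exp(0.5 * true_logvar)

lam = 20.0   # regularisation weight; in practice it would be chosen by one of the two routes

def neg_log_posterior(x):
    # Observation model: r_t ~ N(0, exp(x_t)), with x_t the log-variance.
    nll = 0.5 * np.sum(x + r ** 2 * np.exp(-x))
    # Double exponential (Laplace) prior on increments -> L1 penalty, smoothed for the optimiser.
    prior = lam * np.sum(np.sqrt(np.diff(x) ** 2 + 1e-8))
    return nll + prior

x0 = np.full(T, np.log(np.var(r)))
res = minimize(neg_log_posterior, x0, method="L-BFGS-B")
vol_map = np.exp(0.5 * res.x)                    # denoised volatility signal
print("mean estimated vol, first/second half:", vol_map[:150].mean(), vol_map[150:].mean())
```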

  16. A Novel A Posteriori Investigation of Scalar Flux Models for Passive Scalar Dispersion in Compressible Boundary Layer Flows

    Science.gov (United States)

    Braman, Kalen; Raman, Venkat

    2011-11-01

    A novel direct numerical simulation (DNS) based a posteriori technique has been developed to investigate scalar transport modeling error. The methodology is used to test Reynolds-averaged Navier-Stokes turbulent scalar flux models for compressible boundary layer flows. Time-averaged DNS velocity and turbulence fields provide the information necessary to evolve the time-averaged scalar transport equation without requiring the use of turbulence modeling. With this technique, passive dispersion of a scalar from a boundary layer surface in a supersonic flow is studied with scalar flux modeling error isolated from any flowfield modeling errors. Several different scalar flux models are used. It is seen that the simple gradient diffusion model overpredicts scalar dispersion, while anisotropic scalar flux models underpredict dispersion. Further, the use of more complex models does not necessarily guarantee an increase in predictive accuracy, indicating that key physics is missing from existing models. Using comparisons of both a priori and a posteriori scalar flux evaluations with DNS data, the main modeling shortcomings are identified. Results will be presented for different boundary layer conditions.

  17. A posteriori model validation for the temporal order of directed functional connectivity maps.

    Science.gov (United States)

    Beltz, Adriene M; Molenaar, Peter C M

    2015-01-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).
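    The white noise tests referred to here can be illustrated with a standard portmanteau statistic. The sketch below (a generic Ljung-Box test, not necessarily the exact test used in the study) checks whether one-step-ahead prediction residuals are serially uncorrelated; residuals that still contain an unmodelled second-order dependence fail the test, mirroring the higher-order lags discussed in the abstract.

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(4)

def ljung_box(resid, max_lag=10):
    """Portmanteau test of the null hypothesis that residuals are white noise."""
    n = len(resid)
    resid = resid - resid.mean()
    denom = np.sum(resid ** 2)
    lags = np.arange(1, max_lag + 1)
    acf = np.array([np.sum(resid[k:] * resid[:-k]) / denom for k in lags])
    q = n * (n + 2) * np.sum(acf ** 2 / (n - lags))
    return q, chi2.sf(q, df=max_lag)

# Residuals from a well-specified model should pass; residuals hiding an AR(2) term should fail.
white = rng.standard_normal(500)
ar2 = np.zeros(500)
for t in range(2, 500):
    ar2[t] = 0.5 * ar2[t - 1] + 0.3 * ar2[t - 2] + rng.standard_normal()

print("white residuals (Q, p):", ljung_box(white))
print("unmodelled AR(2) residuals (Q, p):", ljung_box(ar2))
```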

  18. A posteriori model validation for the temporal order of directed functional connectivity maps

    Directory of Open Access Journals (Sweden)

    Adriene M. Beltz

    2015-08-01

    A posteriori model validation for the temporal order of neural directed functional connectivity maps is rare. This is striking because models that require sequential independence among residuals are regularly implemented. The aim of the current study was (a) to apply to directed functional connectivity maps of functional magnetic resonance imaging data an a posteriori model validation procedure (i.e., white noise tests of one-step-ahead prediction errors combined with decision criteria for revising the maps based upon Lagrange Multiplier tests), and (b) to demonstrate how the procedure applies to single-subject simulated, single-subject task-related, and multi-subject resting state data. Directed functional connectivity was determined by the unified structural equation model family of approaches in order to map contemporaneous and first order lagged connections among brain regions at the group- and individual-levels while incorporating external input, then white noise tests were run. Findings revealed that the validation procedure successfully detected unmodeled sequential dependencies among residuals and recovered higher order (greater than one) simulated connections, and that the procedure can accommodate task-related input. Findings also revealed that lags greater than one were present in resting state data: With a group-level network that contained only contemporaneous and first order connections, 44% of subjects required second order, individual-level connections in order to obtain maps with white noise residuals. Results have broad methodological relevance (e.g., temporal validation is necessary after directed functional connectivity analyses because the presence of unmodeled higher order sequential dependencies may bias parameter estimates) and substantive implications (e.g., higher order lags may be common in resting state data).

  19. Error Estimation and Accuracy Improvements in Nodal Transport Methods

    International Nuclear Information System (INIS)

    Zamonsky, O.M.

    2000-01-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The obtained new numerical methodologies increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used previously. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows quantifying the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  20. The role of a posteriori mathematics in physics

    Science.gov (United States)

    MacKinnon, Edward

    2018-05-01

    The calculus that co-evolved with classical mechanics relied on definitions of functions and differentials that accommodated physical intuitions. In the early nineteenth century mathematicians began the rigorous reformulation of calculus and eventually succeeded in putting almost all of mathematics on a set-theoretic foundation. Physicists traditionally ignore this rigorous mathematics. Physicists often rely on a posteriori math, a practice of using physical considerations to determine mathematical formulations. This is illustrated by examples from classical and quantum physics. A justification of such practice stems from a consideration of the role of phenomenological theories in classical physics and effective theories in contemporary physics. This relates to the larger question of how physical theories should be interpreted.

  1. Theory and application of deterministic multidimensional pointwise energy lattice physics methods

    International Nuclear Information System (INIS)

    Zerkle, M.L.

    1999-01-01

    The theory and application of deterministic, multidimensional, pointwise energy lattice physics methods are discussed. These methods may be used to solve the neutron transport equation in multidimensional geometries using near-continuous energy detail to calculate equivalent few-group diffusion theory constants that rigorously account for spatial and spectral self-shielding effects. A dual energy resolution slowing down algorithm is described which reduces the computer memory and disk storage requirements for the slowing down calculation. Results are presented for a 2D BWR pin cell depletion benchmark problem

  2. Pointwise Stabilization of a Hybrid System and Optimal Location of Actuator

    International Nuclear Information System (INIS)

    Ammari, Kais; Saidi, Abdelkader

    2007-01-01

    We consider a pointwise stabilization problem for a model arising in the control of noise. We prove that we have exponential stability for the low frequencies but not for the high frequencies. Thus, we give an explicit polynomial decay estimate at high frequencies that is valid for regular initial data, while clarifying how the constant appearing in this estimate behaves as a function of the cutoff frequency. We propose a numerical approximation of the model and study numerically the best location of the actuator at low frequencies.

  3. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-04-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  4. Quantitative Pointwise Estimate of the Solution of the Linearized Boltzmann Equation

    Science.gov (United States)

    Lin, Yu-Chu; Wang, Haitao; Wu, Kung-Chien

    2018-06-01

    We study the quantitative pointwise behavior of the solutions of the linearized Boltzmann equation for hard potentials, Maxwellian molecules and soft potentials, with Grad's angular cutoff assumption. More precisely, for solutions inside the finite Mach number region (time like region), we obtain the pointwise fluid structure for hard potentials and Maxwellian molecules, and optimal time decay in the fluid part and sub-exponential time decay in the non-fluid part for soft potentials. For solutions outside the finite Mach number region (space like region), we obtain sub-exponential decay in the space variable. The singular wave estimate, regularization estimate and refined weighted energy estimate play important roles in this paper. Our results extend the classical results of Liu and Yu (Commun Pure Appl Math 57:1543-1608, 2004), (Bull Inst Math Acad Sin 1:1-78, 2006), (Bull Inst Math Acad Sin 6:151-243, 2011) and Lee et al. (Commun Math Phys 269:17-37, 2007) to hard and soft potentials by imposing suitable exponential velocity weight on the initial condition.

  5. Common Fixed Points for Asymptotic Pointwise Nonexpansive Mappings in Metric and Banach Spaces

    Directory of Open Access Journals (Sweden)

    P. Pasom

    2012-01-01

    Let C be a nonempty bounded closed convex subset of a complete CAT(0) space X. We prove that the common fixed point set of any commuting family of asymptotic pointwise nonexpansive mappings on C is nonempty, closed and convex. We also show that, under some suitable conditions, the sequence {x_k}_{k=1}^∞ defined by x_{k+1} = (1 - t_{mk}) x_k ⊕ t_{mk} T_m^{n_k} y_{(m-1)k}, y_{(m-1)k} = (1 - t_{(m-1)k}) x_k ⊕ t_{(m-1)k} T_{m-1}^{n_k} y_{(m-2)k}, y_{(m-2)k} = (1 - t_{(m-2)k}) x_k ⊕ t_{(m-2)k} T_{m-2}^{n_k} y_{(m-3)k}, …, y_{2k} = (1 - t_{2k}) x_k ⊕ t_{2k} T_2^{n_k} y_{1k}, y_{1k} = (1 - t_{1k}) x_k ⊕ t_{1k} T_1^{n_k} y_{0k}, y_{0k} = x_k, k ∈ N, converges to a common fixed point of T_1, T_2, …, T_m, where these are asymptotic pointwise nonexpansive mappings on C, {t_{ik}}_{k=1}^∞ are sequences in [0,1] for all i = 1, 2, …, m, and {n_k} is an increasing sequence of natural numbers. The related results for uniformly convex Banach spaces are also included.

  6. Converting point-wise nuclear cross sections to pole representation using regularized vector fitting

    Science.gov (United States)

    Peng, Xingjie; Ducru, Pablo; Liu, Shichang; Forget, Benoit; Liang, Jingang; Smith, Kord

    2018-03-01

    Direct Doppler broadening of nuclear cross sections in Monte Carlo codes has been widely sought for coupled reactor simulations. One recent approach proposed analytical broadening using a pole representation of the commonly used resonance models and the introduction of a local windowing scheme to improve performance (Hwang, 1987; Forget et al., 2014; Josey et al., 2015, 2016). This pole representation has been achieved in the past by converting resonance parameters in the evaluation nuclear data library into poles and residues. However, cross sections of some isotopes are only provided as point-wise data in ENDF/B-VII.1 library. To convert these isotopes to pole representation, a recent approach has been proposed using the relaxed vector fitting (RVF) algorithm (Gustavsen and Semlyen, 1999; Gustavsen, 2006; Liu et al., 2018). This approach however needs to specify ahead of time the number of poles. This article addresses this issue by adding a poles and residues filtering step to the RVF procedure. This regularized VF (ReV-Fit) algorithm is shown to efficiently converge the poles close to the physical ones, eliminating most of the superfluous poles, and thus enabling the conversion of point-wise nuclear cross sections.
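    The flavour of the filtering step can be conveyed with a toy example that has nothing to do with the actual nuclear data formalism: point-wise data generated from two poles are fitted by linear least squares against a redundant set of candidate Lorentzian (pole) shapes, and candidates whose residues are negligible are then discarded. All numbers, the pole grid and the threshold are invented for the illustration.

```python
import numpy as np

# Point-wise "cross section" built from two known poles (toy data, not ENDF).
E = np.linspace(1.0, 200.0, 2000)
true_poles = np.array([50.0 - 2.0j, 120.0 - 5.0j])
true_residues = np.array([30.0 + 0.0j, 80.0 + 0.0j])
sigma = sum(-(r / (E - p)).imag for r, p in zip(true_residues, true_poles))

# Redundant candidate poles on a coarse grid; residues found by linear least squares.
cand_poles = np.array([c - 3.0j for c in np.arange(10.0, 200.0, 10.0)])
basis = np.stack([-(1.0 / (E - p)).imag for p in cand_poles], axis=1)
residues, *_ = np.linalg.lstsq(basis, sigma, rcond=None)

# Filtering step: keep only candidates whose residues contribute appreciably.
keep = np.abs(residues) > 0.05 * np.abs(residues).max()
print("candidate poles:", len(cand_poles), "kept after filtering:", int(keep.sum()))
```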

  7. Criticality benchmarks for COG: A new point-wise Monte Carlo code

    International Nuclear Information System (INIS)

    Alesso, H.P.; Pearson, J.; Choi, J.S.

    1989-01-01

    COG is a new point-wise Monte Carlo code being developed and tested at LLNL for the Cray computer. It solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) charged particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems. However, its point-wise cross-sections also make it effective for a wide variety of criticality problems. COG has some similarities to a number of other computer codes used in the shielding and criticality community. These include the Lawrence Livermore National Laboratory (LLNL) codes TART and ALICE, the Los Alamos National Laboratory code MCNP, the Oak Ridge National Laboratory codes 05R, 06R, KENO, and MORSE, the SACLAY code TRIPOLI, and the MAGI code SAM. Each code is a little different in its geometry input and its random-walk modification options. Validating COG consists in part of running benchmark calculations against critical experiments as well as other codes. The objective of this paper is to present calculational results of a variety of critical benchmark experiments using COG, and to present the resulting code bias. Numerous benchmark calculations have been completed for a wide variety of critical experiments which generally involve both simple and complex physical problems. The COG results, which are reported in this paper, have been excellent

  8. A posteriori stereotype activation: The preservation of stereotypes through memory distortion

    OpenAIRE

    Knippenberg, A. van; Dijksterhuis, A.

    1996-01-01

    Four experiments investigated memory for stereotype-consistent and stereotype-inconsistent information after a posteriori stereotype activation. In Experiment 1 it was established that, in comparison with a no stereotype control condition, recall of stereotype-inconsistent behaviors of group members deteriorated after a posteriori stereotype activation, while recall of stereotype-consistent information was not affected. An explanation of this phenomenon in terms of the memory organization of ...

  9. MATRIX-VECTOR ALGORITHMS OF LOCAL POSTERIORI INFERENCE IN ALGEBRAIC BAYESIAN NETWORKS ON QUANTA PROPOSITIONS

    Directory of Open Access Journals (Sweden)

    A. A. Zolotin

    2015-07-01

    Posteriori inference is one of the three kinds of probabilistic-logic inference in the theory of probabilistic graphical models and the basis for processing knowledge patterns with probabilistic uncertainty using Bayesian networks. The paper deals with the task of describing local posteriori inference in algebraic Bayesian networks, which represent a class of probabilistic graphical models, by means of matrix-vector equations. The latter are essentially based on the use of the tensor product of matrices, the Kronecker degree and the Hadamard product. Matrix equations for calculating posteriori probability vectors within posteriori inference in knowledge patterns with quanta propositions are obtained. Similar equations of the same type have already been discussed within the confines of the theory of algebraic Bayesian networks, but they were built only for the case of posteriori inference in knowledge patterns on the ideals of conjuncts. During the synthesis and development of matrix-vector equations on quanta proposition probability vectors, a number of earlier results concerning normalizing factors in posteriori inference and the assignment of a linear projective operator with a selector vector were adapted. We consider all three types of incoming evidence - deterministic, stochastic and inaccurate - combined with scalar and interval estimation of the probability of truth of propositional formulas in the knowledge patterns. Linear programming problems are formed. Their solution gives the desired interval values of posterior probabilities in the case of inaccurate evidence or interval estimates in a knowledge pattern. That sort of description of posteriori inference makes it possible to extend the set of knowledge pattern types that can be used in local and global posteriori inference, as well as to simplify complex software implementation by use of existing third-party libraries, effectively supporting submission and processing of matrices and vectors when

  10. Error Estimation and Accuracy Improvements in Nodal Transport Methods; Estimacion de Errores y Aumento de la Precision en Metodos Nodales de Transporte

    Energy Technology Data Exchange (ETDEWEB)

    Zamonsky, O M [Comision Nacional de Energia Atomica, Centro Atomico Bariloche (Argentina)

    2000-07-01

    The accuracy of the solutions produced by the Discrete Ordinates neutron transport nodal methods is analyzed. The obtained new numerical methodologies increase the accuracy of the analyzed schemes and give a posteriori error estimators. The accuracy improvement is obtained with new equations that make the numerical procedure free of truncation errors and by proposing spatial reconstructions of the angular fluxes that are more accurate than those used previously. An a posteriori error estimator is rigorously obtained for one-dimensional systems that, in certain types of problems, allows quantifying the accuracy of the solutions. From comparisons with the one-dimensional results, an a posteriori error estimator is also obtained for multidimensional systems. Local indicators, which quantify the spatial distribution of the errors, are obtained by the decomposition of the mentioned estimators. This makes the proposed methodology suitable for performing adaptive calculations. Some numerical examples are presented to validate the theoretical developments and to illustrate the ranges where the proposed approximations are valid.

  11. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with the total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove theoretically convergence of the PAPA. In numerical experiments, performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in the convergence speed while providing comparable image quality. (paper)

  12. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

    This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
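    For orientation, the tau-leap discretization itself is easy to state: over each step of length tau, every reaction channel fires a Poisson-distributed number of times with mean equal to its propensity times tau. A minimal sketch for a birth-death process (toy rates, not from the paper) follows.

```python
import numpy as np

rng = np.random.default_rng(5)

# Birth-death process: X -> X+1 at rate k1, X -> X-1 at rate k2*X (toy rates).
k1, k2 = 10.0, 0.1
x, t = 0, 0.0
tau, t_end = 0.05, 50.0

while t < t_end:
    # Tau-leap step: each channel fires a Poisson number of times with mean propensity*tau.
    n_birth = rng.poisson(k1 * tau)
    n_death = rng.poisson(k2 * x * tau)
    x = max(x + n_birth - n_death, 0)
    t += tau

print("tau-leap state at t=50:", x, "(exact stationary mean is k1/k2 =", k1 / k2, ")")
```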

  13. A fast pointwise strategy for anisotropic wave-mode separation in TI media

    KAUST Repository

    Liu, Qiancheng

    2017-08-17

    The multi-component wavefield contains both compressional and shear waves. Separating wave-modes has many applications in seismic workflows. Conventionally, anisotropic wave-mode separation is implemented by either directly filtering in the wavenumber domain or nonstationary filtering in the space domain, which are computationally expensive. These methods could be categorized into the pseudo-derivative family and only work well within Finite Difference (FD) methods. In this paper, we establish a relationship between group-velocity direction and polarity direction and propose a method, which could go beyond modeling by FD. In particular, we are interested in performing wave-mode separation in a Spectral Element Method (SEM), which is widely used for seismic wave propagation on various scales. The separation is implemented pointwise, independent of its neighbor points, suitable for running in parallel. Moreover, no correction for amplitude and phase changes caused by the derivative operator is required. We have verified our scheme using numerical examples.

  14. Analysis of the stress field in a wedge using the fast expansions with pointwise determined coefficients

    Science.gov (United States)

    Chernyshov, A. D.; Goryainov, V. V.; Danshin, A. A.

    2018-03-01

    The stress problem for an elastic wedge-shaped cutter of finite dimensions with mixed boundary conditions is considered. The differential problem is reduced to a system of linear algebraic equations by applying twice the fast expansions with respect to the angular and radial coordinates. In order to determine the unknown coefficients of the fast expansions, the pointwise method is utilized. The derived solution has an explicit analytical form and is valid for the entire domain including its boundary. The computed profiles of the displacements and stresses in a cross-section of the cutter are provided. The stress field is investigated for various values of the opening angle and the cusp's radius.

  15. A fast pointwise strategy for anisotropic wave-mode separation in TI media

    KAUST Repository

    Liu, Qiancheng; Peter, Daniel; Lu, Yongming

    2017-01-01

    The multi-component wavefield contains both compressional and shear waves. Separating wave-modes has many applications in seismic workflows. Conventionally, anisotropic wave-mode separation is implemented by either directly filtering in the wavenumber domain or nonstationary filtering in the space domain, which are computationally expensive. These methods could be categorized into the pseudo-derivative family and only work well within Finite Difference (FD) methods. In this paper, we establish a relationship between group-velocity direction and polarity direction and propose a method, which could go beyond modeling by FD. In particular, we are interested in performing wave-mode separation in a Spectral Element Method (SEM), which is widely used for seismic wave propagation on various scales. The separation is implemented pointwise, independent of its neighbor points, suitable for running in parallel. Moreover, no correction for amplitude and phase changes caused by the derivative operator is required. We have verified our scheme using numerical examples.

  16. Pointwise second-order necessary optimality conditions and second-order sensitivity relations in optimal control

    Science.gov (United States)

    Frankowska, Hélène; Hoehener, Daniel

    2017-06-01

    This paper is devoted to pointwise second-order necessary optimality conditions for the Mayer problem arising in optimal control theory. We first show that with every optimal trajectory it is possible to associate a solution p(·) of the adjoint system (as in the Pontryagin maximum principle) and a matrix solution W(·) of an adjoint matrix differential equation that satisfy a second-order transversality condition and a second-order maximality condition. These conditions seem to be a natural second-order extension of the maximum principle. We then prove a Jacobson-like necessary optimality condition for general control systems and measurable optimal controls that may be only "partially singular" and may take values on the boundary of control constraints. Finally we investigate the second-order sensitivity relations along optimal trajectories involving both p(·) and W(·).

  17. OPTIMAL CONTROL OF A NONLINEAR COUPLED ELECTROMAGNETIC INDUCTION HEATING SYSTEM WITH POINTWISE STATE CONSTRAINTS

    Directory of Open Access Journals (Sweden)

    Irwin Yousept

    2010-07-01

    An optimal control problem arising in the context of 3D electromagnetic induction heating is investigated. The state equation is given by a quasilinear stationary heat equation coupled with a semilinear time harmonic eddy current equation. The temperature-dependent electrical conductivity and the presence of pointwise inequality state constraints represent the main challenge of the paper. In the first part of the paper, the existence and regularity of the state are addressed. The second part of the paper deals with the analysis of the corresponding linearized equation. Some sufficient conditions are presented which guarantee the solvability of the linearized system. The final part of the paper is concerned with the optimal control. The aim of the optimization is to find the optimal voltage such that a desired temperature can be achieved optimally. The corresponding first-order necessary optimality condition is presented.

  18. Summary - COG: A new point-wise Monte Carlo code for burnup credit analysis

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1989-01-01

    COG, a new point-wise Monte Carlo code being developed and tested at Lawrence Livermore National Laboratory (LLNL) for the Cray-1, solves the Boltzmann equation for the transport of neutrons, photons, and (in future versions) other particles. Techniques included in the code for modifying the random walk of particles make COG most suitable for solving deep-penetration (shielding) problems and a wide variety of criticality problems. COG is similar to a number of other computer codes used in the shielding community. Each code is a little different in its geometry input and its random-walk modification options. COG is a Monte Carlo code specifically designed for the CRAY (in 1986) to be as precise as the current state of physics knowledge. It has been extensively benchmarked and used as a shielding code at LLNL since 1986, and has recently been extended to accomplish criticality calculations. It will make an excellent tool for future shipping cask studies

  19. Measuring global oil trade dependencies: An application of the point-wise mutual information method

    International Nuclear Information System (INIS)

    Kharrazi, Ali; Fath, Brian D.

    2016-01-01

    Oil trade is one of the most vital networks in the global economy. In this paper, we analyze the 1998–2012 oil trade networks using the point-wise mutual information (PMI) method and determine the pairwise trade preferences and dependencies. Using examples of the USA's trade partners, this research demonstrates the usefulness of the PMI method as an additional methodological tool to evaluate the outcomes of countries' decisions to engage with preferred trading partners. A positive PMI value indicates trade preference, where trade is larger than would be expected. For example, in 2012 the USA imported 2,548.7 kbpd of oil from Canada, compared with an expected 358.5 kbpd. Conversely, a negative PMI value indicates trade dis-preference, where the amount of trade is smaller than what would be expected. For example, the 15-year average of annual PMI between Saudi Arabia and the U.S.A. is −0.130 and between Russia and the USA −1.596. We argue that the three primary reasons for discrepancies between actual trade and the neutral model can be related to position, price, and politics. The PMI can quantify the political success or failure of trade preferences and can more accurately account for the temporal variation of interdependencies. - Highlights: • We analyzed global oil trade networks using the point-wise mutual information method. • We identified position, price, & politics as drivers of oil trade preference. • The PMI method is useful in research on complex trade networks and dependency theory. • A time-series analysis of PMI can track dependencies & evaluate policy decisions.
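    The PMI construction used here can be sketched in a few lines: actual bilateral flows are compared with the flows expected under a neutral (independence) model built from total exports and imports, and the PMI is the log of their ratio. The trade matrix below is invented for illustration and only mimics the sign pattern discussed in the abstract.

```python
import numpy as np

# Invented oil-trade matrix in kbpd: rows are exporters, columns are importers.
exporters = ["Canada", "Saudi Arabia", "Russia"]
importers = ["USA", "China", "EU"]
flows = np.array([[2500.0,  100.0,  200.0],
                  [1300.0, 1100.0,  900.0],
                  [  50.0,  800.0, 2400.0]])

total = flows.sum()
# Neutral (independence) model: expected flow from total exports and total imports.
expected = np.outer(flows.sum(axis=1), flows.sum(axis=0)) / total
pmi = np.log(flows / expected)   # positive: preferred partner; negative: dis-preferred

for i, src in enumerate(exporters):
    for j, dst in enumerate(importers):
        print(f"{src:>12} -> {dst:<5} actual {flows[i, j]:7.1f} "
              f"expected {expected[i, j]:7.1f} PMI {pmi[i, j]:+.2f}")
```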

  20. Complementary error bounds for elliptic systems and applications

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2013-01-01

    Roč. 219, č. 3 (2013), s. 7194-7205 ISSN 0096-3003 R&D Projects: GA AV ČR IAA100760702 Institutional support: RVO:67985840 Keywords : a posteriori error estimates * guaranteed * dual finite element methods Subject RIV: BA - General Mathematics Impact factor: 1.600, year: 2013 http://www.sciencedirect.com/science/article/pii/S0096300311007910

  1. Stress-induced alterations of left-right electrodermal activity coupling indexed by pointwise transinformation.

    Science.gov (United States)

    Světlák, M; Bob, P; Roman, R; Ježek, S; Damborská, A; Chládek, J; Shaw, D J; Kukleta, M

    2013-01-01

    In this study, we tested the hypothesis that experimental stress induces a specific change of left-right electrodermal activity (EDA) coupling pattern, as indexed by pointwise transinformation (PTI). Further, we hypothesized that this change is associated with scores on psychometric measures of the chronic stress-related psychopathology. Ninety-nine university students underwent bilateral measurement of EDA during rest and stress-inducing Stroop test and completed a battery of self-report measures of chronic stress-related psychopathology. A significant decrease in the mean PTI value was the prevalent response to the stress conditions. No association between chronic stress and PTI was found. Raw scores of psychometric measures of stress-related psychopathology had no effect on either the resting levels of PTI or the amount of stress-induced PTI change. In summary, acute stress alters the level of coupling pattern of cortico-autonomic influences on the left and right sympathetic pathways to the palmar sweat glands. Different results obtained using the PTI, EDA laterality coefficient, and skin conductance level also show that the PTI algorithm represents a new analytical approach to EDA asymmetry description.

  2. Some considerations to the failure analysis of a pointwise attached steel liner membrane under constraint load

    International Nuclear Information System (INIS)

    Buchhardt, F.; Brandl, P.

    1981-01-01

    In the application of reinforced or prestressed concrete reactor containments, the safety enclosure will be obtained through a steel liner membrane, which is attached pointwise to the interior concrete surface. It is the objective and aim of this study to analyse the overall structural behaviour of the bonded system consisting of concrete containment, studs, and steel liner - especially under the aspect of extreme load and deformation conditions. The parametric analysis is carried out on the basis of the geometric length/depth ratio l/t = 12 of a single liner field. In order to reduce the considerable computational effort to a minimum, it is necessary to decouple the overall system into its structural components, i.e., at first an imperfect predeflected 'buckling' field and the residual 'plane' liner field are considered separately. A further reduction enables the use of stud anchor characteristics which are based on experiments. Three-dimensional analyses are performed for the single 'buckling' field to obtain specific load-displacement functions; the residual plane system is considered with two- as well as one-dimensional models. For the comprehensive parametric evaluation of the overall system behaviour, a linear model is assumed to which these load-displacement functions are applied. Constraint temperatures are introduced as a unit scale - up to failure of the overall system; hereby partial structural failure might lead to temporary relief. (orig.)

  3. An Iterative Maximum a Posteriori Estimation of Proficiency Level to Detect Multiple Local Likelihood Maxima

    Science.gov (United States)

    Magis, David; Raiche, Gilles

    2010-01-01

    In this article the authors focus on the issue of the nonuniqueness of the maximum likelihood (ML) estimator of proficiency level in item response theory (with special attention to logistic models). The usual maximum a posteriori (MAP) method offers a good alternative within that framework; however, this article highlights some drawbacks of its…

  4. A posteriori stereotype activation: The preservation of stereotypes through memory distortion

    NARCIS (Netherlands)

    Knippenberg, A. van; Dijksterhuis, A.

    1996-01-01

    Four experiments investigated memory for stereotype-consistent and stereotype-inconsistent information after a posteriori stereotype activation. In Experiment 1 it was established that, in comparison with a no stereotype control condition, recall of stereotype-inconsistent behaviors of group members

  5. Pointwise mutual information quantifies intratumor heterogeneity in tissue sections labeled with multiple fluorescent biomarkers

    Directory of Open Access Journals (Sweden)

    Daniel M Spagnolo

    2016-01-01

    Background: Measures of spatial intratumor heterogeneity are potentially important diagnostic biomarkers for cancer progression, proliferation, and response to therapy. Spatial relationships among cells including cancer and stromal cells in the tumor microenvironment (TME) are key contributors to heterogeneity. Methods: We demonstrate how to quantify spatial heterogeneity from immunofluorescence pathology samples, using a set of 3 basic breast cancer biomarkers as a test case. We learn a set of dominant biomarker intensity patterns and map the spatial distribution of the biomarker patterns with a network. We then describe the pairwise association statistics for each pattern within the network using pointwise mutual information (PMI) and visually represent heterogeneity with a two-dimensional map. Results: We found a salient set of 8 biomarker patterns to describe cellular phenotypes from a tissue microarray cohort containing 4 different breast cancer subtypes. After computing PMI for each pair of biomarker patterns in each patient and tumor replicate, we visualize the interactions that contribute to the resulting association statistics. Then, we demonstrate the potential for using PMI as a diagnostic biomarker, by comparing PMI maps and heterogeneity scores from patients across the 4 different cancer subtypes. Estrogen receptor positive invasive lobular carcinoma patient, AL13-6, exhibited the highest heterogeneity score among those tested, while estrogen receptor negative invasive ductal carcinoma patient, AL13-14, exhibited the lowest heterogeneity score. Conclusions: This paper presents an approach for describing intratumor heterogeneity, in a quantitative fashion (via PMI), which departs from the purely qualitative approaches currently used in the clinic. PMI is generalizable to highly multiplexed/hyperplexed immunofluorescence images, as well as spatial data from complementary in situ methods including FISSEQ and CyTOF, sampling many different

  6. Estimating and localizing the algebraic and total numerical errors using flux reconstructions

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Strakoš, Z.; Vohralík, M.

    2018-01-01

    Roč. 138, č. 3 (2018), s. 681-721 ISSN 0029-599X R&D Projects: GA ČR GA13-06684S Grant - others:GA MŠk(CZ) LL1202 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * a posteriori error estimation * algebraic error * discretization error * stopping criteria * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 2.152, year: 2016

  7. Automatic simplification of systems of reaction-diffusion equations by a posteriori analysis.

    Science.gov (United States)

    Maybank, Philip J; Whiteley, Jonathan P

    2014-02-01

    Many mathematical models in biology and physiology are represented by systems of nonlinear differential equations. In recent years these models have become increasingly complex in order to explain the enormous volume of data now available. A key role of modellers is to determine which components of the model have the greatest effect on a given observed behaviour. An approach for automatically fulfilling this role, based on a posteriori analysis, has recently been developed for nonlinear initial value ordinary differential equations [J.P. Whiteley, Model reduction using a posteriori analysis, Math. Biosci. 225 (2010) 44-52]. In this paper we extend this model reduction technique for application to both steady-state and time-dependent nonlinear reaction-diffusion systems. Exemplar problems drawn from biology are used to demonstrate the applicability of the technique. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. B-Spline potential function for maximum a-posteriori image reconstruction in fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Shilpa Dilipkumar

    2015-03-01

    Full Text Available An iterative image reconstruction technique employing B-Spline potential function in a Bayesian framework is proposed for fluorescence microscopy images. B-splines are piecewise polynomials with smooth transition, compact support and are the shortest polynomial splines. Incorporation of the B-spline potential function in the maximum-a-posteriori reconstruction technique resulted in improved contrast, enhanced resolution and substantial background reduction. The proposed technique is validated on simulated data as well as on the images acquired from fluorescence microscopes (widefield, confocal laser scanning fluorescence and super-resolution 4Pi microscopy). A comparative study of the proposed technique with the state-of-the-art maximum likelihood (ML) and maximum-a-posteriori (MAP) with quadratic potential function shows its superiority over the others. B-Spline MAP technique can find applications in several imaging modalities of fluorescence microscopy like selective plane illumination microscopy, localization microscopy and STED.

  9. Error Decomposition and Adaptivity for Response Surface Approximations from PDEs with Parametric Uncertainty

    KAUST Repository

    Bryant, C. M.; Prudhomme, S.; Wildey, T.

    2015-01-01

    In this work, we investigate adaptive approaches to control errors in response surface approximations computed from numerical approximations of differential equations with uncertain or random data and coefficients. The adaptivity of the response surface approximation is based on a posteriori error estimation, and the approach relies on the ability to decompose the a posteriori error estimate into contributions from the physical discretization and the approximation in parameter space. Errors are evaluated in terms of linear quantities of interest using adjoint-based methodologies. We demonstrate that a significant reduction in the computational cost required to reach a given error tolerance can be achieved by refining the dominant error contributions rather than uniformly refining both the physical and stochastic discretization. Error decomposition is demonstrated for a two-dimensional flow problem, and adaptive procedures are tested on a convection-diffusion problem with discontinuous parameter dependence and a diffusion problem, where the diffusion coefficient is characterized by a 10-dimensional parameter space.

  10. Sensing Slow Mobility and Interesting Locations for Lombardy Region (Italy): a Case Study Using Pointwise Geolocated Open Data

    Science.gov (United States)

    Brovelli, M. A.; Oxoli, D.; Zurbarán, M. A.

    2016-06-01

    During the past years Web 2.0 technologies have caused the emergence of platforms where users can share data related to their activities which in some cases are then publicly released with open licenses. Popular categories for this include community platforms where users can upload GPS tracks collected during slow travel activities (e.g. hiking, biking and horse riding) and platforms where users share their geolocated photos. However, due to the high heterogeneity of the information available on the Web, the sole use of these user-generated contents makes it an ambitious challenge to understand slow mobility flows as well as to detect the most visited locations in a region. Exploiting the available data on community sharing websites allows to collect near real-time open data streams and enables rigorous spatial-temporal analysis. This work presents an approach for collecting, unifying and analysing pointwise geolocated open data available from different sources with the aim of identifying the main locations and destinations of slow mobility activities. For this purpose, we collected pointwise open data from the Wikiloc platform, Twitter, Flickr and Foursquare. The analysis was confined to the data uploaded in Lombardy Region (Northern Italy) - corresponding to millions of pointwise data. Collected data was processed through the use of Free and Open Source Software (FOSS) in order to organize them into a suitable database. This allowed to run statistical analyses on data distribution in both time and space by enabling the detection of users' slow mobility preferences as well as places of interest at a regional scale.

  11. SENSING SLOW MOBILITY AND INTERESTING LOCATIONS FOR LOMBARDY REGION (ITALY): A CASE STUDY USING POINTWISE GEOLOCATED OPEN DATA

    Directory of Open Access Journals (Sweden)

    M. A. Brovelli

    2016-06-01

    Full Text Available During the past years Web 2.0 technologies have caused the emergence of platforms where users can share data related to their activities which in some cases are then publicly released with open licenses. Popular categories for this include community platforms where users can upload GPS tracks collected during slow travel activities (e.g. hiking, biking and horse riding) and platforms where users share their geolocated photos. However, due to the high heterogeneity of the information available on the Web, the sole use of these user-generated contents makes it an ambitious challenge to understand slow mobility flows as well as to detect the most visited locations in a region. Exploiting the available data on community sharing websites allows to collect near real-time open data streams and enables rigorous spatial-temporal analysis. This work presents an approach for collecting, unifying and analysing pointwise geolocated open data available from different sources with the aim of identifying the main locations and destinations of slow mobility activities. For this purpose, we collected pointwise open data from the Wikiloc platform, Twitter, Flickr and Foursquare. The analysis was confined to the data uploaded in Lombardy Region (Northern Italy) – corresponding to millions of pointwise data. Collected data was processed through the use of Free and Open Source Software (FOSS) in order to organize them into a suitable database. This allowed to run statistical analyses on data distribution in both time and space by enabling the detection of users’ slow mobility preferences as well as places of interest at a regional scale.

  12. [Methods of a posteriori identification of food patterns in Brazilian children: a systematic review].

    Science.gov (United States)

    Carvalho, Carolina Abreu de; Fonsêca, Poliana Cristina de Almeida; Nobre, Luciana Neri; Priore, Silvia Eloiza; Franceschini, Sylvia do Carmo Castro

    2016-01-01

    The objective of this study is to provide guidance for identifying dietary patterns using the a posteriori approach, and analyze the methodological aspects of the studies conducted in Brazil that identified the dietary patterns of children. Articles were selected from the Latin American and Caribbean Literature on Health Sciences, Scientific Electronic Library Online and Pubmed databases. The key words were: Dietary pattern; Food pattern; Principal Components Analysis; Factor analysis; Cluster analysis; Reduced rank regression. We included studies that identified dietary patterns of children using the a posteriori approach. Seven studies published between 2007 and 2014 were selected, six of which were cross-sectional and one cohort. Five studies used the food frequency questionnaire for dietary assessment; one used a 24-hour dietary recall and the other a food list. The method of exploratory approach used in most publications was principal components factor analysis, followed by cluster analysis. The sample size of the studies ranged from 232 to 4231, the values of the Kaiser-Meyer-Olkin test from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies identified dietary patterns of children using the a posteriori approach and principal components factor analysis was the technique most used.
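
    As a minimal sketch of the exploratory (a posteriori) technique most often used in the reviewed studies, the snippet below runs a principal-components factor extraction on food-group intakes; the data and the choice of three components are purely hypothetical.

```python
# Sketch: derive dietary patterns from food-group intakes with PCA (hypothetical data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
intakes = rng.gamma(2.0, 50.0, size=(300, 12))   # 300 children, 12 food groups (g/day)

z = StandardScaler().fit_transform(intakes)      # standardize before extraction
pca = PCA(n_components=3).fit(z)
loadings = pca.components_.T * np.sqrt(pca.explained_variance_)
print("variance explained:", pca.explained_variance_ratio_.round(2))
print("food-group loadings on pattern 1:", loadings[:, 0].round(2))
```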

  13. Broad frequency band full field measurements for advanced applications: Point-wise comparisons between optical technologies

    Science.gov (United States)

    Zanarini, Alessandro

    2018-01-01

    The progress of optical systems nowadays makes complex dynamic measurements and modal tests available for lightweight structures, each technique with its own advantages, drawbacks and preferred usage domains. It is thus easier than before to obtain highly spatially defined vibration patterns for many applications in vibration engineering, testing and general product development. The potential of three completely different technologies is here benchmarked on a common test rig and advanced applications. SLDV, dynamic ESPI and hi-speed DIC are here first deployed in a complex and unique test on the estimation of FRFs with high spatial accuracy from a thin vibrating plate. The latter exhibits broad band dynamics and high modal density in the common frequency domain where the techniques can find an operative intersection. A peculiar point-wise comparison is here addressed by means of discrete geometry transforms to put all the three technologies on trial at each physical point of the surface. Full field measurement technologies can not only estimate displacement fields on a refined grid, but can also exploit the spatial consistency of the results through neighbouring locations by means of numerical differentiation operators in the spatial domain to obtain rotational degrees of freedom and superficial dynamic strain distributions, with enhanced quality, compared to other technologies in literature. Approaching the task with the aid of superior quality receptance maps from the three different full field gears, this work calculates and compares rotational and dynamic strain FRFs. Dynamic stress FRFs can be modelled directly from the latter, by means of a constitutive model, avoiding the costly and time-consuming steps of building and tuning a numerical dynamic model of a flexible component or a structure in real life conditions. Once dynamic stress FRFs are obtained, spectral fatigue approaches can try to predict the life of a component in many excitation conditions. Different

  14. Variational Multiscale error estimator for anisotropic adaptive fluid mechanic simulations: application to convection-diffusion problems

    OpenAIRE

    Bazile , Alban; Hachem , Elie; Larroya-Huguet , Juan-Carlos; Mesri , Youssef

    2018-01-01

    International audience; In this work, we present a new a posteriori error estimator based on the Variational Multiscale method for anisotropic adaptive fluid mechanics problems. The general idea is to combine the large scale error based on the solved part of the solution with the sub-mesh scale error based on the unresolved part of the solution. We compute the latter with two different methods: one using the stabilizing parameters and the other using bubble functions. We propose two different...

  15. Uncertainty quantification in a chemical system using error estimate-based mesh adaption

    International Nuclear Information System (INIS)

    Mathelin, Lionel; Le Maitre, Olivier P.

    2012-01-01

    This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then be used to adapt the stochastic discretization. Different anisotropic refinement strategies are proposed, leading to a cost-efficient tool suitable for multi-dimensional problems of moderate stochastic dimension. The adaptive strategies allow both for refinement and coarsening of the stochastic discretization, as needed to satisfy a prescribed error tolerance. The adaptive strategies were successfully tested on a model for the hydrogen oxidation in supercritical conditions having 8 random parameters. The proposed methodologies are however general enough to be also applicable for a wide class of models such as uncertain fluid flows. (authors)

  16. A Review of Point-Wise Motion Tracking Algorithms for Fetal Magnetic Resonance Imaging.

    Science.gov (United States)

    Chikop, Shivaprasad; Koulagi, Girish; Kumbara, Ankita; Geethanath, Sairam

    2016-01-01

    We review recent feature-based tracking algorithms as applied to fetal magnetic resonance imaging (MRI). Motion in fetal MRI is an active and challenging area of research, but the challenge can be mitigated by strategies related to patient setup, acquisition, reconstruction, and image processing. We focus on fetal motion correction through methods based on tracking algorithms for registration of slices with similar anatomy in multiple volumes. We describe five motion detection algorithms based on corner detection and region-based methods through pseudocodes, illustrating the results of their application to fetal MRI. We compare the performance of these methods on the basis of error in registration and minimum number of feature points required for registration. Harris, a corner detection method, provides similar error when compared to the other methods and has the lowest number of feature points required at that error level. We do not discuss group-wise methods here. Finally, we attempt to communicate the application of available feature extraction methods to fetal MRI.
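
    As a pointer to how one of the reviewed feature extractors can be applied, the snippet below runs Harris corner detection (via OpenCV, on a random stand-in image rather than an actual fetal MRI slice) to obtain candidate feature points for slice registration.

```python
# Sketch: Harris corner detection as a source of feature points for registration.
import cv2
import numpy as np

img = (np.random.default_rng(4).random((256, 256)) * 255).astype(np.uint8)  # stand-in slice
response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)
corners = np.argwhere(response > 0.01 * response.max())   # (row, col) feature points
print(f"{len(corners)} candidate feature points")
```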

  17. B-spline goal-oriented error estimators for geometrically nonlinear rods

    Science.gov (United States)

    2011-04-01

    respectively, for the output functionals q2–q4 (linear and nonlinear with the trigonometric functions sine and cosine) in all the tests considered ... of the errors resulting from the linear, quadratic and nonlinear (with trigonometric functions sine and cosine) outputs and for p = 1, 2.

  18. Level set segmentation of medical images based on local region statistics and maximum a posteriori probability.

    Science.gov (United States)

    Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan

    2013-01-01

    This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.

  19. Automatic lung lobe segmentation using particles, thin plate splines, and maximum a posteriori estimation.

    Science.gov (United States)

    Ross, James C; San José Estépar, Raúl; Kindlmann, Gordon; Díaz, Alejandro; Westin, Carl-Fredrik; Silverman, Edwin K; Washko, George R

    2010-01-01

    We present a fully automatic lung lobe segmentation algorithm that is effective in high resolution computed tomography (CT) datasets in the presence of confounding factors such as incomplete fissures (anatomical structures indicating lobe boundaries), advanced disease states, high body mass index (BMI), and low-dose scanning protocols. In contrast to other algorithms that leverage segmentations of auxiliary structures (esp. vessels and airways), we rely only upon image features indicating fissure locations. We employ a particle system that samples the image domain and provides a set of candidate fissure locations. We follow this stage with maximum a posteriori (MAP) estimation to eliminate poor candidates and then perform a post-processing operation to remove remaining noise particles. We then fit a thin plate spline (TPS) interpolating surface to the fissure particles to form the final lung lobe segmentation. Results indicate that our algorithm performs comparably to pulmonologist-generated lung lobe segmentations on a set of challenging cases.

  20. A posteriori noise estimation in variable data sets. With applications to spectra and light curves

    Science.gov (United States)

    Czesla, S.; Molle, T.; Schmitt, J. H. M. M.

    2018-01-01

    Most physical data sets contain a stochastic contribution produced by measurement noise or other random sources along with the signal. Usually, neither the signal nor the noise are accurately known prior to the measurement so that both have to be estimated a posteriori. We have studied a procedure to estimate the standard deviation of the stochastic contribution assuming normality and independence, requiring a sufficiently well-sampled data set to yield reliable results. This procedure is based on estimating the standard deviation in a sample of weighted sums of arbitrarily sampled data points and is identical to the so-called DER_SNR algorithm for specific parameter settings. To demonstrate the applicability of our procedure, we present applications to synthetic data, high-resolution spectra, and a large sample of space-based light curves and, finally, give guidelines to apply the procedure in situations not explicitly considered here to promote its adoption in data analysis.
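
    A minimal sketch in the spirit of the DER_SNR-style estimator mentioned above (assuming normally distributed, independent noise and a sufficiently well-sampled series) is:

```python
# Sketch: estimate the noise standard deviation of a well-sampled 1-D series.
import numpy as np

def noise_std(flux):
    f = np.asarray(flux, dtype=float)
    f = f[np.isfinite(f)]
    if f.size < 5:
        raise ValueError("need at least 5 samples")
    # The weighted sum 2*f[i] - f[i-2] - f[i+2] cancels smooth signal, keeps noise.
    diff = np.abs(2.0 * f[2:-2] - f[:-4] - f[4:])
    return 1.482602 / np.sqrt(6.0) * np.median(diff)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 2000)
print(noise_std(np.sin(4 * np.pi * x) + rng.normal(0.0, 0.05, x.size)))  # ~0.05
```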

  1. Maximum a posteriori probability estimates in infinite-dimensional Bayesian inverse problems

    International Nuclear Information System (INIS)

    Helin, T; Burger, M

    2015-01-01

    A demanding challenge in Bayesian inversion is to efficiently characterize the posterior distribution. This task is problematic especially in high-dimensional non-Gaussian problems, where the structure of the posterior can be very chaotic and difficult to analyse. Current inverse problem literature often approaches the problem by considering suitable point estimators for the task. Typically the choice is made between the maximum a posteriori (MAP) or the conditional mean (CM) estimate. The benefits of either choice are not well-understood from the perspective of infinite-dimensional theory. Most importantly, there exists no general scheme regarding how to connect the topological description of a MAP estimate to a variational problem. The recent results by Dashti and others (Dashti et al 2013 Inverse Problems 29 095017) resolve this issue for nonlinear inverse problems in a Gaussian framework. In this work we improve the current understanding by introducing a novel concept called the weak MAP (wMAP) estimate. We show that any MAP estimate in the sense of Dashti et al (2013 Inverse Problems 29 095017) is a wMAP estimate and, moreover, how the wMAP estimate connects to a variational formulation in general infinite-dimensional non-Gaussian problems. The variational formulation makes it possible to study many properties of the infinite-dimensional MAP estimate that could not be studied before. In a recent work by the authors (Burger and Lucka 2014 Maximum a posteriori estimates in linear inverse problems with logconcave priors are proper Bayes estimators preprint) the MAP estimator was studied in the context of the Bayes cost method. Using Bregman distances, proper convex Bayes cost functions were introduced for which the MAP estimator is the Bayes estimator. Here, we generalize these results to the infinite-dimensional setting. Moreover, we discuss the implications of our results for some examples of prior models such as the Besov prior and hierarchical prior. (paper)

  2. Cosmic shear measurement with maximum likelihood and maximum a posteriori inference

    Science.gov (United States)

    Hall, Alex; Taylor, Andy

    2017-06-01

    We investigate the problem of noise bias in maximum likelihood and maximum a posteriori estimators for cosmic shear. We derive the leading and next-to-leading order biases and compute them in the context of galaxy ellipticity measurements, extending previous work on maximum likelihood inference for weak lensing. We show that a large part of the bias on these point estimators can be removed using information already contained in the likelihood when a galaxy model is specified, without the need for external calibration. We test these bias-corrected estimators on simulated galaxy images similar to those expected from planned space-based weak lensing surveys, with promising results. We find that the introduction of an intrinsic shape prior can help with mitigation of noise bias, such that the maximum a posteriori estimate can be made less biased than the maximum likelihood estimate. Second-order terms offer a check on the convergence of the estimators, but are largely subdominant. We show how biases propagate to shear estimates, demonstrating in our simple set-up that shear biases can be reduced by orders of magnitude and potentially to within the requirements of planned space-based surveys at mild signal-to-noise ratio. We find that second-order terms can exhibit significant cancellations at low signal-to-noise ratio when Gaussian noise is assumed, which has implications for inferring the performance of shear-measurement algorithms from simplified simulations. We discuss the viability of our point estimators as tools for lensing inference, arguing that they allow for the robust measurement of ellipticity and shear.

  3. Detection of progression of glaucomatous visual field damage using the point-wise method with the binomial test.

    Science.gov (United States)

    Karakawa, Ayako; Murata, Hiroshi; Hirasawa, Hiroyo; Mayama, Chihiro; Asaoka, Ryo

    2013-01-01

    To compare the performance of newly proposed point-wise linear regression (PLR) with the binomial test (binomial PLR) against mean deviation (MD) trend analysis and permutation analyses of PLR (PoPLR), in detecting global visual field (VF) progression in glaucoma. 15 VFs (Humphrey Field Analyzer, SITA standard, 24-2) were collected from 96 eyes of 59 open angle glaucoma patients (6.0 ± 1.5 [mean ± standard deviation] years). Using the total deviation of each point on the 2nd to 16th VFs (VF2-16), linear regression analysis was carried out. The numbers of VF test points with a significant trend at various probability levels (pbinomial test (one-sided). A VF series was defined as "significant" if the median p-value from the binomial test was binomial PLR method (0.14 to 0.86) was significantly higher than MD trend analysis (0.04 to 0.89) and PoPLR (0.09 to 0.93). The PIS of the proposed method (0.0 to 0.17) was significantly lower than the MD approach (0.0 to 0.67) and PoPLR (0.07 to 0.33). The PBNS of the three approaches were not significantly different. The binomial PLR method gives more consistent results than MD trend analysis and PoPLR, hence it will be helpful as a tool to 'flag' possible VF deterioration.
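
    A simplified sketch of the idea, point-wise linear regression followed by a one-sided binomial test on the number of significantly deteriorating points, is shown below; the thresholds, the use of two-sided regression p-values and the simulated data are illustrative assumptions, not the authors' exact protocol.

```python
# Sketch: binomial point-wise linear regression (PLR) for visual field progression.
import numpy as np
from scipy import stats  # binomtest requires scipy >= 1.7

def binomial_plr(td, times, alpha=0.05):
    """td: (n_visits, n_points) total-deviation values; times: (n_visits,) years."""
    n_visits, n_points = td.shape
    worsening = 0
    for j in range(n_points):
        slope, _, _, p, _ = stats.linregress(times, td[:, j])
        if slope < 0 and p < alpha:          # significant deteriorating trend
            worsening += 1
    # Probability of observing >= `worsening` significant points by chance alone.
    return stats.binomtest(worsening, n_points, alpha, alternative="greater").pvalue

rng = np.random.default_rng(2)
t = np.arange(10, dtype=float)
td = rng.normal(0.0, 1.5, (10, 52)) - 0.3 * t[:, None]   # simulated global decline
print(binomial_plr(td, t))
```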

  4. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  5. Distribution of the Discretization and Algebraic Error in Numerical Solution of Partial Differential Equations

    Czech Academy of Sciences Publication Activity Database

    Papež, Jan; Liesen, J.; Strakoš, Z.

    2014-01-01

    Roč. 449, 15 May (2014), s. 89-114 ISSN 0024-3795 R&D Projects: GA AV ČR IAA100300802; GA ČR GA201/09/0917 Grant - others:GA MŠk(CZ) LL1202; GA UK(CZ) 695612 Institutional support: RVO:67985807 Keywords : numerical solution of partial differential equations * finite element method * adaptivity * a posteriori error analysis * discretization error * algebra ic error * spatial distribution of the error Subject RIV: BA - General Mathematics Impact factor: 0.939, year: 2014

  6. «A posteriori» Determination of the Mixing Proportions of Concrete with Calcareous Aggregates

    Directory of Open Access Journals (Sweden)

    Heinz Pisters

    1972-09-01

    Full Text Available When the mix proportions of hardened concrete in a structure have to be examined a posteriori, the determination of the cement content is frequently required. The procedures followed for this purpose start from a treatment of the concrete with hydrochloric acid. For the case in which the concrete contains an acid-soluble aggregate that is not available separately, an identification procedure is described in detail, including a correction for the carbon dioxide content of the concrete. On the basis of the determinations carried out on the concrete in this way, it will also be possible in these cases to determine the cement content with an accuracy of at least ± 10%.

  7. On Evaluation of Recharge Model Uncertainty: a Priori and a Posteriori

    International Nuclear Information System (INIS)

    Ming Ye; Karl Pohlmann; Jenny Chapman; David Shafer

    2006-01-01

    Hydrologic environments are open and complex, rendering them prone to multiple interpretations and mathematical descriptions. Hydrologic analyses typically rely on a single conceptual-mathematical model, which ignores conceptual model uncertainty and may result in bias in predictions and under-estimation of predictive uncertainty. This study assesses conceptual model uncertainty residing in five recharge models developed to date by different researchers based on different theories for the Nevada and Death Valley area, CA. A recently developed statistical method, Maximum Likelihood Bayesian Model Averaging (MLBMA), is utilized for this analysis. In a Bayesian framework, the recharge model uncertainty is assessed, a priori, using expert judgments collected through an expert elicitation in the form of prior probabilities of the models. The uncertainty is then evaluated, a posteriori, by updating the prior probabilities to estimate posterior model probability. The updating is conducted through maximum likelihood inverse modeling by calibrating the Death Valley Regional Flow System (DVRFS) model corresponding to each recharge model against observations of head and flow. Calibration results of DVRFS for the five recharge models are used to estimate three information criteria (AIC, BIC, and KIC) used to rank and discriminate these models. Posterior probabilities of the five recharge models, evaluated using KIC, are used as weights to average head predictions, which gives posterior mean and variance. The posterior quantities incorporate both parametric and conceptual model uncertainties.

  8. Maximum a posteriori reconstruction of the Patlak parametric image from sinograms in dynamic PET

    International Nuclear Information System (INIS)

    Wang Guobao; Fu Lin; Qi Jinyi

    2008-01-01

    Parametric imaging using the Patlak graphical method has been widely used to analyze dynamic PET data. Conventionally a Patlak parametric image is generated by reconstructing a sequence of dynamic images first and then performing Patlak graphical analysis on the time-activity curves pixel-by-pixel. However, because it is rather difficult to model the noise distribution in reconstructed images, the spatially variant noise correlation is simply ignored in the Patlak analysis, which leads to sub-optimal results. In this paper we present a Bayesian method for reconstructing Patlak parametric images directly from raw sinogram data by incorporating the Patlak plot model into the image reconstruction procedure. A preconditioned conjugate gradient algorithm is used to find the maximum a posteriori solution. The proposed direct method is statistically more efficient than the conventional indirect approach because the Poisson noise distribution in PET data can be accurately modeled in the direct reconstruction. The computation cost of the direct method is similar to reconstruction time of two dynamic frames. Therefore, when more than two dynamic frames are used in the Patlak analysis, the direct method is faster than the conventional indirect approach. We conduct computer simulations to validate the proposed direct method. Comparisons with the conventional indirect approach show that the proposed method results in a more accurate estimate of the parametric image. The proposed method has been applied to dynamic fully 3D PET data from a microPET scanner
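
    For context, the conventional indirect approach that the paper improves on can be sketched as a per-pixel linear fit of the Patlak plot; the function and data below are hypothetical and only illustrate the graphical model, not the proposed direct MAP reconstruction.

```python
# Sketch: indirect Patlak analysis, fitting Ki and V per pixel from frame activities.
import numpy as np

def patlak_indirect(tac, cp, frame_times, t_star):
    """tac: (n_frames, n_pixels) tissue activity; cp: (n_frames,) plasma input."""
    mask = frame_times >= t_star                      # quasi-steady-state frames only
    cum_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1])
                                              * np.diff(frame_times))))
    x = cum_cp[mask] / cp[mask]                       # Patlak abscissa
    y = tac[mask] / cp[mask, None]                    # Patlak ordinate, per pixel
    A = np.vstack([x, np.ones_like(x)]).T
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # rows: [Ki, V]
    return coef[0], coef[1]

t = np.linspace(0.0, 60.0, 25)
cp = np.exp(-t / 20.0) + 0.1
tac = 0.05 * np.outer(np.cumsum(cp) * (t[1] - t[0]), np.ones(4)) + 0.2 * cp[:, None]
Ki, V = patlak_indirect(tac, cp, t, t_star=20.0)
print(Ki, V)
```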

  9. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Besides that, a combination of component failure and human error is often found at spectacular events. Especially the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  10. Revisiting maximum-a-posteriori estimation in log-concave models: from differential geometry to decision theory

    OpenAIRE

    Pereyra, Marcelo

    2016-01-01

    Maximum-a-posteriori (MAP) estimation is the main Bayesian estimation methodology in many areas of data science such as mathematical imaging and machine learning, where high dimensionality is addressed by using models that are log-concave and whose posterior mode can be computed efficiently by using convex optimisation algorithms. However, despite its success and rapid adoption, MAP estimation is not theoretically well understood yet, and the prevalent view is that it is generally not proper ...

  11. Arbitrary-Lagrangian-Eulerian Discontinuous Galerkin schemes with a posteriori subcell finite volume limiting on moving unstructured meshes

    Science.gov (United States)

    Boscheri, Walter; Dumbser, Michael

    2017-10-01

    Lagrangian formulations that are based on a fixed computational grid and which instead evolve the mapping of the reference configuration to the current one. Our new Lagrangian-type DG scheme adopts the novel a posteriori sub-cell finite volume limiter method recently developed in [62] for fixed unstructured grids. In this approach, the validity of the candidate solution produced in each cell by an unlimited ADER-DG scheme is verified against a set of physical and numerical detection criteria, such as the positivity of pressure and density, the absence of floating point errors (NaN) and the satisfaction of a relaxed discrete maximum principle (DMP) in the sense of polynomials. Those cells which do not satisfy all of the above criteria are flagged as troubled cells and are recomputed at the aid of a more robust second order TVD finite volume scheme. To preserve the subcell resolution capability of the original DG scheme, the FV limiter is run on a sub-grid that is 2 N + 1 times finer compared to the mesh of the original unlimited DG scheme. The new subcell averages are then gathered back into a high order DG polynomial by a usual conservative finite volume reconstruction operator. The numerical convergence rates of the new ALE ADER-DG schemes are studied up to fourth order in space and time and several test problems are simulated in order to check the accuracy and the robustness of the proposed numerical method in the context of the Euler and Navier-Stokes equations for compressible gas dynamics, considering both inviscid and viscous fluids. Finally, an application inspired by Inertial Confinement Fusion (ICF) type flows is considered by solving the Euler equations and the PDE of viscous and resistive magnetohydrodynamics (VRMHD).
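
    A toy sketch of the a posteriori detection step described above: a cell is flagged as troubled if its unlimited candidate average is non-finite, non-positive, or violates a relaxed discrete maximum principle with respect to the previous solution on its neighbourhood. The data structures and tolerance are illustrative assumptions, not the paper's implementation.

```python
# Sketch: flag troubled cells that fail positivity or a relaxed DMP check.
import numpy as np

def troubled_cells(candidate, previous, neighbours, delta=1e-4):
    """candidate, previous: (n_cells,) cell averages; neighbours: list of index lists."""
    flags = np.zeros(candidate.shape, dtype=bool)
    for i, nb in enumerate(neighbours):
        stencil = previous[np.r_[i, nb]]
        lo, hi = stencil.min() - delta, stencil.max() + delta
        invalid = not np.isfinite(candidate[i])           # NaN / floating point errors
        flags[i] = invalid or candidate[i] <= 0.0 or candidate[i] < lo or candidate[i] > hi
    return flags

prev = np.array([1.0, 1.1, 0.9, 1.0])
cand = np.array([1.05, 2.5, 0.95, np.nan])                # cell 1 overshoots, cell 3 is NaN
print(troubled_cells(cand, prev, [[1], [0, 2], [1, 3], [2]]))  # [False True False True]
```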

  12. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  13. A posteriori estimator and adaptive mesh refinement for finite volume finite element method for monophasic flow and solute transport in porous media

    International Nuclear Information System (INIS)

    Amor, H.; Bourgeois, M.

    2012-01-01

    Document available in extended abstract form only. The disposal of high level, long lived waste in deep underground clay formations is investigated by several countries including France. In the safety assessment of such geological repositories, a thoughtful consideration must be given to the mechanisms and possible pathways of migration of radionuclides released from waste packages. However, when modelling the transfer of radionuclides throughout the disposal facilities and geological formations, the numerical simulations must take into consideration, in addition to long durations of concern, the variety in the properties as well as in geometrical scales of the different components of the overall disposal, including the host formation. This task presents significant computational challenges. Numerical methods used in the MELODIE software The MELODIE software is developed by IRSN, and constantly upgraded, with the aim to assess the long-term containment capabilities of underground and surface radioactive waste repositories. The MELODIE software models water flow and the phenomena involved in the transport of radionuclides in saturated and unsaturated porous media in 2 and 3 dimensions; chemical processes are represented by a retardation factor and a solubility limit, for sorption and solubility respectively, integrated in the computational equations. These equations are discretized using a so-called Finite Volume Finite Element method (FVFE), which is based on a Galerkin method to discretize time and variables, together with a Finite Volume method using the Godunov scheme for the convection term. The FVFE method is used to convert partial differential equations into a finite number of algebraic equations that match the number of nodes in the mesh used to model the considered domain. It is also used to stabilise the numerical scheme. In order to manage the variety in properties and geometrical scales of underground disposal components, an a posteriori error estimator

  14. Optimal full motion video registration with rigorous error propagation

    Science.gov (United States)

    Dolloff, John; Hottel, Bryant; Doucette, Peter; Theiss, Henry; Jocher, Glenn

    2014-06-01

    Optimal full motion video (FMV) registration is a crucial need for the Geospatial community. It is required for subsequent and optimal geopositioning with simultaneous and reliable accuracy prediction. An overall approach being developed for such registration is presented that models relevant error sources in terms of the expected magnitude and correlation of sensor errors. The corresponding estimator is selected based on the level of accuracy of the a priori information of the sensor's trajectory and attitude (pointing) information, in order to best deal with non-linearity effects. Estimator choices include near real-time Kalman Filters and batch Weighted Least Squares. Registration solves for corrections to the sensor a priori information for each frame. It also computes and makes available a posteriori accuracy information, i.e., the expected magnitude and correlation of sensor registration errors. Both the registered sensor data and its a posteriori accuracy information are then made available to "down-stream" Multi-Image Geopositioning (MIG) processes. An object of interest is then measured on the registered frames and a multi-image optimal solution, including reliable predicted solution accuracy, is then performed for the object's 3D coordinates. This paper also describes a robust approach to registration when a priori information of sensor attitude is unavailable. It makes use of structure-from-motion principles, but does not use standard Computer Vision techniques, such as estimation of the Essential Matrix which can be very sensitive to noise. The approach used instead is a novel, robust, direct search-based technique.

  15. CSRL-V: processed ENDF/B-V 227-neutron-group and pointwise cross-section libraries for criticality safety, reactor, and shielding studies

    International Nuclear Information System (INIS)

    Ford, W.E. III; Diggs, B.R.; Petrie, L.M.; Webster, C.C.; Westfall, R.M.

    1982-01-01

    A P3 227-neutron-group cross-section library has been processed for the subsequent generation of problem-dependent fine- or broad-group cross sections for a broad range of applications, including shipping cask calculations, general criticality safety analyses, and reactor core and shielding analyses. The energy group structure covers the range 10^-5 eV to 20 MeV, including 79 thermal groups below 3 eV. The 129-material library includes processed data for all materials in the ENDF/B-V General Purpose File, several data sets prepared from LENDL data, hydrogen with water- and polyethylene-bound thermal kernels, deuterium with D2O-bound thermal kernels, carbon with a graphite thermal kernel, a special 1/V data set, and a dose factor data set. The library, which is in AMPX master format, is designated CSRL-V (Criticality Safety Reference Library based on ENDF/B-V data). Also included in CSRL-V is a pointwise total, fission, elastic scattering, and (n,γ) cross-section library containing data sets for all ENDF/B-V resonance materials. Data in the pointwise library were processed with the infinite dilute approximation at a temperature of 296 K.

  16. A point-wise fiber Bragg grating displacement sensing system and its application for active vibration suppression of a smart cantilever beam subjected to multiple impact loadings

    International Nuclear Information System (INIS)

    Chuang, Kuo-Chih; Ma, Chien-Ching; Liao, Heng-Tseng

    2012-01-01

    In this work, active vibration suppression of a smart cantilever beam subjected to disturbances from multiple impact loadings is investigated with a point-wise fiber Bragg grating (FBG) displacement sensing system. An FBG demodulator is employed in the proposed fiber sensing system to dynamically demodulate the responses obtained by the FBG displacement sensor with high sensitivity. To investigate the ability of the proposed FBG displacement sensor as a feedback sensor, velocity feedback control and delay control are employed to suppress the vibrations of the first three bending modes of the smart cantilever beam. To improve the control performance for the first bending mode when the cantilever beam is subjected to an impact loading, we improve the conventional velocity feedback controller by tuning the control gain online with the aid of information from a higher vibration mode. Finally, active control of vibrations induced by multiple impact loadings due to a plastic ball is performed with the improved velocity feedback control. The experimental results show that active vibration control of smart structures subjected to disturbances such as impact loadings can be achieved by employing the proposed FBG sensing system to feed back out-of-plane point-wise displacement responses with high sensitivity. (paper)

  17. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

    Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem are concerned with the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method, and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N), and of the Characteristic (AHOT-C) type. The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value, then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, due to the solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods considered in spite of the fact that AHOT-C is locally exact, suggesting numerical diffusion across the singular characteristic as the major source of error on the global scale. However, AHOT-C possesses a given accuracy in a larger fraction of computational cells than DD.
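
    A small sketch of the cell-wise error norms referred to above (uniform cells assumed, so a single scalar cell volume suffices):

```python
# Sketch: L1, L2 and Linf norms of the cell-averaged flux error.
import numpy as np

def error_norms(flux, exact, cell_volume=1.0):
    e = np.abs(flux - exact).ravel()
    return {"L1": np.sum(e) * cell_volume,
            "L2": np.sqrt(np.sum(e**2) * cell_volume),
            "Linf": e.max()}

rng = np.random.default_rng(5)
exact = rng.random((32, 32))
print(error_norms(exact + rng.normal(0.0, 1e-3, exact.shape), exact))
```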

  18. A posteriori registration and subtraction of periapical radiographs for the evaluation of external apical root resorption after orthodontic treatment.

    Science.gov (United States)

    Kreich, Eliane Maria; Chibinski, Ana Cláudia; Coelho, Ulisses; Wambier, Letícia Stadler; Zedebski, Rosário de Arruda Moura; de Moraes, Mari Eli Leonelli; de Moraes, Luiz Cesar

    2016-03-01

    This study employed a posteriori registration and subtraction of radiographic images to quantify the apical root resorption in maxillary permanent central incisors after orthodontic treatment, and assessed whether the external apical root resorption (EARR) was related to a range of parameters involved in the treatment. A sample of 79 patients (mean age, 13.5±2.2 years) with no history of trauma or endodontic treatment of the maxillary permanent central incisors was selected. Periapical radiographs taken before and after orthodontic treatment were digitized and imported to the Regeemy software. Based on an analysis of the posttreatment radiographs, the length of the incisors was measured using Image J software. The mean EARR was described in pixels and relative root resorption (%). The patient's age and gender, tooth extraction, use of elastics, and treatment duration were evaluated to identify possible correlations with EARR. The mean EARR observed was 15.44±12.1 pixels (5.1% resorption). No differences in the mean EARR were observed according to patient characteristics (gender, age) or treatment parameters (use of elastics, treatment duration). The only parameter that influenced the mean EARR of a patient was the need for tooth extraction. A posteriori registration and subtraction of periapical radiographs was a suitable method to quantify EARR after orthodontic treatment, and the need for tooth extraction increased the extent of root resorption after orthodontic treatment.

  19. Goal-oriented error estimation for Cahn-Hilliard models of binary phase transition

    KAUST Repository

    van der Zee, Kristoffer G.

    2010-10-27

    A posteriori estimates of errors in quantities of interest are developed for the nonlinear system of evolution equations embodied in the Cahn-Hilliard model of binary phase transition. These involve the analysis of wellposedness of dual backward-in-time problems and the calculation of residuals. Mixed finite element approximations are developed and used to deliver numerical solutions of representative problems in one- and two-dimensional domains. Estimated errors are shown to be quite accurate in these numerical examples. © 2010 Wiley Periodicals, Inc.

  20. Improved Design of Unequal Error Protection LDPC Codes

    Directory of Open Access Journals (Sweden)

    Sandberg Sara

    2010-01-01

    Full Text Available We propose an improved method for designing unequal error protection (UEP) low-density parity-check (LDPC) codes. The method is based on density evolution. The degree distribution with the best UEP properties is found, under the constraint that the threshold should not exceed the threshold of a non-UEP code plus some threshold offset. For different codeword lengths and different construction algorithms, we search for good threshold offsets for the UEP code design. The choice of the threshold offset is based on the average a posteriori variable node mutual information. Simulations reveal the counterintuitive result that the short-to-medium length codes designed with a suitable threshold offset all outperform the corresponding non-UEP codes in terms of average bit-error rate. The proposed codes are also compared to other UEP-LDPC codes found in the literature.

  1. Allowing for MSD prevention during facilities planning for a public service: an a posteriori analysis of 10 library design projects.

    Science.gov (United States)

    Bellemare, Marie; Trudel, Louis; Ledoux, Elise; Montreuil, Sylvie; Marier, Micheline; Laberge, Marie; Vincent, Patrick

    2006-01-01

    Research was conducted to identify an ergonomics-based intervention model designed to factor in musculoskeletal disorder (MSD) prevention when library projects are being designed. The first stage of the research involved an a posteriori analysis of 10 recent redesign projects. The purpose of the analysis was to document perceptions about the attention given to MSD prevention measures over the course of a project on the part of 2 categories of employees: librarians responsible for such projects and personnel working in the libraries before and after changes. Subjects were interviewed in focus groups. Outcomes of the analysis can guide our ergonomic assessment of current situations and contribute to a better understanding of the way inclusion or improvement of prevention measures can support the workplace design process.

  2. Improving long-range dispersion predictions with ETEX real-time and a-posteriori model evaluations

    International Nuclear Information System (INIS)

    Desiato, F.

    1997-01-01

    The Italian Environmental Protection Agency (ANPA), which is responsible for the evaluation of the consequences of accidental releases into the atmosphere, has participated in both the real-time (phase-1) and a-posteriori (phase-2) ETEX model evaluations. The double benchmark actually constituted an invaluable experience for better understanding the skill and limits of the present long-range dispersion modelling capabilities. In particular, the strong difference between phase-1 and phase-2 model performance emphasised the opportunity to modify, improve or tune a number of specific aspects of the overall simulation. ETEX model runs were carried out with the Lagrangian particle model APOLLO. The meteorological input was constituted by ECMWF fields. Three-hourly average concentrations paired in space and time and time-integrated concentrations were used in the evaluation of the results, based on a set of statistical indexes and concentration contour lines and scatter diagrams.

  3. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors are explained clearly and legibly, with appropriate tables that are easy to understand.

  4. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = -ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. The error propagates as Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U-E)/(V-E) = B/B₀, where B is the transmitted backlighter (BL) signal and B₀ is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB₀/B₀, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB₀/B₀) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
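
    A worked numeric example of this error budget (with illustrative values only, not numbers from the report):

```python
# Worked example: fractional opacity error from backlighter signals and areal density.
import math

B, dB = 900.0, 15.0          # transmitted backlighter signal and its uncertainty
B0, dB0 = 1200.0, 15.0       # unattenuated backlighter signal and its uncertainty
rhoL, d_rhoL = 0.020, 0.001  # areal density (g/cm^2) and its uncertainty

T = B / B0                           # transmission
k = -math.log(T) / rhoL              # opacity (cm^2/g)
dlnT = dB / B + dB0 / B0             # uncertainty of ln(T)
dk_over_k = abs(dlnT / math.log(T)) + d_rhoL / rhoL
print(f"T = {T:.3f}, k = {k:.1f} cm^2/g, dk/k = {dk_over_k:.1%}")
```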

  5. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    Science.gov (United States)

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in

  6. A-Posteriori Error Estimates for Mixed Finite Element and Finite Volume Methods for Problems Coupled Through a Boundary with Non-Matching Grids

    Science.gov (United States)

    2013-08-01

    both MFE and GFV, are often similar in size. As a gross measure of the effect of geometric projection and of the use of quadrature, we also report the ... interest MFE ∑(e,ψ) or GFV ∑(e,ψ). Tables 1 and 2 show this using coarse and fine forward solutions. Table 1: the forward problem with solution (4.1) is run ... ; the adjoint data components ψu and ψp are constant everywhere and ψξ = 0.

  7. A Posteriori Analysis of Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    Energy Technology Data Exchange (ETDEWEB)

    Donald Estep; Michael Holst; Simon Tavener

    2010-02-08

    This project was concerned with accurate computational error estimation for numerical solutions of multiphysics, multiscale systems that couple different physical processes acting across a large range of scales relevant to the interests of the DOE. Multiscale, multiphysics models are characterized by intimate interactions between different physics across a wide range of scales. This poses significant computational challenges addressed by the proposal, including: (1) Accurate and efficient computation; (2) Complex stability; and (3) Linking different physics. The research in this project focused on Multiscale Operator Decomposition (MOD) methods for solving multiphysics problems. The general approach is to decompose a multiphysics problem into components involving simpler physics over a relatively limited range of scales, and then to seek the solution of the entire system through some sort of iterative procedure involving solutions of the individual components. MOD is a very widely used technique for solving multiphysics, multiscale problems; it is heavily used throughout the DOE computational landscape. This project made a major advance in the analysis of the solution of multiscale, multiphysics problems.

  8. A modal-based approach to the nonlinear vibration of strings against a unilateral obstacle: Simulations and experiments in the pointwise case

    Science.gov (United States)

    Issanchou, Clara; Bilbao, Stefan; Le Carrou, Jean-Loïc; Touzé, Cyril; Doaré, Olivier

    2017-04-01

    This article is concerned with the vibration of a stiff linear string in the presence of a rigid obstacle. A numerical method for unilateral and arbitrary-shaped obstacles is developed, based on a modal approach in order to take into account the frequency dependence of losses in strings. The contact force of the barrier interaction is treated using a penalty approach, while a conservative scheme is derived for time integration, in order to ensure long-term numerical stability. In this way, the linear behaviour of the string when not in contact with the barrier can be controlled via a mode by mode fitting, so that the model is particularly well suited for comparisons with experiments. An experimental configuration is used with a point obstacle either centered or near an extremity of the string. In this latter case, such a pointwise obstruction approximates the end condition found in the tanpura, an Indian stringed instrument. The second polarisation of the string is also analysed and included in the model. Numerical results are compared against experiments, showing good accuracy over a long time scale.

  9. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph

    2016-12-08

    We derive computable error estimates for finite element approximations of linear elliptic partial differential equations with rough stochastic coefficients. In this setting, the exact solutions contain high frequency content that standard a posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations. Derived using easily validated assumptions, these novel estimates can be computed at a relatively low cost and have applications to subsurface flow problems in geophysics where the conductivities are assumed to have lognormal distributions with low regularity. Our theory is supported by numerical experiments on test problems in one and two dimensions.

  10. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  11. Padrão alimentar de mulheres com câncer de mama: um estudo a posteriori

    Directory of Open Access Journals (Sweden)

    Karin Sedó Sarkis

    2014-09-01

    Full Text Available Objective: To identify the a posteriori dietary pattern of women with breast cancer. Methods: Observational, cross-sectional, analytical study. One hundred women with breast cancer undergoing chemotherapy/radiotherapy, attended at a cancer center in the city of Fortaleza between July and December 2012, were evaluated. Clinical, socioeconomic and anthropometric data were analysed. Food intake was assessed with a validated FFQ, and the daily intake of each food was expressed in grams. Foods were then grouped according to their nutritional composition, following the food pyramid adapted to the Brazilian population, into 10 food groups. Results: Mean age was 50.9 years, with a predominance of ductal carcinoma (83%) at stage III (60%). The mean BMI indicated overweight (28.3 kg/m2), and waist circumference was above the recommended value (98.6 cm). We identified four dietary components, and 21% of the dietary variance was explained by dietary component 1 ("Risk"), whose matrix comprised red and processed meat, oils and fats, and cereals. Conclusion: The patients' dietary pattern was characterized by a risk diet.

  12. A Priori and a Posteriori Dietary Patterns during Pregnancy and Gestational Weight Gain: The Generation R Study

    Directory of Open Access Journals (Sweden)

    Myrte J. Tielemans

    2015-11-01

    Full Text Available Abnormal gestational weight gain (GWG) is associated with adverse pregnancy outcomes. We examined whether dietary patterns are associated with GWG. Participants included 3374 pregnant women from a population-based cohort in the Netherlands. Dietary intake during pregnancy was assessed with food-frequency questionnaires. Three a posteriori-derived dietary patterns were identified using principal component analysis: a “Vegetable, oil and fish”, a “Nuts, high-fiber cereals and soy”, and a “Margarine, sugar and snacks” pattern. The a priori-defined dietary pattern was based on national dietary recommendations. Weight was repeatedly measured around 13, 20 and 30 weeks of pregnancy; pre-pregnancy and maximum weight were self-reported. Normal weight women with high adherence to the “Vegetable, oil and fish” pattern had higher early-pregnancy GWG than those with low adherence (43 g/week (95% CI 16; 69) for highest vs. lowest quartile (Q)). Adherence to the “Margarine, sugar and snacks” pattern was associated with a higher prevalence of excessive GWG (OR 1.45 (95% CI 1.06; 1.99), Q4 vs. Q1). Normal weight women with higher scores on the “Nuts, high-fiber cereals and soy” pattern had more moderate GWG than women with lower scores (−0.01 (95% CI −0.02; −0.00) per SD). The a priori-defined pattern was not associated with GWG. To conclude, specific dietary patterns may play a role in early pregnancy but are not consistently associated with GWG.

  13. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  14. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  15. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process

  16. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, if the agent keeps a memory of his errors, an acceptable solution is asymptotically reached under mild assumptions. Moreover, one can take advantage of big errors for faster learning.

  17. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  18. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  19. Metodologias de identificação de padrões alimentares a posteriori em crianças brasileiras: revisão sistemática

    Directory of Open Access Journals (Sweden)

    Carolina Abreu de Carvalho

    2016-01-01

    Full Text Available Abstract: The aim of this study is to provide guidance for the identification of dietary patterns using the a posteriori approach, and to analyse the methodological aspects of the Brazilian studies that identified dietary patterns in children. Articles were selected from the Latin American and Caribbean Health Sciences Literature and other scientific databases, using the descriptors: dietary pattern; principal component analysis; factor analysis; cluster analysis; reduced rank regression. Studies that identified dietary patterns of children by means of the a posteriori approach were included. Seven studies were selected, six cross-sectional and one cohort, published between 2007 and 2014. Five used a food frequency questionnaire as the dietary survey, one used a 24-hour recall and another a food list. The exploratory method most often used in the publications was principal component factor analysis, followed by cluster analysis. Sample sizes ranged from 232 to 4231, Kaiser-Meyer-Olkin test values from 0.524 to 0.873, and Cronbach's alpha from 0.51 to 0.69. Few Brazilian studies have identified dietary patterns of children using the a posteriori approach, and principal component factor analysis was the most frequently used technique.
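
    For readers unfamiliar with the a posteriori approach, the sketch below shows the core of a principal component analysis of a food-group intake matrix, the technique most often used in the reviewed studies. The intake matrix is simulated; real analyses start from FFQ-derived intakes, and the number of components retained is a modelling choice.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
intakes = rng.gamma(shape=2.0, scale=1.0, size=(300, 10))   # children x food groups (simulated)

scaled = StandardScaler().fit_transform(intakes)            # standardise food-group intakes
pca = PCA(n_components=3).fit(scaled)

loadings = pca.components_                 # food-group loadings that define each pattern
explained = pca.explained_variance_ratio_  # share of intake variance explained per pattern
pattern_scores = pca.transform(scaled)     # per-child adherence scores for each pattern

print(explained)
print(loadings[0].round(2))
```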

  20. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  1. Estima de error residual explícita para cantidades de interés utilizando funciones burbuja

    OpenAIRE

    Rosales, R.; Díez, P.

    2009-01-01

    In this work, a new explicit residual-type a posteriori error estimator for elliptic problems, oriented to quantities of interest, is introduced. The use of bubble functions over elements and over edges is proposed. The starting point is the finite element solution of the primal problem and that of an adjoint (or dual) problem associated with a user-defined quantity of interest, for example the temperature variation or the displacement of a point of the domain. The estimate is computed in...

  2. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  3. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  4. Construction of a Mean Square Error Adaptive Euler–Maruyama Method With Applications in Multilevel Monte Carlo

    KAUST Repository

    Hoel, Hakon

    2016-06-13

    A formal mean square error (MSE) expansion is derived for Euler-Maruyama numerical solutions of stochastic differential equations (SDE). The error expansion is used to construct a pathwise, a posteriori, adaptive time-stepping Euler-Maruyama algorithm for numerical solutions of SDE, and the resulting algorithm is incorporated into a multilevel Monte Carlo (MLMC) algorithm for weak approximations of SDE. This gives an efficient MSE adaptive MLMC algorithm for handling a number of low-regularity approximation problems. In low-regularity numerical example problems, the developed adaptive MLMC algorithm is shown to outperform the uniform time-stepping MLMC algorithm by orders of magnitude, producing output whose error with high probability is bounded by TOL > 0 at the near-optimal MLMC cost rate O(TOL^{-2} log(TOL)^4) that is achieved when the cost of sample generation is O(1).
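
    To make the multilevel structure concrete, the sketch below implements a minimal uniform-time-step MLMC estimator for E[g(X_T)] of a scalar SDE with coupled fine/coarse Euler-Maruyama paths. The MSE-adaptive time stepping of the cited work is not reproduced, and the drift, diffusion, payoff and sample counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative SDE dX = a(X) dt + b(X) dW, payoff g, horizon T; all assumptions.
a = lambda x: 0.05 * x
b = lambda x: 0.2 * x
g = lambda x: np.maximum(x - 1.0, 0.0)
T, X0, M = 1.0, 1.0, 2

def level_samples(level, n_paths):
    """Samples of Y_l = g(X_T^fine) - g(X_T^coarse) from coupled Euler-Maruyama paths."""
    n_fine = M ** level
    dt = T / n_fine
    Xf = np.full(n_paths, X0)
    Xc = np.full(n_paths, X0)
    dW_sum = np.zeros(n_paths)
    for n in range(n_fine):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)
        Xf += a(Xf) * dt + b(Xf) * dW
        dW_sum += dW
        if (n + 1) % M == 0:                 # coarse step reuses the summed increments
            Xc += a(Xc) * (M * dt) + b(Xc) * dW_sum
            dW_sum[:] = 0.0
    return g(Xf) - (g(Xc) if level > 0 else 0.0)

# Telescoping sum of level means; a real MLMC run would pick n_paths per level
# from estimated variances rather than using a fixed number.
estimate = sum(level_samples(level, 20_000).mean() for level in range(5))
print(estimate)
```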

  5. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
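
    The contrast between the two update rules compared above can be made explicit in a few lines. The sketch below implements a Rescorla-Wagner style total-error update and a local-error update on a toy compound-cue trial sequence; the learning rate and trial design are assumptions, not the simulations reported in the article.

```python
import numpy as np

alpha = 0.1   # assumed learning rate

def ter_update(w, x, outcome):
    """Total error reduction (Rescorla-Wagner style): one shared prediction error."""
    error = outcome - w @ x            # discrepancy with the compound prediction
    return w + alpha * error * x

def ler_update(w, x, outcome):
    """Local error reduction: each cue reduces its own prediction error."""
    errors = outcome - w * x           # one discrepancy per cue
    return w + alpha * errors * x

w_ter = np.zeros(2)
w_ler = np.zeros(2)
for _ in range(200):                   # cues A and B always presented together
    x, outcome = np.array([1.0, 1.0]), 1.0
    w_ter = ter_update(w_ter, x, outcome)
    w_ler = ler_update(w_ler, x, outcome)

print(w_ter, w_ler)   # TER weights share the outcome (sum to ~1); LER weights each approach ~1
```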

  6. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  7. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  8. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and it concludes with suggestions for further work. (orig.)

  9. Do testemunho à memória: o a posteriori entre a história e as ruínas

    Directory of Open Access Journals (Sweden)

    José César Coimbra

    2012-01-01

    Full Text Available Based on bibliographic research, this paper analyses the relations between testimony and memory in Giorgio Agamben, Sigmund Freud and Walter Benjamin, and examines the place, within those relations, of the notions of forgetting, remains, ruins and the a posteriori. The synergy of these authors' elaborations is noted; they highlight the possibilities of displacement of the subjective positions of testimony in the face of the duty of memory. At the moment when Brazil brings its Truth Commission to the foreground, the discussion opened by the revisited past underlines the importance of understanding the limits of the expression of memory that testimony provides.

  10. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  11. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  12. Considerations about expected a posteriori estimation in adaptive testing: adaptive a priori, adaptive correction for bias, and adaptive integration interval.

    Science.gov (United States)

    Raiche, Gilles; Blais, Jean-Guy

    2009-01-01

    In a computerized adaptive test, we would like to obtain an acceptable precision of the proficiency level estimate using an optimal number of items. Unfortunately, decreasing the number of items is accompanied by a certain degree of bias when the true proficiency level differs significantly from the a priori estimate. The authors suggest that it is possible to reduce the bias, and even the standard error of the estimate, by applying to each provisional estimate one or a combination of the following strategies: adaptive correction for bias proposed by Bock and Mislevy (1982), adaptive a priori estimate, and adaptive integration interval.
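
    The expected a posteriori (EAP) estimator that these strategies modify can be written down compactly. The sketch below computes a plain, non-adaptive EAP proficiency estimate for a Rasch model by numerical quadrature over a normal prior; the prior parameters, item difficulties and responses are illustrative, and none of the adaptive corrections discussed in the article are included.

```python
import numpy as np

def eap_estimate(difficulties, responses, prior_mean=0.0, prior_sd=1.0):
    """Posterior mean (EAP) and posterior SD of proficiency theta under a Rasch model."""
    theta = np.linspace(-4.0, 4.0, 161)                      # quadrature grid
    prior = np.exp(-0.5 * ((theta - prior_mean) / prior_sd) ** 2)
    p = 1.0 / (1.0 + np.exp(-(theta[:, None] - np.asarray(difficulties, float))))
    r = np.asarray(responses, float)
    likelihood = np.prod(p ** r * (1.0 - p) ** (1.0 - r), axis=1)
    posterior = prior * likelihood
    posterior /= posterior.sum()
    eap = np.sum(theta * posterior)                          # EAP point estimate
    psd = np.sqrt(np.sum((theta - eap) ** 2 * posterior))    # reported standard error
    return eap, psd

# illustrative item difficulties and a 0/1 response pattern
print(eap_estimate([-1.0, 0.0, 0.5, 1.0], [1, 1, 1, 0]))
```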

  13. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  14. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error; of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  15. Iterative maximum a posteriori (IMAP-DOAS) for retrieval of strongly absorbing trace gases: Model studies for CH4 and CO2 retrieval from near infrared spectra of SCIAMACHY onboard ENVISAT

    Directory of Open Access Journals (Sweden)

    C. Frankenberg

    2005-01-01

    Full Text Available In the past, differential optical absorption spectroscopy (DOAS) has mostly been employed for atmospheric trace gas retrieval in the UV/Vis spectral region. New spectrometers such as SCIAMACHY onboard ENVISAT also provide near infrared channels and thus allow for the detection of greenhouse gases like CH4, CO2, or N2O. However, modifications of the classical DOAS algorithm are necessary to account for the idiosyncrasies of this spectral region, i.e. the temperature and pressure dependence of the high resolution absorption lines. Furthermore, understanding the sensitivity of the measurement of these high resolution, strong absorption lines by means of a non-ideal device, i.e. having finite spectral resolution, is of special importance. This applies not only in the NIR, but can also prove to be an issue for the UV/Vis spectral region. This paper presents a modified iterative maximum a posteriori-DOAS (IMAP-DOAS) algorithm based on optimal estimation theory introduced to the remote sensing community by Rodgers (1976). This method directly iterates the vertical column densities of the absorbers of interest until the modeled total optical density fits the measurement. Although the discussion in this paper lays emphasis on satellite retrieval, the basic principles of the algorithm also hold for arbitrary measurement geometries. This new approach is applied to modeled spectra based on a comprehensive set of atmospheric temperature and pressure profiles. This analysis reveals that the sensitivity of measurement strongly depends on the prevailing pressure-height. The IMAP-DOAS algorithm properly accounts for the sensitivity of measurement on pressure due to pressure broadening of the absorption lines. Thus, biases in the retrieved vertical columns that would arise in classical algorithms are obviated. Here, we analyse and quantify these systematic biases as well as errors due to variations in the temperature and pressure profiles, which is indispensable for
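
    The optimal-estimation iteration underlying such maximum a posteriori retrievals has a standard Gauss-Newton form. The sketch below shows that generic update; the forward model, Jacobian and covariance matrices here are placeholders, not the IMAP-DOAS forward model or its actual a priori statistics.

```python
import numpy as np

def map_retrieval(y, forward, jacobian, x_a, S_a, S_e, n_iter=10):
    """Gauss-Newton iteration for the MAP (optimal estimation) state estimate."""
    x = x_a.copy()
    Sa_inv = np.linalg.inv(S_a)
    Se_inv = np.linalg.inv(S_e)
    for _ in range(n_iter):
        K = jacobian(x)                                  # Jacobian of the forward model
        lhs = K.T @ Se_inv @ K + Sa_inv
        rhs = K.T @ Se_inv @ (y - forward(x) + K @ (x - x_a))
        x = x_a + np.linalg.solve(lhs, rhs)              # new state estimate
    return x

# Toy linear "forward model": measured optical densities = K0 @ column amounts,
# so a single iteration already converges; purely illustrative numbers.
K0 = np.array([[1.0, 0.3], [0.2, 1.1], [0.9, 0.5]])
x_true = np.array([2.0, 1.0])
y = K0 @ x_true + np.array([0.01, -0.02, 0.005])
x_map = map_retrieval(y, lambda x: K0 @ x, lambda x: K0,
                      x_a=np.array([1.5, 1.5]),
                      S_a=np.diag([1.0, 1.0]), S_e=0.001 * np.eye(3))
print(x_map)
```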

  16. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  17. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggest that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  18. Help prevent hospital errors

    Science.gov (United States)

    ... this page: //medlineplus.gov/ency/patientinstructions/000618.htm (Help prevent hospital errors) ... in the hospital. If You Are Having Surgery, Help Keep Yourself Safe: Go to a hospital you ...

  19. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  20. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  1. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  2. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many small, but also some large, errors were discovered in the bills of 42 businesses.

  3. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  4. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  5. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  6. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  7. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  8. Isolating Numerical Error Effects in LES Using DNS-Derived Sub-Grid Closures

    Science.gov (United States)

    Edoh, Ayaboe; Karagozian, Ann

    2017-11-01

    The prospect of employing an explicitly-defined filter in Large-Eddy Simulations (LES) provides the opportunity to reduce the interaction of numerical/modeling errors and offers the chance to carry out grid-converged assessments, important for model development. By utilizing a quasi a priori evaluation method - wherein the LES is assisted by closures derived from a fully-resolved computation - it then becomes possible to understand the combined impacts of filter construction (e.g., filter width, spectral sharpness) and discretization choice on the solution accuracy. The present work looks at calculations of the compressible LES Navier-Stokes system and considers discrete filtering formulations in conjunction with high-order finite differencing schemes. Accuracy of the overall method construction is compared to a consistently-filtered exact solution, and lessons are extended to a posteriori (i.e., non-assisted) evaluations. Supported by ERC, Inc. (PS150006) and AFOSR (Dr. Chiping Li).
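
    As a small illustration of what an explicitly applied discrete filter does, the sketch below applies a symmetric three-point filter to a periodic 1D field and reports how much a high-wavenumber mode is attenuated. The filter weights and the test field are assumptions; the cited work uses its own discrete filter constructions together with high-order finite differencing.

```python
import numpy as np

def explicit_filter(u, weights=(0.25, 0.5, 0.25)):
    """Apply a symmetric three-point filter to a periodic 1D field u."""
    wl, wc, wr = weights
    return wl * np.roll(u, 1) + wc * u + wr * np.roll(u, -1)

n = 128
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u = np.sin(x) + 0.3 * np.sin(40.0 * x)        # resolved wave plus high-frequency content

u_bar = explicit_filter(u)
spec, spec_bar = np.abs(np.fft.rfft(u)), np.abs(np.fft.rfft(u_bar))
# mode 1 passes nearly unchanged; mode 40 is noticeably damped by the filter
print(spec[1], spec_bar[1], spec[40], spec_bar[40])
```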

  9. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well known and widespread Latin proverb which states that: to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes, thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as follows: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper is analyzing errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving the proper feedback.

  10. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
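
    The burst/gap bookkeeping described above can be expressed in a few lines: given a per-byte error-flag stream (1 = byte in error, 0 = good byte), collect the lengths of the error bursts and of the good-data gaps between them. The flag stream below is synthetic; a real measurement would come from the read-channel hardware described in the project.

```python
import numpy as np
from itertools import groupby

# Synthetic per-byte error flags (True = byte in error), stand-in for a real stream.
flags = np.random.default_rng(3).random(10_000) < 0.01

bursts, gaps = [], []
for in_error, run in groupby(flags):
    length = sum(1 for _ in run)
    (bursts if in_error else gaps).append(length)

# number of bursts, longest burst, and mean good-data gap length
print(len(bursts), max(bursts, default=0), float(np.mean(gaps)))
```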

  11. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that cause fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining for technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  12. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defence of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that looks to physicalist indeterminism for the basis of the possibility of human freedom need not be accused of incurring in them.

  13. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defence of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that looks to physicalist indeterminism for the basis of the possibili...

  14. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  15. An a posteriori measure of network modularity [v3; ref status: indexed, http://f1000r.es/2ju]

    Directory of Open Access Journals (Sweden)

    Timothée Poisot

    2013-12-01

    Full Text Available Measuring the modularity of networks, and how it deviates from random expectations, is important to understand their structure and emerging properties. Several measures exist to assess modularity, which, when applied to the same network, can return both different modularity values (i.e. different estimates of how modular the network is) and different module compositions (i.e. different groups of species forming said modules). More importantly, as each optimization method uses a different optimization criterion, there is a need to have an a posteriori measure serving as an equivalent of a goodness-of-fit. In this article, I propose such a measure of modularity, which is simply defined as the ratio of interactions established between members of the same modules vs. members of different modules. I apply this measure to a large dataset of 290 ecological networks representing host–parasite (bipartite) and predator–prey (unipartite) interactions, to show how the results are easy to interpret and present, especially to a broad audience not familiar with modularity analyses, but can still reveal new features about modularity and the ways to measure it.
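
    The measure as worded above can be computed directly from an edge list and a module assignment. The sketch below follows that wording (within-module interactions over between-module interactions); the exact normalisation used in the article may differ, and the toy network is an assumption.

```python
# Within/between counting from an edge list and a module assignment.
def realized_modularity(edges, module_of):
    """Ratio of within-module to between-module interactions."""
    within = sum(module_of[i] == module_of[j] for i, j in edges)
    between = len(edges) - within
    return within / between if between else float("inf")

# toy network: two modules of three species each, one cross-module interaction
edges = [(1, 2), (2, 3), (3, 1), (4, 5), (5, 6), (3, 4)]
modules = {1: "A", 2: "A", 3: "A", 4: "B", 5: "B", 6: "B"}
print(realized_modularity(edges, modules))   # 5 within vs. 1 between -> 5.0
```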

  16. Evaluation of Maximum a Posteriori Estimation as Data Assimilation Method for Forecasting Infiltration-Inflow Affected Urban Runoff with Radar Rainfall Input

    Directory of Open Access Journals (Sweden)

    Jonas W. Pedersen

    2016-09-01

    Full Text Available High quality on-line flow forecasts are useful for real-time operation of urban drainage systems and wastewater treatment plants. This requires computationally efficient models, which are continuously updated with observed data to provide good initial conditions for the forecasts. This paper presents a way of updating conceptual rainfall-runoff models using Maximum a Posteriori estimation to determine the most likely parameter constellation at the current point in time. This is done by combining information from prior parameter distributions and the model goodness of fit over a predefined period of time that precedes the forecast. The method is illustrated for an urban catchment, where flow forecasts of 0–4 h are generated by applying a lumped linear reservoir model with three cascading reservoirs. Radar rainfall observations are used as input to the model. The effects of different prior standard deviations and lengths of the auto-calibration period on the resulting flow forecast performance are evaluated. We were able to demonstrate that, if properly tuned, the method leads to a significant increase in forecasting performance compared to a model without continuous auto-calibration. Delayed responses and erratic behaviour in the parameter variations are, however, observed and the choice of prior distributions and length of auto-calibration period is not straightforward.
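
    A stripped-down version of the updating idea, combining a Gaussian prior on the parameters with the model's goodness of fit over a window preceding the forecast, is sketched below. The runoff model here is a single linear reservoir rather than the three-reservoir cascade of the paper, and the priors, noise levels and synthetic rainfall series are all assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def simulate(params, rain):
    """Single linear reservoir: storage drains with time constant k (per time step)."""
    k, = params
    q, storage = [], 0.0
    for r in rain:
        storage += r
        out = storage / k
        storage -= out
        q.append(out)
    return np.array(q)

def neg_log_posterior(params, rain, q_obs, prior_mean, prior_sd, obs_sd):
    misfit = np.sum((q_obs - simulate(params, rain)) ** 2) / (2.0 * obs_sd ** 2)
    prior = np.sum((params - prior_mean) ** 2 / (2.0 * prior_sd ** 2))
    return misfit + prior            # MAP estimate minimises misfit plus prior penalty

rng = np.random.default_rng(1)
rain = rng.gamma(0.3, 1.0, 48)                               # synthetic rainfall window
q_obs = simulate([6.0], rain) + rng.normal(0.0, 0.02, 48)    # synthetic observed flow

result = minimize(neg_log_posterior, x0=[4.0],
                  args=(rain, q_obs, np.array([4.0]), np.array([2.0]), 0.05),
                  bounds=[(1.0, 50.0)])
print(result.x)                      # MAP estimate of the reservoir time constant
```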

  17. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  18. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  19. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  20. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition and taxonomy of team errors. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  1. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  2. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest ways to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  3. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  4. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  5. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  6. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Satellite Photometric Error Determination Tamara E. Payne, Philip J. Castro, Stephen A. Gregory Applied Optimization 714 East Monument Ave, Suite...advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly...filter systems will likely be supplanted by the Sloan based filter systems. The Johnson photometric system is a set of filters in the optical

  7. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  8. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    Full Text Available The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  9. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Does the sensorimotor system minimize prediction error or select the most likely prediction during object lifting?

    Science.gov (United States)

    McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.

    2016-01-01

    The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
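
    The two candidate strategies can be made concrete with a small numerical sketch (the weights and probabilities below are hypothetical, not the study's skewed distribution): minimizing expected squared error leads to predicting the mean of the weight distribution, whereas a maximum a posteriori strategy predicts its mode, and for a skewed distribution the two predictions, and their expected costs, differ.

        import numpy as np

        # Hypothetical skewed distribution of object weights (in grams):
        # light weights are common, a heavy weight is rare.
        weights = np.array([200.0, 300.0, 400.0, 800.0])
        probs   = np.array([0.50,  0.25,  0.15,  0.10])

        # Maximum a posteriori (MAP) strategy: predict the most likely weight.
        map_prediction = weights[np.argmax(probs)]

        # Minimal squared error (MSE) strategy: predict the weight w that
        # minimizes E[(W - w)^2], which is the mean of the distribution.
        mse_prediction = np.sum(weights * probs)

        print(f"MAP prediction (mode): {map_prediction:.0f} g")   # 200 g
        print(f"MSE prediction (mean): {mse_prediction:.0f} g")   # 315 g

        # Expected squared error of each strategy under the same distribution:
        for name, w_hat in [("MAP", map_prediction), ("MSE", mse_prediction)]:
            exp_sq_err = np.sum(probs * (weights - w_hat) ** 2)
            print(f"{name}: expected squared error = {exp_sq_err:.0f} g^2")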

  11. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  12. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi‐layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE‐like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.
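
    As a rough illustration of the MEE idea (a sketch assuming a linear model, a Gaussian Parzen window and made-up data, not an example taken from the book), the quadratic Renyi entropy of the model errors is estimated from pairwise error differences and minimized directly:

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        # Illustrative data: linear relation with heavy-tailed (impulsive) noise.
        X = rng.uniform(-1, 1, size=(200, 1))
        y = 2.0 * X[:, 0] + 0.3 * rng.standard_t(df=1.5, size=200)

        def mee_cost(w, X, y, sigma=0.5):
            # Quadratic Renyi entropy of the errors, estimated with a Gaussian
            # Parzen window; minimizing it maximizes the "information potential".
            e = y - X @ w[:-1] - w[-1]                    # model errors
            d = e[:, None] - e[None, :]                   # pairwise differences
            ip = np.mean(np.exp(-d**2 / (4 * sigma**2)))  # information potential
            return -np.log(ip)

        res = minimize(mee_cost, np.zeros(2), args=(X, y), method="Nelder-Mead")
        w = res.x

        # Error entropy is shift-invariant, so recentre the bias on the mean error.
        w[-1] += np.mean(y - X @ w[:-1] - w[-1])
        print("MEE estimate of [slope, bias]:", np.round(w, 3))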

  13. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  14. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) Scheme and the Modified Packet Combining (MPC) Scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
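
    The basic packet combining idea can be sketched as follows; this is a toy illustration with a block-parity check standing in for a real CRC, not the PRPC, MPC or forecasting schemes of the letter. The receiver compares the two erroneous copies to find the bit positions where they disagree and then searches over those positions until the integrity check passes, which is exactly why the scheme fails when both copies are corrupted in the same position.

        from itertools import product

        def check(bits, checksum):
            """Toy integrity check standing in for a CRC: parity per 4-bit block."""
            return [sum(bits[i:i+4]) % 2 for i in range(0, len(bits), 4)] == checksum

        def packet_combine(copy1, copy2, checksum):
            """Correct errors using two erroneous received copies of one packet.
            Works when the two copies are not corrupted in the same positions."""
            suspects = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
            for choice in product([0, 1], repeat=len(suspects)):
                candidate = list(copy1)
                for pos, take_copy2 in zip(suspects, choice):
                    if take_copy2:
                        candidate[pos] = copy2[pos]
                if check(candidate, checksum):
                    return candidate
            return None  # fails, e.g. when both copies err in the same position

        # Example: original packet and its checksum known to the sender.
        packet = [1, 0, 1, 1, 0, 0, 1, 0]
        chksum = [sum(packet[i:i+4]) % 2 for i in range(0, len(packet), 4)]

        copy1 = packet[:]; copy1[2] ^= 1          # bit error at position 2
        copy2 = packet[:]; copy2[6] ^= 1          # bit error at position 6
        print(packet_combine(copy1, copy2, chksum) == packet)   # True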

  15. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  16. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  17. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  18. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  19. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Associations of Mediterranean Diet and a Posteriori Derived Dietary Patterns with Breast and Lung Cancer Risk: A Case-Control Study.

    Science.gov (United States)

    Krusinska, Beata; Hawrysz, Iwona; Wadolowska, Lidia; Slowinska, Malgorzata Anna; Biernacki, Maciej; Czerwinska, Anna; Golota, Janusz Jacek

    2018-04-11

    Lung cancer in men and breast cancer in women are the most commonly diagnosed cancers in Poland and worldwide. Results of studies involving dietary patterns (DPs) and breast or lung cancer risk in European countries outside the Mediterranean Sea region are limited and inconclusive. This study aimed to develop a 'Polish-adapted Mediterranean Diet' ('Polish-aMED') score, and then study the associations between the 'Polish-aMED' score and a posteriori-derived dietary patterns with breast or lung cancer risk in adult Poles. This pooled analysis of two case-control studies involved 560 subjects (280 men, 280 women) aged 40-75 years from Northeastern Poland. Diagnoses of breast cancer in 140 women and lung cancer in 140 men were found. The food frequency consumption of 21 selected food groups was collected using a 62-item Food Frequency Questionnaire (FFQ)-6. The 'Polish-adapted Mediterranean Diet' score, which included eight items (vegetables, fruit, whole grain, fish, legumes, nuts and seeds, as well as the ratio of vegetable oils to animal fat, and red and processed meat), was developed (range: 0-8 points). Three DPs were identified in a Principal Component Analysis: 'Prudent', 'Non-healthy', 'Dressings and sweetened-low-fat dairy'. In a multiple logistic regression analysis, two models were created: crude, and adjusted for age, sex, type of cancer, Body Mass Index (BMI), socioeconomic status (SES) index, overall physical activity, smoking status and alcohol abuse. The risk of breast or lung cancer was lower in the average (3-5 points) and high (6-8 points) levels of the 'Polish-aMED' score compared to the low (0-2 points) level by 51% (odds ratio (OR): 0.49; 95% confidence interval (CI): 0.30-0.80; p < 0.05). A Mediterranean diet could be considered for adults living in non-Mediterranean countries for the prevention of breast or lung cancers. Future studies should explore the role of a traditional Mediterranean diet fitted to local dietary patterns of non-Mediterranean Europeans.

  1. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Uncorrected unidosis carts show 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts that were not revised, the error is mainly (70.83%) caused when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or to boxes not having been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: There is a need to check (revise) unidosis carts and to implement a computerized prescription system to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  2. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors, including an index of error types for each stage in the medication process, was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  3. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  4. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  5. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in four a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  6. Regularity of pointwise boundary control systems

    DEFF Research Database (Denmark)

    Pedersen, Michael

    1992-01-01

    We will in these notes address some problems arising in "real-life" control application, namely problems concerning distributional control inputs on the boundary of the spatial domain. We extend the classical variational approach and give easily checkable sufficient conditions for the solutions...

  7. Pointwise estimates of pseudo-differential operators

    DEFF Research Database (Denmark)

    Johnsen, Jon

    As a new technique it is shown how general pseudo-differential operators can be estimated at arbitrary points in Euclidean space when acting on functions u with compact spectra. The estimate is a factorisation inequality, in which one factor is the Peetre–Fefferman–Stein maximal function of u......, whilst the other is a symbol factor carrying the whole information on the symbol. The symbol factor is estimated in terms of the spectral radius of u, so that the framework is well suited for Littlewood–Paley analysis. It is also shown how it gives easy access to results on polynomial bounds...... and estimates in Lp, including a new result for type 1,1-operators that they are always bounded on Lp-functions with compact spectra....

  8. Pointwise estimates of pseudo-differential operators

    DEFF Research Database (Denmark)

    Johnsen, Jon

    2011-01-01

    As a new technique it is shown how general pseudo-differential operators can be estimated at arbitrary points in Euclidean space when acting on functions u with compact spectra. The estimate is a factorisation inequality, in which one factor is the Peetre–Fefferman–Stein maximal function of u......, whilst the other is a symbol factor carrying the whole information on the symbol. The symbol factor is estimated in terms of the spectral radius of u, so that the framework is well suited for Littlewood–Paley analysis. It is also shown how it gives easy access to results on polynomial bounds...... and estimates in Lp, including a new result for type 1, 1-operators that they are always bounded on Lp-functions with compact spectra....

  9. Pointwise extensions of GSOS-defined operations

    NARCIS (Netherlands)

    Hansen, H.H.; Klin, B.

    2011-01-01

    Final coalgebras capture system behaviours such as streams, infinite trees and processes. Algebraic operations on a final coalgebra can be defined by distributive laws (of a syntax functor S over a behaviour functor F). Such distributive laws correspond to abstract specification formats. One such

  10. Pointwise Extensions of GSOS-Defined Operations

    NARCIS (Netherlands)

    H.H. Hansen (Helle); B. Klin

    2011-01-01

    Final coalgebras capture system behaviours such as streams, infinite trees and processes. Algebraic operations on a final coalgebra can be defined by distributive laws (of a syntax functor S over a behaviour functor F). Such distributive laws correspond to abstract specification

  11. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as setting of specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify part of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards to patient health care and some measures and recommendations to minimize or to eliminate these errors. Recording of the encountered laboratory errors during May 2008 and their statistical evaluation (using simple percent distribution) was done in the laboratory department of one of the private hospitals in Egypt. Errors have been classified according to the laboratory phases and according to their implication on patient health. Data obtained out of 1,600 testing procedures revealed that the total number of encountered errors is 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication for patient health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  12. Estimation of sampling errors using the ultimate cluster method

    Directory of Open Access Journals (Sweden)

    FÉLIX APARICIO PÉREZ

    1988-01-01

    Full Text Available Survey data are subject to two types of errors: sampling errors and non-sampling errors. Most real samples are more complicated than the examples that appear in textbooks and often do not meet the conditions for using the textbook formulas. Terms, definitions and formulas for computing the sampling error a priori are provided, with greater attention paid to relative estimation than to absolute error. An Ultimate Cluster model for estimating the sampling error a posteriori is explained in the context of a stratified sample, of a territory that includes one or more strata, and of a territory included entirely within the sample. This model is applied to the data obtained in a 1988 post-electoral study of the Catalan provinces, and the a priori sampling error estimates derived from the theoretical model are compared with the estimates obtained after applying the real model. The differences confirm that the Ultimate Cluster model should be used when the sample is one designed for a finite population rather than a simple random sample.
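
    A minimal sketch of the ultimate cluster computation described above (using the textbook with-replacement approximation and hypothetical weighted PSU totals, not the 1988 survey data): each stratum contributes n/(n-1) times the sum of squared deviations of its primary sampling unit totals from their stratum mean.

        import numpy as np

        def ultimate_cluster_se(strata):
            """Standard error of an estimated total for a stratified cluster sample.
            `strata` maps stratum name -> list of weighted PSU totals.
            Each stratum contributes n/(n-1) * sum((y_i - mean)^2)."""
            variance = 0.0
            for psu_totals in strata.values():
                y = np.asarray(psu_totals, dtype=float)
                n = len(y)
                variance += n / (n - 1) * np.sum((y - y.mean()) ** 2)
            return np.sqrt(variance)

        # Hypothetical survey: weighted totals of an attribute per sampled PSU.
        strata = {
            "stratum A": [120.0, 95.0, 130.0, 110.0],
            "stratum B": [300.0, 280.0, 350.0],
        }
        estimate = sum(sum(v) for v in strata.values())
        se = ultimate_cluster_se(strata)
        print(f"estimated total = {estimate:.0f}, standard error = {se:.1f}")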

  13. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
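
    The quantity being described can be written as a two-line update rule; the following Rescorla-Wagner-style sketch with an arbitrary learning rate is a generic textbook illustration, not a model fitted in the review.

        # Reward prediction error: delta = received reward - predicted reward.
        # The value estimate is nudged toward the reward by a learning rate alpha.
        alpha = 0.2          # learning rate
        value = 0.0          # predicted reward for a cue; initially nothing expected

        rewards = [1.0, 1.0, 1.0, 0.0, 1.0]   # rewards actually received per trial
        for trial, reward in enumerate(rewards, 1):
            delta = reward - value            # positive: better than predicted
            value += alpha * delta            # negative: worse than predicted
            print(f"trial {trial}: prediction error = {delta:+.2f}, new value = {value:.2f}")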

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
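
    A toy comparison of the two procedures can be coded directly; the linear dependence on three systematic parameters, the statistical noise level and the run counts below are illustrative assumptions, not the models of the note.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy observable: depends linearly on three systematic parameters.
        # sens[i] is the sensitivity to parameter i; each parameter has a
        # one-standard-deviation uncertainty of 1.
        sens = np.array([0.5, -1.2, 0.8])
        mc_stat_sigma = 0.3    # statistical noise of a single MC run

        def mc_run(shift):
            """One Monte Carlo run with systematic parameters shifted by `shift`."""
            return sens @ shift + mc_stat_sigma * rng.standard_normal()

        nominal = mc_run(np.zeros(3))

        # Unisim: vary one parameter at a time by one standard deviation.
        unisim_var = 0.0
        for i in range(3):
            shift = np.zeros(3); shift[i] = 1.0
            unisim_var += (mc_run(shift) - nominal) ** 2

        # Multisim: many runs with all parameters drawn from their distributions.
        n_multisim = 200
        deltas = np.array([mc_run(rng.standard_normal(3)) - nominal
                           for _ in range(n_multisim)])
        multisim_var = deltas.var()

        true_var = np.sum(sens ** 2)   # exact systematic variance for this toy
        print(f"true {true_var:.2f}, unisim {unisim_var:.2f}, multisim {multisim_var:.2f}")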

  15. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2.

  16. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k 2

  17. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  18. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  19. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  20. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision-making process. The study focused on the following research question: which human errors can act as a potential source of decision failure in the evaluation of alternatives during decision making. Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the alternative-evaluation step. The results o...

  1. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The steps involved in finding the correction factors with COMFORT-PLUS are outlined, and the method has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. A limitation on the application of this procedure is found to be that it depends on the magnitude of the machine errors. Another is that the program is not totally automated since the user must decide a priori where to look for errors.

  2. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  3. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  4. Maximum a posteriori Bayesian estimation of mycophenolic Acid area under the concentration-time curve: is this clinically useful for dosage prediction yet?

    Science.gov (United States)

    Staatz, Christine E; Tett, Susan E

    2011-12-01

    This review seeks to summarize the available data about Bayesian estimation of area under the plasma concentration-time curve (AUC) and dosage prediction for mycophenolic acid (MPA) and evaluate whether sufficient evidence is available for routine use of Bayesian dosage prediction in clinical practice. A literature search identified 14 studies that assessed the predictive performance of maximum a posteriori Bayesian estimation of MPA AUC and one report that retrospectively evaluated how closely dosage recommendations based on Bayesian forecasting achieved targeted MPA exposure. Studies to date have mostly been undertaken in renal transplant recipients, with limited investigation in patients treated with MPA for autoimmune disease or haematopoietic stem cell transplantation. All of these studies have involved use of the mycophenolate mofetil (MMF) formulation of MPA, rather than the enteric-coated mycophenolate sodium (EC-MPS) formulation. Bias associated with estimation of MPA AUC using Bayesian forecasting was generally less than 10%. However some difficulties with imprecision was evident, with values ranging from 4% to 34% (based on estimation involving two or more concentration measurements). Evaluation of whether MPA dosing decisions based on Bayesian forecasting (by the free website service https://pharmaco.chu-limoges.fr) achieved target drug exposure has only been undertaken once. When MMF dosage recommendations were applied by clinicians, a higher proportion (72-80%) of subsequent estimated MPA AUC values were within the 30-60 mg · h/L target range, compared with when dosage recommendations were not followed (only 39-57% within target range). Such findings provide evidence that Bayesian dosage prediction is clinically useful for achieving target MPA AUC. This study, however, was retrospective and focussed only on adult renal transplant recipients. Furthermore, in this study, Bayesian-generated AUC estimations and dosage predictions were not compared
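
    What maximum a posteriori Bayesian estimation of an AUC involves can be sketched with a deliberately simplified one-compartment oral-absorption model; the population priors, residual error, dose and sparse concentration samples below are made up, and real MPA estimators (such as the website service mentioned above) use richer, validated population models.

        import numpy as np
        from scipy.optimize import minimize

        # Illustrative one-compartment oral-absorption model: C(t) for dose D.
        def conc(t, cl, v, ka, dose):
            ke = cl / v
            return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

        # Hypothetical population priors (log-normal) and residual error.
        prior_mean = np.log([10.0, 30.0])     # CL (L/h), V (L)
        prior_sd   = np.array([0.3, 0.3])     # between-subject variability (log scale)
        resid_sd   = 1.0                      # additive residual error (mg/L)
        ka, dose   = 2.0, 1000.0              # fixed absorption rate and dose

        # Sparse observed concentrations for one patient (made-up values).
        t_obs = np.array([0.5, 1.0, 4.0])
        c_obs = np.array([18.0, 22.0, 9.0])

        def neg_log_posterior(log_params):
            cl, v = np.exp(log_params)
            pred = conc(t_obs, cl, v, ka, dose)
            loglik   = -0.5 * np.sum(((c_obs - pred) / resid_sd) ** 2)
            logprior = -0.5 * np.sum(((log_params - prior_mean) / prior_sd) ** 2)
            return -(loglik + logprior)

        res = minimize(neg_log_posterior, prior_mean, method="Nelder-Mead")
        cl_map, v_map = np.exp(res.x)
        auc_map = dose / cl_map     # AUC(0-inf) = dose / clearance (bioavailability taken as 1)
        print(f"MAP clearance = {cl_map:.1f} L/h, MAP AUC = {auc_map:.0f} mg*h/L")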

  5. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  6. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  7. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  8. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in those aged 40-50 years (67.6%), in less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.

  9. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  10. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
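
    The attenuation that motivates such corrections, and the classical normal-theory fix that earlier methods relied on, can be sketched in a few lines; the binormal parameters and error variance below are illustrative, and this is not the new correction proposed in the paper above.

        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(2)
        n = 200_000

        # Binormal model: cases are shifted upward relative to controls.
        delta, sigma, sigma_e = 1.0, 1.0, 0.8    # separation, true SD, error SD
        controls = rng.normal(0.0, sigma, n)
        cases    = rng.normal(delta, sigma, n)

        def auc(x_cases, x_controls):
            """Monte Carlo estimate of P(case value > control value)."""
            return np.mean(x_cases > x_controls)   # independent random pairs

        auc_true = auc(cases, controls)
        # Add independent measurement error to every observation.
        auc_obs = auc(cases + rng.normal(0, sigma_e, n),
                      controls + rng.normal(0, sigma_e, n))

        # Classical normal-theory correction (requires the binormal assumption,
        # unlike the method proposed in the paper above):
        inflate = np.sqrt((2 * sigma**2 + 2 * sigma_e**2) / (2 * sigma**2))
        auc_corrected = norm.cdf(norm.ppf(auc_obs) * inflate)

        print(f"true AUC      ~ {auc_true:.3f}")       # ~ Phi(1/sqrt(2)) = 0.760
        print(f"observed AUC  ~ {auc_obs:.3f}")        # attenuated toward 0.5
        print(f"corrected AUC ~ {auc_corrected:.3f}")  # close to the true AUC again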

  11. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibits shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  12. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  13. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance or frequency of various root-causes and error-modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, working place factors, communication and training practices are the primary root-causes, while omission, transposition and quantitative mistakes are the most frequent among the error-modes. Recommendations about domestic research on human performance problems in NPPs are given.

  14. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  15. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors.

  16. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  17. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.
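
    To see why ignoring covariate measurement error biases regression quantiles, here is a small simulation sketch (not the authors' joint-estimating-equation method): median regression is fitted by minimizing the check (pinball) loss, once with the exact covariate and once with an error-prone one; the simulated model and noise levels are assumptions.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(3)
        n = 2_000

        # True model: the median of y given x has slope 2 and intercept 1.
        x_true = rng.normal(0, 1, n)
        y = 1.0 + 2.0 * x_true + rng.standard_normal(n)
        x_noisy = x_true + rng.normal(0, 1.0, n)      # classical measurement error

        def quantile_fit(x, y, tau=0.5):
            """Linear quantile regression by minimizing the mean check loss."""
            def check_loss(beta):
                r = y - beta[0] - beta[1] * x
                return np.mean(np.where(r >= 0, tau * r, (tau - 1) * r))
            return minimize(check_loss, np.zeros(2), method="Nelder-Mead").x

        b_exact = quantile_fit(x_true, y)
        b_noisy = quantile_fit(x_noisy, y)
        print(f"slope with exact covariate:       {b_exact[1]:.2f}")   # ~ 2.0
        print(f"slope with error-prone covariate: {b_noisy[1]:.2f}")   # attenuated, ~ 1.0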

  18. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  19. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution, if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in the optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...

  20. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process, called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance in idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to consistent reduction in error rates.

  1. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number...... of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems....

  2. Negligence, genuine error, and litigation

    OpenAIRE

    Sohn DH

    2013-01-01

David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort syst...

  3. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible in the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  4. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurrence are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
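    As a hedged numeric illustration of the kind of approximation error discussed (the paper's fault trees are not reproduced here), the sketch below compares first-order variance propagation through a two-input AND gate with the exact variance of a product of independent inputs, plus a Monte Carlo check; all input means and variances are hypothetical.

```python
# Hedged numeric example (not the paper's fault trees): for a two-input AND gate
# the top-event probability is the product P1*P2 of independent basic-event
# probabilities. First-order propagation drops the Var*Var term of the exact
# product-variance formula; a Monte Carlo run checks the exact value.
import numpy as np

rng = np.random.default_rng(2)
m1, s1 = 1e-3, 5e-4                      # mean / std of basic event 1 (hypothetical)
m2, s2 = 2e-3, 1e-3                      # mean / std of basic event 2 (hypothetical)

var_first_order = (m2 * s1) ** 2 + (m1 * s2) ** 2
var_exact = var_first_order + (s1 * s2) ** 2          # exact for independent inputs

def lognormal_samples(m, s, size):
    # lognormal with the requested mean and standard deviation (positive inputs)
    sigma2 = np.log(1.0 + (s / m) ** 2)
    return rng.lognormal(np.log(m) - 0.5 * sigma2, np.sqrt(sigma2), size)

p1 = lognormal_samples(m1, s1, 10 ** 6)
p2 = lognormal_samples(m2, s2, 10 ** 6)
print(f"first-order {var_first_order:.3e}, exact {var_exact:.3e}, "
      f"Monte Carlo {np.var(p1 * p2):.3e}")
```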

  5. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention for and improvement of the organisational aspects of error are far more important than litigating the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  6. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  7. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  8. Error bounds on block Gauss-Seidel solutions of coupled multiphysics problems

    KAUST Repository

    Whiteley, J. P.

    2011-05-09

    Mathematical models in many fields often consist of coupled sub-models, each of which describes a different physical process. For many applications, the quantity of interest from these models may be written as a linear functional of the solution to the governing equations. Mature numerical solution techniques for the individual sub-models often exist. Rather than derive a numerical solution technique for the full coupled model, it is therefore natural to investigate whether these techniques may be used by coupling in a block Gauss-Seidel fashion. In this study, we derive two a posteriori bounds for such linear functionals. These bounds may be used on each Gauss-Seidel iteration to estimate the error in the linear functional computed using the single physics solvers, without actually solving the full, coupled problem. We demonstrate the use of the bound first by using a model problem from linear algebra, and then a linear ordinary differential equation example. We then investigate the effectiveness of the bound using a non-linear coupled fluid-temperature problem. One of the bounds derived is very sharp for most linear functionals considered, allowing us to predict very accurately when to terminate our block Gauss-Seidel iteration. © 2011 John Wiley & Sons, Ltd.
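    A minimal sketch of the block Gauss-Seidel coupling described above, applied to a small 2x2 block linear system. The per-sweep indicator used here is a crude residual bound on the functional error, not the sharper a posteriori bound derived in the paper, and the matrices and functional are arbitrary.

```python
# Minimal sketch of block Gauss-Seidel on a 2x2 block linear system. The per-sweep
# indicator below is a crude residual bound on the functional error,
# |q^T e| <= ||q|| * ||r|| / sigma_min(A), not the sharper bound from the paper.
import numpy as np

rng = np.random.default_rng(3)
A11 = np.diag(rng.uniform(2.0, 4.0, 4))
A22 = np.diag(rng.uniform(2.0, 4.0, 4))
A12 = 0.2 * rng.standard_normal((4, 4))
A21 = 0.2 * rng.standard_normal((4, 4))
b1, b2 = rng.standard_normal(4), rng.standard_normal(4)
q = rng.standard_normal(8)                               # functional of interest: q^T u

A = np.block([[A11, A12], [A21, A22]])
b = np.concatenate([b1, b2])
u_exact = np.linalg.solve(A, b)
sigma_min = np.linalg.svd(A, compute_uv=False)[-1]

u1, u2 = np.zeros(4), np.zeros(4)
for sweep in range(1, 31):
    u1 = np.linalg.solve(A11, b1 - A12 @ u2)             # "single physics" solve, field 1
    u2 = np.linalg.solve(A22, b2 - A21 @ u1)              # "single physics" solve, field 2
    u = np.concatenate([u1, u2])
    indicator = np.linalg.norm(q) * np.linalg.norm(b - A @ u) / sigma_min
    if indicator < 1e-10:
        break
print(f"sweeps: {sweep}, true functional error: {abs(q @ (u - u_exact)):.2e}, "
      f"indicator: {indicator:.2e}")
```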

  9. Error bounds on block Gauss-Seidel solutions of coupled multiphysics problems

    KAUST Repository

    Whiteley, J. P.; Gillow, K.; Tavener, S. J.; Walter, A. C.

    2011-01-01

    Mathematical models in many fields often consist of coupled sub-models, each of which describes a different physical process. For many applications, the quantity of interest from these models may be written as a linear functional of the solution to the governing equations. Mature numerical solution techniques for the individual sub-models often exist. Rather than derive a numerical solution technique for the full coupled model, it is therefore natural to investigate whether these techniques may be used by coupling in a block Gauss-Seidel fashion. In this study, we derive two a posteriori bounds for such linear functionals. These bounds may be used on each Gauss-Seidel iteration to estimate the error in the linear functional computed using the single physics solvers, without actually solving the full, coupled problem. We demonstrate the use of the bound first by using a model problem from linear algebra, and then a linear ordinary differential equation example. We then investigate the effectiveness of the bound using a non-linear coupled fluid-temperature problem. One of the bounds derived is very sharp for most linear functionals considered, allowing us to predict very accurately when to terminate our block Gauss-Seidel iteration. © 2011 John Wiley & Sons, Ltd.

  10. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  11. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

Quantum error correction is important to quantum information processing, which allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from the knowledge of error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data in the past. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
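    The sketch below illustrates only the Gaussian-process smoothing step, not the error-correction protocol itself: past noisy error-rate estimates are regressed with an RBF kernel to predict the current rate and its uncertainty. The kernel length scale, noise level, and drifting rate are all assumed values.

```python
# Sketch of the Gaussian-process smoothing step only (the error-correction protocol
# is not reproduced): noisy error-rate estimates at past times are regressed with
# an RBF kernel to predict the current rate and its uncertainty.
import numpy as np

def rbf(t1, t2, length=10.0, amp=0.02):
    return amp ** 2 * np.exp(-0.5 * (t1[:, None] - t2[None, :]) ** 2 / length ** 2)

rng = np.random.default_rng(4)
t_obs = np.arange(0.0, 50.0, 2.0)                         # times of past syndrome batches
true_rate = 0.01 + 0.005 * np.sin(t_obs / 8.0)            # slowly drifting physical error rate
y = true_rate + 0.002 * rng.standard_normal(t_obs.size)   # noisy estimates from finite data

t_new = np.array([50.0])                                  # "now"
K = rbf(t_obs, t_obs) + (0.002 ** 2) * np.eye(t_obs.size)
k_star = rbf(t_new, t_obs)
alpha = np.linalg.solve(K, y - y.mean())
mean = y.mean() + k_star @ alpha
var = rbf(t_new, t_new) - k_star @ np.linalg.solve(K, k_star.T)
print(f"predicted error rate now: {mean[0]:.4f} +/- {np.sqrt(var[0, 0]):.4f}")
```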

  12. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  13. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  14. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  15. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  16. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  17. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewar, Campbell, and Crano. (Author)

  18. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  19. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  20. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange is usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may differ, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user are also noted in the work.

  1. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  2. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  3. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  4. Human Error and Organizational Management

    Directory of Open Access Journals (Sweden)

    Alecxandrina DEACONU

    2009-01-01

Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that – even if they seem distant from this world – are aware of, interested in or conditioned by economic development. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate – consciously or not – human error nowadays. Moreover, the extremely tense Romanian context, where failure is rather a rule than an exception, made us investigate the phenomenon of generating a human error and the ways to diminish its effects.

  5. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  6. Medication errors in pediatric inpatients

    DEFF Research Database (Denmark)

    Rishoej, Rikke Mie; Almarsdóttir, Anna Birna; Christesen, Henrik Thybo

    2017-01-01

    The aim was to describe medication errors (MEs) in hospitalized children reported to the national mandatory reporting and learning system, the Danish Patient Safety Database (DPSD). MEs were extracted from DPSD from the 5-year period of 2010–2014. We included reports from public hospitals on pati...... safety in pediatric inpatients.(Table presented.)...

  7. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. In contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which the potential of SLA-oriented (non-error-based) tagging is made clearer.

  8. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  9. and Correlated Error-Regressor

    African Journals Online (AJOL)

    Nekky Umera

    in queuing theory and econometrics, where the usual assumption of independent error terms may not be plausible in most cases. Also, when using time-series data on a number of micro-economic units, such as households and service oriented channels, where the stochastic disturbance terms in part reflect variables which ...

  10. Rank error-correcting pairs

    DEFF Research Database (Denmark)

    Martinez Peñas, Umberto; Pellikaan, Ruud

    2017-01-01

    Error-correcting pairs were introduced as a general method of decoding linear codes with respect to the Hamming metric using coordinatewise products of vectors, and are used for many well-known families of codes. In this paper, we define new types of vector products, extending the coordinatewise ...

  11. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  12. Finding errors in big data

    NARCIS (Netherlands)

    Puts, Marco; Daas, Piet; de Waal, A.G.

    No data source is perfect. Mistakes inevitably creep in. Spotting errors is hard enough when dealing with survey responses from several thousand people, but the difficulty is multiplied hugely when that mysterious beast Big Data comes into play. Statistics Netherlands is about to publish its first

  13. The Errors of Our Ways

    Science.gov (United States)

    Kane, Michael

    2011-01-01

    Errors don't exist in our data, but they serve a vital function. Reality is complicated, but our models need to be simple in order to be manageable. We assume that attributes are invariant over some conditions of observation, and once we do that we need some way of accounting for the variability in observed scores over these conditions of…

  14. Cascade Error Projection Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.

    1995-01-01

A detailed mathematical analysis is presented for a new learning algorithm termed cascade error projection (CEP) and a general learning framework. This framework can be used to obtain the cascade correlation learning algorithm by choosing a particular set of parameters.

  15. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  16. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  17. Optimal classifier selection and negative bias in error rate estimation: an empirical study on high-dimensional prediction

    Directory of Open Access Journals (Sweden)

    Boulesteix Anne-Laure

    2009-12-01

Full Text Available Background: In biometric practice, researchers often apply a large number of different methods in a "trial-and-error" strategy to get as much as possible out of their data and, due to publication pressure or pressure from the consulting customer, present only the most favorable results. This strategy may induce a substantial optimistic bias in prediction error estimation, which is quantitatively assessed in the present manuscript. The focus of our work is on class prediction based on high-dimensional data (e.g. microarray data), since such analyses are particularly exposed to this kind of bias. Methods: In our study we consider a total of 124 variants of classifiers (possibly including variable selection or tuning steps) within a cross-validation evaluation scheme. The classifiers are applied to original and modified real microarray data sets, some of which are obtained by randomly permuting the class labels to mimic non-informative predictors while preserving their correlation structure. Results: We assess the minimal misclassification rate over the different variants of classifiers in order to quantify the bias arising when the optimal classifier is selected a posteriori in a data-driven manner. The bias resulting from the parameter tuning (including gene selection parameters as a special case) and the bias resulting from the choice of the classification method are examined both separately and jointly. Conclusions: The median minimal error rate over the investigated classifiers was as low as 31% and 41% based on permuted uninformative predictors from studies on colon cancer and prostate cancer, respectively. We conclude that the strategy of presenting only the optimal result is not acceptable because it yields a substantial bias in error rate estimation, and we suggest alternative approaches for properly reporting classification accuracy.
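    A hedged, scaled-down illustration of the reporting bias quantified above: with permuted, uninformative labels the expected misclassification rate is 50%, yet reporting only the best of several cross-validated classifiers yields a visibly lower number. The classifier list and data dimensions are arbitrary, and scikit-learn is assumed to be available.

```python
# Scaled-down illustration of the selection bias: on purely uninformative (permuted)
# labels, the expected error is 50%, but the minimum over several cross-validated
# classifiers is optimistically lower. Classifier choices and sizes are arbitrary.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.standard_normal((40, 500))     # 40 samples, 500 pure-noise features
y = rng.permutation([0, 1] * 20)       # uninformative class labels

classifiers = [LogisticRegression(max_iter=1000),
               KNeighborsClassifier(n_neighbors=3),
               DecisionTreeClassifier(random_state=0)]
errors = [1.0 - cross_val_score(clf, X, y, cv=5).mean() for clf in classifiers]
print("per-classifier CV error:", np.round(errors, 2))
print("optimistically reported minimum:", round(min(errors), 2))
```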

  18. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
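    A small worked sketch of the trade-off described above (assuming NumPy and single-precision arithmetic, chosen to make rounding visible at modest step counts): forward Euler for y' = y on [0, 1], where shrinking the step first reduces the discretization error and then rounding error prevents further improvement.

```python
# Worked sketch: forward Euler for y' = y, y(0) = 1, integrated to t = 1 in single
# precision. Shrinking h first reduces the discretization error; for very small h
# the accumulated rounding error keeps the total error from improving further.
import numpy as np

def euler_float32(h):
    n = int(round(1.0 / h))
    y = np.float32(1.0)
    h32 = np.float32(h)
    for _ in range(n):
        y = y + h32 * y        # one forward Euler step in float32
    return float(y)

exact = np.exp(1.0)
for h in (1e-1, 1e-2, 1e-3, 1e-4, 1e-5):
    print(f"h = {h:.0e}   |error at t=1| = {abs(euler_float32(h) - exact):.2e}")
```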

  19. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and

  20. Negligence, genuine error, and litigation

    Science.gov (United States)

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  1. Robot learning and error correction

    Science.gov (United States)

    Friedman, L.

    1977-01-01

    A model of robot learning is described that associates previously unknown perceptions with the sensed known consequences of robot actions. For these actions, both the categories of outcomes and the corresponding sensory patterns are incorporated in a knowledge base by the system designer. Thus the robot is able to predict the outcome of an action and compare the expectation with the experience. New knowledge about what to expect in the world may then be incorporated by the robot in a pre-existing structure whether it detects accordance or discrepancy between a predicted consequence and experience. Errors committed during plan execution are detected by the same type of comparison process and learning may be applied to avoiding the errors.

  2. Error studies of Halbach Magnets

    Energy Technology Data Exchange (ETDEWEB)

    Brooks, S. [Brookhaven National Lab. (BNL), Upton, NY (United States)

    2017-03-02

These error studies were done on the Halbach magnets for the CBETA “First Girder” as described in note [CBETA001]. The CBETA magnets have since changed slightly to the lattice in [CBETA009]; however, this is not a large enough change to significantly affect the results here. The QF and BD arc FFAG magnets are considered. For each assumed set of error distributions and each ideal magnet, 100 random magnets with errors are generated. These are then run through an automated version of the iron wire multipole cancellation algorithm. The maximum wire diameter allowed is 0.063” as in the proof-of-principle magnets. Initially, 32 wires (2 per Halbach wedge) are tried; if this does not achieve 1e-4 level accuracy in the simulation, 48 and then 64 wires are used. By “1e-4 accuracy” it is meant that the FOM defined by sqrt(Σ_{n≥sextupole} (a_n² + b_n²)) is less than 1 unit, where the multipoles are taken at the maximum nominal beam radius, R = 23 mm for these magnets. The algorithm initially uses 20 convergence iterations. If 64 wires do not achieve 1e-4 accuracy, this is increased to 50 iterations to check for slow-converging cases. There are also classifications for magnets that do not achieve 1e-4 but do achieve 1e-3 (FOM ≤ 10 units). This is technically within the spec discussed in the Jan 30, 2017 review; however, there will be errors in practical shimming not dealt with in the simulation, so it is preferable to do much better than the spec in the simulation.
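    For concreteness, a sketch of the figure of merit only (the wire-shimming algorithm itself is not reproduced): the FOM is the root sum of squares of the normal and skew multipoles from the sextupole upward, evaluated at the nominal beam radius, and "1e-4 accuracy" means FOM < 1 unit. The coefficient values below are hypothetical.

```python
# Sketch of the figure of merit only (not the wire-shimming algorithm): root sum of
# squares of skew (a_n) and normal (b_n) multipoles from the sextupole upward, in
# units of 1e-4 of the main field at R = 23 mm. Coefficients below are made up.
import numpy as np

def fom(a, b):
    """a, b: skew/normal multipoles from the sextupole upward, in 1e-4 units."""
    return float(np.sqrt(np.sum(np.square(a) + np.square(b))))

a_n = [0.40, 0.10, 0.05, 0.02]       # hypothetical skew sextupole, octupole, ...
b_n = [0.30, 0.20, 0.10, 0.05]       # hypothetical normal sextupole, octupole, ...
value = fom(a_n, b_n)
print(f"FOM = {value:.2f} units -> {'meets' if value < 1.0 else 'misses'} the 1e-4 target")
```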

  3. [Errors in laboratory daily practice].

    Science.gov (United States)

    Larrose, C; Le Carrer, D

    2007-01-01

Legislation set by GBEA (Guide de bonne exécution des analyses) requires that, before performing analysis, the laboratory directors check both the nature of the samples and the patient's identity. The data processing of requisition forms, which identifies key errors, was established in 2000 and in 2002 by the specialized biochemistry laboratory, also with the contribution of the reception centre for biological samples. The laboratories follow strict acceptability criteria, applied at reception, before checking requisition forms and biological samples. All errors are logged into the laboratory database and analysis reports are sent to the care unit specifying the problems and the consequences they have on the analysis. The data is then assessed by the laboratory directors to produce monthly or annual statistical reports. This indicates the number of errors, which are then indexed to patient files to reveal the specific problem areas, therefore allowing the laboratory directors to teach the nurses and enable corrective action.

  4. Technical errors in MR arthrography

    International Nuclear Information System (INIS)

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  5. Technical errors in MR arthrography

    Energy Technology Data Exchange (ETDEWEB)

    Hodler, Juerg [Orthopaedic University Hospital of Balgrist, Radiology, Zurich (Switzerland)

    2008-01-15

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  6. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
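    A generic two-state oscillator sketch (a common textbook form, not necessarily the paper's full error model): clock phase and fractional frequency driven by white frequency noise and random-walk frequency noise. The noise densities and sampling interval are illustrative assumptions.

```python
# Generic two-state clock error simulation (textbook form, not the paper's model):
# phase error x and fractional frequency error y, driven by white frequency noise
# (density q1) and random-walk frequency noise (density q2), sampled every tau s.
import numpy as np

rng = np.random.default_rng(6)
tau, n = 1.0, 3600                  # 1 s steps for one hour
q1, q2 = 1e-22, 1e-26               # noise densities (illustrative values only)

x = np.zeros(n)                     # phase error [s]
y = np.zeros(n)                     # fractional frequency error
for k in range(1, n):
    y[k] = y[k - 1] + np.sqrt(q2 * tau) * rng.standard_normal()
    x[k] = x[k - 1] + y[k - 1] * tau + np.sqrt(q1 * tau) * rng.standard_normal()

print(f"phase error after 1 h: {x[-1]:.3e} s")
```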

  7. Righting errors in writing errors: the Wing and Baddeley (1980) spelling error corpus revisited.

    Science.gov (United States)

    Wing, Alan M; Baddeley, Alan D

    2009-03-01

We present a new analysis of our previously published corpus of handwriting errors (slips) using the proportional allocation algorithm of Machtynger and Shallice (2009). As previously, the proportion of slips is greater in the middle of the word than at the ends; however, in contrast to before, the proportion is greater at the end than at the beginning of the word. The findings are consistent with the hypothesis of memory effects in a graphemic output buffer.

  8. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  9. WACC: Definition, misconceptions and errors

    OpenAIRE

    Fernandez, Pablo

    2011-01-01

The WACC is just the rate at which the Free Cash Flows must be discounted to obtain the same result as in the valuation using Equity Cash Flows discounted at the required return to equity (Ke). The WACC is neither a cost nor a required return: it is a weighted average of a cost and a required return. To refer to the WACC as the "cost of capital" may be misleading because it is not a cost. The paper includes 7 errors due to not remembering the definition of WACC and shows the relationship betwe...
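    A tiny numeric illustration of the weighted-average definition summarized above, using the standard textbook formula rather than anything specific to the paper; the capital structure, rates, and tax rate are made up.

```python
# Standard textbook weighted average (not taken from the paper): WACC weights the
# required return to equity Ke and the after-tax cost of debt Kd by the market
# values of equity E and debt D.
def wacc(E, D, Ke, Kd, tax_rate):
    return (E * Ke + D * Kd * (1.0 - tax_rate)) / (E + D)

# e.g. 60% equity at 10%, 40% debt at 5%, 30% tax rate
print(f"WACC = {wacc(60.0, 40.0, 0.10, 0.05, 0.30):.3%}")   # about 7.4%
```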

  10. Wavefront error sensing for LDR

    Science.gov (United States)

    Tubbs, Eldred F.; Glavich, T. A.

    1988-01-01

    Wavefront sensing is a significant aspect of the LDR control problem and requires attention at an early stage of the control system definition and design. A combination of a Hartmann test for wavefront slope measurement and an interference test for piston errors of the segments was examined and is presented as a point of departure for further discussion. The assumption is made that the wavefront sensor will be used for initial alignment and periodic alignment checks but that it will not be used during scientific observations. The Hartmann test and the interferometric test are briefly examined.

  11. Human decision error (HUMDEE) trees

    International Nuclear Information System (INIS)

    Ostrom, L.T.

    1993-01-01

Graphical presentations of human actions in incident and accident sequences have been used for many years. However, for the most part, human decision making has been underrepresented in these trees. This paper presents a method of incorporating the human decision process into graphical presentations of incident/accident sequences. This presentation is in the form of logic trees. These trees are called Human Decision Error Trees or HUMDEE for short. The primary benefit of HUMDEE trees is that they graphically illustrate what else the individuals involved in the event could have done to prevent either the initiation or continuation of the event. HUMDEE trees also present the alternate paths available at the operator decision points in the incident/accident sequence. This is different from the Technique for Human Error Rate Prediction (THERP) event trees. There are many uses of these trees. They can be used for incident/accident investigations to show what other courses of action were available and for training operators. The trees also have a consequence component so that not only the decision but also its consequence can be explored.

  12. Apology for errors: whose responsibility?

    Science.gov (United States)

    Leape, Lucian L

    2012-01-01

    When things go wrong during a medical procedure, patients' expectations are fairly straightforward: They expect an explanation of what happened, an apology if an error was made, and assurance that something will be done to prevent it from happening to another patient. Patients have a right to full disclosure; it is also therapeutic in relieving their anxiety. But if they have been harmed by our mistake, they also need an apology to maintain trust. Apology conveys respect, mutual suffering, and responsibility. Meaningful apology requires that the patient's physician and the institution both take responsibility, show remorse, and make amends. As the patient's advocate, the physician must play the lead role. However, as custodian of the systems, the hospital has primary responsibility for the mishap, for preventing that error in the future, and for compensation. The responsibility for making all this happen rests with the CEO. The hospital must have policies and practices that ensure that every injured patient is treated the way we would want to be treated ourselves--openly, honestly, with compassion, and, when indicated, with an apology and compensation. To make that happen, hospitals need to greatly expand training of physicians and others, and develop support programs for patients and caregivers.

  13. Error exponents for entanglement concentration

    International Nuclear Information System (INIS)

    Hayashi, Masahito; Koashi, Masato; Matsumoto, Keiji; Morikoshi, Fumiaki; Winter, Andreas

    2003-01-01

    Consider entanglement concentration schemes that convert n identical copies of a pure state into a maximally entangled state of a desired size with success probability being close to one in the asymptotic limit. We give the distillable entanglement, the number of Bell pairs distilled per copy, as a function of an error exponent, which represents the rate of decrease in failure probability as n tends to infinity. The formula fills the gap between the least upper bound of distillable entanglement in probabilistic concentration, which is the well-known entropy of entanglement, and the maximum attained in deterministic concentration. The method of types in information theory enables the detailed analysis of the distillable entanglement in terms of the error rate. In addition to the probabilistic argument, we consider another type of entanglement concentration scheme, where the initial state is deterministically transformed into a (possibly mixed) final state whose fidelity to a maximally entangled state of a desired size converges to one in the asymptotic limit. We show that the same formula as in the probabilistic argument is valid for the argument on fidelity by replacing the success probability with the fidelity. Furthermore, we also discuss entanglement yield when optimal success probability or optimal fidelity converges to zero in the asymptotic limit (strong converse), and give the explicit formulae for those cases

  14. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

An important use of measurement error models is to correct regression models for bias due to covariate measurement error. Most measurement error models assume that the observed error-prone covariate (W) is a linear function of the unobserved true covariate (X) plus other covariates (Z) in the regression model. In this paper, we consider models for W that include interactions between X and Z. We derive the conditional distribution of

  15. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors’ research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to core design, adaptations are needed, since challenge is an important factor for fun and, from the perspective of Human Error, challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that applying Human Error to game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  16. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  17. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language with A1 (beginner) level and completed the process by taking C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to the linguistic and intralingual developmental errors

  18. Field errors in hybrid insertion devices

    International Nuclear Information System (INIS)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed

  19. Field errors in hybrid insertion devices

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, R.D. [Lawrence Berkeley Lab., CA (United States)

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  20. Error Covariance Estimation of Mesoscale Data Assimilation

    National Research Council Canada - National Science Library

    Xu, Qin

    2005-01-01

    The goal of this project is to explore and develop new methods of error covariance estimation that will provide necessary statistical descriptions of prediction and observation errors for mesoscale data assimilation...

  1. Spectrum of diagnostic errors in radiology

    OpenAIRE

    Pinto, Antonio; Brunese, Luca

    2010-01-01

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff’s complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors ...

  2. Improving Type Error Messages in OCaml

    OpenAIRE

    Charguéraud , Arthur

    2015-01-01

    International audience; Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently-precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise ...

  3. Different grades MEMS accelerometers error characteristics

    Science.gov (United States)

    Pachwicewicz, M.; Weremczuk, J.

    2017-08-01

    The paper presents calibration effects of two different MEMS accelerometers of different price and quality grades and discusses different accelerometers errors types. The calibration for error determining is provided by reference centrifugal measurements. The design and measurement errors of the centrifuge are discussed as well. It is shown that error characteristics of the sensors are very different and it is not possible to use simple calibration methods presented in the literature in both cases.

  4. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    The naming game simulates the process of naming an objective by a population of agents organized in a certain communication network topology. By pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study the naming game with communication errors during pair-wise conversations, where errors are represented by error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed....
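
    As a minimal, hypothetical sketch of a naming game in which pair-wise conversations are corrupted at a fixed error rate, the code below uses a fully connected population and models an error as the listener receiving a brand-new name. The population size, error mechanism, and parameter values are assumptions for illustration and do not reproduce the NGLE model itself.

```python
# Minimal naming game sketch with a uniform communication error rate (illustrative only).
import random

def naming_game(n_agents=50, error_rate=0.05, steps=20_000, seed=1):
    random.seed(seed)
    inventories = [set() for _ in range(n_agents)]
    next_name = 0
    for _ in range(steps):
        speaker, listener = random.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(next_name)           # speaker invents a new name
            next_name += 1
        name = random.choice(tuple(inventories[speaker]))
        if random.random() < error_rate:                  # communication error:
            name = next_name                              # listener hears a corrupted name
            next_name += 1
        if name in inventories[listener]:                 # success: both collapse to the name
            inventories[speaker] = {name}
            inventories[listener] = {name}
        else:                                             # failure: listener stores the name
            inventories[listener].add(name)
    return len({w for inv in inventories for w in inv})

print(naming_game())   # number of distinct names left in the population
```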

  5. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  6. Interpreting the change detection error matrix

    NARCIS (Netherlands)

    Oort, van P.A.J.

    2007-01-01

    Two different matrices are commonly reported in assessment of change detection accuracy: (1) single date error matrices and (2) binary change/no change error matrices. The third, less common form of reporting, is the transition error matrix. This paper discusses the relation between these matrices.

  7. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    on basis of reliability profiles for bridges without human errors are extended to include bridges with human errors. The first rehabilitation distributions for bridges without and with human errors are combined into a joint first rehabilitation distribution. The methodology presented is illustrated...... for reinforced concrete bridges....

  8. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  9. On-Error Training (Book Excerpt).

    Science.gov (United States)

    Fukuda, Ryuji

    1985-01-01

    This excerpt from "Managerial Engineering: Techniques for Improving Quality and Productivity in the Workplace" describes the development, objectives, and use of On-Error Training (OET), a method which trains workers to learn from their errors. Also described is New Joharry's Window, a performance-error data analysis technique used in…

  10. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  11. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  12. Valuing Errors for Learning: Espouse or Enact?

    Science.gov (United States)

    Grohnert, Therese; Meuwissen, Roger H. G.; Gijselaers, Wim H.

    2017-01-01

    Purpose: This study aims to investigate how organisations can discourage covering up and instead encourage learning from errors through a supportive learning from error climate. In explaining professionals' learning from error behaviour, this study distinguishes between espoused (verbally expressed) and enacted (behaviourally expressed) values…

  13. Improved Landau gauge fixing and discretisation errors

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.; Richards, D.G.; Williams, A.G.

    2000-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition displays the secondary benefit of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition

  14. Acoustic Evidence for Phonologically Mismatched Speech Errors

    Science.gov (United States)

    Gormley, Andrea

    2015-01-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…

  15. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors quadratic with the sources or, equivalently, with the rms β-beating. However, random errors do not have a systematic effect on the tune.

  16. Jonas Olson's Evidence for Moral Error Theory

    NARCIS (Netherlands)

    Evers, Daan

    2016-01-01

    Jonas Olson defends a moral error theory in (2014). I first argue that Olson is not justified in believing the error theory as opposed to moral nonnaturalism in his own opinion. I then argue that Olson is not justified in believing the error theory as opposed to moral contextualism either (although

  17. List of Error-Prone Abbreviations, Symbols, and Dose Designations

    Science.gov (United States)

    Abbreviations, symbols, and dose designations which have been reported through the ISMP National Medication Errors Reporting Program (ISMP MERP) as being frequently misinterpreted ...

  18. A three-step maximum a posteriori probability method for InSAR data inversion of coseismic rupture with application to the 14 April 2010 Mw 6.9 Yushu, China, earthquake

    Science.gov (United States)

    Sun, Jianbao; Shen, Zheng-Kang; Bürgmann, Roland; Wang, Min; Chen, Lichun; Xu, Xiwei

    2013-08-01

    We develop a three-step maximum a posteriori probability method for coseismic rupture inversion, which aims at maximizing the posterior probability density function (PDF) of elastic deformation solutions of earthquake rupture. The method originates from the fully Bayesian inversion and mixed linear-nonlinear Bayesian inversion methods and shares the same posterior PDF with them, while overcoming difficulties with convergence when large numbers of low-quality data are used and greatly improving the convergence rate using optimization procedures. A highly efficient global optimization algorithm, adaptive simulated annealing, is used to search for the maximum of the posterior PDF ("mode" in statistics) in the first step. The second-step inversion approaches the "true" solution further using the Monte Carlo inversion technique with positivity constraints, with all parameters obtained from the first step as the initial solution. Then slip artifacts are eliminated from the slip models in the third step using the same procedure as the second step, with fixed fault geometry parameters. We first design a fault model with a 45° dip angle and oblique slip, and produce corresponding synthetic interferometric synthetic aperture radar (InSAR) data sets to validate the reliability and efficiency of the new method. We then apply this method to InSAR data inversion for the coseismic slip distribution of the 14 April 2010 Mw 6.9 Yushu, China, earthquake. Our preferred slip model is composed of three segments, with most of the slip occurring within 15 km depth and the maximum slip reaching 1.38 m at the surface. The seismic moment released is estimated to be 2.32e+19 Nm, consistent with the seismic estimate of 2.50e+19 Nm.
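
    The sketch below illustrates, under strong simplifying assumptions, the two optimization ideas mentioned in the abstract: a global annealing search over a nonlinear fault parameter, with a positivity-constrained least-squares solve for slip at each trial geometry. The toy forward model, the single 'dip' parameter, and the Gaussian misfit are placeholders and do not reproduce the authors' parameterization or priors.

```python
# Hedged sketch: anneal over a nonlinear geometry parameter while slip is solved with
# a positivity constraint; G(dip) @ slip = d is a toy stand-in for the elastic model.
import numpy as np
from scipy.optimize import dual_annealing, nnls

rng = np.random.default_rng(0)
d = rng.normal(size=200)                       # stand-in for InSAR observations

def greens_matrix(dip, n_patches=20, n_obs=200):
    # Placeholder Green's functions; a real study would use elastic dislocation kernels.
    x = np.linspace(0.0, 1.0, n_obs)[:, None]
    c = np.linspace(0.0, 1.0, n_patches)[None, :]
    return np.exp(-((x - c) ** 2) / (0.01 + 0.05 * np.sin(np.radians(dip)) ** 2))

def neg_log_posterior(params):
    (dip,) = params
    slip, resid = nnls(greens_matrix(dip), d)  # slip >= 0 for the fixed trial geometry
    return 0.5 * resid ** 2                    # Gaussian misfit, flat priors assumed

# Step-1 idea: a global annealing search locates the mode of the posterior PDF.
result = dual_annealing(neg_log_posterior, bounds=[(10.0, 80.0)], seed=1)
best_dip = result.x[0]
best_slip, _ = nnls(greens_matrix(best_dip), d)
print(best_dip, best_slip.max())
```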

  19. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

    Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to

  20. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  1. Medication errors: an overview for clinicians.

    Science.gov (United States)

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  2. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005 (General requirements for the competence of testing and calibration laboratories) during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review, to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  3. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish an 'error management process for power stations' for systematizing activities for human error prevention and for fostering continuous improvement of these activities. The following are proposed by deriving concepts concerning the error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an entire picture of the error management process that facilitates four functions requisite for managing human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizing human error prevention tools based on identified problems); detailed steps for each activity (i.e. developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and examples of items for identifying gaps between current and desired levels of execution and outputs of each activity; and stages for introducing and establishing the above proposed error management process at a power station. By giving shape to the above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  4. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  5. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  6. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
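
    As a concrete but hypothetical rendering of that idea, the sketch below evaluates the data-misfit error on only a random subset of rays (rows of an assumed linear system A x = b) inside a conjugate-gradient-style update; the system, subset size, and Fletcher-Reeves formula are illustrative assumptions rather than the method claimed in the patent.

```python
# Conjugate-gradient-style iteration using an approximate error computed on a ray subset.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 60))       # one row per ray
b = A @ rng.normal(size=60)          # synthetic ray sums
x = np.zeros(60)
direction, prev_grad = None, None

for _ in range(50):
    rays = rng.choice(A.shape[0], size=100, replace=False)   # selected subset of rays
    As, bs = A[rays], b[rays]
    grad = As.T @ (As @ x - bs)                # gradient of the approximate error
    if direction is None:
        direction = -grad
    else:                                      # Fletcher-Reeves conjugate direction
        beta = (grad @ grad) / (prev_grad @ prev_grad)
        direction = -grad + beta * direction
    Ad = As @ direction                        # exact line search for the quadratic
    step = -(grad @ direction) / (Ad @ Ad)     # approximate error along the direction
    x = x + step * direction
    prev_grad = grad

print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))   # relative residual over all rays
```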

  7. Aerosol Retrieval Sensitivity and Error Analysis for the Cloud and Aerosol Polarimetric Imager on Board TanSat: The Effect of Multi-Angle Measurement

    Directory of Open Access Journals (Sweden)

    Xi Chen

    2017-02-01

    Full Text Available Aerosol scattering is an important source of error in CO2 retrievals from satellite. This paper presents an analysis of aerosol information content from the Cloud and Aerosol Polarimetric Imager (CAPI) onboard the Chinese Carbon Dioxide Observation Satellite (TanSat), to be launched in 2016. Based on optimal estimation theory, aerosol information content is quantified from radiance and polarization observed by CAPI in terms of the degrees of freedom for the signal (DFS). A linearized vector radiative transfer model is used with a linearized Mie code to simulate observation and sensitivity (or Jacobians) with respect to aerosol parameters. In satellite nadir mode, the DFS for aerosol optical depth is the largest, but for mode radius, it is only 0.55. Observation geometry is found to affect aerosol DFS based on the aerosol scattering phase function from the comparison between different viewing zenith angles or solar zenith angles. When TanSat is operated in target mode, we note that multi-angle retrieval represented by three along-track measurements provides an additional 0.31 DFS on average, mainly from mode radius. When adding another two measurements, the a posteriori error decreases by another 2%–6%. The correlation coefficients between retrieved parameters show that aerosol is strongly correlated with surface reflectance, but multi-angle retrieval can weaken this correlation.

  8. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Abstract Medication errors are common causes of patient morbidity and mortality. They also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, they need attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication error alone might not be successful until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively either during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Similar developments, along with vigilant doctors, a safe workplace culture and organizational support, can all together help prevent these errors.

  9. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that also maintenance errors, which have taken place earlier in plant history, may have an impact on the severity of a disturbance, e.g. if they disable safety related equipment. Especially common cause and other dependent failures of safety systems may significantly contribute to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports which cover two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors since they are generally the most serious ones. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. A more simple treatment was given to maintenance-related single errors. The results were shown as a distribution of errors among operating states i.a. as regards the following matters: in what operational state the errors were committed and detected; in what operational and working condition the errors were detected, and what component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  10. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7°). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10°), as typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are, by nearly a factor of 2, larger for absorbing particles than for nonabsorbing particles because for large absorbing particles most of the scattered light is due to diffraction as transmission is suppressed. Nephelometer calibration procedures are also discussed as they influence the effective truncation error

  11. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  12. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware errors) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  13. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  14. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  15. Practical, Reliable Error Bars in Quantum Tomography

    OpenAIRE

    Faist, Philippe; Renner, Renato

    2015-01-01

    Precise characterization of quantum devices is usually achieved with quantum tomography. However, most methods which are currently widely used in experiments, such as maximum likelihood estimation, lack a well-justified error analysis. Promising recent methods based on confidence regions are difficult to apply in practice or yield error bars which are unnecessarily large. Here, we propose a practical yet robust method for obtaining error bars. We do so by introducing a novel representation of...

  16. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft errors effect in advanced electronics, including the fundamental physical mechanisms of radiation induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft error at various levels (including physical, electrical, netlist, event driven, RTL, and system level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing, s

  17. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approach and procedures frequently used in the practice of radioactive measurements. Statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error, classification as systematic and random errors. Statistical fundamentals: probability theories, population distributions, Bernoulli, Poisson, Gauss, t-test distribution, χ² test, error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test
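
    As a small worked example of the counting-statistics error propagation mentioned above (an illustration, not taken from the report), the following computes a net count rate and its standard deviation assuming Poisson statistics, where the variance of a recorded count equals the count itself.

```python
# Poisson counting statistics: net rate R = Ng/tg - Nb/tb with propagated uncertainty.
import math

def net_rate_with_error(gross_counts, t_gross, bkg_counts, t_bkg):
    rate = gross_counts / t_gross - bkg_counts / t_bkg
    # Independent Poisson errors propagate as var(R) = Ng/tg^2 + Nb/tb^2.
    sigma = math.sqrt(gross_counts / t_gross**2 + bkg_counts / t_bkg**2)
    return rate, sigma

rate, sigma = net_rate_with_error(gross_counts=4500, t_gross=60.0,
                                  bkg_counts=900, t_bkg=60.0)
print(f"net rate = {rate:.2f} +/- {sigma:.2f} counts/s")   # 60.00 +/- 1.22
```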

  18. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.

  19. Neurochemical enhancement of conscious error awareness.

    Science.gov (United States)

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  20. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces rather than a primary deficit in inhibition the preferred account for intrusion errors in free recall.

  1. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  2. Human error mechanisms in complex work environments

    International Nuclear Information System (INIS)

    Rasmussen, Jens; Danmarks Tekniske Hoejskole, Copenhagen)

    1988-01-01

    Human error taxonomies have been developed from analysis of industrial incident reports as well as from psychological experiments. In this paper the results of the two approaches are reviewed and compared. It is found, in both cases, that a fairly small number of basic psychological mechanisms will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations. The implications for system safety are briefly mentioned, together with the implications for system design. (author)

  3. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  4. Learning from errors in super-resolution.

    Science.gov (United States)

    Tang, Yi; Yuan, Yuan

    2014-11-01

    A novel framework of learning-based super-resolution is proposed by employing the process of learning from the estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain. The sparsity of the estimation errors means that most of the estimation errors are small. The uncertainty of the estimation errors means that the location of a pixel with a larger estimation error is random. Noticing this prior information about the estimation errors, a nonlinear boosting process of learning from these estimation errors is introduced into the general framework of learning-based super-resolution. Within the novel framework of super-resolution, a low-rank decomposition technique is used to share the information of different super-resolution estimations and to remove the sparse estimation errors from different learning algorithms or training samples. The experimental results show the effectiveness and the efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
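
    A hedged sketch of the low-rank idea follows: several estimates of the same image are stacked as columns, the leading singular component is kept as the content shared across estimates, and the remainder is treated as the sparse estimation errors. The synthetic data and the rank-1 truncation are assumptions for illustration, not the paper's algorithm.

```python
# Illustrative low-rank/sparse split of several super-resolution estimates of one image.
import numpy as np

rng = np.random.default_rng(0)
truth = rng.random((32, 32))                     # stand-in for the true high-res image

estimates = []
for _ in range(5):                               # five simulated "learning-based" estimates
    est = truth.copy()
    mask = rng.random(truth.shape) < 0.02        # sparse, randomly located errors
    est[mask] += rng.normal(0.0, 0.5, int(mask.sum()))
    estimates.append(est.ravel())

M = np.column_stack(estimates)                   # pixels x estimates matrix
U, s, Vt = np.linalg.svd(M, full_matrices=False)
low_rank = s[0] * np.outer(U[:, 0], Vt[0])       # shared content across the estimates
residual = M - low_rank                          # dominated by the sparse estimation errors
print(np.abs(residual).mean(), np.abs(residual).max())
```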

  5. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Full Text Available Probes for CNC machine tools, as every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but it is not a widely used procedure. In this paper, the share of systematic errors and random errors in total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it shows that in the case of kinematic probes commonly specified unidirectional repeatability is significantly better than 2D performance. However, in the case of more precise strain-gauge probe systematic errors are of the same order as random errors, which means that errors correction or compensation, in this case, would not yield any significant benefits.

  6. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  7. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, the choice depending on the range in which the argument x lies (the outermost region being x .GE. 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function by using the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
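    The caution in the last paragraph can be illustrated numerically. The sketch below (Python, using the standard library's math.erf and math.erfc rather than the ERF/ERFC routines themselves, so it is only an analogy to the original code) shows how computing erfc(x) as 1.0-erf(x) loses significance for large arguments, while the directly computed complementary error function does not:

        import math

        for x in (0.5, 2.0, 5.0, 10.0):
            direct = math.erfc(x)             # complementary error function computed directly
            via_identity = 1.0 - math.erf(x)  # erfc(x) via the identity erfc(x) = 1 - erf(x)
            print(f"x = {x:5.1f}   erfc(x) = {direct:.3e}   1 - erf(x) = {via_identity:.3e}")

        # For x = 10 the identity returns 0.0 (total loss of significance),
        # whereas math.erfc(10) is about 2.1e-45.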

  8. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  9. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these

  10. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency and communication issues between radiologists and physicians are also discussed.

  11. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  12. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

    textabstractWe examine factors affecting the auditor’s willingness to report their own or their peers’ self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  13. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  14. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user's position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational; the next two, also global, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user's position is affected by three main factors: the accuracy of each satellite position, the accuracy of the pseudorange measurement, and the satellite geometry. The user's position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. This pseudorange error is decomposed into two components: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error, UEE. Detailed analyses of URE, UEE, UERE and DOP coefficients, and the changes of DOP coefficients on different days, are presented in this paper.
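    The relation summarized above, position error as the product of the pseudorange error budget and the geometry factor, can be sketched with illustrative numbers (the UERE and DOP values below are assumptions chosen only for this example, not figures from the record):

        # 1-sigma horizontal position error estimated as HDOP * UERE
        uere_m = 6.0   # assumed user equivalent range error, metres (1 sigma)
        hdop = 1.5     # assumed horizontal dilution of precision
        horizontal_error_m = hdop * uere_m
        print(f"Estimated 1-sigma horizontal position error: {horizontal_error_m:.1f} m")  # 9.0 m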

  15. Volterra Filtering for ADC Error Correction

    Directory of Open Access Journals (Sweden)

    J. Saliga

    2001-09-01

    Full Text Available Dynamic non-linearity of analog-to-digital converters (ADC) contributes significantly to the distortion of digitized signals. This paper introduces a new, effective method for compensating such distortion based on the application of Volterra filtering. Considering an a-priori error model of the ADC allows an efficient inverse Volterra model to be found for error correction. The efficiency of the proposed method is demonstrated on experimental results.
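    For readers unfamiliar with the technique, a discrete second-order Volterra filter of the kind referred to here can be sketched as follows; the test signal and the correction kernels are placeholders invented for the illustration, not the inverse model identified in the paper:

        import numpy as np

        def volterra2(x, h1, h2):
            """Causal 2nd-order Volterra filter: linear kernel h1 plus quadratic kernel h2."""
            n_taps = len(h1)
            y = np.zeros(len(x))
            for n in range(len(x)):
                # most recent n_taps samples, newest first (zero-padded at the start)
                w = np.array([x[n - i] if n - i >= 0 else 0.0 for i in range(n_taps)])
                y[n] = h1 @ w + w @ h2 @ w
            return y

        # Toy "ADC output" with a small quadratic distortion term
        t = np.arange(200)
        s = np.sin(2 * np.pi * 0.01 * t)
        x = s + 0.05 * s ** 2
        h1 = np.array([1.0, 0.0, 0.0])           # pass-through linear kernel (assumed)
        h2 = np.zeros((3, 3)); h2[0, 0] = -0.05  # approximately cancels the quadratic term (assumed)
        y = volterra2(x, h1, h2)                 # corrected signal, close to s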

  16. Errors and untimely radiodiagnosis of occupational diseases

    International Nuclear Information System (INIS)

    Sokolik, L.I.; Shkondin, A.N.; Sergienko, N.S.; Doroshenko, A.N.; Shumakov, A.V.

    1987-01-01

    Most errors in the diagnosis of occupational diseases were due to hyperdiagnosis (37%) or occurred because data from dynamic clinico-roentgenological examination were not considered (23%). Defects in the organization of prophylactic fluorography result in untimely diagnosis of dust-induced occupational diseases. Errors also occurred because working conditions were not always taken into account and atypical development and course were not always analyzed.

  17. Comparing classifiers for pronunciation error detection

    NARCIS (Netherlands)

    Strik, H.; Truong, K.; Wet, F. de; Cucchiarini, C.

    2007-01-01

    Providing feedback on pronunciation errors in computer assisted language learning systems requires that pronunciation errors be detected automatically. In the present study we compare four types of classifiers that can be used for this purpose: two acoustic-phonetic classifiers (one of which employs

  18. Comparison of Prediction-Error-Modelling Criteria

    DEFF Research Database (Denmark)

    Jørgensen, John Bagterp; Jørgensen, Sten Bay

    2007-01-01

    Single and multi-step prediction-error-methods based on the maximum likelihood and least squares criteria are compared. The prediction-error methods studied are based on predictions using the Kalman filter and Kalman predictors for a linear discrete-time stochastic state space model, which is a r...

  19. Error and uncertainty in scientific practice

    NARCIS (Netherlands)

    Boumans, M.; Hon, G.; Petersen, A.C.

    2014-01-01

    Assessment of error and uncertainty is a vital component of both natural and social science. Empirical research involves dealing with all kinds of errors and uncertainties, yet there is significant variance in how such results are dealt with. Contributors to this volume present case studies of

  20. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, M.; Teulings, C.N.; Alessie, R.

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  1. Measurement error in education and growth regressions

    NARCIS (Netherlands)

    Portela, Miguel; Teulings, Coen; Alessie, R.

    2004-01-01

    The perpetual inventory method used for the construction of education data per country leads to systematic measurement error. This paper analyses the effect of this measurement error on GDP regressions. There is a systematic difference in the education level between census data and observations

  2. Position Error Covariance Matrix Validation and Correction

    Science.gov (United States)

    Frisbee, Joe, Jr.

    2016-01-01

    In order to calculate operationally accurate collision probabilities, the position error covariance matrices predicted at times of closest approach must be sufficiently accurate representations of the position uncertainties. This presentation will discuss why the Gaussian distribution is a reasonable expectation for the position uncertainty and how this assumed distribution type is used in the validation and correction of position error covariance matrices.

  3. Opportunistic Error Correction for WLAN Applications

    NARCIS (Netherlands)

    Shao, X.; Schiphorst, Roelof; Slump, Cornelis H.

    2008-01-01

    The current error correction layer of IEEE 802.11a WLAN is designed for worst case scenarios, which often do not apply. In this paper, we propose a new opportunistic error correction layer based on Fountain codes and a resolution adaptive ADC. The key part in the new proposed system is that only

  4. 40 CFR 73.37 - Account error.

    Science.gov (United States)

    2010-07-01

    ... 40 Protection of Environment 16 2010-07-01 2010-07-01 false Account error. 73.37 Section 73.37 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) SULFUR DIOXIDE ALLOWANCE SYSTEM Allowance Tracking System § 73.37 Account error. The Administrator may, at his or her sole...

  5. Error Discounting in Probabilistic Category Learning

    Science.gov (United States)

    Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.

    2011-01-01

    The assumption in some current theories of probabilistic categorization is that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report 2 probabilistic-categorization experiments in which we investigated error…

  6. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  7. Automatic error compensation in dc amplifiers

    International Nuclear Information System (INIS)

    Longden, L.L.

    1976-01-01

    When operational amplifiers are exposed to high levels of neutron fluence or total ionizing dose, significant changes may be observed in input voltages and currents. These changes may produce large errors at the output of direct-coupled amplifier stages. Therefore, the need exists for automatic compensation techniques. However, previously introduced techniques compensate only for errors in the main amplifier and neglect the errors induced by the compensating circuitry. In this paper, the techniques introduced compensate not only for errors in the main operational amplifier, but also for errors induced by the compensation circuitry. Included in the paper is a theoretical analysis of each compensation technique, along with advantages and disadvantages of each. Important design criteria and information necessary for proper selection of semiconductor switches will also be included. Introduced in this paper will be compensation circuitry for both resistive and capacitive feedback networks

  8. Heuristics and Cognitive Error in Medical Imaging.

    Science.gov (United States)

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  9. El error en el delito imprudente

    Directory of Open Access Journals (Sweden)

    Miguel Angel Muñoz García

    2011-12-01

    Full Text Available The theory of error in negligent offences is a thorny and controversial topic in criminal-law doctrine: there are in fact very few references to it, and no reasonable consensus has been reached. Starting from an analysis of the doctrinal structure of the negligent offence, in which the objective duty of care stands out as the element of the offence on which the error falls, and from the different doctrinal positions that defend the applicability of mistake of type (error de tipo) and mistake of prohibition (error de prohibición), the viability of the latter is argued for, on grounds of both doctrine and criminal policy, with the infringement of the objective duty of care as a consequence of the error remaining a question to be analysed at the level of culpability.

  10. Error Resilient Video Compression Using Behavior Models

    Directory of Open Access Journals (Sweden)

    Jacco R. Taal

    2004-03-01

    Full Text Available Wireless and Internet video applications are inherently subjected to bit errors and packet errors, respectively. This is especially so if constraints on the end-to-end compression and transmission latencies are imposed. Therefore, it is necessary to develop methods to optimize the video compression parameters and the rate allocation of these applications that take into account residual channel bit errors. In this paper, we study the behavior of a predictive (interframe) video encoder and model the encoder's behavior using only the statistics of the original input data and of the underlying channel prone to bit errors. The resulting data-driven behavior models are then used to carry out group-of-pictures partitioning and to control the rate of the video encoder in such a way that the overall quality of the decoded video with compression and channel errors is optimized.

  11. Telemetry location error in a forested habitat

    Science.gov (United States)

    Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.

    1989-01-01

    The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters, from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.

  12. The District Nursing Clinical Error Reduction Programme.

    Science.gov (United States)

    McGraw, Caroline; Topping, Claire

    2011-01-01

    The District Nursing Clinical Error Reduction (DANCER) Programme was initiated in NHS Islington following an increase in the number of reported medication errors. The objectives were to reduce the actual degree of harm and the potential risk of harm associated with medication errors and to maintain the existing positive reporting culture, while robustly addressing performance issues. One hundred medication errors reported in 2007/08 were analysed using a framework that specifies the factors that predispose to adverse medication events in domiciliary care. Various contributory factors were identified and interventions were subsequently developed to address poor drug calculation and medication problem-solving skills and incorrectly transcribed medication administration record charts. Follow up data were obtained at 12 months and two years. The evaluation has shown that although medication errors do still occur, the programme has resulted in a marked shift towards a reduction in the associated actual degree of harm and the potential risk of harm.

  13. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production...... of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only handful studies have been made either to define the ‘error’ in a pedagogically insightful way or to empirically investigate...

  14. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Full Text Available Background: Medication errors arising for different reasons may alter the outcome of any patient, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died due to a lethal medication error. Unfortunately, these types of errors are not rare and have preventable causes, including a lack of suitable and sufficient training and practice for medical students and failures in the medical educational curriculum. Conclusion: Some important causes are discussed here because their consequences can be tremendous. We found that most of them are easily preventable. If prescribers are aware of the method of use, complications, dosage and contraindications of drugs, most of these fatal errors can be minimized.

  15. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental to our studies; an adequate selection is as important as an adequate description. Objective: to determine the number of errors in a sample of references found in Peruvian medical journals. We reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 types of errors; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references during the editorial process, as well as extending the discussion of this topic. Keywords: references, periodicals, research, bibliometrics.

  16. A qualitative description of human error

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influencing factors. For an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event focuses on the erroneous action and the unfavorable result. Based on the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action, and a human erroneous action may be generated at any stage of this process. Natural ways of classifying human errors are presented. Human performance influencing factors, including personal, organizational and environmental factors, are also listed.

  17. A qualitative description of human error

    Energy Technology Data Exchange (ETDEWEB)

    Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation. Insight and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influencing factors. For an operating reactor, human error is defined as a task-human-machine mismatch, and a human error event focuses on the erroneous action and the unfavorable result. Based on the time available for performing a task, operations are divided into time-limited and time-open; the HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action, and a human erroneous action may be generated at any stage of this process. Natural ways of classifying human errors are presented. Human performance influencing factors, including personal, organizational and environmental factors, are also listed.

  18. A memory of errors in sensorimotor learning.

    Science.gov (United States)

    Herzfeld, David J; Vaswani, Pavan A; Marko, Mollie K; Shadmehr, Reza

    2014-09-12

    The current view of motor learning suggests that when we revisit a task, the brain recalls the motor commands it previously learned. In this view, motor memory is a memory of motor commands, acquired through trial-and-error and reinforcement. Here we show that the brain controls how much it is willing to learn from the current error through a principled mechanism that depends on the history of past errors. This suggests that the brain stores a previously unknown form of memory, a memory of errors. A mathematical formulation of this idea provides insights into a host of puzzling experimental data, including savings and meta-learning, demonstrating that when we are better at a motor task, it is partly because the brain recognizes the errors it experienced before. Copyright © 2014, American Association for the Advancement of Science.

  19. Common Errors in Ecological Data Sharing

    Directory of Open Access Journals (Sweden)

    Robert B. Cook

    2013-04-01

    Full Text Available Objectives: (1) to identify common errors in data organization and metadata completeness that would preclude a "reader" from being able to interpret and re-use the data for a new purpose; and (2) to develop a set of best practices derived from these common errors that would guide researchers in creating more usable data products that could be readily shared, interpreted, and used. Methods: We used directed qualitative content analysis to assess and categorize data and metadata errors identified by peer reviewers of data papers published in the Ecological Society of America's (ESA) Ecological Archives. Descriptive statistics provided the relative frequency of the errors identified during the peer review process. Results: There were seven overarching error categories: Collection & Organization, Assure, Description, Preserve, Discover, Integrate, and Analyze/Visualize. These categories represent errors researchers regularly make at each stage of the Data Life Cycle. Collection & Organization and Description errors were some of the most common errors, both of which occurred in over 90% of the papers. Conclusions: Publishing data for sharing and reuse is error prone, and each stage of the Data Life Cycle presents opportunities for mistakes. The most common errors occurred when the researcher did not provide adequate metadata to enable others to interpret and potentially re-use the data. Fortunately, there are ways to minimize these mistakes through carefully recording all details about study context, data collection, QA/QC, and analytical procedures from the beginning of a research project and then including this descriptive information in the metadata.

  20. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  1. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or 235U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, 235U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation over uncorrelated primary error sources, as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and 235U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
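    The core of the methodology, a first-order Taylor (propagation-of-variance) approximation with uncorrelated error sources combined in quadrature, can be illustrated for a single hypothetical item; all numbers below are invented for the illustration and are not taken from the exercise:

        import math

        # Hypothetical item: 235U mass = net weight * uranium concentration * 235U enrichment
        weight_kg = 120.0      # assumed net weight
        conc = 0.85            # assumed uranium mass fraction
        enrich = 0.0072        # assumed 235U mass fraction of the uranium

        # Assumed relative standard uncertainties of the three measurements
        rel_u = {"weight": 0.001, "concentration": 0.004, "enrichment": 0.002}

        u235_kg = weight_kg * conc * enrich
        # First-order Taylor expansion of a product: relative variances add in quadrature
        rel_var = sum(u ** 2 for u in rel_u.values())
        sigma_kg = u235_kg * math.sqrt(rel_var)
        print(f"{u235_kg:.4f} kg 235U +/- {sigma_kg:.5f} kg (1 sigma)")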

  2. Repeated speech errors: evidence for learning.

    Science.gov (United States)

    Humphreys, Karin R; Menzies, Heather; Lake, Johanna K

    2010-11-01

    Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological speech error in the study phase of the experiment (e.g. saying "beg pet" in place of "peg bet") they were over four times as likely to make an error on that same item several minutes later at test. A pseudo-error condition demonstrated that the effect is not simply due to a propensity for speakers to repeat phonological forms, regardless of whether or not they have been made in error. That is, saying "beg pet" correctly at study did not induce speakers to say "beg pet" in error instead of "peg bet" at test. Instead, the effect appeared to be due to learning of the error pathway. Experiment 2 replicated this finding, but also showed that after 48 h, errors made at study were no longer more likely to reoccur. As well as providing constraints on the longevity of the effect, this provides strong evidence that the error reoccurrences observed are not due to item-specific difficulty that leads individual speakers to make habitual mistakes on certain items. Experiment 3 showed that the diminishment of the effect 48 h later is not due to specific extra practice at the task. We discuss how these results fit in with a larger view of language as a dynamic system that is constantly adapting in response to experience. Copyright © 2010 Elsevier B.V. All rights reserved.

  3. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

    Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  4. Error-information in tutorial documentation: Supporting users' errors to facilitate initial skill learning

    NARCIS (Netherlands)

    Lazonder, Adrianus W.; van der Meij, Hans

    1995-01-01

    Novice users make many errors when they first try to learn how to work with a computer program like a spreadsheet or wordprocessor. No matter how user-friendly the software or the training manual, errors can and will occur. The current view on errors is that they can be helpful or disruptive,

  5. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward
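    The two central points of this abstract, quadrature addition of common and uncorrelated errors and the fact that repetition reduces only the statistical component, can be made concrete with a minimal sketch (the numbers are assumed for illustration):

        import math

        sigma_stat = 2.0   # assumed uncorrelated ("statistical") uncertainty of one measurement
        sigma_sys = 1.5    # assumed common ("systematic") uncertainty shared by all measurements

        for n in (1, 4, 16, 100):
            # the statistical part shrinks as 1/sqrt(n); the common part does not
            total = math.sqrt(sigma_stat ** 2 / n + sigma_sys ** 2)
            print(f"n = {n:3d}   combined 1-sigma uncertainty = {total:.2f}")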

  6. Sources of medical error in refractive surgery.

    Science.gov (United States)

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  7. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey of human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributable to human error spanned the wide range of 20∼85%, with an average of 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error is 0∼0.5 cases/reactor-year, which has not varied much. Therefore, the proportion of the total attributable to human error has tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problem of organization and management has been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets for reducing human error are reported. (K.I.)

  8. Human error theory: relevance to nurse management.

    Science.gov (United States)

    Armitage, Gerry

    2009-03-01

    Describe, discuss and critically appraise human error theory and consider its relevance for nurse managers. Healthcare errors are a persistent threat to patient safety. Effective risk management and clinical governance depends on understanding the nature of error. This paper draws upon a wide literature from published works, largely from the field of cognitive psychology and human factors. Although the content of this paper is pertinent to any healthcare professional; it is written primarily for nurse managers. Error is inevitable. Causation is often attributed to individuals, yet causation in complex environments such as healthcare is predominantly multi-factorial. Individual performance is affected by the tendency to develop prepacked solutions and attention deficits, which can in turn be related to local conditions and systems or latent failures. Blame is often inappropriate. Defences should be constructed in the light of these considerations and to promote error wisdom and organizational resilience. Managing and learning from error is seen as a priority in the British National Health Service (NHS), this can be better achieved with an understanding of the roots, nature and consequences of error. Such an understanding can provide a helpful framework for a range of risk management activities.

  9. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of the power production or the return time of the investment. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  10. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment based on 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements the traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of the power production or the return time of the investment. The implementation of this method increases the reliability of techno-economic resource assessment studies.
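    A simplified version of the propagation described in these two records can be sketched by pushing a wind-speed error through a tabulated power curve; the power curve, operating point and 10% speed error below are assumptions for illustration, and the resulting power error depends strongly on where on the curve the turbine operates:

        import numpy as np

        # Assumed tabulated power curve of a generic turbine: wind speed (m/s) -> power (kW)
        speeds = np.array([3.0, 5.0, 7.0, 9.0, 11.0, 13.0, 15.0])
        powers = np.array([25.0, 180.0, 560.0, 1150.0, 1750.0, 2000.0, 2050.0])

        def power(v):
            return np.interp(v, speeds, powers)

        v = 8.0          # assumed measured wind speed, m/s
        dv = 0.10 * v    # assumed 10% wind-speed measurement error

        # First-order propagation: dP ~ P'(v) * dv, slope taken from the tabulated curve
        slope = (power(v + 0.1) - power(v - 0.1)) / 0.2
        dP = slope * dv
        print(f"P({v}) = {power(v):.0f} kW, propagated power error ~ {dP:.0f} kW")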

  11. The benefit of generating errors during learning.

    Science.gov (United States)

    Potts, Rosalind; Shanks, David R

    2014-04-01

    Testing has been found to be a powerful learning tool, but educators might be reluctant to make full use of its benefits for fear that any errors made would be harmful to learning. We asked whether testing could be beneficial to memory even during novel learning, when nearly all responses were errors, and where errors were unlikely to be related to either cues or targets. In 4 experiments, participants learned definitions for unfamiliar English words, or translations for foreign vocabulary, by generating a response and being given corrective feedback, by reading the word and its definition or translation, or by selecting from a choice of definitions or translations followed by feedback. In a final test of all words, generating errors followed by feedback led to significantly better memory for the correct definition or translation than either reading or making incorrect choices, suggesting that the benefits of generation are not restricted to correctly generated items. Even when information to be learned is novel, errorful generation may play a powerful role in potentiating encoding of corrective feedback. Experiments 2A, 2B, and 3 revealed, via metacognitive judgments of learning, that participants are strikingly unaware of this benefit, judging errorful generation to be a less effective encoding method than reading or incorrect choosing, when in fact it was better. Predictions reflected participants' subjective experience during learning. If subjective difficulty leads to more effort at encoding, this could at least partly explain the errorful generation advantage.

  12. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  13. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    This article focuses on the notion of medical error; its medical and legal aspects are considered. The need for legislative consolidation of the notion of «medical error» and for criteria for its legal assessment is substantiated. In writing the article we used the empirical method as well as general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the points of view of Ukrainian, European and American scholars. It is noted that the problem of medical errors has been known since ancient times and exists throughout the world: regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors are among the first five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. Since a person, his life and his health are recognized in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of poor-quality medical services harms the health, and sometimes the lives, of people; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must be consolidated in law and that the legal assessment of medical errors must be based on uniform principles enshrined in legislation and confirmed by judicial practice.

  14. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their role in industrial systems are mentioned. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and 'criteria for analyzability', which must be met by industrial systems so that a systematic analysis can be performed, are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  15. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for calculating the errors in dry matter values that occur when data are approximated with growth curves determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are also given for the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth of oat and maize plants are given. A critical analysis of the estimation of the obtained results is carried out. The usefulness of the joint application of statistical methods and error calculus in plant growth analysis is demonstrated.
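    For reference, the growth characteristics named in this record are conventionally defined as follows (standard definitions written in LaTeX notation, where W is total dry mass, A is leaf area and t is time; these are not the error formulae derived in the paper):

        GR  = \frac{dW}{dt}, \qquad
        RGR = \frac{1}{W}\,\frac{dW}{dt} = \frac{d(\ln W)}{dt}, \qquad
        ULR = \frac{1}{A}\,\frac{dW}{dt}, \qquad
        LAR = \frac{A}{W}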

  16. Fixturing error measurement and analysis using CMMs

    International Nuclear Information System (INIS)

    Wang, Y; Chen, X; Gindy, N

    2005-01-01

    The influence of the fixture on the errors of a machined surface can be very significant. The machined surface errors generated during machining can be measured using a coordinate measurement machine (CMM), through the displacements of three coordinate systems on a fixture-workpiece pair in relation to the deviation of the machined surface. The surface errors consist of the component movement, the component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study.

  17. ERROR VS REJECTION CURVE FOR THE PERCEPTRON

    OpenAIRE

    PARRONDO, JMR; VAN DEN BROECK, Christian

    1993-01-01

    We calculate the generalization error epsilon for a perceptron J, trained by a teacher perceptron T, on input patterns S that form a fixed angle arccos (J.S) with the student. We show that the error is reduced from a power law to an exponentially fast decay by rejecting input patterns that lie within a given neighbourhood of the decision boundary J.S = 0. On the other hand, the error vs. rejection curve epsilon(rho), where rho is the fraction of rejected patterns, is shown to be independent ...
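    The rejection scheme described, discarding inputs that fall within a margin of the student's decision boundary J.S = 0, is easy to simulate; the dimension, sample size, margins and the way the imperfect student is constructed below are arbitrary choices for the sketch:

        import numpy as np

        rng = np.random.default_rng(0)
        d = 50
        teacher = rng.standard_normal(d); teacher /= np.linalg.norm(teacher)
        student = teacher + 0.5 * rng.standard_normal(d); student /= np.linalg.norm(student)

        S = rng.standard_normal((100_000, d))
        S /= np.linalg.norm(S, axis=1, keepdims=True)   # unit-norm input patterns
        js = S @ student                                 # student field J.S
        ts = np.sign(S @ teacher)                        # teacher labels

        for margin in (0.0, 0.05, 0.10):
            keep = np.abs(js) > margin                   # reject patterns near the boundary
            rho = 1.0 - keep.mean()                      # fraction of rejected patterns
            err = np.mean(np.sign(js[keep]) != ts[keep]) # error on accepted patterns
            print(f"margin = {margin:.2f}   rho = {rho:.2f}   error = {err:.4f}")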

  18. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveal a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allow for correct measure of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup

  19. Human Error Analysis by Fuzzy-Set

    International Nuclear Information System (INIS)

    Situmorang, Johnny

    1996-01-01

    In conventional HRA the probability of error is treated as a single, exact value obtained by constructing an event tree, but here fuzzy-set theory is used instead. Fuzzy set theory treats the probability of error as a plausibility expressed by a linguistic variable. Most parameters or variables in human engineering are defined verbally (good, fairly good, worst, etc.), each describing a range of probability values. As an example, this analysis quantifies the human error in a calibration task; the resulting probability of miscalibration is very low.

  20. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
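    A statistical Monte Carlo estimate of the kind mentioned here can be sketched for a simple thermal balance P = m_dot * c_p * dT; all nominal values and sensor uncertainties below are assumptions for illustration, not KMRR data:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 100_000

        # Assumed nominal values and 1-sigma sensor uncertainties
        m_dot = rng.normal(300.0, 3.0, n)    # coolant mass flow rate, kg/s (1% error)
        t_in = rng.normal(35.0, 0.3, n)      # inlet temperature, deg C (RTD error)
        t_out = rng.normal(55.0, 0.3, n)     # outlet temperature, deg C (RTD error)
        cp = 4.18                            # kJ/(kg K), treated as exact here

        power_mw = m_dot * cp * (t_out - t_in) / 1000.0
        rel_err = power_mw.std() / power_mw.mean()
        print(f"Thermal power = {power_mw.mean():.1f} MW, relative 1-sigma error = {100 * rel_err:.2f}%")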

  1. Magnetic field errors tolerances of Nuclotron booster

    Science.gov (United States)

    Butenko, Andrey; Kazinova, Olha; Kostromin, Sergey; Mikhaylov, Vladimir; Tuzikov, Alexey; Khodzhibagiyan, Hamlet

    2018-04-01

    Generation of the magnetic field in the units of the booster synchrotron for the NICA project is one of the most important conditions for achieving the required parameters and qualitative accelerator operation. Research on the linear and nonlinear dynamics of the 197Au31+ ion beam in the booster has been carried out with the MADX program. An analytical estimation of the magnetic field error tolerances and a numerical computation of the dynamic aperture of the booster DFO magnetic lattice are presented. Closed orbit distortion with random errors of the magnetic fields and errors in the layout of the booster units was evaluated.

  2. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs.

  3. Awareness of technology-induced errors and processes for identifying and preventing such errors.

    Science.gov (United States)

    Bellwood, Paule; Borycki, Elizabeth M; Kushniruk, Andre W

    2015-01-01

    There is a need to determine if organizations working with health information technology are aware of technology-induced errors and how they are addressing and preventing them. The purpose of this study was to: a) determine the degree of technology-induced error awareness in various Canadian healthcare organizations, and b) identify those processes and procedures that are currently in place to help address, manage, and prevent technology-induced errors. We identified a lack of technology-induced error awareness among participants. Participants identified there was a lack of well-defined procedures in place for reporting technology-induced errors, addressing them when they arise, and preventing them.

  4. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  5. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.

  6. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in a distributed environment is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures that are technically untraceable by users. Error messages are not logged efficiently, and sometimes are not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.
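
    As a toy counterpart to the error classification idea above, the sketch below maps raw transfer error messages to coarse failure categories with simple rules. The categories and patterns are assumptions for illustration, not the framework proposed in the record.

        # Minimal sketch of rule-based error classification for data transfer failures.
        # Categories and patterns are illustrative assumptions.
        import re

        RULES = [
            (r"connection (refused|reset|timed out)|no route to host", "network"),
            (r"authentication|permission denied|certificate",          "security"),
            (r"no space left|quota exceeded|disk",                     "storage"),
            (r"file not found|no such file",                           "user_input"),
        ]

        def classify(error_message: str) -> str:
            """Map a raw transfer error message to a coarse failure category."""
            msg = error_message.lower()
            for pattern, category in RULES:
                if re.search(pattern, msg):
                    return category
            return "unknown"  # left for a human or a higher-level planner

        print(classify("globus_xio: Connection timed out"))    # -> network
        print(classify("GSSAPI error: certificate expired"))   # -> security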

  7. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEPs) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  8. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  9. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm
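
    The final modeling step mentioned above, turning a propagated noise signature into a power spectral density via a Fourier transform, can be sketched as follows. The sampling rate, noise frequencies, and amplitudes are illustrative assumptions, not values from the sensor-system study.

        # Minimal sketch: power spectral density of a propagated "noise" signature via FFT.
        # Sampling rate, frequencies and amplitudes are illustrative assumptions.
        import numpy as np

        fs = 1000.0                                # sampling rate [Hz]
        t = np.arange(0, 10, 1 / fs)               # 10 s record
        # Toy sensor-system signature: two noise frequencies plus white noise.
        x = 0.8 * np.sin(2 * np.pi * 37 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
        x += 0.1 * np.random.default_rng(0).standard_normal(t.size)

        X = np.fft.rfft(x)                         # one-sided spectrum
        psd = (np.abs(X) ** 2) / (fs * x.size)     # periodogram estimate of the PSD
        psd[1:-1] *= 2                             # fold in negative frequencies
        freqs = np.fft.rfftfreq(x.size, d=1 / fs)

        print(f"dominant noise frequency ~ {freqs[np.argmax(psd)]:.1f} Hz")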

  10. Identifying systematic DFT errors in catalytic reactions

    DEFF Research Database (Denmark)

    Christensen, Rune; Hansen, Heine Anton; Vegge, Tejs

    2015-01-01

    Using CO2 reduction reactions as examples, we present a widely applicable method for identifying the main source of errors in density functional theory (DFT) calculations. The method has broad applications for error correction in DFT calculations in general, as it relies on the dependence of the applied exchange–correlation functional on the reaction energies rather than on errors versus the experimental data. As a result, improved energy corrections can now be determined for both gas phase and adsorbed reaction species, which is particularly interesting within heterogeneous catalysis. We show that for the CO2 reduction reactions, the main source of error is associated with the C=O bonds and not the typically energy corrected OCO backbone.

  11. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEPs) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  12. Error Sonification of a Complex Motor Task

    Directory of Open Access Journals (Sweden)

    Riener Robert

    2011-12-01

    Visual information is mainly used to master complex motor tasks. Thus, additional information providing augmented feedback should be displayed in other modalities than vision, e.g. hearing. The present work evaluated the potential of error sonification to enhance learning of a rowing-type motor task. In contrast to a control group receiving self-controlled terminal feedback, the experimental group could not significantly reduce spatial errors. Thus, motor learning was not enhanced by error sonification, although participants could benefit from it during training. It seems that the motor task was too slow, resulting in immediate corrections of the movement rather than in an internal representation of the general characteristics of the motor task. Therefore, further studies should elaborate the impact of error sonification when the general characteristics of the motor task are already known.

  13. Human error in remote Afterloading Brachytherapy

    International Nuclear Information System (INIS)

    Quinn, M.L.; Callan, J.; Schoenfeld, I.; Serig, D.

    1994-01-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error

  14. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...

  15. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  16. Validation of Metrics as Error Predictors

    Science.gov (United States)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
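
    A compact sketch of the kind of logistic regression described above, predicting error probability from model metrics. The metric names, the tiny data set, and the use of scikit-learn are assumptions for illustration, not the chapter's actual EPC sample or variables.

        # Minimal sketch: logistic regression predicting error probability from model metrics.
        # Metric names and data are illustrative assumptions, not the EPC sample.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        # Hypothetical metrics per process model: size, connector count, depth.
        X = np.array([
            [12, 3, 2], [45, 14, 5], [30, 8, 3], [80, 25, 7],
            [10, 2, 1], [60, 20, 6], [25, 6, 3], [90, 30, 8],
        ])
        y = np.array([0, 1, 0, 1, 0, 1, 0, 1])   # 1 = the model contains an error

        clf = LogisticRegression().fit(X, y)

        new_model = np.array([[50, 18, 5]])
        p_error = clf.predict_proba(new_model)[0, 1]
        print(f"predicted error probability: {p_error:.2f}")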

  17. Cascade Error Projection: A New Learning Algorithm

    Science.gov (United States)

    Duong, T. A.; Stubberud, A. R.; Daud, T.; Thakoor, A. P.

    1995-01-01

    A new neural network architecture and a hardware implementable learning algorithm is proposed. The algorithm, called cascade error projection (CEP), handles lack of precision and circuit noise better than existing algorithms.

  18. Chernobyl - system accident or human error?

    International Nuclear Information System (INIS)

    Stang, E.

    1996-01-01

    Did human error cause the Chernobyl disaster? The standard point of view is that operator error was the root cause of the disaster. This was also the view of the Soviet Accident Commission. The paper analyses the operator errors at Chernobyl in a system context. The reactor operators committed errors that depended upon a lot of other failures that made up a complex accident scenario. The analysis is based on Charles Perrow's analysis of technological disasters. Failure possibility is an inherent property of high-risk industrial installations. The Chernobyl accident consisted of a chain of events that were both extremely improbable and difficult to predict. It is not reasonable to put the blame for the disaster on the operators. (author)

  19. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interaction with their surrounding environment or from inaccuracy in the control forces. Engineering different methods to combat errors in quantum devices is highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections, on open quantum systems and on quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  20. Addressing Medical Errors in Hand Surgery

    OpenAIRE

    Johnson, Shepard P.; Adkinson, Joshua M.; Chung, Kevin C.

    2014-01-01

    Influential think-tanks such as the Institute of Medicine have raised awareness about the implications of medical errors. In response, organizations, medical societies, and institutions have initiated programs to decrease the incidence and effects of these errors. Surgeons deal with the direct implications of adverse events involving patients. In addition to managing the physical consequences, they are confronted with ethical and social issues when caring for a harmed patient. Although there is...

  1. Radiologic errors, past, present and future.

    Science.gov (United States)

    Berlin, Leonard

    2014-01-01

    During the 10-year period beginning in 1949 with the publication of five articles in two radiology journals and the UK's The Lancet, a California radiologist named L.H. Garland almost single-handedly shocked the entire medical and especially the radiologic community. He focused their attention on the fact, now known and accepted by all but at that time not previously recognized and acknowledged only with great reluctance, that a substantial degree of observer error was prevalent in radiologic interpretation. In the more than half-century that followed, Garland's pioneering work has been affirmed and reaffirmed by numerous researchers. Retrospective studies disclosed then and still disclose today that diagnostic errors in radiologic interpretations of plain radiographic (as well as CT, MR, ultrasound, and radionuclide) images hover in the 30% range, not too dissimilar to the error rates in clinical medicine. Seventy percent of these errors are perceptual in nature, i.e., the radiologist does not "see" the abnormality on the imaging exam, perhaps due to poor conspicuity, satisfaction of search, or simply the "inexplicable psycho-visual phenomena of human perception." The remainder are cognitive errors: the radiologist sees an abnormality but fails to render a correct diagnosis by attaching the wrong significance to what is seen, perhaps due to inadequate knowledge, or an alliterative or judgmental error. Computer-assisted detection (CAD), a technology that for the past two decades has been utilized primarily in mammographic interpretation, increases sensitivity but at the same time decreases specificity; whether it reduces errors is debatable. Efforts to reduce diagnostic radiological errors continue, but the degree to which they will be successful remains to be determined.

  2. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
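
    A toy numerical illustration of the procedure described above: parameters of a model are adjusted to simulated data and a model error is quoted from the residuals. A linear model and a least-squares fit (equivalent to maximum likelihood under Gaussian errors) are assumptions of the sketch, not the paper's model.

        # Toy sketch: estimate model parameters and a model error from simulated data.
        # Least squares == maximum likelihood for Gaussian errors; model and data are assumptions.
        import numpy as np

        rng = np.random.default_rng(1)
        x = np.linspace(0, 10, 50)
        true_a, true_b, sigma = 2.0, 0.5, 0.3
        y = true_a + true_b * x + rng.normal(0, sigma, x.size)   # simulated "experiment"

        # Fit the linear model y = a + b*x by least squares.
        A = np.column_stack([np.ones_like(x), x])
        (a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)

        residuals = y - (a_hat + b_hat * x)
        model_error = np.sqrt(np.sum(residuals**2) / (x.size - 2))   # rms error, 2 fitted parameters

        print(f"a = {a_hat:.3f}, b = {b_hat:.3f}, model error = {model_error:.3f}")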

  3. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected from the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
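
    As a concrete reference point for the string-matching accounting discussed above, a minimal edit-distance (Levenshtein) error counter is sketched below. The unit weights for insertions, deletions, and substitutions are an assumption, and real OCR accounting must also handle suspect markers and the other implementation details the paper calls for.

        # Minimal sketch: counting OCR errors as the Levenshtein (edit) distance between
        # ground-truth text and OCR output. Unit costs are an assumption.
        def edit_distance(truth: str, ocr: str) -> int:
            m, n = len(truth), len(ocr)
            prev = list(range(n + 1))                 # distances for an empty truth prefix
            for i in range(1, m + 1):
                curr = [i] + [0] * n
                for j in range(1, n + 1):
                    cost = 0 if truth[i - 1] == ocr[j - 1] else 1
                    curr[j] = min(prev[j] + 1,        # deletion
                                  curr[j - 1] + 1,    # insertion
                                  prev[j - 1] + cost) # substitution / match
                prev = curr
            return prev[n]

        truth = "The quick brown fox"
        ocr_out = "Tbe qu1ck brown fox"
        errors = edit_distance(truth, ocr_out)
        print(f"{errors} errors, accuracy = {1 - errors / len(truth):.3f}")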

  4. Error Bounds: Necessary and Sufficient Conditions

    Czech Academy of Sciences Publication Activity Database

    Outrata, Jiří; Kruger, A.Y.; Fabian, Marián; Henrion, R.

    2010-01-01

    Roč. 18, č. 2 (2010), s. 121-149 ISSN 1877-0533 R&D Projects: GA AV ČR IAA100750802 Institutional research plan: CEZ:AV0Z10750506; CEZ:AV0Z10190503 Keywords : Error bounds * Calmness * Subdifferential * Slope Subject RIV: BA - General Mathematics Impact factor: 0.333, year: 2010 http://library.utia.cas.cz/separaty/2010/MTR/outrata-error bounds necessary and sufficient conditions.pdf

  5. Temperature error in digital bathythermograph data

    Digital Repository Service at National Institute of Oceanography (India)

    Pankajakshan, T.; Reddy, G.V.; Ratnakaran, L.; Sarupria, J.S.; RameshBabu, V.

    Short communication, Vol. 32(3), September 2003, pp. 234-236. Mean difference between DBT and Nansen temperature (hereafter referred to as 'error') from surface to 800 m depth and for the two cruises is given in Fig. 3. Error bars are provided...

  6. A theory of cross-validation error

    OpenAIRE

    Turney, Peter D.

    1994-01-01

    This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-bas...

  7. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high frequency transducers, matrix transducers, harmonic imaging and finally, elastography, influenced the improvement of breast disease diagnostics. Nevertheless, as in each imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling, insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those impossible to avoid and those potentially possible to reduce. In this article the most frequently made errors in ultrasound have been presented, including the ones caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts, improper setting of general enhancement or time gain curve or range. Errors dependent on the examiner, resulting in the wrong BIRADS-usg classification, are divided into negative and positive errors. The sources of these errors have been listed. The methods of minimizing the number of errors made have been discussed, including the ones related to the appropriate examination technique, taking into account data from the case history and the use of the greatest possible number of additional options such as: harmonic imaging, color and power Doppler and elastography. In the article examples of errors resulting from the technical conditions of the method have been presented, as well as those dependent on the examiner which are related to the great diversity and variation of ultrasound images of pathological breast lesions.

  8. Error Estimation in Preconditioned Conjugate Gradients

    Czech Academy of Sciences Publication Activity Database

    Strakoš, Zdeněk; Tichý, Petr

    2005-01-01

    Roč. 45, - (2005), s. 789-817 ISSN 0006-3835 R&D Projects: GA AV ČR 1ET400300415; GA AV ČR KJB1030306 Institutional research plan: CEZ:AV0Z10300504 Keywords : preconditioned conjugate gradient method * error bounds * stopping criteria * evaluation of convergence * numerical stability * finite precision arithmetic * rounding errors Subject RIV: BA - General Mathematics Impact factor: 0.509, year: 2005

  9. Initialization Errors in Quantum Data Base Recall

    OpenAIRE

    Natu, Kalyani

    2016-01-01

    This paper analyzes the relationship between initialization error and recall of a specific memory in the Grover algorithm for quantum database search. It is shown that the correct memory is obtained with high probability even when the initial state is far removed from the correct one. The analysis is done by relating the variance of error in the initial state to the recovery of the correct memory and the surprising result is obtained that the relationship between the two is essentially linear.

  10. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large-sized programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help make some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  11. Friendship at work and error disclosure

    Directory of Open Access Journals (Sweden)

    Hsiao-Yen Mao

    2017-10-01

    Organizations rely on contextual factors to promote employee disclosure of self-made errors, which induces a resource dilemma (i.e., disclosure entails costing one's own resources to bring others resources) and a friendship dilemma (i.e., disclosure is seemingly easier through friendship, yet the cost of friendship is embedded). This study proposes that friendship at work enhances error disclosure and uses conservation of resources theory as the underlying explanation. A three-wave survey collected data from 274 full-time employees with a variety of occupational backgrounds. Empirical results indicated that friendship enhanced error disclosure partially through relational mechanisms of employees' attitudes toward coworkers (i.e., employee engagement) and of coworkers' attitudes toward employees (i.e., perceived social worth). Such effects hold when controlling for established predictors of error disclosure. This study expands extant perspectives on employee error and the theoretical lenses used to explain the influence of friendship at work. We propose that, while promoting error disclosure through both contextual and relational approaches, organizations should be vigilant about potential incongruence.

  12. [Medication errors in Spanish intensive care units].

    Science.gov (United States)

    Merino, P; Martín, M C; Alonso, A; Gutiérrez, I; Alvarez, J; Becerril, F

    2013-01-01

    To estimate the incidence of medication errors in Spanish intensive care units. Post hoc study of the SYREC trial. A longitudinal observational study carried out during 24 hours in patients admitted to the ICU. Spanish intensive care units. Patients admitted to the intensive care unit participating in the SYREC during the period of study. Risk, individual risk, and rate of medication errors. The final study sample consisted of 1017 patients from 79 intensive care units; 591 (58%) were affected by one or more incidents. Of these, 253 (43%) had at least one medication-related incident. The total number of incidents reported was 1424, of which 350 (25%) were medication errors. The risk of suffering at least one incident was 22% (IQR: 8-50%) while the individual risk was 21% (IQR: 8-42%). The medication error rate was 1.13 medication errors per 100 patient-days of stay. Most incidents occurred in the prescription (34%) and administration (28%) phases, 16% resulted in patient harm, and 82% were considered "totally avoidable". Medication errors are among the most frequent types of incidents in critically ill patients, and are more common in the prescription and administration stages. Although most such incidents have no clinical consequences, a significant percentage prove harmful for the patient, and a large proportion are avoidable. Copyright © 2012 Elsevier España, S.L. and SEMICYUC. All rights reserved.

  13. Medication administration errors in Eastern Saudi Arabia

    International Nuclear Information System (INIS)

    Mir Sadat-Ali

    2010-01-01

    To assess the prevalence and characteristics of medication errors (ME) in patients admitted to King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia. Medication errors are documented by the nurses' and physicians' standard reporting forms (Hospital Based Incident Report). The study was carried out in King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia, and all the incident reports were collected during the period from January 2008 to December 2009. The incident reports were analyzed for age, gender, nationality, nursing unit, and time when the ME was reported. The data were analyzed and statistically significant differences between groups were determined by Student's t-test, and p-values of <0.05 using a confidence interval of 95% were considered significant. There were 38 ME reported for the study period. The youngest patient was 5 days old and the oldest 70 years. There were 31 Saudi and 7 non-Saudi patients involved. The most common error was missed medication, which was seen in 15 (39.5%) patients. Over 15 (39.5%) of the errors occurred in 2 units (pediatric medicine, and obstetrics and gynecology). Nineteen (50%) of the errors occurred during the 3-11 pm shift. Our study shows that the prevalence of ME in our institution is low in comparison with the world literature. This could be due to under-reporting of errors, and we believe that ME reporting should be made less punitive so that ME can be studied and preventive measures implemented (Author).

  14. Error Parsing: An alternative method of implementing social judgment theory

    OpenAIRE

    Crystal C. Hall; Daniel M. Oppenheimer

    2015-01-01

    We present a novel method of judgment analysis called Error Parsing, based upon an alternative method of implementing Social Judgment Theory (SJT). SJT and Error Parsing both posit the same three components of error in human judgment: error due to noise, error due to cue weighting, and error due to inconsistency. In that sense, the broad theory and framework are the same. However, SJT and Error Parsing were developed to answer different questions, and thus use different m...

  15. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  16. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20-65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed

  17. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  18. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
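
    A small worked example in the spirit of the column: caliper errors are propagated through the common xenograft volume formula V = L x W^2 / 2 to show how modest measurement errors become a large error bound on the computed volume. The formula choice and the numbers are assumptions for illustration, not values from the column.

        # Sketch: propagate caliper measurement errors into a tumor volume estimate.
        # Uses the common xenograft formula V = L * W**2 / 2; numbers are illustrative assumptions.
        length, width = 10.0, 8.0     # mm, measured
        dL, dW = 0.5, 0.5             # mm, assumed caliper error on each measurement

        volume = length * width**2 / 2.0

        # First-order (worst-case) relative error: dV/V = dL/L + 2*dW/W
        rel_err = dL / length + 2 * dW / width
        print(f"V = {volume:.1f} mm^3 +/- {100 * rel_err:.1f}% (~{volume * rel_err:.0f} mm^3)")
        # A 5% length error and a 6.25% width error combine to about 17.5% on the volume.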

  19. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

    Background: Although health services are designed and implemented to improve human health, errors in health services are a very common and sometimes fatal phenomenon. Medical errors and their cost are global issues with serious consequences for the patient community; they are preventable and require serious attention. Objectives: The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods: This descriptive research was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected through observation and interview by nine nurses in this section over a period of four months. Human error classification was based on the Rose and Rose and Swain and Guttmann models. According to the HEIST work sheets, the guide questions were answered and error causes were identified after determining the type of errors. Results: In total 527 errors were detected. Performing an operation on the wrong path had the highest frequency (150), and the second, with a frequency of 136, was doing tasks later than the deadline. Management causes, with a frequency of 451, ranked first among the identified errors. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the most influential factor in the occurrence of human errors. Conclusions: In order to prevent the occurrence and reduce the consequences of the identified errors, the following suggestions were proposed: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring professional workforce, and equipping the work space with appropriate facilities and equipment.

  20. Medication Administration Errors Involving Paediatric In-Patients in a ...

    African Journals Online (AJOL)

    The drug mostly associated with error was gentamicin with 29 errors (1.2 %). Conclusion: During the study, a high frequency of error was observed. There is a need to modify the way information is handled and shared by professionals as wrong time error was the most implicated error. Attention should also be given to IV ...

  1. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths to improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the RTTTR configuration (tilting head B-axis and rotary table A′ on the workpiece side) was set up taking into consideration rigid body kinematics and homogeneous transformation matrices, in which 43 error components are included. The volumetric error comprises 43 error components that can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece. When the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process involves detecting the present tool path and analyzing the geometrical error of the RTTTR five-axis CNC machine tool, translating the current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
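
    A minimal sketch of the rigid-body kinematics idea above: small geometric error terms are inserted into a chain of 4x4 homogeneous transformation matrices and the resulting TCP deviation is read off. The two-link chain and the error values are assumptions for illustration, far simpler than a real 43-component five-axis model.

        # Sketch: TCP deviation from small geometric errors in a kinematic chain of
        # homogeneous transformation matrices. Chain and error values are illustrative assumptions.
        import numpy as np

        def translation(x, y, z):
            T = np.eye(4)
            T[:3, 3] = [x, y, z]
            return T

        def rotation_z(theta):
            c, s = np.cos(theta), np.sin(theta)
            T = np.eye(4)
            T[:2, :2] = [[c, -s], [s, c]]
            return T

        # Ideal chain: 100 mm travel in X, 30 deg rotation about Z, then a 50 mm tool offset in X.
        ideal = translation(100, 0, 0) @ rotation_z(np.radians(30)) @ translation(50, 0, 0)

        # Actual chain: the same chain with small positioning and angular errors inserted.
        actual = (translation(100.02, 0.01, 0) @           # linear axis positioning error
                  rotation_z(np.radians(30) + 1e-4) @      # rotary axis angular error
                  translation(50.005, 0, 0))               # tool offset error

        origin = np.array([0.0, 0.0, 0.0, 1.0])
        deviation = (actual @ origin - ideal @ origin)[:3]
        print("TCP deviation [mm]:", np.round(deviation, 4))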

  2. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

    When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution may be more likely a by-product (due to response bias), rather than a cause, of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.

  3. Everyday memory errors in older adults.

    Science.gov (United States)

    Ossher, Lynn; Flegal, Kristin E; Lustig, Cindy

    2013-01-01

    Despite concern about cognitive decline in old age, few studies document the types and frequency of memory errors older adults make in everyday life. In the present study, 105 healthy older adults completed the Everyday Memory Questionnaire (EMQ; Sunderland, Harris, & Baddeley, 1983 , Journal of Verbal Learning and Verbal Behavior, 22, 341), indicating what memory errors they had experienced in the last 24 hours, the Memory Self-Efficacy Questionnaire (MSEQ; West, Thorn, & Bagwell, 2003 , Psychology and Aging, 18, 111), and other neuropsychological and cognitive tasks. EMQ and MSEQ scores were unrelated and made separate contributions to variance on the Mini Mental State Exam (MMSE; Folstein, Folstein, & McHugh, 1975 , Journal of Psychiatric Research, 12, 189), suggesting separate constructs. Tip-of-the-tongue errors were the most commonly reported, and the EMQ Faces/Places and New Things subscales were most strongly related to MMSE. These findings may help training programs target memory errors commonly experienced by older adults, and suggest which types of memory errors could indicate cognitive declines of clinical concern.

  4. Edge maps: Representing flow with bounded error

    KAUST Repository

    Bhatia, Harsh

    2011-03-01

    Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Many analysis techniques rely on computing streamlines, a task often hampered by numerical instabilities. Approaches that ignore the resulting errors can lead to inconsistencies that may produce unreliable visualizations and ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with linear maps defined on its boundary. This representation, called edge maps, is equivalent to computing all possible streamlines at a user defined error threshold. In spite of this error, all the streamlines computed using edge maps will be pairwise disjoint. Furthermore, our representation stores the error explicitly, and thus can be used to produce more informative visualizations. Given a piecewise-linear interpolated vector field, a recent result [15] shows that there are only 23 possible map classes for a triangle, permitting a concise description of flow behaviors. This work describes the details of computing edge maps, provides techniques to quantify and refine edge map error, and gives qualitative and visual comparisons to more traditional techniques. © 2011 IEEE.

  5. Comparing different error conditions in filmdosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2005-01-01

    Full text: In the evaluation of a film used as a personal dosemeter it may be necessary to mark the dosemeters when possible error conditions are recognized. These are errors that might have an influence on the ability to make a correct evaluation of the dose value, and include broken, contaminated or improperly handled dosemeters. In this project we have examined how two services (NIRH, GSF), from two different countries within the EU, mark their dosemeters. The services have a large difference in size, customer composition and issuing period, but both use film as their primary dosemeters. The possible error conditions that are examined here are dosemeters being contaminated, dosemeters exposed to moisture or light, missing filters in the dosemeter badges among others. The data are collected for the year 2003 where NIRH evaluated approximately 50 thousand and GSF about one million filmdosemeters. For each error condition the percentage of filmdosemeters belonging hereto is calculated as well as the distribution among different employee categories, i.e. industry, medicine, research, veterinary and other. For some error conditions we see a common pattern, while for others there is a large discrepancy between the services. The differences and possible explanations are discussed. The results of the investigation may motivate further comparisons between the different monitoring services in Europe. (author)

  6. Forecast Combination under Heavy-Tailed Errors

    Directory of Open Access Journals (Sweden)

    Gang Cheng

    2015-11-01

    Forecast combination has been proven to be a very important technique for obtaining accurate predictions in various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as the simple average, least squares regression or those based on the variance-covariance of the forecasts, may perform very poorly because outliers tend to occur, giving these methods unstable weights and leading to non-robust forecasts. To address this problem, in this paper, we propose two nonparametric forecast combination methods. One is specially proposed for situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student's t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behaviors of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds for both methods are developed. They show that the resulting combined forecasts yield near optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than the previous combination methods in the presence of forecast outliers.
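
    To make the outlier-sensitivity point above concrete, the sketch below compares a simple-average combination with a median combination when one candidate forecast has heavy-tailed errors. It illustrates the general problem only; it is not either of the two nonparametric methods proposed in the paper.

        # Sketch: mean vs. median forecast combination when one forecaster has heavy-tailed errors.
        # The data-generating setup is an illustrative assumption.
        import numpy as np

        rng = np.random.default_rng(0)
        T = 500
        truth = np.sin(np.linspace(0, 20, T))

        # Three candidate forecasts: two with Gaussian errors, one with heavy-tailed (Student t, df=2) errors.
        f1 = truth + 0.2 * rng.standard_normal(T)
        f2 = truth + 0.2 * rng.standard_normal(T)
        f3 = truth + 0.2 * rng.standard_t(2, size=T)
        forecasts = np.vstack([f1, f2, f3])

        mean_comb = forecasts.mean(axis=0)
        median_comb = np.median(forecasts, axis=0)

        def mae(f):
            return np.mean(np.abs(f - truth))

        print(f"MAE simple average : {mae(mean_comb):.3f}")
        print(f"MAE median combine : {mae(median_comb):.3f}")  # often smaller when outliers occur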

  7. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM model simulations to observation-based data sets. For doing so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates for drought persistence, where a dry status is defined as negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
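
    A small sketch of the dry-to-dry transition-probability estimate used above, where "dry" means a negative precipitation anomaly. The synthetic monthly precipitation series is an assumption for illustration, not GCM or observational data.

        # Sketch: dry-to-dry transition probability as an estimate of drought persistence.
        # "Dry" = negative precipitation anomaly; the synthetic monthly series is an assumption.
        import numpy as np

        rng = np.random.default_rng(0)
        precip = rng.gamma(shape=2.0, scale=50.0, size=600)   # 50 years of monthly precipitation
        anomaly = precip - precip.mean()                      # anomaly w.r.t. the record mean

        dry = anomaly < 0                                     # dry status per month
        # P(dry at t+1 | dry at t): fraction of dry months followed by another dry month.
        dry_now, dry_next = dry[:-1], dry[1:]
        p_dd = np.sum(dry_now & dry_next) / np.sum(dry_now)
        print(f"dry-to-dry transition probability: {p_dd:.2f}")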

  8. Pointwise asymptotic convergence of solutions for a phase separation model

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Pavel; Zheng, S.

    2006-01-01

    Roč. 16, č. 1 (2006), s. 1-18 ISSN 1078-0947 Institutional research plan: CEZ:AV0Z10190503 Keywords : phase-field system * asymptotic phase separation * energy Subject RIV: BA - General Mathematics Impact factor: 1.087, year: 2006 http://aimsciences.org/journals/pdfs.jsp?paperID=1875&mode=full

  9. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration - Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  10. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  11. [Diagnostic and organizational error in head injuries].

    Science.gov (United States)

    Zaba, Czesław; Zaba, Zbigniew; Swiderski, Paweł; Lorkiewicz-Muszyíska, Dorota

    2009-01-01

    The study aimed to present a case of diagnostic and organizational error involving the failure to detect foreign bodies in the soft tissues of the head. Head radiograms in two projections clearly demonstrated foreign bodies resembling flattened bullets in shape, which could not have been missed upon evaluation of the X-rays. On the other hand, the description of the radiograms entered by the attending physicians into the patient's medical record indicated an absence of traumatic injuries or foreign bodies. In the opinion of the authors, the case in question involved a diagnostic error: the doctors failed to detect the presence of foreign bodies in the head. The organizational error involved the failure to have the radiograms evaluated by a radiologist.

  12. Understanding error generation in fused deposition modeling

    International Nuclear Information System (INIS)

    Bochmann, Lennart; Transchel, Robert; Wegener, Konrad; Bayley, Cindy; Helu, Moneer; Dornfeld, David

    2015-01-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08–0.30 mm) are generally greater than in the x direction (0.12–0.62 mm) and the z direction (0.21–0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology. (paper)
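
    The error-budget figures quoted above come down to characterizing accuracy (systematic deviation) and precision (repeatability) per axis from repeated positioning measurements. The sketch below uses one common convention (accuracy = |mean deviation|, precision = 2 standard deviations) on hypothetical data; the paper's exact definitions and measurement procedure may differ.

```python
import numpy as np

def accuracy_precision(measured, nominal):
    """Accuracy taken as |mean deviation from the nominal position| and
    precision as 2 * sample standard deviation of the repeats (one common
    convention; the paper's error-budget definitions may differ)."""
    dev = np.asarray(measured, dtype=float) - nominal
    return abs(dev.mean()), 2.0 * dev.std(ddof=1)

# Hypothetical repeated measurements of a 50 mm commanded move per axis.
rng = np.random.default_rng(2)
nominal = 50.0
for axis, (bias, spread) in {"x": (0.10, 0.15),
                             "y": (0.05, 0.08),
                             "z": (0.15, 0.12)}.items():
    measured = nominal + bias + rng.normal(0.0, spread, size=20)
    acc, prec = accuracy_precision(measured, nominal)
    print(f"{axis}: accuracy ~ {acc:.2f} mm, precision ~ {prec:.2f} mm")
```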

  13. Understanding error generation in fused deposition modeling

    Science.gov (United States)

    Bochmann, Lennart; Bayley, Cindy; Helu, Moneer; Transchel, Robert; Wegener, Konrad; Dornfeld, David

    2015-03-01

    Additive manufacturing offers completely new possibilities for the manufacturing of parts. The advantages of flexibility and convenience of additive manufacturing have had a significant impact on many industries, and optimizing part quality is crucial for expanding its utilization. This research aims to determine the sources of imprecision in fused deposition modeling (FDM). Process errors in terms of surface quality, accuracy and precision are identified and quantified, and an error-budget approach is used to characterize errors of the machine tool. It was determined that accuracy and precision in the y direction (0.08-0.30 mm) are generally greater than in the x direction (0.12-0.62 mm) and the z direction (0.21-0.57 mm). Furthermore, accuracy and precision tend to decrease at increasing axis positions. The results of this work can be used to identify possible process improvements in the design and control of FDM technology.

  14. Asteroid orbital error analysis: Theory and application

    Science.gov (United States)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
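
    The linearized law of error propagation mentioned above maps the covariance matrix of the orbital elements into a positional covariance via the Jacobian of the position with respect to the elements, whose eigen-decomposition gives the uncertainty ellipsoid. A minimal numerical sketch, with a made-up covariance and Jacobian standing in for the values an orbit model would supply:

```python
import numpy as np

def propagate_covariance(jacobian, cov_elements):
    """Linearized law of error propagation: if x = f(q) and q has
    covariance C, then cov(x) ~= J C J^T, where J = df/dq."""
    return jacobian @ cov_elements @ jacobian.T

# Hypothetical 6x6 covariance of the orbital elements and a 3x6 Jacobian
# mapping element errors to position errors at some epoch; in practice both
# would come from the orbit determination and the ephemeris model.
rng = np.random.default_rng(3)
A = rng.normal(size=(6, 6))
cov_q = 1e-10 * (A @ A.T)        # symmetric positive definite, made up
J = rng.normal(size=(3, 6))

cov_pos = propagate_covariance(J, cov_q)
eigvals = np.linalg.eigvalsh(cov_pos)
print("1-sigma semi-axes of the positional uncertainty ellipsoid:",
      np.sqrt(np.maximum(eigvals, 0.0)))
```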

  15. Angular discretization errors in transport theory

    International Nuclear Information System (INIS)

    Nelson, P.; Yu, F.

    1992-01-01

    Elements of information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error tends to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed.

  16. Frequent methodological errors in clinical research.

    Science.gov (United States)

    Silva Aycaguer, L C

    2018-03-07

    Several errors that are frequently present in clinical research are listed, discussed and illustrated. A distinction is made between what can be considered an "error" arising from ignorance or neglect and what stems from a lack of integrity on the part of researchers, although it is recognized and documented that it is not always easy to establish which of the two applies. The work does not intend to provide an exhaustive inventory of such problems, but focuses on those that, while frequent, are usually less evident or less emphasized in the various lists of this type of problem that have been published. The examples illustrating the problems identified are deliberately developed in detail, rather than presenting a list of errors accompanied by a superficial description of their characteristics. Copyright © 2018 Elsevier España, S.L.U. y SEMICYUC. All rights reserved.

  17. Conically scanning lidar error in complex terrain

    Directory of Open Access Journals (Sweden)

    Ferhat Bingöl

    2009-05-01

    Full Text Available Conically scanning lidars assume the flow to be homogeneous in order to deduce the horizontal wind speed. However, in mountainous or complex terrain this assumption is not valid implying a risk that the lidar will derive an erroneous wind speed. The magnitude of this error is measured by collocating a meteorological mast and a lidar at two Greek sites, one hilly and one mountainous. The maximum error for the sites investigated is of the order of 10 %. In order to predict the error for various wind directions the flows at both sites are simulated with the linearized flow model, WAsP Engineering 2.0. The measurement data are compared with the model predictions with good results for the hilly site, but with less success at the mountainous site. This is a deficiency of the flow model, but the methods presented in this paper can be used with any flow model.
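
    The homogeneous-flow assumption enters when the radial (line-of-sight) velocities sampled around the scan cone are inverted for a single wind vector. The sketch below shows a standard least-squares retrieval under that assumption (not the WAsP Engineering flow modelling used in the paper); the geometry, noise level and wind values are illustrative assumptions.

```python
import numpy as np

def retrieve_wind(azimuth_deg, v_radial, elevation_deg):
    """Least-squares retrieval of (u, v, w) from conical-scan radial
    velocities, assuming the flow is horizontally homogeneous across the
    scan circle -- the assumption whose violation causes the error in
    complex terrain.  Beam geometry:
        v_r = u*cos(el)*sin(az) + v*cos(el)*cos(az) + w*sin(el)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    A = np.column_stack([np.cos(el) * np.sin(az),
                         np.cos(el) * np.cos(az),
                         np.sin(el) * np.ones_like(az)])
    uvw, *_ = np.linalg.lstsq(A, v_radial, rcond=None)
    return uvw

# Synthetic scan: true wind (u, v, w) = (8, 3, 0) m/s, 62-degree elevation,
# 0.1 m/s radial-velocity noise (all illustrative numbers).
rng = np.random.default_rng(4)
az = np.arange(0.0, 360.0, 10.0)
el = 62.0
A_true = np.column_stack([np.cos(np.radians(el)) * np.sin(np.radians(az)),
                          np.cos(np.radians(el)) * np.cos(np.radians(az)),
                          np.sin(np.radians(el)) * np.ones_like(az)])
vr = A_true @ np.array([8.0, 3.0, 0.0]) + rng.normal(0, 0.1, az.size)

u, v, w = retrieve_wind(az, vr, el)
print(f"retrieved horizontal wind speed: {np.hypot(u, v):.2f} m/s")
```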

  18. Experimental quantum error correction with high fidelity

    International Nuclear Information System (INIS)

    Zhang Jingfu; Gangloff, Dorian; Moussa, Osama; Laflamme, Raymond

    2011-01-01

    More than ten years ago a first step toward quantum error correction (QEC) was implemented [Phys. Rev. Lett. 81, 2152 (1998)]. The work showed there was sufficient control in nuclear magnetic resonance to implement QEC, and demonstrated that the error rate changed from ε to ∼ε². In the current work we reproduce a similar experiment using control techniques that have since been developed, such as pulses generated by the gradient ascent pulse engineering algorithm. We show that the fidelity of the QEC gate sequence and the comparative advantage of QEC are appreciably improved. This advantage is maintained despite the errors introduced by the additional operations needed to protect the quantum states.
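
    The quoted scaling from ε to ∼ε² is the generic benefit of a distance-3 code: an uncorrectable logical error requires at least two physical errors. A minimal sketch using the textbook three-qubit bit-flip repetition code under independent flips (an illustration of the scaling only, not the exact code or noise model of the NMR experiment):

```python
def logical_error_rate(eps):
    """Logical (uncorrectable) error probability of the three-qubit
    bit-flip repetition code under independent flips of probability eps:
    majority voting fails when two or three qubits flip."""
    return 3 * eps**2 * (1 - eps) + eps**3

for eps in (0.01, 0.05, 0.10):
    print(f"physical error {eps:.2f} -> encoded error {logical_error_rate(eps):.4f}")
```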

  19. Statistical mechanics of error-correcting codes

    Science.gov (United States)

    Kabashima, Y.; Saad, D.

    1999-01-01

    We investigate the performance of error-correcting codes, where the code word comprises products of K bits selected from the original message and decoding is carried out utilizing a connectivity tensor with C connections per index. Shannon's bound for the channel capacity is recovered for large K and zero temperature when the code rate K/C is finite. Close to optimal error-correcting capability is obtained for finite K and C. We examine the finite-temperature case to assess the use of simulated annealing for decoding and extend the analysis to accommodate other types of noisy channels.

  20. A chance to avoid mistakes human error

    International Nuclear Information System (INIS)

    Amaro, Pablo; Obeso, Eduardo; Gomez, Ruben

    2010-01-01

    In response to the lack of public information in the industry regarding the different tools used in the nuclear industry to minimize human error, a group of workers from different sections of the St. Maria de Garona NPP (Quality Assurance / Organization and Human Factors) decided to embark on a challenging and exciting project: 'Write a book collecting all the knowledge accumulated during their daily activities, very often while reading external information received from different organizations within the nuclear industry (INPO, WANO...), but also while visiting different NPPs, holding meetings and participating in training courses related to Human and Organizational Factors'. The main objective of the book is to present to the industry in general, in a practical way, the different tools that are used and fostered in the nuclear industry, so that their assimilation and implementation in other industries becomes possible and achievable in an efficient manner. After one year of work, the project is a reality. An abstract was presented during the last Spanish Nuclear Society meeting in Sevilla last October... and, best of all, the book is on the market for everybody at the web site www.bubok.com. The book is structured in the following areas. 'Errare humanum est': presents to the reader what human error is, its origin and the different barriers; the message is that the reader should see error as something continuously present in our lives... even more frequently than we think, and that by studying its origin, barriers can be established to avoid it or at least minimize it. 'Error's bitter face': shows the possible consequences of human errors, and what better way than presenting real experiences that have occurred in the industry; accidents in the nuclear industry, like Three Mile Island NPP and Chernobyl NPP, and past incidents like Davis Besse NPP, help the reader to reflect on the

  1. Valuing real options: frequently made errors

    OpenAIRE

    Fernández, Pablo

    2002-01-01

    In this paper we analyze frequently made errors when valuing real options. The best way of doing it is through examples. We start by analyzing Damodaran's proposal to value the option to expand the business of Home Depot. Some of the errors and problems of this and other approaches are: - Assuming that the option is replicable and using Black and Scholes' formula. - The estimation of the option's volatility is arbitrary and has a decisive effect on the option's value. - As there is no riskles...
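
    The second point, the decisive effect of an essentially arbitrary volatility estimate, is easy to demonstrate with the Black and Scholes formula itself. The sketch below prices a hypothetical expansion option under three volatility guesses; the numbers are invented for illustration and are not taken from Damodaran's Home Depot example.

```python
from math import erf, exp, log, sqrt

def norm_cdf(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(S, K, T, r, sigma):
    """Standard Black-Scholes price of a European call -- the formula the
    paper argues is misapplied to non-replicable real options."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# Hypothetical expansion option (all numbers invented): the value swings
# widely with the essentially arbitrary volatility estimate.
for sigma in (0.2, 0.4, 0.6):
    value = black_scholes_call(S=100.0, K=120.0, T=5.0, r=0.05, sigma=sigma)
    print(f"sigma = {sigma:.1f} -> option value = {value:.1f}")
```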

  2. Analysis of errors of radiation relay, (1)

    International Nuclear Information System (INIS)

    Koyanagi, Takami; Nakajima, Sinichi

    1976-01-01

    The statistical error of a liquid level controlled by a radiation relay is analysed and a method of minimizing the error is proposed. This method reduces to the problem of optimally setting the time constant of the radiation relay. The equations for obtaining the value of the time constant are presented, and the numerical results are shown in a table and plotted in a figure. The optimum time constant of the upper level control relay is entirely different from that of the lower level control relay. (auth.)

  3. Overview of error-tolerant cockpit research

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objectives of research in intelligent cockpit aids and intelligent error-tolerant systems are stated. In intelligent cockpit aids research, the objective is to provide increased aid and support to the flight crew of civil transport aircraft through the use of artificial intelligence techniques combined with traditional automation. In intelligent error-tolerant systems, the objective is to develop and evaluate cockpit systems that provide flight crews with safe and effective ways and means to manage aircraft systems, plan and replan flights, and respond to contingencies. A subsystems fault management functional diagram is given. All information is in viewgraph form.

  4. Software errors and complexity: An empirical investigation

    Science.gov (United States)

    Basili, Victor R.; Perricone, Berry T.

    1983-01-01

    The distributions and relationships derived from the change data collected during the development of a medium scale satellite software project show that meaningful results can be obtained which allow an insight into software traits and the environment in which it is developed. Modified and new modules were shown to behave similarly. An abstract classification scheme for errors which allows a better understanding of the overall traits of a software project is also shown. Finally, various size and complexity metrics are examined with respect to errors detected within the software yielding some interesting results.

  5. Relaxed error control in shape optimization that utilizes remeshing

    CSIR Research Space (South Africa)

    Wilke, DN

    2013-02-01

    Full Text Available Shape optimization strategies based on error indicators usually require strict error control for every computed design during the optimization run. The strict error control serves two purposes. Firstly, it allows for the accurate computation...

  6. Adjusting for the Incidence of Measurement Errors in Multilevel ...

    African Journals Online (AJOL)

    the incidence of measurement errors using these techniques generally revealed coefficient estimates of ... physical, biological, social and medical science, measurement errors are found. The errors are ... (M) and Science and Technology (ST).

  7. Characteristics of medication errors with parenteral cytotoxic drugs

    OpenAIRE

    Fyhr, A; Akselsson, R

    2012-01-01

    Errors involving cytotoxic drugs have the potential of being fatal and should therefore be prevented. The objective of this article is to identify the characteristics of medication errors involving parenteral cytotoxic drugs in Sweden. A total of 60 cases reported to the national error reporting systems from 1996 to 2008 were reviewed. Classification was made to identify cytotoxic drugs involved, type of error, where the error occurred, error detection mechanism, and consequences for the pati...

  8. Errors and Understanding: The Effects of Error-Management Training on Creative Problem-Solving

    Science.gov (United States)

    Robledo, Issac C.; Hester, Kimberly S.; Peterson, David R.; Barrett, Jamie D.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.

    2012-01-01

    People make errors in their creative problem-solving efforts. The intent of this article was to assess whether error-management training would improve performance on creative problem-solving tasks. Undergraduates were asked to solve an educational leadership problem known to call for creative thought where problem solutions were scored for…

  9. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    Science.gov (United States)

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  10. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transmission path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
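
    The 80 dB criterion can be checked directly once the signal and AC-bias amplitudes are known. The sketch below assumes the ratio is expressed as 20·log10 of the amplitude ratio (one plausible reading of the abstract; the paper may define it differently), with made-up amplitudes:

```python
from math import log10

def signal_to_ac_bias_db(signal_amplitude, ac_bias_amplitude):
    """Signal-to-AC-bias ratio in dB, taken here as 20*log10 of the
    amplitude ratio (an assumption about the definition)."""
    return 20.0 * log10(signal_amplitude / ac_bias_amplitude)

# Made-up amplitudes for illustration.
ratio_db = signal_to_ac_bias_db(1.0, 5e-5)
print(f"{ratio_db:.1f} dB -> above the 80 dB threshold: {ratio_db > 80.0}")
```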

  11. Responses to Error: Sentence-Level Error and the Teacher of Basic Writing

    Science.gov (United States)

    Foltz-Gray, Dan

    2012-01-01

    In this article, the author talks about sentence-level error, error in grammar, mechanics, punctuation, usage, and the teacher of basic writing. He states that communities are crawling with teachers and administrators and parents and state legislators and school board members who are engaged in sometimes rancorous debate over what to do about…

  12. Medication errors : the impact of prescribing and transcribing errors on preventable harm in hospitalised patients

    NARCIS (Netherlands)

    van Doormaal, J.E.; van der Bemt, P.M.L.A.; Mol, P.G.M.; Egberts, A.C.G.; Haaijer-Ruskamp, F.M.; Kosterink, J.G.W.; Zaal, Rianne J.

    Background: Medication errors (MEs) affect patient safety to a significant extent. Because these errors can lead to preventable adverse drug events (pADEs), it is important to know what type of ME is the most prevalent cause of these pADEs. This study determined the impact of the various types of

  13. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying these defects as medical error factors is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task and requires an analyst with a professional medical background, so a method is needed that extracts medical error factors while reducing the difficulty of the extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted; these were then related to 12 error factors. The relational model between the error-related items and the error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Additionally, compared with BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of "error-related items, their different levels, and the GA-BPNN model" was proposed as an error-factor identification technology that can automatically identify medical error factors.
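
    As a rough sketch of the GA-BPNN idea, the code below searches the weights of a small one-hidden-layer network with a plain genetic algorithm to map item levels to factor scores. The network size, GA settings and synthetic data are assumptions for illustration; the paper combines the GA with back-propagation training and uses coded error reports as input.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-in for the relational model: map 19 error-related item levels
# to 12 error-factor scores with a one-hidden-layer network whose weights
# are searched by a plain genetic algorithm.
N_IN, N_HID, N_OUT = 19, 8, 12
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT

def unpack(w):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    return W1, b1, W2, w[i:]

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def fitness(w, X, Y):
    return -np.mean((forward(w, X) - Y) ** 2)   # higher is better

# Hypothetical training data; in the study this would come from coded
# error reports (item levels) and expert-identified error factors.
X = rng.integers(0, 4, size=(200, N_IN)).astype(float)
Y = forward(rng.normal(0, 0.3, size=N_W), X) + rng.normal(0, 0.05, size=(200, N_OUT))

# Minimal GA: truncation selection plus Gaussian mutation (no crossover).
pop = rng.normal(0, 0.3, size=(60, N_W))
for generation in range(200):
    scores = np.array([fitness(w, X, Y) for w in pop])
    elite = pop[np.argsort(scores)[-15:]]                  # keep the best 25%
    children = elite[rng.integers(0, 15, size=45)] + rng.normal(0, 0.05, size=(45, N_W))
    pop = np.vstack([elite, children])

best = max(pop, key=lambda w: fitness(w, X, Y))
print("final training MSE:", -fitness(best, X, Y))
```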

  14. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors, and a bibliography is included. Appendices address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve, and the computer and calculator solution of problems.

  15. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: The increasing number of reports on medication errors and the subsequent harm, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. Direct observation was chosen as the method for detecting the errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. The dispensing error was studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 provided complete data for analysis. 73% of the prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering account for most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  16. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  17. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.

  18. Equating error in observed-score equating

    NARCIS (Netherlands)

    van der Linden, Willem J.

    2006-01-01

    Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and population of test takers. But it is argued that if the goal of equating is to adjust the scores of

  19. ERRORS AND DIFFICULTIES IN TRANSLATING LEGAL TEXTS

    Directory of Open Access Journals (Sweden)

    Camelia, CHIRILA

    2014-11-01

    Full Text Available Nowadays the accurate translation of legal texts has become highly important, as the mistranslation of a passage in a contract, for example, could lead to lawsuits and loss of money. Consequently, the translation of legal texts into other languages faces many difficulties, and only professional translators specialised in legal translation should deal with the translation of legal documents and scholarly writings. The purpose of this paper is to analyze translation from three perspectives: translation quality, errors and difficulties encountered in translating legal texts, and consequences of such errors in professional translation. First of all, the paper points out the importance of performing a good and correct translation, which is one of the most important elements to be considered when discussing translation. Furthermore, the paper presents an overview of the errors and difficulties in translating texts and of the consequences of errors in professional translation, with applications to the field of law. The paper is also an approach to the differences between languages (English and Romanian) that can hinder comprehension for those who have embarked upon the difficult task of translation. The research method used to achieve the objectives of the paper was the content analysis of the works of various Romanian and foreign authors.

  20. Role of memory errors in quantum repeaters

    International Nuclear Information System (INIS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.

    2007-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication.