WorldWideScience

Sample records for voi definition errors

  1. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Science.gov (United States)

    He, Bin; Frey, Eric C.

    2010-06-01

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed 111In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations were
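    The perturbation experiment described above can be illustrated with a small sketch. This is not the paper's Monte Carlo pipeline: it uses a toy ellipsoid "organ" on a uniform background (all shapes and values invented) to show why erosion, dilation and misregistration of a VOI bias a summed-activity estimate in different directions.

```python
import numpy as np
from scipy import ndimage

# Toy activity phantom (illustrative only, not the NCAT phantom):
# a uniform "organ" ellipsoid on a low background.
img = np.full((64, 64, 64), 0.1)
zz, yy, xx = np.mgrid[:64, :64, :64]
organ = ((zz - 32) ** 2 / 15 ** 2 + (yy - 32) ** 2 / 10 ** 2
         + (xx - 32) ** 2 / 10 ** 2) <= 1.0
img[organ] = 1.0
true_activity = img[organ].sum()

def voi_activity_error(voi):
    """Relative error in total activity when summing over a perturbed VOI."""
    return (img[voi].sum() - true_activity) / true_activity

# Erosion/dilation model systematic inward/outward segmentation bias;
# a one-voxel shift models VOI-to-image misregistration.
eroded = ndimage.binary_erosion(organ)
dilated = ndimage.binary_dilation(organ)
shifted = np.roll(organ, shift=1, axis=2)

for name, voi in [("eroded", eroded), ("dilated", dilated), ("shifted", shifted)]:
    print(f"{name:8s} {voi_activity_error(voi):+.3f}")
```

    Erosion drops high-activity boundary voxels (negative bias), dilation adds background voxels (positive bias), and a shift trades organ voxels for background voxels (negative bias); the magnitudes depend entirely on the assumed geometry.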

  2. The impact of 3D volume of interest definition on accuracy and precision of activity estimation in quantitative SPECT and planar processing methods

    Energy Technology Data Exchange (ETDEWEB)

    He Bin [Division of Nuclear Medicine, Department of Radiology, New York Presbyterian Hospital-Weill Medical College of Cornell University, New York, NY 10021 (United States); Frey, Eric C, E-mail: bih2006@med.cornell.ed, E-mail: efrey1@jhmi.ed [Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins Medical Institutions, Baltimore, MD 21287-0859 (United States)

    2010-06-21

    Accurate and precise estimation of organ activities is essential for treatment planning in targeted radionuclide therapy. We have previously evaluated the impact of processing methodology, statistical noise and variability in activity distribution and anatomy on the accuracy and precision of organ activity estimates obtained with quantitative SPECT (QSPECT) and planar (QPlanar) processing. Another important factor impacting the accuracy and precision of organ activity estimates is accuracy of and variability in the definition of organ regions of interest (ROI) or volumes of interest (VOI). The goal of this work was thus to systematically study the effects of VOI definition on the reliability of activity estimates. To this end, we performed Monte Carlo simulation studies using randomly perturbed and shifted VOIs to assess the impact on organ activity estimates. The 3D NCAT phantom was used with activities that modeled clinically observed {sup 111}In ibritumomab tiuxetan distributions. In order to study the errors resulting from misdefinitions due to manual segmentation errors, VOIs of the liver and left kidney were first manually defined. Each control point was then randomly perturbed to one of the nearest or next-nearest voxels in three ways: with no, inward or outward directional bias, resulting in random perturbation, erosion or dilation, respectively, of the VOIs. In order to study the errors resulting from the misregistration of VOIs, as would happen, e.g. in the case where the VOIs were defined using a misregistered anatomical image, the reconstructed SPECT images or projections were shifted by amounts ranging from -1 to 1 voxels in increments of 0.1 voxels in both the transaxial and axial directions. The activity estimates from the shifted reconstructions or projections were compared to those from the originals, and average errors were computed for the QSPECT and QPlanar methods, respectively. For misregistration, errors in organ activity estimations

  3. Construction site Voice Operated Information System (VOIS) test

    Science.gov (United States)

    Lawrence, Debbie J.; Hettchen, William

    1991-01-01

    The Voice Activated Information System (VAIS), developed by USACERL, allows inspectors to verbally log on-site inspection reports on a hand-held tape recorder. The tape is later processed by the VAIS, which enters the information into the system's database and produces a written report. The Voice Operated Information System (VOIS), developed by USACERL and Automated Sciences Group through a USACERL cooperative research and development agreement (CRDA), is an improved voice recognition system based on the concepts and function of the VAIS. To determine the applicability of the VOIS to Corps of Engineers construction projects, Technology Transfer Test Bed (T3B) funds were provided to the Corps of Engineers National Security Agency (NSA) Area Office (Fort Meade) to procure and implement the VOIS, and to train personnel in its use. This report summarizes the NSA application of the VOIS to quality assurance inspection of radio frequency shielding and to progress payment logs, and concludes that the VOIS is an easily implemented system that can offer improvements when applied to repetitive inspection procedures. Use of the VOIS can save time during inspection, improve documentation storage, and provide flexible retrieval of stored information.

  4. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually on individuals who have made errors. In large complex systems, however, most people work in teams or groups. Given this working environment, insufficient emphasis has been placed on 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power, aviation and shipping industries. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, deficiencies in resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors.

  5. The relationship of VOI threshold, volume and B/S on DISA images

    International Nuclear Information System (INIS)

    Song Liejing; Wang Mingming; Si Hongwei; Li Fei

    2011-01-01

    Objective: To explore the relationship of VOI threshold, volume and B/S on DISA phantom images. Methods: Ten hollow spheres were placed in a cylinder phantom. According to B/S ratios of 1:7, 1:5 and 1:4, 99mTcO4- and 18F-FDG were filled into the container and spheres, simultaneously and separately. Images were acquired by the DISA and SIDA protocols. The volume of interest (VOI) for each sphere was analyzed by a threshold method, and an expression was fitted individually to validate the relationship. Results: The fitted equation for estimating the optimal threshold was Tm = d + c × Bm/(e + f × Vm) + b/Vm. For the majority of the data, the calculated threshold fell within the 1% interval containing the true optimal threshold; those that did not fell in the immediately adjacent lower or upper intervals. Conclusions: For both DISA and SIDA images, based on the relationship of VOI threshold, volume and B/S, this method could accurately calculate the optimal threshold with an error of less than 1% for spheres whose volumes ranged from 3.3 to 30.8 ml. (authors)
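    As a hedged illustration of how the fitted model above would be applied, the sketch below evaluates Tm = d + c × Bm/(e + f × Vm) + b/Vm with made-up coefficients; the paper's fitted values are not given in the abstract.

```python
def optimal_threshold(Vm, Bm, b, c, d, e, f):
    """Optimal-threshold model from the abstract:
    Tm = d + c*Bm/(e + f*Vm) + b/Vm
    Vm: measured sphere volume (ml); Bm: background-to-sphere ratio.
    b, c, d, e, f are fit parameters; the values used below are invented.
    """
    return d + c * Bm / (e + f * Vm) + b / Vm

# Hypothetical coefficients (the paper fits these per protocol).
coef = dict(b=0.8, c=1.5, d=35.0, e=1.0, f=0.4)

for Vm in (3.3, 10.0, 30.8):          # sphere volumes in the validated range
    Tm = optimal_threshold(Vm, Bm=1 / 7, **coef)
    print(f"V = {Vm:5.1f} ml  ->  threshold = {Tm:.2f} %")
```

    With positive coefficients the model predicts a higher optimal threshold for smaller spheres, consistent with partial-volume behavior.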

  6. Feasibility of volume-of-interest (VOI) scanning technique in cone beam breast CT - a preliminary study

    International Nuclear Information System (INIS)

    Chen Lingyun; Shaw, Chris C.; Altunbas, Mustafa C.; Lai, C.-J.; Liu Xinming; Han Tao; Wang Tianpeng; Yang, Wei T.; Whitman, Gary J.

    2008-01-01

    The aim of this work is to demonstrate that high-quality cone beam CT images can be generated for a volume of interest (VOI) and to investigate the exposure reduction, dose saving, and scatter reduction achieved with the VOI scanning technique. The VOI scanning technique involves inserting a filtering mask between the x-ray source and the breast during image acquisition. The mask has an opening that allows the full x-ray exposure to be delivered to a preselected VOI and a lower, filtered exposure to the region outside the VOI. To investigate the effects of increased noise due to reduced exposure outside the VOI on the reconstructed VOI image, we directly extracted the projection data inside the VOI from the full-field projection data and added simulated noise to the projection data outside the VOI to represent the relative noise increase due to reduced exposure. The nonuniform reference images were simulated in an identical manner to normalize the projection images and measure the x-ray attenuation factor for the object. The standard Feldkamp-Davis-Kress filtered backprojection algorithm was used to reconstruct the 3D images. The noise level inside the VOI was evaluated and compared with that of the full-field, higher-exposure image. A calcification phantom and a low-contrast phantom were imaged. Dose reduction was investigated by estimating the dose distribution in a cylindrical water phantom using Monte Carlo simulation based on the Geant4 package. Scatter reduction at the detector input was also studied. Our results show that with the exposure level reduced by the VOI mask, the dose levels were significantly reduced both inside and outside the VOI without compromising the accuracy of image reconstruction, allowing the VOI to be imaged with more clarity and helping to reduce the breast dose. The contrast-to-noise ratio inside the VOI was improved. The VOI images were not adversely affected by the noisier projection data outside the VOI. Scatter intensities at the detector input were also shown to
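    The noise-simulation step described above (keeping full-exposure data inside the VOI and emulating reduced exposure outside it) might be sketched as follows; the count level, attenuation factor and detector geometry are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical full-field projection (detector counts); the mask opening
# is assumed to cover detector columns 48:80.
proj = rng.poisson(lam=1000.0, size=(128, 128)).astype(float)
voi_cols = slice(48, 80)

# Outside the opening the mask attenuates the beam. Emulate the
# reduced-exposure acquisition by scaling counts down, re-sampling
# Poisson noise, then scaling back up: same mean signal, noisier data.
attenuation = 0.2
sim = rng.poisson(proj * attenuation) / attenuation
sim[:, voi_cols] = proj[:, voi_cols]          # full exposure inside the VOI

inside = sim[:, voi_cols]
outside = np.concatenate([sim[:, :48], sim[:, 80:]], axis=1)
print(f"std inside VOI: {inside.std():.1f}, outside: {outside.std():.1f}")
```

    The scale-down/re-sample/scale-up trick preserves the mean projection value (so reconstruction stays unbiased) while inflating the relative noise outside the VOI, which is the effect the study evaluates.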

  7. Calculating potential error in sodium MRI with respect to the analysis of small objects.

    Science.gov (United States)

    Stobbe, Robert W; Beaulieu, Christian

    2018-06-01

    To facilitate correct interpretation of sodium MRI measurements, calculation of error with respect to rapid signal decay is introduced and combined with that of spatially correlated noise to assess volume-of-interest (VOI) 23Na signal measurement inaccuracies, particularly for small objects. Noise and signal decay-related error calculations were verified using twisted projection imaging and a specially designed phantom with different sized spheres of constant elevated sodium concentration. As a demonstration, lesion signal measurement variation (5 multiple sclerosis participants) was compared with that predicted from calculation. Both theory and phantom experiment showed that the VOI signal measurement in a large 10-mL, 314-voxel sphere was 20% less than expected on account of point-spread-function smearing when the VOI was drawn to include the full sphere. Volume-of-interest contraction reduced this error but increased noise-related error. Errors were even greater for smaller spheres (40-60% less than expected for a 0.35-mL, 11-voxel sphere). Image-intensity VOI measurements varied and increased with multiple sclerosis lesion size in a manner similar to that predicted from theory. The correlation suggests large underestimation of the 23Na signal in small lesions. Acquisition-specific measurement error calculation aids 23Na MRI data analysis and highlights the limitations of current low-resolution methodologies. Magn Reson Med 79:2968-2977, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
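    The point-spread-function effect described above can be imitated with a simple stand-in model: blur a uniform sphere with a Gaussian PSF (not the actual twisted-projection PSF; all sizes are invented) and measure the mean signal inside a VOI drawn to the full sphere extent.

```python
import numpy as np
from scipy import ndimage

def voi_underestimation(radius_vox, psf_sigma=2.0, n=64):
    """Fractional signal loss inside a full-extent VOI after PSF blurring.

    Toy model: a uniform unit-intensity sphere is blurred with a Gaussian
    PSF; the VOI is the original sphere. Smearing pushes signal out of the
    VOI, so the mean inside falls below the true value of 1.0.
    """
    zz, yy, xx = np.mgrid[:n, :n, :n] - n // 2
    sphere = (zz ** 2 + yy ** 2 + xx ** 2 <= radius_vox ** 2).astype(float)
    blurred = ndimage.gaussian_filter(sphere, psf_sigma)
    return 1.0 - blurred[sphere > 0].mean()

small, large = voi_underestimation(2.0), voi_underestimation(8.0)
print(f"signal loss: small sphere {small:.0%}, large sphere {large:.0%}")
```

    The sketch reproduces the qualitative finding that the underestimation grows as the object shrinks relative to the PSF width, though the exact percentages depend on the assumed PSF.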

  8. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  9. WACC: Definition, misconceptions and errors

    OpenAIRE

    Fernandez, Pablo

    2011-01-01

    The WACC is just the rate at which the Free Cash Flows must be discounted to obtain the same result as in the valuation using Equity Cash Flows discounted at the required return to equity (Ke). The WACC is neither a cost nor a required return: it is a weighted average of a cost and a required return. To refer to the WACC as the "cost of capital" may be misleading because it is not a cost. The paper includes 7 errors due to not remembering the definition of WACC and shows the relationship betwe...

  10. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Herzog, Hans

    2013-01-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternately one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 without brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI- and VOI-atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with that of water (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations of up to 1.24% were found.
    Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed, revealing very little influence of the skull lesions (less than 3%), while the filled

  11. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    Science.gov (United States)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternately one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 without brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI- and VOI-atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with that of water (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations of up to 1.24% were found.
    Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed, revealing very little influence of the skull lesions (less than 3%), while the filled nasal

  12. How are medication errors defined? A systematic literature review of definitions and characteristics

    DEFF Research Database (Denmark)

    Lisby, Marianne; Nielsen, L P; Brock, Birgitte

    2010-01-01

    Multiplicity in terminology has been suggested as a possible explanation for the variation in the prevalence of medication errors. So far, few empirical studies have challenged this assertion. The objective of this review was, therefore, to describe the extent and characteristics of medication error definitions in hospitals and to consider the consequences for measuring the prevalence of medication errors.

  13. A technique for manual definition of an irregular volume of interest in single photon emission computed tomography

    International Nuclear Information System (INIS)

    Fleming, J.S.; Kemp, P.M.; Bolt, L.

    1999-01-01

    A technique is described for manually outlining a volume of interest (VOI) in a three-dimensional SPECT dataset. Regions of interest (ROIs) are drawn on three orthogonal maximum intensity projections. Image masks based on these ROIs are backprojected through the image volume and the resultant 3D dataset is segmented to produce the VOI. The technique has been successfully applied in the exclusion of unwanted areas of activity adjacent to the brain when segmenting the organ in SPECT imaging using 99mTc-HMPAO. An example of its use for segmentation in tumour imaging is also presented. The technique is of value for applications involving semi-automatic VOI definition in SPECT. (author)
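    A minimal sketch of the backproject-and-intersect idea above, assuming a toy volume and simple thresholded ROIs on the three orthogonal projections (the paper's ROIs are drawn manually, and its final segmentation step is omitted here):

```python
import numpy as np

# Hypothetical volume with one rectangular hot object.
img = np.zeros((32, 32, 32))            # axes: (z, y, x)
img[10:20, 12:22, 8:18] = 1.0

# ROIs on the three maximum-intensity projections (here obtained by
# thresholding; in the paper they are drawn by hand).
roi_yx = img.max(axis=0) > 0.5          # projection along z -> (y, x)
roi_zx = img.max(axis=1) > 0.5          # projection along y -> (z, x)
roi_zy = img.max(axis=2) > 0.5          # projection along x -> (z, y)

# Backproject each 2D mask along its projection axis (broadcasting)
# and intersect the three resulting prisms to form the candidate VOI.
voi = roi_yx[None, :, :] & roi_zx[:, None, :] & roi_zy[:, :, None]

print(voi.sum(), int(img.sum()))
```

    For a box-shaped object the intersection recovers the object exactly; for concave shapes the intersection is a superset, which is why the method follows it with a segmentation step on the masked volume.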

  14. Cross-cultural adaptation of the Chilean version of the Voice Symptom Scale - VoiSS.

    Science.gov (United States)

    Ruston, Francisco Contreras; Moreti, Felipe; Vivero, Martín; Malebran, Celina; Behlau, Mara

    This research aims to accomplish the cross-cultural equivalence of the Chilean version of the VoiSS protocol through its cultural and linguistic adaptation. After the translation of the VoiSS protocol into Chilean Spanish by two bilingual speech therapists and its back-translation into English, we compared the items of the original tool with the previous translated version. The existing discrepancies were modified by a consensus committee of five speech therapists, and the translated version was entitled Escala de Sintomas Vocales - ESV, with 30 questions and five response options: "Never", "Occasionally", "Sometimes", "Most of the time", "Always". For cross-cultural equivalence, the protocol was applied to 15 individuals with vocal problems. In each question the option "Not applicable" was added to the answer choices for identification of questions not comprehended or not appropriate for the target population. Two individuals had difficulty answering two questions, which made it necessary to adapt the translation of only one of them. The modified ESV was applied to three further individuals with vocal problems, and there were no incomprehensible or inappropriate questions for the Chilean culture. The ESV reflects the original English version, both in the number of questions and in the limitations of the emotional and physical domains. There is now a cross-cultural equivalence of the VoiSS in Chilean Spanish, titled ESV. The validation of the ESV for Chilean Spanish is ongoing.

  15. Estimating study costs for use in VOI, a study of dutch publicly funded drug related research

    NARCIS (Netherlands)

    Van Asselt, A.D.; Ramaekers, B.L.; Corro Ramos, I.; Joore, M.A.; Al, M.J.; Lesman-Leegte, I.; Postma, M.J.; Vemer, P.; Feenstra, T.F.

    2016-01-01

    Objectives: To perform value of information (VOI) analyses, an estimate of research costs is needed. However, reference values for such costs are not available. This study aimed to analyze empirical data on research budgets and, by means of a cost tool, provide an overview of costs of several types

  16. Error or "act of God"? A study of patients' and operating room team members' perceptions of error definition, reporting, and disclosure.

    Science.gov (United States)

    Espin, Sherry; Levinson, Wendy; Regehr, Glenn; Baker, G Ross; Lingard, Lorelei

    2006-01-01

    Calls abound for a culture change in health care to improve patient safety. However, effective change cannot proceed without a clear understanding of perceptions and beliefs about error. In this study, we describe and compare operative team members' and patients' perceptions of error, reporting of error, and disclosure of error. Thirty-nine interviews of team members (9 surgeons, 9 nurses, 10 anesthesiologists) and patients (11) were conducted at 2 teaching hospitals using 4 scenarios as prompts. Transcribed responses to open questions were analyzed by 2 researchers for recurrent themes using the grounded-theory method. Yes/no answers were compared across groups using chi-square analyses. Team members and patients agreed on what constitutes an error. Deviation from standards and negative outcome were emphasized as definitive features. Patients and nurse professionals differed significantly in their perception of whether errors should be reported. Nurses were willing to report only events within their disciplinary scope of practice. Although most patients strongly advocated full disclosure of errors (what happened and how), team members preferred to disclose only what happened. When patients did support partial disclosure, their rationales varied from that of team members. Both operative teams and patients define error in terms of breaking the rules and the concept of "no harm no foul." These concepts pose challenges for treating errors as system failures. A strong culture of individualism pervades nurses' perception of error reporting, suggesting that interventions are needed to foster collective responsibility and a constructive approach to error identification.

  17. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  18. SU-F-T-252: An Investigation of Gamma Knife Frame Definition Error When Using a Pre-Planning Workflow

    International Nuclear Information System (INIS)

    Johnson, P

    2016-01-01

    Purpose: To determine causal factors related to high frame definition error when treating GK patients using a pre-planning workflow. Methods: 160 cases were retrospectively reviewed. All patients received treatment using a pre-planning workflow whereby stereotactic coordinates are determined from a CT scan acquired after framing using a fiducial box. The planning software automatically detects the fiducials and compares their location to expected values based on the rigid design of the fiducial system. Any difference is reported as mean and maximum frame definition error. The manufacturer recommends these values be less than 1.0 mm and 1.5 mm. In this study, frame definition error was analyzed in comparison with a variety of factors including which neurosurgeon/oncologist/physicist was involved with the procedure, number of posts used during framing (3 or 4), type of lesion, and which CT scanner was utilized for acquisition. An analysis of variance (ANOVA) approach was used to statistically evaluate the data and determine causal factors related to instances of high frame definition error. Results: Two factors were identified as significant: number of posts (p=0.0003) and CT scanner (p=0.0001). Further analysis showed that one of the four scanners was significantly different from the others. This diagnostic scanner was identified as an older model whose localization lasers were not tightly calibrated. The average value for maximum frame definition error using this scanner was 1.48 mm (4 posts) and 1.75 mm (3 posts). For the other scanners this value was 1.13 mm (4 posts) and 1.40 mm (3 posts). Conclusion: In utilizing a pre-planning workflow the choice of CT scanner matters. Any scanner utilized for GK should undergo routine QA at a level appropriate for radiation oncology. In terms of 3 vs 4 posts, it is hypothesized that three posts provide less stability during CT acquisition. This will be tested in future work.
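    The ANOVA comparison can be illustrated with a toy one-way analysis across four hypothetical scanners; the group means mirror the values reported above, but the data, spreads and sample sizes are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical maximum frame-definition errors (mm) grouped by CT scanner;
# scanner "D" plays the role of the poorly calibrated older unit.
scanners = {
    "A": rng.normal(1.13, 0.15, 40),
    "B": rng.normal(1.13, 0.15, 40),
    "C": rng.normal(1.13, 0.15, 40),
    "D": rng.normal(1.48, 0.15, 40),
}

# One-way ANOVA: is at least one scanner's mean error different?
f_stat, p_value = stats.f_oneway(*scanners.values())
print(f"F = {f_stat:.1f}, p = {p_value:.2g}")
```

    A significant p-value only says the groups differ somewhere; identifying which scanner drives the effect, as the study does, requires a post-hoc comparison (e.g. Tukey's HSD).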

  19. SU-F-T-252: An Investigation of Gamma Knife Frame Definition Error When Using a Pre-Planning Workflow

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, P [University of Miami, Miami, FL (United States)

    2016-06-15

    Purpose: To determine causal factors related to high frame definition error when treating GK patients using a pre-planning workflow. Methods: 160 cases were retrospectively reviewed. All patients received treatment using a pre-planning workflow whereby stereotactic coordinates are determined from a CT scan acquired after framing using a fiducial box. The planning software automatically detects the fiducials and compares their location to expected values based on the rigid design of the fiducial system. Any difference is reported as mean and maximum frame definition error. The manufacturer recommends these values be less than 1.0 mm and 1.5 mm. In this study, frame definition error was analyzed in comparison with a variety of factors including which neurosurgeon/oncologist/physicist was involved with the procedure, number of posts used during framing (3 or 4), type of lesion, and which CT scanner was utilized for acquisition. An analysis of variance (ANOVA) approach was used to statistically evaluate the data and determine causal factors related to instances of high frame definition error. Results: Two factors were identified as significant: number of posts (p=0.0003) and CT scanner (p=0.0001). Further analysis showed that one of the four scanners was significantly different from the others. This diagnostic scanner was identified as an older model whose localization lasers were not tightly calibrated. The average value for maximum frame definition error using this scanner was 1.48 mm (4 posts) and 1.75 mm (3 posts). For the other scanners this value was 1.13 mm (4 posts) and 1.40 mm (3 posts). Conclusion: In utilizing a pre-planning workflow the choice of CT scanner matters. Any scanner utilized for GK should undergo routine QA at a level appropriate for radiation oncology. In terms of 3 vs 4 posts, it is hypothesized that three posts provide less stability during CT acquisition. This will be tested in future work.

  20. Two-Tier VoI Prioritization System on Requirement-Based Data Streaming toward IoT

    Directory of Open Access Journals (Sweden)

    Sunyanan Choochotkaew

    2017-01-01

    Toward the world of the Internet of Things, people utilize knowledge from sensor streams in various kinds of smart applications. The number of sensing devices is rapidly increasing, along with the amount of sensing data. Consequently, a bottleneck problem at the local gateway has attracted high concern. An example scenario is smart elderly houses in rural areas, where each house installs thousands of sensors and all connect to resource-limited and unstable 2G/3G networks. The bottleneck state can incur unacceptable latency and loss of significant data due to the limited waiting queue. Orthogonally to the existing solutions, we propose a two-tier prioritization system to enhance information quality, indicated by VoI, at the local gateway. The proposed system has been designed to support several requirements with several conflicting criteria over shared sensing streams. Our approach adopts a Multicriteria Decision Analysis technique to merge requirements and to assess the VoI. We introduce a framework that can reduce the computational cost by precalculation. Through a case study of building management systems, we have shown that our merge algorithm can provide a cosine similarity of 0.995 in representing all user requirements, and the evaluation approach can obtain satisfaction values around 3 times higher than naïve strategies for the top-list data.
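    The reported cosine similarity of 0.995 measures how well one merged weight vector represents several users' requirement vectors. A minimal sketch, with invented per-criterion weights and plain averaging as the merge step (the paper's MCDA-based merge is more elaborate):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two weight vectors (1.0 = identical direction)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical per-criterion VoI weights from three user requirements.
reqs = np.array([
    [0.6, 0.3, 0.1],
    [0.5, 0.4, 0.1],
    [0.7, 0.2, 0.1],
])

# Merge step (illustrative): a single averaged weight vector the gateway
# evaluates once per data item instead of once per requirement.
merged = reqs.mean(axis=0)

sims = [cosine_similarity(merged, r) for r in reqs]
print(f"worst-case similarity: {min(sims):.3f}")
```

    The worst-case similarity across requirements is the figure of merit: the closer it stays to 1.0, the less information any single user loses by sharing the merged prioritization.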

  1. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case. 
Here, case-specific probabilities of undetected errors are needed.

  2. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review explains the definition of medication errors, the scope of the medication error problem, the types of medication errors and their common causes, and the monitoring, consequences, prevention, and management of medication errors, supported by clear tables that are easy to understand.

  3. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements.
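One standard quantity in the two-dimensional error theory mentioned above is the circular error probable (CEP): the radius of the probability circle containing 50% of the error distribution. A Monte Carlo sketch (not the chapter's exact derivation), assuming independent zero-mean Gaussian error components:

```python
import math
import random

random.seed(42)

# Two-dimensional error: independent zero-mean Gaussian components.
sigma_x, sigma_y = 1.0, 1.0
radii = sorted(math.hypot(random.gauss(0, sigma_x), random.gauss(0, sigma_y))
               for _ in range(20000))
cep = radii[len(radii) // 2]  # median radial error = circular error probable

# For equal sigmas the closed form is CEP = sigma * sqrt(2 ln 2) ~= 1.1774 * sigma.
theoretical = sigma_x * math.sqrt(2 * math.log(2))
```

For unequal sigmas the probability contour becomes the probability ellipse, and the CEP has no simple closed form, which is where the chapter's elliptical error evaluation comes in.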

  4. Evaluation of elastix-based propagated align algorithm for VOI- and voxel-based analysis of longitudinal F-18-FDG PET/CT data from patients with non-small cell lung cancer (NSCLC)

    OpenAIRE

    Kerner, Gerald S. M. A.; Fischer, Alexander; Koole, Michel J. B.; Pruim, Jan; Groen, Harry J. M.

    2015-01-01

    Background: Deformable image registration allows volume of interest (VOI)- and voxel-based analysis of longitudinal changes in fluorodeoxyglucose (FDG) tumor uptake in patients with non-small cell lung cancer (NSCLC). This study evaluates the performance of the elastix toolbox deformable image registration algorithm for VOI and voxel-wise assessment of longitudinal variations in FDG tumor uptake in NSCLC patients. Methods: Evaluation of the elastix toolbox was performed using F-18-FDG PET/CT ...

  5. Hydatid cysts of the liver ruptured into the bile ducts: a report of 120 cases

    Science.gov (United States)

    Moujahid, Mountassir; Tajdine, Mohamed Tarik

    2011-01-01

    A retrospective study reporting a series of hydatid cysts ruptured into the bile ducts, collected in the surgery department of the Avicenne military hospital in Marrakech. Between 1990 and 2008, of 536 hydatid cysts of the liver operated on in the department, 120 (22.38%) were complicated by rupture into the bile ducts. There were 82 men and 38 women. The mean age was 35 years, with extremes ranging from 10 to 60 years. The clinical picture was dominated by attacks of cholangitis or right flank pain. Jaundice was isolated in eight cases. The biliocystic fistula was latent in more than 50% of cases. Treatment consisted of resection of the protruding dome in 103 cases (85.84%), pericystectomy in 11 patients (9.16%) and left lobectomy in six cases (5%). Treatment of the biliocystic fistula consisted of suture in 36 patients and bipolar drainage in 25 cases; cysto-biliary disconnection, or trans-hepatico-cystic choledochotomy according to Perdomo, was performed in 49 cases and a choledochoduodenal bilio-digestive anastomosis in 10 cases. The mean hospital stay was 20 days. We report two deaths from septic shock and a third from encephalopathy secondary to biliary cirrhosis. Morbidity comprised eight subphrenic abscesses, twelve prolonged biliary fistulas and two intestinal obstructions. Hydatid cysts ruptured into the bile ducts represent the most serious complication of this benign disease. Treatment relies on radical methods, which are of recognized efficacy but hazardous to perform, and on conservative methods, in particular cysto-biliary disconnection, a simple method that gives good short- and long-term results. PMID:22384289

  6. Identification of a new molecule of interest in horses with recurrent airway obstruction: Pentraxin 3

    OpenAIRE

    Ramery, Eve

    2010-01-01

    Recurrent airway obstruction (RAO) is the most common cause of chronic pulmonary disease in the adult horse. The disease is characterized by bronchial hyperreactivity, excessive mucus production and neutrophilic pulmonary inflammation, which reduce the dynamic compliance of the lung and increase the resistance of the airways to airflow. While the disease has been a documented entity in the literature since...

  7. Development of a methodology for classifying software errors

    Science.gov (United States)

    Gerhart, S. L.

    1976-01-01

    A mathematical formalization of the intuition behind classification of software errors is devised and then extended to a classification discipline: Every classification scheme should have an easily discernible mathematical structure and certain properties of the scheme should be decidable (although whether or not these properties hold is relative to the intended use of the scheme). Classification of errors then becomes an iterative process of generalization from actual errors to terms defining the errors together with adjustment of definitions according to the classification discipline. Alternatively, whenever possible, small scale models may be built to give more substance to the definitions. The classification discipline and the difficulties of definition are illustrated by examples of classification schemes from the literature and a new study of observed errors in published papers of programming methodologies.

  8. Under which conditions, additional monitoring data are worth gathering for improving decision making? Application of the VOI theory in the Bayesian Event Tree eruption forecasting framework

    Science.gov (United States)

    Loschetter, Annick; Rohmer, Jérémy

    2016-04-01

    Standard and new-generation monitoring observations provide, in almost real time, important information about the evolution of a volcanic system. These observations are used to update the model and contribute to a better hazard assessment and to support decision making concerning potential evacuation. The BET_EF framework (based on a Bayesian Event Tree) developed by INGV enables integrating information from monitoring with the prospect of decision making. Using this framework, the objectives of the present work are: i. to propose a method to assess the added value of information from monitoring, within the Value of Information (VOI) theory; ii. to perform sensitivity analysis on the different parameters that influence the VOI from monitoring. VOI consists in assessing the possible increase in expected value provided by gathering information, for instance through monitoring. Basically, the VOI is the difference between the value with information and the value without additional information in a cost-benefit approach. This theory is well suited to situations that can be represented in the form of a decision tree, such as the BET_EF tool. Reference values and ranges of variation (for sensitivity analysis) were defined for input parameters, based on data from the MESIMEX exercise (performed at Vesuvio volcano in 2006). Complementary methods for sensitivity analysis were implemented: local, global using Sobol' indices, and regional using Contribution to Sample Mean and Variance plots. The results (specific to the case considered) obtained with the different techniques are in good agreement and enable answering the following questions: i. Which characteristics of monitoring are important for early warning (reliability)? ii. How do experts' opinions influence the hazard assessment and thus the decision? Concerning the characteristics of monitoring, the most influential parameters are the means rather than the variances for the case considered.
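The core VOI definition used above — value with information minus value without it, in a cost-benefit frame — can be sketched on a toy two-action, two-state evacuation decision. All numbers below are invented for illustration and are not from the MESIMEX exercise.

```python
# Hypothetical eruption decision: prior probability and a cost table
# mapping (action, state) to expected losses in arbitrary cost units.
p_eruption = 0.2
cost = {
    ("evacuate", "eruption"): 10,  ("evacuate", "quiet"): 10,   # evacuation cost
    ("stay",     "eruption"): 100, ("stay",     "quiet"): 0,    # casualty cost
}

def expected_cost(action, p):
    return p * cost[(action, "eruption")] + (1 - p) * cost[(action, "quiet")]

# Without monitoring: choose the action minimizing the prior expected cost.
v_without = min(expected_cost(a, p_eruption) for a in ("evacuate", "stay"))

# With a perfect monitoring signal: the state is known before acting, so the
# best action is chosen per state and the costs are averaged over the prior.
v_perfect = (p_eruption * min(cost[(a, "eruption")] for a in ("evacuate", "stay"))
             + (1 - p_eruption) * min(cost[(a, "quiet")] for a in ("evacuate", "stay")))

voi = v_without - v_perfect  # expected value of perfect information
```

Real monitoring is imperfect, so the VOI of an actual observation network lies between zero and this perfect-information bound; BET_EF-style analyses compute the "with information" term by Bayesian updating over the event tree instead.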

  9. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  10. The benefit of generating errors during learning.

    Science.gov (United States)

    Potts, Rosalind; Shanks, David R

    2014-04-01

    Testing has been found to be a powerful learning tool, but educators might be reluctant to make full use of its benefits for fear that any errors made would be harmful to learning. We asked whether testing could be beneficial to memory even during novel learning, when nearly all responses were errors, and where errors were unlikely to be related to either cues or targets. In 4 experiments, participants learned definitions for unfamiliar English words, or translations for foreign vocabulary, by generating a response and being given corrective feedback, by reading the word and its definition or translation, or by selecting from a choice of definitions or translations followed by feedback. In a final test of all words, generating errors followed by feedback led to significantly better memory for the correct definition or translation than either reading or making incorrect choices, suggesting that the benefits of generation are not restricted to correctly generated items. Even when information to be learned is novel, errorful generation may play a powerful role in potentiating encoding of corrective feedback. Experiments 2A, 2B, and 3 revealed, via metacognitive judgments of learning, that participants are strikingly unaware of this benefit, judging errorful generation to be a less effective encoding method than reading or incorrect choosing, when in fact it was better. Predictions reflected participants' subjective experience during learning. If subjective difficulty leads to more effort at encoding, this could at least partly explain the errorful generation advantage.

  11. ”In real life you can't run at a dragon with your sword drawn”: Digital gamers' experience of their own well-being

    OpenAIRE

    Engström, Paula

    2016-01-01

    Paula Engström. ”In real life you can't run at a dragon with your sword drawn”: Digital gamers' experience of their own well-being. Diak Etelä, Helsinki, autumn 2016, 57 pp., 1 appendix. Diaconia University of Applied Sciences, Degree Programme in Social Services, Bachelor of Social Services. The topic of this thesis arose from the wish of the Finnish Association for Substance Abuse Prevention (Ehkäisevä päihdetyö EHYT ry) to obtain a summary of the results of the Gamers' Well-being Survey conducted in 2015. The aim of the thesis was to find out how young people who play digital games...

  12. Evaluation of software tools for automated identification of neuroanatomical structures in quantitative β-amyloid PET imaging to diagnose Alzheimer's disease

    Energy Technology Data Exchange (ETDEWEB)

    Tuszynski, Tobias; Luthardt, Julia; Butzke, Daniel; Tiepolt, Solveig; Seese, Anita; Barthel, Henryk [Leipzig University Medical Centre, Department of Nuclear Medicine, Leipzig (Germany)]; Rullmann, Michael; Hesse, Swen; Sabri, Osama [Leipzig University Medical Centre, Department of Nuclear Medicine, Leipzig (Germany); Leipzig University Medical Centre, Integrated Treatment and Research Centre (IFB) Adiposity Diseases, Leipzig (Germany)]; Gertz, Hermann-Josef [Leipzig University Medical Centre, Department of Psychiatry, Leipzig (Germany)]; Lobsien, Donald [Leipzig University Medical Centre, Department of Neuroradiology, Leipzig (Germany)]

    2016-06-15

    For regional quantification of nuclear brain imaging data, defining volumes of interest (VOIs) by hand is still the gold standard. As this procedure is time-consuming and operator-dependent, a variety of software tools for automated identification of neuroanatomical structures have been developed. Because the quality and performance of these tools in analyzing amyloid PET data have so far been poorly investigated, we compared in this project four algorithms for automated VOI definition (HERMES Brass, two PMOD approaches, and FreeSurfer) against the conventional method. We systematically analyzed florbetaben brain PET and MRI data of ten patients with probable Alzheimer's dementia (AD) and ten age-matched healthy controls (HCs) collected in a previous clinical study. VOIs were manually defined on the data as well as through the four automated workflows. Standardized uptake value ratios (SUVRs) with the cerebellar cortex as a reference region were obtained for each VOI. SUVR comparisons between ADs and HCs were carried out using Mann-Whitney U tests, and effect sizes (Cohen's d) were calculated. SUVRs of automatically generated VOIs were correlated with SUVRs of conventionally derived VOIs (Pearson's tests). The composite neocortex SUVRs obtained by manually defined VOIs were significantly higher for ADs vs. HCs (p=0.010, d=1.53). This was also the case for the four tested automated approaches, which achieved effect sizes of d=1.38 to d=1.62. SUVRs of automatically generated VOIs correlated significantly with those of the hand-drawn VOIs in a number of brain regions, with regional differences in the degree of these correlations. The best overall correlation was observed in the lateral temporal VOI for all tested software tools (r=0.82 to r=0.95, p<0.001). Automated VOI definition by the software tools tested has a great potential to substitute for the current standard procedure to manually define VOIs in β-amyloid PET data analysis. (orig.)
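The two key quantities in this abstract, the SUVR with a cerebellar reference and Cohen's d effect size, can be sketched as follows. The uptake values are hypothetical, not from the study.

```python
from statistics import mean, stdev

def suvr(voi_uptake, reference_uptake):
    """Standardized uptake value ratio: mean VOI uptake over reference mean."""
    return mean(voi_uptake) / mean(reference_uptake)

def cohens_d(a, b):
    """Effect size: mean difference divided by the pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_sd = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                 / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical composite neocortex SUVRs for AD patients and healthy controls.
ad = [1.80, 1.90, 1.70, 2.00, 1.85]
hc = [1.20, 1.30, 1.25, 1.15, 1.28]
d = cohens_d(ad, hc)

# Usage of suvr(): VOI uptake values normalized by cerebellar cortex uptake.
example_suvr = suvr(ad, [1.0] * 5)
```

A d above roughly 0.8 is conventionally a large effect; the study's values (d=1.38 to 1.62) indicate strong AD/HC separation for all workflows.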

  13. Definition of the limit of quantification in the presence of instrumental and non-instrumental errors. Comparison among various definitions applied to the calibration of zinc by inductively coupled plasma-mass spectrometry

    Science.gov (United States)

    Badocco, Denis; Lavagnini, Irma; Mondin, Andrea; Favaro, Gabriella; Pastore, Paolo

    2015-12-01

    A definition of the limit of quantification (LOQ) in the presence of instrumental and non-instrumental errors was proposed. It was theoretically defined by combining the two-component variance regression and LOQ schemas already present in the literature, and applied to the calibration of zinc by the ICP-MS technique. At low concentration levels, the two-component variance LOQ definition should always be used, above all when a clean room is not available. Three LOQ definitions were accounted for: one in the concentration domain and two in the signal domain. The LOQ computed in the concentration domain, proposed by Currie, was completed by adding the third-order terms in the Taylor expansion, because they are of the same order of magnitude as the second-order ones and so cannot be neglected. In this context, the error propagation was simplified by eliminating the correlation contributions through the use of independent random variables. Among the signal-domain definitions, particular attention was devoted to the recently proposed approach based on at least one significant digit in the measurement; the resulting LOQ values were very large, preventing quantitative analysis. It was found that the Currie schemas in the signal and concentration domains gave similar LOQ values, but the former formulation is to be preferred as more easily computable.
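A minimal numerical sketch of the two-component variance idea: the signal noise has a constant floor plus a proportional part, and a Currie-style LOQ can be located as the lowest concentration whose back-calculated relative standard deviation drops to 10%. All parameter values below are invented, and this simplified model omits the Taylor-expansion refinements discussed in the paper.

```python
import math

# Assumed two-component noise model for the signal: sd(y) = sqrt(s0^2 + (eta*y)^2).
sigma0 = 50.0     # counts: constant (instrumental + non-instrumental) noise floor
eta = 0.03        # 3% proportional noise component
slope = 1.0e4     # counts per (ng/mL): calibration slope, intercept taken as ~0

def rsd(conc):
    """Relative standard deviation of the back-calculated concentration."""
    signal = slope * conc
    return math.sqrt(sigma0 ** 2 + (eta * signal) ** 2) / signal

# Currie-style LOQ: lowest concentration with RSD <= 10%, found by bisection
# (rsd is monotonically decreasing toward the eta floor of 3%).
lo, hi = 1e-6, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if rsd(mid) > 0.10 else (lo, mid)
loq = hi
```

With these parameters the analytic solution is sigma0 / (slope * sqrt(0.01 - eta**2)), about 0.052 ng/mL; the search converges to the same value. Note the criterion is only satisfiable when eta is below the 10% target, which is why the proportional component dominates LOQ feasibility outside a clean room.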

  14. Quantitative Evaluation of Segmentation- and Atlas-Based Attenuation Correction for PET/MR on Pediatric Patients.

    Science.gov (United States)

    Bezrukov, Ilja; Schmidt, Holger; Gatidis, Sergios; Mantlik, Frédéric; Schäfer, Jürgen F; Schwenzer, Nina; Pichler, Bernd J

    2015-07-01

    Pediatric imaging is regarded as a key application for combined PET/MR imaging systems. Because existing MR-based attenuation-correction methods were not designed specifically for pediatric patients, we assessed the impact of 2 potentially influential factors: inter- and intrapatient variability of attenuation coefficients and anatomic variability. Furthermore, we evaluated the quantification accuracy of 3 methods for MR-based attenuation correction without (SEGbase) and with bone prediction using an adult and a pediatric atlas (SEGwBONEad and SEGwBONEpe, respectively) on PET data of pediatric patients. The variability of attenuation coefficients between and within pediatric (5-17 y, n = 17) and adult (27-66 y, n = 16) patient collectives was assessed on volumes of interest (VOIs) in CT datasets for different tissue types. Anatomic variability was assessed on SEGwBONEad/pe attenuation maps by computing mean differences to CT-based attenuation maps for regions of bone tissue, lungs, and soft tissue. PET quantification was evaluated on VOIs with physiologic uptake and on 80% isocontour VOIs with elevated uptake in the thorax and abdomen/pelvis. Inter- and intrapatient variability of the bias was assessed for each VOI group and method. Statistically significant differences in mean VOI Hounsfield unit values and linear attenuation coefficients between adult and pediatric collectives were found in the lungs and femur. The prediction of attenuation maps using the pediatric atlas showed a reduced error in bone tissue and better delineation of bone structure. Evaluation of PET quantification accuracy showed statistically significant mean errors in mean standardized uptake values of -14% ± 5% and -23% ± 6% in bone marrow and femur-adjacent VOIs with physiologic uptake for SEGbase, which could be reduced to 0% ± 4% and -1% ± 5% using SEGwBONEpe attenuation maps. Bias in soft-tissue VOIs was less than 5% for all methods. Lung VOIs showed high SDs in the range of 15% for ...

  15. Evaluation of elastix-based propagated align algorithm for VOI- and voxel-based analysis of longitudinal F-18-FDG PET/CT data from patients with non-small cell lung cancer (NSCLC)

    NARCIS (Netherlands)

    Kerner, Gerald S. M. A.; Fischer, Alexander; Koole, Michel J. B.; Pruim, Jan; Groen, Harry J. M.

    2015-01-01

    Background: Deformable image registration allows volume of interest (VOI)- and voxel-based analysis of longitudinal changes in fluorodeoxyglucose (FDG) tumor uptake in patients with non-small cell lung cancer (NSCLC). This study evaluates the performance of the elastix toolbox deformable image ...

  16. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

    In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition of generic run-time error types, design of methods of observing application software behavior during execution, and design of methods of evaluating run-time constraints. In the definition of error types it is attempted to cover all relevant aspects of the application software behavior. Methods of observation and constraint evaluation are designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design ...

  17. Gossypiboma of the Abdomen and Pelvis; A Recurring Error

    African Journals Online (AJOL)

    Moi Hospital, Voi, Kenya. Correspondence to: Dr. Gilbert Maranya, P.O Box 91066-80103 Mombasa, Kenya. Email: gilbertmaranya@gmail.com. CASE SERIES. Abstract. Introduction: Gossypiboma is a retained surgical sponge commonly in the abdomen and pelvis. Risk factors include emergency and prolonged surgery.

  18. State-independent error-disturbance trade-off for measurement operators

    International Nuclear Information System (INIS)

    Zhou, S.S.; Wu, Shengjun; Chau, H.F.

    2016-01-01

    In general, classical measurement statistics of a quantum measurement is disturbed by performing an additional incompatible quantum measurement beforehand. Using this observation, we introduce a state-independent definition of disturbance by relating it to the distinguishability problem between two classical statistical distributions – one resulting from a single quantum measurement and the other from a succession of two quantum measurements. Interestingly, we find an error-disturbance trade-off relation for any measurements in two-dimensional Hilbert space and for measurements with mutually unbiased bases in any finite-dimensional Hilbert space. This relation shows that error should be reduced to zero in order to minimize the sum of error and disturbance. We conjecture that a similar trade-off relation with a slightly relaxed definition of error can be generalized to any measurements in an arbitrary finite-dimensional Hilbert space.
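The distinguishability idea behind this definition can be illustrated with a standard measure of distance between classical distributions; the total variation distance below is one common choice, not necessarily the paper's exact metric, and the qubit numbers are a textbook example.

```python
def total_variation(p, q):
    """Total variation distance between two discrete probability distributions."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))

# Z-measurement statistics of the qubit state |0>: outcome probabilities (1, 0).
p_z_alone = (1.0, 0.0)

# Performing an incompatible X measurement first collapses |0> to |+> or |->
# with equal probability; the subsequent Z outcomes are then uniform.
p_z_after_x = (0.5, 0.5)

disturbance = total_variation(p_z_alone, p_z_after_x)
```

For a Z eigenstate the prior X measurement maximally disturbs the Z statistics (distance 0.5), whereas measuring X first on an X eigenstate leaves the Z statistics unchanged (distance 0), matching the intuition that disturbance is state- and measurement-pair dependent; the paper's contribution is to remove the state dependence from the definition.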

  19. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes ...

  20. Iterative optimization of quantum error correcting codes

    International Nuclear Information System (INIS)

    Reimpell, M.; Werner, R.F.

    2005-01-01

    We introduce a convergent iterative algorithm for finding the optimal coding and decoding operations for an arbitrary noisy quantum channel. This algorithm does not require any error syndrome to be corrected completely, and hence also finds codes outside the usual Knill-Laflamme definition of error correcting codes. The iteration is shown to improve the figure of merit 'channel fidelity' in every step

  1. Error quantification of osteometric data in forensic anthropology.

    Science.gov (United States)

    Langley, Natalie R; Meadows Jantz, Lee; McNulty, Shauna; Maijanen, Heli; Ousley, Stephen D; Jantz, Richard L

    2018-04-10

    This study evaluates the reliability of osteometric data commonly used in forensic case analyses, with specific reference to the measurements in Data Collection Procedures 2.0 (DCP 2.0). Four observers took a set of 99 measurements four times on a sample of 50 skeletons (each measurement was taken 200 times by each observer). Two-way mixed ANOVAs and repeated measures ANOVAs with pairwise comparisons were used to examine interobserver (between-subjects) and intraobserver (within-subjects) variability. Relative technical error of measurement (TEM) was calculated for measurements with significant ANOVA results to examine the error among a single observer repeating a measurement multiple times (e.g. repeatability or intraobserver error), as well as the variability between multiple observers (interobserver error). Two general trends emerged from these analyses: (1) maximum lengths and breadths have the lowest error across the board (TEMForensic Skeletal Material, 3rd edition. Each measurement was examined carefully to determine the likely source of the error (e.g. data input, instrumentation, observer's method, or measurement definition). For several measurements (e.g. anterior sacral breadth, distal epiphyseal breadth of the tibia) only one observer differed significantly from the remaining observers, indicating a likely problem with the measurement definition as interpreted by that observer; these definitions were clarified in DCP 2.0 to eliminate this confusion. Other measurements were taken from landmarks that are difficult to locate consistently (e.g. pubis length, ischium length); these measurements were omitted from DCP 2.0. This manual is available for free download online (https://fac.utk.edu/wp-content/uploads/2016/03/DCP20_webversion.pdf), along with an accompanying instructional video (https://www.youtube.com/watch?v=BtkLFl3vim4). Copyright © 2018 Elsevier B.V. All rights reserved.
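The relative TEM reported above is conventionally computed with the Dahlberg-style formula TEM = sqrt(sum(d^2) / 2N) over paired repeat measurements, expressed as a percentage of the grand mean. A sketch with hypothetical femur lengths (not the study's data):

```python
from statistics import mean

def relative_tem(trial1, trial2):
    """Relative technical error of measurement (%) for paired repeats."""
    n = len(trial1)
    tem = (sum((a - b) ** 2 for a, b in zip(trial1, trial2)) / (2 * n)) ** 0.5
    grand_mean = mean(trial1 + trial2)
    return 100 * tem / grand_mean

# Hypothetical femur maximum lengths (mm) measured twice by one observer:
# intraobserver (repeatability) error.
t1 = [452.0, 448.5, 461.0, 455.5]
t2 = [452.5, 448.0, 460.5, 456.0]
rtem = relative_tem(t1, t2)
```

The same function applied to two different observers' measurements gives interobserver error; large maximum lengths typically yield relative TEMs well under 1%, consistent with the study's finding that lengths and breadths are the most reliable measurements.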

  2. Medication errors of nurses and factors in refusal to report medication errors among nurses in a teaching medical center of iran in 2012.

    Science.gov (United States)

    Mostafaei, Davoud; Barati Marnani, Ahmad; Mosavi Esfahani, Haleh; Estebsari, Fatemeh; Shahzaidi, Shiva; Jamshidi, Ensiyeh; Aghamiri, Seyed Samad

    2014-10-01

    About one third of reported adverse medication outcomes are due to medication errors, resulting in one-fifth of hospital injuries. The aim of this study was to determine the formal and informal medication errors of nurses and the importance of factors in nurses' refusal to report medication errors. This cross-sectional study was carried out with the nursing staff of Shohada Tajrish Hospital, Tehran, Iran in 2012. The data were gathered through a questionnaire developed by the researchers. The questionnaire's face and content validity was confirmed by experts, and its reliability was measured using the test-retest method. The data were analyzed by descriptive statistics using SPSS. The most important factors in refusal to report medication errors were, respectively: lack of a medication error recording and reporting system in the hospital (3.3%), insignificance of error reporting to hospital authorities and lack of appropriate feedback (3.1%), and lack of a clear definition for a medication error (3%). Both formal and informal reporting of medication errors were found in this study. Factors pertaining to hospital management, as well as fear of the consequences of reporting, are two broad fields among the factors that make nurses not report their medication errors. In this regard, providing adequate education to nurses, boosting nurses' job security, management support, and revising related processes and definitions can help decrease medication errors and increase their reporting when they occur.

  3. THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS

    Directory of Open Access Journals (Sweden)

    Natalia Bakhova

    2011-03-01

    The most important practical questions concerning reliable estimation of finite element method errors are considered. Rules for defining the necessary calculation accuracy are developed. Methods and approaches are offered that allow obtaining the best final results at an economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.

  4. On the determinants of measurement error in time-driven costing

    NARCIS (Netherlands)

    Cardinaels, E.; Labro, E.

    2008-01-01

    Although time estimates are used extensively for costing purposes, they are prone to measurement error. In an experimental setting, we research how measurement error in time estimates varies with: (1) the level of aggregation in the definition of costing system activities (aggregated or ...

  5. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
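The simulation check described above can be sketched in a much-simplified form: generate "experimental" data as the theory plus Gaussian scatter of known spread, then recover the model error by maximum likelihood. For Gaussian residuals with zero mean, the ML estimate of the error is just the root-mean-square theory-experiment deviation; this toy omits the simultaneous parameter adjustment treated in the paper.

```python
import math
import random

random.seed(0)

# Simulated "experimental" data: a known theory plus Gaussian model error of
# known spread, mimicking the paper's random-number-generator verification.
sigma_true = 0.8
theory = [float(i) for i in range(200)]
experiment = [t + random.gauss(0.0, sigma_true) for t in theory]

# Maximum-likelihood estimate of the model error for Gaussian residuals:
# the RMS deviation between experiment and theory.
residuals = [e - t for e, t in zip(experiment, theory)]
sigma_ml = math.sqrt(sum(r * r for r in residuals) / len(residuals))
```

Recovering sigma_ml close to sigma_true confirms the estimator behaves as intended on data with a known answer, which is the point of the simulated check.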

  6. Application of value of information of tank waste characterization: A new paradigm for defining tank waste characterization requirements

    International Nuclear Information System (INIS)

    Fassbender, L.L.; Brewster, M.E.; Brothers, A.J.

    1996-11-01

    This report presents the rationale for adopting a recommended characterization strategy that uses a risk-based decision-making framework for managing the Tank Waste Characterization program at Hanford. The risk-management/value-of-information (VOI) strategy that is illustrated explicitly links each information-gathering activity to its cost and provides a mechanism to ensure that characterization funds are spent where they can produce the largest reduction in risk. The approach was developed by tailoring well-known decision analysis techniques to specific tank waste characterization applications. This report illustrates how VOI calculations are performed and demonstrates that the VOI approach can definitely be used for real Tank Waste Remediation System (TWRS) characterization problems
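VOI calculations of the kind mentioned above rest on the standard decision-analytic notion of expected value of perfect information (EVPI): the gain from choosing the best action per state rather than the best action on average. A minimal generic sketch, not the report's actual tank-waste model:

```python
def expected_value_of_perfect_information(probs, payoffs):
    """EVPI for a discrete decision problem.

    probs[s]      -- probability of state s
    payoffs[a][s] -- payoff of action a in state s

    EVPI = E[payoff of the best action per state]
         - payoff of the best action chosen by expectation alone."""
    best_by_expectation = max(
        sum(p * u for p, u in zip(probs, row)) for row in payoffs
    )
    expected_best = sum(
        p * max(row[s] for row in payoffs) for s, p in enumerate(probs)
    )
    return expected_best - best_by_expectation
```

The value of an imperfect characterization measurement is bounded above by this quantity, which is why EVPI is a useful screening number before costing any sampling campaign.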

  7. Application of value of information of tank waste characterization: A new paradigm for defining tank waste characterization requirements

    Energy Technology Data Exchange (ETDEWEB)

Fassbender, L.L.; Brewster, M.E.; Brothers, A.J. [and others]

    1996-11-01

    This report presents the rationale for adopting a recommended characterization strategy that uses a risk-based decision-making framework for managing the Tank Waste Characterization program at Hanford. The risk-management/value-of-information (VOI) strategy that is illustrated explicitly links each information-gathering activity to its cost and provides a mechanism to ensure that characterization funds are spent where they can produce the largest reduction in risk. The approach was developed by tailoring well-known decision analysis techniques to specific tank waste characterization applications. This report illustrates how VOI calculations are performed and demonstrates that the VOI approach can definitely be used for real Tank Waste Remediation System (TWRS) characterization problems.

  8. Putting a face on medical errors: a patient perspective.

    Science.gov (United States)

    Kooienga, Sarah; Stewart, Valerie T

    2011-01-01

    Knowledge of the patient's perspective on medical error is limited. Research efforts have centered on how best to disclose error and how patients desire to have medical error disclosed. On the basis of a qualitative descriptive component of a mixed method study, a purposive sample of 30 community members told their stories of medical error. Their experiences focused on lack of communication, missed communication, or provider's poor interpersonal style of communication, greatly contrasting with the formal definition of error as failure to follow a set standard of care. For these participants, being a patient was more important than error or how an error is disclosed. The patient's understanding of error must be a key aspect of any quality improvement strategy. © 2010 National Association for Healthcare Quality.

  9. Composite Gauss-Legendre Quadrature with Error Control

    Science.gov (United States)

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
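A composite Gauss-Legendre scheme of the kind described above can be sketched as follows. This generic two-point rule with interval doubling for error control is an illustration, not the authors' exact algorithm:

```python
import math

# Two-point Gauss-Legendre nodes and weights on [-1, 1];
# exact for polynomials up to degree 3.
_NODES = (-1.0 / math.sqrt(3.0), 1.0 / math.sqrt(3.0))
_WEIGHTS = (1.0, 1.0)

def composite_gl(f, a, b, n):
    """Composite two-point Gauss-Legendre rule with n equal subintervals."""
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        mid = a + (i + 0.5) * h
        total += sum(w * f(mid + 0.5 * h * x) for w, x in zip(_WEIGHTS, _NODES))
    return 0.5 * h * total

def integrate(f, a, b, tol=1e-8, n=2):
    """Double the panel count until successive estimates agree within tol."""
    prev = composite_gl(f, a, b, n)
    while True:
        n *= 2
        cur = composite_gl(f, a, b, n)
        if abs(cur - prev) < tol:
            return cur
        prev = cur
```

Compared with a Newton-Cotes rule of the same cost, the Gauss points buy roughly twice the polynomial degree of exactness per panel.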

  10. Error-diffusion binarization for joint transform correlators

    Science.gov (United States)

    Inbar, Hanni; Mendlovic, David; Marom, Emanuel

    1993-02-01

A normalized nonlinearly scaled binary joint transform image correlator (JTC) based on a 1D error-diffusion binarization method has been studied. The behavior of the error-diffusion method is compared with hard-clipping, the most widely used method of binarized JTC approaches, using a single spatial light modulator. Computer simulations indicate that the error-diffusion method is advantageous for the production of a binarized power spectrum interference pattern in JTC configurations, leading to better definition of the correlation location. The error-diffusion binary JTC exhibits autocorrelation characteristics which are superior to those of the hard-clipping binary JTC over the whole nonlinear scaling range of the Fourier-transform interference intensity for all noise levels considered.
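The 1D error diffusion at the heart of this approach propagates each quantization error to the next sample, preserving local averages that hard-clipping discards. A minimal sketch, illustrative only and not the paper's normalized scaling scheme:

```python
def error_diffusion_1d(signal, threshold=0.5):
    """Binarize a 1-D signal in [0, 1], pushing each quantization error
    onto the next sample so that local averages are preserved."""
    out = []
    err = 0.0
    for v in signal:
        adjusted = v + err          # carry the accumulated error forward
        bit = 1.0 if adjusted >= threshold else 0.0
        out.append(bit)
        err = adjusted - bit        # residual to diffuse into the next sample
    return out
```

On a constant mid-grey input this produces an alternating bit pattern whose mean matches the input, whereas hard-clipping would map every sample to the same bit.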

  11. Effects of computing parameters and measurement locations on the estimation of 3D NPS in non-stationary MDCT images.

    Science.gov (United States)

    Miéville, Frédéric A; Bolard, Gregory; Bulling, Shelley; Gudinchet, François; Bochud, François O; Verdun, François R

    2013-11-01

The goal of this study was to investigate the impact of computing parameters and the location of volumes of interest (VOI) on the calculation of the 3D noise power spectrum (NPS) in order to determine an optimal set of computing parameters and propose a robust method for evaluating the noise properties of imaging systems. Noise stationarity in noise volumes acquired with a water phantom on a 128-MDCT and a 320-MDCT scanner was analyzed in the spatial domain in order to define locally stationary VOIs. The influence of the computing parameters on the 3D NPS measurement (the sampling distances bx,y,z, the VOI lengths Lx,y,z, the number of VOIs NVOI, and the structured noise) was investigated to minimize measurement errors. The effect of the VOI locations on the NPS was also investigated. Results showed that the noise (standard deviation) varies more in the r-direction (phantom radius) than in the z-direction. A 25 × 25 × 40 mm(3) VOI associated with DFOV = 200 mm (Lx,y,z = 64, bx,y = 0.391 mm with a 512 × 512 matrix) and a first-order detrending method to reduce structured noise led to an accurate NPS estimation. NPS estimated from off-centered small VOIs had a directional dependency, contrary to NPS obtained from large VOIs located in the center of the volume or from small VOIs located on a concentric circle. This showed that the VOI size and location play a major role in the determination of the NPS when images are not stationary. This study emphasizes the need for consistent measurement methods to assess and compare image quality in CT. Copyright © 2012 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
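The NPS estimation described above generalizes the familiar 1D recipe: detrend each VOI, take its discrete Fourier transform, and average the squared magnitudes with the sampling-distance normalization. A 1D sketch where simple mean subtraction stands in for the paper's first-order detrending; `nps_1d` is an illustrative name:

```python
import cmath

def nps_1d(rois, spacing):
    """Ensemble-averaged 1-D noise power spectrum:
    NPS(k) = (spacing / L) * <|DFT(roi - mean(roi))|^2>, averaged over ROIs.
    `spacing` is the sample spacing; mean subtraction is a stand-in for a
    proper detrending step."""
    L = len(rois[0])
    acc = [0.0] * L
    for roi in rois:
        mean = sum(roi) / L
        d = [v - mean for v in roi]
        for k in range(L):
            # Naive DFT; fine for small illustrative arrays.
            X = sum(d[n] * cmath.exp(-2j * cmath.pi * k * n / L)
                    for n in range(L))
            acc[k] += abs(X) ** 2
    return [spacing / L * a / len(rois) for a in acc]
```

The 3D case replaces the 1D DFT with a 3D one and the normalization factor with bx·by·bz/(Lx·Ly·Lz); averaging over many VOIs reduces the variance of the estimate.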

  12. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  13. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA(®) terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed-up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  14. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    Science.gov (United States)

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programe in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  15. Measuring Articulatory Error Consistency in Children with Developmental Apraxia of Speech

    Science.gov (United States)

    Betz, Stacy K.; Stoel-Gammon, Carol

    2005-01-01

    Error inconsistency is often cited as a characteristic of children with speech disorders, particularly developmental apraxia of speech (DAS); however, few researchers operationally define error inconsistency and the definitions that do exist are not standardized across studies. This study proposes three formulas for measuring various aspects of…

  16. Medication errors: an overview for clinicians.

    Science.gov (United States)

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  17. Errors in imaging patients in the emergency setting.

    Science.gov (United States)

    Pinto, Antonio; Reginelli, Alfonso; Pinto, Fabio; Lo Re, Giuseppe; Midiri, Federico; Muzj, Carlo; Romano, Luigia; Brunese, Luca

    2016-01-01

    Emergency and trauma care produces a "perfect storm" for radiological errors: uncooperative patients, inadequate histories, time-critical decisions, concurrent tasks and often junior personnel working after hours in busy emergency departments. The main cause of diagnostic errors in the emergency department is the failure to correctly interpret radiographs, and the majority of diagnoses missed on radiographs are fractures. Missed diagnoses potentially have important consequences for patients, clinicians and radiologists. Radiologists play a pivotal role in the diagnostic assessment of polytrauma patients and of patients with non-traumatic craniothoracoabdominal emergencies, and key elements to reduce errors in the emergency setting are knowledge, experience and the correct application of imaging protocols. This article aims to highlight the definition and classification of errors in radiology, the causes of errors in emergency radiology and the spectrum of diagnostic errors in radiography, ultrasonography and CT in the emergency setting.

  18. Methods of Run-Time Error Detection in Distributed Process Control Software

    DEFF Research Database (Denmark)

    Drejer, N.

In this thesis, methods of run-time error detection in application software for distributed process control are designed. The error detection is based upon a monitoring approach in which application software is monitored by system software during the entire execution. The thesis includes definition...... and constraint evaluation is designed for the most interesting error types. These include: a) semantical errors in data communicated between application tasks; b) errors in the execution of application tasks; and c) errors in the timing of distributed events emitted by the application software. The design...... of error detection methods includes a high-level software specification. This has the purpose of illustrating that the design can be used in practice....

  19. VoiLA: A multidisciplinary study of Volatile recycling in the Lesser Antilles Arc

    Science.gov (United States)

    Collier, J.; Blundy, J. D.; Goes, S. D. B.; Henstock, T.; Harmon, N.; Kendall, J. M.; Macpherson, C.; Rietbrock, A.; Rychert, C.; Van Hunen, J.; Wilkinson, J.; Wilson, M.

    2017-12-01

Project VoiLA will address the role of volatiles in controlling geological processes at subduction zones. The study area was chosen because it subducts oceanic lithosphere formed at the slow-spreading Mid-Atlantic Ridge, which should result in a different level and pattern of hydration from subduction zones in the Pacific, which consume oceanic lithosphere generated at faster spreading rates. In five project components, we will test (1) where volatiles are held within the incoming plate; (2) where they are transported and released below the arc; (3) how the volatile distribution and pathways relate to the construction of the arc; and (4) their relationship to seismic and volcanic hazards and the fractionation of economic metals. Finally, (5) the behaviour of the Lesser Antilles arc will be compared with that of other well-studied systems to improve our wider understanding of the role of water in subduction processes. To address these questions the project will combine seismology, petrology, and numerical modelling of wedge dynamics and its consequences for dehydration and melting. So far, island-based fieldwork has included mantle xenolith collection and installation of a temporary seismometer network. In 2016 and 2017 we conducted cruises onboard the RRS James Cook that collected passive- and active-source ocean-bottom seismometer data across the back-arc, fore-arc and incoming-plate region. A total of 175 deployments and recoveries were made with the loss of only 6 stations. The presentation will cover preliminary results from the project.

  20. Target definition in prostate, head, and neck

    NARCIS (Netherlands)

    Rasch, Coen; Steenbakkers, Roel; van Herk, Marcel

    2005-01-01

    Target definition is a major source of errors in both prostate and head and neck external-beam radiation treatment. Delineation errors remain constant during the course of radiation and therefore have a large impact on the dose to the tumor. Major sources of delineation variation are visibility of

  1. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

The burst error is generated in digital communication networks by various unpredictable conditions, which occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know its bit pattern, which is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumption, performance evaluation and estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of burst error is introduced using three different models; among them, the mathematical model is used in this study. A probability density function f(b) for a burst error of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) in a computer simulation model of burst errors within a CRC block. The simulation results show that the mean block burst error tends to approach the pattern of burst error generated by random bit errors.
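The burst-detection property exploited by CRC-n codes is that a degree-n generator polynomial with a nonzero constant term detects every burst of length at most n, wherever it falls in the block. A minimal long-division sketch with illustrative function names, not the study's simulation code:

```python
def crc_remainder(bits, poly):
    """CRC remainder by polynomial long division over GF(2).
    `bits` is a list of 0/1 message bits; `poly` holds the generator
    coefficients, highest degree first (e.g. [1, 0, 1, 1] = x^3 + x + 1)."""
    n = len(poly) - 1
    reg = bits + [0] * n            # append n zero check positions
    for i in range(len(bits)):
        if reg[i]:
            for j, p in enumerate(poly):
                reg[i + j] ^= p
    return reg[-n:]                 # the n check bits

def detects(bits, poly, burst_start, burst):
    """True if XOR-ing the burst pattern into the message changes the CRC."""
    corrupted = bits[:]
    for k, b in enumerate(burst):
        corrupted[burst_start + k] ^= b
    return crc_remainder(corrupted, poly) != crc_remainder(bits, poly)
```

Because the generator here (x^3 + x + 1) has a nonzero constant term, any burst of length 1 to 3 yields a nonzero syndrome and is therefore detected.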

  2. Definition of a matrix of the generalized parameters asymmetrical multiphase transmission lines

    Directory of Open Access Journals (Sweden)

    Suslov V.M.

    2005-12-01

Full Text Available A simple algorithm, requiring no introduction of wave characteristics, for determining the matrix of generalized parameters of asymmetrical multiphase transmission lines is offered. The determination of the parameter matrix is based on the matrix of primary specific parameters of the line and a simple iterative procedure. The number of iterations is set by the required accuracy of the matrix ratios between the separate blocks of the determined matrix; this accuracy is closely related to the error of the determined matrix.

  3. Defining near misses : towards a sharpened definition based on empirical data about error handling processes

    NARCIS (Netherlands)

    Kessels-Habraken, M.M.P.; Schaaf, van der T.W.; Jonge, de J.; Rutte, C.G.

    2010-01-01

    Medical errors in health care still occur frequently. Unfortunately, errors cannot be completely prevented and 100% safety can never be achieved. Therefore, in addition to error reduction strategies, health care organisations could also implement strategies that promote timely error detection and

  4. Human Errors - A Taxonomy for Describing Human Malfunction in Industrial Installations

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1982-01-01

    This paper describes the definition and the characteristics of human errors. Different types of human behavior are classified, and their relation to different error mechanisms are analyzed. The effect of conditioning factors related to affective, motivating aspects of the work situation as well...... as physiological factors are also taken into consideration. The taxonomy for event analysis, including human malfunction, is presented. Possibilities for the prediction of human error are discussed. The need for careful studies in actual work situations is expressed. Such studies could provide a better...

  5. Effects of structural error on the estimates of parameters of dynamical systems

    Science.gov (United States)

    Hadaegh, F. Y.; Bekey, G. A.

    1986-01-01

    In this paper, the notion of 'near-equivalence in probability' is introduced for identifying a system in the presence of several error sources. Following some basic definitions, necessary and sufficient conditions for the identifiability of parameters are given. The effects of structural error on the parameter estimates for both the deterministic and stochastic cases are considered.

  6. The role of error in organizing behaviour

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    2003-01-01

    information technology. Consequently, the topic of the present contribution is not a definition of the concept or a proper taxonomy. Instead, a review is given of two professional contexts for which the concept of error is important. Three cases of analysis of human-system interaction are reviewed: (1...... be cognitive control of behaviour in complex environments....

  7. The role of error in organizing behaviour

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    information technology. Consequently, the topic of the present contribution is not a definition of the concept or a proper taxonomy. Instead, a review is given of two professional contexts for which the concept of error is important. Three cases of analysis of human-system interaction are reviewed: (1...... of study should be cognitive control of behaviour in complex environments....

  8. Minimum-error discrimination of entangled quantum states

    International Nuclear Information System (INIS)

    Lu, Y.; Coish, N.; Kaltenbaek, R.; Hamel, D. R.; Resch, K. J.; Croke, S.

    2010-01-01

Strategies to optimally discriminate between quantum states are critical in quantum technologies. We present an experimental demonstration of minimum-error discrimination between entangled states, encoded in the polarization of pairs of photons. Although the optimal measurement involves projection onto entangled states, we use a result of J. Walgate et al. [Phys. Rev. Lett. 85, 4972 (2000)] to design an optical implementation employing only local polarization measurements and feed-forward, which performs at the Helstrom bound. Our scheme can achieve perfect discrimination of orthogonal states and minimum-error discrimination of nonorthogonal states. Our experimental results show a definite advantage over schemes not using feed-forward.
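The Helstrom bound mentioned above has a closed form for two pure states. A small sketch, assuming states with overlap |⟨ψ|φ⟩| and prior probability p for the first state (the authors' experimental analysis is not reproduced here):

```python
import math

def helstrom_error(overlap, p=0.5):
    """Minimum error probability for discriminating two pure quantum states
    with |<psi|phi>| = overlap and prior p for the first state:
    P_err = (1 - sqrt(1 - 4*p*(1-p)*overlap^2)) / 2."""
    return 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * p * (1.0 - p) * overlap ** 2))
```

Orthogonal states (overlap 0) give zero error, while identical states (overlap 1, equal priors) give 1/2, i.e. no better than guessing.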

  9. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  10. Theoretical explanations and practices regarding the distinction between the concepts: judicial error, error of law and fundamental vice in the legislation of the Republic of Moldova

    Directory of Open Access Journals (Sweden)

    Vasilisa Muntean

    2017-10-01

Full Text Available In this research, a doctrinal and legal analysis of the concept of legal error is carried out. The author offers her own definition of the concept and highlights the main causes and conditions under which judicial errors occur. At present, the specialized legal doctrine of the Republic of Moldova has given little attention to the problem of defining judicial error. This article is therefore a scientific effort aimed at elucidating the theoretical and normative deficiencies that arise in the area of reparation of the prejudice caused by judicial errors. To achieve this goal, we aim to create a core of ideas and referral mechanisms that ensure a degree of interpretative and decisional homogeneity in the doctrinal and legal characterization of the phrase "judicial error".

  11. Errors in chest x-ray interpretation

    International Nuclear Information System (INIS)

    Woznitza, N.; Piper, K.

    2015-01-01

Full text: Reporting of adult chest x-rays by appropriately trained radiographers is frequently used in the United Kingdom as one method to maintain a patient-focused radiology service in times of increasing workload. With models of advanced practice being developed in Australia, New Zealand and Canada, the spotlight is on the evidence base which underpins radiographer reporting. It is essential that any radiographer who extends their scope of practice to incorporate definitive clinical reporting perform at a level comparable to a consultant radiologist. In any analysis of performance it is important to quantify levels of sensitivity and specificity and to evaluate areas of error and variation. A critical review of the errors made by reporting radiographers in the interpretation of adult chest x-rays will be performed, examining performance in structured clinical examinations, clinical audit and a diagnostic accuracy study from research undertaken by the authors, and including studies which have compared the performance of reporting radiographers and consultant radiologists. Overall performance will be examined and common errors discussed using a case-based approach. Methods of error reduction, including multidisciplinary team meetings and ongoing learning, will be considered.

  12. Evaluating the Appropriateness and Use of Domain Critical Errors

    Directory of Open Access Journals (Sweden)

    Chad W. Buckendahl

    2012-10-01

Full Text Available The consequences associated with the uses and interpretations of scores for many credentialing testing programs have important implications for a range of stakeholders. Within licensure settings specifically, results from examination programs are often one of the final steps in the process of assessing whether individuals will be allowed to enter practice. This article focuses on the concept of domain critical errors and suggests a framework for considering their use in practice. Domain critical errors are defined here as knowledge, skills, abilities, or judgments that are essential to the definition of minimum qualifications in a testing program's pass/fail decision-making process. Using domain critical errors has psychometric and policy implications, particularly for licensure programs that are mandatory for entry-level practice. Because these errors greatly influence pass/fail decisions, the measurement community faces an ongoing challenge to promote defensible practices while concurrently providing assessment literacy development about the appropriate design and use of testing methods like domain critical errors.

  13. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

The scientific article is focused on research into the notion of medical error; the medical and legal aspects of this notion have been considered. The necessity of legislative consolidation of the notion of «medical error» and of criteria for its legal estimation has been grounded. In writing the article we used the empirical method together with general scientific and comparative legal methods. The concept of medical error in its civil and legal aspects was compared from the points of view of Ukrainian, European and American scientists. It is noted that the problem of medical errors has been known since ancient times, and throughout the world, regardless of the level of development of medicine, there is no country where doctors never make errors. According to the statistics, medical errors are among the top five causes of death worldwide. At the same time, the provision of medical services concerns practically all people. Since human life and health are acknowledged in Ukraine as the highest social values, medical services must be high-quality and effective. The provision of poor-quality medical services causes harm to the health, and sometimes the lives, of people; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must receive legal consolidation. Besides, the legal estimation of medical errors must be based on single principles enshrined in the legislation and confirmed by judicial practice.

  14. Comparison of manual and automated quantification methods of 123I-ADAM

    International Nuclear Information System (INIS)

    Kauppinen, T.; Keski-Rahkonen, A.; Sihvola, E.; Helsinki Univ. Central Hospital

    2005-01-01

123I-ADAM is a novel radioligand for imaging of the brain serotonin transporters (SERTs). Traditionally, the analysis of brain receptor studies has been based on observer-dependent manual region of interest definitions and visual interpretation. Our aim was to create a template for automated image registrations and volume of interest (VOI) quantification and to show that an automated quantification method of 123I-ADAM is more repeatable than the manual method. Patients, methods: A template and a predefined VOI map was created from 123I-ADAM scans done for healthy volunteers (n=15). Scans of another group of healthy persons (HS, n=12) and patients with bulimia nervosa (BN, n=10) were automatically fitted to the template and specific binding ratios (SBRs) were calculated by using the VOI map. Manual VOI definitions were done for the HS and BN groups by both one and two observers. The repeatability of the automated method was evaluated by using the BN group. Results: For the manual method, the interobserver coefficient of repeatability was 0.61 for the HS group and 1.00 for the BN group. The intra-observer coefficient of repeatability for the BN group was 0.70. For the automated method, the coefficient of repeatability was 0.13 for SBRs in midbrain. Conclusion: An automated quantification gives valuable information in addition to visual interpretation decreasing also the total image handling time and giving clear advantages for research work. An automated method for analysing 123I-ADAM binding to the brain SERT gives repeatable results for fitting the studies to the template and for calculating SBRs, and could therefore replace manual methods. (orig.)
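The coefficient of repeatability used above is conventionally computed Bland-Altman style as 1.96 times the standard deviation of the paired differences between two measurement sessions; the abstract does not state which convention was used, so this sketch assumes that definition:

```python
import math

def coefficient_of_repeatability(first, second):
    """Bland-Altman coefficient of repeatability: 1.96 times the sample
    standard deviation of the paired differences between two sessions.
    Assumes this common convention; the paper may differ."""
    diffs = [a - b for a, b in zip(first, second)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return 1.96 * sd
```

A smaller coefficient means two repeated measurements of the same subject are expected to lie closer together, which is why the automated method's 0.13 compares favourably with the manual 0.61-1.00.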

  15. Comparison of manual and automated quantification methods of {sup 123}I-ADAM

    Energy Technology Data Exchange (ETDEWEB)

    Kauppinen, T. [Helsinki Univ. Central Hospital (Finland). HUS Helsinki Medical Imaging Center; Helsinki Univ. Central Hospital (Finland). Division of Nuclear Medicine; Koskela, A.; Ahonen, A. [Helsinki Univ. Central Hospital (Finland). Division of Nuclear Medicine; Diemling, M. [Hermes Medical Solutions, Stockholm (Sweden); Keski-Rahkonen, A.; Sihvola, E. [Helsinki Univ. (Finland). Dept. of Public Health; Helsinki Univ. Central Hospital (Finland). Dept. of Psychiatry

    2005-07-01

    {sup 123}I-ADAM is a novel radioligand for imaging of the brain serotonin transporters (SERTs). Traditionally, the analysis of brain receptor studies has been based on observer-dependent manual region-of-interest definitions and visual interpretation. Our aim was to create a template for automated image registration and volume of interest (VOI) quantification, and to show that an automated quantification method for {sup 123}I-ADAM is more repeatable than the manual method. Patients, methods: A template and a predefined VOI map were created from {sup 123}I-ADAM scans of healthy volunteers (n=15). Scans of another group of healthy subjects (HS, n=12) and of patients with bulimia nervosa (BN, n=10) were automatically fitted to the template, and specific binding ratios (SBRs) were calculated using the VOI map. Manual VOI definitions were done for the HS and BN groups by one and by two observers. The repeatability of the automated method was evaluated using the BN group. Results: For the manual method, the interobserver coefficient of repeatability was 0.61 for the HS group and 1.00 for the BN group. The intra-observer coefficient of repeatability for the BN group was 0.70. For the automated method, the coefficient of repeatability was 0.13 for SBRs in the midbrain. Conclusion: Automated quantification adds valuable information to visual interpretation, decreases total image-handling time and offers clear advantages for research work. An automated method for analysing {sup 123}I-ADAM binding to the brain SERT gives repeatable results for fitting studies to the template and for calculating SBRs, and could therefore replace manual methods. (orig.)
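
    The coefficients of repeatability quoted above can be computed from paired measurements. A minimal sketch of one common (Bland-Altman-style) definition, 1.96 times the standard deviation of the pairwise differences; the SBR values below are illustrative, not from the study:

```python
import math

def coefficient_of_repeatability(x, y):
    """Bland-Altman-style coefficient of repeatability for paired
    measurements: 1.96 times the standard deviation of the pairwise
    differences. About 95% of repeated differences are expected to
    fall within this value."""
    diffs = [a - b for a, b in zip(x, y)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    sd = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
    return 1.96 * sd

# Hypothetical midbrain SBR values from two observers, same subjects
obs1 = [1.92, 2.10, 1.75, 2.30, 2.05]
obs2 = [1.85, 2.25, 1.70, 2.15, 2.20]
print(round(coefficient_of_repeatability(obs1, obs2), 3))  # 0.268
```

    Lower values mean better agreement, which is how the automated method's 0.13 compares favourably against the manual 0.61-1.00 above.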

  16. Medication administration error reporting and associated factors among nurses working at the University of Gondar referral hospital, Northwest Ethiopia, 2015.

    Science.gov (United States)

    Bifftu, Berhanu Boru; Dachew, Berihun Assefa; Tiruneh, Bewket Tadesse; Beshah, Debrework Tesgera

    2016-01-01

    Medication administration is the final phase of the medication process, in which an error directly affects patient health. Because of the central role of nurses in medication administration, whether they are the source of an error, a contributor or an observer, they have the professional, legal and ethical responsibility to recognize and report it. The aim of this study was to assess the prevalence of medication administration error reporting and associated factors among nurses working at the University of Gondar Referral Hospital, Northwest Ethiopia. An institution-based quantitative cross-sectional study was conducted among 282 nurses. Data were collected using the semi-structured, self-administered Medication Administration Errors Reporting (MAERs) questionnaire. Binary logistic regression with 95% confidence intervals was used to identify factors associated with medication administration error reporting. The estimated rate of medication administration error reporting was 29.1%. The perceived rates of medication administration error reporting ranged from 16.8% to 28.6% for non-intravenous medications and from 20.6% to 33.4% for intravenous medications. Educational status (AOR = 1.38, 95% CI: 4.009, 11.128), disagreement over the time-error definition (AOR = 0.44, 95% CI: 0.468, 0.990), administrative reasons (AOR = 0.35, 95% CI: 0.168, 0.710) and fear (AOR = 0.39, 95% CI: 0.257, 0.838) were statistically significantly associated with the refusal to report medication administration errors. The findings suggest the need for a clear definition of reportable errors and for strengthening the educational status of nurses within the health care organization.
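
    The adjusted odds ratios (AORs) and confidence intervals above come from binary logistic regression. A minimal sketch of how a fitted coefficient and its standard error translate into an odds ratio with a 95% CI; the coefficient values below are hypothetical, not the study's estimates:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic-regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval:
    OR = exp(beta), CI = exp(beta +/- z*se)."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for a 'fear of consequences' predictor
or_, lo, hi = odds_ratio_ci(beta=-0.94, se=0.30)
print(f"AOR={or_:.2f}, 95% CI: {lo:.3f}-{hi:.3f}")
# AOR=0.39, 95% CI: 0.217-0.703
```

    An AOR below 1 with a CI excluding 1, as for fear above, indicates the factor is associated with lower odds of reporting.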

  17. First-order error budgeting for LUVOIR mission

    Science.gov (United States)

    Lightsey, Paul A.; Knight, J. Scott; Feinberg, Lee D.; Bolcar, Matthew R.; Shaklan, Stuart B.

    2017-09-01

    Future large astronomical telescopes in space will have architectures with complex and demanding requirements to meet their science goals. The Large UV/Optical/IR Surveyor (LUVOIR) mission concept being assessed by the NASA/Goddard Space Flight Center is expected to be 9 to 15 meters in diameter, to have a segmented primary mirror, and to be diffraction limited at a wavelength of 500 nanometers. The optical stability is expected to be in the picometer range over minutes to hours. Architecture studies to support the NASA Science and Technology Definition Teams (STDTs) are underway to evaluate system performance improvements to meet the science goals. To help define the technology needs and assess performance, a first-order error budget has been developed. Like the JWST error budget, it includes the active, adaptive and passive elements in the spatial and temporal domains. JWST performance is scaled using first-order approximations where appropriate, including technical advances in telescope control.

  18. A definitional framework for the human/biometric sensor interaction model

    Science.gov (United States)

    Elliott, Stephen J.; Kukula, Eric P.

    2010-04-01

    Existing definitions for biometric testing and evaluation do not fully explain errors in a biometric system. This paper provides a definitional framework for the Human-Biometric Sensor Interaction (HBSI) model, proposing six new definitions based on two classifications of presentations, erroneous and correct. The new terms are: defective interaction (DI), concealed interaction (CI), false interaction (FI), failure to detect (FTD), failure to extract (FTX), and successfully acquired samples (SAS). As with all definitions, the new terms require a modification to the general biometric model developed by Mansfield and Wayman [1].

  19. SU-E-J-114: A Practical Hybrid Method for Improving the Quality of CT-CBCT Deformable Image Registration for Head and Neck Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Liu, C; Kumarasiri, A; Chetvertkov, M; Gordon, J; Chetty, I; Siddiqui, F; Kim, J [Henry Ford Health System, Detroit, MI (United States)]

    2015-06-15

    Purpose: Accurate deformable image registration (DIR) between CT and CBCT in H&N is challenging. In this study, we propose a practical hybrid method that uses not only the pixel intensities but also organ physical properties, a structure volume of interest (VOI), and interactive local registrations. Methods: Five oropharyngeal cancer patients were selected retrospectively. For each patient, the planning CT was registered to the last-fraction CBCT, where the anatomical difference was largest. A three-step registration strategy was tested: Step 1) DIR using pixel intensity only; Step 2) DIR with additional use of a structure VOI and a rigidity penalty; and Step 3) interactive local correction. For Step 1, a public-domain open-source DIR algorithm was used (cubic B-spline, mutual information, steepest-gradient optimization, and 4-level multi-resolution). For Step 2, the rigidity penalty was applied to bony anatomy and the brain, and a structure VOI was used to handle body truncation such as the shoulder cut-off on CBCT. Finally, in Step 3, the registrations were reviewed in our in-house developed software and the erroneous areas were corrected via a local registration using the level-set motion algorithm. Results: After Step 1, there was a considerable amount of registration error in soft tissues and unrealistic stretching posterior to the neck and near the shoulder due to body truncation. The brain was also found to be deformed to a measurable extent near the superior border of the CBCT. Such errors could be effectively removed by using a structure VOI and a rigidity penalty. The remaining local soft-tissue errors could be corrected using the interactive software tool; the estimated interactive correction time was approximately 5 minutes. Conclusion: DIR using only the image pixel intensity was vulnerable to noise and body truncation. A corrective action was needed to achieve good-quality registrations. We found the proposed three-step hybrid method efficient and practical for CT-CBCT DIR in H&N radiotherapy.
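
    Step 1 of the strategy above relies on mutual information as the intensity-based similarity metric. A toy sketch of mutual information computed from a joint intensity histogram (flattened images as plain lists; real DIR implementations work on 3D volumes with B-spline transforms and optimizers, which are omitted here):

```python
import math
from collections import Counter

def mutual_information(img_a, img_b, bins=8):
    """Mutual information between two equal-size images given as
    flattened intensity lists. Higher MI means the intensities of
    one image better predict the other, i.e. better alignment."""
    lo_a, hi_a = min(img_a), max(img_a)
    lo_b, hi_b = min(img_b), max(img_b)

    def bin_of(v, lo, hi):
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)

    pairs = [(bin_of(a, lo_a, hi_a), bin_of(b, lo_b, hi_b))
             for a, b in zip(img_a, img_b)]
    n = len(pairs)
    joint = Counter(pairs)            # joint histogram p(i, j)
    pa = Counter(p[0] for p in pairs)  # marginal p(i)
    pb = Counter(p[1] for p in pairs)  # marginal p(j)
    mi = 0.0
    for (i, j), c in joint.items():
        pij = c / n
        mi += pij * math.log(pij * n * n / (pa[i] * pb[j]))
    return mi

a = list(range(8)) * 4
print(round(mutual_information(a, a), 4))        # maximal: ln(8) = 2.0794
print(round(mutual_information(a, [5] * 32), 4))  # uninformative: 0.0
```

    An optimizer deforms the moving image to maximize this metric, which is exactly where the noise and truncation sensitivity described above enters.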

  20. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  1. THE SELF-CORRECTION OF ENGLISH SPEECH ERRORS IN SECOND LANGUAGE LEARNING

    Directory of Open Access Journals (Sweden)

    Ketut Santi Indriani

    2015-05-01

    Full Text Available The process of second language (L2) learning is strongly influenced by the error reconstruction that occurs while the language is being learned. Errors will inevitably appear in the learning process; however, they can be used as a step to accelerate the process of understanding the language. Self-correction (with or without cues) is one example. In speaking, self-correction is done immediately after the error appears. This study is aimed at finding (i) what speech errors the L2 speakers are able to identify, (ii) of the errors identified, what speech errors the L2 speakers are able to self-correct and (iii) whether the self-correction of speech errors is able to immediately improve L2 learning. Based on the data analysis, it was found that the majority of identified errors are related to nouns (plurality), subject-verb agreement, grammatical structure and pronunciation. L2 speakers tend to correct errors properly. Of the 78% of speech errors identified, as much as 66% could be self-corrected accurately by the L2 speakers. It was also found that self-correction is able to improve L2 learning ability directly, as evidenced by the absence of repetition of the same error after it had been corrected.

  2. The ethics and practical importance of defining, distinguishing and disclosing nursing errors: a discussion paper.

    Science.gov (United States)

    Johnstone, Megan-Jane; Kanitsaki, Olga

    2006-03-01

    Nurses globally are required and expected to report nursing errors. As is clearly demonstrated in the international literature, fulfilling this requirement is not, however, without risks. In this discussion paper, the notion of 'nursing error', the practical and moral importance of defining, distinguishing and disclosing nursing errors and how a distinct definition of 'nursing error' fits with the new 'system approach' to human-error management in health care are critiqued. Drawing on international literature and two key case exemplars from the USA and Australia, arguments are advanced to support the view that although it is 'right' for nurses to report nursing errors, it will be very difficult for them to do so unless a non-punitive approach to nursing-error management is adopted.

  3. Chinese Translation Errors in English/Chinese Bilingual Children's Picture Books

    Science.gov (United States)

    Huang, Qiaoya; Chen, Xiaoning

    2012-01-01

    The aim of this study was to review the Chinese translation errors in 31 English/Chinese bilingual children's picture books. While bilingual children's books make definite contributions to language acquisition, few studies have examined the quality of these books, and even fewer have specifically focused on English/Chinese bilingual books.…

  4. Wavefront error sensing for LDR

    Science.gov (United States)

    Tubbs, Eldred F.; Glavich, T. A.

    1988-01-01

    Wavefront sensing is a significant aspect of the LDR control problem and requires attention at an early stage of the control system definition and design. A combination of a Hartmann test for wavefront slope measurement and an interference test for piston errors of the segments was examined and is presented as a point of departure for further discussion. The assumption is made that the wavefront sensor will be used for initial alignment and periodic alignment checks but that it will not be used during scientific observations. The Hartmann test and the interferometric test are briefly examined.

  5. A precise error bound for quantum phase estimation.

    Directory of Open Access Journals (Sweden)

    James M Chappell

    Full Text Available Quantum phase estimation is one of the key algorithms in the field of quantum computing, but until now only approximate expressions have been derived for the probability of error. We revisit these derivations and find that, by ensuring symmetry in the error definitions, an exact formula can be found. This new approach may also have value in solving other related problems in quantum computing where an expected error is calculated. Expressions for two special cases of the formula are also developed: in the limit as the number of qubits in the quantum computer approaches infinity, and in the limit as the number of extra qubits added to improve reliability goes to infinity. The formula is useful for validating computer simulations of the phase estimation procedure and for avoiding overestimation of the number of qubits required to achieve a given reliability. It thus brings improved precision to the design of quantum computers.
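
    The qubit-count overestimation mentioned above refers to the standard approximate bound found in textbook treatments of phase estimation: t = n + ceil(log2(2 + 1/(2*epsilon))) counting qubits for n bits of precision with failure probability below epsilon. A sketch of that classic bound (the exact formula derived in the paper would tighten it):

```python
import math

def qpe_register_size(n_bits, epsilon):
    """Counting qubits needed for quantum phase estimation to get
    n_bits of precision with failure probability below epsilon,
    per the standard approximate bound:
    t = n + ceil(log2(2 + 1/(2*epsilon)))."""
    return n_bits + math.ceil(math.log2(2 + 1 / (2 * epsilon)))

print(qpe_register_size(8, 0.1))   # 8 + ceil(log2(7)) = 11
```

    Because this bound is approximate, designs based on it can reserve more qubits than strictly necessary, which is the overestimation the exact formula avoids.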

  6. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors (i.e., errors that were noticed but misclassified) from fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing (a prerequisite of error classification in our paradigm) leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between the Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN, but not the degree of error awareness, determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  7. Perceptual error and the culture of open disclosure in Australian radiology.

    Science.gov (United States)

    Pitman, A G

    2006-06-01

    The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Errors in diagnostic radiology comprise perceptual errors, which are a failure of detection, and interpretation errors, which are errors of diagnosis. Perceptual errors are subject to rules of human perception and can be expected in a proportion of observations by any human observer including a trained professional under ideal conditions. Current legal standards of medical negligence make no allowance for perceptual errors, comparing human performance to an ideal standard. Diagnostic radiology in Australia has a culture of open disclosure, where full unbiased evidence from an examination is provided to the patient together with the report. This practice benefits the public by allowing genuine differences of opinion and also by allowing a second chance of correct diagnosis in cases of perceptual error. The culture of open disclosure, which is unique to diagnostic radiology, places radiologists at distinct medicolegal disadvantage compared with other specialties. (i) Perceptual error should be acknowledged as an integral inevitable part of diagnostic radiology; (ii) culture of open disclosure should be encouraged by the profession; and (iii) a pragmatic definition of medical negligence should reflect the imperfect performance of human observers.

  8. Errors in veterinary practice: preliminary lessons for building better veterinary teams.

    Science.gov (United States)

    Kinnison, T; Guile, D; May, S A

    2015-11-14

    Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost item errors, and most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying cause of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including: lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeon work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings considered. British Veterinary Association.

  9. Perceptual error and the culture of open disclosure in Australian radiology

    International Nuclear Information System (INIS)

    Pitman, A.G.

    2006-01-01

    The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Errors in diagnostic radiology comprise perceptual errors, which are a failure of detection, and interpretation errors, which are errors of diagnosis. Perceptual errors are subject to rules of human perception and can be expected in a proportion of observations by any human observer including a trained professional under ideal conditions. Current legal standards of medical negligence make no allowance for perceptual errors, comparing human performance to an ideal standard. Diagnostic radiology in Australia has a culture of open disclosure, where full unbiased evidence from an examination is provided to the patient together with the report. This practice benefits the public by allowing genuine differences of opinion and also by allowing a second chance of correct diagnosis in cases of perceptual error. The culture of open disclosure, which is unique to diagnostic radiology, places radiologists at distinct medicolegal disadvantage compared with other specialties. (i) Perceptual error should be acknowledged as an integral inevitable part of diagnostic radiology; (ii) culture of open disclosure should be encouraged by the profession; and (iii) a pragmatic definition of medical negligence should reflect the imperfect performance of human observers. Copyright (2006) Blackwell Publishing Asia Pty Ltd

  10. Dynamic Target Definition: A novel approach for PTV definition in ion beam therapy

    International Nuclear Information System (INIS)

    Cabal, Gonzalo A.; Jäkel, Oliver

    2013-01-01

    Purpose: To present a beam-arrangement-specific approach for PTV definition in ion beam therapy. Materials and methods: By means of a Monte Carlo error propagation analysis, a criterion is formulated to assess whether a voxel is safely treated. Based on this, a non-isotropic expansion rule is proposed, aiming to minimize the impact of uncertainties on the dose delivered. Results: The method is exemplified in two cases, a head-and-neck case and a prostate case. In both cases the modality used is proton beam irradiation, and the sources of uncertainty taken into account are positioning (set-up) errors and range uncertainties. It is shown how different beam arrangements affect plan robustness, which leads to different target expansions being necessary to assure a predefined level of plan robustness. The relevance of appropriate beam angle arrangements as a way to minimize uncertainties is demonstrated. Conclusions: A novel method for PTV definition in ion beam therapy is presented. The method shows promising results, improving the probability of correct CTV dose coverage while reducing the size of the PTV. In a clinical scenario this translates into an enhanced tumor control probability while reducing the volume of healthy tissue being irradiated.
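
    The Monte Carlo error propagation idea above can be illustrated by sampling random set-up shifts and estimating the probability that a CTV voxel stays inside the high-dose region. A toy sketch with an assumed spherical dose region; all names and numbers below are illustrative, not the authors' implementation:

```python
import random

def voxel_coverage_probability(voxel, dose_region_contains,
                               setup_sigma, n_samples=10000, seed=1):
    """Monte Carlo estimate of the probability that a CTV voxel
    still receives prescription dose under Gaussian set-up shifts.
    `dose_region_contains(point)` stands in for a dose lookup."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        shifted = tuple(c + rng.gauss(0.0, setup_sigma) for c in voxel)
        if dose_region_contains(shifted):
            hits += 1
    return hits / n_samples

# Toy example: high-dose region is a sphere of radius 10 mm
def in_sphere(p, r=10.0):
    return sum(c * c for c in p) <= r * r

# Voxel near the CTV edge (8 mm from centre), 3 mm set-up error:
# covered in most but not all samples
p_safe = voxel_coverage_probability((8.0, 0.0, 0.0), in_sphere, 3.0)
print(0.0 < p_safe < 1.0)
```

    A voxel is then declared "safely treated" if this probability exceeds a chosen threshold, and the PTV is expanded preferentially along the directions where coverage falls short, which is what makes the resulting expansion non-isotropic.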

  11. Rectifying calibration error of Goldmann applanation tonometer is easy!

    Directory of Open Access Journals (Sweden)

    Nikhil S Choudhari

    2014-01-01

    Full Text Available Purpose: The Goldmann applanation tonometer (GAT) is the current gold-standard tonometer. However, its calibration error is common and can go unnoticed in clinics, and company repair has limitations. The purpose of this report is to describe a self-taught technique for rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique for rectifying the calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of weights when lubrication alone didn't suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60 mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20 mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counterweight. Conclusions: Rectification of the calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold-standard tonometer.
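
    The tolerance rule quoted above (±2, ±3 and ±4 mm Hg at the 0, 20 and 60 mm Hg testing levels) maps directly to a small check. A sketch; the function name is ours:

```python
def within_seagig_tolerance(testing_level_mmhg, error_mmhg):
    """Check a GAT calibration error against the South East Asia
    Glaucoma Interest Group tolerance: +/-2, +/-3 and +/-4 mm Hg at
    the 0, 20 and 60 mm Hg testing levels, respectively."""
    tolerance = {0: 2.0, 20: 3.0, 60: 4.0}[testing_level_mmhg]
    return abs(error_mmhg) <= tolerance

print(within_seagig_tolerance(20, -2.5))  # True: within +/-3 at 20 mm Hg
print(within_seagig_tolerance(20, 20.0))  # False: the worst error reported above
```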

  12. The role of three-dimensional high-definition laparoscopic surgery for gynaecology.

    Science.gov (United States)

    Usta, Taner A; Gundogdu, Elif C

    2015-08-01

    This article reviews the potential benefits and disadvantages of new three-dimensional (3D) high-definition laparoscopic surgery for gynaecology. With the new-generation 3D high-definition laparoscopic vision systems (LVSs), operation time and the learning period are reduced and the procedural error margin is decreased. New-generation 3D high-definition LVSs reduce operation time for both novice and experienced surgeons. Headache, eye fatigue and nausea, reported with first-generation systems, are no more frequent than with two-dimensional (2D) LVSs. The system's higher cost, the obligation to wear glasses, and the big, heavy camera probe of some devices are negative aspects that need to be improved. The loss of depth perception with 2D LVSs, and the adverse events associated with it, can be eliminated with 3D high-definition LVSs. By virtue of a faster learning curve, shorter operation time, reduced error margin and the absence of the side effects reported by surgeons with first-generation systems, 3D LVSs are strong competition for classical laparoscopic imaging systems. Thanks to technological advancements, the use of lighter and smaller cameras, and of monitors that do not require glasses, is in the near future.

  13. Definitions of mass in special relativity

    International Nuclear Information System (INIS)

    Whitaker, M.A.B.

    1976-01-01

    Reference is made to the textbook on special relativity by Taylor and Wheeler (Space-time Physics. San Francisco: W. H. Freeman), in which the concept of relativistic mass is not used but momentum and energy are defined as γm₀v and γm₀c². The two approaches are compared, and the particular problem of an inelastic collision between two particles with zero coefficient of restitution is used to demonstrate that the Taylor-Wheeler definition of the rest mass of a system may lead to a lack of clarity of thought, and even error. Alternative definitions of the rest mass of a system are proposed. (U.K.)
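
    With the Taylor-Wheeler definitions (working in units with c = 1), the rest mass of a system is the invariant M² = (ΣE)² − |Σp|². A sketch of the inelastic-collision example: two equal masses colliding head-on have zero total momentum, so the composite's rest mass is 2γm₀, larger than the sum of the individual rest masses, since kinetic energy contributes to the system's rest mass:

```python
import math

def gamma(v):
    """Lorentz factor, in units with c = 1."""
    return 1.0 / math.sqrt(1.0 - v * v)

def system_rest_mass(particles):
    """Invariant mass of a system of (rest_mass, velocity) pairs in
    one dimension: M^2 = (sum E)^2 - (sum p)^2, with E = gamma*m
    and p = gamma*m*v for each particle (c = 1)."""
    E = sum(gamma(v) * m for m, v in particles)
    p = sum(gamma(v) * m * v for m, v in particles)
    return math.sqrt(E * E - p * p)

# Perfectly inelastic head-on collision of two equal rest masses:
# total momentum is zero, so the composite's rest mass is 2*gamma*m.
m, v = 1.0, 0.6
M = system_rest_mass([(m, v), (m, -v)])
print(round(M, 4))   # 2*gamma(0.6) = 2/0.8 = 2.5, exceeding 2*m = 2.0
```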

  14. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  15. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small-error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^−(dn−1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
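
    The qualitative difference between coherent and Pauli (stochastic) errors can already be seen for a single unencoded qubit: coherent over-rotations add in amplitude, so the flip probability grows quadratically with the number of cycles, while the Pauli-twirled approximation composes flip probabilities stochastically and grows only linearly. A sketch of that single-qubit comparison (not the paper's repetition-code calculation):

```python
import math

def coherent_flip_prob(eps, n):
    """Coherent over-rotation by angle eps per cycle: the rotations
    add in amplitude, so after n cycles the flip probability is
    sin^2(n*eps/2), quadratic in n for small angles."""
    return math.sin(n * eps / 2.0) ** 2

def pauli_flip_prob(eps, n):
    """Pauli-twirled approximation: each cycle flips independently
    with p = sin^2(eps/2); n independent flips give
    (1 - (1 - 2p)^n) / 2, linear in n for small p."""
    p = math.sin(eps / 2.0) ** 2
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)

eps, n = 0.01, 100
print(coherent_flip_prob(eps, n) > pauli_flip_prob(eps, n))  # True
```

    The two models agree for a single cycle but diverge as cycles accumulate, which is the single-qubit analogue of the regime of validity discussed above.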

  16. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. The cost of human error intervention

    International Nuclear Information System (INIS)

    Bennett, C.T.; Banks, W.W.; Jones, E.D.

    1994-03-01

    DOE has directed that cost-benefit analyses be conducted as part of the review process for all new DOE orders. This new policy will have the effect of ensuring that DOE analysts can justify the implementation costs of the orders that they develop. We would like to argue that a cost-benefit analysis is merely one phase of a complete risk management program -- one that would more than likely start with a probabilistic risk assessment. The safety community defines risk as the probability of failure times the severity of consequence. An engineering definition of failure can be considered in terms of physical performance, as in mean-time-between-failure; or, it can be thought of in terms of human performance, as in probability of human error. The severity of consequence of a failure can be measured along any one of a number of dimensions -- economic, political, or social. Clearly, an analysis along one dimension cannot be directly compared to another, but a set of cost-benefit analyses, based on a series of cost dimensions, can be extremely useful to managers who must prioritize their resources. Over the last two years, DOE has been developing a series of human factors orders directed at lowering the probability of human error -- or at least changing the distribution of those errors. The following discussion presents a series of cost-benefit analyses using historical events in the nuclear industry. However, we would first like to discuss some of the analytic cautions that must be considered when we deal with human error.

  18. Analysis of positioning errors in radiotherapy; Analyse des erreurs de positionnement en radiotherapie

    Energy Technology Data Exchange (ETDEWEB)

    Josset-Gaudaire, S.; Lisbona, A.; Llagostera, C.; Delpon, G.; Chiavassa, S.; Brunet, G. [Service de physique medicale, ICO Rene-Gauducheau, Saint Herblain (France); Rousset, S.; Nerriere, E.; Leblanc, M. [Service de radiotherapie, ICO Rene-Gauducheau, Saint Herblain (France)

    2011-10-15

    Within the frame of a study of control imagery management in radiotherapy, the authors report the study of positioning errors associated with control imagery in order to give an overview of practice and to help the adjustment or definition of action levels for clinical practice. Twenty groups of patients have been defined by considering tumour locations (head, ENT, thorax, breast, abdomen, and pelvis), treatment positions, immobilization systems and imagery systems. Positioning errors have thus been analyzed for 340 patients. Aspects and practice to be improved are identified. Short communication

  19. Blood specimen labelling errors: Implications for nephrology nursing practice.

    Science.gov (United States)

    Duteau, Jennifer

    2014-01-01

    Patient safety is the foundation of high-quality health care, as recognized both nationally and worldwide. Patient blood specimen identification is critical in ensuring the delivery of safe and appropriate care. The practice of nephrology nursing involves frequent patient blood specimen withdrawals to treat and monitor kidney disease. A critical review of the literature reveals that incorrect patient identification is one of the major causes of blood specimen labelling errors. Misidentified samples create a serious risk to patient safety leading to multiple specimen withdrawals, delay in diagnosis, misdiagnosis, incorrect treatment, transfusion reactions, increased length of stay and other negative patient outcomes. Barcode technology has been identified as a preferred method for positive patient identification leading to a definitive decrease in blood specimen labelling errors by as much as 83% (Askeland, et al., 2008). The use of a root cause analysis followed by an action plan is one approach to decreasing the occurrence of blood specimen labelling errors. This article will present a review of the evidence-based literature surrounding blood specimen labelling errors, followed by author recommendations for completing a root cause analysis and action plan. A failure modes and effects analysis (FMEA) will be presented as one method to determine root cause, followed by the Ottawa Model of Research Use (OMRU) as a framework for implementation of strategies to reduce blood specimen labelling errors.

  20. Dissipative quantum error correction and application to quantum sensing with trapped ions.

    Science.gov (United States)

    Reiter, F; Sørensen, A S; Zoller, P; Muschik, C A

    2017-11-28

    Quantum-enhanced measurements hold the promise to improve high-precision sensing ranging from the definition of time standards to the determination of fundamental constants of nature. However, quantum sensors lose their sensitivity in the presence of noise. To protect them, the use of quantum error-correcting codes has been proposed. Trapped ions are an excellent technological platform for both quantum sensing and quantum error correction. Here we present a quantum error correction scheme that harnesses dissipation to stabilize a trapped-ion qubit. In our approach, always-on couplings to an engineered environment protect the qubit against spin-flips or phase-flips. Our dissipative error correction scheme operates in a continuous manner without the need to perform measurements or feedback operations. We show that the resulting enhanced coherence time translates into a significantly enhanced precision for quantum measurements. Our work constitutes a stepping stone towards the paradigm of self-correcting quantum information processing.

  1. Is a genome a codeword of an error-correcting code?

    Directory of Open Access Journals (Sweden)

    Luzinete C B Faria

    Full Text Available Since a genome is a discrete sequence, the elements of which belong to a set of four letters, the question as to whether or not there is an error-correcting code underlying DNA sequences is unavoidable. The most common approach to answering this question is to propose a methodology to verify the existence of such a code. However, none of the methodologies proposed so far, although quite clever, has achieved that goal. In a recent work, we showed that DNA sequences can be identified as codewords in a class of cyclic error-correcting codes known as Hamming codes. In this paper, we show that a complete intron-exon gene, and even a plasmid genome, can be identified as a Hamming code codeword as well. Although this does not constitute a definitive proof that there is an error-correcting code underlying DNA sequences, it is the first evidence in this direction.
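
    The codeword-membership test underlying this line of work can be illustrated with the binary Hamming (7,4) code (a simplification: the paper's codes act on the four-letter DNA alphabet, not on bits). A word is a codeword exactly when its syndrome against the parity-check matrix vanishes.

```python
# Parity-check matrix of the binary Hamming (7,4) code: a word w is a
# codeword iff its syndrome H.w^T is the zero vector (mod 2).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def syndrome(word):
    return tuple(sum(h * b for h, b in zip(row, word)) % 2 for row in H)

def is_codeword(word):
    return syndrome(word) == (0, 0, 0)

print(is_codeword([0, 0, 0, 0, 0, 0, 0]))  # the all-zero word is always a codeword
print(is_codeword([1, 1, 0, 0, 0, 0, 0]))  # weight-2 word cannot be (min distance 3)
```

    A nonzero syndrome also points at the single bit in error, which is what makes the code "error-correcting" rather than merely error-detecting.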

  2. On the definition of microhardness

    International Nuclear Information System (INIS)

    Yost, F.G.

    1983-01-01

    Microhardness testing can be a very useful tool for studying modern materials, but is plagued by well-known experimental difficulties. Reasons for the unusual behavior of hardness data at very low loads are explored by Monte Carlo simulation. These simulations bear remarkable resemblance to the results of actual hardness experiments. The limit of hardness as load or indentation depth tends to zero is shown to depend on experimental error rather than upon intrinsic material properties. The large scatter of hardness data at very low loads is ensured by the accepted definition of hardness. A new definition of hardness is suggested which eliminates much of this scatter and possesses a limit as indentation depth approaches zero. Some simple calculations are used to show the utility of this new approach to hardness testing.
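
    The low-load scatter described above can be reproduced with a minimal Monte Carlo sketch (hypothetical material constants; Vickers geometry and consistent units assumed): a fixed absolute error in reading the indent diagonal becomes a large relative error as the indent shrinks, so the scatter of H = 1.8544 P / d^2 blows up as the load tends to zero.

```python
import math, random

random.seed(0)

H_TRUE = 5.0    # assumed "true" hardness of a hypothetical material
SIGMA_D = 0.2   # fixed absolute error in reading the indent diagonal

def simulated_hardness(load, n=10000):
    """Monte Carlo mean and scatter of Vickers hardness H = 1.8544*P/d^2
    when the diagonal d is read with a fixed absolute error SIGMA_D."""
    d_true = math.sqrt(1.8544 * load / H_TRUE)
    samples = []
    for _ in range(n):
        d_meas = d_true + random.gauss(0.0, SIGMA_D)
        if d_meas > 0:  # discard unphysical (negative) readings
            samples.append(1.8544 * load / d_meas ** 2)
    mean = sum(samples) / len(samples)
    sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
    return mean, sd

for load in (1000.0, 100.0, 10.0):
    mean, sd = simulated_hardness(load)
    print(load, round(mean, 3), round(sd, 3))
```

    At high load the recovered hardness is tight around the true value; at low load the same absolute reading error produces both large scatter and an upward bias, mirroring the load dependence the abstract attributes to the definition of hardness rather than to the material.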

  3. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  4. Variation of gross tumor volume and clinical target volume definition for lung cancer

    International Nuclear Information System (INIS)

    Liang Jun; Li Minghui; Chen Dongdu

    2011-01-01

    Objective: To study the interobserver variation in gross tumor volume (GTV) and clinical target volume (CTV) definition for lung cancer. Methods: Ten lung cancer patients with PET-CT simulation were selected from January 2008 to December 2009. The GTV and CTV of these patients were defined independently by four professors or associate professors of radiotherapy. Results: The mean ratios of largest to smallest GTV and CTV were 1.66 and 1.65, respectively. The mean coefficients of variation for GTV and CTV were 0.20 and 0.17, respectively. Systematic errors of CTV definition were less than 5 mm in all three dimensions and were largest in the superior-inferior direction (0.48 cm vs 0.37 cm and 0.32 cm; F=0.40, 0.60, 0.15; P=0.755, 0.618, 0.928). Conclusions: Interobserver variation in GTV and CTV definition for lung cancer exists. The mean ratios of largest to smallest GTV and CTV were less than 1.7. The variation was greatest in the hilar and mediastinal lymph node regions. The systematic error of CTV definition was largest (<5 mm) in the cranio-caudal direction. (authors)
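
    The two summary statistics reported above, the largest-to-smallest volume ratio and the coefficient of variation across observers, can be computed as follows (the volumes shown are hypothetical, not the study's data):

```python
import statistics

def variation_metrics(volumes):
    """Largest-to-smallest ratio and coefficient of variation for one
    structure contoured independently by several observers (cm^3)."""
    ratio = max(volumes) / min(volumes)
    cv = statistics.pstdev(volumes) / statistics.mean(volumes)
    return ratio, cv

# hypothetical GTVs (cm^3) from four observers for one patient
gtvs = [88.0, 104.0, 121.0, 132.0]
ratio, cv = variation_metrics(gtvs)
print(round(ratio, 2), round(cv, 3))  # 1.5 0.15
```

    In the study these metrics would be averaged over the ten patients to give the reported mean ratio and mean coefficient of variation.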

  5. Treatment delay and radiological errors in patients with bone metastases

    International Nuclear Information System (INIS)

    Ichinohe, K.; Takahashi, M.; Tooyama, N.

    2003-01-01

    During routine investigations, we were surprised to find that therapy for bone metastases is sometimes delayed for a considerable period of time. To determine the extent of this delay and its causes, we reviewed the medical records of symptomatic patients seen at our hospital over the last four years who had recently been diagnosed with bone metastases. The treatment delay was defined as the interval between presentation with symptoms and definitive treatment for bone metastases. The diagnostic delay was defined as the interval between presentation with symptoms and diagnosis of bone metastases. The results of diagnostic radiological examinations were also reviewed for errors. The study population included 76 males and 34 females with a median age of 66 years. Most bone metastases were diagnosed radiologically. Over 75% of patients were treated with radiotherapy. The treatment delay ranged from 2 to 307 days, with a mean of 53.3 days. In the 490 radiological studies reviewed, we identified 166 (33.9%) errors concerning 62 (56.4%) patients. The diagnostic delay was significantly longer for patients with radiological errors than for patients without radiological errors (P < 0.001), and much of it was due to radiological errors. In conclusion, the treatment delay in patients with symptomatic bone metastases was much longer than expected, and much of it was caused by radiological errors. Considerable efforts should therefore be made to examine radiological studies more carefully in order to ensure prompt treatment of bone metastases. (author)

  6. Refractive errors in 3-6 year-old Chinese children: a very low prevalence of myopia?

    Directory of Open Access Journals (Sweden)

    Weizhong Lan

    Full Text Available PURPOSE: To examine the prevalence of refractive errors in children aged 3-6 years in China. METHODS: Children were recruited for a trial of a home-based amblyopia screening kit in Guangzhou preschools, during which cycloplegic refractions were measured in both eyes of 2480 children. Cycloplegic refraction (3 to 4 drops of 1% cyclopentolate, to ensure abolition of the light reflex) was measured by both autorefraction and retinoscopy. Refractive errors were defined as follows: myopia (at least -0.50 D in the worse eye), hyperopia (at least +2.00 D in the worse eye) and astigmatism (at least 1.50 D in the worse eye). Different definitions, as specified in the text, were also used to facilitate comparison with other studies. RESULTS: The mean spherical equivalent refractive error was at least +1.22 D for all ages and both genders. The prevalence of myopia for any definition at any age was at most 2.5%, and lower in most cases. In contrast, the prevalence of hyperopia was generally over 20%, and declined slightly with age. The prevalence of astigmatism was between 6% and 11%. There was very little change in refractive error with age over this age range. CONCLUSIONS: Previous reports of a less hyperopic mean spherical equivalent refractive error, and of more myopia and less hyperopia in children of this age, may be due to problems with achieving adequate cycloplegia in children with dark irises. Using up to 4 drops of 1% cyclopentolate may be necessary to accurately measure refractive error in paediatric studies of such children. Our results suggest that children from all ethnic groups may follow a similar pattern of early refractive development, with little myopia and a hyperopic mean spherical equivalent over +1.00 D up to the age of 5-6 years in most conditions.
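
    The cutoff definitions above can be expressed as a small classifier. This is a sketch: whether the study applied the cutoffs to the sphere alone or to the spherical equivalent is an assumption here (spherical equivalent is used below), and the sign convention is minus cylinder.

```python
def classify(worse_eye_sphere, worse_eye_cyl):
    """Classify refractive error in the worse eye using the study's cutoffs:
    myopia <= -0.50 D, hyperopia >= +2.00 D (both on the spherical
    equivalent, an assumption), astigmatism: |cylinder| >= 1.50 D."""
    labels = []
    se = worse_eye_sphere + worse_eye_cyl / 2.0  # spherical equivalent
    if se <= -0.50:
        labels.append("myopia")
    if se >= 2.00:
        labels.append("hyperopia")
    if abs(worse_eye_cyl) >= 1.50:
        labels.append("astigmatism")
    return labels or ["none"]

print(classify(2.25, 0.0))     # ['hyperopia']
print(classify(-0.75, -1.75))  # ['myopia', 'astigmatism']
```

    Note that the categories are not mutually exclusive, which is why "different definitions" across studies can shift the reported prevalences substantially.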

  7. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  8. Preanalytical errors in medical laboratories: a review of the available methodologies of data collection and analysis.

    Science.gov (United States)

    West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael

    2017-01-01

    Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is a wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group review the various data collection methods available. Our recommendation is the use of the laboratory information management systems as a recording mechanism for preanalytical errors as this provides the easiest and most standardized mechanism of data capture.

  9. Trends in Health Information Technology Safety: From Technology-Induced Errors to Current Approaches for Ensuring Technology Safety

    Science.gov (United States)

    2013-01-01

    Objectives Health information technology (HIT) research findings have suggested that new healthcare technologies could reduce some types of medical errors while at the same time introducing new classes of medical errors (i.e., technology-induced errors). Technology-induced errors have their origins in HIT, and/or HIT contributes to their occurrence. The objective of this paper is to review current trends in the published literature on HIT safety. Methods A review and synthesis of the medical and life sciences literature focusing on the area of technology-induced error was conducted. Results There were four main trends in the literature on technology-induced error. The following areas were addressed in the literature: definitions of technology-induced errors; models, frameworks and evidence for understanding how technology-induced errors occur; a discussion of monitoring; and methods for preventing and learning about technology-induced errors. Conclusions The literature focusing on technology-induced errors continues to grow. Research has focused on defining what an error is, on the models and frameworks used to understand these new types of errors, on the monitoring of such errors, and on methods that can be used to prevent them. More research will be needed to better understand and mitigate these types of errors. PMID:23882411

  10. Minimization of the effect of errors in approximate radiation view factors

    International Nuclear Information System (INIS)

    Clarksean, R.; Solbrig, C.

    1993-01-01

    The maximum temperature of irradiated fuel rods in storage containers was investigated taking credit only for radiation heat transfer. Estimating view factors is often easy, but many references place the emphasis on calculating the quadruple integrals exactly. Selecting different view factors in the view factor matrix as independent yields somewhat different view factor matrices. In this study, ten to twenty percent errors in the view factors produced small errors in the temperature, well within the uncertainty due to the uncertainty in the surface emissivities. However, the enclosure and reciprocity principles must be strictly observed, or large errors in the temperatures and wall heat flux resulted (up to a factor of 3). More than just being an aid for calculating the dependent view factors, satisfying these principles, particularly reciprocity, is more important than the calculation accuracy of the view factors. Comparison to experiment showed that the result of the radiation calculation was definitely conservative, as desired, in spite of the approximations to the view factors.
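
    One way to honor the reciprocity and enclosure principles the study found critical is to post-process an approximate view-factor matrix: symmetrize each off-diagonal pair in terms of the exchange quantity A_i*F_ij, then recompute the self-view (diagonal) terms from the row-sum rule. This is a minimal sketch of one such scheme, not the authors' procedure:

```python
def enforce_principles(F, A):
    """Adjust an approximate view-factor matrix F (surfaces with areas A)
    so that reciprocity (A_i F_ij = A_j F_ji) and the enclosure rule
    (each row sums to 1) hold exactly: average each reciprocity pair,
    then recompute the diagonal self-view terms from the row sums."""
    n = len(A)
    G = [row[:] for row in F]
    for i in range(n):
        for j in range(i + 1, n):
            flux = 0.5 * (A[i] * F[i][j] + A[j] * F[j][i])  # symmetrized A_i*F_ij
            G[i][j] = flux / A[i]
            G[j][i] = flux / A[j]
    for i in range(n):
        G[i][i] = 1.0 - sum(G[i][j] for j in range(n) if j != i)
    return G

# three-surface enclosure with roughly 10-20% errors in the estimates
A = [1.0, 2.0, 3.0]
F = [[0.00, 0.45, 0.50],
     [0.20, 0.00, 0.75],
     [0.18, 0.55, 0.00]]
G = enforce_principles(F, A)
for i in range(3):
    assert abs(sum(G[i]) - 1.0) < 1e-12
    for j in range(3):
        assert abs(A[i] * G[i][j] - A[j] * G[j][i]) < 1e-12
```

    Production codes usually pick an independent subset of view factors and solve for the rest; the point here, as in the study, is that the principles themselves matter more than the accuracy of the individual estimates.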

  11. Binary joint transform correlation using error-diffusion techniques

    Science.gov (United States)

    Inbar, Hanni; Marom, Emanuel; Konforti, Naim

    1993-08-01

    Optical pattern recognition techniques based on the optical joint transform correlator (JTC) scheme are attractive due to their simplicity. Recent improvements in spatial light modulators (SLM) have increased the popularity of the JTC, providing means for real-time operation. Using a binary SLM for the display of the Fourier spectrum first requires binarization of the joint power spectrum distribution. Although hard-clipping is the simplest and most common binarization method used, we suggest applying error-diffusion as an improved binarization technique. The performance of a binary JTC, whose input image is considered to contain additive zero-mean white Gaussian noise, is investigated. Various ways of nonlinearly modifying the joint power spectrum prior to the binarization step, which is based on either error-diffusion or hard-clipping techniques, are discussed. These nonlinear modifications aim at increasing the contrast of the interference fringes at the joint power spectrum plane, leading to better definition of the correlation signal. Mathematical analysis, computer simulations and experimental results are presented.
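
    Error diffusion as a binarization step can be sketched with the classic Floyd-Steinberg weights (an illustration only; the paper's exact diffusion kernel and its joint-power-spectrum preprocessing are not specified here). Unlike hard-clipping, the quantization error at each pixel is pushed onto unvisited neighbours, so the local average intensity is preserved.

```python
def error_diffusion_binarize(image, threshold=0.5):
    """Binarize a 2D list of values in [0, 1] with Floyd-Steinberg error
    diffusion: each pixel's quantization error is distributed to its
    right and lower neighbours with weights 7/16, 3/16, 5/16, 1/16."""
    h, w = len(image), len(image[0])
    img = [row[:] for row in image]  # work on a copy
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            new = 1 if img[y][x] >= threshold else 0
            out[y][x] = new
            err = img[y][x] - new
            if x + 1 < w:
                img[y][x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1][x - 1] += err * 3 / 16
                img[y + 1][x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1][x + 1] += err * 1 / 16
    return out

# a flat mid-grey patch: hard-clipping maps it entirely to one level,
# while error diffusion produces a pattern whose mean stays near 0.5
grey = [[0.5] * 8 for _ in range(8)]
binary = error_diffusion_binarize(grey)
print(sum(map(sum, binary)) / 64)
```

    This mean-preserving property is what keeps fringe information in the binarized joint power spectrum that hard-clipping would discard.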

  12. The Countermeasures against the Human Errors in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Kwon, Ki Chun; Lee, Jung Woon; Lee, Hyun; Jang, Tong Il

    2009-10-01

    Human error is a major cause of failures in nuclear power facilities, so long-term, comprehensive countermeasures grounded in ergonomics and human factors research are urgently required for accident prevention. Sustained attention to the hardware of nuclear facilities has brought definite improvements; attention must now turn to the human factors of the people engaged in those facilities, since this is essential to ensuring their safety as well as their economic and industrial performance. The purpose of this research is to establish comprehensive medium- and long-term preventive measures, implemented from a human engineering perspective, that minimize the possibility of human error in nuclear power plants and other nuclear facilities and thereby ensure safety.

  13. The Countermeasures against the Human Errors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee; Kwon, Ki Chun; Lee, Jung Woon; Lee, Hyun; Jang, Tong Il

    2009-10-15

    Human error is a major cause of failures in nuclear power facilities, so long-term, comprehensive countermeasures grounded in ergonomics and human factors research are urgently required for accident prevention. Sustained attention to the hardware of nuclear facilities has brought definite improvements; attention must now turn to the human factors of the people engaged in those facilities, since this is essential to ensuring their safety as well as their economic and industrial performance. The purpose of this research is to establish comprehensive medium- and long-term preventive measures, implemented from a human engineering perspective, that minimize the possibility of human error in nuclear power plants and other nuclear facilities and thereby ensure safety.

  14. MRI definition of target volumes using fuzzy logic method for three-dimensional conformal radiation therapy

    International Nuclear Information System (INIS)

    Caudrelier, Jean-Michel; Vial, Stephane; Gibon, David; Kulik, Carine; Fournier, Charles; Castelain, Bernard; Coche-Dequeant, Bernard; Rousseau, Jean

    2003-01-01

    Purpose: Three-dimensional (3D) volume determination is one of the most important problems in conformal radiation therapy. Techniques of volume determination from tomographic medical imaging are usually based on two-dimensional (2D) contour definition, with the result dependent on the segmentation method used, as well as on the user's manual procedure. The goal of this work is to describe and evaluate a new method that reduces the inaccuracies generally observed in the 2D contour definition and 3D volume reconstruction process. Methods and Materials: This new method has been developed by integrating fuzziness into the 3D volume definition. On each slice, it first semiautomatically defines a minimal 2D contour whose interior definitely belongs to the volume and a maximal 2D contour outside of which nothing belongs to the volume. The fuzziness region in between is processed using possibility functions from possibility theory. A volume of voxels, each carrying a degree of membership in the target volume, is then created along each slice axis, taking into account the slice position and slice profile. A resulting fuzzy volume is obtained after data fusion between multiorientation slices. Different studies have been designed to evaluate and compare this new method of target volume reconstruction and a classical reconstruction method. First, target definition accuracy and robustness were studied on phantom targets. Second, intra- and interobserver variations were studied on radiosurgery clinical cases. Results: The absolute volume errors are less than or equal to 1.5% for phantom volumes calculated by the fuzzy logic method, whereas the values obtained with the classical method are much larger than the actual volumes (absolute volume errors up to 72%). With increasing MRI slice thickness (1 mm to 8 mm), the phantom volumes calculated by the classical method increase exponentially, with a maximum absolute error up to 300%.
In contrast, the absolute volume errors are less than 12% for phantom
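
    The contrast between a fuzzy and a crisp volume estimate can be sketched as follows (hypothetical membership values; the paper's possibility-function machinery and multiorientation fusion are not reproduced). A fuzzy volume sums membership degrees times the voxel volume, while the classical estimate binarizes the boundary voxels first and so systematically over- or under-counts partial-volume voxels.

```python
def fuzzy_volume(membership, voxel_volume):
    """Volume estimate from a fuzzy segmentation: each voxel contributes
    in proportion to its degree of membership (0..1) in the target,
    instead of the all-or-nothing count of a crisp contour."""
    return voxel_volume * sum(membership)

def crisp_volume(membership, voxel_volume, cut=0.5):
    """Classical estimate: binarize the membership at a cut level first."""
    return voxel_volume * sum(1 for m in membership if m >= cut)

# hypothetical target: 100 voxels certainly inside, plus 60 partial-volume
# boundary voxels with graded membership (voxel volume 0.001 cm^3)
membership = [1.0] * 100 + [0.75, 0.5, 0.25] * 20
print(fuzzy_volume(membership, 0.001))  # 0.13 cm^3
print(crisp_volume(membership, 0.001))  # 0.14 cm^3
```

    The discrepancy grows with the fraction of partial-volume voxels, which is why the classical method degrades so quickly as slice thickness increases.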

  15. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that using these estimated error rates, the probability of error correction failures can be significantly reduced, by a factor that increases with the code distance.
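
    A minimal sketch of the idea, Gaussian-process regression over noisy per-window error-rate estimates, is shown below. The toy drift model, the RBF kernel, and its hyperparameters are assumptions for illustration, not the authors' settings:

```python
import numpy as np

def gp_predict(t_obs, y_obs, t_new, length=10.0, sig_f=0.05, sig_n=0.02):
    """Gaussian-process regression with an RBF kernel: smooth noisy
    error-rate estimates y_obs (observed at cycles t_obs) and predict
    the rate at the cycles in t_new."""
    def k(a, b):
        return sig_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)
    K = k(t_obs, t_obs) + sig_n**2 * np.eye(len(t_obs))
    w = np.linalg.solve(K, y_obs - y_obs.mean())
    return y_obs.mean() + k(t_new, t_obs) @ w

rng = np.random.default_rng(1)
t = np.arange(0.0, 50.0, 2.0)
true_rate = 0.10 + 0.001 * t                          # slowly drifting rate (toy model)
observed = true_rate + rng.normal(0.0, 0.02, t.size)  # noisy window estimates
print(gp_predict(t, observed, np.array([25.0, 55.0])))
```

    Interpolated points track the drifting rate closely; predictions beyond the data revert toward the mean, which is the regime where the paper's decoder would fall back on its prior error-rate estimate.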

  16. Types and Severity of Medication Errors in Iran; a Review of the Current Literature

    Directory of Open Access Journals (Sweden)

    Ava Mansouri

    2013-06-01

    Full Text Available Medication error (ME) is the most common single preventable cause of adverse drug events and negatively affects patient safety. ME prevalence is a valuable safety indicator in a healthcare system. Inadequate study of MEs, a shortage of high-quality studies, and wide variations in estimates from developing countries, including Iran, decrease the reliability of ME evaluations. In order to clarify the status of MEs, we aimed to review the currently available literature on this subject from Iran. We searched Scopus, Web of Science, PubMed, CINAHL, EBSCOHOST and also Persian databases (IranMedex and SID) up to October 2012 to find studies on adults and children about prescription, transcription, dispensing, and administration errors. Two authors independently selected studies, and one of them reviewed and extracted data on the types, definitions and severity of MEs. The results were classified based on the different stages of the drug delivery process. Eighteen articles (11 Persian and 7 English) were included in our review. All study designs were cross-sectional and conducted in hospital settings. Nursing staff and students were the most frequently observed populations (12 studies; 66.7%). Most studies did not report the overall frequency of MEs aside from ME types. Most studies (15; 83.3%) reported a prevalence of administration errors between 14.3% and 70.0%. Prescribing error prevalence ranged from 29.8% to 47.8%. The prevalence of dispensing and transcribing errors ranged from 11.3% to 33.6% and from 10.0% to 51.8%, respectively. We did not find any follow-up or repeated studies. Only three studies reported findings on the severity of MEs. Administration errors were the most frequently reported type of ME in Iran and showed the highest percentages of any ME type. Studying ME in Iran is a new area considering the duration and number of publications.
    Wide ranges of estimates for MEs at different stages may be due to the poor quality of studies, with diversity in definitions, methods, and populations

  17. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  18. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2 and J4 terms plus a standard exponential type atmosphere with simple diurnal and random walk components is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors. No investigation of specific orbital elements is undertaken. The total vector analyses look at the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
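
    The chi-square consistency idea mentioned above can be sketched in isolation (a toy check with a diagonal covariance and simulated Gaussian errors, not the paper's orbit-determination setup): if the covariance P correctly describes the estimation errors, the normalized squared error e^T P^{-1} e is chi-square distributed with k degrees of freedom, so its average over many trials should be close to the state dimension k.

```python
import random

random.seed(3)

def mean_normalized_error_sq(p_diag, trials=20000):
    """Average of e^T P^{-1} e for errors e drawn consistently with a
    diagonal covariance P; should be near k = len(p_diag) if P is right."""
    total = 0.0
    for _ in range(trials):
        e = [random.gauss(0.0, v ** 0.5) for v in p_diag]
        total += sum(ei * ei / v for ei, v in zip(e, p_diag))
    return total / trials

p = [4.0, 1.0, 0.25]  # hypothetical error variances for a 3-state estimate
print(mean_normalized_error_sq(p))  # near k = 3
```

    A covariance that is too optimistic inflates this statistic well above k, which is the signature such comparison analyses look for.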

  19. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  20. Monitoring and reporting of preanalytical errors in laboratory medicine: the UK situation.

    Science.gov (United States)

    Cornes, Michael P; Atherton, Jennifer; Pourmahram, Ghazaleh; Borthwick, Hazel; Kyle, Betty; West, Jamie; Costelloe, Seán J

    2016-03-01

    Most errors in the clinical laboratory occur in the preanalytical phase. This study aimed to comprehensively describe the prevalence and nature of preanalytical quality monitoring practices in UK clinical laboratories. A survey was sent on behalf of the Association for Clinical Biochemistry and Laboratory Medicine Preanalytical Working Group (ACB-WG-PA) to all heads of department of clinical laboratories in the UK. The survey captured data on the analytical platform and Laboratory Information Management System in use; which preanalytical errors were recorded and how they were classified and gauged interest in an external quality assurance scheme for preanalytical errors. Of the 157 laboratories asked to participate, responses were received from 104 (66.2%). Laboratory error rates were recorded per number of specimens, rather than per number of requests in 51% of respondents. Aside from serum indices for haemolysis, icterus and lipaemia, which were measured in 80% of laboratories, the most common errors recorded were booking-in errors (70.1%) and sample mislabelling (56.9%) in laboratories who record preanalytical errors. Of the laboratories surveyed, 95.9% expressed an interest in guidance on recording preanalytical error and 91.8% expressed interest in an external quality assurance scheme. This survey observes a wide variation in the definition, repertoire and collection methods for preanalytical errors in the UK. Data indicate there is a lot of interest in improving preanalytical data collection. The ACB-WG-PA aims to produce guidance and support for laboratories to standardize preanalytical data collection and to help establish and validate an external quality assurance scheme for interlaboratory comparison. © The Author(s) 2015.

  1. On the definition of the detection limit for non-selective determination of low activities

    International Nuclear Information System (INIS)

    Tschurlovits, M.

    1977-01-01

    Based on the latest published results, a detection limit which is easy to use in practical work without intensive consideration of counting statistics, is presented. The primary application of the given definition is the determination of gross activity. In the definition the error of the second kind as well as one-sided boundedness of the normal distribution are included. The results are given in graphical form. (orig.) [de
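A detection limit that accounts for the error of the second kind as well as one-sided normal statistics, as described in this record, is in the spirit of the common Currie-type Gaussian formulation sketched below. This is a textbook form, not necessarily the paper's exact definition, and all names are illustrative:

```python
from statistics import NormalDist

def detection_limit(sigma0, alpha=0.05, beta=0.05):
    """Currie-type Gaussian detection limit for gross-activity counting:
    the decision threshold L_C bounds the error of the first kind (alpha),
    and L_D adds a margin so the error of the second kind is at most beta.
    Assumes the standard deviation near L_D is approximately sigma0."""
    k_alpha = NormalDist().inv_cdf(1 - alpha)  # one-sided normal quantile
    k_beta = NormalDist().inv_cdf(1 - beta)
    L_C = k_alpha * sigma0
    L_D = L_C + k_beta * sigma0
    return L_C, L_D

L_C, L_D = detection_limit(sigma0=10.0)
# with alpha = beta = 0.05, k ~ 1.645, so L_D ~ 3.29 * sigma0 ~ 32.9 counts
```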

  2. The sensitivity of gamma-index method to the positioning errors of high-definition MLC in patient-specific VMAT QA for SBRT

    International Nuclear Information System (INIS)

    Kim, Jung-in; Park, So-Yeon; Kim, Hak Jae; Kim, Jin Ho; Ye, Sung-Joon; Park, Jong Min

    2014-01-01

    To investigate the sensitivity of various gamma criteria used in the gamma-index method for patient-specific volumetric modulated arc therapy (VMAT) quality assurance (QA) for stereotactic body radiation therapy (SBRT) using a flattening filter free (FFF) photon beam. Three types of intentional misalignments were introduced to original high-definition multi-leaf collimator (HD-MLC) plans. The first type, referred to Class Out, involved the opening of each bank of leaves. The second type, Class In, involved the closing of each bank of leaves. The third type, Class Shift, involved the shifting of each bank of leaves towards the ground. Patient-specific QAs for the original and the modified plans were performed with MapCHECK2 and EBT2 films. The sensitivity of the gamma-index method using criteria of 1%/1 mm, 1.5%/1.5 mm, 1%/2 mm, 2%/1 mm and 2%/2 mm was investigated with absolute passing rates according to the magnitude of MLC misalignments. In addition, the changes in dose-volumetric indicators due to the magnitudes of MLC misalignments were investigated. The correlations between passing rates and the changes in dose-volumetric indicators were also investigated using Spearman’s rank correlation coefficient (γ). The criterion of 2%/1 mm was able to detect Class Out and Class In MLC misalignments of 0.5 mm and Class Shift misalignments of 1 mm. The widely adopted clinical criterion of 2%/2 mm was not able to detect 0.5 mm MLC errors of the Class Out or Class In types, and was also unable to detect 3 mm Class Shift errors. No correlations were observed between dose-volumetric changes and gamma passing rates (γ < 0.8). Gamma criterion of 2%/1 mm was found to be suitable as a tolerance level with passing rates of 90% and 80% for patient-specific VMAT QA for SBRT when using MapCHECK2 and EBT2 film, respectively
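The gamma-index comparison underlying this QA analysis combines a dose-difference and a distance-to-agreement (DTA) criterion. A minimal one-dimensional, globally normalized sketch (names are illustrative; clinical implementations interpolate dose and work in 2D/3D):

```python
import numpy as np

def gamma_index_1d(dose_ref, dose_eval, x, dd_pct, dta_mm):
    """Minimal 1D gamma index (global dose normalization).
    dose_ref, dose_eval: doses on the same grid x (mm);
    dd_pct: dose criterion in % of the reference maximum;
    dta_mm: distance-to-agreement criterion in mm."""
    dd = dd_pct / 100.0 * dose_ref.max()
    gammas = []
    for xr, dr in zip(x, dose_ref):
        # squared generalized distance to every evaluated point
        g2 = ((x - xr) / dta_mm) ** 2 + ((dose_eval - dr) / dd) ** 2
        gammas.append(np.sqrt(g2.min()))
    return np.array(gammas)

x = np.linspace(0.0, 10.0, 11)   # 1 mm grid
ref = np.full(11, 100.0)         # flat 100-unit "dose"
ev = ref * 1.01                  # uniform +1% delivery error
g = gamma_index_1d(ref, ev, x, dd_pct=2.0, dta_mm=1.0)
# a uniform 1% error passes a 2%/1 mm criterion everywhere (gamma = 0.5 <= 1)
```

Tightening the dose criterion (e.g. 2% to 1%) scales the dose term up and makes the same error harder to pass, which is the sensitivity effect the study quantifies.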

  3. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  4. Distinguishing mixed quantum states: Minimum-error discrimination versus optimum unambiguous discrimination

    International Nuclear Information System (INIS)

    Herzog, Ulrike; Bergou, Janos A.

    2004-01-01

    We consider two different optimized measurement strategies for the discrimination of nonorthogonal quantum states. The first is ambiguous discrimination with a minimum probability of inferring an erroneous result, and the second is unambiguous, i.e., error-free, discrimination with a minimum probability of getting an inconclusive outcome, where the measurement fails to give a definite answer. For distinguishing between two mixed quantum states, we investigate the relation between the minimum-error probability achievable in ambiguous discrimination, and the minimum failure probability that can be reached in unambiguous discrimination of the same two states. The latter turns out to be at least twice as large as the former for any two given states. As an example, we treat the case where the state of the quantum system is known to be, with arbitrary prior probability, either a given pure state, or a uniform statistical mixture of any number of mutually orthogonal states. For this case we derive an analytical result for the minimum probability of error and perform a quantitative comparison with the minimum failure probability
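The minimum-error probability discussed here is given by the Helstrom bound, P_err = ½(1 − ‖p₁ρ₁ − p₂ρ₂‖₁). A small numerical sketch for two pure states (illustrative, not from the paper):

```python
import numpy as np

def helstrom_error(rho1, rho2, p1, p2):
    """Minimum error probability for ambiguous (minimum-error)
    discrimination of two states with prior probabilities p1, p2:
    P_err = (1 - || p1*rho1 - p2*rho2 ||_1) / 2  (Helstrom bound)."""
    gamma = p1 * rho1 - p2 * rho2
    trace_norm = np.abs(np.linalg.eigvalsh(gamma)).sum()
    return 0.5 * (1.0 - trace_norm)

# two equiprobable pure states with overlap <psi1|psi2> = cos(theta)
theta = np.pi / 6
psi1 = np.array([1.0, 0.0])
psi2 = np.array([np.cos(theta), np.sin(theta)])
p_err = helstrom_error(np.outer(psi1, psi1), np.outer(psi2, psi2), 0.5, 0.5)
# pure-state form: (1 - sqrt(1 - cos(theta)^2)) / 2 = (1 - sin(theta)) / 2 = 0.25
```

For this example the optimal unambiguous-discrimination failure probability is the overlap cos(θ) ≈ 0.866, which is indeed at least twice the minimum-error probability (2 × 0.25 = 0.5), consistent with the factor-of-two relation stated in the abstract.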

  5. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. The Packet Combining (PC) scheme fails (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both of these have been addressed recently by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)

  6. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near optimal sensor placements for structural health monitoring (SHM) and modal testing. The near optimal set of measurement locations is obtained by the Information Entropy theory; the results of the placement process considerably depend on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are firstly assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case-study, the effect of model uncertainties on results is described, and the reliability and the robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure on a real 5-span steel footbridge are described. The proposed method also allows higher modes to be estimated more accurately when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor position when uncertainties occur.
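The exponential, distance-dependent correlation function mentioned above yields a prediction-error covariance matrix of a standard form; a sketch with illustrative names (the paper's exact parameterization may differ):

```python
import numpy as np

def prediction_error_covariance(positions, sigma, corr_length):
    """Prediction-error covariance with an exponential correlation
    function that decays with the distance between sensor locations:
    Sigma_ij = sigma^2 * exp(-|x_i - x_j| / corr_length)."""
    d = np.abs(positions[:, None] - positions[None, :])
    return sigma ** 2 * np.exp(-d / corr_length)

pos = np.array([0.0, 1.0, 3.0])   # candidate sensor locations (m)
S = prediction_error_covariance(pos, sigma=0.1, corr_length=2.0)
# diagonal entries equal sigma^2; off-diagonal correlation decays with distance
```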

  7. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors as a form of human error can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if a thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  8. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Steers, Jennifer M. [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 and Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095 (United States); Fraass, Benedick A., E-mail: benedick.fraass@cshs.org [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 (United States)

    2016-04-15

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose

  9. Medication Administration Errors in an Adult Emergency Department of a Tertiary Health Care Facility in Ghana.

    Science.gov (United States)

    Acheampong, Franklin; Tetteh, Ashalley Raymond; Anto, Berko Panyin

    2016-12-01

    This study determined the incidence, types, clinical significance, and potential causes of medication administration errors (MAEs) at the emergency department (ED) of a tertiary health care facility in Ghana. This study used a cross-sectional nonparticipant observational technique. Study participants (nurses) were observed preparing and administering medication at the ED of a 2000-bed tertiary care hospital in Accra, Ghana. The observations were then compared with patients' medication charts, and identified errors were clarified with staff for possible causes. Of the 1332 observations made, involving 338 patients and 49 nurses, 362 had errors, representing 27.2%. However, the error rate excluding "lack of drug availability" fell to 12.8%. Without wrong time error, the error rate was 22.8%. The 2 most frequent error types were omission (n = 281, 77.6%) and wrong time (n = 58, 16%) errors. Omission error was mainly due to unavailability of medicine, 48.9% (n = 177). Although only one of the errors was potentially fatal, 26.7% were definitely clinically severe. The common themes that dominated the probable causes of MAEs were unavailability, staff factors, patient factors, prescription, and communication problems. This study gives credence to similar studies in different settings that MAEs occur frequently in the ED of hospitals. Most of the errors identified were not potentially fatal; however, preventive strategies need to be used to make life-saving processes such as drug administration in such specialized units error-free.
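The rates reported in this abstract are internally consistent, as a quick arithmetic check confirms (numbers taken directly from the text):

```python
# counts reported in the abstract
observations = 1332
errors_total = 362
omission = 281
wrong_time = 58

overall_rate = 100 * errors_total / observations   # ~27.2% of observations
omission_share = 100 * omission / errors_total     # ~77.6% of errors
wrong_time_share = 100 * wrong_time / errors_total # ~16.0% of errors
```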

  10. Irregular analytical errors in diagnostic testing - a novel concept.

    Science.gov (United States)

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    -isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.

  11. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  12. UNDERSTANDING OR NURSES' REACTIONS TO ERRORS AND USING THIS UNDERSTANDING TO IMPROVE PATIENT SAFETY.

    Science.gov (United States)

    Taifoori, Ladan; Valiee, Sina

    2015-09-01

    The operating room can be home to many different types of nursing errors due to the invasiveness of OR procedures. The nurses' reactions towards errors can be a key factor in patient safety. This article is based on a study, with the aim of investigating nurses' reactions toward nursing errors and the various contributing and resulting factors, conducted at Kurdistan University of Medical Sciences in Sanandaj, Iran in 2014. The goal of the study was to determine how OR nurses reacted to nursing errors, with the goal of having this information used to improve patient safety. Research was conducted as a cross-sectional descriptive study. The participants were all nurses employed in the operating rooms of the teaching hospitals of Kurdistan University of Medical Sciences, which was selected by a consensus method (170 persons). The information was gathered through questionnaires that focused on demographic information, error definition, reasons for error occurrence, and emotional reactions toward the errors. 153 questionnaires were completed and analyzed by SPSS software version 16.0. "Not following sterile technique" (82.4 percent) was the most reported nursing error, "tiredness" (92.8 percent) was the most reported reason for the error occurrence, and "being upset at having harmed the patient" (85.6 percent) was the most reported emotional reaction after error occurrence, with "decision making for a better approach to tasks the next time" (97.7 percent) as the most common goal and "paying more attention to details" (98 percent) as the most reported planned strategy for future improved outcomes. While healthcare facilities are focused on planning for the prevention and elimination of errors, it was shown that nurses can also benefit from support after error occurrence. Their reactions, and coping strategies, need guidance and, with both individual and organizational support, can be a factor in improving patient safety.

  13. Phraseologisms and stereotypical speech acts in German and French

    Directory of Open Access Journals (Sweden)

    Kauffer, Maurice

    2013-12-01

    The topic of this paper is the definition and lexicographic treatment of pragmatic phraseologisms, in particular stereotypical speech acts, in German and French. We begin with a critical examination of the traditional distinctions within pragmatic phraseologisms, i.e. between formulaic expressions, context-dependent phraseologisms and phraseologisms functioning as sentences. As a result, we propose a new, more clearly delineated set of stereotypical speech acts, i.e. phrases such as Na warte mal!, Sieh mal einer an!, Tu parles ! Tu vois ce que je vois ?. Stereotypical speech acts meet three requirements: semantic idiomaticity, utterance value and pragmatic function, and are generally used in spontaneous or fictional dialogues. Finally, we present a context-rich, corpus-based, bilingual dictionary of stereotypical speech acts that is being compiled in Nancy. Content and design of the dictionary are illustrated by two examples, la belle affaire and das ist die Höhe.

  14. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  15. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
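The unisim/multisim distinction can be made concrete with a toy observable that depends linearly on the systematic parameters; in the linear regime both approaches recover the same total systematic variance. A sketch under assumed names, not the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)

def observable(params):
    """Toy observable depending linearly on three systematic parameters."""
    return params.sum()

sigmas = np.array([1.0, 2.0, 3.0])   # 1-sigma sizes of three systematics

# unisim: vary one parameter at a time by one standard deviation
unisim_shifts = np.array(
    [observable(np.eye(3)[i] * sigmas[i]) - observable(np.zeros(3))
     for i in range(3)])
var_unisim = float((unisim_shifts ** 2).sum())

# multisim: every MC run varies all parameters, drawn from their
# (assumed normal) distributions simultaneously
runs = np.array([observable(rng.normal(0.0, sigmas)) for _ in range(20000)])
var_multisim = float(runs.var())

# for a linear model both estimate sigma1^2 + sigma2^2 + sigma3^2 = 14,
# the multisim value carrying MC sampling noise
```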

  16. Advanced MRI assessment to predict benefit of anti-programmed cell death 1 protein immunotherapy response in patients with recurrent glioblastoma

    Energy Technology Data Exchange (ETDEWEB)

    Qin, Lei [Dana-Farber Cancer Institute, Department of Imaging, Boston, MA (United States); Harvard Medical School, Department of Radiology, Boston, MA (United States); Li, Xiang; Qu, Jinrong [Affiliated Cancer Hospital of Zhengzhou University, Department of Radiology, Zhengzhou, Henan (China); Brigham and Women's Hospital, Department of Radiology, Boston, MA (United States); Stroiney, Amanda [Dana-Farber Cancer Institute, Department of Imaging, Boston, MA (United States); Northeastern University, Department of Behavioral Neuroscience, College of Sciences, Boston, MA (United States); Helgager, Jeffrey [Brigham and Women's Hospital, Department of Pathology, Boston, MA (United States); Reardon, David A. [Dana-Farber Cancer Institute, Center for Neuro-Oncology, Boston, MA (United States); Department of Medicine, Boston, MA (United States); Young, Geoffrey S. [Harvard Medical School, Department of Radiology, Boston, MA (United States); Brigham and Women's Hospital, Department of Radiology, Boston, MA (United States)

    2017-02-15

    We describe the imaging findings encountered in GBM patients receiving immune checkpoint blockade and assess the potential of quantitative MRI biomarkers to differentiate patients who derive therapeutic benefit from those who do not. A retrospective analysis was performed on longitudinal MRIs obtained on recurrent GBM patients enrolled on clinical trials. Among 10 patients with analyzable data, bidirectional diameters were measured on contrast enhanced T1 (pGd-T1WI) and volumes of interest (VOI) representing measurable abnormality suggestive of tumor were selected on pGdT1WI (pGdT1 VOI), FLAIR-T2WI (FLAIR VOI), and ADC maps. The intermediate ADC (IADC) VOI represented voxels within the FLAIR VOI having ADC in the range of highly cellular tumor (0.7-1.1 × 10⁻³ mm²/s). Therapeutic benefit was determined by tissue pathology and survival on trial. IADC VOI, pGdT1 VOI, FLAIR VOI, and RANO assessment results were correlated with patient benefit. Five patients were deemed to have received therapeutic benefit and the other five patients did not. The average time on trial for the benefit group was 194 days, as compared to 81 days for the no benefit group. IADC VOI correlated well with the presence or absence of clinical benefit in 10 patients. Furthermore, pGd VOI, FLAIR VOI, and RANO assessment correlated less well with response. MRI reveals an initial increase in volumes of abnormal tissue with contrast enhancement, edema, and intermediate ADC suggesting hypercellularity within the first 0-6 months of immunotherapy. Subsequent stabilization and improvement in IADC VOI appear to better predict ultimate therapeutic benefit from these agents than conventional imaging. (orig.)
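The IADC VOI construction described above (voxels inside the FLAIR VOI whose ADC falls in the cellular-tumor window) amounts to a simple mask intersection; a sketch with illustrative names:

```python
import numpy as np

def iadc_voi(adc_map, flair_mask, lo=0.7e-3, hi=1.1e-3):
    """Voxels inside the FLAIR VOI whose ADC (mm^2/s) falls in the
    range associated with highly cellular tumor (0.7-1.1 x 10^-3)."""
    return flair_mask & (adc_map >= lo) & (adc_map <= hi)

# toy 4-voxel example
adc = np.array([0.5e-3, 0.9e-3, 1.0e-3, 1.5e-3])
flair = np.array([True, True, False, True])
mask = iadc_voi(adc, flair)
# only the 0.9e-3 voxel lies in both the FLAIR VOI and the ADC window
```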

  17. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  18. Cone beam computed tomography for diagnosis of bisphosphonate-related osteonecrosis of the jaw: evaluation of quantitative and qualitative image parameters

    International Nuclear Information System (INIS)

    Guggenberger, Roman; Koral, Emrah; Andreisek, Gustav; Zemann, Wolfgang; Jacobsen, Christine; Metzler, Philipp

    2014-01-01

    To assess the diagnostic performance of quantitative and qualitative image parameters in cone-beam computed tomography (CBCT) for diagnosis of bisphosphonate-related osteonecrosis of the jaw (BRONJ). A BRONJ group (22 patients, mean age 70.0 years) was age- and gender-matched to a healthy control group (22 patients, mean age 68.0 years). On CBCT images, two independent readers performed quantitative bone density value (BDV) measurements with region- and volume-of-interest (ROI and VOI) based approaches and qualitative scoring of BRONJ-associated necrosis, sclerosis and periosteal thickening (1 = not present to 5 = definitely present). Intraoperative and clinical findings served as the standard of reference. Interreader agreements and diagnostic performance were assessed by intraclass correlation coefficients (ICC), kappa-statistics and receiver-operating characteristic (ROC) analysis. Twenty-three regions in 22 patients were affected by BRONJ. ICC values for mean BDV VOI and mean BDV ROI were 0.864 and 0.968, respectively (p < 0.001). The area under the curve (AUC) for mean BDV VOI and mean BDV ROI was 0.58/0.83 with a sensitivity of 57/83 % and specificity of 61/77 % for diagnosis of BRONJ, respectively. Kappa values for presence of necrosis, sclerosis and periosteal thickening were 0.575, 0.617 and 0.885, respectively. AUC values for qualitative parameters ranged between 0.90-0.96 with sensitivity of 96 % and specificities between 79-96 % at respective cutoff scores. BRONJ can be effectively diagnosed with CBCT. Qualitative image parameters yield a higher diagnostic performance than quantitative parameters, and ROI-based attenuation measurements were more accurate than VOI-based measurements. (orig.)

  19. Pulmonary nodule registration in serial CT scans based on rib anatomy and nodule template matching

    International Nuclear Information System (INIS)

    Shi Jiazheng; Sahiner, Berkman; Chan, H.-P.; Hadjiiski, Lubomir; Zhou, C.; Cascade, Philip N.; Bogot, Naama; Kazerooni, Ella A.; Wu, Y.-T.; Wei, J.

    2007-01-01

    An automated method is being developed in order to identify corresponding nodules in serial thoracic CT scans for interval change analysis. The method uses the rib centerlines as the reference for initial nodule registration. A spatially adaptive rib segmentation method first locates the regions where the ribs join the spine, which define the starting locations for rib tracking. Each rib is tracked and locally segmented by expectation-maximization. The ribs are automatically labeled, and the centerlines are estimated using skeletonization. For a given nodule in the source scan, the closest three ribs are identified. A three-dimensional (3D) rigid affine transformation guided by simplex optimization aligns the centerlines of each of the three rib pairs in the source and target CT volumes. Automatically defined control points along the centerlines of the three ribs in the source scan and the registered ribs in the target scan are used to guide an initial registration using a second 3D rigid affine transformation. A search volume of interest (VOI) is then located in the target scan. Nodule candidate locations within the search VOI are identified as regions with high Hessian responses. The initial registration is refined by searching for the maximum cross-correlation between the nodule template from the source scan and the candidate locations. The method was evaluated on 48 CT scans from 20 patients. Experienced radiologists identified 101 pairs of corresponding nodules. Three metrics were used for performance evaluation. The first metric was the Euclidean distance between the nodule centers identified by the radiologist and the computer registration, the second metric was a volume overlap measure between the nodule VOIs identified by the radiologist and the computer registration, and the third metric was the hit rate, which measures the fraction of nodules whose centroid computed by the computer registration in the target scan falls within the VOI identified by the
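The refinement step described above, maximizing cross-correlation between a nodule template and candidate locations in the search VOI, can be sketched in 2-D (the paper works in 3-D; this brute-force normalized cross-correlation search only illustrates the idea):

```python
import numpy as np

def best_match(template, search):
    """Exhaustive normalized cross-correlation of a template over a
    search region; returns the offset with the highest score.
    2-D stand-in for the paper's 3-D refinement step (illustrative only)."""
    th, tw = template.shape
    t = template - template.mean()
    best, best_off = -np.inf, None
    for i in range(search.shape[0] - th + 1):
        for j in range(search.shape[1] - tw + 1):
            patch = search[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = np.sqrt((t ** 2).sum() * (p ** 2).sum())
            score = (t * p).sum() / denom if denom else -np.inf
            if score > best:
                best, best_off = score, (i, j)
    return best_off, best

rng = np.random.default_rng(0)
search = rng.normal(size=(32, 32))
template = search[10:18, 14:22].copy()   # plant the "nodule" at (10, 14)
offset, score = best_match(template, search)
print(offset, round(score, 3))
```

Since the template is cut from the search region itself, the search recovers the planting offset with a correlation of 1.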

  20. Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES

    Science.gov (United States)

    Sarkar, B.; Bhunia, C. T.; Maulik, U.

    2012-06-01

Advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available: the redundancy-based technique and the bit-based parity technique. The first has a significant advantage over the second in that it can correct any error on a definite term, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that speeds up the process of reliable encryption and hence secured communication.
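As an illustration of the general redundancy-based idea (not the specific scheme proposed in the paper), a triple-repetition code with majority voting corrects any single bit error per group, at the cost of a threefold overhead:

```python
def encode(bits, copies=3):
    """Triple-redundancy encoding: repeat every bit `copies` times."""
    return [b for b in bits for _ in range(copies)]

def decode(stream, copies=3):
    """Majority vote over each group of `copies` received bits."""
    return [int(sum(stream[i:i + copies]) * 2 > copies)
            for i in range(0, len(stream), copies)]

msg = [1, 0, 1, 1, 0]
tx = encode(msg)
tx[4] ^= 1          # flip one transmitted bit (a channel error)
print(decode(tx) == msg)
```

The overhead-versus-reliability trade-off sketched here is exactly what the paper's modified technique aims to improve upon.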

  1. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, depending on the magnitude of x. In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1); the range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x); this subtraction may cause partial or total loss of significance for certain values of x.
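The caution in point 5 is easy to demonstrate in any language with a dedicated complementary-error-function routine: for large x, erf(x) rounds to 1.0 in double precision and the subtraction loses all significance, while the direct routine retains full relative accuracy.

```python
import math

x = 10.0
naive = 1.0 - math.erf(x)   # catastrophic cancellation: erf(10) rounds to 1.0
direct = math.erfc(x)       # dedicated routine keeps full relative accuracy
print(naive, direct)
```

The naive subtraction returns exactly 0.0, whereas erfc(10) is a tiny but nonzero value around 2e-45.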

  2. Multi-isocenter stereotactic radiotherapy: implications for target dose distributions of systematic and random localization errors

    International Nuclear Information System (INIS)

    Ebert, M.A.; Zavgorodni, S.F.; Kendrick, L.A.; Weston, S.; Harper, C.S.

    2001-01-01

    Purpose: This investigation examined the effect of alignment and localization errors on dose distributions in stereotactic radiotherapy (SRT) with arced circular fields. In particular, it was desired to determine the effect of systematic and random localization errors on multi-isocenter treatments. Methods and Materials: A research version of the FastPlan system from Surgical Navigation Technologies was used to generate a series of SRT plans of varying complexity. These plans were used to examine the influence of random setup errors by recalculating dose distributions with successive setup errors convolved into the off-axis ratio data tables used in the dose calculation. The influence of systematic errors was investigated by displacing isocenters from their planned positions. Results: For single-isocenter plans, it is found that the influences of setup error are strongly dependent on the size of the target volume, with minimum doses decreasing most significantly with increasing random and systematic alignment error. For multi-isocenter plans, similar variations in target dose are encountered, with this result benefiting from the conventional method of prescribing to a lower isodose value for multi-isocenter treatments relative to single-isocenter treatments. Conclusions: It is recommended that the systematic errors associated with target localization in SRT be tracked via a thorough quality assurance program, and that random setup errors be minimized by use of a sufficiently robust relocation system. These errors should also be accounted for by incorporating corrections into the treatment planning algorithm or, alternatively, by inclusion of sufficient margins in target definition
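The effect of random setup errors described above, convolving the dose data with the setup-error distribution, can be sketched on a 1-D dose profile: as the Gaussian error width grows, the minimum dose over the target falls. This is a toy illustration, not the FastPlan off-axis-ratio calculation.

```python
import numpy as np

def blur_profile(profile, sigma_mm, spacing_mm=1.0):
    """Convolve a 1-D dose profile with a Gaussian random-setup-error
    kernel, an illustrative stand-in for the paper's convolution of
    setup errors into the off-axis ratio tables."""
    x = np.arange(-40, 41) * spacing_mm
    kernel = np.exp(-0.5 * (x / sigma_mm) ** 2)
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

# Idealized flat target dose on a 1 mm grid: a 40 mm target region.
dose = np.zeros(201)
dose[80:121] = 1.0
target = slice(90, 111)          # inner 20 mm of the target
for sigma in (1.0, 3.0, 5.0):
    print(sigma, round(blur_profile(dose, sigma)[target].min(), 3))
```

The monotonic drop in minimum target dose with increasing sigma mirrors the paper's finding that minimum doses decrease most significantly with increasing alignment error.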

  3. Exploring key considerations when determining bona fide inadvertent errors resulting in understatements

    Directory of Open Access Journals (Sweden)

    Chrizanne de Villiers

    2016-03-01

Full Text Available Chapter 16 of the Tax Administration Act (28 of 2011) (the TA Act) deals with understatement penalties. In the event of an ‘understatement’, in terms of Section 222 of the TA Act, a taxpayer must pay an understatement penalty, unless the understatement results from a bona fide inadvertent error. Determining a bona fide inadvertent error on taxpayers’ returns is a totally new concept in the tax fraternity. It is of utmost importance that this section is applied correctly, based on sound evaluation principles and not on professional judgement, when determining whether the error was indeed the result of a bona fide inadvertent error. This research study focuses on exploring key considerations when determining bona fide inadvertent errors resulting in understatements. The role and importance of tax penalty provisions are explored, and the meaning of the different components in the term ‘bona fide inadvertent error’ is critically analysed with the purpose of finding a possible definition for the term. The study also compares the provisions of other tax jurisdictions with regard to errors resulting in tax understatements, in order to find possible guidelines on the application of bona fide inadvertent errors as contained in Section 222 of the TA Act. The findings of the research study revealed that the term ‘bona fide inadvertent error’ contained in Section 222 of the TA Act should be defined urgently and that guidelines must be provided by SARS on the application of the new amendment. SARS should also clarify the application of a bona fide inadvertent error in light of the behaviours contained in Section 223 of the TA Act to avoid any confusion.

  4. Four-dimensional volume-of-interest reconstruction for cone-beam computed tomography-guided radiation therapy.

    Science.gov (United States)

    Ahmad, Moiz; Balter, Peter; Pan, Tinsu

    2011-10-01

Data sufficiency is a major problem in four-dimensional cone-beam computed tomography (4D-CBCT) on linear accelerator-integrated scanners for image-guided radiotherapy. Scan times must be in the range of 4-6 min to avoid undersampling artifacts. Various image reconstruction algorithms have been proposed to accommodate undersampled data acquisitions, but these algorithms are computationally expensive, may require long reconstruction times, and may require algorithm parameters to be optimized. The authors present a novel reconstruction method, 4D volume-of-interest (4D-VOI) reconstruction, which suppresses undersampling artifacts and resolves lung tumor motion for undersampled 1-min scans. The 4D-VOI reconstruction is much less computationally expensive than other 4D-CBCT algorithms. The 4D-VOI method uses respiration-correlated projection data to reconstruct a four-dimensional (4D) image inside a VOI containing the moving tumor, and uncorrelated projection data to reconstruct a three-dimensional (3D) image outside the VOI. Anatomical motion is resolved inside the VOI and blurred outside the VOI. The authors acquired a 1-min scan of an anthropomorphic chest phantom containing a moving water-filled sphere. The authors also used previously acquired 1-min scans for two lung cancer patients who had received CBCT-guided radiation therapy. The same raw data were used to test and compare the 4D-VOI reconstruction with the standard 4D reconstruction and the McKinnon-Bates (MB) reconstruction algorithms. Both the 4D-VOI and the MB reconstructions suppress nearly all the streak artifacts compared with the standard 4D reconstruction, but the 4D-VOI has 3-8 times greater contrast-to-noise ratio than the MB reconstruction. In the dynamic chest phantom study, the 4D-VOI and the standard 4D reconstructions both resolved a moving sphere with an 18 mm displacement. The 4D-VOI reconstruction shows a motion blur of only 3 mm, whereas the MB reconstruction shows a motion blur of 13 mm.
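The core compositing idea, phase-resolved voxels inside the VOI and motion-averaged voxels outside, reduces to a masked selection once both reconstructions exist. A toy sketch with random arrays standing in for the actual reconstructions:

```python
import numpy as np

def hybrid_4d_voi(recon_4d, recon_3d, voi_mask):
    """Composite image in the spirit of 4D-VOI reconstruction:
    phase-resolved (4-D) voxels inside the VOI, motion-averaged (3-D)
    voxels outside it. Shapes: recon_4d (phases, z, y, x),
    recon_3d (z, y, x), voi_mask boolean (z, y, x)."""
    return np.where(voi_mask, recon_4d, recon_3d)

phases, shape = 4, (8, 8, 8)
rng = np.random.default_rng(1)
recon_4d = rng.random((phases, *shape))
recon_3d = recon_4d.mean(axis=0)          # stand-in for the uncorrelated recon
voi = np.zeros(shape, dtype=bool)
voi[2:5, 2:5, 2:5] = True                 # VOI around the "tumor"
out = hybrid_4d_voi(recon_4d, recon_3d, voi)
print(out.shape)
```

In the real algorithm the two inputs come from respiration-correlated and uncorrelated projection subsets; here the broadcasting in `np.where` does the per-phase compositing.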

  5. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  6. Medication Errors in Hospitals: A Study of Factors Affecting Nursing Reporting in a Selected Center Affiliated with Shahid Beheshti University of Medical Sciences

    Directory of Open Access Journals (Sweden)

    HamidReza Mirzaee

    2015-10-01

Full Text Available Background: Medication errors are mentioned as among the most important challenges threatening healthcare systems in all countries worldwide. This study was conducted to investigate the most significant factors in refusal to report medication errors among nursing staff. Methods: The cross-sectional study was conducted on all nursing staff of a selected Education & Treatment Center in 2013. Data were collected through a teacher-made questionnaire. The questionnaire's face and content validity was confirmed by experts, and test-retest was used to measure its reliability. Data were analyzed by descriptive and analytic statistics, using the 16th version of SPSS. Results: The most important factors in refusal to report medication errors are, respectively: lack of a reporting system in the hospital (3.3%), non-significance of reporting medication errors to hospital authorities and lack of appropriate feedback (3.1%), and lack of a clear definition for a medication error (3%). There was a significant relationship between the most important factors of refusal to report medication errors and work shift (p = 0.002), age (p = 0.003), gender (p = 0.005), work experience (p < 0.001) and employment type of nurses (p = 0.002). Conclusion: Factors pertaining to management in hospitals, as well as fear of the consequences of reporting, are two broad fields among the factors that make nurses not report their medication errors. In this regard, providing enough education to nurses, boosting job security for nurses, management support and revising related processes and definitions are some factors that can help decrease medication errors and increase their reporting in case of occurrence.

  7. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
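The distinction drawn above matters because the two error types bias a dose-response slope differently: classical error (observation scattering around the truth) attenuates the slope, while Berkson error (truth scattering around the assigned dose) leaves it unbiased. A small simulation with illustrative variances, not the atomic-bomb dosimetry itself:

```python
import numpy as np

def fitted_slope(x_obs, y):
    """OLS slope of the outcome on the observed/assigned dose."""
    x_c = x_obs - x_obs.mean()
    return (x_c * (y - y.mean())).sum() / (x_c ** 2).sum()

rng = np.random.default_rng(42)
n, beta = 200_000, 2.0
true_dose = rng.normal(10, 2, n)
y = beta * true_dose + rng.normal(0, 1, n)

# Classical error: observed = truth + noise -> attenuated slope (~beta/2
# here, since the error variance equals the dose variance).
x_classical = true_dose + rng.normal(0, 2, n)

# Berkson error: truth = assigned + noise -> slope remains unbiased.
x_assigned = rng.normal(10, 2, n)
y_berkson = beta * (x_assigned + rng.normal(0, 2, n)) + rng.normal(0, 1, n)

print(round(fitted_slope(x_classical, y), 2),
      round(fitted_slope(x_assigned, y_berkson), 2))
```

The attenuation factor for classical error is the reliability ratio var(X)/(var(X)+var(U)), which is 1/2 with the variances chosen here; correcting it requires an estimate of var(U), which is exactly what motivates the instrumental-variable approach in the abstract.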

  8. 76 FR 34577 - Wassenaar Arrangement 2010 Plenary Agreements Implementation: Commerce Control List, Definitions...

    Science.gov (United States)

    2011-06-14

    ...This document corrects errors in a final rule published by the Bureau of Industry and Security (BIS) in the Federal Register on Friday, May 20, 2011 that revised the Export Administration Regulations (EAR) by amending entries for certain items that are controlled for national security reasons in Categories 1, 2, 3, 4, 5 Parts I & II, 6, 7, 8, and 9; adding and amending definitions to the EAR; and revising reporting requirements. That final rule contained errors concerning radial ball bearings, as well as editorial mistakes.

  9. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
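The two methods can be compared on a toy linear observable, where the true combined systematic error is known in closed form (the weights and distributions here are invented for illustration and do not come from the paper):

```python
import numpy as np

rng = np.random.default_rng(7)

def observable(params):
    """Toy analysis result as a linear function of 3 systematic parameters."""
    return 5.0 + params @ np.array([0.3, -0.2, 0.1])

nominal = np.zeros(3)

# Unisim: vary one parameter by one standard deviation per MC run,
# then combine the resulting shifts in quadrature.
shifts = [observable(np.eye(3)[k]) - observable(nominal) for k in range(3)]
unisim = np.sqrt(np.sum(np.square(shifts)))

# Multisim: vary all parameters at once in each run, drawn from their
# assumed normal distributions; take the spread of the results.
draws = np.array([observable(rng.normal(size=3)) for _ in range(20_000)])
multisim = draws.std()

print(round(unisim, 3), round(multisim, 3))
```

In this linear, noise-free setting both estimates converge to the same quadrature sum sqrt(0.3^2 + 0.2^2 + 0.1^2); the paper's comparison concerns which method degrades less once statistical noise in the MC samples enters.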

  10. Evaluation of elastix-based propagated align algorithm for VOI- and voxel-based analysis of longitudinal (18)F-FDG PET/CT data from patients with non-small cell lung cancer (NSCLC).

    Science.gov (United States)

    Kerner, Gerald Sma; Fischer, Alexander; Koole, Michel Jb; Pruim, Jan; Groen, Harry Jm

    2015-01-01

Deformable image registration allows volume of interest (VOI)- and voxel-based analysis of longitudinal changes in fluorodeoxyglucose (FDG) tumor uptake in patients with non-small cell lung cancer (NSCLC). This study evaluates the performance of the elastix toolbox deformable image registration algorithm for VOI and voxel-wise assessment of longitudinal variations in FDG tumor uptake in NSCLC patients. Evaluation of the elastix toolbox was performed using (18)F-FDG PET/CT data acquired at baseline and after 2 cycles of therapy (follow-up) in advanced NSCLC patients. The elastix toolbox, an integrated part of the IMALYTICS workstation, was used to apply a CT-based non-linear image registration of follow-up PET/CT data using the baseline PET/CT data as reference. Lesion statistics were compared to assess the impact on therapy response assessment. Next, CT-based deformable image registration was performed anew on the deformed follow-up PET/CT data using the original follow-up PET/CT data as reference, yielding a realigned follow-up PET dataset. Performance was evaluated by determining the correlation coefficient between original and realigned follow-up PET datasets. The intra- and extra-thoracic tumors were automatically delineated on the original PET using a 41% of maximum standardized uptake value (SUVmax) adaptive threshold. Equivalence between reference and realigned images was tested by determining the 95% range of the difference and estimating the percentage of voxel values that fell within that range. Thirty-nine patients with 191 tumor lesions were included. In 37/39 and 12/39 patients, respectively, thoracic and non-thoracic lesions were evaluable for response assessment. Using the EORTC/SUVmax-based criteria, 5/37 patients had a discordant response of thoracic lesions, and 2/12 a discordant response of non-thoracic lesions, between the reference and the realigned image.
FDG uptake values of corresponding tumor voxels in the original and realigned reference PET correlated well (R
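The 41%-of-SUVmax adaptive threshold and the voxel-wise correlation check lend themselves to a compact sketch. Synthetic arrays stand in for the PET volumes; the 0.41 fraction is the only value taken from the study.

```python
import numpy as np

def adaptive_threshold_mask(pet, fraction=0.41):
    """Delineate a lesion as all voxels at or above `fraction` of SUVmax,
    as in the 41%-of-SUVmax adaptive threshold used in the study."""
    return pet >= fraction * pet.max()

rng = np.random.default_rng(3)
pet = rng.random((16, 16, 16))
pet[6:10, 6:10, 6:10] += 4.0            # hot "lesion" on a low background
mask = adaptive_threshold_mask(pet)

# Stand-in for the realigned dataset: the original plus small residuals.
realigned = pet + rng.normal(0, 0.05, pet.shape)
r = np.corrcoef(pet[mask], realigned[mask])[0, 1]
print(mask.sum(), round(r, 3))
```

With a well-separated lesion the mask recovers exactly the hot block, and the correlation coefficient between corresponding voxels plays the role of the study's performance metric.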

  11. Multi-GNSS signal-in-space range error assessment - Methodology and results

    Science.gov (United States)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
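The SIS range error combines orbit and clock errors, with the radial orbit error and the clock error largely compensating along the line of sight. A common global-average formulation can be sketched as follows; the weights shown are the frequently quoted GPS values, and this is an illustrative sketch, not the paper's exact implementation.

```python
import math

def sisre(dr, da, dc, dclk, w_r=0.98, w_ac=1/7):
    """Global-average signal-in-space range error from broadcast-minus-
    precise orbit errors (radial dr, along-track da, cross-track dc) and
    clock error dclk, all in metres. w_r and w_ac are constellation-
    dependent projection weights (GPS-like values assumed here)."""
    return math.sqrt((w_r * dr - dclk) ** 2 + w_ac ** 2 * (da ** 2 + dc ** 2))

# Illustrative epoch: 0.2 m radial, 1.0 m along-track, 0.8 m cross-track
# orbit errors and a 0.3 m clock error.
print(round(sisre(0.2, 1.0, 0.8, 0.3), 3))
```

The small along/cross weight reflects that those orbit components project only weakly onto the user line of sight when averaged over the visible Earth.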

  12. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
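The decoding problem sketched above, choosing the code word whose error vector has the fewest ones, is minimum-Hamming-distance decoding. A minimal illustration with the length-3 repetition code:

```python
def hamming(u, v):
    """Number of positions in which two (0,1)-vectors differ."""
    return sum(a != b for a, b in zip(u, v))

def decode(received, codebook):
    """Minimum-distance decoding: pick the code word whose error
    vector (difference from the received word) has the fewest ones."""
    return min(codebook, key=lambda c: hamming(received, c))

# Repetition code of length 3: two code words.
codebook = [(0, 0, 0), (1, 1, 1)]
print(decode((1, 0, 1), codebook))
```

Comparing the received word with each code word yields the set of candidate error vectors; the decoder commits to the lightest one.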

  13. Seismic Imaging of the Lesser Antilles Subduction Zone Using S-to-P Receiver Functions: Insights From VoiLA

    Science.gov (United States)

    Chichester, B.; Rychert, C.; Harmon, N.; Rietbrock, A.; Collier, J.; Henstock, T.; Goes, S. D. B.; Kendall, J. M.; Krueger, F.

    2017-12-01

    In the Lesser Antilles subduction zone Atlantic oceanic lithosphere, expected to be highly hydrated, is being subducted beneath the Caribbean plate. Water and other volatiles from the down-going plate are released and cause the overlying mantle to melt, feeding volcanoes with magma and hence forming the volcanic island arc. However, the depths and pathways of volatiles and melt within the mantle wedge are not well known. Here, we use S-to-P receiver functions to image seismic velocity contrasts with depth within the subduction zone in order to constrain the release of volatiles and the presence of melt in the mantle wedge, as well as slab structure and arc-lithosphere structure. We use data from 55-80° epicentral distances recorded by 32 recovered broadband ocean-bottom seismometers that were deployed during the 2016-2017 Volatiles in the Lesser Antilles (VoiLA) project for 15 months on the back- and fore-arc. The S-to-P receiver functions are calculated using two methods: extended time multi-taper deconvolution followed by migration to depth to constrain 3-D discontinuity structure of the subduction zone; and simultaneous deconvolution to determine structure beneath single stations. In the south of the island arc, we image a velocity increase with depth associated with the Moho at depths of 32-40 ± 4 km on the fore- and back-arc, consistent with various previous studies. At depths of 65-80 ± 4 km beneath the fore-arc we image a strong velocity decrease with depth that is west-dipping. At 96-120 ± 5 km beneath the fore-arc, we image a velocity increase with depth that is also west-dipping. The dipping negative-positive phase could represent velocity contrasts related to the top of the down-going plate, a feature commonly imaged in subduction zone receiver function studies. The negative phase is strong, so there may also be contributions to the negative velocity discontinuity from slab dehydration and/or mantle wedge serpentinization in the fore-arc.

  14. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
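The contrast between TER and LER comes down to which prediction error drives each weight update: the compound's summed prediction, or each cue's own. A toy two-cue simulation (learning rate and trial count are arbitrary; this is a sketch of the two update rules, not the paper's model fits):

```python
def train(weights, trials, lr=0.1, rule="TER"):
    """One-outcome associative learning with two update rules:
    TER (Rescorla-Wagner style) uses the compound prediction error;
    LER updates each cue from its own individual prediction error."""
    w = list(weights)
    for cues, outcome in trials:
        total_pred = sum(w[i] for i in cues)
        for i in cues:
            error = outcome - total_pred if rule == "TER" else outcome - w[i]
            w[i] += lr * error
    return w

# Cues A (index 0) and B (index 1) always presented together in compound.
trials = [((0, 1), 1.0)] * 200
print([round(x, 2) for x in train([0.0, 0.0], trials, rule="TER")])
print([round(x, 2) for x in train([0.0, 0.0], trials, rule="LER")])
```

Under TER the two cues share the outcome (each weight converges to 0.5), while under LER each cue independently converges to the full outcome value, which is the behavioural signature the paper's model comparison exploits.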

  15. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged under 18 years. Of the error reports identified, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  16. Mathematical Definition, Mapping, and Detection of (Anti)Fragility

    OpenAIRE

    Taleb, Nassim N.; Douady, Raphael

    2012-01-01

Working papers URL: http://centredeconomiesorbonne.univ-paris1.fr/documents-de-travail/; Centre d'Economie de la Sorbonne working paper 2014.93 - ISSN: 1955-611X; We provide a mathematical definition of fragility and antifragility as negative or positive sensitivity to a semi-measure of dispersion and volatility (a variant of negative or positive "vega") and examine the link to nonlinear effects. We integrate model error (and biases) into the fragile or antifragile conte...

  17. Prevalence of refractive errors in the European adult population: the Gutenberg Health Study (GHS).

    Science.gov (United States)

    Wolfram, Christian; Höhn, René; Kottler, Ulrike; Wild, Philipp; Blettner, Maria; Bühren, Jens; Pfeiffer, Norbert; Mirshahi, Alireza

    2014-07-01

To study the distribution of refractive errors among adults of European descent. Population-based eye study in Germany with 15010 participants aged 35-74 years. The study participants underwent a detailed ophthalmic examination according to a standardised protocol. Refractive error was determined by an automatic refraction device (Humphrey HARK 599) without cycloplegia. Definitions for the analysis were myopia < -0.5 D, hyperopia > +0.5 D, astigmatism > 0.5 cylinder D and anisometropia > 1.0 D difference in the spherical equivalent between the eyes. Exclusion criterion was previous cataract or refractive surgery. 13959 subjects were eligible. Refractive errors ranged from -21.5 to +13.88 D. Myopia was present in 35.1% of this study sample, hyperopia in 31.8%, astigmatism in 32.3% and anisometropia in 13.5%. The prevalence of myopia decreased, while the prevalence of hyperopia, astigmatism and anisometropia increased with age. 3.5% of the study sample had no refractive correction for their ametropia. Refractive errors affect the majority of the population. The Gutenberg Health Study sample contains more myopes than other study cohorts in adult populations. Our findings do not support the hypothesis of a generally lower prevalence of myopia among adults in Europe as compared with East Asia.
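These refractive-error definitions translate directly into a classification rule on the spherical equivalent (SE) and cylinder values. A sketch; the SE cutoff signs (myopia < -0.5 D, hyperopia > +0.5 D) follow the conventional definitions and are assumed here.

```python
def classify(se_right, se_left, cyl_right, cyl_left):
    """Per-subject refractive-error flags: myopia SE < -0.5 D,
    hyperopia SE > +0.5 D, astigmatism cylinder > 0.5 D,
    anisometropia > 1.0 D inter-eye SE difference (assumed cutoffs)."""
    se = (se_right, se_left)
    return {
        "myopia": any(s < -0.5 for s in se),
        "hyperopia": any(s > 0.5 for s in se),
        "astigmatism": cyl_right > 0.5 or cyl_left > 0.5,
        "anisometropia": abs(se[0] - se[1]) > 1.0,
    }

# Hypothetical subject: myopic right eye, borderline left eye.
print(classify(-2.25, -0.75, 0.25, 0.75))
```

Note the categories are not exclusive, which is why the prevalences quoted above sum to well over 100%.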

  18. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)

  19. Rigorous covariance propagation of geoid errors to geodetic MDT estimates

    Science.gov (United States)

    Pail, R.; Albertella, A.; Fecher, T.; Savcenko, R.

    2012-04-01

The mean dynamic topography (MDT) is defined as the difference between the mean sea surface (MSS) derived from satellite altimetry, averaged over several years, and the static geoid. Assuming geostrophic conditions, the ocean surface velocities, an important component of global ocean circulation, can be derived from the MDT. Due to the availability of GOCE gravity field models, for the very first time MDT can now be derived solely from satellite observations (altimetry and gravity) down to spatial length-scales of 100 km and even below. Global gravity field models, parameterized in terms of spherical harmonic coefficients, are complemented by the full variance-covariance matrix (VCM). Therefore, for the geoid component a realistic statistical error estimate is available, while the error description of the altimetric component is still an open issue and is addressed, if at all, empirically. In this study we attempt to perform, based on the full gravity VCM, rigorous error propagation to the derived geostrophic surface velocities, thus also considering all correlations. For the definition of the static geoid we use the third release of the time-wise GOCE model, as well as the satellite-only combination model GOCO03S. In detail, we will investigate the velocity errors resulting from the geoid component in dependence of the harmonic degree, and the impact of using or not using covariances on the MDT errors and their correlations. When deriving an MDT, it is spectrally filtered to a certain maximum degree, which is usually driven by the signal content of the geoid model, by applying isotropic or non-isotropic filters. Since this filtering acts also on the geoid component, the consistent integration of this filter process into the covariance propagation shall be performed, and its impact shall be quantified. The study will be performed for MDT estimates in specific test areas of particular oceanographic interest.
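The rigorous propagation advocated here is, at its core, the linear law cov(y) = J cov(x) J^T applied with the full VCM rather than only its diagonal. A toy example showing how neglecting correlations changes the propagated variance:

```python
import numpy as np

def propagate(J, cov_x):
    """Rigorous linear covariance propagation: for y = J x,
    cov(y) = J cov(x) J^T, keeping all cross-correlations."""
    return J @ cov_x @ J.T

# Toy case: two strongly correlated quantities mapped to their difference.
cov_x = np.array([[1.0, 0.8],
                  [0.8, 1.0]])
J = np.array([[1.0, -1.0]])
full = propagate(J, cov_x)[0, 0]
no_corr = propagate(J, np.diag(np.diag(cov_x)))[0, 0]
print(full, no_corr)
```

For a difference of positively correlated inputs the full propagation gives a much smaller variance (0.4 vs. 2.0 here), which is why retaining the off-diagonal VCM terms matters for derived velocity errors.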

  20. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller-sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors than during unaware errors. While the significantly faster RT for aware errors (compared to unaware errors) was consistent with the hypothesis that higher response conflict increases ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  1. Dual resolution cone beam breast CT: A feasibility study

    International Nuclear Information System (INIS)

    Chen Lingyun; Shen Youtao; Lai, Chao-Jen; Han Tao; Zhong Yuncheng; Ge Shuaiping; Liu Xinming; Wang Tianpeng; Yang, Wei T.; Whitman, Gary J.; Shaw, Chris C.

    2009-01-01

Purpose: In this study, the authors investigated the feasibility of a dual resolution volume-of-interest (VOI) cone beam breast CT technique and compared two implementation approaches in terms of dose saving and scatter reduction. Methods: With this technique, a lead VOI mask with an opening is inserted between the x-ray source and the breast to deliver x-ray exposure to the VOI while blocking x-rays outside the VOI. A CCD detector is used to collect the high resolution projection data of the VOI. Low resolution cone beam CT (CBCT) images of the entire breast, acquired with a flat panel (FP) detector, were used to calculate the projection data outside the VOI with the ray-tracing reprojection method. The Feldkamp-Davis-Kress filtered backprojection algorithm was used to reconstruct the dual resolution 3D images. Breast phantoms with 180 μm and smaller microcalcifications (MCs) were imaged with both the FP and the FP-CCD dual resolution CBCT systems. Two approaches to implementing the dual resolution technique, the breast-centered approach and the VOI-centered approach, were investigated and evaluated for dose saving and scatter reduction with Monte Carlo simulation using the GEANT4 package. Results: The results showed that the breast-centered approach saved more breast absorbed dose than the VOI-centered approach did, with similar scatter reduction. The MCs in the fatty breast phantom, which were invisible with the FP CBCT scan, became visible with the FP-CCD dual resolution CBCT scan. Conclusions: These results indicate potential improvement of the image quality inside the VOI with reduced breast dose both inside and outside the VOI.

  2. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  3. The Impact of Error-Management Climate, Error Type and Error Originator on Auditors’ Reporting Errors Discovered on Audit Work Papers

    NARCIS (Netherlands)

    A.H. Gold-Nöteberg (Anna); U. Gronewold (Ulfert); S. Salterio (Steve)

    2010-01-01

We examine factors affecting auditors' willingness to report their own or their peers' self-discovered errors in working papers subsequent to detailed working paper review. Prior research has shown that errors in working papers are detected in the review process; however, such

  4. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, if the agent keeps a memory of his errors, an acceptable solution is asymptotically reached under mild assumptions. Moreover, one can take advantage of big errors for faster learning.
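The mechanism can be caricatured in a few lines: an action that proves to be an error eliminates its whole neighborhood, so the surviving candidate set shrinks until an acceptable action is found. This is a toy sketch under invented assumptions (a grid of actions, a fixed rejection radius), not the paper's formal model:

```python
# Toy version of "learning from errors": every erroneous action eliminates
# its whole neighborhood, so the search converges to an acceptable action.
def learn(actions, is_error, radius):
    """Try surviving candidates in order; errors prune their neighborhoods."""
    remaining = sorted(actions)
    while remaining:
        a = remaining[0]                    # try the next surviving candidate
        if not is_error(a):
            return a                        # an acceptable action survives
        remaining = [b for b in remaining if abs(b - a) > radius]
    return None

acts = [i / 100 for i in range(101)]        # candidate actions on a grid in [0, 1]
bad = lambda a: abs(a - 0.73) > 0.02        # everything far from 0.73 is an error
print(learn(acts, bad, radius=0.05))        # terminates near 0.73
```

A larger rejection radius, which a big error can justify, eliminates more candidates per trial; this is the sense in which big errors speed up learning.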

  5. Assessing energy forecasting inaccuracy by simultaneously considering temporal and absolute errors

    International Nuclear Information System (INIS)

    Frías-Paredes, Laura; Mallor, Fermín; Gastón-Romeo, Martín; León, Teresa

    2017-01-01

Highlights: • A new method to match time series is defined to assess energy forecasting accuracy. • This method relies on a new family of step patterns that optimizes the MAE. • A new definition of the Temporal Distortion Index between two series is provided. • A parametric extension controls both the temporal distortion index and the MAE. • Pareto optimal transformations of the forecast series are obtained for both indexes. - Abstract: Recent years have seen a growing trend in wind and solar energy generation globally, and it is expected that an important percentage of total energy production will come from these energy sources. However, they present inherent variability that implies fluctuations in energy generation that are difficult to forecast. Thus, forecasting errors play a considerable role in the impacts and costs of renewable energy integration, management, and commercialization. This study presents an important advance in the task of analyzing prediction models, in particular, in the timing component of prediction error, which improves previous pioneering results. A new method to match time series is defined in order to assess energy forecasting accuracy. This method relies on a new family of step patterns, an essential component of the algorithm to evaluate the temporal distortion index (TDI). This family minimizes the mean absolute error (MAE) of the transformation with respect to the reference series (the real energy series) and also allows detailed control of the temporal distortion entailed in the prediction series. The simultaneous consideration of temporal and absolute errors allows the use of Pareto frontiers as characteristic error curves. Real examples of wind energy forecasts are used to illustrate the results.
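The trade-off between temporal and absolute error can be illustrated with a deliberately crude stand-in for the step-pattern machinery: score each candidate integer lag by its MAE and treat the (lag, MAE) pairs as points on a Pareto-style error curve. The series, the lag-only transformations, and the scoring below are invented for illustration; the paper's method is far richer:

```python
import numpy as np

# Crude illustration of the temporal-vs-absolute error trade-off. The real
# method uses a family of step patterns, not plain integer lags.
t = np.arange(96)
observed = np.sin(2 * np.pi * t / 24)         # "real" energy series (toy)
forecast = np.sin(2 * np.pi * (t + 3) / 24)   # forecast runs 3 steps early

def mae_at_lag(lag):
    """MAE after delaying the forecast by `lag` samples (a toy transformation)."""
    if lag == 0:
        return float(np.mean(np.abs(observed - forecast)))
    return float(np.mean(np.abs(observed[lag:] - forecast[:-lag])))

# Each (lag, MAE) pair is one candidate transformation of the forecast series;
# small lag means low temporal distortion, small MAE means low absolute error.
curve = [(lag, mae_at_lag(lag)) for lag in range(6)]
best_lag = min(curve, key=lambda p: p[1])[0]
print(best_lag)   # the MAE-optimal transformation recovers the 3-step shift
```

In the paper's framework the single lag is replaced by a TDI computed from an optimal warping path, and the Pareto frontier over (TDI, MAE) characterizes the forecast.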

  6. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

Data transfer in distributed environments is prone to frequent failures resulting from back-end system-level problems, such as connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the user's point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher-level planners to make better and more accurate decisions. It is necessary to have well-defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve the arrangement of data transfer operations and to enhance the decision making of data transfer schedulers.

  7. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

For the first time in 200 years, Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...
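The building-kit structure described, a random contribution taken from a confidence interval plus a bound on the unknown systematic bias, can be sketched as follows (the t factor, readings, and bias bound are illustrative values, not from the book):

```python
import math
import statistics

def overall_uncertainty(samples, systematic_bound, t_factor=2.26):
    """Random part (Student t interval on the mean) plus a systematic bias
    bound, added linearly rather than in quadrature, as the book advocates.
    t_factor=2.26 is the two-sided 95% Student t quantile for 9 dof."""
    s = statistics.stdev(samples)
    n = len(samples)
    return t_factor * s / math.sqrt(n) + systematic_bound

# Ten repeated readings of a nominal 10-unit quantity (invented data):
readings = [9.98, 10.02, 10.01, 9.97, 10.03, 10.00, 9.99, 10.01, 10.02, 9.98]
print(round(overall_uncertainty(readings, systematic_bound=0.05), 3))
```

The linear (rather than root-sum-square) combination reflects the book's treatment of the systematic term as a worst-case bias bound, not another random variable.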

  8. A template-based procedure for determining white matter integrity in the internal capsule early after stroke

    Directory of Open Access Journals (Sweden)

    Matthew A. Petoe

    2014-01-01

Full Text Available The integrity of descending white matter pathways, measured by fractional anisotropy from DW-MRI, is a key prognostic indicator of motor recovery after stroke. Barriers to translation of fractional anisotropy measures into routine clinical practice include the time required for manually delineating volumes of interest (VOIs), and inter-examiner variability in this process. This study investigated whether registering and then editing template volumes of interest ‘as required’ would improve inter-examiner reliability compared with manual delineation, without compromising validity. MRI was performed with 30 sub-acute stroke patients with motor deficits (mean NIHSS = 11, range 0–17). Four independent examiners manually delineated VOIs for the posterior limbs of the internal capsules on T1 images, or edited template VOIs that had been registered to the T1 images if they encroached on ventricles or basal ganglia. Fractional anisotropy within each VOI and interhemispheric asymmetry were then calculated. We found that 13/30 registered template VOIs required editing. Edited template VOIs were more spatially similar between examiners than the manually delineated VOIs (p = 0.005). Both methods produced similar asymmetry values that correlated with clinical scores, with near perfect levels of agreement between examiners. Contralesional fractional anisotropy correlated with age when edited template VOIs were used but not when VOIs were manually delineated. Editing template VOIs as required is reliable, increases the validity of fractional anisotropy measurements in the posterior limb of the internal capsule, and is less time-consuming than manual delineation. This approach could support the use of FA asymmetry measures in routine clinical practice.
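The interhemispheric asymmetry computed from the two posterior-limb VOIs is typically a normalized difference of the mean FA values; one common convention (the study's exact definition may differ) is:

```python
def fa_asymmetry(fa_ipsilesional, fa_contralesional):
    """Normalized interhemispheric FA asymmetry; positive when the
    ipsilesional side has lost anisotropy. One common convention only."""
    return ((fa_contralesional - fa_ipsilesional)
            / (fa_contralesional + fa_ipsilesional))

# Illustrative (invented) mean FA values from the two internal-capsule VOIs:
print(round(fa_asymmetry(0.55, 0.62), 3))
```

The normalization keeps the index dimensionless and bounded, which is part of why such asymmetry measures transfer across scanners and patients better than raw FA.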

  9. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that, Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  10. Polynomial theory of error correcting codes

    CERN Document Server

    Cancellieri, Giovanni

    2015-01-01

    The book offers an original view on channel coding, based on a unitary approach to block and convolutional codes for error correction. It presents both new concepts and new families of codes. For example, lengthened and modified lengthened cyclic codes are introduced as a bridge towards time-invariant convolutional codes and their extension to time-varying versions. The novel families of codes include turbo codes and low-density parity check (LDPC) codes, the features of which are justified from the structural properties of the component codes. Design procedures for regular LDPC codes are proposed, supported by the presented theory. Quasi-cyclic LDPC codes, in block or convolutional form, represent one of the most original contributions of the book. The use of more than 100 examples allows the reader gradually to gain an understanding of the theory, and the provision of a list of more than 150 definitions, indexed at the end of the book, permits rapid location of sought information.

  11. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  12. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

Full Text Available Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it is shown that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than the 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not yield any significant benefits in this case.
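The decomposition the paper performs can be illustrated with synthetic trigger data: per-direction mean offsets stand in for the systematic error (pre-travel variation), while the scatter about those means stands in for the random error (unidirectional repeatability). All magnitudes below are invented:

```python
import numpy as np

# Synthetic probing data: a direction-dependent pre-travel (systematic)
# plus per-trigger noise (random). Units are nominal micrometres.
rng = np.random.default_rng(1)
directions = 8
pre_travel = 5.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, directions,
                                      endpoint=False))
trials = pre_travel[:, None] + rng.normal(0.0, 0.5, size=(directions, 25))

means = trials.mean(axis=1)
systematic_span = means.max() - means.min()    # ~ pre-travel variation
random_spread = trials.std(axis=1).mean()      # ~ unidirectional repeatability

print(systematic_span > random_spread)
```

In this kinematic-probe-like regime the systematic span dominates, which is why compensation pays off; shrinking the sinusoidal term toward the noise level reproduces the strain-gauge case, where it would not.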

  13. Realization of fluence field modulated CT on a clinical TomoTherapy megavoltage CT system

    International Nuclear Information System (INIS)

    Szczykutowicz, Timothy P; Hermus, James; Geurts, Mark; Smilowitz, Jennifer

    2015-01-01

The multi-leaf collimator (MLC) assembly present on TomoTherapy (Accuray, Madison WI) radiation therapy (RT) and megavoltage CT machines is well suited to perform fluence field modulated CT (FFMCT). In addition, there is a demand in the RT environment for FFMCT imaging techniques, specifically volume of interest (VOI) imaging. A clinical TomoTherapy machine was programmed to perform VOI imaging. Four different-sized ROIs were placed at varying distances from isocenter. Projections intersecting the VOI received ‘full dose’ while those not intersecting the VOI received 30% of the dose (i.e. the incident fluence for non-VOI projections was 30% of the incident fluence for projections intersecting the VOI). Additional scans without fluence field modulation were acquired at ‘full’ and 30% dose. The noise (pixel standard deviation) and mean CT number were measured inside the VOI region and compared between the three scans. Dose maps were generated using a dedicated TomoTherapy treatment planning dose calculator. The VOI-FFMCT technique produced an image noise 1.05, 1.00, 1.03, and 1.05 times higher than the ‘full dose’ scan for ROI sizes of 10 cm, 13 cm, 10 cm, and 6 cm respectively within the VOI region. The VOI-FFMCT technique required a total imaging dose equal to 0.61, 0.69, 0.60, and 0.50 times the ‘full dose’ acquisition dose for ROI sizes of 10 cm, 13 cm, 10 cm, and 6 cm respectively within the VOI region. Noise levels can be kept almost unchanged within clinically relevant VOI sizes for RT applications while the integral imaging dose to the patient is decreased, and/or the image quality in RT can be dramatically increased with no change in dose relative to non-FFMCT RT imaging. The ability to shift dose away from regions unimportant for clinical evaluation in order to improve image quality or reduce imaging dose has been demonstrated. This paper demonstrates that FFMCT can be performed using the MLC on a clinical TomoTherapy machine for the
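The reported dose figures are consistent with a simple back-of-envelope model (my own, not the paper's calculation): if a fraction f of the projections intersect the VOI at full fluence and the remainder are modulated down to 30%, the dose relative to an unmodulated scan is f + 0.3(1 - f):

```python
# Toy dose model for VOI fluence-field modulation (illustrative only):
def relative_dose(frac_voi, outside_level=0.3):
    """Imaging dose relative to an unmodulated full-dose scan."""
    return frac_voi + (1 - frac_voi) * outside_level

# If roughly 30-50% of the projection fluence intersects the VOI, the model
# lands in the 0.50-0.69 relative-dose range reported above:
print(round(relative_dose(0.3), 2), round(relative_dose(0.5), 2))
```

Meanwhile, noise inside the VOI is governed almost entirely by the full-fluence projections through it, which is why it stays within about 5% of the full-dose scan.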

  14. MR-based automatic delineation of volumes of interest in human brain PET images using probability maps

    DEFF Research Database (Denmark)

    Svarer, Claus; Madsen, Karina; Hasselbalch, Steen G.

    2005-01-01

subjects' MR-images, where VOI sets have been defined manually. High-resolution structural MR-images and 5-HT(2A) receptor binding PET-images (in terms of (18)F-altanserin binding) from 10 healthy volunteers and 10 patients with mild cognitive impairment were included for the analysis. A template including...... 35 VOIs was manually delineated on the subjects' MR images. Through a warping algorithm, template VOI sets defined from each individual were transferred to the other subjects' MR-images and the voxel overlap was compared to the VOI set specifically drawn for that particular individual. Comparisons were...... delineation of the VOI set. The approach was also shown to work equally well in individuals with pronounced cerebral atrophy. Probability-map-based automatic delineation of VOIs is a fast, objective, reproducible, and safe way to assess regional brain values from PET or SPECT scans. In addition, the method...

  15. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  16. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  17. Detected-jump-error-correcting quantum codes, quantum error designs, and quantum computation

    International Nuclear Information System (INIS)

    Alber, G.; Mussinger, M.; Beth, Th.; Charnes, Ch.; Delgado, A.; Grassl, M.

    2003-01-01

    The recently introduced detected-jump-correcting quantum codes are capable of stabilizing qubit systems against spontaneous decay processes arising from couplings to statistically independent reservoirs. These embedded quantum codes exploit classical information about which qubit has emitted spontaneously and correspond to an active error-correcting code embedded in a passive error-correcting code. The construction of a family of one-detected-jump-error-correcting quantum codes is shown and the optimal redundancy, encoding, and recovery as well as general properties of detected-jump-error-correcting quantum codes are discussed. By the use of design theory, multiple-jump-error-correcting quantum codes can be constructed. The performance of one-jump-error-correcting quantum codes under nonideal conditions is studied numerically by simulating a quantum memory and Grover's algorithm

  18. Perceptual learning eases crowding by reducing recognition errors but not position errors.

    Science.gov (United States)

    Xiong, Ying-Zi; Yu, Cong; Zhang, Jun-Yun

    2015-08-01

When an observer reports a letter flanked by additional letters in the visual periphery, the response errors (the crowding effect) may result from failure to recognize the target letter (recognition errors), from mislocating a correctly recognized target letter at a flanker location (target misplacement errors), or from reporting a flanker as the target letter (flanker substitution errors). Crowding can be reduced through perceptual learning. However, it is not known how perceptual learning operates to reduce crowding. In this study we trained observers with a partial-report task (Experiment 1), in which they reported the central target letter of a three-letter string presented in the visual periphery, or a whole-report task (Experiment 2), in which they reported all three letters in order. We then assessed the impact of training on recognition of both unflanked and flanked targets, with particular attention to how perceptual learning affected the types of errors. Our results show that training improved target recognition but not single-letter recognition, indicating that training indeed affected crowding. However, training did not reduce target misplacement errors or flanker substitution errors. This dissociation between target recognition and flanker substitution errors supports the view that flanker substitution is more likely to be a by-product (due to response bias) than a cause of crowding. Moreover, the dissociation is not consistent with hypothesized mechanisms of crowding that would predict reduced positional errors.
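The three error types can be made concrete with a toy scoring rule for whole-report trials (the rules and labels below are illustrative, not the study's actual scoring code):

```python
# Toy classifier for whole-report crowding trials. `string` and `report`
# are 3-letter sequences; the middle letter is the target.
def classify_trial(string, report):
    target = string[1]
    if report[1] == target:
        return "correct"
    if target in (report[0], report[2]):
        return "target misplacement"      # target recognized but mislocated
    if report[1] in (string[0], string[2]):
        return "flanker substitution"     # a flanker reported at the target slot
    return "recognition error"            # target neither recognized nor mislocated

print(classify_trial("ABC", "ABC"))   # correct
print(classify_trial("ABC", "BAC"))   # target misplacement
print(classify_trial("ABC", "ACC"))   # flanker substitution
print(classify_trial("ABC", "AXC"))   # recognition error
```

Tabulating trials by category before and after training is what reveals the study's key dissociation: recognition errors fall, while misplacement and substitution errors do not.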

  19. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  20. Error analysis for determination of accuracy of an ultrasound navigation system for head and neck surgery.

    Science.gov (United States)

    Kozak, J; Krysztoforski, K; Kroll, T; Helbig, S; Helbig, M

    2009-01-01

The use of conventional CT- or MRI-based navigation systems for head and neck surgery is unsatisfactory due to tissue shift. Moreover, changes occurring during surgical procedures cannot be visualized. To overcome these drawbacks, we developed a novel ultrasound-guided navigation system for head and neck surgery. A comprehensive error analysis was undertaken to determine the accuracy of this new system. The evaluation of the system accuracy was essentially based on the method of error definition for well-established fiducial marker registration methods (point-pair matching) as used in, for example, CT- or MRI-based navigation. This method was modified in accordance with the specific requirements of ultrasound-guided navigation. The Fiducial Localization Error (FLE), Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. In our navigation system, the real error (the TRE actually measured) did not exceed a volume of 1.58 mm³ with a probability of 0.9. A mean value of 0.8 mm (standard deviation: 0.25 mm) was found for the FRE. The quality of the coordinate tracking system (Polaris localizer) could be defined with an FLE of 0.4 ± 0.11 mm (mean ± standard deviation). The quality of the coordinates of the crosshairs of the phantom was determined with a deviation of 0.5 mm (standard deviation: 0.07 mm). The results demonstrate that our newly developed ultrasound-guided navigation system shows only very small system deviations and therefore provides very accurate data for practical applications.
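The FLE/FRE/TRE triple can be illustrated with a synthetic point-pair registration: FLE perturbs the localized fiducials, FRE is the post-fit residual on those fiducials, and TRE is the error at a separate target point. The geometry, noise level, and fiducial count below are invented; only the error definitions follow the standard point-pair-matching framework the abstract cites:

```python
import numpy as np

# Synthetic fiducial registration with a Kabsch rigid-transform fit.
rng = np.random.default_rng(2)
fiducials = rng.uniform(-50.0, 50.0, size=(6, 3))     # mm, patient space
target = np.array([10.0, -5.0, 20.0])                 # separate target point

theta = 0.3                                           # ground-truth transform
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
t0 = np.array([2.0, -1.0, 3.0])

fle = 0.4                                             # mm, localization noise
measured = fiducials @ R0.T + t0 + rng.normal(0.0, fle, size=fiducials.shape)

# Kabsch fit of the rigid transform from the noisy point pairs:
pc, qc = fiducials.mean(axis=0), measured.mean(axis=0)
H = (fiducials - pc).T @ (measured - qc)
U, _, Vt = np.linalg.svd(H)
d = np.sign(np.linalg.det(Vt.T @ U.T))                # guard against reflections
R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
t = qc - R @ pc

fre = np.sqrt(np.mean(np.sum((fiducials @ R.T + t - measured) ** 2, axis=1)))
tre = np.linalg.norm((R @ target + t) - (R0 @ target + t0))
print(round(fre, 2), round(tre, 2))
```

With an FLE of 0.4 mm and six fiducials, both errors land comfortably in the sub-millimetre range, comparable in magnitude to the values the abstract reports; fewer fiducials or a larger FLE inflates both.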

  1. The Errors of Our Ways: Understanding Error Representations in Cerebellar-Dependent Motor Learning.

    Science.gov (United States)

    Popa, Laurentiu S; Streng, Martha L; Hewitt, Angela L; Ebner, Timothy J

    2016-04-01

    The cerebellum is essential for error-driven motor learning and is strongly implicated in detecting and correcting for motor errors. Therefore, elucidating how motor errors are represented in the cerebellum is essential in understanding cerebellar function, in general, and its role in motor learning, in particular. This review examines how motor errors are encoded in the cerebellar cortex in the context of a forward internal model that generates predictions about the upcoming movement and drives learning and adaptation. In this framework, sensory prediction errors, defined as the discrepancy between the predicted consequences of motor commands and the sensory feedback, are crucial for both on-line movement control and motor learning. While many studies support the dominant view that motor errors are encoded in the complex spike discharge of Purkinje cells, others have failed to relate complex spike activity with errors. Given these limitations, we review recent findings in the monkey showing that complex spike modulation is not necessarily required for motor learning or for simple spike adaptation. Also, new results demonstrate that the simple spike discharge provides continuous error signals that both lead and lag the actual movements in time, suggesting errors are encoded as both an internal prediction of motor commands and the actual sensory feedback. These dual error representations have opposing effects on simple spike discharge, consistent with the signals needed to generate sensory prediction errors used to update a forward internal model.
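The sensory prediction error defined in this review, the discrepancy between the predicted consequences of a motor command and the sensory feedback, is what drives adaptation of a forward internal model. A minimal scalar sketch (the plant gain, learning rate, and update rule are invented for illustration):

```python
# Minimal forward-model sketch: predict the sensory consequence of a motor
# command, compare it with the feedback, and adapt the model from the
# resulting sensory prediction error.
gain_true = 1.3          # actual plant gain (unknown to the model)
gain_hat = 1.0           # forward model's current estimate
lr = 0.5                 # learning rate

for _ in range(20):
    command = 2.0
    predicted = gain_hat * command            # forward-model prediction
    feedback = gain_true * command            # actual sensory consequence
    prediction_error = feedback - predicted   # sensory prediction error
    gain_hat += lr * prediction_error / command

print(round(gain_hat, 3))   # converges to the true gain, 1.3
```

The review's point is that cerebellar simple spike discharge appears to carry both sides of this computation, the prediction and the feedback, with opposing signs.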

  2. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  3. Computer-based route-definition system for peripheral bronchoscopy.

    Science.gov (United States)

    Graham, Michael W; Gibbs, Jason D; Higgins, William E

    2012-04-01

    Multi-detector computed tomography (MDCT) scanners produce high-resolution images of the chest. Given a patient's MDCT scan, a physician can use an image-guided intervention system to first plan and later perform bronchoscopy to diagnostic sites situated deep in the lung periphery. An accurate definition of complete routes through the airway tree leading to the diagnostic sites, however, is vital for avoiding navigation errors during image-guided bronchoscopy. We present a system for the robust definition of complete airway routes suitable for image-guided bronchoscopy. The system incorporates both automatic and semiautomatic MDCT analysis methods for this purpose. Using an intuitive graphical user interface, the user invokes automatic analysis on a patient's MDCT scan to produce a series of preliminary routes. Next, the user visually inspects each route and quickly corrects the observed route defects using the built-in semiautomatic methods. Application of the system to a human study for the planning and guidance of peripheral bronchoscopy demonstrates the efficacy of the system.

  4. Rotational error in path integration: encoding and execution errors in angle reproduction.

    Science.gov (United States)

    Chrastil, Elizabeth R; Warren, William H

    2017-06-01

    Path integration is fundamental to human navigation. When a navigator leaves home on a complex outbound path, they are able to keep track of their approximate position and orientation and return to their starting location on a direct homebound path. However, there are several sources of error during path integration. Previous research has focused almost exclusively on encoding error-the error in registering the outbound path in memory. Here, we also consider execution error-the error in the response, such as turning and walking a homebound trajectory. In two experiments conducted in ambulatory virtual environments, we examined the contribution of execution error to the rotational component of path integration using angle reproduction tasks. In the reproduction tasks, participants rotated once and then rotated again to face the original direction, either reproducing the initial turn or turning through the supplementary angle. One outstanding difficulty in disentangling encoding and execution error during a typical angle reproduction task is that as the encoding angle increases, so does the required response angle. In Experiment 1, we dissociated these two variables by asking participants to report each encoding angle using two different responses: by turning to walk on a path parallel to the initial facing direction in the same (reproduction) or opposite (supplementary angle) direction. In Experiment 2, participants reported the encoding angle by turning both rightward and leftward onto a path parallel to the initial facing direction, over a larger range of angles. The results suggest that execution error, not encoding error, is the predominant source of error in angular path integration. These findings also imply that the path integrator uses an intrinsic (action-scaled) rather than an extrinsic (objective) metric.
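
The encoding/execution decomposition can be illustrated with a toy simulation in which the reported angle inherits independent Gaussian noise from each stage. All parameter values here are illustrative assumptions, not fitted to the study's data:

```python
import random
import statistics

# Toy model: the reported angle passes through two noisy stages.
def reproduce(angle_deg, enc_sd, exe_sd, rng):
    encoded = angle_deg + rng.gauss(0, enc_sd)   # encoding: registering the turn
    return encoded + rng.gauss(0, exe_sd)        # execution: producing the response

rng = random.Random(0)
responses = [reproduce(90.0, enc_sd=2.0, exe_sd=10.0, rng=rng)
             for _ in range(5000)]

# Independent stages add in variance: total sd ~ sqrt(2^2 + 10^2) ~ 10.2 deg,
# dominated here by execution noise, echoing the study's conclusion.
print(statistics.mean(responses), statistics.pstdev(responses))
```

The experimental challenge the abstract describes is precisely that a single reproduction task observes only the sum of the two noise terms, which is why the authors dissociate them with different response mappings.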

  5. Investigation of error sources in regional inverse estimates of greenhouse gas emissions in Canada

    Science.gov (United States)

    Chan, E.; Chan, D.; Ishizawa, M.; Vogel, F.; Brioude, J.; Delcloo, A.; Wu, Y.; Jin, B.

    2015-08-01

    Inversion models use atmospheric concentration measurements to estimate surface fluxes. This study evaluates the errors in a regional flux inversion model for three Canadian provinces: Alberta (AB), Saskatchewan (SK) and Ontario (ON). Using CarbonTracker model results as the target, synthetic-data experiments examined the impacts of errors from the Bayesian optimisation method, the prior flux distribution and the atmospheric transport model, as well as their interactions. The scaling factors for different sub-regions were estimated by Markov chain Monte Carlo (MCMC) simulation and cost function minimization (CFM). The CFM results are sensitive to the relative size of the assumed model-observation mismatch and prior flux error variances. The experiments show that the estimation error increases with the number of sub-regions when the CFM method is used. For the region definitions that lead to realistic flux estimates, the numbers of sub-regions for the western region (AB/SK combined) and the eastern region (ON) are 11 and 4, respectively. The corresponding annual flux estimation errors for the western and eastern regions using the MCMC (CFM) method are -7 and -3 % (0 and 8 %), respectively, when there is only prior flux error. The estimation errors increase to 36 and 94 % (40 and 232 %) from transport model error alone. When prior and transport model errors coexist in the inversions, the estimation errors become 5 and 85 % (29 and 201 %). This indicates that estimation errors are dominated by the transport model error, that errors can partially cancel each other, and that they propagate to the flux estimates non-linearly. In addition, the posterior flux estimates can differ more from the target fluxes than the prior does, and the posterior uncertainty estimates can be unrealistically small, failing to cover the target. The systematic evaluation of the different components of the inversion
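
The sensitivity to the assumed error variances follows from the standard Bayesian cost function minimized in such inversions. This is the generic textbook form, not an equation reproduced from the paper:

```latex
% s: scaling factors / fluxes, s_p: prior, y: observations, H: transport operator
% B: prior flux error covariance, R: model-observation mismatch covariance
J(\mathbf{s}) = \tfrac{1}{2}(\mathbf{s}-\mathbf{s}_p)^{T}\mathbf{B}^{-1}(\mathbf{s}-\mathbf{s}_p)
              + \tfrac{1}{2}(\mathbf{H}\mathbf{s}-\mathbf{y})^{T}\mathbf{R}^{-1}(\mathbf{H}\mathbf{s}-\mathbf{y})
```

The relative weighting of the two terms is set by B and R, which is why misspecifying either variance shifts the CFM estimates.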

  6. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    The transmission of any data is subject to corruption by errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. Errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error-correction information, together with several error concealment techniques in the decoder. The decoder resynchronizes more quickly, and with fewer errors, than traditional resynchronization techniques. It also allows perfect recovery of differentially encoded DCT-DC components and motion vectors. The result is a much higher quality picture in an error-prone environment, with almost imperceptible degradation of the picture in an error-free environment.
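
The data-hiding idea can be illustrated with least-significant-bit (LSB) embedding in quantized coefficients. This is a deliberately simplified sketch of the general technique, not the MPEG-2 codec's actual scheme:

```python
# Minimal sketch of data hiding via least-significant-bit (LSB) embedding.
# Illustrative only; real codecs hide bits more robustly than raw LSBs.

def embed_bits(coeffs, bits):
    """Hide one payload bit in the LSB of each coefficient."""
    return [(c & ~1) | b for c, b in zip(coeffs, bits)]

def extract_bits(coeffs):
    """Recover the hidden bits from the LSBs."""
    return [c & 1 for c in coeffs]

coeffs = [52, 61, 70, 85]      # stand-in for quantized DCT coefficients
payload = [1, 0, 1, 1]         # e.g. resynchronization / error-correction bits
stego = embed_bits(coeffs, payload)
assert extract_bits(stego) == payload
```

Because each coefficient changes by at most one quantization step, the embedded side information is nearly imperceptible in an error-free channel, matching the behavior the abstract reports.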

  7. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
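
The propagation rules for single and several measured values mentioned above are the standard first-order (delta-method) formulas; a sketch under that assumption, not the chapter's own derivation:

```python
import math

# First-order error propagation for independent measurements (textbook formulas).

def sigma_sum(sx, sy):
    # z = x + y (or x - y):  sigma_z^2 = sx^2 + sy^2
    return math.hypot(sx, sy)

def sigma_product(x, sx, y, sy):
    # z = x * y:  (sigma_z / z)^2 = (sx / x)^2 + (sy / y)^2
    return abs(x * y) * math.hypot(sx / x, sy / y)

# Example in the materials-balance spirit: net mass = gross - tare.
print(sigma_sum(0.3, 0.4))                 # ~0.5
print(sigma_product(10.0, 0.1, 5.0, 0.05))
```

A materials balance is a signed sum of many such measured terms, so its variance is the sum of the individual variances plus any covariance terms, which is where the chapter's more general treatment comes in.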

  8. Measurement Matters: Comparing Old and New Definitions of Rape in Federal Statistical Reporting.

    Science.gov (United States)

    Bierie, David M; Davis-Siegel, James C

    2015-10-01

    National statistics on the incidence of rape play an important role in the work of policymakers and academics. The Uniform Crime Reports (UCR) have provided some of the most widely used and influential statistics on the incidence of rape across the United States over the past 80 years. The definition of rape used by UCR changed in 2012 to include substantially more types of sexual assault. This article draws on 20 years of data from the National Incident-Based Reporting System to describe the impact this definitional change will have on estimates of the incidence of rape and trends over time. Drawing on time series as well as panel random effects methodologies, we show that 40% of sexual assaults have been excluded by the prior definition and that the magnitude of this error has grown over time. However, the overall trend in rape over time (year-to-year change) was not substantially different when comparing events meeting the prior definition and the subgroups of sexual assault that will now be counted. © The Author(s) 2014.

  9. The economics of health care quality and medical errors.

    Science.gov (United States)

    Andel, Charles; Davidow, Stephen L; Hollander, Mark; Moreno, David A

    2012-01-01

    Hospitals have been looking for ways to improve quality and operational efficiency and cut costs for nearly three decades, using a variety of quality improvement strategies. However, based on recent reports, approximately 200,000 Americans die each year from preventable medical errors, including facility-acquired conditions, and millions more may experience errors. In 2008, medical errors cost the United States $19.5 billion. About 87 percent, or $17 billion, was directly associated with additional medical costs, including ancillary services, prescription drug services, and inpatient and outpatient care, according to a study sponsored by the Society of Actuaries and conducted by Milliman in 2010. A further $1.4 billion was attributed to increased mortality rates, with $1.1 billion, or 10 million days, of lost productivity from missed work based on short-term disability claims. The authors estimate that the economic impact is much higher, perhaps nearly $1 trillion annually, when quality-adjusted life years (QALYs) are applied to those who die. Using the Institute of Medicine's (IOM) estimate of 98,000 deaths due to preventable medical errors annually in its 1999 report, To Err Is Human, and an average of ten lost years of life at $75,000 to $100,000 per year, there is a loss of $73.5 billion to $98 billion in QALYs for those deaths, conservatively. These numbers are much greater than those from studies that explore only the direct costs of medical errors. And if the estimate in a recent Health Affairs article is correct (preventable deaths being ten times the IOM estimate), the cost is $735 billion to $980 billion. Quality care is less expensive care. It is better, more efficient, and by definition less wasteful. It is the right care, at the right time, every time. It should mean that far fewer patients are harmed or injured. Obviously, quality care is not being delivered consistently throughout U.S. hospitals. Whatever the measure, poor quality is costing payers and
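
The QALY figures in the abstract follow from straightforward arithmetic on the quoted inputs:

```python
# Reproducing the abstract's back-of-envelope QALY loss estimate.
deaths_iom = 98_000                        # IOM annual estimate of preventable deaths
years_lost = 10                            # average lost life-years per death
value_low, value_high = 75_000, 100_000    # dollars per quality-adjusted life year

low = deaths_iom * years_lost * value_low     # $73.5 billion
high = deaths_iom * years_lost * value_high   # $98.0 billion
print(low, high)   # 73500000000 98000000000
```

Scaling the death count by ten, as the cited Health Affairs estimate suggests, multiplies both bounds by ten, giving the $735 billion to $980 billion range.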

  10. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scale images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are first predicted from their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the predictions of the pixels' PEs can be determined. And, a sorting technique based on th...
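
The two-stage prediction the abstract describes can be sketched as follows. The neighbor predictor and the expansion-style embedding rule are simple stand-ins chosen for illustration, not the authors' exact algorithm:

```python
# Stage 1: predict a pixel from its neighbors; the residual is the PE.
def prediction_error(pixel, left, top):
    return pixel - (left + top) // 2          # simple neighbor predictor (assumption)

# Stage 2: predict the PE itself from neighboring PEs; the residual is the PPE.
def ppe(pe, neighbor_pes):
    return pe - sum(neighbor_pes) // len(neighbor_pes)

# A reversible way to load one bit into the PPE (difference-expansion style):
def embed(ppe_val, bit):
    return 2 * ppe_val + bit                  # expand the PPE and append the bit

def extract(v):
    return v >> 1, v & 1                      # lossless inverse (floor semantics)

assert extract(embed(-3, 1)) == (-3, 1)       # works for negative residuals too
assert extract(embed(4, 0)) == (4, 0)
```

Reversibility is the key property: the receiver recovers both the payload bit and the original PPE exactly, so the cover image can be restored bit for bit.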

  11. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean = 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  12. Computed-tomography-guided anatomic standardization for quantitative assessment of dopamine transporter SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Yokoyama, Kota [National Center of Neurology and Psychiatry, Department of Radiology, Tokyo (Japan); National Center of Neurology and Psychiatry, Integrative Brain Imaging Center, Tokyo (Japan); Imabayashi, Etsuko; Matsuda, Hiroshi [National Center of Neurology and Psychiatry, Integrative Brain Imaging Center, Tokyo (Japan); Sumida, Kaoru; Sone, Daichi; Kimura, Yukio; Sato, Noriko [National Center of Neurology and Psychiatry, Department of Radiology, Tokyo (Japan); Mukai, Youhei; Murata, Miho [National Center of Neurology and Psychiatry, Department of Neurology, Tokyo (Japan)

    2017-03-15

    For the quantitative assessment of dopamine transporter (DAT) binding using [¹²³I]FP-CIT single-photon emission computed tomography (SPECT) (DaTscan), anatomic standardization is preferable for achieving objective and user-independent quantification of striatal binding using a volume-of-interest (VOI) template. However, low accumulation of DAT in Parkinson's disease (PD) can lead to deformation errors when a DaTscan-specific template without any structural information is used. To avoid this deformation error, we applied computed tomography (CT) data obtained with SPECT/CT equipment to the anatomic standardization. We retrospectively analyzed DaTscan images of 130 patients with parkinsonian syndromes (PS), including 80 PD and 50 non-PD patients. First, we segmented gray matter from the CT images using statistical parametric mapping 12 (SPM12). These gray-matter images were then anatomically standardized using the diffeomorphic anatomical registration through exponentiated Lie algebra (DARTEL) algorithm. Next, DaTscan images were warped with the same parameters used in the CT anatomic standardization. The target striatal VOIs for decreased DAT in PD were generated from an SPM12 group comparison of 20 DaTscan images from each group. We applied these VOIs to the DaTscan images of the remaining patients in both groups and calculated specific binding ratios (SBRs) using nonspecific counts in a reference area. For the differential diagnosis of the PD and non-PD groups using SBR, we compared the present method with two other methods, DaTQUANT and DaTView, which have already been released as software programs for the quantitative assessment of DaTscan images. The SPM12 group comparison showed a significant DAT decrease in PD patients in the bilateral whole striatum. Of the three methods assessed, the present CT-guided method showed the greatest power for discriminating the PD and non-PD groups, as it completely separated the two groups. CT-guided anatomic standardization using
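
The specific binding ratio (SBR) mentioned in the abstract is, in its commonly used form, the striatal count density expressed relative to a nonspecific reference region. This is the generic definition with illustrative numbers, not the paper's exact implementation:

```python
# SBR = (mean striatal counts - mean reference counts) / mean reference counts
def specific_binding_ratio(striatal_mean, reference_mean):
    return (striatal_mean - reference_mean) / reference_mean

# Illustrative values only: a healthy striatum binds far more DAT tracer
# than the reference region; PD lowers this ratio.
print(specific_binding_ratio(30.0, 10.0))   # 2.0
print(specific_binding_ratio(14.0, 10.0))   # 0.4
```

Because the SBR is a ratio of counts inside template-defined VOIs, any misregistration of the template directly biases it, which is the motivation for the CT-guided standardization.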

  13. Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET

    Energy Technology Data Exchange (ETDEWEB)

    Hatt, M [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Lamare, F [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609, (France); Boussion, N [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Turzo, A [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Collet, C [Ecole Nationale Superieure de Physique de Strasbourg (ENSPS), ULP, Strasbourg, F-67000 (France); Salzenstein, F [Institut d' Electronique du Solide et des Systemes (InESS), ULP, Strasbourg, F-67000 (France); Roux, C [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Jarritt, P [Medical Physics Agency, Royal Victoria Hospital, Belfast (United Kingdom); Carson, K [Medical Physics Agency, Royal Victoria Hospital, Belfast (United Kingdom); Rest, C Cheze-Le [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France); Visvikis, D [INSERM U650, Laboratoire du Traitement de l' Information Medicale (LaTIM), CHU Morvan, Bat 2bis (I3S), 5 avenue Foch, Brest, 29609 (France)

    2007-07-21

    Accurate volume of interest (VOI) estimation in PET is crucial in oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, the fuzzy hidden Markov chains (FHMC) algorithm, with that of threshold-based techniques, the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. The novelty of the fuzzy model, however, is the inclusion of an estimate of imprecision, which should lead to better modelling of the 'fuzzy' nature of object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery at a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, the differences between classification and volume estimation errors were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the automatic algorithms were less susceptible to image noise than the threshold-based techniques. The
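
The clinical baseline that FHMC is compared against is a fixed-percentage-of-maximum threshold. A minimal sketch of that baseline (42% is a commonly quoted threshold value; the choice here is illustrative):

```python
# Threshold-based VOI delineation: a voxel belongs to the functional VOI
# if its uptake exceeds a fixed fraction of the maximum uptake.
def threshold_voi(voxels, fraction=0.42):
    cutoff = fraction * max(voxels)
    return [v >= cutoff for v in voxels]

uptake = [1.0, 2.0, 9.5, 10.0, 3.9]        # toy 1-D uptake profile
print(threshold_voi(uptake))                # [False, False, True, True, False]
```

Its weakness, which motivates FHMC, is visible even in this sketch: the result depends entirely on one global cutoff, so noise in the maximum voxel or a change in lesion contrast shifts the whole boundary.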

  14. Combined 18F-Fluciclovine PET/MRI Shows Potential for Detection and Characterization of High-Risk Prostate Cancer.

    Science.gov (United States)

    Elschot, Mattijs; Selnæs, Kirsten M; Sandsmark, Elise; Krüger-Stokke, Brage; Størkersen, Øystein; Giskeødegård, Guro F; Tessem, May-Britt; Moestue, Siver A; Bertilsson, Helena; Bathen, Tone F

    2018-05-01

    The objective of this study was to investigate whether quantitative imaging features derived from combined 18F-fluciclovine PET/multiparametric MRI show potential for detection and characterization of primary prostate cancer. Methods: Twenty-eight patients diagnosed with high-risk prostate cancer underwent simultaneous 18F-fluciclovine PET/MRI before radical prostatectomy. Volumes of interest (VOIs) for prostate tumors, benign prostatic hyperplasia (BPH) nodules, prostatitis, and healthy tissue were delineated on T2-weighted images, using histology as a reference. Tumor VOIs were marked as high-grade (≥Gleason grade group 3) or not. MRI and PET features were extracted at the voxel and VOI levels. Partial least-squares discriminant analysis (PLS-DA) with double leave-one-patient-out cross-validation was performed to distinguish tumors from benign tissue (BPH, prostatitis, or healthy tissue) and high-grade tumors from other tissue (low-grade tumors or benign tissue). The performance levels of PET, MRI, and combined PET/MRI features were compared using the area under the receiver-operating-characteristic curve (AUC). Results: Voxel and VOI features were extracted from 40 tumor VOIs (26 high-grade), 36 BPH VOIs, 6 prostatitis VOIs, and 37 healthy-tissue VOIs. PET/MRI performed better than MRI and PET alone for distinguishing tumors from benign tissue (AUCs of 87%, 81%, and 83%, respectively, at the voxel level and 96%, 93%, and 93%, respectively, at the VOI level) and high-grade tumors from other tissue (AUCs of 85%, 79%, and 81%, respectively, at the voxel level and 93%, 93%, and 91%, respectively, at the VOI level). T2-weighted MRI, diffusion-weighted MRI, and PET features were the most important for classification. Conclusion: Combined 18F-fluciclovine PET/multiparametric MRI shows potential for improving detection and characterization of high-risk prostate cancer, in comparison to MRI and PET alone. © 2018 by the Society of Nuclear Medicine and Molecular
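
The study's AUC comparisons rest on the rank interpretation of the ROC curve: the AUC equals the probability that a randomly chosen positive scores above a randomly chosen negative. A minimal rank-based computation (equivalent to the Mann-Whitney U statistic), with made-up scores for illustration:

```python
# AUC as the fraction of (positive, negative) pairs ranked correctly,
# counting ties as half a win.
def auc(scores_pos, scores_neg):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

tumor_scores = [0.9, 0.8, 0.7]     # hypothetical classifier outputs for tumor VOIs
benign_scores = [0.4, 0.6, 0.75]   # hypothetical outputs for benign VOIs
print(auc(tumor_scores, benign_scores))   # 8 of 9 pairs ordered correctly
```

This pairwise definition makes the AUC threshold-free, which is why it is the standard metric for comparing PET, MRI, and combined feature sets on the same VOIs.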

  15. TU-E-BRA-11: Volume of Interest Cone Beam CT with a Low-Z Linear Accelerator Target: Proof-of-Concept.

    Science.gov (United States)

    Robar, J; Parsons, D; Berman, A; MacDonald, A

    2012-06-01

    This study demonstrates feasibility and advantages of volume of interest (VOI) cone beam CT (CBCT) imaging performed with an x-ray beam generated from 2.35 MeV electrons incident on a carbon linear accelerator target. The electron beam energy was reduced to 2.35 MeV in a Varian 21EX linear accelerator containing a 7.6 mm thick carbon x-ray target. Arbitrary imaging volumes were defined in the planning system to produce dynamic MLC sequences capable of tracking off-axis VOIs in phantoms. To reduce truncation artefacts, missing data in projection images were completed using a priori DRR information from the planning CT set. The feasibility of the approach was shown through imaging of an anthropomorphic phantom and the head-and-neck section of a lamb. TLD800 and EBT2 radiochromic film measurements were used to compare the VOI dose distributions with those for full-field techniques. CNR was measured for VOIs ranging from 4 to 15 cm diameter. The 2.35 MV/Carbon beam provides favorable CNR characteristics, although marked boundary and cupping artefacts arise due to truncation of projection data. These artefacts are largely eliminated using the DRR filling technique. Imaging dose was reduced by 5-10% and 75% inside and outside of the VOI, respectively, compared to full-field imaging for a cranial VOI. For the 2.35 MV/Carbon beam, CNR was shown to be approximately invariant with VOI dimension for bone and lung objects. This indicates that the advantage of the VOI approach with the low-Z target beam is substantial imaging dose reduction, not improvement of image quality. VOI CBCT using a 2.35 MV/Carbon beam is a feasible technique whereby a chosen imaging volume can be defined in the planning system and tracked during acquisition. The novel x-ray beam affords good CNR characteristics while imaging dose is localized to the chosen VOI. Funding for this project has been received from Varian Medical, Incorporated. © 2012 American Association of Physicists in Medicine.

  16. New definitions of pointing stability - ac and dc effects. [constant and time-dependent pointing error effects on image sensor performance

    Science.gov (United States)

    Lucke, Robert L.; Sirlin, Samuel W.; San Martin, A. M.

    1992-01-01

    For most imaging sensors, a constant (dc) pointing error is unimportant (unless large), but time-dependent (ac) errors degrade performance by either distorting or smearing the image. When properly quantified, the separation of the root-mean-square effects of random line-of-sight motions into dc and ac components can be used to obtain the minimum necessary line-of-sight stability specifications. The relation between stability requirements and sensor resolution is discussed, with a view to improving communication between the data analyst and the control systems engineer.
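
The dc/ac separation of root-mean-square line-of-sight motion follows the usual identity: the total mean-square error equals the squared mean (dc) plus the variance (ac). A short numeric check with illustrative samples:

```python
import math

# Split line-of-sight samples into a dc component (mean offset) and an
# ac component (rms fluctuation about the mean).
def decompose(samples):
    dc = sum(samples) / len(samples)
    ac_rms = math.sqrt(sum((s - dc) ** 2 for s in samples) / len(samples))
    total_rms = math.sqrt(sum(s ** 2 for s in samples) / len(samples))
    return dc, ac_rms, total_rms

dc, ac, total = decompose([1.0, 3.0, 1.0, 3.0])   # illustrative pointing samples
assert abs(total ** 2 - (dc ** 2 + ac ** 2)) < 1e-12   # total^2 = dc^2 + ac^2
print(dc, ac, total)
```

The identity is what lets a dc bias budget and an ac smear budget be specified separately and still be combined into a single rms stability requirement.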

  17. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Programs designed to identify and reduce laboratory errors, as well as specific strategies, are therefore required to minimize these errors and improve patient safety. The purpose of this paper is to identify commonly encountered laboratory errors in our practice, their hazards to patient health care, and some measures and recommendations to minimize or eliminate them. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phase and according to their implication for patient health. Data from 1,600 testing procedures revealed a total of 14 erroneous tests (0.87 percent of total testing procedures). Most of the errors lay in the pre- and post-analytic phases of the testing cycle (35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before the test reports were submitted to patients. On the other hand, the test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors; only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries, which suggests that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  18. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  19. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex: in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause was ascribed to a lesion (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  20. Definition of blindness under National Programme for Control of Blindness: Do we need to revise it?

    Science.gov (United States)

    Vashist, Praveen; Senjam, Suraj Singh; Gupta, Vivek; Gupta, Noopur; Kumar, Atul

    2017-02-01

    To review the appropriateness of the current definition of blindness under the National Programme for Control of Blindness (NPCB), Government of India. An online search of peer-reviewed scientific literature and guidelines using PubMed, the World Health Organization (WHO) IRIS, and Google Scholar with the keywords blindness and visual impairment, along with offline examination of reports of national and international organizations and their cross-references, was carried out until December 2016 to identify relevant documents on the definition of blindness. The evidence for the historical and currently adopted definitions of blindness under the NPCB, the WHO, and other countries was reviewed. Differences between the NPCB and WHO definitions were analyzed to assess their impact on the epidemiological status of blindness and visual impairment in India. The differences in the criteria for blindness between the NPCB and WHO definitions cause an overestimation of the prevalence of blindness in India. These variations are also associated with an over-representation of refractive errors as a cause of blindness and an under-representation of other causes under the NPCB definition. The targets for eliminating blindness also become much harder to achieve under the NPCB definition. Ignoring these differences when comparing the global and Indian prevalence of blindness will lead to erroneous interpretations. We recommend that appropriate modifications be made to the NPCB definition of blindness to make it consistent with the WHO definition.

  1. Definition of blindness under National Programme for Control of Blindness: Do we need to revise it?

    Directory of Open Access Journals (Sweden)

    Praveen Vashist

    2017-01-01

    To review the appropriateness of the current definition of blindness under the National Programme for Control of Blindness (NPCB), Government of India. An online search of peer-reviewed scientific literature and guidelines using PubMed, the World Health Organization (WHO) IRIS, and Google Scholar with the keywords blindness and visual impairment, along with offline examination of reports of national and international organizations and their cross-references, was carried out until December 2016 to identify relevant documents on the definition of blindness. The evidence for the historical and currently adopted definitions of blindness under the NPCB, the WHO, and other countries was reviewed. Differences between the NPCB and WHO definitions were analyzed to assess their impact on the epidemiological status of blindness and visual impairment in India. The differences in the criteria for blindness between the NPCB and WHO definitions cause an overestimation of the prevalence of blindness in India. These variations are also associated with an over-representation of refractive errors as a cause of blindness and an under-representation of other causes under the NPCB definition. The targets for eliminating blindness also become much harder to achieve under the NPCB definition. Ignoring these differences when comparing the global and Indian prevalence of blindness will lead to erroneous interpretations. We recommend that appropriate modifications be made to the NPCB definition of blindness to make it consistent with the WHO definition.

  2. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
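
The scaled-prediction-error mechanism can be sketched as a delta-rule learner whose update divides the prediction error by the reward distribution's standard deviation. Parameter values below are illustrative, not the study's fitted values:

```python
# Delta-rule value update with a prediction error scaled by the reward sd,
# as the study's "adapters" appear to do. Illustrative parameters only.
def update(value, reward, lr, sd):
    pe = reward - value          # reward prediction error
    return value + lr * (pe / sd)   # scaling makes learning sd-invariant

v = 0.0
for r in [8.0, 12.0, 10.0, 11.0, 9.0]:   # rewards drawn around a mean of 10
    v = update(v, r, lr=0.5, sd=2.0)
print(v)   # the estimate moves toward the distribution mean of 10
```

Because the effective step size is lr/sd, doubling the reward variability halves the raw step, so performance (accuracy in estimating the mean) stays comparable across distributions, which is the adaptation the abstract describes.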

  3. Fractional Order Differentiation by Integration and Error Analysis in Noisy Environment

    KAUST Repository

    Liu, Dayan

    2015-03-31

    The integer order differentiation by integration method based on the Jacobi orthogonal polynomials for noisy signals was originally introduced by Mboup, Join and Fliess. We propose to extend this method from the integer order to the fractional order to estimate the fractional order derivatives of noisy signals. Firstly, two fractional order differentiators are deduced from the Jacobi orthogonal polynomial filter, using the Riemann-Liouville and the Caputo fractional order derivative definitions respectively. Exact and simple formulae for these differentiators are given by integral expressions. Hence, they can be used for both continuous-time and discrete-time models in on-line or off-line applications. Secondly, some error bounds are provided for the corresponding estimation errors. These bounds make it possible to study the influence of the design parameters. The noise error contribution due to a large class of stochastic processes is studied in the discrete case. This analysis shows that the differentiator based on the Caputo fractional order derivative can cope with a class of noises whose mean value and variance functions are polynomial time-varying. Thanks to the design parameter analysis, the proposed fractional order differentiators are significantly improved by admitting a time-delay. Thirdly, in order to reduce the calculation time for on-line applications, a recursive algorithm is proposed. Finally, the proposed differentiator based on the Riemann-Liouville fractional order derivative is used to estimate the state of a fractional order system, and numerical simulations illustrate its accuracy and robustness with respect to corrupting noises.
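For reference, the two fractional-derivative definitions the abstract builds on are the standard textbook ones (for $n-1 < \alpha < n$, $n \in \mathbb{N}$); these formulas are well-known definitions, not reproduced from the paper itself:

```latex
% Riemann-Liouville derivative: integrate first, then differentiate n times
D^{\alpha}_{RL} f(t) = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
    \int_{0}^{t} (t-\tau)^{\,n-\alpha-1}\, f(\tau)\, d\tau ,
\qquad n-1 < \alpha < n .

% Caputo derivative: differentiate n times first, then integrate
D^{\alpha}_{C} f(t) = \frac{1}{\Gamma(n-\alpha)}
    \int_{0}^{t} (t-\tau)^{\,n-\alpha-1}\, f^{(n)}(\tau)\, d\tau .
```

The two definitions coincide when $f$ and its first $n-1$ derivatives vanish at $t=0$, and they otherwise differ by terms depending on the initial values $f^{(k)}(0)$, which is why an estimator can be derived from each definition and the two can then be compared for their noise behaviour.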

  4. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    Reducing modeling error is often a major concern of empirical geophysical models. However, modeling errors can be defined in different ways: When the response variable is continuous, the most commonly used metrics are squared (SQ) and absolute (ABS) errors. For most applications, ABS error is the more natural, but SQ error is mathematically more tractable, so is often used as a substitute with little scientific justification. Existing literature has not thoroughly investigated the implications of using SQ error in place of ABS error, especially not geospatially. This study compares the two metrics through the lens of bias-variance decomposition (BVD). BVD breaks down the expected modeling error of each model evaluation point into bias (systematic error), variance (model sensitivity), and noise (observation instability). It offers a way to probe the composition of various error metrics. I analytically derived the BVD of ABS error and compared it with the well-known SQ error BVD, and found that not only do the two metrics measure the characteristics of the probability distributions of modeling errors differently, but the effects of these characteristics on the overall expected error also differ. Most notably, under SQ error all bias, variance, and noise increase expected error, while under ABS error certain parts of the error components reduce expected error. Since manipulating these subtractive terms is a legitimate way to reduce expected modeling error, SQ error can never capture the complete story embedded in ABS error. I then empirically compared the two metrics with a supervised remote sensing model for mapping surface imperviousness. Pair-wise spatially-explicit comparison for each error component showed that SQ error overstates all error components in comparison to ABS error, especially variance-related terms. Hence, substituting ABS error with SQ error makes model performance appear worse than it actually is, and the analyst would more likely accept a
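The contrast between the two metrics is easy to reproduce with a generic simulation (synthetic numbers only; this is not the study's remote-sensing data): under SQ error, bias, variance, and noise add exactly to the expected error, while mean ABS error follows no such additive rule.

```python
# Generic simulation contrasting squared (SQ) and absolute (ABS) error.
import random

random.seed(0)
truth = 5.0
# systematic error, model sensitivity, observation instability:
bias, model_sd, noise_sd = 1.0, 0.5, 0.5

sq_errors, abs_errors = [], []
for _ in range(200_000):
    prediction = truth + bias + random.gauss(0.0, model_sd)  # biased, variable model
    observation = truth + random.gauss(0.0, noise_sd)        # unstable observation
    e = observation - prediction
    sq_errors.append(e * e)
    abs_errors.append(abs(e))

mean_sq = sum(sq_errors) / len(sq_errors)
mean_abs = sum(abs_errors) / len(abs_errors)

# For SQ error the decomposition is exactly additive:
#   E[e^2] = bias^2 + model_variance + noise_variance = 1.0 + 0.25 + 0.25
print(round(mean_sq, 2))
# Mean ABS error is not sqrt(mean SQ error), and no additive
# decomposition holds, so the two metrics can rank models differently.
print(round(mean_abs, 2), round(mean_sq ** 0.5, 2))
```

Note that the simulated mean SQ error matches the additive prediction to within sampling noise, while the mean ABS error sits below the square root of the mean SQ error, as Jensen's inequality requires.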

  5. Investigating the Factors Affecting the Occurrence and Reporting of Medication Errors from the Viewpoint of Nurses in Sina Hospital, Tabriz, Iran

    Directory of Open Access Journals (Sweden)

    Massumeh Gholizadeh

    2016-09-01

    Full Text Available Background and objectives: Medication errors can cause serious problems for patients and the health system, increasing the duration of hospitalization and costs. The aim of this study was to determine the reasons for medication errors and the barriers to error reporting from nurses' viewpoints. Material and Methods: A cross-sectional descriptive study was conducted in 2013. The study population included all nurses working in Tabriz Sina hospital; the sample of 124 was obtained by the census method. The data collection tool was a questionnaire, and data were analyzed using SPSS version 20. Results: From the viewpoint of nurses, the most important reasons for medication errors were wrong infusion speed, illegible medication orders, work-related fatigue, ambient noise, and staff shortages. Regarding barriers to error reporting, the most important factors were managers' emphasis on the individual regardless of the other factors involved in medication errors and the lack of a clear definition of medication errors. Conclusion: Given the importance of ensuring patient safety, the following corrections can improve hospital safety: establishing an effective system for reporting and recording errors, and minimizing barriers to reporting by building a positive relationship between managers and staff and reacting positively to error reports. To reduce medication errors, training classes on drug information for nurses and continuing evaluation of personnel in this area in the ward are recommended.

  6. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    Full Text Available This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found an enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula for errors on a numerical task as compared to errors on a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  7. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  8. Awareness of technology-induced errors and processes for identifying and preventing such errors.

    Science.gov (United States)

    Bellwood, Paule; Borycki, Elizabeth M; Kushniruk, Andre W

    2015-01-01

    There is a need to determine whether organizations working with health information technology are aware of technology-induced errors and how they are addressing and preventing them. The purpose of this study was to: a) determine the degree of technology-induced error awareness in various Canadian healthcare organizations, and b) identify those processes and procedures currently in place to help address, manage, and prevent technology-induced errors. We identified a lack of technology-induced error awareness among participants. Participants also reported a lack of well-defined procedures for reporting technology-induced errors, addressing them when they arise, and preventing them.

  9. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error, the author introduces a new theory, called Observational Kinematics, to replace Einstein's Special Relativity. (U.K.)

  10. MR-based automatic delineation of volumes of interest in human brain PET images using probability maps

    DEFF Research Database (Denmark)

    Svarer, Claus; Madsen, Karina; Hasselbalch, Steen G.

    2005-01-01

    The purpose of this study was to develop and validate an observer-independent approach for automatic generation of volume-of-interest (VOI) brain templates to be used in emission tomography studies of the brain. The method utilizes a VOI probability map created on the basis of a database of several...... delineation of the VOI set. The approach was also shown to work equally well in individuals with pronounced cerebral atrophy. Probability-map-based automatic delineation of VOIs is a fast, objective, reproducible, and safe way to assess regional brain values from PET or SPECT scans. In addition, the method...

  11. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service tracked medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed a 0.9% medication error rate (264 errors) versus 0.6% (154 errors) in carts that had previously been revised. In carts not revised, 70.83% of errors arose when setting up the unidosis carts; the rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units corresponded to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), the medication not being provided by nurses (14.09%) or being withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Revision of unidosis carts and a computerized prescription system are needed to avoid transcription errors. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  12. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  13. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  14. A Potential Tension in DSM-5: The General Definition of Mental Disorder versus Some Specific Diagnostic Criteria.

    Science.gov (United States)

    Amoretti, M Cristina; Lalumera, Elisabetta

    2018-05-30

    The general concept of mental disorder specified in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders is definitional in character: a mental disorder might be identified with a harmful dysfunction. The manual also contains the explicit claim that each individual mental disorder should meet the requirements posed by the definition. The aim of this article is two-fold. First, we shall analyze the definition of the superordinate concept of mental disorder to better understand what necessary (and sufficient) criteria actually characterize such a concept. Second, we shall consider the concepts of some individual mental disorders and show that they are in tension with the definition of the superordinate concept, taking pyromania and narcissistic personality disorder as case studies. Our main point is that an unexplained and not-operationalized dysfunction requirement that is included in the general definition, while being systematically violated by the diagnostic criteria of specific mental disorders, is a logical error. Then, either we unpack and operationalize the dysfunction requirement, and include explicit diagnostic criteria that can actually meet it, or we simply drop it.

  15. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

    Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flew through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB, and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
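As a back-of-the-envelope check of the 80 dB threshold quoted above, one can compute how small the AC bias must be relative to the signal. This assumes the common amplitude convention dB = 20·log10(signal/bias); the abstract does not state which convention the MNPT hardware uses.

```python
# Check of the 80 dB signal-to-AC-bias threshold, assuming the
# amplitude decibel convention (an assumption, not the paper's stated one).
import math

def ratio_db(signal, ac_bias):
    """Signal-to-AC-bias ratio in decibels (amplitude convention)."""
    return 20.0 * math.log10(signal / ac_bias)

# An AC bias four orders of magnitude below the signal sits right at
# the threshold where the reported temperature error stays below 0.1 K:
print(round(ratio_db(1.0, 1e-4)))  # 80
```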

  16. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  17. Error-information in tutorial documentation: Supporting users' errors to facilitate initial skill learning

    NARCIS (Netherlands)

    Lazonder, Adrianus W.; van der Meij, Hans

    1995-01-01

    Novice users make many errors when they first try to learn how to work with a computer program like a spreadsheet or wordprocessor. No matter how user-friendly the software or the training manual, errors can and will occur. The current view on errors is that they can be helpful or disruptive,

  18. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Full Text Available Background: The increasing number of reports on medication errors and the subsequent damage, especially in medical centers, has become a growing patient-safety concern in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. Direct observation was chosen as the method for detecting errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. Dispensing errors were studied on 1000 doses dispensed in each hospital pharmacy. Results: A total of 8162 doses of medications were studied during the four stages, of which 8000 yielded complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering comprise most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  19. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  20. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  1. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  2. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help prevent adverse events, and education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk of errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  3. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. To avoid human errors it is necessary to adapt systems to their operators. The complexity of modern industrial systems is, however, increasing the danger of system accidents. Models of the human operator have been proposed, but they are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic effort. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  4. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  5. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  6. An investigation of kV CBCT image quality and dose reduction for volume-of-interest imaging using dynamic collimation

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, David, E-mail: david.parsons@dal.ca, E-mail: james.robar@cdha.nshealth.ca [Department of Physics and Atmospheric Science, Dalhousie University, 5820 University Avenue, Halifax, Nova Scotia B3H 1V7 (Canada); Robar, James L., E-mail: david.parsons@dal.ca, E-mail: james.robar@cdha.nshealth.ca [Department of Radiation Oncology and Department of Physics and Atmospheric Science, Dalhousie University, 5820 University Avenue, Halifax, Nova Scotia B3H 1V7 (Canada)

    2015-09-15

    Purpose: The focus of this work was to investigate the improvements in image quality and dose reduction for volume-of-interest (VOI) kilovoltage-cone beam CT (CBCT) using dynamic collimation. Methods: A prototype iris aperture was used to track a VOI during a CBCT acquisition. The current aperture design is capable of 1D translation as a function of gantry angle and dynamic adjustment of the iris radius. The aperture occupies the location of the bow-tie filter on a Varian On-Board Imager system. CBCT and planar image quality were investigated as a function of aperture radius, while maintaining the same dose to the VOI, for a 20 cm diameter cylindrical water phantom with a 9 mm diameter bone insert centered on isocenter. Corresponding scatter-to-primary ratios (SPR) were determined at the detector plane with Monte Carlo simulation using EGSnrc. Dose distributions for VOIs of various sizes were modeled using a dynamic BEAMnrc library and DOSXYZnrc. The resulting VOI dose distributions were compared to full-field distributions. Results: SPR was reduced by a factor of 8.4 when decreasing iris diameter from 21.2 to 2.4 cm (at isocenter). Depending upon VOI location and size, dose was reduced to 16%–90% of the full-field value along the central axis plane and down to 4% along the axis of rotation, while maintaining the same dose to the VOI compared to full-field techniques. When maintaining constant dose to the VOI, this change in iris diameter corresponds to a factor increase of approximately 1.6 in image contrast and a factor decrease in image noise of approximately 1.2. This results in a measured gain in contrast-to-noise ratio by a factor of approximately 2.0. Conclusions: The presented VOI technique offers improved image quality for image-guided radiotherapy while sparing the surrounding volume of unnecessary dose compared to full-field techniques.

  7. Principal component and volume of interest analyses in depressed patients imaged by {sup 99m}Tc-HMPAO SPET: a methodological comparison

    Energy Technology Data Exchange (ETDEWEB)

    Pagani, Marco [Institute of Cognitive Sciences and Technologies, CNR, Rome (Italy); Section of Nuclear Medicine, Department of Hospital Physics, Karolinska Hospital, Stockholm (Sweden); Gardner, Ann; Haellstroem, Tore [NEUROTEC, Division of Psychiatry, Karolinska Institutet, Huddinge University Hospital, Stockholm (Sweden); Salmaso, Dario [Institute of Cognitive Sciences and Technologies, CNR, Rome (Italy); Sanchez Crespo, Alejandro; Jonsson, Cathrine; Larsson, Stig A. [Section of Nuclear Medicine, Department of Hospital Physics, Karolinska Hospital, Stockholm (Sweden); Jacobsson, Hans [Department of Radiology, Karolinska Hospital, Stockholm (Sweden); Lindberg, Greger [Department of Medicine, Division of Gastroenterology and Hepatology, Karolinska Institutet, Huddinge University Hospital, Stockholm (Sweden); Waegner, Anna [Department of Clinical Neuroscience, Division of Neurology, Karolinska Hospital, Stockholm (Sweden)

    2004-07-01

    Previous regional cerebral blood flow (rCBF) studies on patients with unipolar major depressive disorder (MDD) have analysed clusters of voxels or single regions and yielded conflicting results, showing either higher or lower rCBF in MDD as compared to normal controls (CTR). The aim of this study was to assess rCBF distribution changes in 68 MDD patients, investigating the data set with both volume of interest (VOI) analysis and principal component analysis (PCA). The rCBF distribution in 68 MDD and 66 CTR, at rest, was compared. Technetium-99m d,l-hexamethylpropylene amine oxime single-photon emission tomography was performed and the uptake in 27 VOIs, bilaterally, was assessed using a standardising brain atlas. Data were then grouped into factors by means of PCA performed on rCBF of all 134 subjects and based on all 54 VOIs. VOI analysis showed a significant group x VOI x hemisphere interaction (P<0.001). rCBF in eight VOIs (in the prefrontal, temporal, occipital and central structures) differed significantly between groups at the P<0.05 level. PCA identified 11 anatomo-functional regions that interacted with groups (P<0.001). As compared to CTR, MDD rCBF was relatively higher in right associative temporo-parietal-occipital cortex (P<0.01) and bilaterally in prefrontal (P<0.005) and frontal cortex (P<0.025), anterior temporal cortex and central structures (P<0.05 and P<0.001 respectively). Higher rCBF in a selected group of MDD as compared to CTR at rest was found using PCA in five clusters of regions sharing close anatomical and functional relationships. At the single VOI level, all eight regions showing group differences were included in such clusters. PCA is a data-driven method for recasting VOIs to be used for group evaluation and comparison. The appearance of significant differences absent at the VOI level emphasises the value of analysing the relationships among brain regions for the investigation of psychiatric disease. (orig.)
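
    The PCA step can be illustrated with a short sketch: a subjects × VOIs matrix of rCBF values is standardized and decomposed, and VOIs are grouped into anatomo-functional factors by the component on which they load most strongly. The matrix dimensions (134 subjects, 54 VOIs, 11 retained factors) come from the abstract; the data and grouping rule below are illustrative assumptions, not the study's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_vois = 134, 54          # 68 MDD + 66 CTR; 27 VOIs per hemisphere
rcbf = rng.normal(50.0, 5.0, size=(n_subjects, n_vois))  # synthetic uptake values

# Standardize each VOI across subjects, then compute PCA via SVD.
z = (rcbf - rcbf.mean(axis=0)) / rcbf.std(axis=0)
u, s, vt = np.linalg.svd(z, full_matrices=False)

explained = s**2 / np.sum(s**2)       # variance explained per component
loadings = vt                         # component x VOI loading matrix

# Group each VOI into the factor (component) on which it loads most strongly,
# keeping 11 factors as in the study.
factor_of_voi = np.argmax(np.abs(loadings[:11]), axis=0)
print(explained[:3], factor_of_voi[:5])
```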

  8. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    Science.gov (United States)

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  9. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Full Text Available Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table on the workpiece side, A′-axis) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. Each of these 43 error components can separately reduce the geometric and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the tool center point (TCP) relative to the workpiece: when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process comprises detecting the current tool path and analyzing the geometric error of the RTTTR five-axis CNC machine tool, translating current component positions to compensated positions using the kinematic error model, converting the newly created components to new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
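
    The underlying idea can be sketched in a few lines: the actual TCP position is obtained by composing homogeneous transformation matrices that include small geometric errors, and the compensated command is the nominal target shifted by the mirrored deviation. The two-axis chain and error values below are illustrative placeholders, not the paper's 43-component model.

```python
import numpy as np

def trans(x, y, z):
    """Homogeneous translation matrix."""
    t = np.eye(4)
    t[:3, 3] = [x, y, z]
    return t

def rot_z(theta):
    """Homogeneous rotation about the Z axis."""
    c, s = np.cos(theta), np.sin(theta)
    r = np.eye(4)
    r[:2, :2] = [[c, -s], [s, c]]
    return r

# Nominal kinematic chain: a linear X move followed by a table rotation.
nominal = trans(100.0, 0.0, 0.0) @ rot_z(np.radians(30.0))

# The same chain with small geometric errors (positioning and angular).
actual = trans(100.0 + 0.02, 0.0, 0.005) @ rot_z(np.radians(30.0 + 0.01))

tcp = np.array([0.0, 50.0, 0.0, 1.0])           # tool center point, local frame
deviation = (actual @ tcp - nominal @ tcp)[:3]  # predicted TCP error

# Compensation: command the mirrored deviation so the actual TCP lands on target.
compensated_target = (nominal @ tcp)[:3] - deviation
print(deviation, compensated_target)
```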

  10. Errorful and errorless learning: The impact of cue-target constraint in learning from errors.

    Science.gov (United States)

    Bridger, Emma K; Mecklinger, Axel

    2014-08-01

    The benefits of testing on learning are well described, and attention has recently turned to what happens when errors are elicited during learning: Is testing nonetheless beneficial, or can errors hinder learning? Whilst recent findings have indicated that tests boost learning even if errors are made on every trial, other reports, emphasizing the benefits of errorless learning, have indicated that errors lead to poorer later memory performance. The possibility that this discrepancy is a function of the materials that must be learned-in particular, the relationship between the cues and targets-was addressed here. Cued recall after either a study-only errorless condition or an errorful learning condition was contrasted across cue-target associations, for which the extent to which the target was constrained by the cue was either high or low. Experiment 1 showed that whereas errorful learning led to greater recall for low-constraint stimuli, it led to a significant decrease in recall for high-constraint stimuli. This interaction is thought to reflect the extent to which retrieval is constrained by the cue-target association, as well as by the presence of preexisting semantic associations. The advantage of errorful retrieval for low-constraint stimuli was replicated in Experiment 2, and the interaction with stimulus type was replicated in Experiment 3, even when guesses were randomly designated as being either correct or incorrect. This pattern provides support for inferences derived from reports in which participants made errors on all learning trials, whilst highlighting the impact of material characteristics on the benefits and disadvantages that accrue from errorful learning in episodic memory.

  11. Main error sources in sorption technique and plasma electron component parameter definition by continuous X radiation

    International Nuclear Information System (INIS)

    Gavrilov, V.V.; Torokhova, N.V.; Fasakhov, I.K.

    1986-01-01

    Recombination radiation effect on the ratio of signals behind the filters, as a function of plasma temperature (sorption method for T determination), is demonstrated. This factor produces the main effect on the accuracy of the method (100-400%); the other factors analysed make a combined error in temperature at the level of 50%. A method for reconstructing the plasma electron distribution function from the continuous X-radiation spectrum is presented, based on the correctness (under certain limitations on the required function) of the equation linking the electron distribution function with the bremsstrahlung spectral density.
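
    An idealized sketch of the filter-ratio idea (not the paper's full sorption analysis): for a bremsstrahlung continuum proportional to exp(-E/T), the signal behind an absorber with effective cutoff energy E is proportional to T·exp(-E/T), so the ratio of two channels gives T directly. The cutoff energies and temperature below are illustrative values.

```python
import math

def temperature_from_ratio(s1, s2, e1, e2):
    """Two-filter method for an exp(-E/T) continuum:
    S1/S2 = exp((e2 - e1)/T)  =>  T = (e2 - e1) / ln(S1/S2).
    The T*exp(-E/T) prefactor cancels in the ratio."""
    return (e2 - e1) / math.log(s1 / s2)

T_true = 1.0              # keV, illustrative plasma temperature
e1, e2 = 2.0, 5.0         # keV, effective filter cutoffs
s1 = math.exp(-e1 / T_true)   # synthetic channel signals (prefactor omitted,
s2 = math.exp(-e2 / T_true)   # since it cancels in the ratio)

print(temperature_from_ratio(s1, s2, e1, e2))  # recovers 1.0
```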

  12. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256 K bit DRAM's are organized in 32 K x 8 bit-bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.
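
    The starting point of any such decoder is the syndrome computation, which can be sketched briefly. Because syndromes are linear, the received word's syndromes equal those of the error pattern alone, so a nonzero syndrome vector flags a detectable error. This is only the syndrome step, not the paper's direct double-byte-error solver; the field polynomial and the choice of alpha = 2 are illustrative assumptions.

```python
def gf_mul(a, b, poly=0x11D):
    """Carry-less multiply in GF(2^8), reduced by x^8+x^4+x^3+x^2+1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= poly
    return r

def gf_pow(a, n):
    """Repeated GF(2^8) multiplication."""
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def syndromes(word, count):
    """S_j = sum_i word[i] * alpha^(i*j) for j = 1..count, with alpha = 2."""
    out = []
    for j in range(1, count + 1):
        s = 0
        for i, c in enumerate(word):
            s ^= gf_mul(c, gf_pow(2, i * j))
        out.append(s)
    return out

# Error pattern with two byte errors; five syndromes suffice for a
# distance-6 (DBEC-TBED) code.
error = [0] * 32
error[5], error[20] = 0x1F, 0xA3
print(syndromes(error, 5))  # nonzero values: the double-byte error is flagged
```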

  13. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  14. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well-known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, when learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons why they are made, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982). Thus the aim of this paper is to analyse errors in the process of second language acquisition and the ways in which we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  15. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, are evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping (IVR) method, which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state-variable quantities at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of each donor element's Gauss volume that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is carried out incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is assessed with two tests. The first remapping test was taken from the literature: it consists in remapping a rotating symmetrical mesh successively, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular-element-shape target mesh from a given regular-element-shape donor mesh and then proceeding with the inverse operation; in this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low and with a stable evolution along the number of remapping procedures when compared with the other two strategies.
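
    The IVR weighting described above can be sketched in a few lines: each target Gauss-volume value is the average of donor values weighted by the fraction of each donor Gauss volume lying inside the target volume. The 1D overlap computation below is a simplified stand-in for the incremental 3D intersection; the function and cell values are illustrative.

```python
def remap_value(donor_cells, target_interval):
    """Weighted average of donor values, weighted by overlap with the target.

    donor_cells: list of ((lo, hi), value) pairs, each a donor Gauss volume
    target_interval: (lo, hi) extent of the target Gauss volume
    """
    t_lo, t_hi = target_interval
    num = den = 0.0
    for (lo, hi), value in donor_cells:
        overlap = max(0.0, min(hi, t_hi) - max(lo, t_lo))  # intersected length
        num += overlap * value
        den += overlap
    return num / den if den > 0.0 else 0.0

# Donor mesh: two cells carrying state-variable values 1.0 and 3.0.
donors = [((0.0, 1.0), 1.0), ((1.0, 2.0), 3.0)]
# The target cell straddles both donors equally -> average of 1.0 and 3.0.
print(remap_value(donors, (0.5, 1.5)))  # 2.0
```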

  16. Putting into practice error management theory: Unlearning and learning to manage action errors in construction.

    Science.gov (United States)

    Love, Peter E D; Smith, Jim; Teo, Pauline

    2018-05-01

    Error management theory is drawn upon to examine how a project-based organization, which took the form of a program alliance, was able to change its established error prevention mindset to one that enacted a learning mindfulness that provided an avenue to curtail its action errors. The program alliance was required to unlearn its existing routines and beliefs to accommodate the practices required to embrace error management. As a result of establishing an error management culture the program alliance was able to create a collective mindfulness that nurtured learning and supported innovation. The findings provide a much-needed context to demonstrate the relevance of error management theory to effectively address rework and safety problems in construction projects. The robust theoretical underpinning that is grounded in practice and presented in this paper provides a mechanism to engender learning from errors, which can be utilized by construction organizations to improve the productivity and performance of their projects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  17. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
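
    The two claims above, that with coarse grouping the rounding error is correlated with the weighing error and can have a mean far from zero, can be demonstrated with a small simulation. The weight, noise level, and grid step below are illustrative values.

```python
import random
import statistics

random.seed(0)
true_weight = 10.3          # sits between coarse grid points
sigma = 0.05                # weighing (measurement) error std dev
step = 1.0                  # coarse rounding grid, large relative to sigma

weigh_err, round_err = [], []
for _ in range(2000):
    measured = random.gauss(true_weight, sigma)
    recorded = round(measured / step) * step
    weigh_err.append(measured - true_weight)
    round_err.append(recorded - measured)

# With coarse grouping, the recorded value is nearly constant, so the
# rounding error tracks the weighing error almost exactly (negatively),
# and its mean sits near -0.3 rather than zero.
mw = statistics.mean(weigh_err)
mr = statistics.mean(round_err)
cov = sum((a - mw) * (b - mr) for a, b in zip(weigh_err, round_err)) / (len(weigh_err) - 1)
corr = cov / (statistics.stdev(weigh_err) * statistics.stdev(round_err))
print(corr, mr)
```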

  18. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper

  19. Lane Level Localization; Using Images and HD Maps to Mitigate the Lateral Error

    Science.gov (United States)

    Hosseinyalamdary, S.; Peter, M.

    2017-05-01

    In urban canyons, where GNSS signals are blocked by buildings, the accuracy of the measured position deteriorates significantly. GIS databases have frequently been utilized to improve the accuracy of the measured position using map matching approaches, in which the measured position is projected onto the road links (centerlines), reducing its lateral error. With the advancement of data acquisition approaches, high definition maps containing extra information, such as road lanes, are generated. These road lanes can be utilized to mitigate the positional error and improve the accuracy of the position. In this paper, the image content of a camera mounted on the platform is utilized to detect the road boundaries in the image. We apply color masks to detect the road marks, apply the Hough transform to fit lines to the left and right road boundaries, find the corresponding road segment in the GIS database, estimate the homography transformation between the global and image coordinates of the road boundaries, and estimate the camera pose with respect to the global coordinate system. The proposed approach is evaluated on a benchmark: the position is measured by a smartphone's GPS receiver, images are taken with the smartphone's camera, and the ground truth is provided using the Real-Time Kinematic (RTK) technique. Results show the proposed approach significantly improves the accuracy of the measured GPS position. The error in the measured GPS position, with an average and standard deviation of 11.323 and 11.418 meters, is reduced to an error in the estimated position with an average and standard deviation of 6.725 and 5.899 meters.
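
    The homography step can be sketched with a direct linear transform (DLT): given four or more correspondences between image coordinates of the road boundaries and their global (map) coordinates, the 3 × 3 homography is the null vector of a stacked linear system. This is a generic DLT sketch with made-up correspondences, not the authors' implementation.

```python
import numpy as np

def estimate_homography(src, dst):
    """DLT: solve for H such that dst ~ H @ src (homogeneous, up to scale)."""
    a = []
    for (x, y), (u, v) in zip(src, dst):
        a.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        a.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(a, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_h(h, pt):
    """Map a 2D point through the homography (with perspective division)."""
    p = h @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Illustrative correspondences: image pixels of lane corners -> map metres.
src = [(100, 200), (500, 200), (80, 400), (520, 400)]
dst = [(0.0, 30.0), (3.5, 30.0), (0.0, 10.0), (3.5, 10.0)]
h = estimate_homography(src, dst)
print(apply_h(h, (100, 200)))  # close to (0.0, 30.0)
```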

  1. Lactate determination with a chemistry analyzer and a blood gas analyzer (Laktaattimääritys kemian analysaattorilla ja verikaasuanalysaattorilla)

    OpenAIRE

    Raatikainen, Jenni

    2015-01-01

    Normally, under aerobic conditions, glucose is metabolically broken down into energy released for the cells' use, water, and carbon dioxide. Under anaerobic conditions, however, glucose is broken down into lactate. Severe dysfunction of the heart, the circulation, or the lungs can cause reduced oxygen supply to the tissues, which can lead to anaerobic metabolism in the cells and to lactate formation. For this reason, lactate is a common test, for example, in intensive care units. Lactate can also form...

  2. Metabolic liver function measured in vivo by dynamic (18)F-FDGal PET/CT without arterial blood sampling.

    Science.gov (United States)

    Horsager, Jacob; Munk, Ole Lajord; Sørensen, Michael

    2015-01-01

    Metabolic liver function can be measured by dynamic PET/CT with the radio-labelled galactose-analogue 2-[(18)F]fluoro-2-deoxy-D-galactose ((18)F-FDGal) in terms of hepatic systemic clearance of (18)F-FDGal (K, ml blood/ml liver tissue/min). The method requires arterial blood sampling from a radial artery (arterial input function), and the aim of this study was to develop a method for extracting an image-derived, non-invasive input function from a volume of interest (VOI). Dynamic (18)F-FDGal PET/CT data from 16 subjects without liver disease (healthy subjects) and 16 patients with liver cirrhosis were included in the study. Five different input VOIs were tested: four in the abdominal aorta and one in the left ventricle of the heart. Arterial input function from manual blood sampling was available for all subjects. K*-values were calculated using time-activity curves (TACs) from each VOI as input and compared to the K-value calculated using arterial blood samples as input. Each input VOI was tested on PET data reconstructed with and without resolution modelling. All five image-derived input VOIs yielded K*-values that correlated significantly with K calculated using arterial blood samples. Furthermore, TACs from two different VOIs yielded K*-values that did not statistically deviate from K calculated using arterial blood samples. A semicircle drawn in the posterior part of the abdominal aorta was the only VOI that was successful for both healthy subjects and patients as well as for PET data reconstructed with and without resolution modelling. Metabolic liver function using (18)F-FDGal PET/CT can be measured without arterial blood samples by using input data from a semicircle VOI drawn in the posterior part of the abdominal aorta.
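
    The abstract does not state how K is computed from the input function and liver time-activity curves, but for an irreversibly metabolized tracer one common approach is the Gjedde-Patlak plot: the slope of C_liver(t)/C_a(t) against the integral of C_a divided by C_a(t) estimates the clearance K. The sketch below uses entirely synthetic curves and is an assumption, not the study's kinetic model.

```python
import numpy as np

t = np.linspace(0.5, 20.0, 40)                 # minutes after injection
c_a = 100.0 * np.exp(-0.3 * t) + 5.0           # synthetic arterial input (kBq/ml)

k_true = 0.25                                  # ml blood/ml liver tissue/min
v0 = 0.5                                       # apparent blood volume fraction

# Trapezoidal running integral of the input function.
int_ca = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(t) * (c_a[1:] + c_a[:-1]))))
c_liver = k_true * int_ca + v0 * c_a           # irreversible-uptake tissue curve

# Patlak linearization: y = K * x + v0, with x = int(Ca)/Ca and y = Cliver/Ca.
x = int_ca / c_a
y = c_liver / c_a
k_est, v0_est = np.polyfit(x[10:], y[10:], 1)  # fit the late, linear part
print(round(k_est, 3))  # 0.25
```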

  3. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and the associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), at ages of 40-50 years (67.6%), among less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  4. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  5. Learning from errors in super-resolution.

    Science.gov (United States)

    Tang, Yi; Yuan, Yuan

    2014-11-01

    A novel framework of learning-based super-resolution is proposed, employing the process of learning from estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain: sparsity means that most of the estimation errors are small, and uncertainty means that the locations of the pixels with larger estimation errors are random. Exploiting this prior information about the estimation errors, a nonlinear boosting process of learning from these estimation errors is introduced into the general framework of learning-based super-resolution. Within this framework, a low-rank decomposition technique is used to share the information of different super-resolution estimations and to remove the sparse estimation errors arising from different learning algorithms or training samples. The experimental results show the effectiveness and the efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.
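
    A minimal sketch of the low-rank idea: stack several super-resolution estimates of the same image as columns, and use a rank-1 SVD approximation to pool the shared content while suppressing the sparse errors that differ between estimates. This toy version uses a truncated SVD on synthetic 1D data rather than the paper's full decomposition, so the sizes and noise model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.normal(0.0, 1.0, size=400)          # "true" high-res image, flattened

# Several estimates share the truth but each carries sparse errors.
estimates = []
for _ in range(8):
    e = truth.copy()
    idx = rng.choice(truth.size, size=8, replace=False)  # sparse: few pixels
    e[idx] += rng.normal(0.0, 5.0, size=8)               # large, uncertain errors
    estimates.append(e)
m = np.column_stack(estimates)

# Rank-1 approximation keeps the component shared by all columns and
# discards most of the column-specific sparse deviations.
u, s, vt = np.linalg.svd(m, full_matrices=False)
low_rank = s[0] * np.outer(u[:, 0], vt[0])
fused = low_rank.mean(axis=1)

mse_single = float(np.mean((estimates[0] - truth) ** 2))
mse_fused = float(np.mean((fused - truth) ** 2))
print(mse_fused < mse_single)  # the fused estimate is closer to the truth
```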

  6. Error management process for power stations

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Takeda, Daisuke; Fujimoto, Junzo; Nagasaka, Akihiko

    2016-01-01

    The purpose of this study is to establish 'error management process for power stations' for systematizing activities for human error prevention and for festering continuous improvement of these activities. The following are proposed by deriving concepts concerning error management process from existing knowledge and realizing them through application and evaluation of their effectiveness at a power station: an entire picture of error management process that facilitate four functions requisite for maraging human error prevention effectively (1. systematizing human error prevention tools, 2. identifying problems based on incident reports and taking corrective actions, 3. identifying good practices and potential problems for taking proactive measures, 4. prioritizeng human error prevention tools based on identified problems); detail steps for each activity (i.e. developing an annual plan for human error prevention, reporting and analyzing incidents and near misses) based on a model of human error causation; procedures and example of items for identifying gaps between current and desired levels of executions and outputs of each activity; stages for introducing and establishing the above proposed error management process into a power station. By giving shape to above proposals at a power station, systematization and continuous improvement of activities for human error prevention in line with the actual situation of the power station can be expected. (author)

  7. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
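
    For context, the code analysed here is carried in the 32-bit frame check sequence (FCS) of 802.3 frames. A minimal sketch of the detection mechanism using Python's standard `zlib.crc32`, which implements the same CRC-32 polynomial as Ethernet; this illustrates detection only, not the paper's weight-distribution analysis:

```python
import zlib

frame = b"example Ethernet payload"
fcs = zlib.crc32(frame)                 # 32-bit frame check sequence

# The receiver recomputes the CRC; any mismatch flags a detectable error.
corrupted = bytearray(frame)
corrupted[3] ^= 0x01                    # flip one bit in transit
print(zlib.crc32(bytes(corrupted)) != fcs)  # True: the error is detected
```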

  8. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  9. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society, together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, as well as representatives of academia, industry, and government, has developed a new error grid, called the surveillance error grid (SEG), as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors in BG levels measured by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient had type 1 or type 2 diabetes or was using insulin or not. No significant differences were noted between the responses of adult and pediatric clinicians or among the 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 groups. The data points of the SEG were classified into 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to
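
    The zone-classification idea behind such grids can be sketched in code. The following is a simplified, illustrative Clarke-style classifier, not the published SEG (which assigns 15 survey-derived risk zones) nor the exact CEG boundaries; all thresholds below are approximations for demonstration only.

```python
def error_grid_zone(ref, meas):
    """Assign a simplified Clarke-style risk zone to one monitor reading.

    Illustrative only: real grids (CEG, PEG, SEG) use piecewise boundary
    regions derived from clinical consensus; the SEG itself has 15
    survey-derived risk zones. Values are in mg/dL.
    """
    if (ref < 70 and meas < 70) or abs(meas - ref) <= 0.2 * ref:
        return "A"  # clinically accurate: within 20% or both hypoglycemic
    if ref >= 180 and meas < 70:
        return "E"  # reads hypoglycemia during actual hyperglycemia
    if ref < 70 and meas >= 180:
        return "E"  # reads hyperglycemia during actual hypoglycemia
    if (ref >= 180 and meas < 100) or (ref < 70 and meas >= 100):
        return "D"  # dangerous failure to detect an out-of-range value
    return "B"      # deviation unlikely to trigger harmful treatment

# score a batch of monitor/reference pairs
pairs = [(50, 55), (200, 60), (100, 118), (250, 130)]
zones = [error_grid_zone(r, m) for r, m in pairs]
```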

  10. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
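
    The positive/zero/negative prediction-error scheme described above corresponds to the standard delta-rule update used in textbook reinforcement-learning models. This is a Rescorla-Wagner sketch of that scheme, not the author's neuronal recordings or any specific model from the paper:

```python
def rw_update(value, reward, alpha=0.1):
    """One Rescorla-Wagner step: the prediction error drives learning."""
    delta = reward - value      # >0: better than predicted; <0: worse
    return value + alpha * delta, delta

v = 0.0
for _ in range(100):            # a reward of 1.0 delivered repeatedly
    v, delta = rw_update(v, 1.0)

# once the reward is fully predicted, the prediction error vanishes;
# omitting the predicted reward yields a negative prediction error
_, delta_predicted = rw_update(v, 1.0)   # ~ 0
_, delta_omitted = rw_update(v, 0.0)     # ~ -1
```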

  11. Radiation doses in volume-of-interest breast computed tomography—A Monte Carlo simulation study

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Chao-Jen, E-mail: cjlai3711@gmail.com; Zhong, Yuncheng; Yi, Ying; Wang, Tianpeng; Shaw, Chris C. [Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030-4009 (United States)

    2015-06-15

    Purpose: Cone beam breast computed tomography (breast CT) with true three-dimensional, nearly isotropic spatial resolution has been developed and investigated over the past decade to overcome the problem of lesions overlapping with breast anatomical structures on two-dimensional mammographic images. However, the ability of breast CT to detect small objects, such as tissue structure edges and small calcifications, is limited. To resolve this problem, the authors proposed and developed a volume-of-interest (VOI) breast CT technique to image a small VOI using a higher radiation dose to improve that region’s visibility. In this study, the authors performed Monte Carlo simulations to estimate average breast dose and average glandular dose (AGD) for the VOI breast CT technique. Methods: Electron–Gamma-Shower system code-based Monte Carlo codes were used to simulate breast CT. The Monte Carlo codes were validated using physical measurements of air kerma ratios and point doses in phantoms with an ion chamber and optically stimulated luminescence dosimeters. The validated full cone x-ray source was then collimated to simulate half cone beam x-rays to image digital pendant-geometry, hemi-ellipsoidal, homogeneous breast phantoms and to estimate breast doses with full field scans. Hemi-ellipsoidal homogeneous phantoms 13 cm in diameter and 10 cm long were used to simulate median breasts. Breast compositions of 25% and 50% volumetric glandular fractions (VGFs) were used to investigate their influence on breast dose. The simulated half cone beam x-rays were then collimated to a narrow x-ray beam with a 2.5 × 2.5 cm² field of view at the isocenter plane to perform VOI field scans. The Monte Carlo results for the full field scans and the VOI field scans were then used to estimate the AGD for the VOI breast CT technique. Results: The ratios of air kerma ratios and dose measurement results from the Monte Carlo simulation to those from the physical
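
    The sampling logic behind such dose simulations can be illustrated with a toy Monte Carlo. This sketch only samples photon free paths in a homogeneous slab and is in no way a substitute for the EGS-based codes used in the study; the attenuation coefficient and geometry below are invented for illustration.

```python
import random

def absorbed_fraction(mu, thickness_cm, n_photons=100_000, seed=0):
    """Toy Monte Carlo: fraction of photons interacting inside a slab.

    Samples each photon's free path from an exponential distribution with
    total attenuation coefficient mu (cm^-1) and treats any interaction as
    local absorption -- a gross simplification of EGS-style dose codes.
    The value mu = 0.2 cm^-1 used below is purely illustrative.
    """
    rng = random.Random(seed)
    absorbed = sum(rng.expovariate(mu) < thickness_cm
                   for _ in range(n_photons))
    return absorbed / n_photons

f = absorbed_fraction(mu=0.2, thickness_cm=10.0)
# analytic check: 1 - exp(-mu * t) = 1 - exp(-2) ~ 0.8647
```

Real codes additionally track scattered photons, secondary electrons, and energy deposition per voxel; the Monte Carlo principle of averaging many sampled histories is the same.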

  12. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  13. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.
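
    As a hedged illustration of why proofreading changes the attainable error, the classic Hopfield-style limits (error f without proofreading, f² with one ideal proofreading step) can be computed directly. This is textbook material consistent with, but not identical to, the universal thermodynamic expression derived in the paper:

```python
import math

def copy_error(delta_g_over_kt, proofread=False):
    """Minimum error fraction for right-vs-wrong substrate discrimination.

    Without proofreading the equilibrium limit is f = exp(-dG/kT); an
    ideal Hopfield-style proofreading cycle applies the discrimination
    once more, giving f**2. These are illustrative textbook limits, not
    the paper's bound in terms of entropy production.
    """
    f = math.exp(-delta_g_over_kt)
    return f * f if proofread else f

e_plain = copy_error(4.0)                  # ~ 1.8e-2
e_proof = copy_error(4.0, proofread=True)  # ~ 3.4e-4
```

The improvement from f to f² is paid for by the chemical driving of the proofreading reaction, which is exactly the limitation the abstract's final sentence refers to.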

  14. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

    Background Although health services are designed and implemented to improve human health, errors in health services are a very common phenomenon and sometimes even fatal in this field. Medical errors and their cost are global issues with serious consequences for the patients’ community that are preventable and require serious attention. Objectives The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods This descriptive research was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected through observation and interview of nine nurses in this section over a period of four months. Human error classification was based on the Rose and Rose and the Swain and Guttmann models. According to the HEIST work sheets, the guide questions were answered and error causes were identified after the determination of the type of errors. Results In total, 527 errors were detected. Performing an operation on the wrong path had the highest frequency, at 150, and the second, with a frequency of 136, was doing tasks later than the deadline. Management causes, with a frequency of 451, ranked first among identified errors. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the most influential factor in the occurrence of human errors. Conclusions Finally, in order to prevent the occurrence and reduce the consequences of the identified errors, the following suggestions were proposed: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring professional workforce, and equipping the work space with appropriate facilities and equipment.

  15. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
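
    The classical-versus-Berkson contrast the authors exploit can be reproduced in a few lines of simulation. In the additive textbook case sketched below, classical error attenuates a regression slope by the reliability ratio while Berkson error leaves it unbiased; note that the abstract works with multiplicative log-scale errors in a Poisson model, where the directions of bias can differ. All numbers here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
beta = 1.0

def ols_slope(x, y):
    # simple least-squares slope through centered data
    x = x - x.mean()
    return float(x @ (y - y.mean()) / (x @ x))

# Classical error: the observed exposure is truth plus noise -> attenuation
x_true = rng.normal(0.0, 1.0, n)
y = beta * x_true + rng.normal(0.0, 1.0, n)
w = x_true + rng.normal(0.0, 1.0, n)     # measured with classical error
slope_classical = ols_slope(w, y)        # ~ beta * 1/(1+1) = 0.5

# Berkson error: truth scatters around the assigned value -> no attenuation
z = rng.normal(0.0, 1.0, n)              # assigned exposure (e.g. a monitor)
x_b = z + rng.normal(0.0, 1.0, n)        # individual true exposure
y_b = beta * x_b + rng.normal(0.0, 1.0, n)
slope_berkson = ols_slope(z, y_b)        # ~ beta = 1.0
```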

  16. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the ISO/IEC 17025:2005 standard (General requirements for the competence of testing and calibration laboratories) during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  17. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  18. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  19. An overview of intravenous-related medication administration errors as reported to MEDMARX, a national medication error-reporting program.

    Science.gov (United States)

    Hicks, Rodney W; Becker, Shawn C

    2006-01-01

    Medication errors can be harmful, especially if they involve the intravenous (IV) route of administration. A mixed-methodology study using a 5-year review of 73,769 IV-related medication errors from a national medication error reporting program indicates that between 3% and 5% of these errors were harmful. The leading type of error was omission, and the leading cause of error involved clinician performance deficit. Using content analysis, three themes-product shortage, calculation errors, and tubing interconnectivity-emerge and appear to predispose patients to harm. Nurses often participate in IV therapy, and these findings have implications for practice and patient safety. Voluntary medication error-reporting programs afford an opportunity to improve patient care and to further understanding about the nature of IV-related medication errors.

  20. Quantifying and handling errors in instrumental measurements using the measurement error theory

    DEFF Research Database (Denmark)

    Andersen, Charlotte Møller; Bro, R.; Brockhoff, P.B.

    2003-01-01

    … This is a new way of using the measurement error theory. Reliability ratios illustrate that the models for the two fish species are influenced differently by the error. However, the error seems to influence the predictions of the two reference measures in the same way. The effect of using replicated x … measurements. A new general formula is given for how to correct the least squares regression coefficient when a different number of replicated x-measurements is used for prediction than for calibration. It is shown that the correction should be applied when the number of replicates in prediction is less than …
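
    The correction described can be sketched with the standard reliability-ratio formula for a slope estimated when x is the mean of k replicate measurements; the paper's more general formula for unequal replicate numbers in calibration and prediction is not reproduced here.

```python
def corrected_slope(b_naive, var_x, var_e, k):
    """De-attenuate an OLS slope when x is the mean of k replicates.

    Classical measurement-error result: the naive slope estimates
    beta * var_x / (var_x + var_e / k); dividing by that reliability
    ratio recovers beta. A sketch of the standard formula only, not
    the paper's generalization.
    """
    reliability = var_x / (var_x + var_e / k)
    return b_naive / reliability

# illustrative numbers: true-x variance 4, error variance 1, 2 replicates
b = corrected_slope(b_naive=0.8, var_x=4.0, var_e=1.0, k=2)
# reliability = 4 / 4.5 = 0.889, so the corrected slope is 0.9
```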

  1. Metabolic connectivity by interregional correlation analysis using statistical parametric mapping (SPM) and FDG brain PET; methodological development and patterns of metabolic connectivity in adults

    International Nuclear Information System (INIS)

    Lee, Dong Soo; Oh, Jungsu S.; Lee, Jae Sung; Lee, Myung Chul; Kang, Hyejin; Kim, Heejung; Park, Hyojin

    2008-01-01

    Regionally connected areas of the resting brain can be detected by fluorodeoxyglucose-positron emission tomography (FDG-PET). Voxel-wise metabolic connectivity was examined, and normative data were established by performing interregional correlation analysis on statistical parametric mapping of FDG-PET data. Characteristics of seed volumes of interest (VOIs) as functional brain units were represented by their locations, sizes, and the independent methods of their determination. Seed brain areas were identified as population-based gyral VOIs (n=70) or as population-based cytoarchitectonic Brodmann areas (BA; n=28). FDG uptakes in these areas were used as independent variables in a general linear model to search for voxels correlated with average seed VOI counts. Positive correlations were searched in entire brain areas. In normal adults, one third of gyral VOIs yielded correlations that were confined to themselves, but in the others, correlated voxels extended to adjacent areas and/or contralateral homologous regions. In tens of these latter areas with extensive connectivity, correlated voxels were found across midline, and asymmetry was observed in the patterns of connectivity of left and right homologous seed VOIs. Most of the available BAs yielded correlations reaching contralateral homologous regions and/or neighboring areas. Extents of metabolic connectivity were not found to be related to seed VOI size or to the methods used to define seed VOIs. These findings indicate that patterns of metabolic connectivity of functional brain units depend on their regional locations. We propose that interregional correlation analysis of FDG-PET data offers a means of examining voxel-wise regional metabolic connectivity of the resting human brain. (orig.)
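
    The core computation, correlating each voxel's uptake with the mean uptake of a seed VOI across subjects, can be sketched as follows. The array shapes and data are synthetic, and SPM's spatial normalization, smoothing, and statistical thresholding are all omitted.

```python
import numpy as np

def seed_correlation_map(uptake, seed_mask):
    """Correlate every voxel with the mean seed-VOI uptake across subjects.

    uptake:    (n_subjects, n_voxels) normalized FDG uptake values.
    seed_mask: boolean (n_voxels,) marking the seed VOI.
    Returns a Pearson r per voxel -- the interregional correlation idea,
    minus the SPM preprocessing and thresholding pipeline.
    """
    seed = uptake[:, seed_mask].mean(axis=1)        # per-subject seed mean
    seed_z = (seed - seed.mean()) / seed.std()
    vox_z = (uptake - uptake.mean(0)) / uptake.std(0)
    return vox_z.T @ seed_z / len(seed)

# synthetic example: voxels 0-2 share a signal, voxels 3-5 are unrelated
rng = np.random.default_rng(1)
shared = rng.normal(size=(50, 1))                   # 50 "subjects"
uptake = np.hstack([shared + 0.1 * rng.normal(size=(50, 3)),
                    rng.normal(size=(50, 3))])
mask = np.array([True, True, False, False, False, False])
r = seed_correlation_map(uptake, mask)
# voxel 2 correlates with the seed (voxels 0-1); voxels 3-5 do not
```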

  2. Metabolic connectivity by interregional correlation analysis using statistical parametric mapping (SPM) and FDG brain PET; methodological development and patterns of metabolic connectivity in adults

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Dong Soo; Oh, Jungsu S.; Lee, Jae Sung; Lee, Myung Chul [Seoul National University, College of Medicine, Department of Nuclear Medicine, Jongno-gu, Seoul (Korea); Kang, Hyejin [Seoul National University, College of Medicine, Department of Nuclear Medicine, Jongno-gu, Seoul (Korea); Seoul National University, Programs in Brain and Neuroscience, Seoul (Korea); Kim, Heejung; Park, Hyojin [Seoul National University, College of Medicine, Department of Nuclear Medicine, Jongno-gu, Seoul (Korea); Seoul National University, Interdisciplinary Program in Cognitive Science, Seoul (Korea)

    2008-09-15

    Regionally connected areas of the resting brain can be detected by fluorodeoxyglucose-positron emission tomography (FDG-PET). Voxel-wise metabolic connectivity was examined, and normative data were established by performing interregional correlation analysis on statistical parametric mapping of FDG-PET data. Characteristics of seed volumes of interest (VOIs) as functional brain units were represented by their locations, sizes, and the independent methods of their determination. Seed brain areas were identified as population-based gyral VOIs (n=70) or as population-based cytoarchitectonic Brodmann areas (BA; n=28). FDG uptakes in these areas were used as independent variables in a general linear model to search for voxels correlated with average seed VOI counts. Positive correlations were searched in entire brain areas. In normal adults, one third of gyral VOIs yielded correlations that were confined to themselves, but in the others, correlated voxels extended to adjacent areas and/or contralateral homologous regions. In tens of these latter areas with extensive connectivity, correlated voxels were found across midline, and asymmetry was observed in the patterns of connectivity of left and right homologous seed VOIs. Most of the available BAs yielded correlations reaching contralateral homologous regions and/or neighboring areas. Extents of metabolic connectivity were not found to be related to seed VOI size or to the methods used to define seed VOIs. These findings indicate that patterns of metabolic connectivity of functional brain units depend on their regional locations. We propose that interregional correlation analysis of FDG-PET data offers a means of examining voxel-wise regional metabolic connectivity of the resting human brain. (orig.)

  3. Pancreas segmentation from 3D abdominal CT images using patient-specific weighted subspatial probabilistic atlases

    Science.gov (United States)

    Karasawa, Kenichi; Oda, Masahiro; Hayashi, Yuichiro; Nimura, Yukitaka; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Rueckert, Daniel; Mori, Kensaku

    2015-03-01

    Abdominal organ segmentations from CT volumes are now widely used in computer-aided diagnosis and surgery assistance systems. Among abdominal organs, the pancreas is especially difficult to segment because of the large individual differences in its shape and position. In this paper, we propose a new pancreas segmentation method for 3D abdominal CT volumes using patient-specific weighted-subspatial probabilistic atlases. First, we perform normalization of organ shapes in the training volumes and an input volume. We extract the volume of interest (VOI) of the pancreas from the training volumes and the input volume. We divide each training VOI and the input VOI into cubic regions. We use a nonrigid registration method to register these cubic regions of the training VOIs to the corresponding regions of the input VOI. Based on the registration results, we calculate similarities between each cubic region of a training VOI and the corresponding region of the input VOI. We select the cubic regions of training volumes having the top N similarities in each cubic region. We subspatially construct probabilistic atlases weighted by the similarities in each cubic region. After integrating these probabilistic atlases in cubic regions into one, we perform a rough-to-precise segmentation of the pancreas using the atlas. The results of the experiments showed that utilization of the training volumes having the top N similarities in each cubic region led to good results for pancreas segmentation. The Jaccard index and the average surface distance of the result were 58.9% and 2.04 mm on average, respectively.
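
    The similarity-weighted atlas construction can be illustrated on toy binary masks. Registration and the rough-to-precise segmentation step are omitted, and the similarity values below are invented.

```python
import numpy as np

def weighted_atlas(masks, similarities, top_n=2):
    """Fuse binary training masks into a probabilistic atlas.

    Keeps the top_n most similar training cases and weights their masks
    by (normalized) similarity -- the weighting idea from patient-specific
    atlases, without the shape normalization or registration pipeline.
    """
    masks = np.asarray(masks, dtype=float)
    sims = np.asarray(similarities, dtype=float)
    keep = np.argsort(sims)[::-1][:top_n]    # indices of most similar cases
    w = sims[keep] / sims[keep].sum()        # normalized weights
    return np.tensordot(w, masks[keep], axes=1)

# three training masks over four voxels, with invented similarities
masks = [[1, 1, 0, 0],
         [1, 0, 0, 0],
         [0, 1, 1, 0]]
atlas = weighted_atlas(masks, similarities=[0.6, 0.3, 0.1], top_n=2)
# top two cases (sims 0.6 and 0.3) get weights 2/3 and 1/3
```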

  4. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  5. Error threshold ghosts in a simple hypercycle with error prone self-replication

    International Nuclear Information System (INIS)

    Sardanyes, Josep

    2008-01-01

    A delayed transition due to mutation processes is shown to happen in a simple hypercycle composed of two indistinguishable molecular species with error prone self-replication. The appearance of a ghost near the hypercycle error threshold causes a delay in the extinction and thus in the loss of information of the mutually catalytic replicators, in a kind of information memory. The extinction time, τ, scales near the bifurcation threshold according to the universal square-root scaling law, i.e. τ ∼ (Q_hc − Q)^(−1/2), typical of dynamical systems close to a saddle-node bifurcation. Here, Q_hc represents the bifurcation point, named the hypercycle error threshold, involved in the change between the asymptotic stability phase and the so-called Random Replication State (RRS) of the hypercycle, and the parameter Q is the replication quality factor. The ghost involves a longer transient towards extinction once the saddle-node bifurcation has occurred, being extremely long near the bifurcation threshold. The role of this dynamical effect is expected to be relevant in fluctuating environments. Such a phenomenon should also be found in larger hypercycles when considering the hypercycle species in competition with their error tail. The implications of the ghost for the survival and evolution of error prone self-replicating molecules with hypercyclic organization are discussed
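
    The square-root scaling of the ghost transient can be checked numerically on the saddle-node normal form dx/dt = a + x², a generic stand-in (not the hypercycle equations themselves) whose analytic passage time is π/√a:

```python
def passage_time(a, x0=-10.0, x1=10.0, dt=1e-4):
    """Time to traverse the saddle-node ghost for dx/dt = a + x**2.

    For small a > 0 the transit time scales like a**-0.5 (analytically
    about pi / sqrt(a)), mirroring the tau ~ (Q_hc - Q)^(-1/2) law
    quoted for the hypercycle. Forward-Euler sketch of the normal form
    only, not the hypercycle model.
    """
    x, t = x0, 0.0
    while x < x1:
        x += (a + x * x) * dt
        t += dt
    return t

t1 = passage_time(0.01)   # ~ pi / 0.1, around 31
t2 = passage_time(0.04)   # ~ pi / 0.2, around 16
# halving sqrt(a) doubles the transient, consistent with the -1/2 exponent
```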

  6. Mendelian susceptibility to mycobacterial disease: genetic, immunological, and clinical features of inborn errors of IFN-γ immunity

    Science.gov (United States)

    Bustamante, Jacinta; Boisson-Dupuis, Stéphanie; Abel, Laurent; Casanova, Jean-Laurent

    2014-01-01

    Mendelian susceptibility to mycobacterial disease (MSMD) is a rare condition characterized by predisposition to clinical disease caused by weakly virulent mycobacteria, such as BCG vaccines and environmental mycobacteria, in otherwise healthy individuals with no overt abnormalities in routine hematological and immunological tests. MSMD designation does not recapitulate all the clinical features, as patients are also prone to salmonellosis, candidiasis and tuberculosis, and more rarely to infections with other intramacrophagic bacteria, fungi, or parasites, and even, perhaps, a few viruses. Since 1996, nine MSMD-causing genes, including seven autosomal (IFNGR1, IFNGR2, STAT1, IL12B, IL12RB1, ISG15, and IRF8) and two X-linked (NEMO, CYBB) genes have been discovered. The high level of allelic heterogeneity has already led to the definition of 18 different disorders. The nine gene products are physiologically related, as all are involved in IFN-γ-dependent immunity. These disorders impair the production of (IL12B, IL12RB1, IRF8, ISG15, NEMO) or the response to (IFNGR1, IFNGR2, STAT1, IRF8, CYBB) IFN-γ. These defects account for only about half the known MSMD cases. Patients with MSMD-causing genetic defects may display other infectious diseases, or even remain asymptomatic. Most of these inborn errors do not show complete clinical penetrance for the case-definition phenotype of MSMD. We review here the genetic, immunological, and clinical features of patients with inborn errors of IFN-γ-dependent immunity. PMID:25453225

  7. Total Survey Error for Longitudinal Surveys

    NARCIS (Netherlands)

    Lynn, Peter; Lugtig, P.J.

    2016-01-01

    This article describes the application of the total survey error paradigm to longitudinal surveys. Several aspects of survey error, and of the interactions between different types of error, are distinct in the longitudinal survey context. Furthermore, error trade-off decisions in survey design and

  8. On-Error Training (Book Excerpt).

    Science.gov (United States)

    Fukuda, Ryuji

    1985-01-01

    This excerpt from "Managerial Engineering: Techniques for Improving Quality and Productivity in the Workplace" describes the development, objectives, and use of On-Error Training (OET), a method which trains workers to learn from their errors. Also described is New Joharry's Window, a performance-error data analysis technique used in…

  9. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  10. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in decision making process. The study was focused on a research question such as: what could be the human error as a potential of decision failure in evaluation of the alternatives in the process of decision making. Two case studies were selected from the literature and analyzed to find the human errors contribute to decision fail. Then the analysis of human errors was linked with mental models in evaluation of alternative step. The results o...

  11. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  12. Learning without Borders: A Review of the Implementation of Medical Error Reporting in Médecins Sans Frontières.

    Directory of Open Access Journals (Sweden)

    Leslie Shanks

    To analyse the results from the first 3 years of implementation of a medical error reporting system in Médecins Sans Frontières-Operational Centre Amsterdam (MSF) programs. A medical error reporting policy was developed with input from frontline workers and introduced to the organisation in June 2010. The definition of medical error used was "the failure of a planned action to be completed as intended or the use of a wrong plan to achieve an aim." All confirmed error reports were entered into a database without the use of personal identifiers. 179 errors were reported from 38 projects in 18 countries over the period of June 2010 to May 2013. The rate of reporting was 31, 42, and 106 incidents/year for reporting years 1, 2 and 3, respectively. The majority of errors were categorized as dispensing errors (62 cases, 34.6%), errors or delays in diagnosis (24 cases, 13.4%) and inappropriate treatment (19 cases, 10.6%). The impact of the error was categorized as no harm (58, 32.4%), harm (70, 39.1%), death (42, 23.5%) and unknown in 9 (5.0%) reports. Disclosure to the patient took place in 34 cases (19.0%), did not take place in 46 (25.7%), was not applicable for 5 (2.8%) cases and was not reported for 94 (52.5%). Remedial actions introduced at headquarters level included guideline revisions and changes to medical supply procedures. At field level, improvements included increased training and supervision, adjustments in staffing levels, and adaptations to the organization of the pharmacy. It was feasible to implement a voluntary reporting system for medical errors despite the complex contexts in which MSF intervenes. The reporting policy led to system changes that improved patient safety and accountability to patients. Challenges remain in achieving widespread acceptance of the policy, as evidenced by the low reporting and disclosure rates.

  13. Understanding human management of automation errors

    Science.gov (United States)

    McBride, Sara E.; Rogers, Wendy A.; Fisk, Arthur D.

    2013-01-01

    Automation has the potential to aid humans with a diverse set of tasks and support overall system performance. Automated systems are not always reliable, and when automation errs, humans must engage in error management, which is the process of detecting, understanding, and correcting errors. However, this process of error management in the context of human-automation interaction is not well understood. Therefore, we conducted a systematic review of the variables that contribute to error management. We examined relevant research in human-automation interaction and human error to identify critical automation, person, task, and emergent variables. We propose a framework for management of automation errors to incorporate and build upon previous models. Further, our analysis highlights variables that may be addressed through design and training to positively influence error management. Additional efforts to understand the error management process will contribute to automation designed and implemented to support safe and effective system performance. PMID:25383042

  14. Predictive validity of different definitions of hypertension for type 2 diabetes.

    Science.gov (United States)

    Gulliford, Martin C; Charlton, Judith; Latinovic, Radoslav

    2006-01-01

    Models to predict diabetes or pre-diabetes often incorporate the assessment of hypertension, but proposed definitions for 'hypertension' are inconsistent. We compared the classifications obtained using different definitions for 'hypertension'. We compared records for 5158 cases from 181 family practices, who were later diagnosed with diabetes and prescribed oral hypoglycaemic drugs, with 5158 controls, matched for age, sex and family practice, who were never diagnosed with diabetes. We compared classifications obtained using definitions of hypertension based on medical diagnoses, prescription of blood pressure lowering drugs or both. We compared family practices where diagnosis or prescribing varied systematically. Classification of hypertension based on recorded medical diagnoses gave a sensitivity of 32.2% for diabetes (95% confidence interval from 30.4 to 34.1%). Prescription of blood pressure lowering drugs in the 12 months before diagnosis gave a sensitivity of 47.2% (45.7 to 48.7%). Combining either a medical diagnosis or a blood pressure lowering prescription gave a sensitivity of 52.8% (51.3 to 54.3%). In family practices where hypertension was least frequently recorded, a diagnosis of hypertension gave a sensitivity of 19.5% for diabetes (17.4 to 21.6%) compared with 50.8% (46.3 to 55.3%) in the highest quintile. Prescription of blood pressure lowering drugs gave a sensitivity of 36.1% (33.1 to 39.0%) in the lowest prescribing practices but 58.2% (55.5 to 61.0%) in the highest quintile. Misclassification errors depend on the definition of hypertension and its implementation in practice. Definitions of hypertension that depend on access or quality in health care should be avoided.
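
    The sensitivity figures quoted above pair a point estimate with a 95% confidence interval. A minimal sketch of that computation, using a simple Wald interval (the paper's exact interval method is not specified here) and illustrative counts:

```python
import math

def sensitivity_with_ci(true_pos, false_neg, z=1.96):
    """Sensitivity of a classification rule with a Wald 95% CI.

    A generic sketch; the paper's exact CI method may differ.
    """
    n = true_pos + false_neg
    p = true_pos / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts: suppose 1661 of 5158 diabetes cases carried a
# recorded hypertension diagnosis.
sens, lo, hi = sensitivity_with_ci(1661, 5158 - 1661)
print(f"sensitivity {sens:.1%} (95% CI {lo:.1%} to {hi:.1%})")
```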

  15. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  16. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
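
    The burst and gap statistics named in the first objective reduce to run-length analysis of a per-byte error-flag stream. A hedged sketch of that reduction (the project's actual hardware/software interface is not described here):

```python
def burst_gap_stats(error_flags):
    """Run-length statistics of a byte-error flag sequence.

    error_flags: iterable of 0/1, one flag per byte read from disc.
    Returns (burst_lengths, gap_lengths): lengths of consecutive runs
    of errored and error-free bytes, respectively. A sketch of the kind
    of single-byte-resolution statistic the project describes, not its
    actual measurement system.
    """
    bursts, gaps = [], []
    run_val, run_len = None, 0
    for f in error_flags:
        if f == run_val:
            run_len += 1
        else:
            if run_val == 1:
                bursts.append(run_len)
            elif run_val == 0:
                gaps.append(run_len)
            run_val, run_len = f, 1
    if run_val == 1:
        bursts.append(run_len)
    elif run_val == 0:
        gaps.append(run_len)
    return bursts, gaps
```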

  17. Human error risk management for engineering systems: a methodology for design, safety assessment, accident investigation and training

    International Nuclear Information System (INIS)

    Cacciabue, P.C.

    2004-01-01

    The objective of this paper is to tackle methodological issues associated with the inclusion of cognitive and dynamic considerations into Human Reliability methods. A methodology called Human Error Risk Management for Engineering Systems is presented that offers a 'roadmap' for selecting and consistently applying Human Factors approaches in different areas of application and also contains a 'body' of possible methods and techniques of its own. Two types of possible application are discussed to demonstrate practical applications of the methodology. Specific attention is dedicated to the issue of data collection and definition from specific field assessments.

  18. TH-CD-206-12: Image-Based Motion Estimation for Plaque Visualization in Coronary Computed Tomography Angiography

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, X; Sisniega, A; Zbijewski, W; Stayman, J [Johns Hopkins University, Baltimore, MD (United States); Contijoch, F; McVeigh, E [University of California, San Diego, San Diego, CA (United States)

    2016-06-15

    Purpose: Visualization and quantification of coronary artery calcification and atherosclerotic plaque benefit from coronary artery motion (CAM) artifact elimination. This work applies a rigid linear motion model to a Volume of Interest (VoI) for motion estimation and compensation of image degradation in Coronary Computed Tomography Angiography (CCTA). Methods: In both simulation and testbench experiments, translational CAM was generated by displacement of the imaging object (i.e. simulated coronary artery and explanted human heart) by ∼8 mm, approximating the motion of a main coronary branch. Rotation was assumed to be negligible. A motion-degraded region containing a calcification was selected as the VoI. Local residual motion was assumed to be rigid and linear over the acquisition window, simulating motion observed during diastasis. The (negative) magnitude of the image gradient of the reconstructed VoI was chosen as the motion-estimation objective and was minimized with the Covariance Matrix Adaptation Evolution Strategy (CMAES). Results: Reconstruction incorporating the estimated CAM yielded significant recovery of fine calcification structures as well as reduced motion artifacts within the selected local region. The compensated reconstruction was further evaluated using two image similarity metrics, the structural similarity index (SSIM) and Root Mean Square Error (RMSE). At the calcification site, the compensated data achieved a 3% increase in SSIM and a 91.2% decrease in RMSE in comparison with the uncompensated reconstruction. Conclusion: Results demonstrate the feasibility of our image-based motion estimation method exploiting a local rigid linear model for CAM compensation. The method shows promising preliminary results for the application of such estimation in CCTA. Further work will involve motion estimation of complex motion-corrupted patient data acquired from clinical CT scanners.
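
    The motion-estimation objective described above, the negative magnitude of the image gradient over the VoI, can be sketched as follows. The simulated "motion blur" by averaging shifted copies is an illustrative stand-in for CAM degradation, and the CMA-ES search itself is omitted:

```python
import numpy as np

def gradient_objective(voi):
    """Negative total image-gradient magnitude of a reconstructed VoI.

    Sharper (better motion-compensated) images score lower (more
    negative). A sketch of the cost function only; the paper minimizes
    it over rigid linear motion parameters with CMA-ES.
    """
    gy, gx = np.gradient(voi.astype(float))
    return -np.sum(np.hypot(gx, gy))

# A thin "calcification" vs. the same structure smeared by simulated
# translational motion (averaging over shifted copies):
sharp = np.zeros((32, 32)); sharp[14:18, 14:16] = 1.0
blurred = np.mean([np.roll(sharp, k, axis=1) for k in range(6)], axis=0)
```

    The sharp structure yields a lower (more negative) objective value than its motion-smeared counterpart, which is what drives the optimizer toward the correct motion estimate.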

  19. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  20. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates.

    Science.gov (United States)

    Fottrell, Edward; Byass, Peter; Berhane, Yemane

    2008-03-25

    As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. The low sensitivity of parameter estimates and regression analyses to significant amounts of
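
    The study's error-injection approach can be sketched as follows; the field name and corruption function are illustrative, not the BRHP schema:

```python
import random

def inject_errors(records, field, corrupt, rate, seed=0):
    """Return a copy of `records` with `field` randomly corrupted.

    A minimal sketch of the study's approach: corrupt a fraction `rate`
    of values through `corrupt`, then compare summary statistics against
    the unperturbed 'gold standard' dataset.
    """
    rng = random.Random(seed)
    out = []
    for rec in records:
        rec = dict(rec)
        if rng.random() < rate:
            rec[field] = corrupt(rec[field])
        out.append(rec)
    return out

# Example: corrupt 5% of sex codes, then compare the sex composition.
cohort = [{"sex": "M"} if i % 2 else {"sex": "F"} for i in range(10000)]
noisy = inject_errors(cohort, "sex", lambda s: "F" if s == "M" else "M", 0.05)
ratio = sum(r["sex"] == "M" for r in noisy) / len(noisy)
```

    With a balanced corruption like this, aggregate composition changes little even at a 5% error rate, mirroring the robustness result reported above.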

  1. Demonstrating the robustness of population surveillance data: implications of error rates on demographic and mortality estimates

    Directory of Open Access Journals (Sweden)

    Berhane Yemane

    2008-03-01

    Full Text Available Abstract Background As in any measurement process, a certain amount of error may be expected in routine population surveillance operations such as those in demographic surveillance sites (DSSs). Vital events are likely to be missed and errors made no matter what method of data capture is used or what quality control procedures are in place. The extent to which random errors in large, longitudinal datasets affect overall health and demographic profiles has important implications for the role of DSSs as platforms for public health research and clinical trials. Such knowledge is also of particular importance if the outputs of DSSs are to be extrapolated and aggregated with realistic margins of error and validity. Methods This study uses the first 10-year dataset from the Butajira Rural Health Project (BRHP) DSS, Ethiopia, covering approximately 336,000 person-years of data. Simple programmes were written to introduce random errors and omissions into new versions of the definitive 10-year Butajira dataset. Key parameters of sex, age, death, literacy and roof material (an indicator of poverty) were selected for the introduction of errors based on their obvious importance in demographic and health surveillance and their established significant associations with mortality. Defining the original 10-year dataset as the 'gold standard' for the purposes of this investigation, population, age and sex compositions and Poisson regression models of mortality rate ratios were compared between each of the intentionally erroneous datasets and the original 'gold standard' 10-year data. Results The composition of the Butajira population was well represented despite introducing random errors, and differences between population pyramids based on the derived datasets were subtle. Regression analyses of well-established mortality risk factors were largely unaffected even by relatively high levels of random errors in the data. Conclusion The low sensitivity of parameter

  2. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know much about error analysis and related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning especially in teaching English for adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are more frequently found than morphology errors, especially verb phrase errors. It is important for teachers to know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it is advantageous for teachers to know what errors students frequently make in their learning, so that they can give solutions to the students for better English language learning achievement. DOI: https://doi.org/10.24071/llt.2015.180205

  3. Overview of Akatsuki data products: definition of data levels, method and accuracy of geometric correction

    Science.gov (United States)

    Ogohara, Kazunori; Takagi, Masahiro; Murakami, Shin-ya; Horinouchi, Takeshi; Yamada, Manabu; Kouyama, Toru; Hashimoto, George L.; Imamura, Takeshi; Yamamoto, Yukio; Kashimura, Hiroki; Hirata, Naru; Sato, Naoki; Yamazaki, Atsushi; Satoh, Takehiko; Iwagami, Naomoto; Taguchi, Makoto; Watanabe, Shigeto; Sato, Takao M.; Ohtsuki, Shoko; Fukuhara, Tetsuya; Futaguchi, Masahiko; Sakanoi, Takeshi; Kameda, Shingo; Sugiyama, Ko-ichiro; Ando, Hiroki; Lee, Yeon Joo; Nakamura, Masato; Suzuki, Makoto; Hirose, Chikako; Ishii, Nobuaki; Abe, Takumi

    2017-12-01

    We provide an overview of data products from observations by the Japanese Venus Climate Orbiter, Akatsuki, and describe the definition and content of each data-processing level. Levels 1 and 2 consist of non-calibrated and calibrated radiance (or brightness temperature), respectively, as well as geometry information (e.g., illumination angles). Level 3 data are global-grid data in the regular longitude-latitude coordinate system, produced from the contents of Level 2. Non-negligible errors in navigational data and instrumental alignment can result in serious errors in the geometry calculations. Such errors cause mismapping of the data and lead to inconsistencies between radiances and illumination angles, along with errors in cloud-motion vectors. Thus, we carefully correct the boresight pointing of each camera by fitting an ellipse to the observed Venusian limb to provide improved longitude-latitude maps for Level 3 products, if possible. The accuracy of the pointing correction is also estimated statistically by simulating observed limb distributions. The results show that our algorithm successfully corrects instrumental pointing and will enable a variety of studies on the Venusian atmosphere using Akatsuki data. [Figure not available: see fulltext.]
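
    The limb-fitting step can be illustrated with a linear least-squares conic fit: fit an ellipse to limb points, then use the offset between the fitted center and its predicted position to correct pointing. This is a simplified sketch of the idea, not the Akatsuki pipeline's actual algorithm, and the pixel coordinates below are illustrative:

```python
import numpy as np

def fit_conic_center(x, y):
    """Least-squares conic fit A x^2 + B xy + C y^2 + D x + E y = 1,
    returning the conic's center. The offset between this center and
    the ephemeris-predicted limb center would give a pointing error.
    """
    M = np.column_stack([x * x, x * y, y * y, x, y])
    A, B, C, D, E = np.linalg.lstsq(M, np.ones_like(x), rcond=None)[0]
    # The center solves the gradient conditions:
    #   2A xc + B yc + D = 0,  B xc + 2C yc + E = 0.
    xc, yc = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    return xc, yc

# Synthetic limb: an ellipse centered at (320, 240) with semi-axes 100 and 80.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
xc, yc = fit_conic_center(320.0 + 100 * np.cos(t), 240.0 + 80 * np.sin(t))
```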

  4. The impact of transmission errors on progressive 720 lines HDTV coded with H.264

    Science.gov (United States)

    Brunnström, Kjell; Stålenbring, Daniel; Pettersson, Martin; Gustafsson, Jörgen

    2010-02-01

    TV sent over networks based on the Internet Protocol, i.e. IPTV, is moving towards high definition (HDTV). There has been quite a lot of work on how HDTV is affected by different codecs and bitrates, but the impact of transmission errors over IP networks has been less studied. The study focused on the H.264-encoded 1280x720 progressive HDTV format and compared three concealment methods at different packet loss rates: one included in a proprietary decoder, one that is part of FFMPEG, and freezing of different lengths. The target is to simulate what typical IPTV set-top boxes will do when encountering packet loss. Another aim was to study whether presentation upscaled to the full HDTV screen, or pixel-mapped in a smaller area in the center of the screen, would have an effect on the quality. The results show that there were differences between the packet loss concealment methods in FFMPEG and in the proprietary codec. Freezing seemed to have a similar effect as reported before. For low rates of transmission errors the coding impairments have an impact on the quality, but for higher rates of transmission errors they do not affect the quality, since they become overshadowed by the transmission errors. An interesting effect was discovered whereby the higher-bitrate videos go from having higher quality at lower degrees of packet loss to having lower quality than the lower-bitrate video at higher packet loss. The different ways of presenting the video, i.e. upscaled or not upscaled, differed significantly at the 95% level, but only marginally.

  5. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
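
    The core idea above, evaluating the error (and the line-search minimum along the conjugate gradient direction) on a subset of rays rather than all of them, can be sketched for a linear least-squares system. The toy system and names are illustrative, not the patented implementation:

```python
import numpy as np

def step_along_direction(A, b, x, d, ray_subset):
    """Exact minimizer of the subsampled least-squares error
    ||A_s (x + t d) - b_s||^2 along direction d, where A_s, b_s keep
    only the rays (rows) in `ray_subset`. Sketch of the idea of using
    an approximate, subset-based error in the 1-D minimization.
    """
    As, bs = A[ray_subset], b[ray_subset]
    r = As @ x - bs                # subsampled residual
    Ad = As @ d                    # direction through the subsampled system
    t = -(r @ Ad) / (Ad @ Ad)      # closed-form minimum of the 1-D quadratic
    return x + t * d

# Toy system: with all rays kept and d the steepest-descent direction,
# one step of this identity system lands on the exact solution.
A = np.eye(2); b = np.array([1.0, 2.0]); x0 = np.zeros(2)
d = -(A.T @ (A @ x0 - b))
x1 = step_along_direction(A, b, x0, d, np.arange(2))
```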

  6. Responses to Error: Sentence-Level Error and the Teacher of Basic Writing

    Science.gov (United States)

    Foltz-Gray, Dan

    2012-01-01

    In this article, the author talks about sentence-level error, error in grammar, mechanics, punctuation, usage, and the teacher of basic writing. He states that communities are crawling with teachers and administrators and parents and state legislators and school board members who are engaged in sometimes rancorous debate over what to do about…

  7. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

    the definition and method used to investigate MAEs. The review also illustrated the complexity and multifaceted nature of MAEs. Therefore, there is a need to develop a set of safety measures to tackle these errors in pediatric practice. Keywords: medication administration errors, children's hospital, pediatric, nature, incidence, intervention

  8. Inborn Errors of Metabolism with Acidosis: Organic Acidemias and Defects of Pyruvate and Ketone Body Metabolism.

    Science.gov (United States)

    Schillaci, Lori-Anne P; DeBrosse, Suzanne D; McCandless, Shawn E

    2018-04-01

    When a child presents with high-anion gap metabolic acidosis, the pediatrician can proceed with confidence by recalling some basic principles. Defects of organic acid, pyruvate, and ketone body metabolism that present with acute acidosis are reviewed. Flowcharts for identifying the underlying cause and initiating life-saving therapy are provided. By evaluating electrolytes, blood sugar, lactate, ammonia, and urine ketones, the provider can determine the likelihood of an inborn error of metabolism. Freezing serum, plasma, and urine samples during the acute presentation for definitive diagnostic testing at the provider's convenience aids in the differential diagnosis. Copyright © 2017 Elsevier Inc. All rights reserved.
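
    The high-anion-gap screen that drives these flowcharts rests on one arithmetic step. A sketch using the common formula (reference ranges vary by laboratory; the values below are illustrative, not diagnostic guidance):

```python
def anion_gap(na, cl, hco3):
    """Serum anion gap in mEq/L using the common formula
    AG = Na - (Cl + HCO3). Whether a given value is "high" depends on
    the laboratory's reference range.
    """
    return na - (cl + hco3)

ag = anion_gap(na=140, cl=100, hco3=10)   # -> 30, a high-anion-gap picture
```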

  9. TU-CD-BRA-04: Evaluation of An Atlas-Based Segmentation Method for Prostate and Peripheral Zone Regions On MRI

    International Nuclear Information System (INIS)

    Nelson, AS; Piper, J; Curry, K; Swallen, A; Padgett, K; Pollack, A; Stoyanova, RS

    2015-01-01

    Purpose: Prostate MRI plays an important role in diagnosis, biopsy guidance, and therapy planning for prostate cancer. Prostate MRI contours can be used to aid in image fusion for ultrasound biopsy guidance and delivery of radiation. Our goal in this study is to evaluate an automatic atlas-based segmentation method for generating prostate and peripheral zone (PZ) contours on MRI. Methods: T2-weighted MRIs were acquired on a 3T Discovery MR750 System (GE, Milwaukee). The Volumes of Interest (VOIs): prostate and PZ were outlined by an expert radiation oncologist and used to create an atlas library for atlas-based segmentation. The atlas-segmentation accuracy was evaluated using a leave-one-out analysis. The method involved automatically finding the atlas subject that best matched the test subject followed by a normalized intensity-based free-form deformable registration of the atlas subject to the test subject. The prostate and PZ contours were transformed to the test subject using the same deformation. For each test subject the three best matches were used and the final contour was combined using Majority Vote. The atlas-segmentation process was fully automatic. Dice similarity coefficients (DSC) and mean Hausdorff values were used for comparison. Results: VOIs contours were available for 28 subjects. For the prostate, the atlas-based segmentation method resulted in an average DSC of 0.88±0.08 and a mean Hausdorff distance of 1.1±0.9 mm. The number of patients (#) in DSC ranges are as follows: 0.60–0.69(1), 0.70–0.79(2), 0.80–0.89(13), >0.89(11). For the PZ, the average DSC was 0.72±0.17 and average Hausdorff of 0.9±0.9 mm. The number of patients (#) in DSC ranges are as follows: 0.89(1). Conclusion: The MRI atlas-based segmentation method achieved good results for both the whole prostate and PZ compared to expert defined VOIs. The technique is fast, fully automatic, and has the potential to provide significant time savings for prostate VOI
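
    The Dice similarity coefficient used above compares two segmentation masks as DSC = 2|A ∩ B| / (|A| + |B|). A minimal sketch on boolean voxel masks (the Hausdorff distance reported alongside it would need a separate surface-distance computation):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two boolean masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Two 4-voxel masks overlapping in 2 voxels -> DSC = 2*2/(4+4) = 0.5
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True
b = np.zeros((4, 4), bool); b[1:3, 2:4] = True
```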

  10. OARSI-OMERACT definition of relevant radiological progression in hip/knee osteoarthritis.

    Science.gov (United States)

    Ornetti, P; Brandt, K; Hellio-Le Graverand, M-P; Hochberg, M; Hunter, D J; Kloppenburg, M; Lane, N; Maillefert, J-F; Mazzuca, S A; Spector, T; Utard-Wlerick, G; Vignon, E; Dougados, M

    2009-07-01

    Joint space width (JSW), evaluated in millimeters on plain X-rays, is currently the optimal recognized technique to evaluate osteoarthritis (OA) structural progression. Data obtained can be presented at the group level (e.g., mean ± standard deviation of the changes). Such presentation makes it difficult to interpret the clinical relevance of the reported results. Therefore, a presentation at the individual level (e.g., % progressors) seems more attractive but requires determining a cut-off. Several methodologies have been proposed to define cut-offs in JSW: an arbitrarily chosen cut-off, a cut-off based on the validity to predict a relevant end-point such as the requirement of total articular replacement, or a cut-off based on the measurement error such as the smallest detectable difference (SDD). The objective of this OARSI-OMERACT initiative was to define a cut-off, evaluated in millimeters on plain X-rays, above which a change in JSW could be considered relevant in patients with hip and knee OA. The first step consisted of a systematic literature search performed using the Medline database up to July 2007 to obtain all manuscripts published between 1990 and 2007 reporting a cut-off value in JSW evaluated in millimeters at either the knee or hip level. The second step consisted of a consensus based on the best knowledge of the 11 experts with the support of the available evidence. Among the 506 articles selected by the search, 47 articles reported cut-offs of JSW in millimeters. There was broad heterogeneity in cut-off values, whatever the methodology or the OA localization considered (e.g., from 0.12 to 0.84 mm and from 0.22 to 0.78 mm for the knee (seven studies) and hip (seven studies), respectively, when considering the data obtained based on reliability).
    Based on the data extracted from the literature, the expert committee proposed a definition of relevant change in JSW based on plain X-rays, on an absolute change of JSW in millimeters and on the measurement error
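
    One measurement-error-based cut-off mentioned above, the smallest detectable difference, is commonly computed from test-retest readings as 1.96 times the standard deviation of the paired differences. A sketch under that one formulation (the reviewed studies used several variants), with illustrative JSW readings in millimeters:

```python
import math

def smallest_detectable_difference(pairs):
    """SDD from test-retest reading pairs: 1.96 x SD of the paired
    differences (one common formulation; published variants differ).
    Changes in JSW smaller than the SDD are indistinguishable from
    measurement error.
    """
    diffs = [a - b for a, b in pairs]
    mean = sum(diffs) / len(diffs)
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (len(diffs) - 1))
    return 1.96 * sd

# Illustrative repeat JSW readings (mm) of the same joints:
sdd = smallest_detectable_difference(
    [(3.1, 3.0), (2.9, 3.0), (3.2, 3.0), (2.8, 3.0), (3.0, 3.0)])
```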

  11. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave ''significantly'' better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs
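
    The propagation-of-error model underlying such comparisons combines independent uncertainty components in quadrature. A minimal sketch (the IAEA comparison's full model also carries isotopic-measurement terms omitted here):

```python
import math

def combined_relative_error(rel_random, rel_systematic):
    """Combine independent random and systematic relative standard
    uncertainties in quadrature - the basic propagation-of-error step
    behind instrument comparisons like HLNC vs. calorimeter.
    """
    return math.hypot(rel_random, rel_systematic)

# e.g. 0.3% random and 0.4% systematic -> 0.5% combined
total = combined_relative_error(0.003, 0.004)
```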

  12. EVALUATION OF ERRORS OF NUTRIENTS AND BIOACTIVE SUBSTANCES IN ANIMAL FEED PRODUCTION

    Directory of Open Access Journals (Sweden)

    .

    2015-01-01

    Full Text Available The definition of feed nutritional value involves the following: assessment of its chemical composition; estimation of the amount of digestible nutrients contained therein; and estimation of the amount of energy released by them. The chemical composition is estimated through the indices by which the diet is balanced. This seemingly simple requirement is not always fulfilled. In the practice of forage production there are cases when chemical analysis of the finished feed reveals a discrepancy between the estimated and actual nutritional value, with the same probability of deviation from the declared value in either direction. The databases of contemporary programs for compiling feed rations contain digestibility coefficients of nutrients for all types of raw materials and all kinds of animals, which allow feed rations to be balanced on the digestibility of nutrients and the energy value of each component. The paper proposes a mathematical tool for assessing the margin of variation of the content of biologically active substances in a batch relative to the premix recipe data. The causes of the variations considered are random errors of the methods of quantitative chemical analysis of biologically active substances (BAS) and random errors in the estimates of the masses of the carriers of active substances when they are dosed into the mixer.

  13. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which, in turn, leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by imaging of moving MLC apertures with a digital imager or by analysis of a MLC log file saved by the MLC controller. This leads to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx0 (Δx0 ∼ TOL). We quantify fluence errors for two cases: (i) Δx0 >> σ (unrestricted normal distribution) and (ii) Δx0 << σ (Δx0-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx0/ALPO, respectively, where
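
    The RLP error model, a zero-mean normal distribution truncated at ±Δx0, can be sampled directly. A sketch by rejection sampling (parameter values are illustrative, and the paper's fluence-error derivation itself is not reproduced):

```python
import numpy as np

def sample_rlp_errors(sigma, cutoff, n, seed=0):
    """Draw Random-Leaf Positional errors from a zero-mean normal
    distribution with standard deviation `sigma`, truncated at
    +/-`cutoff` (cutoff ~ TOL), using rejection sampling.
    """
    rng = np.random.default_rng(seed)
    out = np.empty(0)
    while out.size < n:
        draw = rng.normal(0.0, sigma, 2 * n)
        out = np.concatenate([out, draw[np.abs(draw) <= cutoff]])
    return out[:n]

# cutoff >> sigma approximates the unrestricted case (i);
# cutoff << sigma gives the cutoff-limited case (ii).
errors = sample_rlp_errors(sigma=0.5, cutoff=2.0, n=10000)
```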

  14. The Iatroref study: medical errors are associated with symptoms of depression in ICU staff but not burnout or safety culture.

    Science.gov (United States)

    Garrouste-Orgeas, Maité; Perrin, Marion; Soufir, Lilia; Vesin, Aurélien; Blot, François; Maxime, Virginie; Beuret, Pascal; Troché, Gilles; Klouche, Kada; Argaud, Laurent; Azoulay, Elie; Timsit, Jean-François

    2015-02-01

    Staff behaviours to optimise patient safety may be influenced by burnout, depression and strength of the safety culture. We evaluated whether burnout, symptoms of depression and safety culture affected the frequency of medical errors and adverse events (selected using Delphi techniques) in ICUs. Prospective, observational, multicentre (31 ICUs) study from August 2009 to December 2011. Burnout, depression symptoms and safety culture were evaluated using the Maslach Burnout Inventory (MBI), CES-Depression scale and Safety Attitudes Questionnaire, respectively. Of 1,988 staff members, 1,534 (77.2 %) participated. Frequencies of medical errors and adverse events were 804.5/1,000 and 167.4/1,000 patient-days, respectively. Burnout prevalence was 3 or 40 % depending on the definition (severe emotional exhaustion, depersonalisation and low personal accomplishment; or MBI score greater than -9). Depression symptoms were identified in 62/330 (18.8 %) physicians and 188/1,204 (15.6 %) nurses/nursing assistants. Median safety culture score was 60.7/100 [56.8-64.7] in physicians and 57.5/100 [52.4-61.9] in nurses/nursing assistants. Depression symptoms were an independent risk factor for medical errors. Burnout was not associated with medical errors. The safety culture score had a limited influence on medical errors. Other independent risk factors for medical errors or adverse events were related to ICU organisation (40 % of ICU staff off work on the previous day), staff (specific safety training) and patients (workload). One-on-one training of junior physicians during duties and existence of a hospital risk-management unit were associated with lower risks. The frequency of selected medical errors in ICUs was high and was increased when staff members had symptoms of depression.

  15. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10⁻⁶-10⁻²), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10⁻¹-10² (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10⁻²-10⁻¹, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  16. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    Science.gov (United States)

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors, but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to type and to individual and system contributory factors were made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found, the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found, the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate

  17. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  18. Air pollution in moderately polluted urban areas: How does the definition of “neighborhood” impact exposure assessment?

    International Nuclear Information System (INIS)

    Tenailleau, Quentin M.; Mauny, Frédéric; Joly, Daniel; François, Stéphane; Bernard, Nadine

    2015-01-01

    Environmental health studies commonly quantify subjects' pollution exposure in their neighborhood. How this neighborhood is defined can vary, however, leading to different approaches to quantification whose impacts on exposure levels remain unclear. We explore the relationship between neighborhood definition and exposure assessment. NO₂, benzene, PM₁₀ and PM₂.₅ exposure estimates were computed in the vicinity of 10,825 buildings using twelve exposure assessment techniques reflecting different definitions of “neighborhood”. At the city scale, its definition does not significantly influence exposure estimates. It does impact levels at the building scale, however: at least a quarter of the buildings' exposure estimates for a 400 m buffer differ from the estimated 50 m buffer value (±1.0 μg/m³ for NO₂, PM₁₀ and PM₂.₅; and ±0.05 μg/m³ for benzene). This variation is significantly related to the definition of neighborhood. It is vitally important for investigators to understand the impact of chosen assessment techniques on exposure estimates. - Highlights: • Residential building air pollution was calculated using 12 assessment techniques. • These techniques refer to common epidemiological definitions of neighborhood. • At the city scale, neighborhood definition does not impact exposure estimates. • At the building scale, neighborhood definition does impact exposure estimates. • The impact of neighborhood definition varies with physical/deprivation variables. - Ignoring the impact of the neighborhood's definition on exposure estimates could lead to exposure quantification errors that impact resulting health studies, health risk evaluation, and consequently the entire decision-making process.
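
The sensitivity of a building-scale estimate to the buffer radius chosen as the "neighborhood" can be sketched with a synthetic concentration field. The point-source field, grid spacing, and buffer radii below are all hypothetical, not the study's dispersion model:

```python
import math

def buffer_mean(field, cx, cy, radius):
    """Mean concentration over grid cells whose centres fall within the buffer."""
    vals = [v for (x, y), v in field.items()
            if math.hypot(x - cx, y - cy) <= radius]
    return sum(vals) / len(vals)

# Hypothetical field: a single roadside source at the origin, with concentration
# decaying with distance (coordinates in metres, 20 m grid, arbitrary units).
field = {(x, y): 100.0 / (1.0 + math.hypot(x, y))
         for x in range(-500, 501, 20) for y in range(-500, 501, 20)}

near = buffer_mean(field, 0, 0, 50)    # "neighborhood" = 50 m buffer
wide = buffer_mean(field, 0, 0, 400)   # "neighborhood" = 400 m buffer
```

For a building next to a local source, the 50 m buffer yields a markedly higher exposure estimate than the 400 m buffer, which is the building-scale discrepancy the abstract describes.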

  19. The error model and experiment of measuring angular position error based on laser collimation

    Science.gov (United States)

    Cai, Yangyang; Yang, Jing; Li, Jiakun; Feng, Qibo

    2018-01-01

    The rotary axis is the reference component of rotational motion. Among the six degree-of-freedom (DOF) geometric errors of a rotary axis, angular position error is the most critical factor impairing machining precision. In this paper, a method for measuring the angular position error of a rotary axis based on laser collimation is thoroughly researched: the error model is established, and 360° full-range measurement is realized by using a high-precision servo turntable. The change in spatial attitude of each moving part is described accurately by 3×3 transformation matrices, and the influences of various factors on the measurement results are analyzed in detail. Experimental results show that the measurement method can achieve high measurement accuracy and a large measurement range.
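
The 3×3 transformation-matrix bookkeeping can be sketched generically: given a commanded and an actual attitude, the residual rotation yields the angular position error. This is an illustration of the matrix formalism, not the paper's full error model:

```python
import math

def rot_z(theta):
    """3x3 rotation matrix for an angle theta (radians) about the z axis."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def angular_error(commanded, actual):
    """Residual rotation R_err = R_actual @ R_commanded^T; return its
    angle about z (radians). A rotation matrix's transpose is its inverse."""
    inv = [[commanded[j][i] for j in range(3)] for i in range(3)]
    err = mat_mul(actual, inv)
    return math.atan2(err[1][0], err[0][0])
```

For example, commanding 30° while the turntable actually reaches 30.01° gives an angular position error of 0.01°.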

  20. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  1. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). Diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of the AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct the AUC for measurement error, most of which require the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct the AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.
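
The attenuation of the AUC by measurement error is easy to reproduce with a small simulation using the empirical (Mann-Whitney) AUC. All distribution parameters below are assumptions for illustration; this is not the authors' correction method:

```python
import random

def auc(cases, controls):
    """Empirical AUC = P(case biomarker > control biomarker); ties count 1/2."""
    wins = sum((c > k) + 0.5 * (c == k) for c in cases for k in controls)
    return wins / (len(cases) * len(controls))

rng = random.Random(1)
# True biomarker: cases ~ N(1, 1), controls ~ N(0, 1).
true_cases = [rng.gauss(1.0, 1.0) for _ in range(500)]
true_ctrls = [rng.gauss(0.0, 1.0) for _ in range(500)]
# Observed biomarker: true value plus independent N(0, 1) measurement error.
noisy_cases = [x + rng.gauss(0.0, 1.0) for x in true_cases]
noisy_ctrls = [x + rng.gauss(0.0, 1.0) for x in true_ctrls]

auc_true = auc(true_cases, true_ctrls)
auc_noisy = auc(noisy_cases, noisy_ctrls)   # attenuated toward 0.5
```

The measurement error pulls the observed AUC toward 0.5, understating the biomarker's true discrimination, which is the bias the proposed correction targets.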

  2. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and the social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  3. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency and communication issues between radiologists and physicians are also discussed.

  4. Seeing your error alters my pointing: observing systematic pointing errors induces sensori-motor after-effects.

    Directory of Open Access Journals (Sweden)

    Roberta Ronchi

    Full Text Available During the procedure of prism adaptation, subjects execute pointing movements to visual targets under a lateral optical displacement: as a consequence of the discrepancy between visual and proprioceptive inputs, their visuo-motor activity is characterized by pointing errors. The perception of such final errors triggers error-correction processes that eventually result in sensori-motor compensation, opposite to the prismatic displacement (i.e., after-effects). Here we tested whether the mere observation of erroneous pointing movements, similar to those executed during prism adaptation, is sufficient to produce adaptation-like after-effects. Neurotypical participants observed, from a first-person perspective, the examiner's arm making incorrect pointing movements that systematically overshot the visual target location to the right, thus simulating a rightward optical deviation. Three classical after-effect measures (proprioceptive, visual and visual-proprioceptive shift) were recorded before and after first-person-perspective observation of pointing errors. Results showed that mere visual exposure to an arm that systematically points to the right of a target (i.e., without error correction) produces a leftward after-effect, which mostly affects the observer's proprioceptive estimation of her body midline. In addition, being exposed to such a constant visual error induced in the observer the illusion of "feeling" the seen movement. These findings indicate that it is possible to elicit sensori-motor after-effects by mere observation of movement errors.

  5. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The procedure for finding the correction factors with COMFORT-PLUS has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  6. Automated River Reach Definition Strategies: Applications for the Surface Water and Ocean Topography Mission

    Science.gov (United States)

    Frasson, Renato Prata de Moraes; Wei, Rui; Durand, Michael; Minear, J. Toby; Domeneghetti, Alessio; Schumann, Guy; Williams, Brent A.; Rodriguez, Ernesto; Picamilh, Christophe; Lion, Christine; Pavelsky, Tamlin; Garambois, Pierre-André

    2017-10-01

    The upcoming Surface Water and Ocean Topography (SWOT) mission will measure water surface heights and widths for rivers wider than 100 m. At its native resolution, SWOT height errors are expected to be on the order of meters, which prevent the calculation of water surface slopes and the use of slope-dependent discharge equations. To mitigate height and width errors, the high-resolution measurements will be grouped into reaches (~5 to 15 km), where slope and discharge are estimated. We describe three automated river segmentation strategies for defining optimum reaches for discharge estimation: (1) arbitrary lengths, (2) identification of hydraulic controls, and (3) sinuosity. We test our methodologies on 9 and 14 simulated SWOT overpasses over the Sacramento and the Po Rivers, respectively, which we compare against hydraulic models of each river. Our results show that generally, height, width, and slope errors decrease with increasing reach length. However, the hydraulic controls and the sinuosity methods led to better slopes and often height errors that were either smaller or comparable to those of arbitrary reaches of compatible sizes. Estimated discharge errors caused by the propagation of height, width, and slope errors through the discharge equation were often smaller for sinuosity (on average 8.5% for the Sacramento and 6.9% for the Po) and hydraulic control (Sacramento: 7.3% and Po: 5.9%) reaches than for arbitrary reaches of comparable lengths (Sacramento: 8.6% and Po: 7.8%). This analysis suggests that reach definition methods that preserve the hydraulic properties of the river network may lead to better discharge estimates.
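
Why slope errors shrink with reach length can be seen from the variance of a least-squares slope fitted through evenly spaced, independently noisy heights. This is a back-of-the-envelope sketch; the 1 m height noise and 200 m posting are assumptions, not the SWOT error budget:

```python
import math

def slope_std(sigma_h, spacing, reach_length):
    """Standard deviation of a least-squares slope through n evenly spaced
    heights with independent errors sigma_h. For x_i = i * spacing,
    var(slope) = sigma_h^2 / sum_i (x_i - xbar)^2, and the sum equals
    spacing^2 * n * (n^2 - 1) / 12."""
    n = max(2, int(reach_length / spacing))
    return sigma_h * math.sqrt(12.0 / (n * (n * n - 1))) / spacing

# Metre-level height noise averaged at a 200 m posting:
short_reach = slope_std(1.0, 200.0, 5000.0)    # ~5 km reach
long_reach = slope_std(1.0, 200.0, 15000.0)    # ~15 km reach
```

Tripling the reach length roughly cuts the slope uncertainty by a factor of 3^(3/2) ≈ 5, which is the averaging benefit that motivates grouping SWOT pixels into reaches; the hydraulic-control and sinuosity methods then try to retain that benefit without mixing distinct hydraulic regimes.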

  7. Neurochemical enhancement of conscious error awareness.

    Science.gov (United States)

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  8. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  9. Common Errors in Ecological Data Sharing

    Directory of Open Access Journals (Sweden)

    Robert B. Cook

    2013-04-01

    Full Text Available Objectives: (1) to identify common errors in data organization and metadata completeness that would preclude a “reader” from being able to interpret and re-use the data for a new purpose; and (2) to develop a set of best practices derived from these common errors that would guide researchers in creating more usable data products that could be readily shared, interpreted, and used. Methods: We used directed qualitative content analysis to assess and categorize data and metadata errors identified by peer reviewers of data papers published in the Ecological Society of America’s (ESA) Ecological Archives. Descriptive statistics provided the relative frequency of the errors identified during the peer review process. Results: There were seven overarching error categories: Collection & Organization, Assure, Description, Preserve, Discover, Integrate, and Analyze/Visualize. These categories represent errors researchers regularly make at each stage of the Data Life Cycle. Collection & Organization and Description errors were some of the most common errors, both of which occurred in over 90% of the papers. Conclusions: Publishing data for sharing and reuse is error prone, and each stage of the Data Life Cycle presents opportunities for mistakes. The most common errors occurred when the researcher did not provide adequate metadata to enable others to interpret and potentially re-use the data. Fortunately, there are ways to minimize these mistakes through carefully recording all details about study context, data collection, QA/QC, and analytical procedures from the beginning of a research project and then including this descriptive information in the metadata.

  10. NLO error propagation exercise: statistical results

    International Nuclear Information System (INIS)

    Pack, D.J.; Downing, D.J.

    1985-09-01

    Error propagation is the extrapolation and cumulation of uncertainty (variance) about total amounts of special nuclear material, for example, uranium or ²³⁵U, that are present in a defined location at a given time. The uncertainty results from the inevitable inexactness of individual measurements of weight, uranium concentration, ²³⁵U enrichment, etc. The extrapolated and cumulated uncertainty leads directly to quantified limits of error on inventory differences (LEIDs) for such material. The NLO error propagation exercise was planned as a field demonstration of the utilization of statistical error propagation methodology at the Feed Materials Production Center in Fernald, Ohio from April 1 to July 1, 1983 in a single material balance area formed specially for the exercise. Major elements of the error propagation methodology were: variance approximation by Taylor series expansion; variance cumulation over uncorrelated primary error sources as suggested by Jaech; random-effects ANOVA model estimation of variance effects (systematic error); provision for inclusion of process variance in addition to measurement variance; and exclusion of static material. The methodology was applied to material balance area transactions from the indicated time period through a FORTRAN computer code developed specifically for this purpose on the NLO HP-3000 computer. This paper contains a complete description of the error propagation methodology and a full summary of the numerical results of applying the methodology in the field demonstration. The error propagation LEIDs did encompass the actual uranium and ²³⁵U inventory differences. Further, one can see that error propagation actually provides guidance for reducing inventory differences and LEIDs in future time periods.
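
The first methodology element, variance approximation by Taylor series expansion, can be sketched for the common case of a material quantity computed as a product of uncorrelated measurements (all numerical values below are hypothetical, not exercise data):

```python
def propagated_variance(values, variances):
    """First-order Taylor (delta-method) variance of a product
    f = x1 * x2 * ... * xn of uncorrelated measurements:
    var(f) ~= f^2 * sum_i(var_i / x_i^2)."""
    f = 1.0
    for x in values:
        f *= x
    rel_var = sum(var / (x * x) for x, var in zip(values, variances))
    return f * f * rel_var

# Hypothetical drum: net weight 100 kg, uranium concentration 0.85,
# 235U enrichment 0.007, with assumed measurement variances for each.
values = (100.0, 0.85, 0.007)
variances = (0.04, 1e-4, 1e-8)
v = propagated_variance(values, variances)   # variance of the 235U mass
```

Summing such per-item variances over all transactions in the material balance area, then taking a confidence multiple of the square root, is what yields the quantified LEIDs described above.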

  11. Accounting for optical errors in microtensiometry.

    Science.gov (United States)

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup

  12. The account of sagging of wires at definition of specific potential factors of air High-Voltage Power Transmission Lines

    Directory of Open Access Journals (Sweden)

    Suslov V.M.

    2005-12-01

    Full Text Available A way of accounting for the sagging of wires in the definition of the specific potential coefficients of overhead high-voltage power transmission lines is shown that is approximate, but more exact than is usually accepted. The technique for deriving the analytical expressions is given. To enable comparison, the traditional expressions for the specific potential coefficients are also given, and the connection between the proposed and the traditional analytical expressions is shown. The proposed analytical expressions are easy to program on a personal computer of any class and, moreover, allow an estimate of the error of the traditional expressions by means of parallel calculation of the specific potential coefficients by both methods.

  13. The effect of volume-of-interest misregistration on quantitative planar activity and dose estimation

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B

    2010-01-01

    In targeted radionuclide therapy (TRT), dose estimation is essential for treatment planning and tumor dose response studies. Dose estimates are typically based on a time series of whole-body conjugate view planar or SPECT scans of the patient acquired after administration of a planning dose. Quantifying the activity in the organs from these studies is an essential part of dose estimation. The quantitative planar (QPlanar) processing method involves accurate compensation for image degrading factors and correction for organ and background overlap via the combination of computational models of the image formation process and 3D volumes of interest (VOIs) defining the organs to be quantified. When the organ VOIs are accurately defined, the method intrinsically compensates for attenuation, scatter and partial volume effects, as well as overlap with other organs and the background. However, alignment between the 3D organ VOIs used in QPlanar processing and the true organ projections in the planar images is required. The aim of this research was to study the effects of VOI misregistration on the accuracy and precision of organ activity estimates obtained using the QPlanar method. In this work, we modeled the degree of residual misregistration that would be expected after an automated registration procedure by randomly misaligning the 3D SPECT/CT images, from which the VOI information was derived, and the planar images. Mutual information-based image registration was used to align the realistic simulated 3D SPECT images with the 2D planar images. The residual image misregistration was used to simulate realistic levels of misregistration and allow investigation of the effects of misregistration on the accuracy and precision of the QPlanar method. We observed that accurate registration is especially important for small organs or ones with low activity concentrations compared to neighboring organs. In addition, residual misregistration gave rise to a loss of precision

  14. Clinical evaluation of respiration-induced attenuation uncertainties in pulmonary 3D PET/CT.

    Science.gov (United States)

    Kruis, Matthijs F; van de Kamer, Jeroen B; Vogel, Wouter V; Belderbos, José Sa; Sonke, Jan-Jakob; van Herk, Marcel

    2015-12-01

In contemporary positron emission tomography (PET)/computed tomography (CT) scanners, PET attenuation correction is performed by means of a CT-based attenuation map. However, respiratory motion can induce offsets between the PET and CT data, and studies have demonstrated that these offsets can cause errors in quantitative PET measures. The purpose of this study is to quantify the effects of respiration-induced CT differences on the attenuation correction of pulmonary 18F-fluorodeoxyglucose (FDG) 3D PET/CT in a patient population and to investigate contributing factors. For 32 lung cancer patients, 3D-CT, 4D-PET and 4D-CT data were acquired. The 4D FDG PET data were attenuation corrected (AC) using a free-breathing 3D-CT (3D-AC), the end-inspiration CT (EI-AC), the end-expiration CT (EE-AC) or phase-by-phase (P-AC). After reconstruction and AC, the 4D-PET data were averaged. In the 4D-averaged data, we measured the maximum standardised uptake value (SUVmax) in the tumour, the mean SUV (SUVmean) in a lung volume of interest (VOI) and the SUVmean in a muscle VOI. On the 4D-CT, we measured the lung volume differences and CT number changes between inhale and exhale in the lung VOI. Compared to P-AC, we found 2.3% (range -9.7% to 1.2%) lower tumour SUVmax with EI-AC and 2.0% (range -0.9% to 9.5%) higher SUVmax with EE-AC. No differences in the muscle SUV were found. The use of 3D-AC led to respiration-induced SUVmax differences of up to 20% compared to the use of P-AC. SUVmean differences in the lung VOI between EI-AC and EE-AC correlated with average CT differences in this region (ρ = 0.83). SUVmax differences in the tumour correlated with the volume changes of the lungs (ρ = -0.55) and the motion amplitude of the tumour (ρ = 0.53), both as measured on the 4D-CT. Respiration-induced CT variations in clinical data can in extreme cases lead to SUV effects larger than 10% on PET attenuation correction. These differences were case specific and correlated with differences in CT number
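As a hedged illustration of the SUV quantities discussed in this record, the sketch below computes SUVmax and SUVmean inside a binary VOI mask. The function name, array shapes, dose and weight values are invented for the example and are not taken from the study.

```python
import numpy as np

def suv_metrics(pet, voi_mask, injected_dose_bq, body_weight_g):
    """Compute SUVmax and SUVmean inside a binary VOI mask.

    pet: 3D array of activity concentration (Bq/mL).
    SUV = concentration / (injected dose / body weight), by the usual convention.
    """
    suv = pet / (injected_dose_bq / body_weight_g)
    voxels = suv[voi_mask]
    return float(voxels.max()), float(voxels.mean())

# Toy volume: uniform background with a single hot voxel inside the VOI.
pet = np.full((8, 8, 8), 1000.0)
pet[4, 4, 4] = 5000.0
mask = np.zeros_like(pet, dtype=bool)
mask[3:6, 3:6, 3:6] = True

suv_max, suv_mean = suv_metrics(pet, mask, injected_dose_bq=2e8, body_weight_g=7e4)
```

Because SUVmax takes the single hottest voxel while SUVmean averages over the whole VOI, a misplaced or mis-sized mask affects the two metrics differently, which is one reason attenuation-map offsets show up unevenly in them.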

  15. Error budget calculations in laboratory medicine: linking the concepts of biological variation and allowable medical errors

    NARCIS (Netherlands)

    Stroobants, A. K.; Goldschmidt, H. M. J.; Plebani, M.

    2003-01-01

Background: Random, systematic and sporadic errors, which unfortunately are not uncommon in laboratory medicine, can have a considerable impact on the well-being of patients. Although somewhat difficult to attain, our main goal should be to prevent all possible errors. A good insight on error-prone

  16. [Immortal time bias in pharmacoepidemiological studies: definition, solutions and examples].

    Science.gov (United States)

    Faillie, Jean-Luc; Suissa, Samy

    2015-01-01

Among the observational studies of drug effects in chronic diseases, many have found effects that were exaggerated or wrong. Among the biases responsible for these errors, immortal time bias, which concerns the definition of exposure and exposure periods, is particularly important, as it usually tends to wrongly attribute a significant benefit to the study drug (or to exaggerate a real benefit). In this article, we define the mechanism of immortal time bias, present possible solutions and illustrate its consequences through examples of pharmacoepidemiological studies of drug effects. © 2014 Société Française de Pharmacologie et de Thérapeutique.

  17. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  18. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery, to minimize clinical errors. This will lead to an increase in fixed costs, especially in the shorter time frame. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospitals' profitability in the long run and also ensure patient safety.

  19. Towards automatic global error control: Computable weak error expansion for the tau-leap method

    KAUST Repository

    Karlsson, Peer Jesper; Tempone, Raul

    2011-01-01

This work develops novel error expansions with computable leading order terms for the global weak error in the tau-leap discretization of pure jump processes arising in kinetic Monte Carlo models. Accurate computable a posteriori error approximations are the basis for adaptive algorithms, a fundamental tool for numerical simulation of both deterministic and stochastic dynamical systems. These pure jump processes are simulated either by the tau-leap method, or by exact simulation, also referred to as dynamic Monte Carlo, the Gillespie Algorithm or the Stochastic Simulation Algorithm. Two types of estimates are presented: an a priori estimate for the relative error that gives a comparison between the work for the two methods depending on the propensity regime, and an a posteriori estimate with computable leading order term. © de Gruyter 2011.
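The two simulation schemes this abstract contrasts can be sketched for a single decay reaction A → B with propensity c·x. The rate constant, step size and repetition counts below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
c = 1.0          # propensity constant for A -> B
x0, T = 1000, 1.0

def ssa(x, t_end):
    """Exact simulation (Gillespie / SSA): sample each reaction time exactly."""
    t = 0.0
    while x > 0:
        t += rng.exponential(1.0 / (c * x))
        if t > t_end:
            break
        x -= 1
    return x

def tau_leap(x, t_end, tau=0.01):
    """Tau-leap: fire a Poisson number of reactions over each fixed step tau."""
    t = 0.0
    while t < t_end and x > 0:
        x = max(x - rng.poisson(c * x * tau), 0)
        t += tau
    return x

exact = np.mean([ssa(x0, T) for _ in range(200)])
leap = np.mean([tau_leap(x0, T) for _ in range(200)])
# Both sample means should land near the analytic mean x0 * exp(-c*T) ≈ 367.9;
# the gap between them is (an estimate of) the weak error the paper expands.
```

The tau-leap bias shrinks as tau decreases, which is exactly the trade-off a computable weak-error expansion lets an adaptive algorithm control.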

  20. Repeated speech errors: evidence for learning.

    Science.gov (United States)

    Humphreys, Karin R; Menzies, Heather; Lake, Johanna K

    2010-11-01

    Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological speech error in the study phase of the experiment (e.g. saying "beg pet" in place of "peg bet") they were over four times as likely to make an error on that same item several minutes later at test. A pseudo-error condition demonstrated that the effect is not simply due to a propensity for speakers to repeat phonological forms, regardless of whether or not they have been made in error. That is, saying "beg pet" correctly at study did not induce speakers to say "beg pet" in error instead of "peg bet" at test. Instead, the effect appeared to be due to learning of the error pathway. Experiment 2 replicated this finding, but also showed that after 48 h, errors made at study were no longer more likely to reoccur. As well as providing constraints on the longevity of the effect, this provides strong evidence that the error reoccurrences observed are not due to item-specific difficulty that leads individual speakers to make habitual mistakes on certain items. Experiment 3 showed that the diminishment of the effect 48 h later is not due to specific extra practice at the task. We discuss how these results fit in with a larger view of language as a dynamic system that is constantly adapting in response to experience. Copyright © 2010 Elsevier B.V. All rights reserved.

  1. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

Full Text Available The main purpose of the present study is to identify and represent TFL learners' writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students' writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors

  2. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

Full Text Available Abstract Medication errors are a common cause of patient morbidity and mortality, and they also add a financial burden to the institution. Though the impact varies from no harm to serious adverse effects including death, the issue needs attention on a priority basis, since medication errors are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors.

  3. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  4. Different grades MEMS accelerometers error characteristics

    Science.gov (United States)

    Pachwicewicz, M.; Weremczuk, J.

    2017-08-01

    The paper presents calibration effects of two different MEMS accelerometers of different price and quality grades and discusses different accelerometers errors types. The calibration for error determining is provided by reference centrifugal measurements. The design and measurement errors of the centrifuge are discussed as well. It is shown that error characteristics of the sensors are very different and it is not possible to use simple calibration methods presented in the literature in both cases.
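The centrifugal reference measurement this record mentions rests on a = ω²r. A minimal sketch of fitting a sensor's scale factor and bias against such references might look like the following; the arm radius, spin rates and error values are invented for illustration, not the paper's data.

```python
import numpy as np

# Centrifugal reference accelerations: a = omega^2 * r
r = 0.5                                    # arm radius, metres (illustrative)
omega = np.array([5.0, 10.0, 15.0, 20.0])  # spin rates, rad/s (illustrative)
a_ref = omega**2 * r                       # reference accelerations, m/s^2

# Simulated MEMS readings with a scale-factor error and a fixed bias
true_scale, true_bias = 1.02, 0.3
readings = true_scale * a_ref + true_bias

# Least-squares calibration: solve readings ≈ scale * a_ref + bias
A = np.column_stack([a_ref, np.ones_like(a_ref)])
scale, bias = np.linalg.lstsq(A, readings, rcond=None)[0]
```

A higher-grade sensor would be well described by this two-parameter model, while a cheaper one may show nonlinearity and noise that such a simple calibration cannot capture, which is the kind of difference in error characteristics the paper reports.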

  5. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  6. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  7. Errors and Understanding: The Effects of Error-Management Training on Creative Problem-Solving

    Science.gov (United States)

    Robledo, Issac C.; Hester, Kimberly S.; Peterson, David R.; Barrett, Jamie D.; Day, Eric A.; Hougen, Dean P.; Mumford, Michael D.

    2012-01-01

    People make errors in their creative problem-solving efforts. The intent of this article was to assess whether error-management training would improve performance on creative problem-solving tasks. Undergraduates were asked to solve an educational leadership problem known to call for creative thought where problem solutions were scored for…

  8. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

This article offers a defense of libertarianism against two accusations according to which it commits a category mistake. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that looks to physicalist indeterminism for the basis of the possibili...

  9. SPACE-BORNE LASER ALTIMETER GEOLOCATION ERROR ANALYSIS

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2018-05-01

    Full Text Available This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot are analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to satisfy the accuracy of the laser control point, a design index for each error source is put forward.
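A first-order sketch of how the listed error sources might propagate to footprint geolocation error, under a deliberately simplified near-nadir, flat-ground model; the orbit height and error magnitudes below are illustrative assumptions, not ICESAT's design values.

```python
import numpy as np

# Simplified near-nadir geolocation sensitivity for a space-borne altimeter.
H = 600e3             # orbit height, m (illustrative)
sigma_point = 1.5e-5  # combined attitude/pointing error, rad (~3 arcsec)
sigma_range = 0.1     # range measurement error, m
sigma_pos = 0.05      # platform position error per horizontal axis, m

# Horizontal footprint error: a pointing error d_theta rotates the line of
# sight and displaces the spot by roughly H * d_theta; the platform position
# error adds directly. Independent errors combine in quadrature.
sigma_horizontal = np.hypot(H * sigma_point, sigma_pos)

# Vertical error: the range error dominates near nadir, since the pointing
# contribution enters only at second order (~ H * theta^2 / 2).
sigma_vertical = sigma_range
```

The H·dθ term explains why, in such analyses, pointing knowledge tends to dominate horizontal geolocation accuracy while range accuracy dominates the vertical component, so the two error budgets are designed against different sources.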

  10. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years of 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after a few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  11. Definition of Videogames

    Directory of Open Access Journals (Sweden)

    Grant Tavinor

    2008-01-01

    Full Text Available Can videogames be defined? The new field of games studies has generated three somewhat competing models of videogaming that characterize games as new forms of gaming, narratives, and interactive fictions. When treated as necessary and sufficient condition definitions, however, each of the three approaches fails to pick out all and only videogames. In this paper I argue that looking more closely at the formal qualities of definition helps to set out the range of definitional options open to the games theorist. A disjunctive definition of videogaming seems the most appropriate of these definitional options. The disjunctive definition I offer here is motivated by the observation that there is more than one characteristic way of being a videogame.

  12. Improving Type Error Messages in OCaml

    OpenAIRE

    Charguéraud , Arthur

    2015-01-01

Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently-precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise ...

  13. Spectrum of diagnostic errors in radiology

    OpenAIRE

    Pinto, Antonio; Brunese, Luca

    2010-01-01

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff’s complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors ...

  14. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes.
There was some relationship to
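The error rate quoted in this record can be checked directly from the two counts it reports:

```python
# Reproducing the quoted treatment-delivery error rate:
# 170 affected fields out of 93,332 delivered fields.
affected_fields = 170
total_fields = 93_332

error_rate = affected_fields / total_fields  # fraction of fields affected
```

The fraction comes out at roughly 0.0018, matching the 0.18% figure in the abstract.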

  15. Electronic error-reporting systems: a case study into the impact on nurse reporting of medical errors.

    Science.gov (United States)

    Lederman, Reeva; Dreyfus, Suelette; Matchan, Jessica; Knott, Jonathan C; Milton, Simon K

    2013-01-01

Underreporting of errors in hospitals persists despite the claims of technology companies that electronic systems will facilitate reporting. This study builds on previous analyses to examine error reporting by nurses in hospitals using electronic media. This research asks whether the electronic media create additional barriers to error reporting, and, if so, what practical steps all hospitals can take to reduce these barriers. This is a mixed-method case study of nurses' use of an error reporting system, RiskMan, in two hospitals. The case study involved one large private hospital and one large public hospital in Victoria, Australia, both of which use the RiskMan medical error reporting system. Information technology-based error reporting systems have unique access problems and time demands and can encourage nurses to develop alternative reporting mechanisms. This research focuses on nurses and raises important findings for hospitals using such systems or considering installation. This article suggests organizational and technical responses that could reduce some of the identified barriers. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.

  16. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure. © 2009 American Statistical Association.

  17. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. Research has estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million,1 and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  18. Human errors, countermeasures for their prevention and evaluation

    International Nuclear Information System (INIS)

    Kohda, Takehisa; Inoue, Koichi

    1992-01-01

Accidents caused by human errors have continued to occur, as in the recent large accidents at TMI and Chernobyl. The proportion of accidents caused by human error is unexpectedly high; the reliability and safety of hardware will continue to improve, but a comparable improvement in human reliability cannot be expected. Human errors arise from the difference between the function required of people and the function they actually accomplish, and the results exert adverse effects on systems. Human errors are classified into design errors, manufacture errors, operation errors, maintenance errors, inspection errors and general handling errors. In terms of behavior, human errors are classified into forgetting to act, failing to act, doing what must not be done, acting out of sequence and acting at an improper time. The factors in human error occurrence are circumstantial factors, personal factors and stress factors. As methods of analyzing and evaluating human errors, systems engineering methods such as probabilistic risk assessment are used, along with the technique for human error rate prediction, the method for human cognitive reliability, the confusion matrix and SLIM-MAUD. (K.I.)

  19. On the analysis of the influence of random observation errors in water environment evaluation

    Institute of Scientific and Technical Information of China (English)

    田鹏伟

    2015-01-01

The paper explains the causes of random observation errors in water environment measurement, analyzes the definition, features and regularity of random observation errors, and discusses the precautions to take with random observation errors in water environment evaluation, so as to provide a reference for the treatment of water pollution.

  20. Interpreting the change detection error matrix

    NARCIS (Netherlands)

    Oort, van P.A.J.

    2007-01-01

Two different matrices are commonly reported in assessment of change detection accuracy: (1) single date error matrices and (2) binary change/no change error matrices. The third, less common form of reporting is the transition error matrix. This paper discusses the relation between these matrices.
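A toy sketch contrasting the first two matrix types named above: a single-date error matrix cross-tabulates mapped against reference classes at one date, while the binary change/no-change matrix cross-tabulates whether each pixel changed between the two dates. All labels and map values below are invented for illustration.

```python
import numpy as np

# Toy classified maps (2 classes) and reference labels at two dates
ref_t1 = np.array([0, 0, 1, 1, 1, 0])
map_t1 = np.array([0, 1, 1, 1, 0, 0])
ref_t2 = np.array([0, 1, 1, 0, 1, 0])
map_t2 = np.array([0, 1, 1, 1, 1, 0])

def error_matrix(mapped, reference, n_classes=2):
    """Cross-tabulation: rows = mapped class, columns = reference class."""
    m = np.zeros((n_classes, n_classes), dtype=int)
    for p, r in zip(mapped, reference):
        m[p, r] += 1
    return m

# (1) single-date error matrix at time 1
single_date_t1 = error_matrix(map_t1, ref_t1)

# (2) binary change/no-change matrix: did each pixel's class change?
changed_map = (map_t1 != map_t2).astype(int)
changed_ref = (ref_t1 != ref_t2).astype(int)
change_matrix = error_matrix(changed_map, changed_ref)
```

A transition error matrix would instead cross-tabulate the full (class at t1, class at t2) pairs, which is why it carries more information than either of the two common forms.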

  1. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

The International Target Values (ITV) give random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, henceforth called error, needs to be evaluated periodically and checked against the ITV for consistency, as the error varies with measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed with a focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
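One common way to separate random from systematic components is to estimate them from paired operator/inspector measurements of the same items; the pairing model, noise levels and sample size below are illustrative assumptions, not the author's method.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated paired measurements of the same 50 items. The operator carries a
# shared systematic offset plus per-measurement random noise; the inspector
# has random noise only. (Illustrative model and values.)
true = rng.uniform(95, 105, size=50)
sys_offset = 0.5
operator = true + sys_offset + rng.normal(0, 0.2, 50)
inspector = true + rng.normal(0, 0.2, 50)

d = operator - inspector
systematic = d.mean()              # estimates the shared systematic offset
random_var = d.var(ddof=1) / 2.0   # per-measurement random variance, assuming
                                   # equal random variance on both sides
```

Dividing the paired-difference variance by two assigns the random scatter equally to the two measurement systems; the estimated components can then be compared against the corresponding ITV entries.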

  2. The index of abdominal obesity as a marker of disorder of blood serum triglicerides fatty-acid spectrum in patients with diabetes mellitus type 2

    Directory of Open Access Journals (Sweden)

    Наталія Миколаївна Кушнарьова

    2015-12-01

Full Text Available Aim of research. To determine whether the visceral obesity index (VOI) can be used to diagnose lipid metabolism disorders in patients with diabetes mellitus (DM) type 2, based on a study of adipose tissue and of the fatty acid content of blood serum triglycerides. Materials and methods. Body mass, height, waist circumference and blood serum lipid fractions (triglycerides, HDL) were determined, and the body mass index and VOI were calculated, in 19 patients with DM type 2 older than 50 years. The content of fatty acids (palmitic C16:0, stearic C18:0, oleic C18:1 and linoleic C18:2) in triglycerides was determined by gas-liquid chromatography. Results. The examined patients were separated into 3 groups according to VOI value. Higher VOI values in patients with DM type 2 (upper tertile) were associated with the most unfavorable changes in the fatty-acid spectrum of the triglyceride fraction of the blood serum: an increase in the saturated palmitic and stearic fatty acid fractions and a decrease in the unsaturated oleic and linoleic acid content. Correlations were revealed between VOI and the levels of saturated and unsaturated triglyceride fatty acids. Conclusion. The calculation of VOI in patients with DM type 2 can be a useful indicator of lipid metabolism disorder, especially deviations of the triglyceride fatty-acid spectrum
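The abstract does not spell out how its VOI is computed. As a hedged stand-in, the widely used visceral adiposity index of Amato et al. (2010) combines the same inputs the study measured (waist circumference, BMI, triglycerides and HDL); it is shown here only as a plausible example of such an index, not as the authors' formula.

```python
def visceral_adiposity_index(waist_cm, bmi, tg_mmol, hdl_mmol, male=True):
    """Visceral adiposity index (Amato et al. 2010); TG and HDL in mmol/L.

    Offered as an illustrative stand-in: the cited study does not define
    its VOI, so this may not be the index it actually used.
    """
    if male:
        return (waist_cm / (39.68 + 1.88 * bmi)) * (tg_mmol / 1.03) * (1.31 / hdl_mmol)
    return (waist_cm / (36.58 + 1.89 * bmi)) * (tg_mmol / 0.81) * (1.52 / hdl_mmol)

# Hypothetical patient values, chosen only to exercise the function
vai = visceral_adiposity_index(waist_cm=102, bmi=31.0, tg_mmol=2.1, hdl_mmol=1.0, male=True)
```

Higher values indicate a less favorable fat distribution and lipid profile, which is consistent with the tertile-based comparison the abstract describes.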

  3. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for the development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed logically error-free software which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES: a user can write in English and the system converts it to computer languages. It is employed by several large corporations.

  4. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,

  5. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    on basis of reliability profiles for bridges without human errors are extended to include bridges with human errors. The first rehabilitation distributions for bridges without and with human errors are combined into a joint first rehabilitation distribution. The methodology presented is illustrated...... for reinforced concrete bridges....

  6. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation now constitute a much larger field, and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
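
In the spirit of the worked examples the review favors, the simplest code of all, the three-qubit bit-flip repetition code, can be simulated directly on state vectors. This is a generic background sketch, not an example taken from the article; the amplitudes and indexing conventions below are my own choices.

```python
import numpy as np

# Three-qubit bit-flip code: |0> -> |000>, |1> -> |111>.
# Basis index b encodes the qubit values as bits (qubit 0 = most significant).
a, b = 0.6, 0.8                        # normalized logical amplitudes
psi = np.zeros(8, dtype=complex)
psi[0b000], psi[0b111] = a, b          # encoded logical state

def flip(state, qubit):
    """Apply Pauli-X on one qubit by permuting basis amplitudes."""
    out = np.empty_like(state)
    for idx in range(8):
        out[idx ^ (1 << (2 - qubit))] = state[idx]
    return out

def syndrome(state):
    """Parities Z0Z1 and Z1Z2, read off a basis state in the support."""
    idx = int(np.argmax(np.abs(state)))
    bits = [(idx >> (2 - q)) & 1 for q in range(3)]
    return (bits[0] ^ bits[1], bits[1] ^ bits[2])

corrupted = flip(psi, 1)               # a single bit-flip error on qubit 1
s = syndrome(corrupted)                # (1, 1) pinpoints the middle qubit
recovery = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}[s]
recovered = flip(corrupted, recovery) if recovery is not None else corrupted
assert np.allclose(recovered, psi)
```

The syndrome identifies which qubit flipped without revealing (or disturbing) the logical amplitudes a and b, which is the essential trick the review builds on.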

  7. Angular truncation errors in integrating nephelometry

    International Nuclear Information System (INIS)

    Moosmueller, Hans; Arnott, W. Patrick

    2003-01-01

    Ideal integrating nephelometers integrate light scattered by particles over all directions. However, real nephelometers truncate light scattered in near-forward and near-backward directions below a certain truncation angle (typically 7 deg.). This results in truncation errors, with the forward truncation error becoming important for large particles. Truncation errors are commonly calculated using Mie theory, which offers little physical insight and no generalization to nonspherical particles. We show that large-particle forward truncation errors can be calculated and understood using geometric optics and diffraction theory. For small truncation angles (i.e., <10 deg.), as is typical for modern nephelometers, diffraction theory by itself is sufficient. Forward truncation errors are larger, by nearly a factor of 2, for absorbing particles than for nonabsorbing particles, because for large absorbing particles most of the scattered light is due to diffraction, as transmission is suppressed. Nephelometer calibration procedures are also discussed, as they influence the effective truncation error.
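
Under the diffraction-theory view advocated above, the forward truncation error for a large sphere can be estimated from the Airy encircled-energy formula 1 − J0²(x) − J1²(x) with x = ka·sin(θt). The sketch below is an illustration under stated assumptions (the size parameters are arbitrary, and the Bessel functions are evaluated by a simple quadrature), not the authors' calculation.

```python
import numpy as np

def bessel_j(n, x):
    # Integer-order Bessel J_n via its integral representation,
    # J_n(x) = (1/pi) * int_0^pi cos(n*t - x*sin(t)) dt  (trapezoid rule)
    t = np.linspace(0.0, np.pi, 4001)
    f = np.cos(n * t - x * np.sin(t))
    dt = t[1] - t[0]
    return dt * (f.sum() - 0.5 * (f[0] + f[-1])) / np.pi

def truncated_fraction(theta_deg, size_parameter):
    """Fraction of a large sphere's *diffracted* energy that falls inside
    the forward truncation cone [0, theta] (Airy encircled energy)."""
    x = size_parameter * np.sin(np.radians(theta_deg))
    return 1.0 - bessel_j(0, x)**2 - bessel_j(1, x)**2

# Larger particles diffract into a narrower forward lobe, so a larger share
# of their scattered light is lost below a 7 deg. truncation angle.
f_small = truncated_fraction(7.0, 5.0)    # size parameter ka = 5
f_large = truncated_fraction(7.0, 50.0)   # size parameter ka = 50
assert 0.0 < f_small < f_large < 1.0
```

For an absorbing particle, where nearly all scattered light is diffraction, this fraction approximates the forward truncation error directly; for a nonabsorbing one it applies only to the diffracted half of the scattering, consistent with the factor-of-2 result above.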

  8. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  9. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors’ research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, for a game's core design, adaptations are needed, since challenge is an important factor for fun, and under the perspective of Human Error, challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  10. Quantum error-correcting code for ternary logic

    Science.gov (United States)

    Majumdar, Ritajit; Basu, Saikat; Ghosh, Shibashis; Sur-Kolay, Susmita

    2018-05-01

    Ternary quantum systems are being studied because they provide more computational state space per unit of information; the ternary unit is known as a qutrit. A qutrit has three basis states, so a qubit may be considered a special case of a qutrit in which the coefficient of one of the basis states is zero. Hence both (2×2)-dimensional and (3×3)-dimensional Pauli errors can occur on qutrits. In this paper, we (i) explore the possible (2×2)-dimensional as well as (3×3)-dimensional Pauli errors in qutrits and show that any pairwise bit swap error can be expressed as a linear combination of shift errors and phase errors, (ii) propose a special type of error called a quantum superposition error and show its equivalence to arbitrary rotation, (iii) formulate a nine-qutrit code which can correct a single error in a qutrit, and (iv) provide its stabilizer and circuit realization.
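
Claim (i) — that a pairwise bit-swap error decomposes into shift and phase errors — can be checked numerically, because the nine operators X^a Z^b form an orthogonal basis for 3×3 matrices. This is a hedged sketch of that check, not the authors' code:

```python
import numpy as np

w = np.exp(2j * np.pi / 3)             # primitive cube root of unity
X = np.roll(np.eye(3), 1, axis=0)      # shift error: X|j> = |j+1 mod 3>
Z = np.diag([1, w, w**2])              # phase error: Z|j> = w^j |j>

# The 9 operators X^a Z^b are orthogonal under the Hilbert-Schmidt inner
# product, so they span all 3x3 matrices.
basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a in range(3) for b in range(3)]

# A pairwise bit-swap error exchanging |0> and |1>
swap01 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)

# Expansion coefficients c = Tr(B^dagger M) / 3, then reconstruct
coeffs = [np.trace(B.conj().T @ swap01) / 3 for B in basis]
recon = sum(c * B for c, B in zip(coeffs, basis))
assert np.allclose(recon, swap01)
```

The same expansion works for any 3×3 error operator, since the Weyl-Heisenberg set {X^a Z^b} is a complete operator basis in dimension 3.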

  11. Sources of Error in Satellite Navigation Positioning

    Directory of Open Access Journals (Sweden)

    Jacek Januszewski

    2017-09-01

    Full Text Available Uninterrupted information about the user’s position can generally be obtained from a satellite navigation system (SNS). At the time of this writing (January 2017), two global SNSs, GPS and GLONASS, are fully operational; two further global systems, Galileo and BeiDou, are under construction. In each SNS the accuracy of the user’s position is affected by three main factors: the accuracy of each satellite's position, the accuracy of the pseudorange measurement, and the satellite geometry. The user’s position error is a function of both the pseudorange error, called UERE (User Equivalent Range Error), and the user/satellite geometry, expressed by the appropriate Dilution Of Precision (DOP) coefficient. The UERE is decomposed into two types of errors: the signal-in-space ranging error, called URE (User Range Error), and the user equipment error (UEE). Detailed analyses of URE, UEE, UERE and the DOP coefficients, and of the changes in the DOP coefficients on different days, are presented in this paper.
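
The position error as a product of UERE and a DOP coefficient, with DOP computed from the user/satellite geometry, can be sketched numerically. The satellite directions and the 6 m UERE below are made-up illustrative values, not figures from the paper:

```python
import numpy as np

# Unit line-of-sight vectors from receiver to four hypothetical satellites
los = np.array([[0.0,   0.0,  1.0],     # one at zenith
                [0.9,   0.0,  0.436],
                [-0.45, 0.78, 0.436],
                [-0.45, -0.78, 0.436]])
los = los / np.linalg.norm(los, axis=1, keepdims=True)

# Geometry matrix: one row per satellite, [-e_x, -e_y, -e_z, 1]
# (position unknowns plus receiver clock bias)
G = np.hstack([-los, np.ones((4, 1))])
Q = np.linalg.inv(G.T @ G)              # normalized covariance of the fix

GDOP = np.sqrt(np.trace(Q))             # geometric DOP (position + time)
PDOP = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])   # position DOP

UERE = 6.0                              # assumed 1-sigma range error, metres
sigma_pos = UERE * PDOP                 # 1-sigma position error estimate
```

Spreading the satellites further apart shrinks the DOP values, which is why the same UERE yields different position errors on different days as the constellation geometry changes.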

  12. [Errors in Peruvian medical journals references].

    Science.gov (United States)

    Huamaní, Charles; Pacheco-Romero, José

    2009-01-01

    References are fundamental in our studies; an adequate selection is as important as an adequate description. To determine the number of errors in a sample of references found in Peruvian medical journals, we reviewed 515 references from scientific papers, selected by systematic randomized sampling, and corroborated the reference information against the original document or its citation in PubMed, LILACS or SciELO-Peru. We found errors in 47.6% (245) of the references, identifying 372 errors in total; the most frequent were errors in presentation style (120), authorship (100) and title (100), mainly due to spelling mistakes (91). The percentage of reference errors was high, and the errors were varied and multiple. We suggest systematic revision of references in the editorial process, as well as extending the discussion of this topic. Keywords: references, periodicals, research, bibliometrics.

  13. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  14. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed, along with possible correcting coils for reducing such field errors.

  15. Discretization vs. Rounding Error in Euler's Method

    Science.gov (United States)

    Borges, Carlos F.

    2011-01-01

    Euler's method for solving initial value problems is an excellent vehicle for observing the relationship between discretization error and rounding error in numerical computation. Reductions in stepsize, in order to decrease discretization error, necessarily increase the number of steps and so introduce additional rounding error. The problem is…
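
The trade-off described above can be seen in a few lines: shrinking the stepsize reduces Euler's O(h) discretization error, while in single precision accumulated rounding eventually takes over, to the point where the update can stall entirely. The test equation y' = y is my choice for the sketch, not necessarily the article's:

```python
import numpy as np

def euler_error(n, dtype):
    """Error of Euler's method for y' = y, y(0) = 1 on [0, 1] vs exp(1)."""
    h = dtype(1.0 / n)
    y = dtype(1.0)
    for _ in range(n):
        y = y + h * y
    return abs(float(y) - float(np.e))

# Discretization error shrinks like O(h) as the step count grows ...
errs = [euler_error(n, np.float64) for n in (10, 100, 1000)]
assert errs[0] > errs[1] > errs[2]

# ... but rounding eventually dominates: in float32, once h*y falls below
# half an ulp of y, the update y + h*y no longer changes y at all.
h = np.float32(2.0**-25)
y = np.float32(1.0)
assert y + h * y == y          # the integration stalls completely
```

Between these extremes there is an optimal stepsize at which the sum of discretization and rounding error is minimized, which is the relationship the article uses Euler's method to exhibit.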

  16. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed, and particular attention is given to issues raised in a recent article by Brewer, Campbell, and Crano. (Author)

  17. Average beta-beating from random errors

    CERN Document Server

    Tomas Garcia, Rogelio; Langner, Andy Sven; Malina, Lukas; Franchi, Andrea; CERN. Geneva. ATS Department

    2018-01-01

    The impact of random errors on the average β-beating is studied via analytical derivations and simulations. A systematic positive β-beating is expected from random errors, quadratic in the source strengths or, equivalently, in the rms β-beating. However, random errors do not have a systematic effect on the tune.

  18. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  19. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that maintenance errors, which have taken place earlier in plant history, may also have an impact on the severity of a disturbance, e.g. if they disable safety-related equipment. In particular, common cause and other dependent failures of safety systems may significantly contribute to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports covering two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors, since they are generally the most serious ones. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. A simpler treatment was given to maintenance-related single errors. The results were shown as a distribution of errors across operational states, with regard, inter alia, to the following: the operational state in which the errors were committed and detected; the operational and working conditions in which the errors were detected; and the component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  20. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, communication and training practices are the primary root causes, while omission, transposition and quantitative mistakes are the most frequent error modes. Recommendations for domestic research on human performance problems in NPPs are suggested

  1. Learning from Errors: Effects of Teachers Training on Students' Attitudes towards and Their Individual Use of Errors

    Science.gov (United States)

    Rach, Stefanie; Ufer, Stefan; Heinze, Aiso

    2013-01-01

    Constructive error handling is considered an important factor for individual learning processes. In a quasi-experimental study with Grades 6 to 9 students, we investigate effects on students' attitudes towards errors as learning opportunities in two conditions: an error-tolerant classroom culture, and the first condition along with additional…

  2. Evaluation of Data with Systematic Errors

    International Nuclear Information System (INIS)

    Froehner, F. H.

    2003-01-01

    Application-oriented evaluated nuclear data libraries such as ENDF and JEFF contain not only recommended values but also uncertainty information in the form of 'covariance' or 'error files'. These can neither be constructed nor utilized properly without a thorough understanding of uncertainties and correlations. It is shown how incomplete information about errors is described by multivariate probability distributions or, more summarily, by covariance matrices, and how correlations are caused by incompletely known common errors. Parameter estimation for the practically most important case of the Gaussian distribution with common errors is developed in close analogy to the more familiar case without. The formalism shows that, contrary to widespread belief, common ('systematic') and uncorrelated ('random' or 'statistical') errors are to be added in quadrature. It also shows explicitly that repetition of a measurement reduces mainly the statistical uncertainties but not the systematic ones. While statistical uncertainties are readily estimated from the scatter of repeatedly measured data, systematic uncertainties can only be inferred from prior information about common errors and their propagation. The optimal way to handle error-affected auxiliary quantities ('nuisance parameters') in data fitting and parameter estimation is to adjust them on the same footing as the parameters of interest and to integrate (marginalize) them out of the joint posterior distribution afterward
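
The key formula above — common (systematic) and uncorrelated (statistical) errors add in quadrature, and repetition shrinks only the statistical part — can be written down directly. The numbers below are illustrative, not from any evaluation file:

```python
import math

def total_uncertainty(stat, syst, n):
    """1-sigma uncertainty of the mean of n repeated measurements with a
    common systematic error: the statistical and systematic components add
    in quadrature, but only the statistical one shrinks with repetition."""
    return math.sqrt(stat**2 / n + syst**2)

u1 = total_uncertainty(0.10, 0.05, 1)      # single measurement
u100 = total_uncertainty(0.10, 0.05, 100)  # mean of 100 repeats
```

With 100 repeats the statistical component falls from 0.10 to 0.01, but the total uncertainty floors just above the common systematic 0.05, exactly the behavior the abstract describes.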

  3. Inducible error-prone repair in B. subtilis. Progress report, September 1, 1981-April 30, 1983

    International Nuclear Information System (INIS)

    Yasbin, R.E.

    1982-12-01

    Considerable progress has been made on determining the mechanisms of mutagenesis in B. subtilis and on elucidating the interactions between DNA repair systems and mutagenesis in this bacterium. Specifically, the B. subtilis W-reactivation system has been shown to involve a damage-specific (pyrimidine dimer) repair mechanism which may or may not be error-free. On the other hand, error-prone repair (as defined by the ability of cells to be mutated by low doses of UV) has been definitively established in this bacterium. The investigation of the genes controlling the error-prone repair system has revealed that UV mutagenesis is significantly decreased in cells carrying the recG13 mutation. In addition, cells lacking a functional excision repair system are hypermutable to EMS, although these cells are not hypersensitive to the killing activity of EMS. Both EMS and UV generate the same spectrum of mutants (reversions vs suppressors); however, cells lacking a functional excision repair system apparently generate more suppressor mutations when exposed to UV as compared to the other strains tested. A genomic library for B. subtilis has been established. This library will be specifically used to isolate a cloned fragment of DNA which codes for the major subunit of the Bacillus DNA polymerase III. However, this bank can also be used to isolate Bacillus genes which control most of the repair functions. Furthermore, we have begun the process of cloning the E. coli phr+ gene into B. subtilis

  4. Clock error models for simulation and estimation

    International Nuclear Information System (INIS)

    Meditch, J.S.

    1981-10-01

    Mathematical models for the simulation and estimation of errors in precision oscillators used as time references in satellite navigation systems are developed. The results, based on all currently known oscillator error sources, are directly implementable on a digital computer. The simulation formulation is sufficiently flexible to allow for the inclusion or exclusion of individual error sources as desired. The estimation algorithms, following from Kalman filter theory, provide directly for the error analysis of clock errors in both filtering and prediction
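
A common concrete instance of such models is the textbook two-state clock, in which the phase offset is driven by the frequency offset plus white noise, and the frequency offset itself random-walks. As the abstract notes, this is directly implementable on a digital computer; the noise intensities below are illustrative, not values from the report:

```python
import numpy as np

rng = np.random.default_rng(0)
tau = 1.0                          # sampling interval, s
q_phase, q_freq = 1e-22, 1e-26     # illustrative per-step noise variances
n_steps, n_runs = 1000, 2000       # simulate an ensemble of clocks

# Two-state model: x (phase) integrates y (frequency); both are noisy.
phase = np.zeros((n_runs, n_steps + 1))
freq = np.zeros((n_runs, n_steps + 1))
for k in range(n_steps):
    phase[:, k + 1] = (phase[:, k] + tau * freq[:, k]
                       + rng.normal(0, np.sqrt(q_phase), n_runs))
    freq[:, k + 1] = freq[:, k] + rng.normal(0, np.sqrt(q_freq), n_runs)

# Random-walk frequency noise makes the ensemble phase spread grow with time
assert phase[:, -1].var() > phase[:, n_steps // 2].var()
```

Because the model is linear with Gaussian driving noise, it slots directly into a Kalman filter for the filtering and prediction error analyses the abstract mentions, and individual noise sources can be switched off by zeroing their intensities.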

  5. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error-correcting techniques. It includes essential basic concepts and the latest advances on key topics in the design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include the evolution of error correction techniques, industrial user needs, and architectures and design approaches for the most advanced error-correcting codes (polar codes, non-binary LDPC, product codes, etc.). This book provides access to recent results and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error-correcting codes; • Presents error correction codes from theory to optimized architecture for current and next-generation standards; • Provides coverage of industrial user needs and advanced error-correcting techniques.
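
For readers new to the area, the flavor of such codes can be conveyed by the classic Hamming(7,4) code, which corrects any single bit error. This sketch is generic background, not an example from the book:

```python
import numpy as np

# Systematic Hamming(7,4): codeword = [d1 d2 d3 d4 p1 p2 p3] over GF(2),
# with p1 = d1+d2+d4, p2 = d1+d3+d4, p3 = d2+d3+d4.
G = np.array([[1, 0, 0, 0, 1, 1, 0],     # generator matrix
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],     # parity-check matrix
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data):
    return (data @ G) % 2

def correct(word):
    syndrome = (H @ word) % 2
    if syndrome.any():
        # The syndrome equals the column of H at the error position
        err = next(i for i in range(7) if (H[:, i] == syndrome).all())
        word = word.copy()
        word[err] ^= 1
    return word

data = np.array([1, 0, 1, 1])
cw = encode(data)
noisy = cw.copy()
noisy[2] ^= 1                            # flip one bit in the channel
assert (correct(noisy) == cw).all()
```

Hardware implementations of codes like this reduce to small XOR networks; the advanced codes the book covers (polar, LDPC, product codes) trade this simplicity for far stronger correction, which is what makes their architecture design a topic in its own right.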

  6. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for flawless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey-relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  7. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error and error management (during and after the error are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  8. Scope Definition

    DEFF Research Database (Denmark)

    Bjørn, Anders; Owsianiak, Mikołaj; Laurent, Alexis

    2018-01-01

    The scope definition is the second phase of an LCA. It determines what product systems are to be assessed and how this assessment should take place. This chapter teaches how to perform a scope definition. First, important terminology and key concepts of LCA are introduced. Then, the nine items...... making up a scope definition are elaborately explained: (1) Deliverables. (2) Object of assessment, (3) LCI modelling framework and handling of multifunctional processes, (4) System boundaries and completeness requirements, (5) Representativeness of LCI data, (6) Preparing the basis for the impact...... assessment, (7) Special requirements for system comparisons, (8) Critical review needs and (9) Planning reporting of results. The instructions relate both to the performance and reporting of a scope definition and are largely based on ILCD....

  9. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Full Text Available Background: Medication errors, arising for various reasons, may alter the outcome of any patient, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died due to a lethal medication error. Unfortunately, these types of errors are not rare, and they have preventable causes, including a lack of suitable and sufficient training and practice for medical students and failures in the medical curriculum. Conclusion: Some important causes are discussed here because their consequences can be tremendous. We found that most of them are easily preventable. If practitioners are aware of the method of use, complications, dosage and contraindications of drugs, most of these fatal errors can be minimized.

  10. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  11. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations that it commits a category mistake. Gilbert Ryle's philosophy is used as a tool to explain the reasoning behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that grounds the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  12. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking computational errors into account. The author shows that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. The monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton's meth...
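The book's central observation, that algorithms still reach a good approximate solution when computational errors are bounded by a small constant, can be illustrated with a toy projected gradient method. The following Python sketch is illustrative only: the objective, step size and error bound are invented, not taken from the monograph.

```python
import random

def project(x, lo=-1.0, hi=1.0):
    """Project onto the box [lo, hi] (a simple closed convex set)."""
    return max(lo, min(hi, x))

def noisy_gradient_projection(grad, x0, step, delta, iters, seed=0):
    """Projected gradient descent where each gradient evaluation
    carries a bounded computational error of magnitude <= delta."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        g = grad(x) + rng.uniform(-delta, delta)  # bounded computational error
        x = project(x - step * g)
    return x

# Minimize f(x) = (x - 0.3)^2 over [-1, 1]; the exact minimizer is 0.3.
grad = lambda x: 2.0 * (x - 0.3)
x_exact = noisy_gradient_projection(grad, x0=1.0, step=0.1, delta=0.0, iters=200)
x_noisy = noisy_gradient_projection(grad, x0=1.0, step=0.1, delta=0.01, iters=200)
print(abs(x_exact - 0.3), abs(x_noisy - 0.3))
```

With delta = 0 the iterates converge to the minimizer; with delta = 0.01 they settle in a small neighborhood of it, whose radius scales with the error bound, matching the book's theme.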

  13. Friendship at work and error disclosure

    Directory of Open Access Journals (Sweden)

    Hsiao-Yen Mao

    2017-10-01

    Full Text Available Organizations rely on contextual factors to promote employee disclosure of self-made errors, which induces a resource dilemma (i.e., disclosure entails spending one's own resources to bring others resources) and a friendship dilemma (i.e., disclosure is seemingly easier through friendship, yet the cost of friendship is embedded). This study proposes that friendship at work enhances error disclosure and uses conservation of resources theory as the underlying explanation. A three-wave survey collected data from 274 full-time employees with a variety of occupational backgrounds. Empirical results indicated that friendship enhanced error disclosure partially through relational mechanisms of employees' attitudes toward coworkers (i.e., employee engagement) and of coworkers' attitudes toward employees (i.e., perceived social worth). These effects hold when controlling for established predictors of error disclosure. This study expands extant perspectives on employee error and the theoretical lenses used to explain the influence of friendship at work. We propose that, while promoting error disclosure through both contextual and relational approaches, organizations should be vigilant about potential incongruence.

  14. Error-Resilient Unequal Error Protection of Fine Granularity Scalable Video Bitstreams

    Science.gov (United States)

    Cai, Hua; Zeng, Bing; Shen, Guobin; Xiong, Zixiang; Li, Shipeng

    2006-12-01

    This paper deals with the optimal packet loss protection issue for streaming fine granularity scalable (FGS) video bitstreams over IP networks. Unlike many other existing protection schemes, we develop an error-resilient unequal error protection (ER-UEP) method that adds redundant information optimally for loss protection and, at the same time, completely cancels the dependency within the bitstream after loss recovery. In our ER-UEP method, the FGS enhancement-layer bitstream is first packetized into a group of independent and scalable data packets. Parity packets, which are also scalable, are then generated. Unequal protection is finally achieved by properly shaping the data packets and the parity packets. We present an algorithm that can optimally allocate the rate budget between data packets and parity packets, together with several simplified versions that have lower complexity. Compared with conventional UEP schemes that suffer from bit contamination (caused by the bit dependency within a bitstream), our method guarantees successful decoding of all received bits, thus leading to strong error resilience (at any fixed channel bandwidth) and high robustness (under varying and/or unclean channel conditions).
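The packetization-plus-parity idea can be sketched with the simplest possible erasure code. This hypothetical Python toy uses a single XOR parity packet to recover one lost data packet; a real ER-UEP scheme would use stronger, scalable parity and shape packet lengths so the more important FGS bits receive more protection. The packet contents are invented for illustration.

```python
def xor_parity(packets):
    """XOR parity across equal-length packets; recovers any single loss."""
    parity = bytes(len(packets[0]))
    for p in packets:
        parity = bytes(a ^ b for a, b in zip(parity, p))
    return parity

def recover(received, parity):
    """Recover the single missing packet (the None entry) by XOR-ing
    the parity with every packet that did arrive."""
    acc = parity
    for p in received:
        if p is not None:
            acc = bytes(a ^ b for a, b in zip(acc, p))
    return acc

# Three independent, equal-length data packets plus one parity packet.
data = [b"base", b"mid_", b"enh_"]
parity = xor_parity(data)
lost = [data[0], None, data[2]]       # the middle packet is lost in transit
assert recover(lost, parity) == data[1]
```

Because each data packet is independently decodable, any packet that arrives is useful on its own, which is the property the paper exploits to avoid bit contamination.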

  15. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  16. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  17. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.

  18. Volume of interest CBCT and tube current modulation for image guidance using dynamic kV collimation

    Energy Technology Data Exchange (ETDEWEB)

    Parsons, David, E-mail: david.parsons@dal.ca, E-mail: james.robar@nshealth.ca [Department of Physics and Atmospheric Science, Dalhousie University, 5820 University Avenue, Halifax, Nova Scotia B3H 1V7 (Canada); Robar, James L., E-mail: david.parsons@dal.ca, E-mail: james.robar@nshealth.ca [Department of Radiation Oncology and Department of Physics and Atmospheric Science, Dalhousie University, 5820 University Avenue, Halifax, Nova Scotia B3H 1V7 (Canada)

    2016-04-15

    Purpose: The focus of this work is the development of a novel blade collimation system enabling volume of interest (VOI) CBCT with tube current modulation using the kV image guidance source on a linear accelerator. Advantages of the system are assessed, particularly with regard to reduction and localization of dose and improvement of image quality. Methods: A four blade dynamic kV collimator was developed to track a VOI during a CBCT acquisition. The current prototype is capable of tracking an arbitrary volume defined by the treatment planner for subsequent CBCT guidance. During gantry rotation, the collimator tracks the VOI with adjustment of position and dimension. CBCT image quality was investigated as a function of collimator dimension, while maintaining the same dose to the VOI, for a 22.2 cm diameter cylindrical water phantom with a 9 mm diameter bone insert centered on isocenter. Dose distributions were modeled using a dynamic BEAMnrc library and DOSXYZnrc. The resulting VOI dose distributions were compared to full-field CBCT distributions to quantify dose reduction and localization to the target volume. A novel method of optimizing x-ray tube current during CBCT acquisition was developed and assessed with regard to contrast-to-noise ratio (CNR) and imaging dose. Results: Measurements show that the VOI CBCT method using the dynamic blade system yields an increase in contrast-to-noise ratio by a factor of approximately 2.2. Depending upon the anatomical site, dose was reduced to 15%–80% of the full-field CBCT value along the central axis plane and down to less than 1% out of plane. The use of tube current modulation allowed for specification of a desired SNR within projection data. For approximately the same dose to the VOI, CNR was further increased by a factor of 1.2 for modulated VOI CBCT, giving a combined improvement of 2.6 compared to full-field CBCT. Conclusions: The present dynamic blade system provides significant improvements in CNR for the same

  19. Association of medication errors with drug classifications, clinical units, and consequence of errors: Are they related?

    Science.gov (United States)

    Muroi, Maki; Shen, Jay J; Angosta, Alona

    2017-02-01

    Registered nurses (RNs) play an important role in safe medication administration and patient safety. This study examined a total of 1276 medication error (ME) incident reports made by RNs in hospital inpatient settings in the southwestern region of the United States. The most common drug class associated with MEs was cardiovascular drugs (24.7%). Among this class, anticoagulants had the most errors (11.3%). Antimicrobials were the second most common drug class associated with errors (19.1%), and vancomycin was the most common antimicrobial that caused errors in this category (6.1%). MEs occurred more frequently in the medical-surgical and intensive care units than in any other hospital units. Ten percent of MEs reached the patients with harm and 11% reached the patients with increased monitoring. Understanding the contributing factors related to MEs, addressing and eliminating the risk of errors across hospital units, and providing education and resources for nurses may help reduce MEs. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Positive Beliefs about Errors as an Important Element of Adaptive Individual Dealing with Errors during Academic Learning

    Science.gov (United States)

    Tulis, Maria; Steuer, Gabriele; Dresel, Markus

    2018-01-01

    Research on learning from errors gives reason to assume that errors provide a high potential to facilitate deep learning if students are willing and able to take these learning opportunities. The first aim of this study was to analyse whether beliefs about errors as learning opportunities can be theoretically and empirically distinguished from…

  1. Error Modeling and Design Optimization of Parallel Manipulators

    DEFF Research Database (Denmark)

    Wu, Guanglei

    /backlash, manufacturing and assembly errors and joint clearances. From the error prediction model, the distributions of the pose errors due to joint clearances are mapped within its constant-orientation workspace and the correctness of the developed model is validated experimentally. Additionally, using the screw......, dynamic modeling etc. Next, the first-order differential equation of the kinematic closure equation of the planar parallel manipulator is obtained to develop its error model in both Polar and Cartesian coordinate systems. The established error model contains the error sources of actuation error

  2. Error characterization for asynchronous computations: Proxy equation approach

    Science.gov (United States)

    Sallai, Gabriella; Mittal, Ankita; Girimaji, Sharath

    2017-11-01

    Numerical techniques for asynchronous fluid flow simulations are currently under development to enable efficient utilization of massively parallel computers. These numerical approaches attempt to accurately solve the time evolution of transport equations using spatial information at different time levels. The truncation error of asynchronous methods can be divided into two parts: delay-dependent (EA), or asynchronous, error and delay-independent (ES), or synchronous, error. The focus of this study is a specific asynchronous error mitigation technique called the proxy-equation approach. The aim of this study is to examine these errors as a function of the characteristic wavelength of the solution. Mitigation of asynchronous effects requires that the asynchronous error be smaller than the synchronous truncation error. For a simple convection-diffusion equation, proxy-equation error analysis identifies a critical initial wave number, λc. At smaller wave numbers, synchronous errors are larger than asynchronous errors. We examine various approaches to increase the value of λc in order to improve the range of applicability of the proxy-equation approach.
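The split into synchronous truncation error and delay-induced asynchronous error can be demonstrated numerically. This hypothetical Python sketch (a 1D diffusion equation rather than the convection-diffusion case studied here; all grid parameters are invented) lets two grid points use stale neighbor values, mimicking delayed data at a processor boundary, and measures the deviation from a fully synchronous run:

```python
import math

# 1D diffusion u_t = nu * u_xx on a periodic grid, explicit Euler.
# In the "asynchronous" run, the two end points use previous-step
# neighbor values, mimicking delayed data from another processor.
N, nu, dx = 64, 0.1, 1.0 / 64
dt = 0.2 * dx * dx / nu                      # stable explicit step
u0 = [math.sin(2 * math.pi * i / N) for i in range(N)]

def step(u, u_old, delayed):
    out = []
    for i in range(N):
        left = u_old[(i - 1) % N] if (delayed and i == 0) else u[(i - 1) % N]
        right = u_old[(i + 1) % N] if (delayed and i == N - 1) else u[(i + 1) % N]
        out.append(u[i] + nu * dt / dx**2 * (left - 2 * u[i] + right))
    return out

sync = async_u = prev = u0
for _ in range(50):
    new_sync = step(sync, sync, delayed=False)
    new_async = step(async_u, prev, delayed=True)
    prev, sync, async_u = async_u, new_sync, new_async

err = max(abs(a - b) for a, b in zip(sync, async_u))
print(err)  # the delay-induced (asynchronous) part of the error
```

The printed value isolates the asynchronous error EA; comparing it against the synchronous truncation error at different wave numbers is the kind of analysis the abstract describes.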

  3. Current error vector based prediction control of the section winding permanent magnet linear synchronous motor

    Energy Technology Data Exchange (ETDEWEB)

    Hong Junjie, E-mail: hongjjie@mail.sysu.edu.cn [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China); Li Liyi, E-mail: liliyi@hit.edu.cn [Dept. Electrical Engineering, Harbin Institute of Technology, Harbin 150000 (China); Zong Zhijian; Liu Zhongtu [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China)

    2011-10-15

    Highlights: • The structure of the permanent magnet linear synchronous motor (SW-PMLSM) is new. • A new current control method, CEVPC, is employed in this motor. • The sectional power supply method differs from the others and is effective. • The performance worsens under voltage and current limitations. - Abstract: To achieve features such as greater thrust density and higher efficiency without reducing thrust stability, this paper proposes a section winding permanent magnet linear synchronous motor (SW-PMLSM), whose iron core is continuous, whereas the winding is divided. The discrete system model of the motor is derived. With the definition of the current error vector and the selection of the value function, the theory of current error vector based prediction control (CEVPC) for the motor currents is explained clearly. According to the winding section feature, the motion region of the mover is divided into five zones, in which the implementation of the current predictive control method is proposed. Finally, the experimental platform is constructed and experiments are carried out. The results show that the current control has a good dynamic response and the thrust on the mover remains essentially constant.
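The CEVPC idea, predicting the next current for each admissible voltage and selecting the one that minimizes a value function of the current error, can be sketched as a one-step finite-control-set prediction. All motor constants and candidate voltages below are illustrative, not taken from the paper:

```python
# Minimal sketch of current-error-vector-based predictive control (CEVPC):
# for each candidate inverter voltage, predict the next current from a
# discrete first-order model and pick the voltage minimizing the current
# error norm (the "value function"). Constants are invented for illustration.
R, L, Ts = 0.5, 1e-3, 1e-4           # resistance, inductance, sample time
back_emf = 2.0                        # assumed constant over one step

def predict(i_now, v):
    """Discrete model: i[k+1] = i[k] + Ts/L * (v - R*i[k] - e)."""
    return i_now + Ts / L * (v - R * i_now - back_emf)

def cevpc_step(i_now, i_ref, candidates):
    """Choose the candidate voltage whose predicted current error is smallest."""
    return min(candidates, key=lambda v: abs(i_ref - predict(i_now, v)))

candidates = [-10.0, -5.0, 0.0, 5.0, 10.0]
v_best = cevpc_step(i_now=1.0, i_ref=1.4, candidates=candidates)
print(v_best)  # 5.0
```

In the paper's scheme the same selection runs every sampling period, with the candidate set and model adjusted per motion zone of the mover.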

  4. Haplotype reconstruction error as a classical misclassification problem: introducing sensitivity and specificity as error measures.

    Directory of Open Access Journals (Sweden)

    Claudia Lamina

    Full Text Available BACKGROUND: Statistically reconstructing haplotypes from single nucleotide polymorphism (SNP) genotypes can lead to falsely classified haplotypes. This can be an issue when interpreting haplotype association results or when selecting subjects with certain haplotypes for subsequent functional studies. It was our aim to quantify haplotype reconstruction error and to provide tools for it. METHODS AND RESULTS: Through numerous simulation scenarios, we systematically investigated several error measures, including discrepancy, error rate, and R², and introduced sensitivity and specificity to this context. We exemplified several measures in the KORA study, a large population-based study from Southern Germany. We find that the specificity is slightly reduced only for common haplotypes, while the sensitivity was decreased for some, but not all, rare haplotypes. The overall error rate generally increased with an increasing number of loci, increasing minor allele frequency of SNPs, decreasing correlation between the alleles and increasing ambiguity. CONCLUSIONS: We conclude that, with the analytical approach presented here, haplotype-specific error measures can be computed to gain insight into the haplotype uncertainty. This method provides the information whether a specific risk haplotype can be expected to be reconstructed with essentially no or high misclassification and thus indicates the magnitude of expected bias in association estimates. We also illustrate that sensitivity and specificity separate two dimensions of the haplotype reconstruction error, which completely describe the misclassification matrix and thus provide the prerequisite for methods accounting for misclassification.
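Treating the reconstruction of a given haplotype as a binary classification per chromosome, sensitivity and specificity follow directly from the misclassification matrix. A small Python sketch with invented haplotype data (not from the KORA study):

```python
def haplotype_error_measures(true_haps, reconstructed_haps, hap):
    """Sensitivity and specificity of reconstructing one haplotype 'hap',
    treating reconstruction as a per-chromosome binary classification."""
    tp = fp = fn = tn = 0
    for t, r in zip(true_haps, reconstructed_haps):
        if t == hap and r == hap:
            tp += 1          # correctly assigned the haplotype
        elif t != hap and r == hap:
            fp += 1          # falsely assigned the haplotype
        elif t == hap and r != hap:
            fn += 1          # missed the haplotype
        else:
            tn += 1          # correctly assigned a different haplotype
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

true_h = ["AC", "AC", "AG", "GC", "AC", "AG"]
recon_h = ["AC", "AG", "AG", "GC", "AC", "AG"]
sens, spec = haplotype_error_measures(true_h, recon_h, "AC")
print(sens, spec)  # sensitivity 2/3, specificity 1.0
```

Computed per haplotype, the two measures separate the two dimensions of reconstruction error that the abstract describes: missing a true haplotype versus falsely assigning it.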

  5. Automatic error compensation in dc amplifiers

    International Nuclear Information System (INIS)

    Longden, L.L.

    1976-01-01

    When operational amplifiers are exposed to high levels of neutron fluence or total ionizing dose, significant changes may be observed in input voltages and currents. These changes may produce large errors at the output of direct-coupled amplifier stages. Therefore, the need exists for automatic compensation techniques. However, previously introduced techniques compensate only for errors in the main amplifier and neglect the errors induced by the compensating circuitry. In this paper, the techniques introduced compensate not only for errors in the main operational amplifier, but also for errors induced by the compensation circuitry. Included in the paper is a theoretical analysis of each compensation technique, along with advantages and disadvantages of each. Important design criteria and information necessary for proper selection of semiconductor switches will also be included. Introduced in this paper will be compensation circuitry for both resistive and capacitive feedback networks

  6. El error en el delito imprudente

    Directory of Open Access Journals (Sweden)

    Miguel Angel Muñoz García

    2011-12-01

    Full Text Available The theory of error in negligent crimes is a thorny and controversial topic in criminal-law doctrine: there are in fact very few references, and no reasonable consensus has been reached. Starting from an analysis of the doctrinal structure of the negligent offense, in which the objective duty of care stands out as the element of the offense definition on which the error falls, and from the different doctrinal positions defending the applicability of mistake of fact (error de tipo) and mistake of law (error de prohibición), the viability of the latter is argued on doctrinal and criminal-policy grounds, with the breach of the objective duty of care, as a consequence of the error, remaining an issue to be analyzed at the level of culpability.

  7. Characteristics of medication errors with parenteral cytotoxic drugs

    OpenAIRE

    Fyhr, A; Akselsson, R

    2012-01-01

    Errors involving cytotoxic drugs have the potential of being fatal and should therefore be prevented. The objective of this article is to identify the characteristics of medication errors involving parenteral cytotoxic drugs in Sweden. A total of 60 cases reported to the national error reporting systems from 1996 to 2008 were reviewed. Classification was made to identify cytotoxic drugs involved, type of error, where the error occurred, error detection mechanism, and consequences for the pati...

  8. Stochastic goal-oriented error estimation with memory

    Science.gov (United States)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  9. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

    Full Text Available Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike other localization papers which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem, addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on characterization of node uncertainties and discrimination between neighboring nodes. The error-control mechanism uses only local knowledge and is fully decentralized. Simulation results have shown that the active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors. It greatly improves localization accuracy and robustness.

  10. Formulation of uncertainty relation of error and disturbance in quantum measurement by using quantum estimation theory

    International Nuclear Information System (INIS)

    Yu Watanabe; Masahito Ueda

    2012-01-01

    Full text: When we try to obtain information about a quantum system, we need to perform a measurement on the system. The measurement process causes an unavoidable state change. Heisenberg discussed a thought experiment of the position measurement of a particle by using a gamma-ray microscope, and found a trade-off relation between the error of the measured position and the disturbance in the momentum caused by the measurement process. The trade-off relation epitomizes the complementarity in quantum measurements: we cannot perform a measurement of an observable without causing disturbance in its canonically conjugate observable. However, at the time Heisenberg found the complementarity, quantum measurement theory was not yet established, and Kennard and Robertson's inequality was erroneously interpreted as a mathematical formulation of the complementarity. Kennard and Robertson's inequality actually implies the indeterminacy of the quantum state: non-commuting observables cannot have definite values simultaneously. However, Kennard and Robertson's inequality reflects the inherent nature of a quantum state alone, and does not concern any trade-off relation between the error and disturbance in the measurement process. In this talk, we report a resolution to the complementarity in quantum measurements. First, we find that it is necessary to involve the estimation process from the outcome of the measurement for quantifying the error and disturbance in the quantum measurement. We clarify the implicitly involved estimation process in Heisenberg's gamma-ray microscope and other measurement schemes, and formulate the error and disturbance for an arbitrary quantum measurement by using quantum estimation theory. The error and disturbance are defined in terms of the Fisher information, which gives the upper bound of the accuracy of the estimation.
Second, we obtain uncertainty relations between the measurement errors of two observables [1], and between the error and disturbance in the

  11. Medication errors with the use of allopurinol and colchicine: a retrospective study of a national, anonymous Internet-accessible error reporting system.

    Science.gov (United States)

    Mikuls, Ted R; Curtis, Jeffrey R; Allison, Jeroan J; Hicks, Rodney W; Saag, Kenneth G

    2006-03-01

    To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. We examined data from the MEDMARX database, covering the period from January 1, 1999 through December 31, 2003. For allopurinol and colchicine, we examined error severity, source, type, contributing factors, and healthcare personnel involved in errors, and we detailed errors resulting in patient harm. Causes of error and the frequency of other error characteristics were compared for gout medications versus other musculoskeletal treatments using the chi-square statistic. Gout medication errors occurred in 39% (n = 273) of facilities participating in the MEDMARX program. Reported errors were predominantly from the inpatient hospital setting and related to the use of allopurinol (n = 524), followed by colchicine (n = 315), probenecid (n = 50), and sulfinpyrazone (n = 2). Compared to errors involving other musculoskeletal treatments, allopurinol and colchicine errors were more often ascribed to problems with physician prescribing (7% for other therapies versus 23-39% for allopurinol and colchicine, p < 0.0001) and less often due to problems with drug administration or nursing error (50% vs 23-27%, p < 0.0001). Our results suggest that inappropriate prescribing practices are characteristic of errors occurring with the use of allopurinol and colchicine. Physician prescribing practices are a potential target for quality improvement interventions in gout care.

  12. The Power of Neuroimaging Biomarkers for Screening Frontotemporal Dementia

    Science.gov (United States)

    McMillan, Corey T.; Avants, Brian B.; Cook, Philip; Ungar, Lyle; Trojanowski, John Q.; Grossman, Murray

    2014-01-01

    Frontotemporal dementia (FTD) is a clinically and pathologically heterogeneous neurodegenerative disease that can result from either frontotemporal lobar degeneration (FTLD) or Alzheimer’s disease (AD) pathology. It is critical to establish statistically powerful biomarkers that can achieve substantial cost-savings and increase the feasibility of clinical trials. We assessed three broad categories of neuroimaging methods to screen underlying FTLD and AD pathology in a clinical FTD series: global measures (e.g., ventricular volume), anatomical volumes of interest (VOIs) (e.g., hippocampus) using a standard atlas, and data-driven VOIs using Eigenanatomy. We evaluated clinical FTD patients (N=93) with cerebrospinal fluid, gray matter (GM) MRI, and diffusion tensor imaging (DTI) to assess whether they had underlying FTLD or AD pathology. Linear regression was performed to identify the optimal VOIs for each method in a training dataset, and we then evaluated classification sensitivity and specificity in an independent test cohort. Power was evaluated by calculating the minimum sample sizes (mSS) required in the test classification analyses for each model. The data-driven VOI analysis using a multimodal combination of GM MRI and DTI achieved the greatest classification accuracy (89% sensitivity; 89% specificity) and required a lower minimum sample size (N=26) relative to anatomical VOIs and global measures. We conclude that a data-driven VOI approach employing Eigenanatomy provides more accurate classification, benefits from increased statistical power in unseen datasets, and therefore provides a robust method for screening underlying pathology in FTD patients for entry into clinical trials. PMID:24687814

  13. Errors generated with the use of rectangular collimation

    International Nuclear Information System (INIS)

    Parks, E.T.

    1991-01-01

    This study was designed to determine whether various techniques for achieving rectangular collimation generate different numbers and types of errors and remakes and to determine whether operator skill level influences errors and remakes. Eighteen students exposed full-mouth series of radiographs on manikins with the use of six techniques. The students were grouped according to skill level. The radiographs were evaluated for errors and remakes resulting from errors in the following categories: cone cutting, vertical angulation, and film placement. Significant differences were found among the techniques in cone cutting errors and remakes, vertical angulation errors and remakes, and total errors and remakes. Operator skill did not appear to influence the number or types of errors or remakes generated. Rectangular collimation techniques produced more errors than did the round collimation techniques. However, only one rectangular collimation technique generated significantly more remakes than the other techniques

  14. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently, object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and are not strictly comparable, owing to larger variances in the counts than would be expected from sampling variance alone. Naturally, since OCR accuracy is based on the ratio of the number of OCR errors to the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR-generated output or the weights used in the dynamic programming minimization procedure). The problem with not revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
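A dynamic-programming edit-distance error counter of the kind the paper discusses can be stated compactly. As the paper stresses, the weights (all 1 here) and the handling of suspect markers are implementation choices that change the resulting counts; this Python sketch fixes one such choice for illustration:

```python
def levenshtein(ref, ocr):
    """Edit distance between reference text and OCR output: the minimum
    number of substitutions, insertions and deletions, i.e. one way of
    counting OCR errors. All edit weights here are 1, a choice that must
    be reported for counts to be comparable across studies."""
    prev = list(range(len(ocr) + 1))
    for i, rc in enumerate(ref, 1):
        cur = [i]
        for j, oc in enumerate(ocr, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (rc != oc)))   # substitution
        prev = cur
    return prev[-1]

errors = levenshtein("recognition", "recogmtion")  # 'ni' misread as 'm'
accuracy = 1 - errors / len("recognition")
print(errors, round(accuracy, 3))  # 2 0.818
```

Changing the weights, or treating a suspect-marked character as a free match, yields different error counts for the same output, which is exactly the comparability problem the paper identifies.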

  15. Radiologic errors, past, present and future.

    Science.gov (United States)

    Berlin, Leonard

    2014-01-01

    During the 10-year period beginning in 1949, with the publication of five articles in two radiology journals and the UK's The Lancet, a California radiologist named L.H. Garland almost single-handedly shocked the entire medical and especially the radiologic community. He focused their attention on a fact now known and accepted by all, but at that time not previously recognized and acknowledged only with great reluctance: that a substantial degree of observer error was prevalent in radiologic interpretation. In the more than half-century that followed, Garland's pioneering work has been affirmed and reaffirmed by numerous researchers. Retrospective studies disclosed then, and still disclose today, that diagnostic errors in radiologic interpretations of plain radiographic (as well as CT, MR, ultrasound, and radionuclide) images hover in the 30% range, not too dissimilar to the error rates in clinical medicine. Seventy percent of these errors are perceptual in nature, i.e., the radiologist does not "see" the abnormality on the imaging exam, perhaps due to poor conspicuity, satisfaction of search, or simply the "inexplicable psycho-visual phenomena of human perception." The remainder are cognitive errors: the radiologist sees an abnormality but fails to render a correct diagnosis by attaching the wrong significance to what is seen, perhaps due to inadequate knowledge, or an alliterative or judgmental error. Computer-assisted detection (CAD), a technology that for the past two decades has been utilized primarily in mammographic interpretation, increases sensitivity but at the same time decreases specificity; whether it reduces errors is debatable. Efforts to reduce diagnostic radiological errors continue, but the degree to which they will be successful remains to be determined.

  16. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis. Subtitle: Belgian (L1) and Danish (L1) learners' use of Chinese (L2) comparative sentences in written production. Xiaoli Wu, Chun Zhang. Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the 'error' in a pedagogically insightful way or to empirically investigate the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners' use of Chinese (L2) comparative sentences in written production …

  17. Value of Information Web Application

    Science.gov (United States)

    2015-04-01

    … their understanding of VoI attributes (source reliability, information content, and latency). The VoI web application emulates many features of a … only when using the Firefox web browser on those computers (Internet Explorer was not viable due to unchangeable user settings). During testing, the …

  18. When do we need more data? A primer on calculating the value of information for applied ecologists

    Science.gov (United States)

    Canessa, Stefano; Guillera-Arroita, Gurutzeta; Lahoz-Monfort, José J.; Southwell, Darren M; Armstrong, Doug P.; Chadès, Iadine; Lacy, Robert C; Converse, Sarah J.

    2015-01-01

    Applied ecologists continually advocate further research, under the assumption that obtaining more information will lead to better decisions. Value of information (VoI) analysis can be used to quantify how additional information may improve management outcomes: despite its potential, this method is still underused in environmental decision-making. We provide a primer on how to calculate the VoI and assess whether reducing uncertainty will change a decision. Our aim is to facilitate the application of VoI by managers who are not familiar with decision-analytic principles and notation, by increasing the technical accessibility of the tool.
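The core quantity in such a VoI analysis, the expected value of perfect information (EVPI), is the gap between the expected outcome if uncertainty could be resolved before acting and the expected outcome of committing to the single best action now. A minimal sketch follows; the two-action, two-scenario outcome table and probabilities are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical management problem: rows = actions, columns = scenarios,
# entries = expected management outcome under each combination.
outcomes = np.array([
    [40.0, 90.0],   # action A (e.g. captive breeding)
    [60.0, 70.0],   # action B (e.g. habitat protection)
])
p = np.array([0.5, 0.5])  # current belief over the two scenarios

# Under uncertainty: pick the one action that is best on average.
ev_uncertainty = (outcomes @ p).max()

# With perfect information: pick the best action per scenario,
# then average over scenarios.
ev_perfect = (outcomes.max(axis=0) * p).sum()

evpi = ev_perfect - ev_uncertainty
print(evpi)  # 10.0 in this toy example
```

If the EVPI is small relative to the cost of research, collecting more data will not change the decision enough to be worthwhile — the paper's central point.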

  19. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

    Basic approach and procedures frequently used in the practice of radioactive measurements. Statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error; classification as systematic and random errors. Statistical fundamentals: probability theory, population distributions, Bernoulli, Poisson, Gauss, t-test distribution, χ² test, error propagation based on analysis of variance. Bibliography. z-table, t-test table, Poisson index, χ² test table
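As an illustration of the Poisson statistics and error propagation this course covers: a radioactive count N has standard deviation √N, and the error of a background-corrected count rate follows from propagation of variance. A minimal sketch with illustrative numbers (not from the course material):

```python
import math

def net_count_rate(gross_counts, gross_time, bkg_counts, bkg_time):
    """Net count rate and its 1-sigma error from Poisson statistics.

    For a Poisson count N the standard deviation is sqrt(N), so with
    R_net = Ng/tg - Nb/tb, propagation of variance gives
    sigma = sqrt(Ng/tg**2 + Nb/tb**2).
    """
    rate = gross_counts / gross_time - bkg_counts / bkg_time
    sigma = math.sqrt(gross_counts / gross_time**2 + bkg_counts / bkg_time**2)
    return rate, sigma

# 10000 gross counts in 100 s against 400 background counts in 100 s:
rate, sigma = net_count_rate(10000, 100.0, 400, 100.0)
print(rate, sigma)  # 96.0 counts/s, ~1.02 counts/s
```

Note that the background measurement adds to the variance even though it is subtracted from the rate: uncertainties of independent terms add in quadrature regardless of sign.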

  20. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction: Poor teamwork and communication between healthcare staff are correlated with patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. … and characteristics of verbal communication errors such as handover errors and errors during teamwork. Results: Raters found descriptions of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between …

  1. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and that these examples remain available and retrievable individually. I then review studies of clinical reasoning based on these theories and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions encouraging the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.

  2. Research trend on human error reduction

    International Nuclear Information System (INIS)

    Miyaoka, Sadaoki

    1990-01-01

    Human error has been a problem in all industries. In 1988, the Bureau of Mines, Department of the Interior, USA, carried out a worldwide survey on human error in all industries in relation to fatal accidents in mines. The results differed according to the methods of collecting data, but the proportion of total accidents attributable to human error ranged widely, from 20% to 85%, averaging 35%. The rate of occurrence of accidents and troubles in Japanese nuclear power stations is shown; the rate of occurrence of human error is 0∼0.5 cases/reactor-year and has varied little. The proportion of the total attributable to human error has therefore tended to increase, and reducing human error has become important for lowering the rate of occurrence of accidents and troubles hereafter. After the TMI accident in 1979 in the USA, research on the man-machine interface became active, and after the Chernobyl accident in 1986 in the USSR, the problems of organization and management have been studied. In Japan, 'Safety 21' was drawn up by the Advisory Committee for Energy, and the annual reports on nuclear safety also pointed out the importance of human factors. The state of research on human factors in Japan and abroad and three targets for reducing human error are reported. (K.I.)

  3. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far, which include the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS), as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.

  4. Error factors in the measurement of infant mortality

    Directory of Open Access Journals (Sweden)

    Ruy Laurenti

    1975-12-01

    Full Text Available Among the traditionally used health indicators, infant mortality stands out as one of the most important. It is frequently used by public health professionals to characterize health levels and to evaluate programmes. There are, however, several error factors that affect its value, among them: the definition of a live birth and its application in practice; under-registration of deaths and births; registration of deaths by place of occurrence; the definition of a live birth in the year; and misstatement of age. There are also qualitative errors, chiefly wrong statements of the cause of death. Several of these factors were measured for São Paulo.

  5. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  6. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves, which were fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment payback time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
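The propagation described in these two records can be sketched to first order as ΔP ≈ |dP/dv|·Δv. The toy cubic power curve below is illustrative only (the authors fit 28 manufacturer curves by Lagrange's method, and their 5% figure reflects averaging over real curves and measured wind data; in the purely cubic region a relative speed error is roughly tripled instead):

```python
def power_error(power_curve, v, dv, rel_speed_err):
    """First-order propagation of a wind-speed error into power output.

    Approximates dP/dv with a central difference on the power curve
    and returns (P, deltaP).
    """
    dPdv = (power_curve(v + dv) - power_curve(v - dv)) / (2 * dv)
    delta_v = rel_speed_err * v
    return power_curve(v), abs(dPdv) * delta_v

def curve(v, rated_kw=2000.0, cut_in=3.0, rated_v=12.0):
    """Toy power curve: zero below cut-in, cubic rise to rated power."""
    if v < cut_in:
        return 0.0
    if v > rated_v:
        return rated_kw
    return rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3

# 10% wind-speed error at 8 m/s on this hypothetical turbine:
P, dP = power_error(curve, 8.0, 0.01, 0.10)
print(P, dP, dP / P)
```

Real power curves flatten toward rated speed, where dP/dv shrinks, which is why averaging over a measured wind distribution can yield a smaller relative power error than the cubic region alone suggests.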

  7. A qualitative description of human error

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1992-11-01

    Human error contributes significantly to the risk of reactor operation. Insights and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, its mechanism of generation, its classification, and human performance influencing factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event is focused on the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operations are divided into time-limited and time-open. The HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action. A human erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influencing factors, including personal, organizational, and environmental factors, are also listed.

  8. A qualitative description of human error

    Energy Technology Data Exchange (ETDEWEB)

    Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy

    1992-11-01

    Human error contributes significantly to the risk of reactor operation. Insights and analytical models are the main parts of human reliability analysis, which covers the concept of human error, its nature, its mechanism of generation, its classification, and human performance influencing factors. For an operating reactor, human error is defined as a task-human-machine mismatch. A human error event is focused on the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operations are divided into time-limited and time-open. The HCR (human cognitive reliability) model is suited only to the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making, and action. A human erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influencing factors, including personal, organizational, and environmental factors, are also listed.

  9. Error due to unresolved scales in estimation problems for atmospheric data assimilation

    Science.gov (United States)

    Janjic, Tijana

    The error arising due to unresolved scales in data assimilation procedures is examined. The problem of estimating the projection of the state of a passive scalar undergoing advection at a sequence of times is considered. The projection belongs to a finite- dimensional function space and is defined on the continuum. Using the continuum projection of the state of a passive scalar, a mathematical definition is obtained for the error arising due to the presence, in the continuum system, of scales unresolved by the discrete dynamical model. This error affects the estimation procedure through point observations that include the unresolved scales. In this work, two approximate methods for taking into account the error due to unresolved scales and the resulting correlations are developed and employed in the estimation procedure. The resulting formulas resemble the Schmidt-Kalman filter and the usual discrete Kalman filter, respectively. For this reason, the newly developed filters are called the Schmidt-Kalman filter and the traditional filter. In order to test the assimilation methods, a two- dimensional advection model with nonstationary spectrum was developed for passive scalar transport in the atmosphere. An analytical solution on the sphere was found depicting the model dynamics evolution. Using this analytical solution the model error is avoided, and the error due to unresolved scales is the only error left in the estimation problem. It is demonstrated that the traditional and the Schmidt- Kalman filter work well provided the exact covariance function of the unresolved scales is known. However, this requirement is not satisfied in practice, and the covariance function must be modeled. The Schmidt-Kalman filter cannot be computed in practice without further approximations. Therefore, the traditional filter is better suited for practical use. Also, the traditional filter does not require modeling of the full covariance function of the unresolved scales, but only

  10. Automated volume of interest delineation and rendering of cone beam CT images in interventional cardiology

    Science.gov (United States)

    Lorenz, Cristian; Schäfer, Dirk; Eshuis, Peter; Carroll, John; Grass, Michael

    2012-02-01

    Interventional C-arm systems allow the efficient acquisition of 3D cone beam CT images. They can be used for intervention planning, navigation, and outcome assessment. We present a fast and completely automated volume of interest (VOI) delineation for cardiac interventions, covering the whole visceral cavity including the mediastinum and lungs but leaving out the rib cage and spine. The problem is addressed in a model-based approach. The procedure has been evaluated on 22 patient cases and achieves an average surface error below 2 mm. The method is able to cope with varying image intensities, varying truncations due to the limited reconstruction volume, and partially with heavy metal and motion artifacts.

  11. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems which are designed for this purpose are suffering from harming interaction with their surrounding environment or inaccuracy in control forces. Engineering different methods to combat errors in quantum devices are highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods. A realistic formulation is the one that incorporates experimental challenges. This thesis is presented in two sections of open quantum system and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on geometric characterization of positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of Markovian regime, A new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section of quantum error correction is presented in three chapters of 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). 
Chapter 6 is devoted to a theory of quantum error correction (QEC) …

  12. Naming game with learning errors in communications

    OpenAIRE

    Lou, Yang; Chen, Guanrong

    2014-01-01

    Naming game simulates the process of naming an object by a population of agents organized in a certain communication network topology. By pair-wise iterative interactions, the population reaches a consensus state asymptotically. In this paper, we study the naming game with communication errors during pair-wise conversations, where errors are represented by error rates in a uniform probability distribution. First, a model of naming game with learning errors in communications (NGLE) is proposed. …

  13. Error Analysis in a Written Composition

    Directory of Open Access Journals (Sweden)

    David Alberto Londoño Vásquez

    2008-12-01

    Full Text Available Learners make errors in both comprehension and production. Some theoreticians have pointed out the difficulty of assigning the cause of failures in comprehension to an inadequate knowledge of a particular syntactic feature of a misunderstood utterance. Indeed, an error can be defined as a deviation from the norms of the target language. In this investigation, based on personal and professional experience, a written composition entitled "My Life in Colombia" is analyzed using clinical elicitation (CE) research. CE involves getting the informant to produce data of any sort, for example, by means of a general interview or by asking the learner to write a composition. Some errors produced by a foreign language learner in her acquisition process are analyzed and their possible sources identified. Finally, four kinds of errors are classified: omission, addition, misinformation, and misordering.

  14. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    Full Text Available It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
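To make the column's point concrete, assume the common xenograft convention V = (π/6)·l·w² (an assumption here; the column's exact formula may differ, e.g. V = l·w²/2). Because the width enters squared, its measurement error counts twice in first-order error propagation:

```python
import math

def tumor_volume(length_mm, width_mm):
    """Ellipsoid-style xenograft approximation V = (pi/6) * l * w^2.

    This convention is widely used in the xenograft literature and is
    assumed here for illustration.
    """
    return math.pi / 6 * length_mm * width_mm ** 2

def volume_rel_error(l, w, dl, dw):
    """First-order propagation for V ~ l * w^2:
        sigma_V / V = sqrt((dl/l)^2 + (2*dw/w)^2)
    The width term is doubled because width appears squared.
    """
    return math.sqrt((dl / l) ** 2 + (2 * dw / w) ** 2)

# Hypothetical caliper readings: 10 mm x 8 mm, each with 5% error.
rel = volume_rel_error(10.0, 8.0, 0.5, 0.4)
print(rel)  # ~0.112: two 5% length errors become an ~11% volume error
```

This is the amplification the column describes: modest measurement errors, compounded through the volume formula, produce noticeably larger error bounds on the recorded tumor volumes.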

  15. Review and comparison of content growth in word definition of Persian speaking children with 4.5 to 7.5 years of age

    Directory of Open Access Journals (Sweden)

    Maryam Malekian

    2014-10-01

    Full Text Available Background and Aim: Word definition is a complicated language skill that requires education and linguistic awareness. In this study, the word definition ability of children between the ages of 4.5 and 7.5 years was compared. Methods: This cross-sectional, descriptive-analytical study included 107 girls and boys in age group 1 (54-65 months), age group 2 (66-77 months), and age group 3 (78-90 months). They were selected by multistage sampling from nurseries and primary schools in municipal districts 1, 7, and 17 of Tehran. A word definition task was performed with each subject. Reliability was assessed by two independent raters and validity was determined by content. Kruskal-Wallis and Mann-Whitney U tests were used for analysis. Results: The mean content score of word definitions increased significantly with age (p=0.001). There was no significant difference in word definition content between the second and third age groups. The most frequent response type at all ages was the functional response. With increasing age, error (p=0.002) and identical (p=0.003) responses decreased significantly, whereas the percentage of combination II responses (p<0.001) increased significantly. Conclusion: With increasing age, the quality of definitions improves in terms of content, and definitions shift from functional and concrete responses to combination II definitions.

  16. Medication errors reported to the National Medication Error Reporting System in Malaysia: a 4-year retrospective review (2009 to 2012).

    Science.gov (United States)

    Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M

    2016-12-01

    Reporting and analysing the data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken. Descriptive statistics were applied. A total of 17,357 reported MEs were reviewed. The majority of errors were from public-funded hospitals. Near misses accounted for 86.3% of the errors. The majority of errors (98.1%) had no harmful effects on the patients. Prescribing contributed more than three-quarters of the overall errors (76.1%). Pharmacists detected and reported the majority of errors (92.1%). Erroneous dosage or strength of medicine (30.75%) was the leading type of error, whilst cardiovascular drugs (25.4%) were the most common category involved. MERS provides rich information on the characteristics of reported MEs. The low contribution to reporting from healthcare facilities other than government hospitals and from non-pharmacists requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both public and private sectors needs to be formulated and strengthened. Preventive measures to minimise MEs should be directed at improving prescribing competency among the fallible prescribers identified.

  17. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization. Selecting an appropriate forecasting method also matters, but the percentage error of a method is more important for decision makers. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method yielded a percentage error of 9.77%, and the least-squares method was judged workable for time series and trend data.
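The two metrics are standard: MAD averages the absolute errors in the data's own units, while MAPE averages the absolute percentage errors and is therefore scale-free. A minimal sketch with invented numbers (not the paper's data):

```python
def mad(actual, forecast):
    """Mean Absolute Deviation: average error size, in data units."""
    return sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent.

    Scale-free, which makes it easy to compare across series, but it
    is undefined whenever an actual value is zero.
    """
    return 100.0 * sum(abs((a - f) / a)
                       for a, f in zip(actual, forecast)) / len(actual)

actual   = [100.0, 110.0, 120.0, 130.0]
forecast = [ 98.0, 112.0, 115.0, 133.0]
print(mad(actual, forecast))   # 3.0
print(mape(actual, forecast))  # ~2.57 (percent)
```

A MAPE threshold such as the paper's 9.77% only makes sense alongside the MAD, since a small percentage error on a large-valued series can still be a large absolute miss.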

  18. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1982-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEPs) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determines HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  19. Simulator data on human error probabilities

    International Nuclear Information System (INIS)

    Kozinsky, E.J.; Guttmann, H.E.

    1981-01-01

    Analysis of operator errors on NPP simulators is being used to determine Human Error Probabilities (HEPs) for task elements defined in NUREG/CR-1278. Simulator data tapes from research conducted by EPRI and ORNL are being analyzed for operator error rates. The tapes collected, using Performance Measurement System software developed for EPRI, contain a history of all operator manipulations during simulated casualties. Analysis yields a time history or Operational Sequence Diagram and a manipulation summary, both stored in computer data files. Data searches yield information on operator errors of omission and commission. This work experimentally determined HEPs for Probabilistic Risk Assessment calculations. It is the only practical experimental source of this data to date.

  20. Research Costs Investigated: A Study Into the Budgets of Dutch Publicly Funded Drug-Related Research

    NARCIS (Netherlands)

    T. van Asselt (Thea); B.L.T. Ramaekers (Bram); I. Corro Ramos (Isaac); M.A. Joore (Manuela); M.J. Al (Maiwenn); Lesman-Leegte, I. (Ivonne); M.J. Postma (Maarten); P. Vemer (Pepijn); T.L. Feenstra (Talitha)

    2017-01-01

    Background: The costs of performing research are an important input in value of information (VOI) analyses but are difficult to assess. Objective: The aim of this study was to investigate the costs of research, serving two purposes: (1) estimating research costs for use in VOI analyses;

  1. The definite article in Romance expletives and long weak definites

    Directory of Open Access Journals (Sweden)

    M.Teresa Espinal

    2017-03-01

    Full Text Available This paper focuses on some issues involving expletive articles and long weak definites in Romance (mainly Spanish, Brazilian Portuguese and Catalan), in comparison to DPs that elicit a strong reading. We show the similarities between expletive definites and long weak definites, and we argue for an analysis common to other polarity items in terms of polarity sensitivity. We reach the conclusion that the definite article in Romance comes in two variants: the referentially unique variant (to be translated as the semantic 'iota' operator) and the polar variant, formally characterized with an abstract [+σ] feature, that encodes a weak bound reading (to be semantically translated by an existential operator).

  2. Radiology errors: are we learning from our mistakes?

    International Nuclear Information System (INIS)

    Mankad, K.; Hoey, E.T.D.; Jones, J.B.; Tirukonda, P.; Smith, J.T.

    2009-01-01

    Aim: To question practising radiologists and radiology trainees at a large international meeting in an attempt to survey individuals about error reporting. Materials and methods: Radiologists attending the 2007 Radiological Society of North America (RSNA) annual meeting were approached to fill in a written questionnaire. Participants were questioned as to their grade, country in which they practised, and subspecialty interest. They were asked whether they kept a personal log of their errors (with an error defined as 'a mistake that has management implications for the patient'), how many errors they had made in the preceding 12 months, and the types of errors that had occurred. They were also asked whether their local department held regular discrepancy/errors meetings, how many they had attended in the preceding 12 months, and the perceived atmosphere at these meetings (on a qualitative scale). Results: A total of 301 radiologists with a wide range of specialty interests from 32 countries agreed to take part. One hundred and sixty-six of 301 (55%) of responders were consultant/attending grade. One hundred and thirty-five of 301 (45%) were residents/fellows. Fifty-nine of 301 (20%) of responders kept a personal record of their errors. The number of errors made per person per year ranged from none (2%) to 16 or more (7%). The majority (91%) reported making between one and 15 errors/year. Overcalls (40%), under-calls (25%), and interpretation error (15%) were the predominant error types. One hundred and seventy-eight of 301 (59%) of participants stated that their department held regular errors meeting. One hundred and twenty-seven of 301 (42%) had attended three or more meetings in the preceding year. The majority (55%) who had attended errors meetings described the atmosphere as 'educational.' Only a small minority (2%) described the atmosphere as 'poor' meaning non-educational and/or blameful. Conclusion: Despite the undeniable importance of learning from errors

  3. CORRECTING ERRORS: THE RELATIVE EFFICACY OF DIFFERENT FORMS OF ERROR FEEDBACK IN SECOND LANGUAGE WRITING

    Directory of Open Access Journals (Sweden)

    Chitra Jayathilake

    2013-01-01

    Full Text Available Error correction in ESL (English as a Second Language) classes has been a focal phenomenon in SLA (Second Language Acquisition) research due to some controversial research results and diverse feedback practices. This paper presents a study which explored the relative efficacy of three forms of error correction employed in ESL writing classes: focusing on the acquisition of one grammar element in both immediate and delayed language contexts, and collecting data from university undergraduates, the study employed an experimental research design with a pretest-treatment-posttests structure. The research revealed that the degree of success in acquiring L2 (Second Language) grammar through error correction differs according to the form of the correction and to learning contexts. While the findings are discussed in relation to the previous literature, the paper concludes by proposing a cline of error correction forms to be promoted in Sri Lankan L2 writing contexts, particularly in ESL contexts in universities.

  4. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  5. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22·6% and 8·2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16·9% and 4·4% during paper-based and electronic prescribing, respectively. Electronic prescribing had a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistan public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.
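    The error rates quoted in this abstract follow directly from the raw counts it gives, as a quick recomputation shows:

```python
# Recomputing the prescribing error rates reported in the abstract from
# the raw counts: (errors, prescriptions) per setting and system.
inpatient = {"paper": (3008, 13328), "electronic": (1147, 14064)}
discharge = {"paper": (418, 2480), "electronic": (123, 2790)}

def rate(errors, prescriptions):
    """Percentage error rate, rounded to one decimal place."""
    return round(100 * errors / prescriptions, 1)

print([rate(*inpatient[k]) for k in ("paper", "electronic")])   # → [22.6, 8.2]
print([rate(*discharge[k]) for k in ("paper", "electronic")])   # → [16.9, 4.4]
```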

  6. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Full Text Available Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help to make some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  7. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  8. Error and discrepancy in radiology: inevitable or avoidable?

    Science.gov (United States)

    Brady, Adrian P

    2017-02-01

    Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised. • Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate to negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.

  9. Teacher knowledge of error analysis in differential calculus

    Directory of Open Access Journals (Sweden)

    Eunice K. Moru

    2014-12-01

    Full Text Available The study investigated teacher knowledge of error analysis in differential calculus. Two teachers were the sample of the study: one a subject specialist and the other a mathematics education specialist. Questionnaires and interviews were used for data collection. The findings of the study reflect that the teachers’ knowledge of error analysis was characterised by the following assertions, which are backed up with some evidence: (1) teachers identified the errors correctly, (2) the generalised error identification resulted in opaque analysis, (3) some of the identified errors were not interpreted from multiple perspectives, (4) teachers’ evaluation of errors was either local or global and (5) in remedying errors, accuracy and efficiency were emphasised more than conceptual understanding. The implications of the findings of the study for teaching include engaging in error analysis continuously, as this is one way of improving knowledge for teaching.

  10. Collection of offshore human error probability data

    International Nuclear Information System (INIS)

    Basra, Gurpreet; Kirwan, Barry

    1998-01-01

    Accidents such as Piper Alpha have increased concern about the effects of human errors in complex systems. Such accidents can in theory be predicted and prevented by risk assessment, and in particular human reliability assessment (HRA), but HRA ideally requires qualitative and quantitative human error data. A research initiative at the University of Birmingham led to the development of CORE-DATA, a Computerised Human Error Data Base. This system currently contains a reasonably large number of human error data points, collected from a variety of mainly nuclear-power related sources. This article outlines a recent offshore data collection study, concerned with collecting lifeboat evacuation data. Data collection methods are outlined and a selection of human error probabilities generated as a result of the study are provided. These data give insights into the type of errors and human failure rates that could be utilised to support offshore risk analyses

  11. Image-derived input function obtained in a 3TMR-brainPET

    Energy Technology Data Exchange (ETDEWEB)

    Silva, N.A. da [Institute of Biophysics and Biomedical Engineering, University of Lisbon (Portugal); Institute of Neurosciences and Medicine - 4, Juelich (Germany); Herzog, H., E-mail: h.herzog@fz-juelich.de [Institute of Neurosciences and Medicine - 4, Juelich (Germany); Weirich, C.; Tellmann, L.; Rota Kops, E. [Institute of Neurosciences and Medicine - 4, Juelich (Germany); Hautzel, H. [Department of Nuclear Medicine (KME), University of Duesseldorf, Medical Faculty at Research Center Juelich, Juelich (Germany); Almeida, P. [Institute of Biophysics and Biomedical Engineering, University of Lisbon (Portugal)

    2013-02-21

    Aim: The combination of a high-resolution MR-compatible BrainPET insert operated within a 3 T MAGNETOM Trio MR scanner is an excellent tool for obtaining an image-derived input function (IDIF), due to simultaneous imaging. In this work, we explore the possibility of obtaining an IDIF from volumes of interest (VOI) defined over the carotid arteries (CAs) using the MR data. Material and methods: FDG data from three patients without brain disorders were included. VOIs were drawn bilaterally over the CAs on a MPRAGE image using a 50% isocontour (MR50VOI). CA PET/MR co-registration was examined based on an individual and combined CA co-registration. After that, to estimate the IDIF, the MR50VOI average (IDIF-A), the four hottest pixels per plane (IDIF-4H) and the four hottest pixels in the VOI (IDIF-4V) were considered. A model-based correction for residual partial volume effects involving venous blood samples was applied, from which partial volume (PV) and spillover (SP) coefficients were estimated. Additionally, a theoretical PV coefficient (PVt) was calculated based on the MR50VOI. Results: The results show an excellent co-registration between MR and PET, with an area-under-the-curve ratio between both co-registration methods of 1.00±0.04. A good agreement between PV and PVt was found for IDIF-A, with a PV of 0.39±0.06 and PVt of 0.40±0.03, and for IDIF-4H, with a PV of 0.47±0.05 and PVt of 0.47±0.03. The SPs were 0.20±0.03 and 0.21±0.03 for IDIF-A and IDIF-4H, respectively. Conclusion: The integration of a high-resolution BrainPET in an MR scanner allows an IDIF to be obtained from an MR-based VOI. The IDIF must, however, be corrected for a residual partial volume effect.
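    The two VOI-based estimators described above (the 50% isocontour average and the hottest voxels per plane) can be sketched in a few lines. This is an invented toy volume, not the authors' code or data:

```python
import random

# Illustrative sketch (not the authors' code): build a 50% isocontour VOI
# over a small synthetic "PET" volume and compare the VOI-average value
# (IDIF-A analogue) with the mean of the four hottest voxels per plane
# (IDIF-4H analogue). All data here are invented.

random.seed(0)
PLANES, SIZE = 3, 8
vol = [[[random.random() for _ in range(SIZE)] for _ in range(SIZE)]
       for _ in range(PLANES)]
for plane in vol:                       # paint a hot 2x2 "carotid-like" core
    for y in (3, 4):
        for x in (3, 4):
            plane[y][x] += 2.0

flat = [v for plane in vol for row in plane for v in row]
threshold = 0.5 * max(flat)             # 50% isocontour (MR50VOI analogue)
voi_values = [v for v in flat if v >= threshold]
idif_a = sum(voi_values) / len(voi_values)

top4 = []                               # four hottest voxels in each plane
for plane in vol:
    top4.extend(sorted(v for row in plane for v in row)[-4:])
idif_4h = sum(top4) / len(top4)

# In this toy volume the 50% VOI is exactly the per-plane hot core,
# so the two estimators agree.
print(len(voi_values), abs(idif_a - idif_4h) < 1e-9)   # → 12 True
```

    In real data the two would differ: the hottest-voxel estimate suffers less from partial volume averaging, which is why the abstract reports a higher PV coefficient for IDIF-4H (0.47) than for IDIF-A (0.39).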

  12. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  13. Goal Definition

    DEFF Research Database (Denmark)

    Bjørn, Anders; Laurent, Alexis; Owsianiak, Mikołaj

    2018-01-01

    The goal definition is the first phase of an LCA and determines the purpose of a study in detail. This chapter teaches how to perform the six aspects of a goal definition: (1) Intended applications of the results, (2) Limitations due to methodological choices, (3) Decision context and reasons for carrying out the study, (4) Target audience, (5) Comparative studies to be disclosed to the public and (6) Commissioner of the study and other influential actors. The instructions address both the conduct and reporting of a goal definition and are largely based on the ILCD guidance document (EC...

  14. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  15. Valuing Errors for Learning: Espouse or Enact?

    Science.gov (United States)

    Grohnert, Therese; Meuwissen, Roger H. G.; Gijselaers, Wim H.

    2017-01-01

    Purpose: This study aims to investigate how organisations can discourage covering up and instead encourage learning from errors through a supportive learning from error climate. In explaining professionals' learning from error behaviour, this study distinguishes between espoused (verbally expressed) and enacted (behaviourally expressed) values…

  16. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    Science.gov (United States)

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of IT-based medication error prevention system construction and the relationships among system construction, medication error management climate and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of systems (electric pharmacopoeia, electric drug dosage calculation system, computer-based patient safety reporting and bar-code system) and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electric pharmacopoeias were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electric drug dosage calculation systems were in use in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with greater safety and a more positive error management climate. Supportive strategies for improving the perception of IT-based systems would encourage further system construction, and a positive error management climate would be more easily promoted.

  17. Medication errors : the impact of prescribing and transcribing errors on preventable harm in hospitalised patients

    NARCIS (Netherlands)

    van Doormaal, J.E.; van der Bemt, P.M.L.A.; Mol, P.G.M.; Egberts, A.C.G.; Haaijer-Ruskamp, F.M.; Kosterink, J.G.W.; Zaal, Rianne J.

    Background: Medication errors (MEs) affect patient safety to a significant extent. Because these errors can lead to preventable adverse drug events (pADEs), it is important to know what type of ME is the most prevalent cause of these pADEs. This study determined the impact of the various types of

  18. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Full Text Available Abstract Background: Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods: Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results: Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong time errors, ten of them simultaneous with another type of error, resulting in an error rate without wrong time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion: Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.
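    The two error rates in the abstract can be recomputed directly from the reported counts, including the adjustment for wrong time errors that co-occurred with another error type:

```python
# Recomputing the administration error rates from the counts in the abstract.
opportunities = 1501
admins_with_error = 415          # administrations with >= 1 error (430 errors)
wrong_time = 312                 # wrong time errors
overlap = 10                     # wrong time errors co-occurring with another type

overall = 100 * admins_with_error / opportunities
print(round(overall, 1))         # → 27.6

# Excluding pure wrong time errors: 415 - (312 - 10) = 113 administrations
without_wt = admins_with_error - (wrong_time - overlap)
print(without_wt, round(100 * without_wt / opportunities, 1))   # → 113 7.5
```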

  19. Human error theory: relevance to nurse management.

    Science.gov (United States)

    Armitage, Gerry

    2009-03-01

    Describe, discuss and critically appraise human error theory and consider its relevance for nurse managers. Healthcare errors are a persistent threat to patient safety. Effective risk management and clinical governance depend on understanding the nature of error. This paper draws upon a wide literature from published works, largely from the fields of cognitive psychology and human factors. Although the content of this paper is pertinent to any healthcare professional, it is written primarily for nurse managers. Error is inevitable. Causation is often attributed to individuals, yet causation in complex environments such as healthcare is predominantly multi-factorial. Individual performance is affected by the tendency to develop prepacked solutions and attention deficits, which can in turn be related to local conditions and systems or latent failures. Blame is often inappropriate. Defences should be constructed in the light of these considerations and to promote error wisdom and organizational resilience. Managing and learning from error is seen as a priority in the British National Health Service (NHS); this can be better achieved with an understanding of the roots, nature and consequences of error. Such an understanding can provide a helpful framework for a range of risk management activities.

  20. 16 CFR 316.2 - Definitions.

    Science.gov (United States)

    2010-01-01

    16 Commercial Practices 1 2010-01-01 false Definitions. § 316.2 Definitions. (a) The definition of the term “affirmative consent” is the same as the definition of that term... (i) The definition of the term... or general partnership, corporation, or other business entity.

  1. Chernobyl - system accident or human error?

    International Nuclear Information System (INIS)

    Stang, E.

    1996-01-01

    Did human error cause the Chernobyl disaster? The standard point of view is that operator error was the root cause of the disaster. This was also the view of the Soviet Accident Commission. The paper analyses the operator errors at Chernobyl in a system context. The reactor operators committed errors that depended on many other failures that made up a complex accident scenario. The analysis is based on Charles Perrow's analysis of technological disasters. Failure possibility is an inherent property of high-risk industrial installations. The Chernobyl accident consisted of a chain of events that were both extremely improbable and difficult to predict. It is not reasonable to put the blame for the disaster on the operators. (author)

  2. Invasive urodynamic testing prior to surgical treatment for stress urinary incontinence in women: cost-effectiveness and value of information analyses in the context of a mixed methods feasibility study.

    Science.gov (United States)

    Homer, Tara; Shen, Jing; Vale, Luke; McColl, Elaine; Tincello, Douglas G; Hilton, Paul

    2018-01-01

    INVESTIGATE-I (INVasive Evaluation before Surgical Treatment of Incontinence Gives Added Therapeutic Effect?) was a mixed methods study to assess the feasibility of a future randomised controlled trial of invasive urodynamic testing (IUT) prior to surgery for stress urinary incontinence (SUI) in women. Here we report one of the study's five components, with the specific objectives of (i) exploring the cost-effectiveness of IUT compared with clinical assessment plus non-invasive tests (henceforth described as 'IUT' and 'no IUT' respectively) in women with SUI or stress-predominant mixed urinary incontinence (MUI) prior to surgery, and (ii) determining the expected net gain (ENG) from additional research. Study participants were women with SUI or stress-predominant MUI who had failed to respond to conservative treatments, recruited from seven UK urogynaecology and female urology units. They were randomised to receive either 'IUT' or 'no IUT' before undergoing further treatment. Data from 218 women were used in the economic analysis. Cost utility, net benefit and value of information (VoI) analyses were performed within a randomised controlled pilot trial. Costs and quality-adjusted life years (QALYs) were estimated over 6 months to determine the incremental cost per QALY of 'IUT' compared to 'no IUT'. Net monetary benefit informed the VoI analysis. The VoI estimated the ENG and optimal sample size for a future definitive trial. At 6 months, the mean difference in total average cost was £138 (p = 0.071) in favour of 'IUT'; there was no difference in QALYs estimated from the SF-12 (difference 0.004; p = 0.425) and EQ-5D-3L (difference −0.004; p = 0.725); therefore, the probability of IUT being cost-effective remains uncertain. The estimated ENG was positive for further research to address this uncertainty, with an optimal sample size of 404 women. This is the largest economic evaluation of IUT. On average, up to 6 months after treatment, 'IUT' may
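    The net monetary benefit framework underlying the analysis is simple to sketch. The cost and QALY differences below come from the abstract; the willingness-to-pay threshold is an assumed value (a common UK figure), not one reported by the study:

```python
# Sketch of the incremental net monetary benefit (NMB) calculation of the
# kind underlying the trial's net benefit / VoI analysis. The cost and
# QALY differences are taken from the abstract; the willingness-to-pay
# threshold (wtp) is an assumed value, not one reported by the study.

wtp = 20_000                    # GBP per QALY, assumed threshold

delta_cost = -138.0             # 'IUT' was on average 138 GBP cheaper
delta_qaly = -0.004             # EQ-5D-3L QALY difference ('IUT' - 'no IUT')

# incremental NMB = wtp * delta_QALY - delta_cost
inmb = wtp * delta_qaly - delta_cost
print(round(inmb, 2))           # positive favours 'IUT' at this threshold
```

    A small positive incremental NMB at this assumed threshold is consistent with the abstract: 'IUT' was marginally cheaper with essentially unchanged QALYs, but since neither difference reached significance, the probability of cost-effectiveness remains uncertain, which is what motivates the VoI estimate of a further 404-woman trial.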

  3. TU-CD-BRA-04: Evaluation of An Atlas-Based Segmentation Method for Prostate and Peripheral Zone Regions On MRI

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, AS; Piper, J; Curry, K; Swallen, A [MIM Software Inc., Cleveland, OH (United States); Padgett, K; Pollack, A; Stoyanova, RS [University of Miami, Miami, FL (United States)

    2015-06-15

    to provide significant time savings for prostate VOI definition. AS Nelson and J Piper are partial owners of MIM Software, Inc. AS Nelson, J Piper, K Curry, and A Swallen are current employees at MIM Software, Inc.

  4. List of Error-Prone Abbreviations, Symbols, and Dose Designations

    Science.gov (United States)

    ... abbreviations, symbols, and dose designations which have been reported through the ISMP National Medication Errors Reporting Program (ISMP MERP) as being frequently misinterpreted ...

  5. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper-based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARX℠ reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, we describe the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP. New insights and findings were revealed, including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.
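    One of the simpler findings mentioned, the distribution of error incidents by day of the week, takes only a few lines once each report carries a date field. The records below are invented, not MEDMARX data:

```python
from collections import Counter
from datetime import date

# Illustrative sketch: tallying error reports by weekday, as in the
# day-of-week finding mentioned above. Records are invented, not MEDMARX data.

reports = [
    {"id": 1, "date": date(2004, 3, 1)},   # a Monday
    {"id": 2, "date": date(2004, 3, 1)},
    {"id": 3, "date": date(2004, 3, 3)},   # a Wednesday
    {"id": 4, "date": date(2004, 3, 6)},   # a Saturday
]

by_day = Counter(r["date"].strftime("%A") for r in reports)
print(by_day.most_common(1))   # → [('Monday', 2)]
```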

  6. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement for the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may be different, depending on the user's purposes, when an error takes place, and possible error handling options that can be specified by the user are also noted in the work.
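    The core rule described, skip downstream subtasks whose inputs a failed tool has invalidated while letting independent branches and the workflow as a whole finish normally, can be sketched with a minimal invented runner (this is not the integration platform discussed in the paper):

```python
# Minimal illustrative sketch of the error-handling rule described above:
# if a tool fails, steps that depend on its (now invalid) output are
# skipped, independent branches still run, and the workflow terminates
# normally. This runner is invented for illustration only.

def run_workflow(steps):
    """steps: list of (name, func, dependency-names); func takes dep results."""
    results, failed = {}, set()
    for name, func, deps in steps:
        if any(d in failed for d in deps):   # upstream error: skip, propagate
            failed.add(name)
            continue
        try:
            results[name] = func(*(results[d] for d in deps))
        except Exception:                    # tool error: mark it, keep going
            failed.add(name)
    return results, failed

steps = [
    ("mesh",   lambda: 100,            []),
    ("solve",  lambda m: 1 / 0,        ["mesh"]),    # this tool fails
    ("post",   lambda s: s * 2,        ["solve"]),   # skipped: input invalid
    ("report", lambda m: f"cells={m}", ["mesh"]),    # independent branch runs
]
results, failed = run_workflow(steps)
print(sorted(failed))        # → ['post', 'solve']
print(results["report"])     # → cells=100
```

    Note how the failure of "solve" poisons only its own downstream chain; the "report" branch, which depends only on "mesh", completes, and the runner returns instead of aborting the whole workflow.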

  7. Research Costs Investigated : A Study Into the Budgets of Dutch Publicly Funded Drug-Related Research

    NARCIS (Netherlands)

    van Asselt, Thea; Ramaekers, Bram; Corro Ramos, Isaac; Joore, Manuela; Al, Maiwenn; Lesman-Leegte, Ivonne; Postma, Maarten; Vemer, Pepijn; Feenstra, Talitha

    BACKGROUND: The costs of performing research are an important input in value of information (VOI) analyses but are difficult to assess. OBJECTIVE: The aim of this study was to investigate the costs of research, serving two purposes: (1) estimating research costs for use in VOI analyses; and (2)

  8. Barriers to medication error reporting among hospital nurses.

    Science.gov (United States)

    Rutledge, Dana N; Retrosi, Tina; Ostrowski, Gary

    2018-03-01

    The study purpose was to report medication error reporting barriers among hospital nurses, and to determine validity and reliability of an existing medication error reporting barriers questionnaire. Hospital medication errors typically occur between ordering of a medication to its receipt by the patient with subsequent staff monitoring. To decrease medication errors, factors surrounding medication errors must be understood; this requires reporting by employees. Under-reporting can compromise patient safety by disabling improvement efforts. This 2017 descriptive study was part of a larger workforce engagement study at a faith-based Magnet ® -accredited community hospital in California (United States). Registered nurses (~1,000) were invited to participate in the online survey via email. Reported here are sample demographics (n = 357) and responses to the 20-item medication error reporting barriers questionnaire. Using factor analysis, four factors that accounted for 67.5% of the variance were extracted. These factors (subscales) were labelled Fear, Cultural Barriers, Lack of Knowledge/Feedback and Practical/Utility Barriers; each demonstrated excellent internal consistency. The medication error reporting barriers questionnaire, originally developed in long-term care, demonstrated good validity and excellent reliability among hospital nurses. Substantial proportions of American hospital nurses (11%-48%) considered specific factors as likely reporting barriers. Average scores on most barrier items were categorised "somewhat unlikely." The highest six included two barriers concerning the time-consuming nature of medication error reporting and four related to nurses' fear of repercussions. Hospitals need to determine the presence of perceived barriers among nurses using questionnaires such as the medication error reporting barriers and work to encourage better reporting. Barriers to medication error reporting make it less likely that nurses will report medication

  9. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  10. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into nanoscale regime, it is impossible to guarantee the perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...

  11. Can human error theory explain non-adherence?

    Science.gov (United States)

    Barber, Nick; Safdar, A; Franklin, Bryoney D

    2005-08-01

    To apply human error theory to explain non-adherence and examine how well it fits. Patients who were taking chronic medication were telephoned and asked whether they had been adhering to their medicine; if not, the reasons were explored and analysed according to human error theory. Of 105 patients, 87 were contacted by telephone and took part in the study. Forty-two recalled being non-adherent, 17 of them in the last 7 days; 11 of the 42 were intentionally non-adherent. The errors could be described by human error theory, and it explained unintentional non-adherence well; however, the application of 'rules' was difficult when considering mistakes. The consideration of error-producing conditions and latent failures also revealed useful contributing factors. Human error theory offers a new and valuable way of understanding non-adherence, and could inform interventions. However, the theory needs further development to explain intentional non-adherence.

  12. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
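The abstract's central point, that errors introduced inside the stages of an explicit Runge--Kutta step propagate into the solution, can be illustrated with a minimal sketch: classical RK4 on y' = -y, with an artificial perturbation `stage_eps` injected into every stage derivative as a stand-in for roundoff or algebraic solver error. This is an illustration of the phenomenon, not the paper's analysis.

```python
def rk4_solve(f, y0, t0, t1, n, stage_eps=0.0):
    """Classical RK4. stage_eps is added to every stage derivative to
    mimic internal errors (roundoff, solver error) inside each step."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y) + stage_eps
        k2 = f(t + h / 2, y + h / 2 * k1) + stage_eps
        k3 = f(t + h / 2, y + h / 2 * k2) + stage_eps
        k4 = f(t + h, y + h * k3) + stage_eps
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

f = lambda t, y: -y  # y' = -y, exact solution e^(-t)
clean = rk4_solve(f, 1.0, 0.0, 1.0, 100)
perturbed = rk4_solve(f, 1.0, 0.0, 1.0, 100, stage_eps=1e-8)
drift = abs(perturbed - clean)  # accumulated effect of internal errors
```

For this well-behaved problem the per-step stage errors accumulate to roughly h·eps per step; the paper's point is that for other methods and implementations the amplification constant can be vastly larger.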

  13. Categorizing errors and adverse events for learning: a provider perspective.

    Science.gov (United States)

    Ginsburg, Liane R; Chuang, You-Ta; Richardson, Julia; Norton, Peter G; Berta, Whitney; Tregunno, Deborah; Ng, Peggy

    2009-01-01

    There is little agreement in the literature as to what types of patient safety events (PSEs) should be the focus for learning, change and improvement, and we lack clear and universally accepted definitions of error. In particular, the way front-line providers or managers understand and categorize different types of errors, adverse events and near misses and the kinds of events this audience believes to be valuable for learning are not well understood. Focus groups of front-line providers, managers and patient safety officers were used to explore how people in healthcare organizations understand and categorize different types of PSEs in the context of bringing about learning from such events. A typology of PSEs was developed from the focus group data and then mailed, along with a short questionnaire, to focus group participants for member checking and validation. Four themes emerged from our data: (1) incidence study categories are problematic for those working in organizations; (2) preventable events should be the focus for learning; (3) near misses are an important but complex category, differentiated based on harm potential and proximity to patients; (4) staff disagree on whether events causing severe harm or events with harm potential are most valuable for learning. A typology of PSEs based on these themes and checked by focus group participants indicates that staff and their managers divide events into simple categories of minor and major events, which are differentiated based on harm or harm potential. Confusion surrounding patient safety terminology detracts from the abilities of providers to talk about and reflect on a range of PSEs, and from opportunities to enhance learning, reduce event reoccurrence and improve patient safety at the point of care.

  14. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering both the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. The survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and both the level of employee training and its effect on actual work were in good condition. Also, employees answered that the root causes of human error lay in the working environment, including tight processes, manpower shortages, and excessive workload, rather than in personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey, the best methods of preventing human error are personal equipment, substantial training and education, private mental health checks before starting work, prohibition of performing multiple tasks, compliance with procedures, and enhancement of job site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers, and between employees and bosses.

  15. Everyday memory errors in older adults.

    Science.gov (United States)

    Ossher, Lynn; Flegal, Kristin E; Lustig, Cindy

    2013-01-01

    Despite concern about cognitive decline in old age, few studies document the types and frequency of memory errors older adults make in everyday life. In the present study, 105 healthy older adults completed the Everyday Memory Questionnaire (EMQ; Sunderland, Harris, & Baddeley, 1983 , Journal of Verbal Learning and Verbal Behavior, 22, 341), indicating what memory errors they had experienced in the last 24 hours, the Memory Self-Efficacy Questionnaire (MSEQ; West, Thorn, & Bagwell, 2003 , Psychology and Aging, 18, 111), and other neuropsychological and cognitive tasks. EMQ and MSEQ scores were unrelated and made separate contributions to variance on the Mini Mental State Exam (MMSE; Folstein, Folstein, & McHugh, 1975 , Journal of Psychiatric Research, 12, 189), suggesting separate constructs. Tip-of-the-tongue errors were the most commonly reported, and the EMQ Faces/Places and New Things subscales were most strongly related to MMSE. These findings may help training programs target memory errors commonly experienced by older adults, and suggest which types of memory errors could indicate cognitive declines of clinical concern.

  16. Quantification of 18F-FDG PET images using probabilistic brain atlas: clinical application in temporal lobe epilepsy patients

    International Nuclear Information System (INIS)

    Kang, Keon Wook; Lee, Dong Soo; Cho, Jae Hoon; Lee, Jae Sung; Yeo, Jeong Seok; Lee, Sang Gun; Chung, June Key; Lee, Myung Chul

    2000-01-01

    A probabilistic atlas of the human brain (Statistical Probability Anatomical Maps: SPAM) was developed by the International Consortium for Brain Mapping (ICBM). After calculating the counts in each volume of interest (VOI) as the product of the probability in the SPAM images and the counts in the FDG images, asymmetric indexes (AI) were calculated and used for finding epileptogenic zones in temporal lobe epilepsy (TLE). FDG PET images from 28 surgically confirmed TLE patients and 12 age-matched controls were spatially normalized to the averaged brain MRI atlas of the ICBM. The counts from the normalized PET images were multiplied by the probability of 12 VOIs (superior temporal gyrus, middle temporal gyrus, inferior temporal gyrus, hippocampus, parahippocampal gyrus, and amygdala in each hemisphere) from the SPAM images of the Montreal Neurological Institute. Finally, AI was calculated for each pair of VOIs and compared with visual assessment. If the AI deviated by more than 2 standard deviations from normal controls, we considered the epileptogenic zone to be found successfully. The counts of VOIs in normal controls were symmetric (p > 0.05) except those of the inferior temporal gyrus (p < 0.01). AIs in the 5 pairs of VOIs excluding the inferior temporal gyrus were deviated to one side in TLE (p < 0.05). Lateralization was correct in 23/28 of patients by AI, but all 28 were consistent with visual inspection. In the 3 patients with normal AI, metabolism was symmetric on visual inspection. In the 2 patients falsely lateralized by AI, metabolism was also visually decreased on the contralateral side. The asymmetric index obtained from the product of the statistical probability anatomical map and FDG PET correlated well with visual assessment in TLE patients. SPAM is useful for quantification of VOIs in functional images.
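The probability-weighted VOI counting described above can be sketched directly. The AI formula used here (left-right difference normalized by the mean of the two sides, times 100) is a common convention and an assumption on our part, as the abstract does not spell it out; the toy 1-D "images" are purely illustrative.

```python
def voi_counts(pet, prob_map):
    """Probability-weighted counts: voxelwise PET counts times the
    SPAM membership probability of the VOI, summed."""
    return sum(c * p for c, p in zip(pet, prob_map))

def asymmetry_index(left, right):
    """Percent asymmetry between homologous left/right VOI counts
    (assumed convention: difference over mean, times 100)."""
    return 100.0 * (left - right) / ((left + right) / 2.0)

# Toy 1-D 'images': PET counts per voxel and fuzzy left/right VOI
# probability maps standing in for a SPAM VOI pair.
pet = [10.0, 12.0, 8.0, 11.0]
prob_left = [0.9, 0.8, 0.0, 0.0]
prob_right = [0.0, 0.0, 0.7, 0.9]
ai = asymmetry_index(voi_counts(pet, prob_left), voi_counts(pet, prob_right))
```

A patient would then be flagged when `ai` exceeds the normal-control mean by more than 2 standard deviations, as in the abstract.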

  17. Receiver operating characteristic (ROC) curve for classification of {sup 18}F-NaF uptake on PET/CT

    Energy Technology Data Exchange (ETDEWEB)

    Valadares, Agnes Araujo, E-mail: agnesvaladares@me.com [Universidade de Sao Paulo (HC/FM/USP), SP (Brazil). Hospital das Clinicas. Fac. de Mediciana; Duarte, Paulo Schiavom; Ono, Carla Rachel; Coura-Filho, George Barberio; Sado, Heitor Naoki; Carvalho, Giovanna [Instituto do Cancer do Estado de Sao Paulo Octavio Frias de Oliveira (ICESP), Sao Paulo, SP (Brazil). Servico de Medicina Nuclear; Sapienza, Marcelo Tatit; Buchpiguel, Carlos Alberto [Universidade de Sao Paulo (FM/USP), Sao Paulo, SP (Brazil). Fac. de Medicina. Dept. de Radiologia e Oncologia

    2016-01-15

    Objective: To assess the cutoff values established by ROC curves to classify {sup 18}F-NaF uptake as normal or malignant. Materials and Methods: PET/CT images were acquired 1 hour after administration of 185 MBq of {sup 18}F-NaF. Volumes of interest (VOIs) were drawn on three regions of the skeleton: proximal right humeral diaphysis (HD), proximal right femoral diaphysis (FD) and first vertebral body (VB1), in a total of 254 patients, totalling 762 VOIs. The uptake in the VOIs was classified as normal or malignant on the basis of the radiopharmaceutical distribution pattern and of the CT images. A total of 675 volumes were classified as normal and 52 as malignant. Thirty-five VOIs classified as indeterminate or nonmalignant lesions were excluded from analysis. The standardized uptake values (SUVs) measured on the VOIs were plotted on an ROC curve for each of the three regions. The area under the ROC curve (AUC) and the best cutoff SUV to classify the VOIs were calculated, the best cutoff being defined as the value with the highest sum of sensitivity and specificity. Results: The AUCs were 0.933, 0.889 and 0.975 for HD, FD and VB1, respectively. The best SUV cutoffs were 9.0 (sensitivity: 73%; specificity: 99%), 8.4 (sensitivity: 79%; specificity: 94%) and 21.0 (sensitivity: 93%; specificity: 95%) for HD, FD and VB1, respectively. Conclusion: The best cutoff value varies with the bone region analysed, and it is not possible to establish a single value for the whole body. (author)
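The cutoff-selection rule used in this study, the SUV threshold maximizing sensitivity + specificity (Youden's J), can be sketched on synthetic data; the SUVs and labels below are illustrative, not the study's.

```python
def best_cutoff(suvs, labels):
    """Pick the SUV threshold maximizing sensitivity + specificity
    (Youden's J), classifying SUV >= cutoff as malignant."""
    pos = [s for s, l in zip(suvs, labels) if l == 1]  # malignant VOIs
    neg = [s for s, l in zip(suvs, labels) if l == 0]  # normal VOIs
    best_t, best_j = None, -1.0
    for t in sorted(set(suvs)):
        sens = sum(s >= t for s in pos) / len(pos)
        spec = sum(s < t for s in neg) / len(neg)
        if sens + spec > best_j:
            best_t, best_j = t, sens + spec
    return best_t, best_j

# Synthetic SUVs: normal uptake clusters low, malignant lesions high.
suvs   = [3.0, 4.0, 5.0, 6.0, 7.0, 9.0, 10.0, 12.0]
labels = [0,   0,   0,   0,   1,   1,   1,    1]
cutoff, j = best_cutoff(suvs, labels)
```

Running the same selection per skeletal region is what yields region-specific cutoffs like the 9.0 / 8.4 / 21.0 reported above.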

  18. Error Correction for Non-Abelian Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    James R. Wootton

    2014-03-01

    Full Text Available The possibility of quantum computation using non-Abelian anyons has been considered for over a decade. However, the question of how to obtain and process information about what errors have occurred in order to negate their effects has not yet been considered. This is in stark contrast with quantum computation proposals for Abelian anyons, for which decoding algorithms have been tailor-made for many topological error-correcting codes and error models. Here, we address this issue by considering the properties of non-Abelian error correction, in general. We also choose a specific anyon model and error model to probe the problem in more detail. The anyon model is the charge submodel of D(S_{3}). This shares many properties with important models such as the Fibonacci anyons, making our method more generally applicable. The error model is a straightforward generalization of those used in the case of Abelian anyons for initial benchmarking of error correction methods. It is found that error correction is possible under a threshold value of 7% for the total probability of an error on each physical spin. This is remarkably comparable with the thresholds for Abelian models.

  19. Field errors in hybrid insertion devices

    International Nuclear Information System (INIS)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed

  20. Field errors in hybrid insertion devices

    Energy Technology Data Exchange (ETDEWEB)

    Schlueter, R.D. [Lawrence Berkeley Lab., CA (United States)

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  1. Telemetry location error in a forested habitat

    Science.gov (United States)

    Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.

    1989-01-01

    The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.
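The geometry behind the bearing-angle effect can be sketched directly: intersect two bearing rays from known receiver sites and compare the fix with the true transmitter location. The flat-plane coordinates and the fixed 2° compass error are illustrative assumptions, not the study's data.

```python
import math

def triangulate(p1, az1, p2, az2):
    """Intersect two bearing rays (azimuths in degrees, clockwise from
    north) cast from receiver sites p1 and p2 on a flat plane."""
    d1 = (math.sin(math.radians(az1)), math.cos(math.radians(az1)))
    d2 = (math.sin(math.radians(az2)), math.cos(math.radians(az2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no fix")
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

def bearing(p_from, p_to):
    """Azimuth from one point toward another, degrees clockwise from north."""
    return math.degrees(math.atan2(p_to[0] - p_from[0], p_to[1] - p_from[1]))

true_tx = (500.0, 1000.0)          # transmitter, meters
r1, r2 = (0.0, 0.0), (1000.0, 0.0)  # receiving sites
err = 2.0                           # degrees of compass error on one bearing
fix = triangulate(r1, bearing(r1, true_tx) + err, r2, bearing(r2, true_tx))
dist_error = math.dist(fix, true_tx)
```

Moving the receivers close together makes the bearings nearly parallel, and the same 2° error then produces a far larger `dist_error`, which is the study's observation about angles near parallel.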

  2. Life is hard: countering definitional pessimism concerning the definition of life

    Science.gov (United States)

    Smith, Kelly C.

    2016-10-01

    Cleland and Chyba published a classic piece in 2002 that began a movement I call definitional pessimism, where it is argued that there is no point in attempting anything like a general definition of life. This paper offers a critical response to the pessimist position in general and the influential arguments offered by Cleland and her collaborators in particular. One such argument is that all definitions of life fall short of an ideal in which necessary and sufficient conditions produce unambiguous categorizations that dispose of all counterexamples. But this concept of definition is controversial within philosophy; a fact that greatly diminishes the force of the admonition that biologists should conform to such an ideal. Moreover, biology may well be fundamentally different from logic and the physical sciences from which this ideal is drawn, to the point where definitional conformity misrepresents biological reality. Another idea often pushed is that the prospects for definitional success concerning life are on a par with medieval alchemy's attempts to define matter - that is, doomed to fail for lack of a unifying scientific theory. But this comparison to alchemy is both historically inaccurate and unfair. Planetary science before the discovery of the first exoplanets offers a much better analogy, with much more optimistic conclusions. The pessimists also make much of the desirability of using microbes as models for any universal concept of life, from which they conclude that certain types of 'Darwinian' evolutionary definitions are inadequate. But this argument posits an unrealistic ideal, as no account of life can both be universal and do justice to the sorts of precise causal mechanisms microbes exemplify. The character of biology and the demand for universality in definitions of life thus probably accords better with functional rather than structural categories. The bottom line is that there is simply no viable alternative, either pragmatically or

  3. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique
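The technique-vs-error-type comparison above relies on Fisher's exact test on a 2x2 table (errors vs. error-free deliveries by technique). A from-scratch sketch using the hypergeometric distribution is below; the table counts are illustrative, since the abstract does not give per-technique denominators for each error type.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of all tables with the same
    margins that are no more probable than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(k):
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(k) for k in range(lo, hi + 1)
               if p_table(k) <= p_obs * (1 + 1e-9))

# Illustrative table: errors vs. error-free deliveries for two techniques.
p = fisher_exact_two_sided(8, 2, 1, 5)  # two-sided p ≈ 0.035
```

With whole-study counts on the order of those reported (155 errors in 241,546 fractions), the same test drives conclusions like the 0.03% vs. 0.07% comparison at p = 0.001.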

  4. Technological Advancements and Error Rates in Radiation Therapy Delivery

    International Nuclear Information System (INIS)

    Margalit, Danielle N.; Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K.

    2011-01-01

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  5. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.

  6. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft errors effect in advanced electronics, including the fundamental physical mechanisms of radiation induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft error at various levels (including physical, electrical, netlist, event driven, RTL, and system level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing, s

  7. Jonas Olson's Evidence for Moral Error Theory

    NARCIS (Netherlands)

    Evers, Daan

    2016-01-01

    Jonas Olson defends a moral error theory in (2014). I first argue that Olson is not justified in believing the error theory as opposed to moral nonnaturalism in his own opinion. I then argue that Olson is not justified in believing the error theory as opposed to moral contextualism either (although

  8. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  9. Introduction to precision machine design and error assessment

    CERN Document Server

    Mekid, Samir

    2008-01-01

    While ultra-precision machines are now achieving sub-nanometer accuracy, unique challenges continue to arise due to their tight specifications. Written to meet the growing needs of mechanical engineers and other professionals to understand these specialized design process issues, Introduction to Precision Machine Design and Error Assessment places a particular focus on the errors associated with precision design, machine diagnostics, error modeling, and error compensation. Error Assessment and ControlThe book begins with a brief overview of precision engineering and applications before introdu

  10. Reducing diagnostic errors in medicine: what's the goal?

    Science.gov (United States)

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  11. Scoliosis angle. Conceptual basis and proposed definition

    Energy Technology Data Exchange (ETDEWEB)

    Marklund, T [Linkoepings Hoegskola (Sweden)

    1978-01-01

    The most commonly used methods of assessing the scoliotic deviation measure angles that are not clearly defined in relation to the anatomy of the patient. In order to give an anatomic basis for such measurements it is proposed to define the scoliotic deviation as the deviation the vertebral column makes with the sagittal plane. Both the Cobb and the Ferguson angles may be based on this definition. The present methods of measurement are then attempts to measure these angles. If the plane of these angles is parallel to the film, the measurement will be correct. Errors in the measurements may be incurred by the projection. A hypothetical projection, called a 'rectified orthogonal projection', is presented, which correctly represents all scoliotic angles in accordance with these principles. It can be constructed in practice with the aid of a computer and by performing measurements on two projections of the vertebral column; a scoliotic curve can be represented independent of the kyphosis and lordosis.
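
    The projection issue raised above can be made concrete: the Cobb angle is the angle between the endplate directions of the two most-tilted vertebrae, and its measured value depends on the plane onto which those directions are projected. A minimal sketch (the endplate direction vectors and the projection normal are hypothetical, not taken from the paper):

```python
import math

def cobb_angle(d1, d2, plane_normal=None):
    """Angle in degrees between two endplate direction vectors.

    If plane_normal is a unit normal n, both directions are first
    projected onto the plane with normal n, mimicking the projection
    a radiographic film performs.
    """
    def project(v, n):
        dot = sum(a * b for a, b in zip(v, n))
        return [a - dot * b for a, b in zip(v, n)]

    if plane_normal is not None:
        d1 = project(d1, plane_normal)
        d2 = project(d2, plane_normal)
    norm = lambda v: math.sqrt(sum(a * a for a in v))
    cosang = sum(a * b for a, b in zip(d1, d2)) / (norm(d1) * norm(d2))
    return math.degrees(math.acos(max(-1.0, min(1.0, cosang))))

# Two directions tilted 20 degrees apart in the coronal plane: when the
# film is parallel to that plane the measurement is exact.
d1 = (1.0, 0.0, 0.0)
d2 = (math.cos(math.radians(20)), math.sin(math.radians(20)), 0.0)
angle = cobb_angle(d1, d2, plane_normal=(0.0, 0.0, 1.0))
```

Tilting the projection plane away from the plane of the curve changes the computed angle, which is exactly the projection error the proposed "rectified orthogonal projection" is designed to remove.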

  12. Validation of Metrics as Error Predictors

    Science.gov (United States)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
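
    The logistic regression step described above can be sketched as follows; the metric names and coefficient values are illustrative assumptions, not Mendling's fitted estimates:

```python
import math

def error_probability(metrics, coef, intercept):
    """Logistic model: P(model contains an error) given structural metrics.

    coef and intercept would come from fitting on a sample of EPCs from
    practice; the values used below are purely illustrative.
    """
    z = intercept + sum(c * m for c, m in zip(coef, metrics))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical metrics: (number of nodes, connector mismatch, diameter)
coef = (0.02, 0.9, 0.01)
intercept = -3.0
p_small = error_probability((30, 0, 10), coef, intercept)
p_large = error_probability((120, 2, 40), coef, intercept)
```

Cross-validation on an independent sample, as in Section 5.4, would then compare these predicted probabilities against the actual presence of errors.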

  13. Assessment of the uncertainty associated with systematic errors in digital instruments: an experimental study on offset errors

    International Nuclear Information System (INIS)

    Attivissimo, F; Giaquinto, N; Savino, M; Cataldo, A

    2012-01-01

    This paper deals with the assessment of the uncertainty due to systematic errors, particularly in A/D conversion-based instruments. The problem of defining and assessing systematic errors is briefly discussed, and the conceptual scheme of gauge repeatability and reproducibility is adopted. A practical example regarding the evaluation of the uncertainty caused by the systematic offset error is presented. The experimental results, obtained under various ambient conditions, show that modelling the variability of systematic errors is more problematic than ISO 5725 suggests. Additionally, the paper demonstrates the substantial difference between the type B uncertainty evaluation, obtained via the maximum entropy principle applied to the manufacturer's specifications, and the type A (experimental) uncertainty evaluation, which reflects the actually observed behaviour. Although it is reasonable to assume a uniform distribution of the offset error, experiments demonstrate that the distribution is not centred and that a correction must be applied. In this context, the work motivates a more pragmatic and experimental approach to uncertainty than the directions of Supplement 1 to the GUM. (paper)
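
    The contrast between the two evaluations can be sketched numerically. The spec limit and the repeated offset readings below are hypothetical; the type B formula is the standard uniform-distribution result (u = a/√3) that the maximum entropy principle yields for a tolerance interval:

```python
import math
import statistics

def type_b_uniform(half_width):
    """Type B standard uncertainty for an error assumed uniformly
    distributed in [-a, +a], e.g. from a manufacturer's spec limit."""
    return half_width / math.sqrt(3)

def type_a(readings):
    """Type A standard uncertainty of the mean from repeated readings."""
    return statistics.stdev(readings) / math.sqrt(len(readings))

# Manufacturer spec (hypothetical): offset within +/-0.5 LSB.
u_b = type_b_uniform(0.5)

# Repeated offset measurements (hypothetical): clearly not centred on
# zero, so a correction equal to minus the mean offset should be applied,
# as the paper argues.
offsets = [0.21, 0.24, 0.19, 0.22, 0.20, 0.23]
correction = -statistics.mean(offsets)
u_a = type_a(offsets)
```

Here the experimental (type A) uncertainty is far smaller than the spec-based (type B) one, but only after the non-zero mean offset has been corrected; using the uniform assumption alone would hide that bias.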

  14. How to Cope with the Rare Human Error Events Involved with organizational Factors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Luo, Meiling; Lee, Yong Hee [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    Current human error guidelines (e.g. the US DOD handbooks and US NRC guidelines) are representative tools for preventing human errors. These tools, however, are limited in that they do not cover all operating situations and circumstances, such as design-base events; they address only foreseeable, standardized operating situations. In this study, our research team proposed an evidence-based approach, such as the UK's safety case, for coping with rare human error events such as the TMI, Chernobyl and Fukushima accidents, which are representative events involving rare human errors. We defined 'rare human errors' as events with the following three characteristics: extremely low frequency; extremely complicated structure; and extremely serious damage to human life and property. A safety case is a structured argument, supported by evidence, intended to justify that a system is acceptably safe. UK Defence Standard 00-56 Issue 4 states that such an evidence-based approach can be contrasted with a prescriptive approach to safety certification, which requires safety to be justified using a prescribed process. Safety management and safety regulatory activities based on the safety case are effective for controlling organizational factors in terms of integrated safety management. In particular, for safety issues relevant to public acceptance, the safety case is useful for providing practical evidence to the public in a reasonable form. The European Union, including the UK, has developed the concept of an engineered safety management system that uses the safety case to address public acceptance. In the Korean nuclear industry, the Korea Atomic Energy Research Institute has performed a first basic study on adapting the safety case to the field of radioactive waste, according to IAEA SSG-23 (KAERI/TR-4497, 4531); outside radioactive waste, the safety case has not yet been adopted. Most incidents and accidents involving humans during NPP operation have a tendency

  15. How to Cope with the Rare Human Error Events Involved with organizational Factors in Nuclear Power Plants

    International Nuclear Information System (INIS)

    Kim, Sa Kil; Luo, Meiling; Lee, Yong Hee

    2014-01-01

    Current human error guidelines (e.g. the US DOD handbooks and US NRC guidelines) are representative tools for preventing human errors. These tools, however, are limited in that they do not cover all operating situations and circumstances, such as design-base events; they address only foreseeable, standardized operating situations. In this study, our research team proposed an evidence-based approach, such as the UK's safety case, for coping with rare human error events such as the TMI, Chernobyl and Fukushima accidents, which are representative events involving rare human errors. We defined 'rare human errors' as events with the following three characteristics: extremely low frequency; extremely complicated structure; and extremely serious damage to human life and property. A safety case is a structured argument, supported by evidence, intended to justify that a system is acceptably safe. UK Defence Standard 00-56 Issue 4 states that such an evidence-based approach can be contrasted with a prescriptive approach to safety certification, which requires safety to be justified using a prescribed process. Safety management and safety regulatory activities based on the safety case are effective for controlling organizational factors in terms of integrated safety management. In particular, for safety issues relevant to public acceptance, the safety case is useful for providing practical evidence to the public in a reasonable form. The European Union, including the UK, has developed the concept of an engineered safety management system that uses the safety case to address public acceptance. In the Korean nuclear industry, the Korea Atomic Energy Research Institute has performed a first basic study on adapting the safety case to the field of radioactive waste, according to IAEA SSG-23 (KAERI/TR-4497, 4531); outside radioactive waste, the safety case has not yet been adopted. Most incidents and accidents involving humans during NPP operation have a tendency

  16. Data error effects on net radiation and evapotranspiration estimation

    International Nuclear Information System (INIS)

    Llasat, M.C.; Snyder, R.L.

    1998-01-01

    The objective of this paper is to evaluate the potential error in estimating the net radiation and reference evapotranspiration resulting from errors in the measurement or estimation of weather parameters. A methodology for estimating the net radiation using hourly weather variables measured at a typical agrometeorological station (e.g., solar radiation, temperature and relative humidity) is presented. An error propagation analysis is then made for net radiation and for reference evapotranspiration. Data from the Raimat weather station, located in the Catalonia region of Spain, are used to illustrate the error relationships. The results show that temperature, relative humidity and cloud cover errors have little effect on the net radiation or reference evapotranspiration. A 5°C error in estimating surface temperature leads to errors as big as 30 W m⁻² at high temperature. A 4% solar radiation (Rs) error can cause a net radiation (Rn) error as big as 26 W m⁻² when Rs ≈ 1000 W m⁻². However, the error is smaller when cloud cover is calculated as a function of the solar radiation. The absolute error in reference evapotranspiration (ETo) equals the product of the net radiation error and the radiation term weighting factor [W = Δ/(Δ + γ)] in the ETo equation. Therefore, the ETo error varies between 65 and 85% of the Rn error as air temperature increases from about 20° to 40°C. (author)
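
    The kind of first-order error propagation analysis described above can be sketched generically with finite-difference partial derivatives. The linearized net-radiation model below is an illustrative assumption (its coefficients are not the paper's), chosen so the Rs term reproduces the 26 W m⁻² figure quoted in the abstract:

```python
import math

def propagate(f, x, dx, eps=1e-6):
    """First-order propagation of independent input errors dx through f:
        df = sqrt( sum_i (df/dx_i * dx_i)^2 ),
    with partial derivatives approximated by finite differences."""
    base = f(*x)
    total = 0.0
    for i, (xi, dxi) in enumerate(zip(x, dx)):
        xp = list(x)
        xp[i] = xi + eps
        partial = (f(*xp) - base) / eps
        total += (partial * dxi) ** 2
    return math.sqrt(total)

# Hypothetical linearized model: Rn ~ a*Rs + b*T + c  (W m^-2)
rn = lambda rs, t: 0.65 * rs + 1.2 * t - 80.0

# 4% error on Rs = 1000 W m^-2 plus a 2 degC temperature error:
err = propagate(rn, (1000.0, 25.0), (40.0, 2.0))
# The Rs term dominates: 0.65 * 40 = 26 W m^-2.
```

Multiplying such an Rn error by the weighting factor W = Δ/(Δ + γ) then gives the corresponding ETo error, which is how the 65-85% range in the abstract arises as temperature (and hence Δ) increases.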

  17. Tracking tumor boundary in MV-EPID images without implanted markers: A feasibility study

    International Nuclear Information System (INIS)

    Zhang, Xiaoyong; Homma, Noriyasu; Ichiji, Kei; Takai, Yoshihiro; Yoshizawa, Makoto

    2015-01-01

    Purpose: To develop a markerless tracking algorithm to track the tumor boundary in megavoltage (MV)-electronic portal imaging device (EPID) images for image-guided radiation therapy. Methods: A level set method (LSM)-based algorithm is developed to track tumor boundary in EPID image sequences. Given an EPID image sequence, an initial curve is manually specified in the first frame. Driven by a region-scalable energy fitting function, the initial curve automatically evolves toward the tumor boundary and stops on the desired boundary while the energy function reaches its minimum. For the subsequent frames, the tracking algorithm updates the initial curve by using the tracking result in the previous frame and reuses the LSM to detect the tumor boundary in the subsequent frame so that the tracking processing can be continued without user intervention. The tracking algorithm is tested on three image datasets, including a 4-D phantom EPID image sequence, four digitally deformable phantom image sequences with different noise levels, and four clinical EPID image sequences acquired in lung cancer treatment. The tracking accuracy is evaluated based on two metrics: centroid localization error (CLE) and volume overlap index (VOI) between the tracking result and the ground truth. Results: For the 4-D phantom image sequence, the CLE is 0.23 ± 0.20 mm, and VOI is 95.6% ± 0.2%. For the digital phantom image sequences, the total CLE and VOI are 0.11 ± 0.08 mm and 96.7% ± 0.7%, respectively. In addition, for the clinical EPID image sequences, the proposed algorithm achieves 0.32 ± 0.77 mm in the CLE and 72.1% ± 5.5% in the VOI. These results demonstrate the effectiveness of the authors’ proposed method both in tumor localization and boundary tracking in EPID images. In addition, compared with two existing tracking algorithms, the proposed method achieves a higher accuracy in tumor localization. Conclusions: In this paper, the authors presented a feasibility study of tracking
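
    The two evaluation metrics used above can be sketched on binary masks. The exact definitions here (centroid distance for CLE, intersection-over-union for VOI) are plausible readings of the metric names, not necessarily the paper's formulas:

```python
import math

def centroid(mask):
    """Centroid of a binary mask given as a set of (x, y) pixel coords."""
    xs = [p[0] for p in mask]
    ys = [p[1] for p in mask]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def cle(tracked, truth, pixel_mm=1.0):
    """Centroid localization error in mm."""
    (tx, ty), (gx, gy) = centroid(tracked), centroid(truth)
    return pixel_mm * math.hypot(tx - gx, ty - gy)

def voi(tracked, truth):
    """Volume overlap index as |A ∩ B| / |A ∪ B| (Jaccard-style)."""
    return len(tracked & truth) / len(tracked | truth)

# Ground truth: a 10x10 square; tracking result: the same square
# shifted by one pixel in x.
truth = {(x, y) for x in range(10) for y in range(10)}
tracked = {(x, y) for x in range(1, 11) for y in range(10)}
cle_mm = cle(tracked, truth)
overlap = voi(tracked, truth)
```

A perfect tracker would give CLE = 0 and VOI = 1; the one-pixel shift above gives CLE = 1 px and an overlap of 90/110.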

  18. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates they can explain. We also used goodness-of-fit testing strategies and principal component analysis to further assess the data. Finally, we constructed a model of expected error rates based on what these statistics identified as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate, as well as to anticipate future error rates.
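
    The regression step can be sketched with a single explanatory variable (the study uses several drivers in a multiple regression; one predictor keeps the sketch short). The workload and error counts below are hypothetical, not mission data:

```python
def fit_simple_regression(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    a = my - b * mx
    return a, b

# Hypothetical data: commands radiated per period vs. command file errors.
commands = [120, 200, 310, 400, 520]
errors = [1, 2, 3, 4, 5]
a, b = fit_simple_regression(commands, errors)

# Anticipate the error count at a future workload level.
expected = a + b * 450
```

Comparing an observed error count against `expected` is the kind of "actual vs. theoretically expected rate" check the abstract describes for project management.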

  19. Acoustic Evidence for Phonologically Mismatched Speech Errors

    Science.gov (United States)

    Gormley, Andrea

    2015-01-01

    Speech errors are generally said to accommodate to their new phonological context. This accommodation has been validated by several transcription studies. The transcription methodology is not the best choice for detecting errors at this level, however, as this type of error can be difficult to perceive. This paper presents an acoustic analysis of…

  20. Introduction: The speech of kings at the end of the Middle Ages: avenues for an inquiry (La parole des rois à la fin du Moyen Âge : les voies d'une enquête)

    Directory of Open Access Journals (Sweden)

    Stéphane PÉQUIGNOT

    2007-10-01

    The article offers proposals for a general inquiry into the speech of kings at the end of the Middle Ages. Drawing on a state of the question for the Crown of Aragon, it examines how royal speech is inscribed in several interwoven temporalities. The transcription of the king's words results from a complex process, the 'fabric of speech', whose mechanisms and traces are studied here. The representations of royal speech, for their part, often refer to models drawn from the past and are sometimes addressed to a future audience, while also attesting to the necessary adaptation of speech to the circumstances of the moment. These 'royal acts of speech', and the 'expressive styles' they help to forge, are examined in the second part of the article. Finally, the time devoted, or left, to royal words participates in long-term changes in the relationship to writing, in political regimes and in their modes of legitimation; it constitutes an important means of political communication, a resource, and also a risk, for monarchical power and authority.

  1. On the Design of Error-Correcting Ciphers

    Directory of Open Access Journals (Sweden)

    Mathur Chetan Nanjunda

    2006-01-01

    Securing transmission over a wireless network is especially challenging, not only because of the inherently insecure nature of the medium, but also because of the highly error-prone nature of the wireless environment. In this paper, we take a joint encryption-error correction approach to ensure secure and robust communication over the wireless link. In particular, we design an error-correcting cipher (called the high diffusion cipher) and prove bounds on its error-correcting capacity as well as its security. Towards this end, we propose a new class of error-correcting codes (HD-codes) with built-in security features that we use in the diffusion layer of the proposed cipher. We construct an example 128-bit cipher using the HD-codes, and compare it experimentally with two traditional concatenated systems: (a) AES (Rijndael) followed by Reed-Solomon codes, and (b) Rijndael followed by convolutional codes. We show that the HD-cipher is as resistant to linear and differential cryptanalysis as the Rijndael. We also show that any chosen plaintext attack that can be performed on the HD cipher can be transformed into a chosen plaintext attack on the Rijndael cipher. In terms of error-correction capacity, the traditional systems using Reed-Solomon codes are comparable to the proposed joint error-correcting cipher, and those that use convolutional codes require more data expansion to achieve error correction similar to the HD-cipher.
    The original contributions of this work are (1) the design of a new joint error-correction-encryption system, (2) the design of a new class of algebraic codes with built-in security criteria, called the high diffusion codes (HD-codes), for use in the HD-cipher, (3) the mathematical properties of these codes, (4) methods for construction of the codes, (5) bounds on the error-correcting capacity of the HD-cipher, (6) the mathematical derivation of the bound on the resistance of the HD cipher to linear and differential cryptanalysis, and (7) an experimental comparison

  2. Learning a locomotor task: with or without errors?

    Science.gov (United States)

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Robotic haptic guidance is the most commonly used robotic training strategy to reduce performance errors during training. However, research on motor learning has emphasized that errors are a fundamental neural signal that drives motor adaptation. Researchers have therefore proposed robotic therapy algorithms that amplify movement errors rather than decrease them. To date, however, no study has analyzed with precision which training strategy is the most appropriate for learning an especially simple task. In this study, the impact of robotic training strategies that amplify or reduce errors on muscle activation and motor learning of a simple locomotor task was investigated in twenty-two healthy subjects. The experiment was conducted with the MAgnetic Resonance COmpatible Stepper (MARCOS), a special robotic device developed for investigations in the MR scanner. The robot moved the dominant leg passively, and the subject was requested to actively synchronize the non-dominant leg to achieve an alternating stepping-like movement. Learning with four different training strategies that reduce or amplify errors was evaluated: (i) haptic guidance: errors were eliminated by passively moving the limbs; (ii) no guidance: no robot disturbances were presented; (iii) error amplification: existing errors were amplified with repulsive forces; (iv) noise disturbance: errors were evoked intentionally with a randomly varying force disturbance on top of the no-guidance strategy. Additionally, the activation of four lower-limb muscles was measured by means of surface electromyography (EMG). Strategies that reduce or do not amplify errors limit muscle activation during training and result in poor learning gains. Adding random disturbing forces during training seems to increase attention, and thereby improve motor learning. Error amplification seems to be the most suitable strategy for initially less skilled subjects, perhaps because subjects could better detect their errors and correct them

  3. The interaction of the flux errors and transport errors in modeled atmospheric carbon dioxide concentrations

    Science.gov (United States)

    Feng, S.; Lauvaux, T.; Butler, M. P.; Keller, K.; Davis, K. J.; Jacobson, A. R.; Schuh, A. E.; Basu, S.; Liu, J.; Baker, D.; Crowell, S.; Zhou, Y.; Williams, C. A.

    2017-12-01

    Regional estimates of biogenic carbon fluxes over North America from top-down atmospheric inversions and terrestrial biogeochemical (or bottom-up) models remain inconsistent at annual and sub-annual time scales. While top-down estimates are impacted by limited atmospheric data, uncertain prior flux estimates and errors in the atmospheric transport models, bottom-up fluxes are affected by uncertain driver data, uncertain model parameters and missing mechanisms across ecosystems. This study quantifies both flux errors and transport errors, and their interaction in the CO2 atmospheric simulation. These errors are assessed by an ensemble approach. The WRF-Chem model is set up with 17 biospheric fluxes from the Multiscale Synthesis and Terrestrial Model Intercomparison Project, CarbonTracker-Near Real Time, and the Simple Biosphere model. The spread of the flux ensemble members represents the flux uncertainty in the modeled CO2 concentrations. For the transport errors, WRF-Chem is run using three physical model configurations with three stochastic perturbations to sample the errors from both the physical parameterizations of the model and the initial conditions. Additionally, the uncertainties from boundary conditions are assessed using four CO2 global inversion models which have assimilated tower and satellite CO2 observations. The error structures are assessed in time and space. The flux ensemble members overall overestimate CO2 concentrations. They also show larger temporal variability than the observations. These results suggest that the flux ensemble is overdispersive. In contrast, the transport ensemble is underdispersive. The averaged spatial distribution of modeled CO2 shows strong positive biogenic signal in the southern US and strong negative signals along the eastern coast of Canada. 
We hypothesize that the former is caused by the 3-hourly downscaling algorithm from which the nighttime respiration dominates the daytime modeled CO2 signals and that the latter

  4. Learning from errors in radiology to improve patient safety.

    Science.gov (United States)

    Saeed, Shaista Afzal; Masroor, Imrana; Shafqat, Gulnaz

    2013-10-01

    To determine the views and practices of trainees and consultant radiologists about error reporting. Cross-sectional survey. Radiology trainees and consultant radiologists in four tertiary care hospitals in Karachi were approached in the second quarter of 2011. Participants were asked about their grade, sub-specialty interest, whether they kept a record/log of their errors (an error being defined as a mistake with management implications for the patient), the number of errors they had made in the last 12 months, and the predominant type of error. They were also asked about the details of their departmental error meetings. All duly completed questionnaires were included in the study, while those with incomplete information were excluded. A total of 100 radiologists participated in the survey: 34 consultants and 66 trainees, with a wide range of sub-specialty interests such as CT and ultrasound. Of the 100 responders, 49 kept a personal record/log of their errors. When asked to recall the approximate number of errors made in the last 12 months, 73 (73%) of participants gave a response, with 1-5 errors mentioned by the majority, i.e. 47 (64.5%). Most of the radiologists (97%) claimed to receive information about their errors through multiple sources, such as morbidity/mortality meetings, patients' follow-up, and colleagues and consultants. Perceptual errors (66, 66%) were the predominant error type reported. Regular occurrence of error meetings, and attendance at three or more error meetings in the last 12 months, was reported by 35% of participants, the majority of whom described the atmosphere of these error meetings as informative and comfortable (n = 22, 62.8%). It is of utmost importance to develop a culture of learning from mistakes by conducting error meetings and improving the process of recording and addressing errors to enhance patient safety.

  5. LD Definition.

    Science.gov (United States)

    Learning Disability Quarterly, 1987

    1987-01-01

    The position paper (1981) of the National Joint Committee on Learning Disabilities presents a revised definition of learning disabilities and identifies issues and concerns (such as the limitation to children and the exclusion clause) associated with the definition included in P.L. 94-142, the Education for All Handicapped Children Act. (DB)

  6. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, two-dimensional errors and includes a bibliography. Appendices are included which address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve and the computer and calculator solution of problems

  7. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors Δω_N was the lowest at low solar elevations and the highest at high solar elevations. At high elevations, the maximal Δω_N was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  8. The concept of error and malpractice in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca; Pinto, Fabio; Reali, Riccardo; Daniele, Stefania; Romano, Luigia

    2012-08-01

    Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. The etiology of radiological error is multifactorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge, and misjudgments. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Errors are an inevitable part of human life, and every health professional has made mistakes. To improve patient safety and reduce the risk of harm, we must accept that some errors are inevitable during the delivery of health care. We must pursue a cultural change in medicine, wherein errors are actively sought, openly discussed, and aggressively addressed. Copyright © 2012 Elsevier Inc. All rights reserved.

  9. Synthesized interstitial lung texture for use in anthropomorphic computational phantoms

    Science.gov (United States)

    Becchetti, Marc F.; Solomon, Justin B.; Segars, W. Paul; Samei, Ehsan

    2016-04-01

    A realistic model of the anatomical texture from the pulmonary interstitium was developed with the goal of extending the capability of anthropomorphic computational phantoms (e.g., XCAT, Duke University), allowing for more accurate image quality assessment. Contrast-enhanced, high dose, thorax images for a healthy patient from a clinical CT system (Discovery CT750HD, GE healthcare) with thin (0.625 mm) slices and filtered back- projection (FBP) were used to inform the model. The interstitium which gives rise to the texture was defined using 24 volumes of interest (VOIs). These VOIs were selected manually to avoid vasculature, bronchi, and bronchioles. A small scale Hessian-based line filter was applied to minimize the amount of partial-volumed supernumerary vessels and bronchioles within the VOIs. The texture in the VOIs was characterized using 8 Haralick and 13 gray-level run length features. A clustered lumpy background (CLB) model with added noise and blurring to match CT system was optimized to resemble the texture in the VOIs using a genetic algorithm with the Mahalanobis distance as a similarity metric between the texture features. The most similar CLB model was then used to generate the interstitial texture to fill the lung. The optimization improved the similarity by 45%. This will substantially enhance the capabilities of anthropomorphic computational phantoms, allowing for more realistic CT simulations.
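
    The similarity metric driving the optimization can be sketched as follows. The feature names, means and variances below are hypothetical, and a diagonal covariance is used to keep the sketch dependency-free (the paper would use the full covariance of the 21 texture features):

```python
import math

def mahalanobis_diag(x, mean, var):
    """Mahalanobis distance under a diagonal covariance approximation:
    sqrt( sum_i (x_i - mu_i)^2 / var_i )."""
    return math.sqrt(sum((xi - mi) ** 2 / vi
                         for xi, mi, vi in zip(x, mean, var)))

# Hypothetical feature vectors: [contrast, homogeneity, run emphasis]
voi_mean = [0.8, 0.55, 1.9]      # statistics over the 24 lung VOIs
voi_var = [0.01, 0.0025, 0.04]
candidate = [0.9, 0.50, 2.1]     # features of one CLB parameter set

d = mahalanobis_diag(candidate, voi_mean, voi_var)
```

A genetic algorithm would evaluate `d` for each candidate CLB parameter set and keep the parameter sets with the smallest distance, i.e. the texture statistically closest to the real interstitium.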

  10. Investigating Medication Errors in Educational Health Centers of Kermanshah

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-08-01

    Background and objectives: Medication errors can be a threat to the safety of patients. Preventing medication errors requires reporting and investigating such errors. The present study was conducted with the purpose of investigating medication errors in educational health centers of Kermanshah. Material and Methods: The present research is an applied, descriptive-analytical study conducted as a survey. The Error Report form of the Ministry of Health and Medical Education was used for data collection. The population of the study included all the personnel (nurses, doctors, paramedics) of educational health centers of Kermanshah; among them, those who reported committed errors were selected as the sample of the study. The data analysis was done using descriptive statistics and the chi-square test in SPSS version 18. Results: The findings of the study showed that most errors were related to not using medication properly, the fewest errors were related to improper dose, and the majority of errors occurred in the morning. The most frequent reason for errors was staff negligence and the least frequent was lack of knowledge. Conclusion: The health care system should create an environment for detecting and reporting errors by the personnel, recognize the factors causing errors, train the personnel, and provide a good working environment and a standard workload.
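
    The chi-square test of independence used in analyses like this one can be computed by hand, as in the sketch below. The 2x2 contingency table is invented for illustration (error type by shift); the study itself used SPSS 18.

```python
# Hypothetical contingency table: rows = error type, columns = shift.
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Chi-square statistic: sum of (observed - expected)^2 / expected,
# with expected counts from the row and column marginals.
chi2 = 0.0
for i, row in enumerate(observed):
    for j, o in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi2 += (o - expected) ** 2 / expected

print(round(chi2, 2))  # ≈ 16.67 for this table
```

    The statistic is then compared against the chi-square critical value for the table's degrees of freedom (here (2-1)×(2-1) = 1) to decide significance.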

  11. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because we evaluated cognitive errors only, rather than including errors of interpretation. The most common missed finding was a small fracture, particularly of the hands or feet. First-year residents were most likely to make an error, but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors

  12. The effect of errors in charged particle beams

    International Nuclear Information System (INIS)

    Carey, D.C.

    1987-01-01

    Residual errors in a charged particle optical system determine how well the performance of the system conforms to the theory on which it is based. Mathematically possible optical modes can sometimes be eliminated as requiring precisions that are not attainable. Other designs may require the introduction of means of correcting for various errors. Error types include misalignments, limitations on magnet fabrication precision, and magnet current regulation errors. A thorough analysis of a beam optical system requires computer simulation of all these effects. A unified scheme for the simulation of errors and their correction is discussed
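
    One of the error types listed above, magnet current regulation error, can be illustrated with a toy Monte Carlo. For a thin lens, focal length scales inversely with the field gradient, so a fractional current error delta shifts f to f/(1+delta). All numbers below are illustrative assumptions, not values from the paper.

```python
import random

random.seed(42)

f_nominal = 2.0        # nominal focal length [m], assumed
sigma_current = 1e-3   # assumed 0.1% rms current regulation error

# Sample the focal length under random regulation errors
samples = [f_nominal / (1.0 + random.gauss(0.0, sigma_current))
           for _ in range(10_000)]

mean_f = sum(samples) / len(samples)
rms = (sum((s - mean_f) ** 2 for s in samples) / len(samples)) ** 0.5
print(mean_f, rms)  # rms spread in focal length induced by the regulation error
```

    A full simulation code would propagate such perturbations through every element's transfer matrix rather than a single thin lens, but the statistical structure is the same.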

  13. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects in these processes as medical error factors is an effective approach to preventing medical errors. However, this is a difficult and time-consuming task that requires an analyst with a professional medical background; a method for extracting medical error factors while reducing the difficulty of extraction is therefore needed. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, the extraction of the error factors, and the identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and the error factors was established using a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared to BPNN, partial least squares regression, and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of error-related items, their different levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.
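
    A minimal back-propagation neural network of the kind used as the relational model above can be sketched as follows. The GA wrapper that tunes the initial weights is omitted, and the data are random stand-ins for "error-related item levels" (19 inputs) and "error factor" scores (12 outputs), not the study's 624 cases.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.random((64, 19))          # 19 error-related items per case
Y = rng.random((64, 12))          # 12 error-factor scores per case

# One hidden layer; a GA would search over these initial weights
W1 = rng.normal(0, 0.5, (19, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 12)); b2 = np.zeros(12)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

losses = []
lr = 0.5
for _ in range(500):
    H = sigmoid(X @ W1 + b1)          # hidden activations
    P = sigmoid(H @ W2 + b2)          # predicted factor scores
    losses.append(float(np.mean((P - Y) ** 2)))
    # back-propagate the mean-squared-error gradient
    dP = 2 * (P - Y) / Y.size * P * (1 - P)
    dH = dP @ W2.T * H * (1 - H)
    W2 -= lr * H.T @ dP; b2 -= lr * dP.sum(0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(0)

print(losses[0], losses[-1])  # training loss should decrease
```

    In the GA-BPNN variant, a genetic algorithm searches over initial weights (or architecture) to escape the poor local minima that plain gradient descent can settle into.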

  14. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  15. Improved Landau gauge fixing and discretisation errors

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.; Richards, D.G.; Williams, A.G.

    2000-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition displays the secondary benefit of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition

  16. Counteracting structural errors in ensemble forecast of influenza outbreaks.

    Science.gov (United States)

    Pei, Sen; Shaman, Jeffrey

    2017-10-13

    For influenza forecasts generated using dynamical models, forecast inaccuracy is partly attributable to the nonlinear growth of error. As a consequence, quantification of the nonlinear error structure in current forecast models is needed so that this growth can be corrected and forecast skill improved. Here, we inspect the error growth of a compartmental influenza model and find that a robust error structure arises naturally from the nonlinear model dynamics. By counteracting these structural errors, diagnosed using error breeding, we develop a new forecast approach that combines dynamical error correction and statistical filtering techniques. In retrospective forecasts of historical influenza outbreaks for 95 US cities from 2003 to 2014, overall forecast accuracy for outbreak peak timing, peak intensity and attack rate is substantially improved for predicted lead times up to 10 weeks. This error growth correction method can be generalized to improve the forecast accuracy of other infectious disease dynamical models.
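
    The nonlinear error growth described above can be demonstrated with a toy compartmental (SIR-type) model: two runs that start a small distance apart diverge as the outbreak evolves. The parameter values are illustrative assumptions, not those of the paper's forecast system.

```python
def sir_run(s0, i0, beta=0.5, gamma=0.25, dt=0.1, steps=600):
    """Euler integration of a simple SIR model; returns the infected trajectory."""
    s, i = s0, i0
    traj = []
    for _ in range(steps):
        new_inf = beta * s * i * dt   # new infections this step
        rec = gamma * i * dt          # recoveries this step
        s, i = s - new_inf, i + new_inf - rec
        traj.append(i)
    return traj

truth = sir_run(0.999, 0.001)
perturbed = sir_run(0.999, 0.0011)    # 1e-4 initial error in infections

errors = [abs(a - b) for a, b in zip(truth, perturbed)]
print(errors[0], max(errors))  # the error grows well beyond its initial size
```

    Error breeding, as used in the paper, repeatedly rescales such perturbations to diagnose the directions along which forecast error grows fastest, so that those structural errors can be subtracted out.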

  17. Forecast errors in IEA-countries' energy consumption

    DEFF Research Database (Denmark)

    Linderoth, Hans

    2002-01-01

    Every year, Policy of IEA Countries includes a forecast of the energy consumption in the member countries. Forecasts concerning the years 1985, 1990 and 1995 can now be compared to actual values. The second oil crisis resulted in big positive forecast errors. The oil price drop in 1986 did not have...... the small value is often the sum of large positive and negative errors. Almost no significant correlation is found between forecast errors in the 3 years. Correspondingly, no significant correlation coefficient is found between forecast errors in the 3 main energy sectors. Therefore, a relatively small...

  18. Human Error Analysis by Fuzzy-Set

    International Nuclear Information System (INIS)

    Situmorang, Johnny

    1996-01-01

    In conventional HRA, the probability of error is treated as a single exact value obtained by constructing an event tree; here, fuzzy-set theory is used instead. Fuzzy-set theory treats the probability of error as a plausibility expressed as a linguistic variable. Most parameters or variables in human engineering are defined verbally (good, fairly good, worst, etc.), each describing a range of probability values. As an example, this analysis quantifies the human error in a calibration task, and the probability of miscalibration is very low
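
    The idea of a linguistic probability variable can be sketched with triangular membership functions over the order of magnitude of the error probability. The category boundaries below are invented for illustration, not taken from the paper.

```python
import math

def triangular(x, a, b, c):
    """Membership in a triangular fuzzy set with support [a, c] and peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical linguistic categories over log10 of the error probability
categories = {
    "very low": (-6.0, -5.0, -4.0),
    "low":      (-5.0, -4.0, -3.0),
    "moderate": (-4.0, -3.0, -2.0),
    "high":     (-3.0, -2.0, -1.0),
}

p_miscal = 3e-5                  # hypothetical miscalibration probability
x = math.log10(p_miscal)         # ≈ -4.52
for name, (a, b, c) in categories.items():
    print(name, round(triangular(x, a, b, c), 2))
```

    A probability of 3e-5 thus belongs partly to "very low" and partly to "low", rather than to a single crisp category, which is exactly the treatment of error probability as a plausibility range.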

  19. Intervention strategies for the management of human error

    Science.gov (United States)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply to other applications in the aviation realm (e.g. air traffic control, dispatch, weather) as well as to other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error but also means of preventing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.

  20. The influence of random and systematic errors on a general definition of minimum detectable amount (MDA) applicable to all radiobioassay measurements

    International Nuclear Information System (INIS)

    Brodsky, A.

    1985-01-01

    An approach to defining the minimum detectable amount (MDA) of radioactivity in a sample will be discussed, with the aim of obtaining comments helpful in developing a formulation of MDA that will be broadly applicable to all kinds of radiobioassay measurements and acceptable to the scientists who make these measurements. The influence of random and systematic errors on the defined MDA is also examined
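
    As background to such a definition, one widely used detection-limit formulation (due to Currie) gives, for a paired blank with B background counts and 5% false-positive/false-negative risks, L_D ≈ 2.71 + 4.65·√B. The sketch below computes it and adds a multiplicative inflation for an assumed fractional systematic error; that extension is our illustration, not the paper's formulation.

```python
import math

def mda_counts(background_counts, systematic_fraction=0.0):
    """Currie-style detection limit in counts, optionally inflated by an
    assumed fractional systematic error (illustrative extension)."""
    ld = 2.71 + 4.65 * math.sqrt(background_counts)
    return ld * (1.0 + systematic_fraction)

print(mda_counts(100))        # 2.71 + 4.65*10 = 49.21 counts
print(mda_counts(100, 0.10))  # the same limit inflated by a 10% systematic error
```

    Converting such a count limit into an activity (the MDA proper) would further divide by counting efficiency, yield, and counting time, which is where the measurement-specific systematic errors enter.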