WorldWideScience

Sample records for significantly higher error-identification

  1. Measuring Error Identification and Recovery Skills in Surgical Residents.

    Science.gov (United States)

    Sternbach, Joel M; Wang, Kevin; El Khoury, Rym; Teitelbaum, Ezra N; Meyerson, Shari L

    2017-02-01

    Although error identification and recovery skills are essential for the safe practice of surgery, they have not traditionally been taught or evaluated in residency training. This study validates a method for assessing error identification and recovery skills in surgical residents using a thoracoscopic lobectomy simulator. We developed a 5-station, simulator-based examination containing the most commonly encountered cognitive and technical errors occurring during division of the superior pulmonary vein for left upper lobectomy. Successful completion of each station requires identification and correction of these errors. Examinations were video recorded and scored in a blinded fashion using an examination-specific rating instrument evaluating task performance as well as error identification and recovery skills. Evidence of validity was collected in the categories of content, response process, internal structure, and relationship to other variables. Fifteen general surgical residents (9 interns and 6 third-year residents) completed the examination. Interrater reliability was high, with an intraclass correlation coefficient of 0.78 between 4 trained raters. Station scores ranged from 64% to 84% correct. All stations adequately discriminated between high- and low-performing residents, with discrimination ranging from 0.35 to 0.65. The overall examination score was significantly higher for intermediate residents than for interns (mean, 74 versus 64 of 90 possible; p = 0.03). The described simulator-based examination with embedded errors and its accompanying assessment tool can be used to measure error identification and recovery skills in surgical residents. This examination provides a valid method for comparing teaching strategies designed to improve error recognition and recovery to enhance patient safety. Copyright © 2017 The Society of Thoracic Surgeons. Published by Elsevier Inc. All rights reserved.

  2. Reduction in specimen labeling errors after implementation of a positive patient identification system in phlebotomy.

    Science.gov (United States)

    Morrison, Aileen P; Tanasijevic, Milenko J; Goonan, Ellen M; Lobo, Margaret M; Bates, Michael M; Lipsitz, Stuart R; Bates, David W; Melanson, Stacy E F

    2010-06-01

    Ensuring accurate patient identification is central to preventing medical errors, but it can be challenging. We implemented a bar code-based positive patient identification system for use in inpatient phlebotomy. A before-after design was used to evaluate the impact of the identification system on the frequency of mislabeled and unlabeled samples reported in our laboratory. Labeling errors fell from 5.45 in 10,000 before implementation to 3.2 in 10,000 afterward (P = .0013). An estimated 108 mislabeling events were prevented by the identification system in 1 year. Furthermore, a workflow step requiring manual preprinting of labels, which was accompanied by potential labeling errors in about one quarter of blood "draws," was removed as a result of the new system. After implementation, a higher percentage of patients reported having their wristband checked before phlebotomy. Bar code technology significantly reduced the rate of specimen identification errors.
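
    For readers who want to reproduce this kind of before-after rate comparison, a two-proportion z-test is the standard tool. A minimal sketch, assuming invented draw volumes (the abstract reports only rates per 10,000, not the denominators):

```python
from math import sqrt
from scipy.stats import norm

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two error proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, 2 * norm.sf(abs(z))  # two-sided p-value

# Hypothetical volumes chosen to reproduce 5.45 vs 3.2 per 10,000;
# the study's actual denominators are not given in the abstract.
z, p = two_proportion_z(x1=109, n1=200_000, x2=64, n2=200_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```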

  3. An intervention to decrease patient identification band errors in a children's hospital.

    Science.gov (United States)

    Hain, Paul D; Joers, B; Rush, M; Slayton, J; Throop, P; Hoagg, S; Allen, L; Grantham, J; Deshpande, J K

    2010-06-01

    Patient misidentification continues to be a quality and safety issue. There is a paucity of US data describing interventions to reduce identification band error rates. The study was conducted at the Monroe Carell Jr Children's Hospital at Vanderbilt, with the percentage of patients with defective identification bands as the outcome measure. Web-based surveys were sent, asking hospital personnel to anonymously identify perceived barriers to reaching zero defects with identification bands. Corrective action plans were created and implemented with ideas from leadership, front-line staff and the online survey. Data from unannounced audits of patient identification bands were plotted on statistical process control charts and shared monthly with staff. All hospital personnel were expected to "stop the line" if there were any patient identification questions. The first audit showed a defect rate of 20.4%. The original mean defect rate was 6.5%. After interventions and education, the new mean defect rate was 2.6%. (a) The initial rate of patient identification band errors in the hospital was higher than expected. (b) The action resulting in the most significant improvement was staff awareness of the problem, with clear expectations to immediately stop the line if a patient identification error was present. (c) Staff surveys are an excellent source of suggestions for combating patient identification issues. (d) Continued audit and data collection are necessary for sustainable staff focus and continued improvement. (e) Statistical process control charts are both an effective method to track results and an easily understood tool for sharing data with staff.
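
    The statistical process control chart mentioned above is, for a defect proportion, a p-chart. A minimal sketch of its center line and 3-sigma limits, using made-up monthly audit counts rather than the hospital's data:

```python
import numpy as np

def p_chart_limits(defective, audited):
    """Center line and 3-sigma control limits for a p-chart of ID band defects."""
    defective = np.asarray(defective, dtype=float)
    audited = np.asarray(audited, dtype=float)
    p_bar = defective.sum() / audited.sum()          # overall defect rate
    sigma = np.sqrt(p_bar * (1 - p_bar) / audited)   # per-month sigma
    ucl = p_bar + 3 * sigma
    lcl = np.clip(p_bar - 3 * sigma, 0.0, None)      # a rate cannot go below 0
    return p_bar, lcl, ucl

# Hypothetical audits: (defective bands, bands checked) per month.
defective = [41, 18, 12, 9, 7, 6]
audited   = [200, 220, 210, 205, 215, 200]
p_bar, lcl, ucl = p_chart_limits(defective, audited)
out_of_control = np.asarray(defective) / np.asarray(audited) > ucl
```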

  4. Applying Intelligent Algorithms to Automate the Identification of Error Factors.

    Science.gov (United States)

    Jin, Haizhe; Qu, Qingxing; Munechika, Masahiko; Sano, Masataka; Kajihara, Chisato; Duffy, Vincent G; Chen, Han

    2018-05-03

    Medical errors are the manifestation of defects occurring in medical processes. Extracting and identifying defects as medical error factors from these processes is an effective approach to preventing medical errors. However, it is a difficult and time-consuming task that requires an analyst with a professional medical background, so a method is needed to extract medical error factors while reducing the difficulty of extraction. In this research, a systematic methodology to extract and identify error factors in the medical administration process was proposed. The design of the error report, extraction of the error factors, and identification of the error factors were analyzed. Based on 624 medical error cases across four medical institutes in Japan and China, 19 error-related items and their levels were extracted and then related to 12 error factors. The relational model between the error-related items and error factors was established based on a genetic algorithm (GA)-back-propagation neural network (BPNN) model. Compared with plain BPNN, partial least squares regression and support vector regression, GA-BPNN exhibited a higher overall prediction accuracy and was able to promptly identify the error factors from the error-related items. The combination of the error-related items, their levels, and the GA-BPNN model was proposed as an error-factor identification technology that can automatically identify medical error factors.
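
    One plausible reading of the GA-BPNN pairing is that a genetic algorithm searches for good initial network weights and back-propagation finishes the fit. The toy sketch below follows that reading; the data, layer sizes, and hyperparameters are invented stand-ins (19 item levels in, 12 factor scores out), not the study's:

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 19 error-related item levels -> 12 error-factor scores.
X = rng.random((200, 19))
Y = rng.random((200, 12))
I, H, O = 19, 8, 12                 # layer sizes (hidden width is a guess)
N_W = I * H + H + H * O + O         # total number of weights and biases

def unpack(w):
    """Slice a flat genome into weight matrices and bias vectors."""
    a = 0
    W1 = w[a:a + I * H].reshape(I, H); a += I * H
    b1 = w[a:a + H]; a += H
    W2 = w[a:a + H * O].reshape(H, O); a += H * O
    return W1, b1, W2, w[a:]

def mse_after_bp(w, epochs=20, lr=0.5):
    """GA fitness: training MSE after a short burst of back-propagation."""
    W1, b1, W2, b2 = (p.copy() for p in unpack(w))
    for _ in range(epochs):
        Hid = np.tanh(X @ W1 + b1)
        E = Hid @ W2 + b2 - Y                 # prediction error
        dHid = (E @ W2.T) * (1.0 - Hid ** 2)  # back-propagated delta
        W2 -= lr * Hid.T @ E / len(X); b2 -= lr * E.mean(0)
        W1 -= lr * X.T @ dHid / len(X); b1 -= lr * dHid.mean(0)
    Hid = np.tanh(X @ W1 + b1)
    return float(((Hid @ W2 + b2 - Y) ** 2).mean())

# GA over initial genomes: elitism, uniform crossover, sparse mutation.
pop = rng.normal(0.0, 0.5, (30, N_W))
for _ in range(15):
    fitness = np.array([mse_after_bp(w) for w in pop])
    elite = pop[np.argsort(fitness)[:10]]
    children = []
    while len(children) < len(pop) - len(elite):
        pa, pb = elite[rng.integers(10, size=2)]
        child = np.where(rng.random(N_W) < 0.5, pa, pb)
        child = child + rng.normal(0, 0.05, N_W) * (rng.random(N_W) < 0.1)
        children.append(child)
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmin([mse_after_bp(w) for w in pop])]
print("MSE after full training:", mse_after_bp(best, epochs=500))
```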

  5. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    Science.gov (United States)

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. Setting: an 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (a median of 9 pharmacists (range 4-17) collecting data per day) during routine work were prospectively recorded on 38 randomly selected days over 18 months. Outcome measures: the proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at a patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  6. Patient safety in the clinical laboratory: a longitudinal analysis of specimen identification errors.

    Science.gov (United States)

    Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A

    2006-11-01

    Patient safety is an increasingly visible and important mission for clinical laboratories. Accreditation and regulatory organizations are paying attention to improving processes related to patient identification and specimen labeling because errors in these areas jeopardize patient safety and are common and avoidable through improvement in the total testing process. Our objective was to assess patient identification and specimen labeling improvement after multiple implementation projects, using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months) for a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. The Student t test showed a significant decrease in the most serious error, mislabeled specimens, after implementation of the patient safety projects. Trend analysis demonstrated decreases in all 3 error types for 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, thereby improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.

  7. Estimating error rates for firearm evidence identifications in forensic science

    Science.gov (United States)

    Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan

    2018-01-01

    Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
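
    A sketch of the error-rate logic under a plain binomial model for the CMC counts (the published work also considers richer models); every number below is invented, not NIST data:

```python
import numpy as np
from scipy.stats import binom

N_CELLS = 60   # assumed number of correlation cells per image pair

# Invented CMC scores for known-match (KM) and known-non-match (KNM) pairs.
km  = np.array([24, 31, 28, 22, 35, 27, 30, 26])
knm = np.array([0, 1, 0, 2, 0, 1, 0, 0])

# Method-of-moments binomial fit for each population's CMC count.
p_km, p_knm = km.mean() / N_CELLS, knm.mean() / N_CELLS

C = 6  # identification threshold: declare a match at C or more CMCs
false_positive = binom.sf(C - 1, N_CELLS, p_knm)   # P(KNM count >= C)
false_negative = binom.cdf(C - 1, N_CELLS, p_km)   # P(KM count < C)
print(f"FPR ~ {false_positive:.2e}, FNR ~ {false_negative:.2e}")
```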

  8. Error-Detecting Identification Codes for Algebra Students.

    Science.gov (United States)

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
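
    Both of the named schemes are weighted checksums over the digits, which is the linear-algebra connection the article draws. A small self-contained sketch:

```python
def isbn10_ok(isbn: str) -> bool:
    """ISBN-10: weighted digit sum must be divisible by 11 ('X' = 10)."""
    digits = [10 if c in "Xx" else int(c)
              for c in isbn if c.isdigit() or c in "Xx"]
    return (len(digits) == 10 and
            sum((i + 1) * d for i, d in enumerate(digits)) % 11 == 0)

def upc_ok(upc: str) -> bool:
    """UPC-A: 3 * (odd-position digits) + (even-position digits), mod 10."""
    d = [int(c) for c in upc if c.isdigit()]
    return len(d) == 12 and (3 * sum(d[0::2]) + sum(d[1::2])) % 10 == 0

assert isbn10_ok("0-306-40615-2")      # classic valid ISBN-10
assert not isbn10_ok("0-306-40615-3")  # a single-digit error is caught
assert upc_ok("036000291452")          # valid UPC-A example
```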

  9. Decreasing patient identification band errors by standardizing processes.

    Science.gov (United States)

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component of patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors, although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012, with continued audits to June 2012 to confirm that the new process was in control. After analysis, the major improvement strategy implemented was standardization of the styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and dissemination of institutional and nursing unit data. A total of 4556 ID bands were audited, with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved significantly to 5.2% in 9 months (95% confidence interval: 2.5-5.5). This decrease in ID band error rates was maintained over the subsequent 8 months.

  10. Impact of error fields on plasma identification in ITER

    Energy Technology Data Exchange (ETDEWEB)

    Martone, R., E-mail: Raffaele.Martone@unina2.it [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Appel, L. [EURATOM/CCFE Fusion Association, Culham Science Centre, Abingdon (United Kingdom); Chiariello, A.G.; Formisano, A.; Mattei, M. [Ass. EURATOM/ENEA/CREATE, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Pironti, A. [Ass. EURATOM/ENEA/CREATE, Università degli Studi di Napoli “Federico II”, Via Claudio 25, Napoli (Italy)

    2013-10-15

    Highlights: ► The paper deals with the effect on plasma identification of error fields generated by field coils manufacturing and assembly errors. ► EFIT++ is used to identify plasma gaps when poloidal field coils and central solenoid coils are deformed, and the gaps sensitivity with respect to such errors is analyzed. ► Some examples of reconstruction errors in the presence of deformations are reported. -- Abstract: The active control of plasma discharges in present Tokamak devices must be prompt and accurate to guarantee expected performance. As a consequence, the identification step, calculating plasma parameters from diagnostics, should provide in a very short time reliable estimates of the relevant quantities, such as plasma centroid position, plasma-wall distances at given points called gaps, and other geometrical parameters as elongation and triangularity. To achieve the desired response promptness, a number of simplifying assumptions are usually made in the identification algorithms. Among those clearly affecting the quality of the plasma parameters reconstruction, one of the most relevant is the precise knowledge of the magnetic field produced by active coils. Since uncertainties in their manufacturing and assembly process may cause misalignments between the actual and expected geometry and position of magnets, an analysis on the effect of possible wrong information about magnets on the plasma shape identification is documented in this paper.

  11. Patient identification errors: the detective in the laboratory.

    Science.gov (United States)

    Salinas, Maria; López-Garrigós, Maite; Lillo, Rosa; Gutiérrez, Mercedes; Lugo, Javier; Leiva-Salinas, Carlos

    2013-11-01

    The eradication of errors in patient identification is one of the main goals for safety improvement. As the clinical laboratory is involved in 70% of clinical decisions, laboratory safety is crucial to patient safety. We studied the number of Laboratory Information System (LIS) demographic data errors registered in our laboratory during one year. The laboratory attends a variety of inpatients and outpatients. The demographic data of outpatients are registered in the LIS when they present to the laboratory front desk. The requests from the primary care centers (PCC) are made electronically by the general practitioner. A manual step is always done at the PCC to reconcile the patient identification number in the electronic request with the one in the LIS. Manual registration is done through hospital information system demographic data capture when the patient's medical record number is registered in the LIS. The laboratory report is always sent out electronically to the patient's electronic medical record. Each day, all demographic data in the LIS were manually compared to the request forms to detect potential errors. Fewer errors were committed when the electronic order was used, although there was great variability in errors between PCCs using it. LIS demographic data manual registration errors depended on patient origin and test requesting method. Even when using the electronic approach, errors were detected, and the variability between PCCs persisted with this electronic modality; this suggests that the number of errors is still dependent on the personnel in charge of the technology. © 2013.

  12. Proportionate Minimum Error Entropy Algorithm for Sparse System Identification

    Directory of Open Access Journals (Sweden)

    Zongze Wu

    2015-08-01

    Sparse system identification has received a great deal of attention due to its broad applicability. The proportionate normalized least mean square (PNLMS) algorithm, as a popular tool, achieves excellent performance for sparse system identification. In previous studies, most of the cost functions used in proportionate-type sparse adaptive algorithms are based on the mean square error (MSE) criterion, which is optimal only when the measurement noise is Gaussian. However, this condition does not hold in most real-world environments. In this work, we use the minimum error entropy (MEE) criterion, an alternative to the conventional MSE criterion, to develop the proportionate minimum error entropy (PMEE) algorithm for sparse system identification, which may achieve much better performance than the MSE based methods, especially in heavy-tailed non-Gaussian situations. Moreover, we analyze the convergence of the proposed algorithm and derive a sufficient condition that ensures mean square convergence. Simulation results confirm the excellent performance of the new algorithm.
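
    To make the idea concrete, here is a heavily simplified single-window sketch that combines a PNLMS-style proportionate gain with the gradient of a Gaussian-kernel information potential (the MEE criterion). It is not the paper's exact PMEE recursion; the step size, kernel width, window length, and noise model are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)

M = 32                                            # filter length
w_true = np.zeros(M)
w_true[[3, 12, 25]] = [1.0, -0.6, 0.3]            # sparse unknown system
x = rng.normal(size=4000)
noise = 0.05 * rng.standard_t(df=1.5, size=4000)  # heavy-tailed noise

mu, sigma, L = 0.5, 1.0, 8      # step size, kernel width, MEE window
w = np.zeros(M)
U = np.zeros((L, M))            # sliding window of regressors
e_buf = np.zeros(L)             # sliding window of errors

for n in range(M, len(x)):
    u = x[n - M:n][::-1]        # regressor, newest sample first
    d = w_true @ u + noise[n]
    e = d - w @ u
    U = np.roll(U, 1, axis=0); U[0] = u
    e_buf = np.roll(e_buf, 1);  e_buf[0] = e
    # Proportionate gains: larger steps for larger taps (PNLMS idea).
    g = np.maximum(np.abs(w), 1e-2); g /= g.mean()
    # Ascend the information potential: mean Gaussian kernel of pairwise
    # error differences between the newest error and the window.
    de = e_buf[0] - e_buf
    dU = U[0] - U
    kern = np.exp(-de ** 2 / (2 * sigma ** 2))
    grad = (kern * de) @ dU / (L * sigma ** 2)
    w = w + mu * g * grad

print("final max tap deviation:", np.abs(w - w_true).max())
```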

  13. Medical error identification, disclosure, and reporting: do emergency medicine provider groups differ?

    Science.gov (United States)

    Hobgood, Cherri; Weiner, Bryan; Tamayo-Sarver, Joshua H

    2006-04-01

    To determine if the three types of emergency medicine providers--physicians, nurses, and out-of-hospital providers (emergency medical technicians [EMTs])--differ in their identification, disclosure, and reporting of medical error. A convenience sample of providers in an academic emergency department evaluated ten case vignettes that represented two error types (medication and cognitive) and three severity levels. For each vignette, providers were asked the following: 1) Is this an error? 2) Would you tell the patient? 3) Would you report this to a hospital committee? To assess differences in identification, disclosure, and reporting by provider type, error type, and error severity, the authors constructed three-way tables with the nonparametric Somers' D clustered on participant. To assess the contribution of disclosure instruction and environmental variables, fixed-effects regression stratified by provider type was used. Of the 116 providers who were eligible, 103 (40 physicians, 26 nurses, and 35 EMTs) had complete data. Physicians were more likely to classify an event as an error (78%) than nurses (71%; p = 0.04) or EMTs (68%). Nurses were less likely to disclose the error to the patient (59%) than physicians (71%; p = 0.04). Physicians were the least likely to report the error (54%) compared with nurses (68%; p = 0.02) or EMTs (78%). Across error types, identification, disclosure, and reporting increased with increasing severity. Improving patient safety hinges on the ability of health care providers to accurately identify, disclose, and report medical errors. Interventions must account for differences in error identification, disclosure, and reporting by provider type.

  14. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds, and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%), in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
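
    The core of the new model is a gate that matches the scanned ID against the ADT registration feed before a result may post. A minimal sketch, with hypothetical account numbers and field names:

```python
from dataclasses import dataclass

@dataclass
class GlucoseResult:
    account_number: str   # nine-digit patient account scanned at bedside
    value_mg_dl: float

# Hypothetical ADT registration feed: active account -> medical record number.
adt_feed = {"100234567": "MRN-88321", "100998877": "MRN-11005"}

def route_result(result: GlucoseResult):
    """Release a POCT result only when the scanned ID matches the ADT feed."""
    mrn = adt_feed.get(result.account_number)
    if mrn is None:
        raise ValueError(f"ID {result.account_number} not in ADT feed; "
                         "hold result and resolve at the bedside")
    return mrn, result.value_mg_dl   # safe to post to this chart

mrn, value = route_result(GlucoseResult("100234567", 104.0))
```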

  15. Volumetric error modeling, identification and compensation based on screw theory for a large multi-axis propeller-measuring machine

    Science.gov (United States)

    Zhong, Xuemin; Liu, Hongqi; Mao, Xinyong; Li, Bin; He, Songping; Peng, Fangyu

    2018-05-01

    Large multi-axis propeller-measuring machines have two types of geometric error, position-independent geometric errors (PIGEs) and position-dependent geometric errors (PDGEs), which both have significant effects on the volumetric error of the measuring tool relative to the worktable. This paper focuses on modeling, identifying and compensating for the volumetric error of the measuring machine. A volumetric error model in the base coordinate system is established based on screw theory considering all the geometric errors. In order to fully identify all the geometric error parameters, a new method for systematic measurement and identification is proposed. All the PIGEs of adjacent axes and the six PDGEs of the linear axes are identified with a laser tracker using the proposed model. Finally, a volumetric error compensation strategy is presented and an inverse kinematic solution for compensation is proposed. The final measuring and compensation experiments have further verified the efficiency and effectiveness of the measuring and identification method, indicating that the method can be used in volumetric error compensation for large machine tools.
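
    As a first-order illustration of how per-axis error motions compound into a volumetric error at the tool tip, here is a homogeneous-transform sketch (a small-angle stand-in for the paper's screw-theory formulation); the axis chain, pose, and error values are invented:

```python
import numpy as np

def error_T(dx, dy, dz, ax, ay, az):
    """First-order homogeneous transform for one axis's six error motions
    (three translational, three small-angle rotational)."""
    T = np.eye(4)
    T[:3, :3] += np.array([[0.0, -az,  ay],
                           [ az, 0.0, -ax],
                           [-ay,  ax, 0.0]])   # skew of small rotations
    T[:3, 3] = [dx, dy, dz]
    return T

def nominal_T(axis, q):
    """Ideal translation of a linear axis by commanded position q."""
    T = np.eye(4)
    T["xyz".index(axis), 3] = q
    return T

# Invented pose (mm) and error-motion values (mm, rad) for an X-Y-Z chain.
pose = [("x", 500.0), ("y", 200.0), ("z", -100.0)]
errors = [(5e-3, 2e-3, 1e-3, 1e-5, 2e-5, 5e-6)] * 3

T = np.eye(4)
for (axis, q), e in zip(pose, errors):
    T = T @ nominal_T(axis, q) @ error_T(*e)

nominal_tip = np.array([500.0, 200.0, -100.0])
volumetric_error = (T @ np.array([0, 0, 0, 1.0]))[:3] - nominal_tip
print(volumetric_error)
```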

  16. Patient identification error among prostate needle core biopsy specimens--are we ready for a DNA time-out?

    Science.gov (United States)

    Suba, Eric J; Pfeifer, John D; Raab, Stephen S

    2007-10-01

    Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.

  17. Triaxial Accelerometer Error Coefficients Identification with a Novel Artificial Fish Swarm Algorithm

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-01-01

    Artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, widely utilized for optimization purposes. Triaxial accelerometer error coefficients are relatively unstable under environmental disturbances and aging of the instrument. Therefore, identifying triaxial accelerometer error coefficients accurately and at low cost is of great importance to improving the overall performance of a triaxial accelerometer-based strapdown inertial navigation system (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA) that eliminates the demerits of AFSA (failure to use artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost) is introduced first. In NAFSA, the functional behaviors and overall procedure of AFSA have been improved with some parameter variations. Second, a hybrid accelerometer error coefficient identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for triaxial accelerometer error coefficient identification. Furthermore, the NAFSA-identified coefficients are tested with a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment, and the advantages of MCS-NAFSA are compared with those of the conventional calibration method and optimal AFSA. Finally, both experiment results demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficient identification.

  18. Nurses' Behaviors and Visual Scanning Patterns May Reduce Patient Identification Errors

    Science.gov (United States)

    Marquard, Jenna L.; Henneman, Philip L.; He, Ze; Jo, Junghee; Fisher, Donald L.; Henneman, Elizabeth A.

    2011-01-01

    Patient identification (ID) errors occurring during the medication administration process can be fatal. The aim of this study is to determine whether differences in nurses' behaviors and visual scanning patterns during the medication administration process influence their capacities to identify patient ID errors. Nurse participants (n = 20)…

  19. Identification of linear error-models with projected dynamical systems

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Pavel; Kuhnen, K.

    2004-01-01

    Vol. 10, No. 1 (2004), p. 59-91. ISSN 1387-3954. Keywords: identification; error models; projected dynamical systems. Subject RIV: BA - General Mathematics. Impact factor: 0.292, year: 2004. http://www.informaworld.com/smpp/content~db=all~content=a713682517

  20. Prediction-error identification of LPV systems : present and beyond

    NARCIS (Netherlands)

    Toth, R.; Heuberger, P.S.C.; Hof, Van den P.M.J.; Mohammadpour, J.; Scherer, C. W.

    2012-01-01

    The proposed chapter aims at presenting a unified framework for prediction-error based identification of LPV systems using freshly developed theoretical results. Recently, these methods have received considerable attention as they have certain advantages in terms of computational complexity, optimality

  1. Characterization of identification errors and uses in localization of poor modal correlation

    Science.gov (United States)

    Martin, Guillaume; Balmes, Etienne; Chancelier, Thierry

    2017-05-01

    While modal identification is a mature subject, very few studies address the characterization of errors associated with the components of a mode shape. This is particularly important in test/analysis correlation procedures, where the Modal Assurance Criterion (MAC) is used to pair modes and to localize the sensors at which discrepancies occur. Poor correlation is usually attributed to modeling errors, but identification errors clearly occur as well. In particular, with 3D Scanning Laser Doppler Vibrometer measurement, many transfer functions are measured. As a result, individual validation of each measurement cannot be performed manually in a reasonable time frame, and a notable fraction of measurements is expected to be fairly noisy, leading to poor identification of the associated mode shape components. The paper first addresses measurements and introduces multiple criteria. The error measures the difference between test and synthesized transfer functions around each resonance and can be used to localize poorly identified modal components. For intermediate error values, a diagnostic of the origin of the error is needed. The level evaluates the transfer function amplitude in the vicinity of a given mode and can be used to eliminate sensors with low responses. A Noise Over Signal indicator, the product of error and level, is then shown to be relevant for detecting poorly excited modes and errors due to modal property shifts between test batches. Finally, a contribution is introduced to evaluate the visibility of a mode in each transfer. Using tests on a drum brake component, these indicators are shown to provide relevant insight into the quality of measurements. In a second part, test/analysis correlation is addressed with a focus on the localization of sources of poor mode shape correlation. The MACCo algorithm, which sorts sensors by the impact of their removal on a MAC computation, is shown to be particularly relevant. Combined with the error indicator, it avoids keeping erroneous modal components.
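
    One plausible coding of the three indicators described above, with the FRF arrays and the frequency band around the mode assumed as inputs:

```python
import numpy as np

def mode_indicators(H_test, H_synth):
    """Per-sensor error, level, and Noise Over Signal (NOS) indicators,
    computed from complex FRFs restricted to a band around one mode."""
    num = np.abs(H_test - H_synth) ** 2            # test/synthesis mismatch
    den = np.abs(H_test) ** 2
    error = num.sum(axis=1) / den.sum(axis=1)      # relative fit error
    level = np.abs(H_test).max(axis=1)
    level = level / level.max()                    # normalized response level
    nos = error * level                            # flags dubious components
    return error, level, nos

# Toy usage: 5 sensors x 40 frequency lines around one resonance.
rng = np.random.default_rng(2)
H_synth = rng.normal(size=(5, 40)) + 1j * rng.normal(size=(5, 40))
H_test = H_synth + 0.05 * (rng.normal(size=(5, 40)) + 1j * rng.normal(size=(5, 40)))
error, level, nos = mode_indicators(H_test, H_synth)
```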

  2. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    Science.gov (United States)

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

    This study postulates that traditional human error identification techniques fail to consider motion economy principles and that, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment; second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down the main tasks of a urological laparoscopic procedure (hand-assisted laparoscopic nephrectomy) into their elements, and the new taxonomy was used to identify errors and their root causes resulting from violations of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to motion economy deficiencies. Results indicate that the developed methodology is promising. Our methodology allows error prevention in surgery, and the developed set of motion economy principles could be useful for training surgeons on motion economy. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  3. [Measures to prevent patient identification errors in blood collection/physiological function testing utilizing a laboratory information system].

    Science.gov (United States)

    Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji

    2013-08-01

    We constructed an integrated personal identification workflow chart using both bar code reading and an all-in-one laboratory information system. The information system handles not only test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, displays for patient guidance, and patient identification tools at blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been eliminated in the ultrasound department. After re-evaluating the patient identification process in this department, we recognized that the major reason for the errors was an excessive identification workflow. Ordinarily, an ultrasound test requires patient identification 3 times, because 3 different systems are used during the entire test process, i.e. the ultrasound modality system, the laboratory information system, and a system for producing reports. We are trying to connect the 3 different systems to develop a one-time identification workflow, but it is not a simple task and has not been completed yet. Utilization of the laboratory information system is effective, but is not yet perfect for patient identification. Even today, the most fundamental procedure for patient identification is to ask the person's name. Everyday checks in the ordinary workflow and everyone's participation in safety-management activities are important for the prevention of patient identification errors.

  4. Reduction in pediatric identification band errors: a quality collaborative.

    Science.gov (United States)

    Phillips, Shannon Connor; Saysana, Michele; Worley, Sarah; Hain, Paul D

    2012-06-01

    Accurate and consistent placement of a patient identification (ID) band is used in health care to reduce errors associated with patient misidentification. Multiple safety organizations have devoted time and energy to improving patient ID, but no multicenter improvement collaboratives have shown scalability of previously successful interventions. We hoped to reduce by half the pediatric patient ID band error rate, defined as absent, illegible, or inaccurate ID band, across a quality improvement learning collaborative of hospitals in 1 year. On the basis of a previously successful single-site intervention, we conducted a self-selected 6-site collaborative to reduce ID band errors in heterogeneous pediatric hospital settings. The collaborative had 3 phases: preparatory work and employee survey of current practice and barriers, data collection (ID band failure rate), and intervention driven by data and collaborative learning to accelerate change. The collaborative audited 11377 patients for ID band errors between September 2009 and September 2010. The ID band failure rate decreased from 17% to 4.1% (77% relative reduction). Interventions including education of frontline staff regarding correct ID bands as a safety strategy; a change to softer ID bands, including "luggage tag" type ID bands for some patients; and partnering with families and patients through education were applied at all institutions. Over 13 months, a collaborative of pediatric institutions significantly reduced the ID band failure rate. This quality improvement learning collaborative demonstrates that safety improvements tested in a single institution can be disseminated to improve quality of care across large populations of children.

  5. Research on Error Modelling and Identification of 3 Axis NC Machine Tools Based on Cross Grid Encoder Measurement

    International Nuclear Information System (INIS)

    Du, Z C; Lv, C F; Hong, M S

    2006-01-01

    A new error modelling and identification method based on the cross grid encoder is proposed in this paper. Generally, there are 21 error components in the geometric error of 3 axis NC machine tools. However, according to our theoretical analysis, the squareness error among different guide ways affects not only the translational error components, but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is a comprehensive function of all the error components of the link, worktable, sliding table and main spindle block. To overcome the solution-singularity shortcoming of traditional error component identification methods, a new multi-step identification method for error components using cross grid encoder measurement is proposed, based on the kinematic error model of the NC machine tool. Firstly, the 12 translational error components of the NC machine tool are measured and identified by the least square method (LSM) when the NC machine tool moves linearly in the three orthogonal planes: the XOY plane, XOZ plane and YOZ plane. Secondly, the circular error tracks are measured when the NC machine tool moves circularly in the same orthogonal planes using the cross grid encoder Heidenhain KGM 182, and the 9 rotational errors are then identified by LSM. Finally, experimental validation of the above modelling theory and identification method is carried out on the 3 axis CNC vertical machining centre Cincinnati 750 Arrow. All 21 error components have been successfully measured by this method. The research shows that the multi-step modelling and identification method is very suitable for 'on machine measurement'.

  6. Modeling misidentification errors in capture-recapture studies using photographic identification of evolving marks

    Science.gov (United States)

    Yoshizaki, J.; Pollock, K.H.; Brownie, C.; Webster, R.A.

    2009-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) are used to identify individual animals in a capture-recapture study. Photographic identification (photoID) typically uses photographic images of animals' naturally existing features as tags (photographic tags) and is subject to two main causes of identification errors: those related to quality of photographs (non-evolving natural tags) and those related to changes in natural marks (evolving natural tags). The conventional methods for analysis of capture-recapture data do not account for identification errors, and to do so requires a detailed understanding of the misidentification mechanism. Focusing on the situation where errors are due to evolving natural tags, we propose a misidentification mechanism and outline a framework for modeling the effect of misidentification in closed population studies. We introduce methods for estimating population size based on this model. Using a simulation study, we show that conventional estimators can seriously overestimate population size when errors due to misidentification are ignored, and that, in comparison, our new estimators have better properties except in cases with low capture probabilities (<0.2) or low misidentification rates (<2.5%). © 2009 by the Ecological Society of America.
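
    The overestimation mechanism is easy to reproduce with a two-occasion Lincoln-Petersen simulation in which some recaptures go unrecognized because the mark has evolved; all parameter values below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(7)
N, p, ghost_rate = 500, 0.3, 0.10   # true size, capture prob, misID rate

def lincoln_petersen(trials=2000):
    estimates = []
    for _ in range(trials):
        s1 = rng.random(N) < p                 # first capture occasion
        s2 = rng.random(N) < p                 # second capture occasion
        # An evolving mark misread on recapture creates a "ghost": the
        # animal counts in n2 but is not recognized among the marked.
        misread = rng.random(N) < ghost_rate
        n1, n2 = s1.sum(), s2.sum()
        m2 = (s1 & s2 & ~misread).sum()        # recognized recaptures
        if m2 > 0:
            estimates.append(n1 * n2 / m2)
    return np.mean(estimates)

# The estimate is inflated by roughly 1 / (1 - ghost_rate).
print(f"true N = {N}, mean estimate = {lincoln_petersen():.0f}")
```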

  7. Error identification, disclosure, and reporting: practice patterns of three emergency medicine provider types.

    Science.gov (United States)

    Hobgood, Cherri; Xie, Jipan; Weiner, Bryan; Hooker, James

    2004-02-01

    To gather preliminary data on how the three major types of emergency medicine (EM) providers, physicians, nurses (RNs), and out-of-hospital personnel (EMTs), differ in error identification, disclosure, and reporting. A convenience sample of emergency department (ED) providers completed a brief survey designed to evaluate error frequency, disclosure, and reporting practices as well as error-based discussion and educational activities. One hundred sixteen subjects participated: 41 EMTs (35%), 33 RNs (28%), and 42 physicians (36%). Forty-five percent of EMTs, 56% of RNs, and 21% of physicians identified no clinical errors during the preceding year. When errors were identified, physicians learned of them via dialogue with RNs (58%), patients (13%), pharmacy (35%), and attending physicians (35%). For known errors, all providers were equally unlikely to inform the team caring for the patient. Disclosure to patients was limited and varied by provider type (19% EMTs, 23% RNs, and 74% physicians). Disclosure education was rare, with few providers reporting training in how to disclose an error to a patient. Error discussions are widespread, with all providers indicating they discussed their own as well as the errors of others. This study suggests that error identification, disclosure, and reporting challenge all members of the ED care delivery team. Provider-specific education and enhanced teamwork training will be required to further the transformation of the ED into a high-reliability organization.

  8. Technique for human-error sequence identification and signification

    International Nuclear Information System (INIS)

    Heslinga, G.

    1988-01-01

    The aim of the present study was to investigate whether the event-tree technique can be used for the analysis of sequences of human errors that could cause initiating events. The scope of the study was limited to a consideration of the performance of procedural actions. The event-tree technique was modified to adapt it for this study and will be referred to as the 'Technique for Human-Error-Sequence Identification and Signification' (THESIS). The event trees used in this manner, i.e. THESIS event trees, appear to present additional problems if they are applied to human performance instead of technical systems. These problems, referred to as the 'Man-Related Features' of THESIS, are: the human capability to choose among several procedures, the ergonomics of the panel layout, human actions of a continuous nature, dependence between human errors, human capability to recover possible errors, the influence of memory during the recovery attempt, variability in human performance and correlations between human-error probabilities. The influence of these problems on the applicability of THESIS was assessed by means of mathematical analysis, field studies and laboratory experiments (author). 130 refs.; 51 figs.; 24 tabs

  9. [Patient identification errors and biological samples in the analytical process: Is it possible to improve patient safety?].

    Science.gov (United States)

    Cuadrado-Cenzual, M A; García Briñón, M; de Gracia Hills, Y; González Estecha, M; Collado Yurrita, L; de Pedro Moro, J A; Fernández Pérez, C; Arroyo Fernández, M

    2015-01-01

    Patient identification errors involving biological samples are among the problems with the highest risk of causing an adverse event for the patient. To detect and analyse the causes of patient identification errors in analytical requests (PIEAR) from emergency departments, and to develop improvement strategies. A process and protocol were designed, to be followed by all professionals involved in the requesting and performing of laboratory tests. Evaluation and monitoring indicators of PIEAR were determined, before and after the implementation of these improvement measures (years 2010-2014). A total of 316 PIEAR were detected in a total of 483,254 emergency service requests during the study period, representing a mean of 6.80/10,000 requests. Patient identification failure was the most frequent type in all the 6-monthly periods assessed, with a significant difference compared with the other failure types. The improvement measures achieved a significant reduction in these errors. However, we must continue working with this strategy, promoting a culture of safety among all the professionals involved, and trying to achieve the goal that 100% of requests and samples are properly identified. Copyright © 2015 SECA. Published by Elsevier España. All rights reserved.

  10. Nursing Errors in Intensive Care Unit by Human Error Identification in Systems Tool: A Case Study

    Directory of Open Access Journals (Sweden)

    Nezamodini

    2016-03-01

    Background: Although health services are designed and implemented to improve human health, errors in health services are a very common phenomenon, and sometimes even fatal. Medical errors and their costs are global issues with serious consequences for the patient community; they are preventable and require serious attention. Objectives: The current study aimed to identify possible nursing errors by applying the human error identification in systems tool (HEIST) in the intensive care units (ICUs) of hospitals. Patients and Methods: This descriptive research was conducted in the intensive care unit of a hospital in Khuzestan province in 2013. Data were collected through observation of and interviews with nine nurses in this unit over a period of four months. Human error classification was based on the Rose and Rose and Swain and Guttmann models. Following the HEIST worksheets, the guide questions were answered and error causes were identified after the type of each error was determined. Results: In total, 527 errors were detected. Performing an operation on the wrong path had the highest frequency (150), followed by performing tasks later than the deadline (136). Management causes ranked first among the identified causes, with a frequency of 451. Errors mostly occurred in the system observation stage, and among the performance shaping factors (PSFs), time was the most influential factor in the occurrence of human errors. Conclusions: Finally, to prevent the occurrence and reduce the consequences of the identified errors, the following suggestions were proposed: appropriate training courses, applying work guidelines and monitoring their implementation, increasing the number of work shifts, hiring professional workforce, and equipping the work space with appropriate facilities and equipment.

  11. Performance monitoring and error significance in patients with obsessive-compulsive disorder.

    Science.gov (United States)

    Endrass, Tanja; Schuermann, Beate; Kaufmann, Christan; Spielberg, Rüdiger; Kniesche, Rainer; Kathmann, Norbert

    2010-05-01

    Performance monitoring has consistently been found to be overactive in obsessive-compulsive disorder (OCD). The present study examines whether performance monitoring in OCD is adjusted with error significance. Therefore, errors in a flanker task were followed by neutral (standard condition) or punishment feedback (punishment condition). In the standard condition, patients had significantly larger error-related negativity (ERN) and correct-related negativity (CRN) amplitudes than controls. In the punishment condition, however, the groups did not differ in ERN and CRN amplitudes. While healthy controls showed an amplitude enhancement between the standard and punishment conditions, OCD patients showed no variation. In contrast, group differences were not found for the error positivity (Pe): both groups had larger Pe amplitudes in the punishment condition. Results confirm earlier findings of overactive error monitoring in OCD. The absence of variation with error significance might indicate that OCD patients are unable to down-regulate their monitoring activity according to external requirements. Copyright 2010 Elsevier B.V. All rights reserved.

  12. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors would inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies mainly focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. A first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors by the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further supported by applying time-domain Allan variance analysis and frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are commonly present in GNSS observations and are mainly governed by residual atmospheric biases and multipath. Their patterns may also be impacted by the receiver.
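
    A minimal sketch of the white-noise end of such a procedure: test post-fit residual autocorrelations against the approximate 95% band for white noise. This is a generic whiteness check, not the paper's exact combined test:

```python
import numpy as np

def whiteness_check(residuals, max_lag=20):
    """Flag significant autocorrelation in post-fit residuals."""
    r = residuals - residuals.mean()
    n = len(r)
    denom = r @ r
    acf = np.array([(r[k:] @ r[:n - k]) / denom
                    for k in range(1, max_lag + 1)])
    bound = 1.96 / np.sqrt(n)              # ~95% band for white noise
    flagged = np.nonzero(np.abs(acf) > bound)[0] + 1
    return acf, bound, flagged   # nonempty 'flagged' -> unmodeled signal

rng = np.random.default_rng(3)
white = rng.normal(size=2000)
colored = white + 0.8 * np.roll(white, 1)  # correlated "unmodeled" part
print(whiteness_check(white)[2])           # usually empty
print(whiteness_check(colored)[2])         # lag 1 exceeds the band
```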

  13. Progressive significance map and its application to error-resilient image transmission.

    Science.gov (United States)

    Hu, Yang; Pearlman, William A; Li, Xin

    2012-07-01

    Set partition coding (SPC) has shown tremendous success in image compression. Despite its popularity, the lack of error resilience remains a significant challenge to the transmission of images in error-prone environments. In this paper, we propose a novel data representation called the progressive significance map (prog-sig-map) for error-resilient SPC. It structures the significance map (sig-map) into two parts: a high-level summation sig-map and a low-level complementary sig-map (comp-sig-map). Such a structured representation of the sig-map allows us to improve its error-resilient property at the price of only a slight sacrifice in compression efficiency. For example, we have found that a fixed-length coding of the comp-sig-map in the prog-sig-map renders 64% of the coded bitstream insensitive to bit errors, compared with 40% for the conventional sig-map. Simulation results have shown that the prog-sig-map can achieve highly competitive rate-distortion performance for binary symmetric channels while maintaining low computational complexity. Moreover, we note that the prog-sig-map is complementary to existing independent-packetization and channel-coding-based error-resilient approaches and readily lends itself to other source coding applications such as distributed video coding.
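
    A toy illustration of the two-part structure (a block-level summary map plus a fixed-length-coded complementary map); the real prog-sig-map lives inside an SPC coder, which this sketch omits entirely:

```python
import numpy as np

def prog_sig_map(sig, block=4):
    """Split a significance map into a high-level summary map and a
    fixed-length complementary map (toy version of the two-part idea)."""
    sig = np.asarray(sig, dtype=np.uint8)
    blocks = sig.reshape(-1, block)                  # length must divide
    summary = blocks.any(axis=1).astype(np.uint8)    # high-level map
    # Fixed-length coding: emit each significant block's bits verbatim,
    # so a bit error corrupts only that one block.
    comp = blocks[summary.astype(bool)].ravel()
    return summary, comp

summary, comp = prog_sig_map([0,0,0,0, 1,0,0,0, 0,0,1,1, 0,0,0,0])
print(summary)   # [0 1 1 0]: which blocks contain significant coefficients
print(comp)      # [1 0 0 0 0 0 1 1]: bits of the two significant blocks
```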

  14. Patient identification errors are common in a simulated setting.

    Science.gov (United States)

    Henneman, Philip L; Fisher, Donald L; Henneman, Elizabeth A; Pham, Tuan A; Campbell, Megan M; Nathanson, Brian H

    2010-06-01

    We evaluate the frequency and accuracy of health care workers verifying patient identity before performing common tasks. The study included prospective, simulated patient scenarios with an eye-tracking device that showed where the health care workers looked. Simulations involved nurses administering an intravenous medication, technicians labeling a blood specimen, and clerks applying an identity band. Participants were asked to perform their assigned task on 3 simulated patients, and the third patient had a different date of birth and medical record number than the identity information on the artifact label specific to the health care worker's task. Health care workers were unaware that the focus of the study was patient identity. Sixty-one emergency health care workers participated--28 nurses, 16 technicians, and 17 emergency service associates--in 183 patient scenarios. Sixty-one percent of health care workers (37/61) caught the identity error (61% nurses, 94% technicians, 29% emergency service associates). Thirty-nine percent of health care workers (24/61) performed their assigned task on the wrong patient (39% nurses, 6% technicians, 71% emergency service associates). Eye-tracking data were available for 73% of the patient scenarios (133/183). Seventy-four percent of health care workers (74/100) failed to match the patient to the identity band (87% nurses, 49% technicians). Twenty-seven percent of health care workers (36/133) failed to match the artifact to the patient or the identity band before performing their task (33% nurses, 9% technicians, 33% emergency service associates). Fifteen percent (5/33) of health care workers who completed the steps to verify patient identity on the patient with the identification error still failed to recognize the error. Wide variation exists among health care workers verifying patient identity before performing everyday tasks. Education, process changes, and technology are needed to improve the frequency and accuracy of patient identity verification.

  15. Influence of Head Motion on the Accuracy of 3D Reconstruction with Cone-Beam CT: Landmark Identification Errors in Maxillofacial Surface Model.

    Directory of Open Access Journals (Sweden)

    Kyung-Min Lee

    The purpose of this study was to investigate the influence of head motion on the accuracy of three-dimensional (3D) reconstruction with cone-beam computed tomography (CBCT) scans. Fifteen dry skulls were incorporated into a motion controller which simulated four types of head motion during CBCT scanning: 2 horizontal rotations (to the right/to the left) and 2 vertical rotations (upward/downward). Each movement was triggered to occur at the start of the scan for 1 second by remote control. Four maxillofacial surface models with head motion and one control surface model without motion were obtained for each skull. Nine landmarks were identified on the five maxillofacial surface models for each skull, and landmark identification errors were compared between the control model and each of the models with head motion. Rendered surface models with head motion were similar to the control model in appearance; however, the landmark identification errors were larger in models with head motion than in the control. In particular, the Porion in the horizontal rotation models presented statistically significant differences (P < .05). A statistically significant difference in the errors between the right- and left-side landmarks was present for the left rotation, which was opposite to the direction of the scanner rotation (P < .05). Patient movement during CBCT scanning might cause landmark identification errors on the 3D surface model in relation to the direction of the scanner rotation. Clinicians should take this into consideration and prevent patient movement during CBCT scans, particularly horizontal movement.

  16. Is it possible to eliminate patient identification errors in medical imaging?

    Science.gov (United States)

    Danaher, Luke A; Howells, Joan; Holmes, Penny; Scally, Peter

    2011-08-01

    The aim of this article is to review a system that validates and documents the process of ensuring the correct patient, correct site and side, and correct procedure (commonly referred to as the 3 C's) within medical imaging. A 4-step patient identification and procedure matching process was developed using health care and aviation models. The process was established in medical imaging departments after a successful interventional radiology pilot program. The success of the project was evaluated using compliance audit data, incident reporting data before and after the implementation of the process, and a staff satisfaction survey. There was 95% to 100% verification of site and side and 100% verification of correct patient, procedure, and consent. Correct patient data and side markers were present in 82% to 95% of cases. The number of incidents before and after the implementation of the 3 C's was difficult to assess because of a change in reporting systems and incident underreporting. More incidents are being reported, particularly "near misses." All near misses were related to incorrect patient identification stickers being placed on request forms. The majority of staff members surveyed found the process easy (55.8%), quick (47.7%), relevant (51.7%), and useful (60.9%). Although identification error is difficult to eliminate, practical initiatives can engender significant systems improvement in complex health care environments. Crown Copyright © 2011. Published by Elsevier Inc. All rights reserved.

  17. Five-way smoking status classification using text hot-spot identification and error-correcting output codes.

    Science.gov (United States)

    Cohen, Aaron M

    2008-01-01

    We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric. Our best performing system achieved a micro-F1 of 0.9000 on the test collection, equivalent to the best performing system submitted to the i2b2 challenge. Hot-spot identification, zero-vector filtering, classifier weighting, and error correcting output coding contributed additively to increased performance, with hot-spot identification having by far the largest positive effect. High performance on automatic identification of patient smoking status from discharge summaries is achievable with the efficient and straightforward machine learning techniques studied here.
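As an illustration of the two techniques with the largest reported impact, hot-spot identification and error-correcting output codes, a minimal sketch using scikit-learn follows; the trigger-term lexicon, toy records, and three-class setup are illustrative assumptions rather than the paper's actual configuration.

```python
# Sketch: hot-spot extraction followed by an error-correcting output code
# (ECOC) classifier. Lexicon and records are assumed, not the paper's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import LinearSVC

HOT_SPOT_TERMS = ("smok", "tobacco", "cigarette")  # assumed trigger terms

def hot_spots(document: str) -> str:
    """Keep only sentences that mention a smoking-related trigger term."""
    sentences = document.split(".")
    hits = [s for s in sentences if any(t in s.lower() for t in HOT_SPOT_TERMS)]
    return ". ".join(hits) if hits else document  # fall back to the full text

docs = ["Patient quit smoking two years ago. Lungs are clear.",
        "Denies tobacco use. No cough reported.",
        "Current smoker, one pack per day. Advised cessation."]
labels = ["past smoker", "non-smoker", "current smoker"]

vec = TfidfVectorizer()
X = vec.fit_transform([hot_spots(d) for d in docs]).toarray()
# ECOC reduces the multiclass problem to several binary problems; the label
# whose code word is nearest to the binary outputs wins.
clf = OutputCodeClassifier(LinearSVC(), code_size=4, random_state=0).fit(X, labels)
test = vec.transform([hot_spots("The patient is still smoking daily.")]).toarray()
print(clf.predict(test))
```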

  18. Validation of the measurement model concept for error structure identification

    International Nuclear Information System (INIS)

    Shukla, Pavan K.; Orazem, Mark E.; Crisalle, Oscar D.

    2004-01-01

The development of different forms of measurement models for impedance has allowed examination of key assumptions on which the use of such models to assess error structure are based. The stochastic error structures obtained using the transfer-function and Voigt measurement models were identical, even when non-stationary phenomena caused some of the data to be inconsistent with the Kramers-Kronig relations. The suitability of the measurement model for assessment of consistency with the Kramers-Kronig relations, however, was found to be more sensitive to the confidence interval for the parameter estimates than to the number of parameters in the model. A tighter confidence interval was obtained for the Voigt measurement model, which made the Voigt measurement model a more sensitive tool for identification of inconsistencies with the Kramers-Kronig relations.
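For readers unfamiliar with the terminology, the Voigt measurement model mentioned above is conventionally written as a series of resistor-capacitor elements; this is the standard textbook form, not an equation reproduced from the record.

```latex
% Voigt measurement model: ohmic resistance R_0 in series with K Voigt
% (parallel RC) elements with resistances R_k and time constants \tau_k,
% fitted to the measured impedance spectrum Z(\omega).
Z(\omega) = R_0 + \sum_{k=1}^{K} \frac{R_k}{1 + j\,\omega\,\tau_k}
```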

  19. Specimen Identification Errors in Breast Biopsies: Age Matters. Report of Two Near-Miss Events and Review of the Literature.

    Science.gov (United States)

    Tozbikian, Gary; Gemignani, Mary L; Brogi, Edi

    2017-09-01

    The consequences of patient identification errors due to specimen mislabeling can be deleterious. We describe two near-miss events involving mislabeled breast specimens from two patients who sought treatment at our institution. In both cases, microscopic review of the slides identified inconsistencies between the histologic findings and patient age, unveiling specimen identification errors. By correlating the clinical information with the microscopic findings, we identified mistakes that had occurred at the time of specimen accessioning at the original laboratories. In both cases, thanks to a timely reassignment of the specimens, the patients suffered no harm. These cases highlight the importance of routine clinical and pathologic correlation as a critical component of quality assurance and patient safety. A review of possible specimen identification errors in the anatomic pathology setting is presented. © 2017 Wiley Periodicals, Inc.

  20. Analysis of human error and organizational deficiency in events considering risk significance

    International Nuclear Information System (INIS)

    Lee, Yong Suk; Kim, Yoonik; Kim, Say Hyung; Kim, Chansoo; Chung, Chang Hyun; Jung, Won Dea

    2004-01-01

In this study, we analyzed human and organizational deficiencies in the trip events of Korean nuclear power plants. K-HPES items were used in human error analysis, and the organizational factors by Jacobs and Haber were used for organizational deficiency analysis. We proposed the use of conditional core damage probability (CCDP) as a risk measure to consider risk information in prioritizing K-HPES items and organizational factors. Until now, the risk significance of events has not been considered in human error and organizational deficiency analysis. Considering the risk significance of events in the process of analysis is necessary for effective enhancement of nuclear power plant safety by focusing on the causes of human error and organizational deficiencies that are associated with significant risk.

  1. The impact of measurement errors in the identification of regulatory networks

    Directory of Open Access Journals (Sweden)

    Sato João R

    2009-12-01

Full Text Available Abstract Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction describes a combination of measurement error in mathematical regulatory networks and shows how to identify these networks under different rates of noise. Results: This article investigates the effects of measurement error on the estimation of the parameters in regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, the measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator in independent (regression) models and dependent (autoregressive) models when the variables are subject to noise. Moreover, measurement error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models; thus, it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
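The attenuation bias alluded to above ("biasing them as predicted by the theory") has a compact textbook form in the simplest case of a single noisy regressor; the notation below is standard and only meant to fix ideas, not to restate the article's models.

```latex
% Regression y = \beta x + e with x observed only as w = x + u (classical
% error, u independent of x and e): the naive OLS slope shrinks toward zero
% by the reliability ratio \lambda, motivating a corrected estimator.
\hat{\beta}_{\mathrm{OLS}} \;\xrightarrow{\;p\;}\; \lambda\beta,
\qquad
\lambda = \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2},
\qquad
\hat{\beta}_{\mathrm{corr}} = \hat{\beta}_{\mathrm{OLS}} \,/\, \hat{\lambda}
```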

  2. Estimation of Species Identification Error: Implications for Raptor Migration Counts and Trend Estimation

    Science.gov (United States)

    J.M. Hull; A.M. Fish; J.J. Keane; S.R. Mori; B.J Sacks; A.C. Hull

    2010-01-01

    One of the primary assumptions associated with many wildlife and population trend studies is that target species are correctly identified. This assumption may not always be valid, particularly for species similar in appearance to co-occurring species. We examined size overlap and identification error rates among Cooper's (Accipiter cooperii...

  3. Identification and estimation of nonlinear models using two samples with nonclassical measurement errors

    KAUST Repository

    Carroll, Raymond J.

    2010-05-01

    This paper considers identification and estimation of a general nonlinear Errors-in-Variables (EIV) model using two samples. Both samples consist of a dependent variable, some error-free covariates, and an error-prone covariate, for which the measurement error has unknown distribution and could be arbitrarily correlated with the latent true values; and neither sample contains an accurate measurement of the corresponding true variable. We assume that the regression model of interest - the conditional distribution of the dependent variable given the latent true covariate and the error-free covariates - is the same in both samples, but the distributions of the latent true covariates vary with observed error-free discrete covariates. We first show that the general latent nonlinear model is nonparametrically identified using the two samples when both could have nonclassical errors, without either instrumental variables or independence between the two samples. When the two samples are independent and the nonlinear regression model is parameterized, we propose sieve Quasi Maximum Likelihood Estimation (Q-MLE) for the parameter of interest, and establish its root-n consistency and asymptotic normality under possible misspecification, and its semiparametric efficiency under correct specification, with easily estimated standard errors. A Monte Carlo simulation and a data application are presented to show the power of the approach.

  4. Self-identification and empathy modulate error-related brain activity during the observation of penalty shots between friend and foe

    Science.gov (United States)

    Ganesh, Shanti; van Schie, Hein T.; De Bruijn, Ellen R. A.; Bekkering, Harold

    2009-01-01

The ability to detect and process errors made by others plays an important role in many social contexts. The capacity to process errors is typically found to rely on sites in the medial frontal cortex. However, it remains to be determined whether responses at these sites are driven primarily by action errors themselves or by the affective consequences normally associated with their commission. Using an experimental paradigm that disentangles action errors and the valence of their affective consequences, we demonstrate that sites in the medial frontal cortex (MFC), including the ventral anterior cingulate cortex (vACC) and pre-supplementary motor area (pre-SMA), respond to action errors independent of the valence of their consequences. The strength of this response was negatively correlated with the empathic concern subscale of the Interpersonal Reactivity Index. We also demonstrate a main effect of self-identification by showing that errors committed by friends and foes elicited significantly different BOLD responses in a separate region of the middle anterior cingulate cortex (mACC). These results suggest that the way we look at others plays a critical role in determining patterns of brain activation during error observation. These findings may have important implications for general theories of error processing. PMID:19015079

  5. Identification errors in the blood transfusion laboratory: a still relevant issue for patient safety.

    Science.gov (United States)

    Lippi, Giuseppe; Plebani, Mario

    2011-04-01

    Remarkable technological advances and increased awareness have both contributed to decrease substantially the uncertainty of the analytical phase, so that the manually intensive preanalytical activities currently represent the leading sources of errors in laboratory and transfusion medicine. Among preanalytical errors, misidentification and mistransfusion are still regarded as a considerable problem, posing serious risks for patient health and carrying huge expenses for the healthcare system. As such, a reliable policy of risk management should be readily implemented, developing through a multifaceted approach to prevent or limit the adverse outcomes related to transfusion reactions from blood incompatibility. This strategy encompasses root cause analysis, compliance with accreditation requirements, strict adherence to standard operating procedures, guidelines and recommendations for specimen collection, use of positive identification devices, rejection of potentially misidentified specimens, informatics data entry, query host communication, automated systems for patient identification and sample labeling and an adequate and safe environment. Copyright © 2011 Elsevier Ltd. All rights reserved.

  6. Automatic detection of patient identification and positioning errors in radiation therapy treatment using 3-dimensional setup images.

    Science.gov (United States)

    Jani, Shyam S; Low, Daniel A; Lamb, James M

    2015-01-01

To develop an automated system that detects patient identification and positioning errors between 3-dimensional computed tomography (CT) and kilovoltage CT planning images. Planning kilovoltage CT images were collected for head and neck (H&N), pelvis, and spine treatments with corresponding 3-dimensional cone beam CT and megavoltage CT setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. For positioning errors, setup and planning images were misaligned by 1 to 5 cm in the 6 anatomical directions for H&N and pelvis patients. Spinal misalignments were simulated by misaligning to adjacent vertebral bodies. Image pairs were assessed using commonly used image similarity metrics as well as custom-designed metrics. Linear discriminant analysis classification models were trained and tested on the imaging datasets, and misclassification error (MCE), sensitivity, and specificity parameters were estimated using 10-fold cross-validation. For patient identification, our workflow produced MCE estimates of 0.66%, 1.67%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivity and specificity ranged from 97.5% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 95.4% and 97.7%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. Two-centimeter MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. MCEs for vertebral body misalignments were 4.8% and 3.6% for TomoTherapy and TrueBeam images, respectively. Patient identification and gross misalignment errors can be robustly and automatically detected using 3-dimensional setup images of different energies across 3 commonly treated anatomical sites. Copyright © 2015 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
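A rough sketch of this kind of pipeline is given below: each setup/planning pair is summarized by simple similarity features (mutual information and correlation, as stand-ins for the study's metrics) and classified with linear discriminant analysis; the synthetic images and feature choices are assumptions for illustration only.

```python
# Sketch: score (setup, planning) image pairs with similarity features,
# then flag wrong-patient pairs with linear discriminant analysis.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def mutual_information(a, b, bins=32):
    """Histogram-based mutual information, a common image similarity metric."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    outer = p.sum(axis=1, keepdims=True) @ p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / outer[nz])).sum())

def pair_features(setup, plan):
    corr = np.corrcoef(setup.ravel(), plan.ravel())[0, 1]
    return [mutual_information(setup, plan), float(corr)]

rng = np.random.default_rng(0)
X, y = [], []
for _ in range(20):
    plan = rng.random((64, 64))
    X.append(pair_features(plan + 0.05 * rng.normal(size=plan.shape), plan))
    y.append(1)  # matched pair: same patient, slight setup noise
    X.append(pair_features(rng.random((64, 64)), plan))
    y.append(0)  # mismatched pair: simulated wrong-patient registration
clf = LinearDiscriminantAnalysis().fit(X, y)
print(clf.predict([pair_features(plan + 0.05 * rng.normal(size=plan.shape), plan)]))
```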

  7. Selectively Fortifying Reconfigurable Computing Device to Achieve Higher Error Resilience

    Directory of Open Access Journals (Sweden)

    Mingjie Lin

    2012-01-01

Full Text Available With the advent of 10 nm CMOS devices and "exotic" nanodevices, the location and occurrence time of hardware defects and design faults become increasingly unpredictable, therefore posing severe challenges to existing techniques for error-resilient computing, because most of them statically assign hardware redundancy and do not account for the error tolerance inherently existing in many mission-critical applications. This work proposes a novel approach to selectively fortifying a target reconfigurable computing device in order to achieve hardware-efficient error resilience for a specific target application. We intend to demonstrate that such error resilience can be significantly improved with effective hardware support. The major contributions of this work include (1) the development of a complete methodology to perform sensitivity and criticality analysis of hardware redundancy, (2) a novel problem formulation and an efficient heuristic methodology to selectively allocate hardware redundancy among a target design's key components in order to maximize its overall error resilience, and (3) an academic prototype of a selectively fortified computing (SFC) device that illustrates a 4 times improvement of error resilience for an H.264 encoder implemented with an FPGA device.

  8. Identification of Special Patterns of Numerical Typographic Errors Increases the Likelihood of Finding a Misplaced Patient File

    OpenAIRE

    Sun, Ying-Chou; Tang, Dah-Dian; Zeng, Qing; Greenes, Robert

    2002-01-01

When a typographic error of a patient identification number occurs on a patient document, such as an envelope for radiology films or the cover of a patient record, it will result in misplacement of the document. Once misplaced, such documents are often extremely difficult to recover. After analyzing 290 numerical typos, we found that errors do not occur randomly. Instead, many of the typos share certain specific patterns. Six major types of non-random numeral typographic error patterns have been identified.
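A checker for a few classic numeric typo classes conveys the idea; the abstract does not enumerate the six pattern types found, so the categories below are common examples (substitution, adjacent transposition, omission, insertion), not the authors' taxonomy.

```python
# Classify the relationship between an intended ID and a typed ID into a
# few classic typo patterns; "other" covers anything outside these classes.
def typo_pattern(correct: str, typed: str) -> str:
    if correct == typed:
        return "exact match"
    if len(correct) == len(typed):
        diffs = [i for i, (a, b) in enumerate(zip(correct, typed)) if a != b]
        if len(diffs) == 1:
            return "single substitution"
        if (len(diffs) == 2 and diffs[1] == diffs[0] + 1
                and correct[diffs[0]] == typed[diffs[1]]
                and correct[diffs[1]] == typed[diffs[0]]):
            return "adjacent transposition"
    if len(typed) == len(correct) - 1 and any(
            correct[:i] + correct[i + 1:] == typed for i in range(len(correct))):
        return "omission"
    if len(typed) == len(correct) + 1 and any(
            typed[:i] + typed[i + 1:] == correct for i in range(len(typed))):
        return "insertion"
    return "other"

# A search for a misplaced file could rank candidate IDs whose deviation
# from the expected number falls in one of the non-random classes above.
print(typo_pattern("4821937", "4821397"))  # adjacent transposition
```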

  9. Identification of failure sequences sensitive to human error

    International Nuclear Information System (INIS)

    1987-06-01

This report, prepared by the participants of the technical committee meeting on "Identification of Failure Sequences Sensitive to Human Error", addresses the subjects discussed during the meeting and the conclusions reached by the committee. Chapter 1 reviews the INSAG recommendations and the main elements of the IAEA Programme in the area of the human element. In Chapter 2, the role of human actions in nuclear power plant safety is reviewed from insights of operational experience. Chapter 3 is concerned with the relationship between probabilistic safety assessment and human performance associated with severe accident sequences. Chapter 4 addresses the role of simulators in view of training for accident conditions. Chapter 5 presents the conclusions and future trends. The seven papers presented by members of this technical committee are also included in this technical document. A separate abstract was prepared for each of these papers.

  10. The statistical significance of error probability as determined from decoding simulations for long codes

    Science.gov (United States)

    Massey, J. L.

    1976-01-01

    The very low error probability obtained with long error-correcting codes results in a very small number of observed errors in simulation studies of practical size and renders the usual confidence interval techniques inapplicable to the observed error probability. A natural extension of the notion of a 'confidence interval' is made and applied to such determinations of error probability by simulation. An example is included to show the surprisingly great significance of as few as two decoding errors in a very large number of decoding trials.
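For orientation, the exact binomial bound that the usual approach would give, and that the paper's extended "confidence interval" notion generalizes, is shown below in its standard (Clopper-Pearson) form; this is background, not the paper's construction.

```latex
% Exact (Clopper-Pearson) upper bound p_U at level 1 - \alpha after
% observing k decoding errors in N independent trials; with k = 0 and
% \alpha = 0.05 it reduces to the familiar "rule of three".
\sum_{i=0}^{k} \binom{N}{i}\, p_U^{\,i} (1 - p_U)^{N-i} = \alpha,
\qquad
p_U \approx \frac{3}{N} \quad (k = 0,\ \alpha = 0.05)
```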

  11. External Stakeholders of Higher Education Institutions in Poland: Their Identification and Significance

    Science.gov (United States)

    Piotrowska-Piatek, Agnieszka

    2017-01-01

In the context of the ongoing changes in the management systems of higher education, the issue of higher education institutions' (HEIs) relationships with external stakeholders is of key importance. This article discusses this problem from the perspective of the Polish higher education system. The aim of it is to answer the following questions: (1)…

  12. Identification of Special Patterns of Numerical Typographic Errors to Increase the Likelihood of Finding a Misplaced Patient File

    OpenAIRE

    Sun, Ying-Chou; Tang, Dah-Dian; Zeng, Qing; Greenes, Robert

    2001-01-01

When a typographic error of a patient identification number occurs on a patient document, such as an envelope for radiology films or the cover of a patient record, it will result in misplacement of the document. Once misplaced, such documents are often extremely difficult to recover. After analyzing 290 numerical typos, we found that errors do not occur randomly. Instead, many of the typos share certain specific patterns. Six major types of non-random numeral typographic error patterns have been identified.

  13. Color-motion feature-binding errors are mediated by a higher-order chromatic representation.

    Science.gov (United States)

    Shevell, Steven K; Wang, Wei

    2016-03-01

Peripheral and central moving objects of the same color may be perceived to move in the same direction even though peripheral objects have a different true direction of motion [Nature 429, 262 (2004), doi:10.1038/429262a]. The perceived, illusory direction of peripheral motion is a color-motion feature-binding error. Recent work shows that such binding errors occur even without an exact color match between central and peripheral objects, and, moreover, the frequency of the binding errors in the periphery declines as the chromatic difference increases between the central and peripheral objects [J. Opt. Soc. Am. A 31, A60 (2014), doi:10.1364/JOSAA.31.000A60]. This change in the frequency of binding errors with the chromatic difference raises the general question of the chromatic representation from which the difference is determined. Here, basic properties of the chromatic representation are tested to discover whether it depends on independent chromatic differences on the l and the s cardinal axes or, alternatively, on a more specific higher-order chromatic representation. Experimental tests compared the rate of feature-binding errors when the central and peripheral colors had the identical s chromaticity (so zero difference in s) and a fixed magnitude of l difference, while varying the identical s level in center and periphery (thus always keeping the s difference at zero). A chromatic representation based on independent l and s differences would result in the same frequency of color-motion binding errors at every s level. The results are contrary to this prediction, thus showing that the chromatic representation at the level of color-motion feature binding depends on a higher-order chromatic mechanism.

  14. Olfactory identification in patients with schizophrenia - the influence of β-endorphin and calcitonin gene-related peptide concentrations.

    Science.gov (United States)

    Urban-Kowalczyk, M; Śmigielski, J; Strzelecki, D

    2017-03-01

The relationship between the olfactory system and emotional processing is an area of growing interest in schizophrenia research. Both the orbitofrontal cortex and amygdala are involved in the processing of olfactory information, and olfactory deficits may also be influenced by endogenous opioids and calcitonin gene-related peptide (CGRP), which is probably involved in dopaminergic transmission. However, the relationship between endorphins and dopaminergic transmission has not been fully explored. Odor identification performance and valence interaction were evaluated among 50 schizophrenic patients and 50 controls. Schizophrenia symptoms were assessed using the Positive and Negative Syndrome Scale (PANSS). All study participants were subjected to the University of Pennsylvania Smell Identification Test (UPSIT) and blood β-endorphin (BE) and CGRP measurement. Nonsignificantly higher BE concentrations were observed in the patient group, while significantly higher UPSIT scores were seen in controls (mean UPSIT 32.48 vs 26.82). The patients demonstrated significantly more identification errors for pleasant (P = 0.000) and neutral (P = 0.055) odors than for unpleasant odors. Patients with higher BE concentrations made more identification errors concerning pleasant (Rs = -0.292; P = 0.04) and neutral odors (Rs = -0.331; P = 0.019). Although the concentration of CGRP was significantly higher in the patient sample, negative correlations were found with identification by valence for pleasant and neutral odors (UPSIT n/16: Rs = -0.450, P = 0.001; UPSIT n/15: Rs = -0.586, P = 0.000), and a weak negative correlation between PANSS N score and identification of unpleasant odors (UPSIT n/9: Rs = -0.325, P = 0.021). Schizophrenic patients present a unique pattern of smell identification characterized by aberrant hedonic ratings for pleasant odors but not unpleasant ones. Individuals with predominant negative symptoms and higher BE concentrations are most able to identify negative odors. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  15. Clinical significance of multi-leaf collimator calibration errors

    International Nuclear Information System (INIS)

    Norvill, Craig; Jenetsky, Guy

    2016-01-01

This planning study investigates the clinical impact of multi-leaf collimator (MLC) calibration errors on three common treatment sites: head and neck (H&N), prostate, and stereotactic body radiotherapy (SBRT) for lung. All plans were delivered using either volumetric modulated arc therapy or dynamic MLC techniques. Five patient plans were retrospectively selected from each treatment site, and MLC errors were intentionally introduced. MLC errors of 0.7, 0.4 and 0.2 mm were sufficient to cause major violations in the PTV planning criteria for the H&N, prostate and SBRT lung plans, respectively. Mean PTV dose followed a linear trend with MLC error, increasing at rates of 3.2% to 5.9% per millimeter depending on treatment site. The results indicate that an MLC quality assurance program that provides sub-millimeter accuracy is an important component of intensity modulated radiotherapy delivery techniques.
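The reported trend can be written compactly; the slope range below is taken directly from the abstract.

```latex
% Mean PTV dose error as a linear function of the MLC calibration error
% \epsilon (in mm), with a site-dependent slope k.
\Delta \bar{D}_{\mathrm{PTV}} \approx k\,\epsilon,
\qquad
k \in [3.2,\ 5.9]\ \%/\mathrm{mm}
```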

  16. Study of on-machine error identification and compensation methods for micro machine tools

    International Nuclear Information System (INIS)

    Wang, Shih-Ming; Yu, Han-Jen; Lee, Chun-Yi; Chiu, Hung-Sheng

    2016-01-01

Micro machining plays an important role in the manufacturing of miniature products which are made of various materials with complex 3D shapes and tight machining tolerance. To further improve the accuracy of a micro machining process without increasing the manufacturing cost of a micro machine tool, an effective machining error measurement method and a software-based compensation method are essential. To avoid introducing additional errors caused by the re-installment of the workpiece, the measurement and compensation method should be conducted on-machine. In addition, because the contour of a miniature workpiece machined with a micro machining process is very tiny, the measurement method should be non-contact. By integrating the image re-constructive method, camera pixel correction, coordinate transformation, the error identification algorithm, and the trajectory auto-correction method, a vision-based error measurement and compensation method that can inspect micro machining errors on-machine and automatically generate an error-corrected numerical control (NC) program for error compensation was developed in this study. With the use of the Canny edge detection algorithm and camera pixel calibration, the edges of the contour of a machined workpiece were identified and used to re-construct the actual contour of the workpiece. The actual contour was then mapped to the theoretical contour to identify the actual cutting points and compute the machining errors. With the use of a moving matching window and calculation of the similarity between the actual and theoretical contours, the errors between the actual cutting points and theoretical cutting points were calculated and used to correct the NC program. With the use of the error-corrected NC program, the accuracy of a micro machining process can be effectively improved. To prove the feasibility and effectiveness of the proposed methods, micro-milling experiments were conducted on a micro machine tool, and the results confirmed that the proposed methods can effectively improve machining accuracy.
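A minimal sketch of the edge-detection and error-computation steps, using OpenCV, is shown below; the image file, calibration factor, and nearest-point error definition are assumptions for illustration, not the study's exact algorithm.

```python
# Sketch: recover the machined contour with Canny edges, then measure
# per-point deviations from the theoretical contour.
import cv2
import numpy as np

MM_PER_PIXEL = 0.002  # assumed calibration factor (hypothetical)

img = cv2.imread("workpiece.png", cv2.IMREAD_GRAYSCALE)  # hypothetical image
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
# Largest contour taken as the workpiece outline, converted to millimeters.
actual = max(contours, key=cv2.contourArea).reshape(-1, 2) * MM_PER_PIXEL

def machining_errors(actual_pts: np.ndarray, theoretical_pts: np.ndarray) -> np.ndarray:
    """Distance from each theoretical cutting point to the nearest actual
    contour point; these residuals would drive the NC program correction."""
    d = np.linalg.norm(actual_pts[None, :, :] - theoretical_pts[:, None, :], axis=2)
    return d.min(axis=1)
```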

  17. Type I Error Inflation in DIF Identification with Mantel-Haenszel: An Explanation and a Solution

    Science.gov (United States)

    Magis, David; De Boeck, Paul

    2014-01-01

    It is known that sum score-based methods for the identification of differential item functioning (DIF), such as the Mantel-Haenszel (MH) approach, can be affected by Type I error inflation in the absence of any DIF effect. This may happen when the items differ in discrimination and when there is item impact. On the other hand, outlier DIF methods…
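For reference, the MH procedure at issue is built on the common odds ratio estimated across total-score strata, conventionally reported on the ETS delta scale; the formulas below are the standard ones, not reproduced from the article.

```latex
% Mantel-Haenszel common odds ratio across score strata k, where A_k, B_k
% are the reference group's correct/incorrect counts, C_k, D_k the focal
% group's, and N_k the stratum total; \Delta_MH is the ETS delta effect.
\hat{\alpha}_{\mathrm{MH}} = \frac{\sum_k A_k D_k / N_k}{\sum_k B_k C_k / N_k},
\qquad
\Delta_{\mathrm{MH}} = -2.35 \,\ln \hat{\alpha}_{\mathrm{MH}}
```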

  18. Exploring human error in military aviation flight safety events using post-incident classification systems.

    Science.gov (United States)

    Hooper, Brionny J; O'Hare, David P A

    2013-08-01

    Human error classification systems theoretically allow researchers to analyze postaccident data in an objective and consistent manner. The Human Factors Analysis and Classification System (HFACS) framework is one such practical analysis tool that has been widely used to classify human error in aviation. The Cognitive Error Taxonomy (CET) is another. It has been postulated that the focus on interrelationships within HFACS can facilitate the identification of the underlying causes of pilot error. The CET provides increased granularity at the level of unsafe acts. The aim was to analyze the influence of factors at higher organizational levels on the unsafe acts of front-line operators and to compare the errors of fixed-wing and rotary-wing operations. This study analyzed 288 aircraft incidents involving human error from an Australasian military organization occurring between 2001 and 2008. Action errors accounted for almost twice (44%) the proportion of rotary wing compared to fixed wing (23%) incidents. Both classificatory systems showed significant relationships between precursor factors such as the physical environment, mental and physiological states, crew resource management, training and personal readiness, and skill-based, but not decision-based, acts. The CET analysis showed different predisposing factors for different aspects of skill-based behaviors. Skill-based errors in military operations are more prevalent in rotary wing incidents and are related to higher level supervisory processes in the organization. The Cognitive Error Taxonomy provides increased granularity to HFACS analyses of unsafe acts.

  19. Review of advances in human reliability analysis of errors of commission, Part 1: EOC identification

    International Nuclear Information System (INIS)

    Reer, Bernhard

    2008-01-01

In close connection with examples relevant to contemporary probabilistic safety assessment (PSA), a review of advances in human reliability analysis (HRA) of post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions, has been carried out. The review comprises both EOC identification (part 1) and quantification (part 2); part 1 is presented in this article. Emerging HRA methods addressing the problem of EOC identification are: A Technique for Human Event Analysis (ATHEANA), the EOC HRA method developed by Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), the Misdiagnosis Tree Analysis (MDTA) method, and the Commission Errors Search and Assessment (CESA) method. Most of the EOCs referred to in predictive studies comprise the stop of running or the inhibition of anticipated functions; a few comprise the start of a function. The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios and uses procedures and importance measures as key sources of input information, provides a formalized way of identifying relatively important scenarios with EOC opportunities. In the implementation, however, attention should be paid to EOCs associated with familiar but non-procedural actions and to EOCs leading to failures of manually initiated safety functions.

  20. Identification of Hypertension Management-related Errors in a Personal Digital Assistant-based Clinical Log for Nurses in Advanced Practice Nurse Training

    Directory of Open Access Journals (Sweden)

    Nam-Ju Lee, DNSc, RN

    2010-03-01

    Conclusion: The Hypertension Diagnosis and Management Error Taxonomy was useful for identifying errors based on documentation in a clinical log. The results provide an initial understanding of the nature of errors associated with hypertension diagnosis and management of nurses in APN training. The information gained from this study can contribute to educational interventions that promote APN competencies in identification and management of hypertension as well as overall patient safety and informatics competencies.

  1. A critical reflection on subjectivity in examination of higher degrees

    Directory of Open Access Journals (Sweden)

    Collins C Ngwakwe

    2015-11-01

Full Text Available This paper is a critical reflection on seemingly embedded subjectivity in the external examination of higher degrees. The paper is significant given that education is a vital pillar of sustainable development; hence, identification of obscure obstacles to this goal is imperative for an equitable and sustainable education that is devoid of class, race and gender. Adopting a critical review approach, the paper reviewed related research that highlights apparent subjectivity amongst some examiners of higher degrees. Findings show a regrettable and seemingly obscured subjectivity and/or misjudgement that constitutes an impediment in the examination process for higher degrees. The paper highlights that whilst misjudgement or error is innate in every human endeavour, including higher degree examination, an error caused by an examiner's partisanship and/or maladroitness in the research focus may be avoidable. In conclusion, the paper stresses that prejudice or ineptitude in higher degree examination should be bridled by, inter alia, implementing a policy of alternative assessors, checking the pedigree of an examiner's assessment experience, and giving the supervisor(s) an opportunity to present a rebuttal in circumstances where one examiner's opinion is fraught with apparent subjectivity.

  2. Errors of first-order probe correction for higher-order probes in spherical near-field antenna measurements

    DEFF Research Database (Denmark)

    Laitinen, Tommi; Nielsen, Jeppe Majlund; Pivnenko, Sergiy

    2004-01-01

An investigation is performed to study the error of the far-field pattern determined from a spherical near-field antenna measurement in the case where a first-order (μ = ±1) probe correction scheme is applied to the near-field signal measured by a higher-order probe.

  3. Identification for Control

    DEFF Research Database (Denmark)

    Tøffner-Clausen, S.

    1995-01-01

Identification of model error bounds for robust control design has recently received much attention.

  4. Five-way Smoking Status Classification Using Text Hot-Spot Identification and Error-correcting Output Codes

    OpenAIRE

    Cohen, Aaron M.

    2008-01-01

We participated in the i2b2 smoking status classification challenge task. The purpose of this task was to evaluate the ability of systems to automatically identify patient smoking status from discharge summaries. Our submission included several techniques that we compared and studied, including hot-spot identification, zero-vector filtering, inverse class frequency weighting, error-correcting output codes, and post-processing rules. We evaluated our approaches using the same methods as the i2b2 task organizers, using micro- and macro-averaged F1 as the primary performance metric.

  5. Decisions to shoot in a weapon identification task: The influence of cultural stereotypes and perceived threat on false positive errors.

    Science.gov (United States)

    Fleming, Kevin K; Bandy, Carole L; Kimble, Matthew O

    2010-01-01

    The decision to shoot a gun engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC), where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and electroencephalogram (EEG) activity were recorded. Cadets reacted more quickly and accurately when guns were primed by images of Middle-Eastern males wearing traditional clothing. However, cadets also made more false positive errors when tools were primed by these images. Error-related negativity (ERN) was measured for each response. Deeper ERNs were found in the medial-frontal cortex following false positive responses. Cadets who made fewer errors also produced deeper ERNs, indicating stronger executive control. Pupil size was used to measure autonomic arousal related to perceived threat. Images of Middle-Eastern males in traditional clothing produced larger pupil sizes. An image of Osama bin Laden induced the largest pupil size, as would be predicted for the exemplar of Middle East terrorism. Cadets who showed greater increases in pupil size also made more false positive errors. Regression analyses were performed to evaluate predictions based on current models of perceived threat, stereotype activation, and cognitive control. Measures of pupil size (perceived threat) and ERN (cognitive control) explained significant proportions of the variance in false positive errors to Middle-Eastern males in traditional clothing, while measures of reaction time, signal detection response bias, and stimulus discriminability explained most of the remaining variance.

  6. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    Science.gov (United States)

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.

  7. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea

    2013-03-16

We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our estimates hold without any restrictions on the time steps for dG with exact integration or Reynolds' quadrature. They involve a mild restriction on the time steps for the practical Runge-Kutta-Radau methods of any order. The key ingredients are the stability results shown earlier in Bonito et al. (Time-discrete higher order ALE formulations: stability, 2013) along with a novel ALE projection. Numerical experiments illustrate and complement our theoretical results. © 2013 Springer-Verlag Berlin Heidelberg.

  8. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate cortex (ACC) activity as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al., 2011) have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity) during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability) and/or individual variability (e.g., statistical power). We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors) or unaware (Unaware errors). Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices; however, in contrast to previous studies, including our own smaller-sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware) was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggest that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  9. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    Directory of Open Access Journals (Sweden)

    Zhongzhou Du

    2015-04-01

Full Text Available The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and the AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was also below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method.
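Interpreting the 80 dB threshold as an amplitude ratio in decibels (an assumption; the abstract does not state the dB convention), the parameter can be written as:

```latex
% Signal-to-AC-bias ratio in decibels (amplitude convention assumed) and
% the reported operating condition for sub-0.1 K temperature error.
\mathrm{SBR_{dB}} = 20 \log_{10} \frac{A_{\mathrm{signal}}}{A_{\mathrm{bias}}},
\qquad
\mathrm{SBR_{dB}} > 80\ \mathrm{dB} \;\Rightarrow\; |\Delta T| < 0.1\ \mathrm{K}
```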

  10. SU-E-T-261: Development of An Automated System to Detect Patient Identification and Positioning Errors Prior to Radiotherapy Treatment

    Energy Technology Data Exchange (ETDEWEB)

Jani, S; Low, D; Lamb, J [UCLA, Los Angeles, CA (United States)]

    2015-06-15

Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1 cm to 5 cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly-used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. 2-cm MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively. Conclusion: Patient identification and gross misalignment errors can be robustly and automatically detected using 3D setup images of different energies across three commonly treated anatomical sites.

  11. SU-E-T-261: Development of An Automated System to Detect Patient Identification and Positioning Errors Prior to Radiotherapy Treatment

    International Nuclear Information System (INIS)

    Jani, S; Low, D; Lamb, J

    2015-01-01

Purpose: To develop a system that can automatically detect patient identification and positioning errors using 3D computed tomography (CT) setup images and kilovoltage CT (kVCT) planning images. Methods: Planning kVCT images were collected for head-and-neck (H&N), pelvis, and spine treatments with corresponding 3D cone-beam CT (CBCT) and megavoltage CT (MVCT) setup images from TrueBeam and TomoTherapy units, respectively. Patient identification errors were simulated by registering setup and planning images from different patients. Positioning errors were simulated by misaligning the setup image by 1 cm to 5 cm in the six anatomical directions for H&N and pelvis patients. Misalignments for spine treatments were simulated by registering the setup image to adjacent vertebral bodies on the planning kVCT. A body contour of the setup image was used as an initial mask for image comparison. Images were pre-processed by image filtering and air voxel thresholding, and image pairs were assessed using commonly-used image similarity metrics as well as custom-designed metrics. A linear discriminant analysis classifier was trained and tested on the datasets, and misclassification error (MCE), sensitivity, and specificity estimates were generated using 10-fold cross validation. Results: Our workflow produced MCE estimates of 0.7%, 1.7%, and 0% for H&N, pelvis, and spine TomoTherapy images, respectively. Sensitivities and specificities ranged from 98.0% to 100%. MCEs of 3.5%, 2.3%, and 2.1% were obtained for TrueBeam images of the above sites, respectively, with sensitivity and specificity estimates between 96.2% and 98.4%. MCEs for 1-cm H&N/pelvis misalignments were 1.3%/5.1% and 9.1%/8.6% for TomoTherapy and TrueBeam images, respectively. 2-cm MCE estimates were 0.4%/1.6% and 3.1%/3.2%, respectively. Vertebral misalignment MCEs were 4.8% and 4.9% for TomoTherapy and TrueBeam images, respectively. Conclusion: Patient identification and gross misalignment errors can be robustly and automatically detected using 3D setup images of different energies across three commonly treated anatomical sites.

  12. Vulnerability Identification Errors in Security Risk Assessments

    OpenAIRE

    Taubenberger, Stefan

    2014-01-01

At present, companies rely on information technology systems to achieve their business objectives, making them vulnerable to cybersecurity threats. Information security risk assessments help organisations to identify their risks and vulnerabilities. An accurate identification of risks and vulnerabilities is a challenge, because the input data is uncertain. So-called 'vulnerability identification errors' can occur if false positive vulnerabilities are identified, or if vulnerabilities remain unidentified.

  13. The Effects of Social Identification and Organizational Identification on Student Commitment, Achievement and Satisfaction in Higher Education

    Science.gov (United States)

    Wilkins, Stephen; Butt, Muhammad Mohsin; Kratochvil, Daniel; Balakrishnan, Melodena Stephens

    2016-01-01

The purpose of this research is to investigate the effects of social and organizational identifications on student commitment, achievement and satisfaction in higher education. The sample comprised 437 students enrolled in an undergraduate or postgraduate programme in business or management. A model was developed and tested using structural equation modelling…

  14. Computational identification of candidate nucleotide cyclases in higher plants

    KAUST Repository

    Wong, Aloysius Tze

    2013-09-03

    In higher plants guanylyl cyclases (GCs) and adenylyl cyclases (ACs) cannot be identified using BLAST homology searches based on annotated cyclic nucleotide cyclases (CNCs) of prokaryotes, lower eukaryotes, or animals. The reason is that CNCs are often part of complex multifunctional proteins with different domain organizations and biological functions that are not conserved in higher plants. For this reason, we have developed CNC search strategies based on functionally conserved amino acids in the catalytic center of annotated and/or experimentally confirmed CNCs. Here we detail this method which has led to the identification of >25 novel candidate CNCs in Arabidopsis thaliana, several of which have been experimentally confirmed in vitro and in vivo. We foresee that the application of this method can be used to identify many more members of the growing family of CNCs in higher plants. © Springer Science+Business Media New York 2013.
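The search strategy reduces, at its core, to scanning protein sequences for a catalytic-center motif; a sketch with a placeholder regular expression follows (the pattern shown is hypothetical and must be replaced by the published GC/AC motifs for real use).

```python
# Scan a FASTA-formatted set of protein sequences for a catalytic-center
# motif expressed as a regular expression. PATTERN is a placeholder, not
# the published motif.
import re

PATTERN = re.compile(r"[RK][YFW].{1,3}[VIL].{8,12}D")  # hypothetical motif

def candidate_cyclases(fasta_text: str) -> list[str]:
    """Return IDs of sequences whose residues match the motif."""
    hits = []
    for record in fasta_text.split(">")[1:]:
        header, *lines = record.splitlines()
        seq = "".join(lines).upper()
        if PATTERN.search(seq):
            hits.append(header.split()[0])
    return hits

fasta = ">AT1G01000 demo protein\nMKRYSAAVILERTESTWWWDDD\n"  # toy input
print(candidate_cyclases(fasta))
```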

  15. Optimizer convergence and local minima errors and their clinical importance

    International Nuclear Information System (INIS)

    Jeraj, Robert; Wu, Chuan; Mackie, Thomas R

    2003-01-01

Two of the errors common in inverse treatment planning optimization have been investigated. The first error is the optimizer convergence error, which appears because of non-perfect convergence to the global or local solution, usually caused by a non-zero stopping criterion. The second error is the local minima error, which occurs when the objective function is not convex and/or the feasible solution space is not convex. The magnitude of the errors, their relative importance in comparison to other errors, as well as their clinical significance in terms of tumour control probability (TCP) and normal tissue complication probability (NTCP) were investigated. Two inherently different optimizers, a stochastic simulated annealing method and a deterministic gradient method, were compared on a clinical example. It was found that for typical optimizations the optimizer convergence errors are rather small, especially compared to other convergence errors, e.g., convergence errors due to inaccuracy of the current dose calculation algorithms. This indicates that stopping criteria could often be relaxed, leading to optimization speed-ups. The local minima errors were also found to be relatively small and typically in the range of the dose calculation convergence errors. Even for the cases where significantly higher objective function scores were obtained, the local minima errors were not significantly higher. Clinical evaluation of the optimizer convergence error showed good correlation between the convergence of the clinical TCP or NTCP measures and the convergence of the physical dose distribution. On the other hand, the local minima errors resulted in significantly different TCP or NTCP values (up to a factor of 2), indicating the clinical importance of the local minima produced by physical optimization.

  16. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best-case scenarios "barcodes" (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus- and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values > 95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with the number of species per genus and genera per family in the library; above five genera per family and fifteen species per genus, all higher-taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However…
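The proposed heuristic maps directly onto a simple decision rule; the sketch below hard-codes the thresholds reported above, and the example taxon is hypothetical.

```python
# Assign a query to the genus of its top BLAST hit when percent identity
# exceeds 95, to the family when identity is at least 91, else leave it
# unassigned; thresholds follow the abstract's reported values.
def assign_rank(top_hit_taxon: tuple[str, str], pident: float) -> str:
    """top_hit_taxon = (family, genus) of the best BLAST hit."""
    family, genus = top_hit_taxon
    if pident > 95:
        return f"genus {genus}"
    if pident >= 91:
        return f"family {family}"
    return "unassigned"

print(assign_rank(("Salticidae", "Habronattus"), 96.3))  # genus-level call
print(assign_rank(("Salticidae", "Habronattus"), 92.0))  # family-level call
```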

  17. Error identification and improvement in English first additional ...

    African Journals Online (AJOL)

    This paper seeks to identify errors committed by learners in EFAL essay writing, focussing on causes behind such errors, and strategies to eliminate them as a way of improving learners' writing skills. Document review was adopted as the research method in this study. 15 Grade 10 essays from Mmapadi Secondary school ...

  18. Errors in laboratory medicine: practical lessons to improve patient safety.

    Science.gov (United States)

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification

  19. Classification of Error Related Brain Activity in an Auditory Identification Task with Conditions of Varying Complexity

    Science.gov (United States)

    Kakkos, I.; Gkiatis, K.; Bromis, K.; Asvestas, P. A.; Karanasiou, I. S.; Ventouras, E. M.; Matsopoulos, G. K.

    2017-11-01

    The detection of an error is the cognitive evaluation of an action outcome that is considered undesired or mismatches an expected response. Brain activity during monitoring of correct and incorrect responses elicits Event Related Potentials (ERPs) revealing complex cerebral responses to deviant sensory stimuli. Development of accurate error detection systems is of great importance both concerning practical applications and in investigating the complex neural mechanisms of decision making. In this study, data are used from an audio identification experiment that was implemented with two levels of complexity in order to investigate neurophysiological error processing mechanisms in actors and observers. To examine and analyse the variations of the processing of erroneous sensory information for each level of complexity we employ Support Vector Machines (SVM) classifiers with various learning methods and kernels using characteristic ERP time-windowed features. For dimensionality reduction and to remove redundant features we implement a feature selection framework based on Sequential Forward Selection (SFS). The proposed method provided high accuracy in identifying correct and incorrect responses both for actors and for observers with mean accuracy of 93% and 91% respectively. Additionally, computational time was reduced and the effects of the nesting problem usually occurring in SFS of large feature sets were alleviated.
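
    A sketch of the described pipeline, SVM classification with sequential forward selection over windowed ERP features, using scikit-learn on synthetic stand-in data (the feature count, kernel, and selector settings are assumptions, not the authors' configuration):

    ```python
    # Sketch: SVM classification of correct/error trials from windowed ERP
    # features, with sequential forward selection for dimensionality reduction.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import SequentialFeatureSelector
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))      # 200 trials x 40 time-window features
    y = rng.integers(0, 2, size=200)    # 0 = correct response, 1 = error

    svm = SVC(kernel="rbf")
    sfs = SequentialFeatureSelector(svm, n_features_to_select=10,
                                    direction="forward")
    model = make_pipeline(StandardScaler(), sfs, svm)
    print(cross_val_score(model, X, y, cv=5).mean())  # chance-level on noise
    ```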

  20. Computer input devices: neutral party or source of significant error in manual lesion segmentation?

    Science.gov (United States)

    Chen, James Y; Seagull, F Jacob; Nagy, Paul; Lakhani, Paras; Melhem, Elias R; Siegel, Eliot L; Safdar, Nabile M

    2011-02-01

    Lesion segmentation involves outlining the contour of an abnormality on an image to distinguish boundaries between normal and abnormal tissue and is essential to track malignant and benign disease in medical imaging for clinical, research, and treatment purposes. A laser optical mouse and a graphics tablet were used by radiologists to segment 12 simulated reference lesions per subject (for each input device, two sets of six lesions comprising three morphologies in two sizes each). Time for segmentation was recorded. Subjects completed an opinion survey following segmentation. Error in contour segmentation was calculated using root mean square error. Error in area of segmentation was calculated relative to the reference lesion. Eleven radiologists segmented a total of 132 simulated lesions. Overall error in contour segmentation was significantly less with the graphics tablet than with the mouse. Error in area of segmentation was not significantly different between the tablet and the mouse (P = 0.62). Time for segmentation was less with the tablet than with the mouse (P = 0.011). All subjects preferred the graphics tablet for future segmentation (P = 0.011) and felt subjectively that the tablet was faster, easier, and more accurate (P = 0.0005). For purposes in which accuracy in contour of lesion segmentation is of greater importance, the graphics tablet is superior to the mouse in accuracy, with a small speed benefit. For purposes in which accuracy of area of lesion segmentation is of greater importance, the graphics tablet and mouse are equally accurate.
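
    The abstract does not give the exact root mean square error formulation; one plausible reading (an assumption, not the paper's published metric) compares radial distances between the drawn and reference boundaries at matched angles:

    ```python
    # Sketch: radial RMSE between a drawn contour and the reference boundary,
    # resampled at a common angular grid about each contour's centroid.
    import numpy as np

    def contour_rmse(drawn, reference):
        """drawn, reference: (N, 2) arrays of boundary points."""
        def radii(points):
            centered = points - points.mean(axis=0)
            angles = np.arctan2(centered[:, 1], centered[:, 0])
            order = np.argsort(angles)
            return angles[order], np.hypot(centered[:, 0], centered[:, 1])[order]

        a_ref, r_ref = radii(reference)
        a_drw, r_drw = radii(drawn)
        grid = np.linspace(-np.pi, np.pi, 360)
        r1 = np.interp(grid, a_ref, r_ref, period=2 * np.pi)
        r2 = np.interp(grid, a_drw, r_drw, period=2 * np.pi)
        return np.sqrt(np.mean((r1 - r2) ** 2))

    theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    circle = np.c_[np.cos(theta), np.sin(theta)]
    print(contour_rmse(1.05 * circle, circle))  # ~0.05 for a 5% oversized trace
    ```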

  1. Medication errors in anaesthetic practice: A report of two cases and ...

    African Journals Online (AJOL)

    Background: Mistakes in the identification and administration of drugs may be fatal. This is especially so in the practice of anaesthesia. This is a report of 2 cases of near fatality due to mistakes in drug administration from look-alike medications. Objective: To highlight the significance of medication errors in our practice and ...

  2. Error or "act of God"? A study of patients' and operating room team members' perceptions of error definition, reporting, and disclosure.

    Science.gov (United States)

    Espin, Sherry; Levinson, Wendy; Regehr, Glenn; Baker, G Ross; Lingard, Lorelei

    2006-01-01

    Calls abound for a culture change in health care to improve patient safety. However, effective change cannot proceed without a clear understanding of perceptions and beliefs about error. In this study, we describe and compare operative team members' and patients' perceptions of error, reporting of error, and disclosure of error. Thirty-nine interviews of team members (9 surgeons, 9 nurses, 10 anesthesiologists) and patients (11) were conducted at 2 teaching hospitals using 4 scenarios as prompts. Transcribed responses to open questions were analyzed by 2 researchers for recurrent themes using the grounded-theory method. Yes/no answers were compared across groups using chi-square analyses. Team members and patients agreed on what constitutes an error. Deviation from standards and negative outcome were emphasized as definitive features. Patients and nurse professionals differed significantly in their perception of whether errors should be reported. Nurses were willing to report only events within their disciplinary scope of practice. Although most patients strongly advocated full disclosure of errors (what happened and how), team members preferred to disclose only what happened. When patients did support partial disclosure, their rationales varied from that of team members. Both operative teams and patients define error in terms of breaking the rules and the concept of "no harm no foul." These concepts pose challenges for treating errors as system failures. A strong culture of individualism pervades nurses' perception of error reporting, suggesting that interventions are needed to foster collective responsibility and a constructive approach to error identification.

  3. Embracing Errors in Simulation-Based Training: The Effect of Error Training on Retention and Transfer of Central Venous Catheter Skills.

    Science.gov (United States)

    Gardner, Aimee K; Abdelfattah, Kareem; Wiersch, John; Ahmed, Rami A; Willis, Ross E

    2015-01-01

    Error management training is an approach that encourages exposure to errors during initial skill acquisition so that learners can be equipped with important error identification, management, and metacognitive skills. The purpose of this study was to determine how an error-focused training program affected performance, retention, and transfer of central venous catheter (CVC) placement skills when compared with traditional training methodologies. Surgical interns (N = 30) participated in a 1-hour session featuring an instructional video and practice performing internal jugular (IJ) and subclavian (SC) CVC placement with guided instruction. All interns underwent baseline knowledge and skill assessment for IJ and SC (pretest) CVC placement; watched a "correct-only" (CO) or "correct + error" (CE) instructional video; practiced for 30 minutes; and were posttested on knowledge and IJ and SC CVC placement. Skill retention and transfer (femoral CVC placement) were assessed 30 days later. All skills tests (pretest, posttest, and transfer) were videorecorded and deidentified for evaluation by a single blinded instructor using a validated 17-item checklist. Both groups exhibited significant improvements from pretest to posttest. Incorporating error-based activities and discussions into training programs can be beneficial for skill retention and transfer. Copyright © 2015 Association of Program Directors in Surgery. Published by Elsevier Inc. All rights reserved.

  4. The significance of gtf genes in caries expression: a rapid identification of Streptococcus mutans from dental plaque of child patients.

    Science.gov (United States)

    Mishra, Apurva; Pandey, Ramesh K; Manickam, Natesan

    2015-01-01

    Rapid phylogenetic and functional gene (gtfB) identification of S. mutans from dental plaque derived from children. Dental plaque collected from fifteen patients of age group 7-12 underwent centrifugation followed by genomic DNA extraction for S. mutans. Genomic DNA was processed with S. mutans specific primers under suitable PCR conditions for phylogenetic and functional gene (gtfB) identification. The yield and results were confirmed by agarose gel electrophoresis. 1% agarose gel electrophoresis showed positive PCR amplification at 1,485 bp when compared with the standard 1 kbp ladder, indicating the presence of S. mutans in the test sample. Another PCR reaction was set up using gtfB primers specific for S. mutans for functional gene identification. 1.2% agarose gel electrophoresis was done and positive amplification was observed at 192 bp when compared to 100 bp standards. With the advancement in molecular biology techniques, PCR-based identification and quantification of the bacterial load can be done within hours using species-specific primers and DNA probes. Thus, this technique may reduce the laboratory time spent on conventional culture methods, reduces the possibility of colony identification errors, and is more sensitive than culture techniques.

  5. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these have been addressed recently by two schemes known as Packet Reversed Packet Combining (PRPC) Scheme, and Modified Packet Combining (MPC) Scheme respectively. In the letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)
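
    The packet combining idea these schemes build on can be shown in a few lines: XORing two erroneous copies localizes the disagreeing bits, which are then searched against the integrity check. A toy sketch with a parity predicate standing in for a real CRC (it also shows why the basic scheme fails when both copies err at the same positions, as the abstract notes):

    ```python
    # Sketch: core of the packet combining (PC) scheme - 1-bits in the XOR of
    # two erroneous copies mark candidate error locations, searched here by
    # brute force against an integrity predicate.
    from itertools import product

    def combine_copies(copy1, copy2, check_ok):
        """copy1/copy2: lists of bits; check_ok: predicate for a valid packet."""
        suspects = [i for i, (a, b) in enumerate(zip(copy1, copy2)) if a != b]
        for flips in product([0, 1], repeat=len(suspects)):
            candidate = copy1[:]
            for idx, flip in zip(suspects, flips):
                candidate[idx] ^= flip
            if check_ok(candidate):
                return candidate
        return None  # same-location errors: scheme fails, retransmit

    ok = lambda bits: sum(bits) % 2 == 0  # toy parity check standing in for CRC
    print(combine_copies([1, 0, 1, 1], [1, 1, 1, 0], ok))  # [1, 0, 1, 0]
    ```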

  6. Higher positive identification of malignant CSF cells using the cytocentrifuge than the Suta chamber

    Directory of Open Access Journals (Sweden)

    Sérgio Monteiro de Almeida

    Full Text Available ABSTRACT Objective To define how to best handle cerebrospinal fluid (CSF) specimens to obtain the highest positivity rate for the diagnosis of malignancy, comparing two different methods of cell concentration, sedimentation and cytocentrifugation. Methods A retrospective analysis of 411 CSF reports. Results This is a descriptive comparative study. The positive identification of malignant CSF cells was higher using the centrifuge than using the Suta chamber (27.8% vs. 19.0%, respectively; p = 0.038). The centrifuge positively identified higher numbers of malignant cells in samples with a normal concentration of white blood cells (WBCs; < 5 cells/mm3) and in samples with more than 200 cells/mm3, although this was not statistically significant. There was no lymphocyte loss using either method. Conclusions Cytocentrifugation positively identified a greater number of malignant cells in the CSF than cytosedimentation with the Suta chamber. However, there was no difference between the methods when the WBC counts were within the normal range.

  7. Sources of medical error in refractive surgery.

    Science.gov (United States)

    Moshirfar, Majid; Simpson, Rachel G; Dave, Sonal B; Christiansen, Steven M; Edmonds, Jason N; Culbertson, William W; Pascucci, Stephen E; Sher, Neal A; Cano, David B; Trattler, William B

    2013-05-01

    To evaluate the causes of laser programming errors in refractive surgery and outcomes in these cases. In this multicenter, retrospective chart review, 22 eyes of 18 patients who had incorrect data entered into the refractive laser computer system at the time of treatment were evaluated. Cases were analyzed to uncover the etiology of these errors, patient follow-up treatments, and final outcomes. The results were used to identify potential methods to avoid similar errors in the future. Every patient experienced compromised uncorrected visual acuity requiring additional intervention, and 7 of 22 eyes (32%) lost corrected distance visual acuity (CDVA) of at least one line. Sixteen patients were suitable candidates for additional surgical correction to address these residual visual symptoms and six were not. Thirteen of 22 eyes (59%) received surgical follow-up treatment; nine eyes were treated with contact lenses. After follow-up treatment, six patients (27%) still had a loss of one line or more of CDVA. Three significant sources of error were identified: errors of cylinder conversion, data entry, and patient identification error. Twenty-seven percent of eyes with laser programming errors ultimately lost one or more lines of CDVA. Patients who underwent surgical revision had better outcomes than those who did not. Many of the mistakes identified were likely avoidable had preventive measures been taken, such as strict adherence to patient verification protocol or rigorous rechecking of treatment parameters. Copyright 2013, SLACK Incorporated.

  8. The relationships among work stress, strain and self-reported errors in UK community pharmacy.

    Science.gov (United States)

    Johnson, S J; O'Connor, E M; Jacobs, S; Hassell, K; Ashcroft, D M

    2014-01-01

    Changes in the UK community pharmacy profession including new contractual frameworks, expansion of services, and increasing levels of workload have prompted concerns about rising levels of workplace stress and overload. This has implications for pharmacist health and well-being and the occurrence of errors that pose a risk to patient safety. Despite these concerns being voiced in the profession, few studies have explored work stress in the community pharmacy context. To investigate work-related stress among UK community pharmacists and to explore its relationships with pharmacists' psychological and physical well-being, and the occurrence of self-reported dispensing errors and detection of prescribing errors. A cross-sectional postal survey of a random sample of practicing community pharmacists (n = 903) used ASSET (A Shortened Stress Evaluation Tool) and questions relating to self-reported involvement in errors. Stress data were compared to general working population norms, and regressed on well-being and self-reported errors. Analysis of the data revealed that pharmacists reported significantly higher levels of workplace stressors than the general working population, with concerns about work-life balance, the nature of the job, and work relationships being the most influential on health and well-being. Despite this, pharmacists were not found to report worse health than the general working population. Self-reported error involvement was linked to both high dispensing volume and being troubled by perceived overload (dispensing errors), and resources and communication (detection of prescribing errors). This study contributes to the literature by benchmarking community pharmacists' health and well-being, and investigating sources of stress using a quantitative approach. A further important contribution to the literature is the identification of a quantitative link between high workload and self-reported dispensing errors. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  10. Reduction of multi-dimensional laboratory data to a two-dimensional plot: a novel technique for the identification of laboratory error.

    Science.gov (United States)

    Kazmierczak, Steven C; Leen, Todd K; Erdogmus, Deniz; Carreira-Perpinan, Miguel A

    2007-01-01

    The clinical laboratory generates large amounts of patient-specific data. Detection of errors that arise during pre-analytical, analytical, and post-analytical processes is difficult. We performed a pilot study, utilizing a multidimensional data reduction technique, to assess the utility of this method for identifying errors in laboratory data. We evaluated 13,670 individual patient records collected over a 2-month period from hospital inpatients and outpatients. We utilized those patient records that contained a complete set of 14 different biochemical analytes. We used two-dimensional generative topographic mapping to project the 14-dimensional record to a two-dimensional space. The use of a two-dimensional generative topographic mapping technique to plot multi-analyte patient data as a two-dimensional graph allows for the rapid identification of potentially anomalous data. Although we performed a retrospective analysis, this technique has the benefit of being able to assess laboratory-generated data in real time, allowing for the rapid identification and correction of anomalous data before they are released to the physician. In addition, serial laboratory multi-analyte data for an individual patient can also be plotted as a two-dimensional plot. This tool might also be useful for assessing patient wellbeing and prognosis.
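
    Generative topographic mapping is not available in the common Python libraries, so the sketch below uses PCA as a plainly labeled stand-in for the projection step; the point is the workflow of mapping 14-analyte records to two dimensions and flagging outlying points for review:

    ```python
    # Sketch of the idea with PCA standing in for generative topographic
    # mapping: project 14-analyte patient records to 2-D and flag points far
    # from the bulk as candidate laboratory errors for human review.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    records = np.random.default_rng(1).normal(size=(1000, 14))  # placeholder data
    z = PCA(n_components=2).fit_transform(
        StandardScaler().fit_transform(records))

    dist = np.linalg.norm(z - z.mean(axis=0), axis=1)
    flagged = np.where(dist > dist.mean() + 3 * dist.std())[0]  # anomalous rows
    print(flagged)
    ```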

  11. Medical errors in hospitalized pediatric trauma patients with chronic health conditions

    Directory of Open Access Journals (Sweden)

    Xiaotong Liu

    2014-01-01

    Full Text Available Objective: This study compares medical errors in pediatric trauma patients with and without chronic conditions. Methods: The 2009 Kids' Inpatient Database, which included 123,303 trauma discharges, was analyzed. Medical errors were identified by International Classification of Diseases, Ninth Revision, Clinical Modification diagnosis codes. The medical error rates per 100 discharges and per 1000 hospital days were calculated and compared between inpatients with and without chronic conditions. Results: Pediatric trauma patients with chronic conditions experienced a higher medical error rate than patients without chronic conditions: 4.04 (95% confidence interval: 3.75–4.33) versus 1.07 (95% confidence interval: 0.98–1.16) per 100 discharges. The rate of medical error differed by type of chronic condition. After controlling for confounding factors, the presence of a chronic condition increased the adjusted odds ratio of medical error by 37% if one chronic condition existed (adjusted odds ratio: 1.37, 95% confidence interval: 1.21–1.5) and by 69% if more than one chronic condition existed (adjusted odds ratio: 1.69, 95% confidence interval: 1.48–1.53). In the adjusted model, length of stay had the strongest association with medical error, but the adjusted odds ratio for chronic conditions and medical error remained significantly elevated even when accounting for length of stay, suggesting that medical complexity has a role in medical error. Higher adjusted odds ratios were seen in other subgroups. Conclusion: Chronic conditions are associated with a significantly higher rate of medical errors in pediatric trauma patients. Future research should evaluate interventions or guidelines for reducing the risk of medical errors in pediatric trauma patients with chronic conditions.
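
    The two headline quantities here, a rate per 100 discharges and an adjusted odds ratio, are easy to mirror in code. A minimal sketch with statsmodels on synthetic stand-in data (not the Kids' Inpatient Database; variable names and coefficients are illustrative only):

    ```python
    # Sketch: error rate per 100 discharges and an adjusted odds ratio from a
    # logistic model controlling for length of stay.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 5000
    chronic = rng.integers(0, 2, n)       # chronic-condition flag
    los = rng.exponential(3, n)           # length of stay, days
    logit_p = -4 + 0.3 * chronic + 0.15 * los
    error = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

    print("rate per 100 discharges:", 100 * error.mean())
    X = sm.add_constant(np.column_stack([chronic, los]))
    fit = sm.Logit(error, X).fit(disp=0)
    print("adjusted OR (chronic):", np.exp(fit.params[1]))
    ```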

  12. Substructure identification for shear structures: cross-power spectral density method

    International Nuclear Information System (INIS)

    Zhang, Dongyu; Johnson, Erik A

    2012-01-01

    In this paper, a substructure identification method for shear structures is proposed. A shear structure is divided into many small substructures; utilizing the dynamic equilibrium of a one-floor substructure, an inductive identification problem is formulated, using the cross-power spectral densities between structural floor accelerations and a reference response, to estimate the parameters of that one story. Repeating this procedure, all story parameters of the shear structure are identified from top to bottom recursively. An identification error analysis is performed for the proposed substructure method, revealing how uncertain factors (e.g. measurement noise) in the identification process affect the identification accuracy. According to the error analysis, a smart reference selection rule is designed to choose the optimal reference response that further enhances the identification accuracy. Moreover, based on the identification error analysis, explicit formulae are developed to calculate the variances of the parameter identification errors. A ten-story shear structure is used to illustrate the effectiveness of the proposed substructure method. The simulation results show that the method, combined with the reference selection rule, can very accurately identify structural parameters despite large measurement noise. Furthermore, the proposed formulae provide good predictions for the variances of the parameter identification errors, which are vital for providing accurate warnings of structural damage. (paper)
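
    The building block of the method is the cross-power spectral density between floor accelerations and a reference response. A sketch of that computation with SciPy, on synthetic stand-in signals (the one-story parameter recursion itself follows the paper and is not reproduced here):

    ```python
    # Sketch: CPSDs between two floor-acceleration channels and a reference
    # response, and the spectral ratio used in the one-story identification step.
    import numpy as np
    from scipy.signal import csd

    fs = 200.0
    t = np.arange(0, 60, 1 / fs)
    rng = np.random.default_rng(3)
    ref = rng.normal(size=t.size)                            # reference response
    floor_i = np.convolve(ref, np.ones(5) / 5, mode="same")  # placeholder floors
    floor_j = np.convolve(ref, np.ones(9) / 9, mode="same")

    f, S_ir = csd(floor_i, ref, fs=fs, nperseg=1024)
    _, S_jr = csd(floor_j, ref, fs=fs, nperseg=1024)
    H = S_ir / S_jr  # ratio of CPSDs feeding the story-parameter estimate
    ```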

  13. Blood specimen labelling errors: Implications for nephrology nursing practice.

    Science.gov (United States)

    Duteau, Jennifer

    2014-01-01

    Patient safety is the foundation of high-quality health care, as recognized both nationally and worldwide. Patient blood specimen identification is critical in ensuring the delivery of safe and appropriate care. The practice of nephrology nursing involves frequent patient blood specimen withdrawals to treat and monitor kidney disease. A critical review of the literature reveals that incorrect patient identification is one of the major causes of blood specimen labelling errors. Misidentified samples create a serious risk to patient safety leading to multiple specimen withdrawals, delay in diagnosis, misdiagnosis, incorrect treatment, transfusion reactions, increased length of stay and other negative patient outcomes. Barcode technology has been identified as a preferred method for positive patient identification leading to a definitive decrease in blood specimen labelling errors by as much as 83% (Askeland, et al., 2008). The use of a root cause analysis followed by an action plan is one approach to decreasing the occurrence of blood specimen labelling errors. This article will present a review of the evidence-based literature surrounding blood specimen labelling errors, followed by author recommendations for completing a root cause analysis and action plan. A failure modes and effects analysis (FMEA) will be presented as one method to determine root cause, followed by the Ottawa Model of Research Use (OMRU) as a framework for implementation of strategies to reduce blood specimen labelling errors.

  14. Decision Processes in Eyewitness Identification

    OpenAIRE

    Moreland, Molly Bettis

    2015-01-01

    The dominant theory of decision-making in eyewitness identification, based on a distinction between absolute and relative judgments, assumes that relative judgments (identifying the best match relative to the other lineup members) increase identification errors (Wells, 1984). This distinction also assumes that the comparisons among lineup members underlying relative judgments increase errors, as evidenced by the sequential lineup advantage (Lindsay & Wells, 1985). Sequential lineups preclude co...

  15. Identification of factors associated with diagnostic error in primary care.

    Science.gov (United States)

    Minué, Sergio; Bermúdez-Tamayo, Clara; Fernández, Alberto; Martín-Martín, José Jesús; Benítez, Vivian; Melguizo, Miguel; Caro, Araceli; Orgaz, María José; Prados, Miguel Angel; Díaz, José Enrique; Montoro, Rafael

    2014-05-12

    Missed, delayed or incorrect diagnoses are considered to be diagnostic errors. The aim of this paper is to describe the methodology of a study to analyse cognitive aspects of the process by which primary care (PC) physicians diagnose dyspnoea. It examines the possible links between the use of heuristics, suboptimal cognitive acts and diagnostic errors, using Reason's taxonomy of human error (slips, lapses, mistakes and violations). The influence of situational factors (professional experience, perceived overwork and fatigue) is also analysed. Cohort study of new episodes of dyspnoea in patients receiving care from family physicians and residents at PC centres in Granada (Spain). With an initial expected diagnostic error rate of 20%, and a sampling error of 3%, 384 episodes of dyspnoea are calculated to be required. In addition to filling out the electronic medical record of the patients attended, each physician fills out 2 specially designed questionnaires about the diagnostic process performed in each case of dyspnoea. The first questionnaire includes questions on the physician's initial diagnostic impression, the 3 most likely diagnoses (in order of likelihood), and the diagnosis reached after the initial medical history and physical examination. It also includes items on the physicians' perceived overwork and fatigue during patient care. The second questionnaire records the confirmed diagnosis once it is reached. The complete diagnostic process is peer-reviewed to identify and classify the diagnostic errors. The possible use of heuristics of representativeness, availability, and anchoring and adjustment in each diagnostic process is also analysed. Each audit is reviewed with the physician responsible for the diagnostic process. Finally, logistic regression models are used to determine if there are differences in the diagnostic error variables based on the heuristics identified. This work sets out a new approach to studying the diagnostic decision-making process

  16. Error suppression and error correction in adiabatic quantum computation: non-equilibrium dynamics

    International Nuclear Information System (INIS)

    Sarovar, Mohan; Young, Kevin C

    2013-01-01

    While adiabatic quantum computing (AQC) has some robustness to noise and decoherence, it is widely believed that encoding, error suppression and error correction will be required to scale AQC to large problem sizes. Previous works have established at least two different techniques for error suppression in AQC. In this paper we derive a model for describing the dynamics of encoded AQC and show that previous constructions for error suppression can be unified with this dynamical model. In addition, the model clarifies the mechanisms of error suppression and allows the identification of its weaknesses. In the second half of the paper, we utilize our description of non-equilibrium dynamics in encoded AQC to construct methods for error correction in AQC by cooling local degrees of freedom (qubits). While this is shown to be possible in principle, we also identify the key challenge to this approach: the requirement of high-weight Hamiltonians. Finally, we use our dynamical model to perform a simplified thermal stability analysis of concatenated-stabilizer-code encoded many-body systems for AQC or quantum memories. This work is a companion paper to ‘Error suppression and error correction in adiabatic quantum computation: techniques and challenges (2013 Phys. Rev. X 3 041013)’, which provides a quantum information perspective on the techniques and limitations of error suppression and correction in AQC. In this paper we couch the same results within a dynamical framework, which allows for a detailed analysis of the non-equilibrium dynamics of error suppression and correction in encoded AQC. (paper)

  17. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Full Text Available Abstract Background Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong-time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong-time errors, ten occurring simultaneously with another type of error, resulting in an error rate without wrong-time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed and 6% of errors were classified as having a serious or significant impact on patients (mainly omission). In multivariate analysis, the occurrence of errors was associated with drug administration route, drug classification (ATC) and the number of patients under the nurse's care. Conclusion Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.

  18. Significance of Dental Records in Personal Identification in Forensic Sciences

    Directory of Open Access Journals (Sweden)

    Vagish Kumar L Shanbhag

    2016-01-01

    Full Text Available Forensic odontology is a branch that connects dentistry and the legal profession. One of the members in the forensic investigation team is a dentist. Dentists play an important and significant role in various aspects of the identification of persons in various forensic circumstances. However, several dentists and legal professionals are quite ignorant of this fascinating aspect of forensic odontology. A need was felt to fill this gap. The dental record is a legal document possessed by the dentist and it contains subjective and objective information about the patient. A PubMed search and Google search were done for articles highlighting the importance of dental records in forensic sciences using the key words "forensic odontology, forensic dentistry, forensic dentists, identification, dental records, and dental chart". A total of 42 articles relevant to the title of the article were found and reviewed. The present article highlights the role of dentists in forensic sciences, their possible contributions to forensics, and the various aspects of forensic dentistry, thus bridging the gap of knowledge between the legal and the dental fraternities.

  19. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed their error causation in construction projects they still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  20. Elucidation of cross-species proteomic effects in human and hominin bone proteome identification through a bioinformatics experiment.

    Science.gov (United States)

    Welker, F

    2018-02-20

    The study of ancient protein sequences is increasingly focused on the analysis of older samples, including those of ancient hominins. The analysis of such ancient proteomes thereby potentially suffers from "cross-species proteomic effects": the loss of peptide and protein identifications at increased evolutionary distances due to a larger number of protein sequence differences between the database sequence and the analyzed organism. Error-tolerant proteomic search algorithms should theoretically overcome this problem at both the peptide and protein level; however, this has not been demonstrated. If error-tolerant searches do not overcome the cross-species proteomic issue then there might be inherent biases in the identified proteomes. Here, a bioinformatics experiment is performed to test this using a set of modern human bone proteomes and three independent searches against sequence databases at increasing evolutionary distances: the human (0 Ma), chimpanzee (6-8 Ma) and orangutan (16-17 Ma) reference proteomes, respectively. Incorrectly suggested amino acid substitutions are absent when employing adequate filtering criteria for mutable Peptide Spectrum Matches (PSMs), but roughly half of the mutable PSMs were not recovered. As a result, peptide and protein identification rates are higher in error-tolerant mode compared to non-error-tolerant searches but did not recover protein identifications completely. Data indicates that peptide length and the number of mutations between the target and database sequences are the main factors influencing mutable PSM identification. The error-tolerant results suggest that the cross-species proteomics problem is not overcome at increasing evolutionary distances, even at the protein level. Peptide and protein loss has the potential to significantly impact divergence dating and proteome comparisons when using ancient samples as there is a bias towards the identification of conserved sequences and proteins. Effects are minimized

  1. Automated Patient Identification and Localization Error Detection Using 2-Dimensional to 3-Dimensional Registration of Kilovoltage X-Ray Setup Images

    International Nuclear Information System (INIS)

    Lamb, James M.; Agazaryan, Nzhde; Low, Daniel A.

    2013-01-01

    Purpose: To determine whether kilovoltage x-ray projection radiation therapy setup images could be used to perform patient identification and detect gross errors in patient setup using a computer algorithm. Methods and Materials: Three patient cohorts treated using a commercially available image guided radiation therapy (IGRT) system that uses 2-dimensional to 3-dimensional (2D-3D) image registration were retrospectively analyzed: a group of 100 cranial radiation therapy patients, a group of 100 prostate cancer patients, and a group of 83 patients treated for spinal lesions. The setup images were acquired using fixed in-room kilovoltage imaging systems. In the prostate and cranial patient groups, localizations using image registration were performed between computed tomography (CT) simulation images from radiation therapy planning and setup x-ray images corresponding both to the same patient and to different patients. For the spinal patients, localizations were performed to the correct vertebral body, and to an adjacent vertebral body, using planning CTs and setup x-ray images from the same patient. An image similarity measure used by the IGRT system image registration algorithm was extracted from the IGRT system log files and evaluated as a discriminant for error detection. Results: A threshold value of the similarity measure could be chosen to separate correct and incorrect patient matches and correct and incorrect vertebral body localizations with excellent accuracy for these patient cohorts. A 10-fold cross-validation using linear discriminant analysis yielded misclassification probabilities of 0.000, 0.0045, and 0.014 for the cranial, prostate, and spinal cases, respectively. Conclusions: An automated measure of the image similarity between x-ray setup images and corresponding planning CT images could be used to perform automated patient identification and detection of localization errors in radiation therapy treatments
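
    The decision rule the authors describe, thresholding a scalar similarity measure and validating it with 10-fold cross-validated linear discriminant analysis, can be illustrated as follows. A sketch with scikit-learn, where synthetic scores stand in for the values mined from the IGRT log files:

    ```python
    # Sketch: separating correct from incorrect matches by a scalar image-
    # similarity score, with 10-fold cross-validated LDA as in the paper.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(4)
    correct = rng.normal(0.9, 0.05, 100)    # similarity for true matches
    incorrect = rng.normal(0.6, 0.10, 100)  # wrong patient / wrong vertebra
    X = np.concatenate([correct, incorrect]).reshape(-1, 1)
    y = np.array([1] * 100 + [0] * 100)

    acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=10)
    print("misclassification probability:", 1 - acc.mean())
    ```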

  4. Output Error Method for Tiltrotor Unstable in Hover

    Directory of Open Access Journals (Sweden)

    Lichota Piotr

    2017-03-01

    Full Text Available This article investigates system identification of a tiltrotor, unstable in hover, from flight test data. The aircraft dynamics were described by a linear model defined in the Body-Fixed Coordinate System. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. For estimating model parameters, both time and frequency domain formulations were applied. To improve the system identification performed in the time domain, a stabilization matrix was included for evaluating the states. Finally, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.
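
    The core of any output-error formulation is simulating the model output for candidate parameters and minimizing the residual against measurements. A minimal time-domain sketch in Python/SciPy for a generic first-order linear model (a stand-in, not the tiltrotor lateral dynamics):

    ```python
    # Sketch: output-error parameter estimation - simulate the model for
    # candidate derivatives and minimize the output residual.
    import numpy as np
    from scipy.optimize import least_squares

    dt, n = 0.02, 500
    u = np.sin(0.5 * np.arange(n) * dt)  # control input history

    def simulate(theta):
        a, b = theta                      # stability / control derivative
        x = np.zeros(n)
        for k in range(n - 1):            # explicit Euler propagation
            x[k + 1] = x[k] + dt * (a * x[k] + b * u[k])
        return x

    true = np.array([-1.2, 0.8])
    z = simulate(true) + 0.01 * np.random.default_rng(5).normal(size=n)

    fit = least_squares(lambda th: simulate(th) - z, x0=[-0.5, 0.5])
    print(fit.x)  # estimated derivatives, close to [-1.2, 0.8]
    ```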

  5. Decisions to Shoot in a Weapon Identification Task: The Influence of Cultural Stereotypes and Perceived Threat on False Positive Errors

    OpenAIRE

    Fleming, Kevin K.; Bandy, Carole L.; Kimble, Matthew O.

    2009-01-01

    The decision to shoot engages executive control processes that can be biased by cultural stereotypes and perceived threat. The neural locus of the decision to shoot is likely to be found in the anterior cingulate cortex (ACC) where cognition and affect converge. Male military cadets at Norwich University (N=37) performed a weapon identification task in which they made rapid decisions to shoot when images of guns appeared briefly on a computer screen. Reaction times, error rates, and EEG activ...

  7. Residents' numeric inputting error in computerized physician order entry prescription.

    Science.gov (United States)

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human computer interaction (HCI), produce different error rates and types, but this has received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, and to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent situation vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were also measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were of the omission or substitution types, but the proportion of transposition and intrusion error types was significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a great risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. Inputting with the numeric keypad was recommended because it produced lower error rates in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial

  8. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. To test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and a quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate = .03); workload was also significantly associated with medication administration errors (estimate = .04). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate = -.04); other learning practices were significantly linked to higher levels of medication administration errors (estimate = -.03), while another was not associated with them (p > .05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. Head nurses can facilitate learning from errors by "management by walking around" and monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  9. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
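
    For a concrete baseline, the standard (measurement-error-free) overdispersion check for Poisson regression is the Cameron-Trivedi auxiliary regression; the tests proposed in this paper additionally correct for error in the covariates, which the sketch below does not attempt:

    ```python
    # Sketch: Cameron-Trivedi auxiliary-regression test for overdispersion in
    # Poisson regression (no measurement-error correction).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)
    x = rng.normal(size=500)
    y = rng.negative_binomial(5, 5 / (5 + np.exp(0.5 * x)))  # overdispersed

    X = sm.add_constant(x)
    mu = sm.GLM(y, X, family=sm.families.Poisson()).fit().mu  # fitted means
    aux = ((y - mu) ** 2 - y) / mu                            # auxiliary response
    t = sm.OLS(aux, mu).fit().tvalues[0]                      # H0: no overdispersion
    print("overdispersion t-statistic:", t)
    ```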

  10. Data-acquisition system for the NLO error-propagation exercise

    International Nuclear Information System (INIS)

    Lower, C.W.; Gessiness, B.; Bieber, A.M. Jr.; Keisch, B.; Suda, S.C.

    1983-01-01

    An automated data-acquisition system using barcoded labels was developed for an error-propagation exercise to determine the limit of error for inventory differences (LEID) for a material balance area at NLO, Inc.'s Feed Materials Production Center, Fernald, Ohio. Each discrete item of material to be measured (weighed or analyzed) was labeled with a bar-coded identification number. Automated scale terminals, portable bar-code readers, and an automated laboratory data-entry terminal were used to read identification labels and automatically record measurement and transfer information. This system is the prototype for an entire material control and accountability system

  11. Normal LVEF measurements are significantly higher in females as assessed by post-stress resting Tc-99m sestamibi gated myocardial perfusion SPECT

    International Nuclear Information System (INIS)

    Kim, Jong Ho; Shin, Eak Kyun

    1999-01-01

    The volume-LVEF relationship is one of the most important factors in the automatic EF quantification algorithm for gated myocardial perfusion SPECT (gMPS) (Germano et al., JNM, 1995). A gender difference, whereby normal LVEF measurements are higher in females, has been reported for gMPS (Yao et al., JNM, 1997). To determine whether this reflects a true physiologic difference in LVEF or a sampling/measurement error, various parameters were evaluated statistically in 200 gender- and age-matched subjects (mean age = 58.41±15.01) with normal LVEF of more than 50% and a low likelihood of coronary artery disease. The correlation between LVEDVi (ml/m2) and LVEF was highly significant (r=-0.62, p<0.0001), with similar correlations noted in both male (r=-0.45, p<0.0001) and female (r=-0.67, p<0.0001) subgroups. By multivariate analysis, LV volume and stroke volume were the most significant factors influencing LVEF in males and females, respectively. In conclusion, there is a significant negative correlation between LV volume and LVEF as measured by Tc-99m gated SPECT. A higher normal LVEF value should be applied to females as assessed by post-stress resting Tc-99m sestamibi gated myocardial perfusion SPECT

  12. Rotorcraft system identification techniques for handling qualities and stability and control evaluation

    Science.gov (United States)

    Hall, W. E., Jr.; Gupta, N. K.; Hansen, R. S.

    1978-01-01

    An integrated approach to rotorcraft system identification is described. This approach consists of the sequential application of (1) data filtering to estimate the states of the system and sensor errors, (2) model structure estimation to isolate significant model effects, and (3) parameter identification to quantify the coefficients of the model. An input design algorithm is described that can be used to design control inputs maximizing parameter estimation accuracy. Details of each aspect of the rotorcraft identification approach are given. Examples of both simulated and actual flight data processing are given to illustrate each phase of processing. The procedure is shown to provide a means of calibrating sensor errors in flight data, quantifying high-order state variable models from the flight data, and consequently computing related stability and control design models.

  13. Recognition Errors Control in Biometric Identification Cryptosystems

    Directory of Open Access Journals (Sweden)

    Vladimir Ivanovich Vasilyev

    2015-06-01

    Full Text Available A method of designing a biometric cryptosystem on the basis of a fuzzy extractor, in which the main disadvantages of biometric and cryptographic systems are absent, is considered. The main idea of this work is the control of identity recognition errors through a fuzzy extractor that operates with a Reed-Solomon correcting code. The fingerprint features vector is used as the biometric user identifier.
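
    The fuzzy-extractor construction can be illustrated with the closely related fuzzy-commitment scheme; the sketch below uses a 3x repetition code as a stand-in for the Reed-Solomon code named in the paper:

    ```python
    # Sketch: fuzzy commitment - helper data hides a codeword under the
    # enrolled biometric; a noisy re-scan plus error correction recovers the key.
    import numpy as np

    rng = np.random.default_rng(7)
    key = rng.integers(0, 2, 16)                  # secret to protect
    codeword = np.repeat(key, 3)                  # encode: 3x repetition code
    enrolled = rng.integers(0, 2, codeword.size)  # biometric feature bits
    helper = codeword ^ enrolled                  # public helper data

    noisy = enrolled.copy()
    noisy[[0, 4, 8]] ^= 1                         # fresh scan: one flip per block

    received = helper ^ noisy                     # = codeword XOR scan noise
    decoded = (received.reshape(-1, 3).sum(axis=1) >= 2).astype(int)  # majority
    print("key recovered:", np.array_equal(decoded, key))  # True
    ```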

  14. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
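
    The distinction between coherent and stochastic (Pauli) errors is easy to see numerically for a single uncorrected qubit: a systematic over-rotation accumulates in amplitude, while the Pauli picture accumulates probability. A toy sketch illustrating that gap (not the paper's repetition-code analysis):

    ```python
    # Sketch: failure probability after N cycles of over-rotation by eps.
    # Coherent: amplitudes add, p ~ (N*eps)^2. Pauli: probabilities add,
    # p ~ N*eps^2 -- so the Pauli model underestimates coherent failure.
    import numpy as np

    eps, cycles = 0.01, 50
    coherent = np.sin(cycles * eps) ** 2           # systematic rotation
    pauli = 1 - (1 - np.sin(eps) ** 2) ** cycles   # stochastic approximation
    print(coherent, pauli)                         # coherent error is far larger
    ```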

  15. Identification of Medication Errors with Similar Pronunciation, Spelling and Packaging in Tabriz Shahid Madani Hospital -1392

    Directory of Open Access Journals (Sweden)

    Gisoo Alizadeh

    2015-08-01

    Background and objectives: Evidence suggests that medication errors are among the most common types of medical errors, and over fifty percent of them are preventable. Since a significant proportion of these errors are related to similarity between drug names, this study was designed to evaluate drugs with similar pronunciation, spelling and packaging. Material and Methods: This is a qualitative study with a phenomenological approach. Participants were selected by purposive sampling. Data were collected through semi-structured interviews with the help of a previously designed guide and were analyzed using content analysis. Results: The central themes of the findings include: the accuracy of drug use; the recording and monitoring of used medication; the storage, reporting and notification of similar medications; verbal or telephone orders; medication lists with similar spelling, pronunciation and packaging; and recommendations of the participants. Packaging similarity caused the most errors for the Heparin-Atropine pair, while spelling similarity caused the most errors for the Dopamine-Dobutamine and the Atropine-Atorvastatin pairs. Conclusion: The findings indicate that there is no established system for recording, monitoring and storing similar drugs. Identifying medications with similar pronunciation, spelling and packaging is therefore an opportunity to reduce these types of errors through appropriate interventions.

  16. Routine cognitive errors: a trait-like predictor of individual differences in anxiety and distress.

    Science.gov (United States)

    Fetterman, Adam K; Robinson, Michael D

    2011-02-01

    Five studies (N=361) sought to model a class of errors--namely, those in routine tasks--that several literatures have suggested may predispose individuals to higher levels of emotional distress. Individual differences in error frequency were assessed in choice reaction-time tasks of a routine cognitive type. In Study 1, it was found that tendencies toward error in such tasks exhibit trait-like stability over time. In Study 3, it was found that tendencies toward error exhibit trait-like consistency across different tasks. Higher error frequency, in turn, predicted higher levels of negative affect, general distress symptoms, displayed levels of negative emotion during an interview, and momentary experiences of negative emotion in daily life (Studies 2-5). In all cases, such predictive relations remained significant with individual differences in neuroticism controlled. The results thus converge on the idea that error frequency in simple cognitive tasks is a significant and consequential predictor of emotional distress in everyday life. The results are novel, but discussed within the context of the wider literatures that informed them. © 2010 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business.

  17. Identification of proteomic biomarkers predicting prostate cancer aggressiveness and lethality despite biopsy-sampling error.

    Science.gov (United States)

    Shipitsin, M; Small, C; Choudhury, S; Giladi, E; Friedlander, S; Nardone, J; Hussain, S; Hurley, A D; Ernst, C; Huang, Y E; Chang, H; Nifong, T P; Rimm, D L; Dunyak, J; Loda, M; Berman, D M; Blume-Jensen, P

    2014-09-09

    Key challenges of biopsy-based determination of prostate cancer aggressiveness include tumour heterogeneity, biopsy-sampling error, and variations in biopsy interpretation. The resulting uncertainty in risk assessment leads to significant overtreatment, with associated costs and morbidity. We developed a performance-based strategy to identify protein biomarkers predictive of prostate cancer aggressiveness and lethality regardless of biopsy-sampling variation. Prostatectomy samples from a large patient cohort with long follow-up were blindly assessed by expert pathologists who identified the tissue regions with the highest and lowest Gleason grade from each patient. To simulate biopsy-sampling error, a core from a high- and a low-Gleason area from each patient sample was used to generate a 'high' and a 'low' tumour microarray, respectively. Using a quantitative proteomics approach, we identified, from 160 candidates, 12 biomarkers that predicted prostate cancer aggressiveness (surgical Gleason and TNM stage) and lethal outcome robustly in both high- and low-Gleason areas. Conversely, a previously reported lethal outcome-predictive marker signature for prostatectomy tissue was unable to perform under circumstances of maximal sampling error. Our results have important implications for cancer biomarker discovery in general and development of a sampling error-resistant clinical biopsy test for prediction of prostate cancer aggressiveness.

  18. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; chi(2) test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  19. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    Science.gov (United States)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html .

  20. Understanding changes in employees' identification and professional identity : the case of teachers in higher vocational education in the Netherlands

    NARCIS (Netherlands)

    Max Aangenendt

    2015-01-01

    The central aim of this dissertation is to increase our understanding of changes in identifications and in the professional identity of employees, by investigating the prominent foci of identification, their mix in higher-order social identities and the personal and organisational factors (HRM and

  1. Error analysis for 1-1/2-loop semiscale system isothermal test data

    International Nuclear Information System (INIS)

    Feldman, E.M.; Naff, S.A.

    1975-05-01

    An error analysis was performed on the measurements made during the isothermal portion of the Semiscale Blowdown and Emergency Core Cooling (ECC) Project. A brief description of the measurement techniques employed, identification of potential sources of errors, and quantification of the errors associated with data is presented. (U.S.)

  2. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on past error correction data. We find that, using these estimated error rates, the probability of error correction failure can be significantly reduced, by a factor that increases with the code distance.
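
    A hedged sketch of the estimation idea, assuming windowed syndrome counts as observations and a standard Gaussian-process regressor; the data generation, window size and kernel are invented and this is not the authors' protocol.

        # Estimate a slowly drifting error rate from past correction data
        # with a Gaussian process, then predict it for upcoming cycles.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        rng = np.random.default_rng(2)
        t = np.arange(200, dtype=float)                  # window index
        true_rate = 0.01 + 0.005 * np.sin(t / 40.0)      # time-dependent rate
        # Each point: fraction of flagged syndromes in a window of 1000 cycles
        observed = rng.binomial(1000, true_rate) / 1000.0

        gp = GaussianProcessRegressor(kernel=RBF(50.0) + WhiteKernel(1e-5))
        gp.fit(t[:150, None], observed[:150])            # train on the past
        pred, sigma = gp.predict(t[150:, None], return_std=True)  # look ahead
        print(f"mean abs forecast error: {np.abs(pred - true_rate[150:]).mean():.4f}")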

  3. Low-frequency Periodic Error Identification and Compensation for Star Tracker Attitude Measurement

    Institute of Scientific and Technical Information of China (English)

    WANG Jiongqi; XIONG Kai; ZHOU Haiyin

    2012-01-01

    The low-frequency periodic error of a star tracker is one of the most critical problems for high-accuracy satellite attitude determination. In this paper an approach is proposed to identify and compensate the low-frequency periodic error of a star tracker in attitude measurement. The analytical expression between the estimated gyro drift and the low-frequency periodic error of the star tracker is derived first. The low-frequency periodic error, which can be expressed by a Fourier series, is then identified from the frequency spectrum of the estimated gyro drift according to the solution of the first step. Furthermore, a compensation model of the low-frequency periodic error is established based on the identified parameters to improve the attitude determination accuracy. Finally, promising simulated experimental results demonstrate the validity and effectiveness of the proposed method. The periodic error for attitude determination is essentially eliminated and the estimation precision is greatly improved.
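
    A minimal sketch of the identify-and-compensate loop described above, assuming the estimated gyro drift is available as a sampled signal and that its periodic part sits on exact FFT bins; all signals and numbers are synthetic.

        # Identify the dominant low-frequency Fourier components of an
        # estimated gyro drift, then subtract them as a compensation term.
        import numpy as np

        rng = np.random.default_rng(3)
        fs, n = 10.0, 4096                        # sample rate (Hz), record length
        t = np.arange(n) / fs
        f1, f2 = 8 * fs / n, 24 * fs / n          # exact FFT-bin frequencies
        drift = (0.02 * np.sin(2 * np.pi * f1 * t)
                 + 0.01 * np.cos(2 * np.pi * f2 * t)
                 + rng.normal(0, 0.002, n))       # hypothetical drift estimate

        spec = np.fft.rfft(drift)
        freqs = np.fft.rfftfreq(n, d=1 / fs)
        dominant = np.argsort(np.abs(spec))[-2:]  # two strongest spectral lines
        comp = np.zeros(n)
        for k in dominant:                        # rebuild identified components
            comp += (2 / n) * (spec[k].real * np.cos(2 * np.pi * freqs[k] * t)
                               - spec[k].imag * np.sin(2 * np.pi * freqs[k] * t))
        print(f"drift rms before: {drift.std():.4f}, after: {(drift - comp).std():.4f}")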

  4. Teacher knowledge of error analysis in differential calculus

    Directory of Open Access Journals (Sweden)

    Eunice K. Moru

    2014-12-01

    The study investigated teacher knowledge of error analysis in differential calculus. Two teachers were the sample of the study: one a subject specialist and the other a mathematics education specialist. Questionnaires and interviews were used for data collection. The findings of the study reflect that the teachers’ knowledge of error analysis was characterised by the following assertions, which are backed up with some evidence: (1) teachers identified the errors correctly, (2) the generalised error identification resulted in opaque analysis, (3) some of the identified errors were not interpreted from multiple perspectives, (4) teachers’ evaluation of errors was either local or global and (5) in remedying errors accuracy and efficiency were emphasised more than conceptual understanding. The implications of the findings of the study for teaching include engaging in error analysis continuously as this is one way of improving knowledge for teaching.

  5. Competitive Intelligence: Significance in Higher Education

    Science.gov (United States)

    Barrett, Susan E.

    2010-01-01

    Historically noncompetitive, the higher education sector is now having to adjust dramatically to new and increasing demands on numerous levels. To remain successfully operational within the higher educational market universities today must consider all relevant forces which can impact present and future planning. Those institutions that were…

  6. Efficient error correction for next-generation sequencing of viral amplicons.

    Science.gov (United States)

    Skums, Pavel; Dimitrova, Zoya; Campo, David S; Vaughan, Gilberto; Rossi, Livia; Forbi, Joseph C; Yokosawa, Jonny; Zelikovsky, Alex; Khudyakov, Yury

    2012-06-25

    Next-generation sequencing allows the analysis of an unprecedented number of viral sequence variants from infected patients, presenting a novel opportunity for understanding virus evolution, drug resistance and immune escape. However, sequencing in bulk is error prone. Thus, the generated data require error identification and correction. Most error-correction methods to date are not optimized for amplicon analysis and assume that the error rate is randomly distributed. Recent quality assessment of amplicon sequences obtained using 454-sequencing showed that the error rate is strongly linked to the presence and size of homopolymers, position in the sequence and length of the amplicon. All these parameters are strongly sequence specific and should be incorporated into the calibration of error-correction algorithms designed for amplicon sequencing. In this paper, we present two new efficient error correction algorithms optimized for viral amplicons: (i) k-mer-based error correction (KEC) and (ii) empirical frequency threshold (ET). Both were compared to a previously published clustering algorithm (SHORAH), in order to evaluate their relative performance on 24 experimental datasets obtained by 454-sequencing of amplicons with known sequences. All three algorithms show similar accuracy in finding true haplotypes. However, KEC and ET were significantly more efficient than SHORAH in removing false haplotypes and estimating the frequency of true ones. Both algorithms, KEC and ET, are highly suitable for rapid recovery of error-free haplotypes obtained by 454-sequencing of amplicons from heterogeneous viruses. The implementations of the algorithms and data sets used for their testing are available at: http://alan.cs.gsu.edu/NGS/?q=content/pyrosequencing-error-correction-algorithm.
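
    A toy sketch in the spirit of k-mer-based error identification, not the published KEC implementation: k-mers whose count falls below a frequency threshold are flagged as likely sequencing errors. The reads, k and threshold below are invented.

        # Flag positions of rare k-mers within each read.
        from collections import Counter

        def kmers(read, k):
            return [read[i:i + k] for i in range(len(read) - k + 1)]

        def flag_error_positions(reads, k=5, min_count=2):
            counts = Counter(km for r in reads for km in kmers(r, k))
            flagged = {}
            for idx, r in enumerate(reads):
                bad = [i for i, km in enumerate(kmers(r, k)) if counts[km] < min_count]
                if bad:
                    flagged[idx] = bad      # start positions of rare k-mers
            return flagged

        # 9 identical amplicon reads plus one with a substitution error
        reads = ["ACGTACGTGCATTGCA"] * 9 + ["ACGTACCTGCATTGCA"]
        print(flag_error_positions(reads))  # only the last read is flagged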

  7. Time-discrete higher order ALE formulations: a priori error analysis

    KAUST Repository

    Bonito, Andrea; Kyza, Irene; Nochetto, Ricardo H.

    2013-01-01

    We derive optimal a priori error estimates for discontinuous Galerkin (dG) time discrete schemes of any order applied to an advection-diffusion model defined on moving domains and written in the Arbitrary Lagrangian Eulerian (ALE) framework. Our

  8. Identification of Error of Commissions in the LOCA Using the CESA Method

    Energy Technology Data Exchange (ETDEWEB)

    Tukhbyet-olla, Myeruyert; Kang, Sunkoo; Kim, Jonghyun [KEPCO international nuclear graduate school, Ulsan (Korea, Republic of)

    2015-10-15

    Errors of commission (EOCs) can be defined as the performance of any inappropriate action that aggravates the situation. The primary focus in current PSA is placed on those sequences of hardware failures and/or EOOs that lead to unsafe system states. Although EOCs can be treated when identified, a systematic and comprehensive treatment of EOC opportunities remains outside the scope of PSAs. However, some past experiences in the nuclear industry show that EOCs have contributed to severe accidents. Some recent and emerging human reliability analysis (HRA) methods suggest approaches to identify and quantify EOCs, such as ATHEANA, MERMOS, GRS, MDTA, and CESA. The CESA method, developed by the Risk and Human Reliability Group at the Paul Scherrer Institute, is used to identify potentially risk-significant EOCs, given an existing PSA. The main idea underlying the method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. This paper aims at identifying EOCs in the LOCA by using the CESA method. The study is focused on the identification of EOCs, while their quantification is out of scope. The paper applies the CESA method to the emergency operating procedure (EOP) of the LOCA for APR1400 and presents potential EOCs that may aggravate the mitigation of the LOCA. Three candidate EOC events were identified using the operator action catalog and the RAW cutsets of the LOCA: inappropriate termination of the safety injection system, of the safety injection tanks, and of the containment spray system. After reviewing the top 100 accident sequences of the PSA, the study finally identified one EOC scenario and EOC path, namely inappropriate termination of the safety injection system.

  9. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software that is logically error-free and that, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  10. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are rarely used. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but this is not a widely used procedure. In this paper, the shares of systematic errors and random errors in the total error of exemplary probes are determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, the results show that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation in this case would not yield any significant benefits.
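
    A small sketch of how the random and systematic shares might be separated from repeated trigger measurements, assuming per-direction repeats are available; all probe numbers are synthetic, not from the paper.

        # Per-direction mean models pre-travel variation (systematic part);
        # per-direction spread models unidirectional repeatability (random part).
        import numpy as np

        rng = np.random.default_rng(4)
        directions = np.linspace(0, 2 * np.pi, 12, endpoint=False)
        pretravel = 2.0 + 1.5 * np.cos(3 * directions)   # um, lobed systematic bias
        trials = pretravel[:, None] + rng.normal(0, 0.3, (12, 25))  # 25 repeats each

        systematic = trials.mean(axis=1)                 # direction-dependent bias
        random_sd = trials.std(axis=1, ddof=1)           # unidirectional repeatability
        span = systematic.max() - systematic.min()       # pre-travel variation
        print(f"pre-travel variation: {span:.2f} um, "
              f"mean repeatability (1 sd): {random_sd.mean():.2f} um")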

  11. Measurement errors in voice-key naming latency for Hiragana.

    Science.gov (United States)

    Yamada, Jun; Tamaoka, Katsuo

    2003-12-01

    This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures, as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma et al.'s deltas and by excluding initial phonemes that induce measurement errors, such errors may be ignored when test items are words and other higher-level linguistic materials.

  12. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on the part of operators, designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis.

  13. The Influence of Organisational Identification on Employee Attitudes and Behaviours in Multinational Higher Education Institutions

    Science.gov (United States)

    Wilkins, Stephen; Butt, Muhammad Mohsin; Annabi, Carrie Amani

    2018-01-01

    In order to operate effectively and efficiently, most higher education institutions depend on employees performing extra-role behaviours and being committed to staying with the organisation. This study assesses the extent to which organisational identification and employee satisfaction are antecedents of these two important behaviours. Key…

  14. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  15. Frequency and analysis of non-clinical errors made in radiology reports using the National Integrated Medical Imaging System voice recognition dictation software.

    Science.gov (United States)

    Motyer, R E; Liddy, S; Torreggiani, W C; Buckley, O

    2016-11-01

    Voice recognition (VR) dictation of radiology reports has become the mainstay of reporting in many institutions worldwide. Despite benefit, such software is not without limitations, and transcription errors have been widely reported. Evaluate the frequency and nature of non-clinical transcription error using VR dictation software. Retrospective audit of 378 finalised radiology reports. Errors were counted and categorised by significance, error type and sub-type. Data regarding imaging modality, report length and dictation time was collected. 67 (17.72 %) reports contained ≥1 errors, with 7 (1.85 %) containing 'significant' and 9 (2.38 %) containing 'very significant' errors. A total of 90 errors were identified from the 378 reports analysed, with 74 (82.22 %) classified as 'insignificant', 7 (7.78 %) as 'significant', 9 (10 %) as 'very significant'. 68 (75.56 %) errors were 'spelling and grammar', 20 (22.22 %) 'missense' and 2 (2.22 %) 'nonsense'. 'Punctuation' error was most common sub-type, accounting for 27 errors (30 %). Complex imaging modalities had higher error rates per report and sentence. Computed tomography contained 0.040 errors per sentence compared to plain film with 0.030. Longer reports had a higher error rate, with reports >25 sentences containing an average of 1.23 errors per report compared to 0-5 sentences containing 0.09. These findings highlight the limitations of VR dictation software. While most error was deemed insignificant, there were occurrences of error with potential to alter report interpretation and patient management. Longer reports and reports on more complex imaging had higher error rates and this should be taken into account by the reporting radiologist.

  16. Quantifying errors and omissions in alien species lists: The introduction status of Melaleuca species in South Africa as a case study

    Directory of Open Access Journals (Sweden)

    Llewellyn Jacobs

    2017-01-01

    Introduced species lists provide essential background information for biological invasions research and management. The compilation of these lists is, however, prone to a variety of errors. We highlight the frequency and consequences of such errors using introduced Melaleuca (sensu lato, including Callistemon) species in South Africa as a case study. We examined 111 herbarium specimens from South Africa and noted the categories and sub-categories of errors that occurred in identification. We also used information from herbarium specimens and distribution data collected in the field to determine whether a species was introduced, naturalized and invasive. We found that 72% of the specimens were not named correctly. The errors were due to human error (70%; misidentification and subsequently improved identifications) and species identification problems (30%; synonyms arising from the inclusion of Callistemon, and unresolved taxonomy). At least 36 Melaleuca species have been introduced to South Africa, and field observations indicate that ten of these have naturalized, including five that are invasive. While most of the errors likely have negligible impact on management, we highlight one case where incorrect identification led to an inappropriate management approach, and some instances of errors in published lists. Invasive species lists need to be carefully reviewed to minimise errors, and herbarium specimens supported by DNA identification are required where identification using morphological features is particularly challenging.

  17. Error monitoring and empathy: Explorations within a neurophysiological context.

    Science.gov (United States)

    Amiruddin, Azhani; Fueggle, Simone N; Nguyen, An T; Gignac, Gilles E; Clunies-Ross, Karen L; Fox, Allison M

    2017-06-01

    Past literature has proposed that empathy consists of two components: cognitive and affective empathy. Error monitoring mechanisms indexed by the error-related negativity (ERN) have been associated with empathy. Studies have found that a larger ERN is associated with higher levels of empathy. We aimed to expand upon previous work by investigating how error monitoring relates to the independent theoretical domains of cognitive and affective empathy. Study 1 (N = 24) explored the relationship between error monitoring mechanisms and subcomponents of empathy using the Questionnaire of Cognitive and Affective Empathy and found no relationship. Study 2 (N = 38) explored the relationship between the error monitoring mechanisms and overall empathy. Contrary to past findings, there was no evidence to support a relationship between error monitoring mechanisms and scores on empathy measures. A subsequent meta-analysis (Study 3, N = 125) summarizing the relationship across previously published studies together with the two studies reported in the current paper indicated that overall there was no significant association between ERN and empathy and that there was significant heterogeneity across studies. Future investigations exploring the potential variables that may moderate these relationships are discussed. © 2017 Society for Psychophysiological Research.

  18. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions producing fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is likely possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation centre that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  19. Feasibility Study on Tension Estimation Technique for Hanger Cables Using the FE Model-Based System Identification Method

    Directory of Open Access Journals (Sweden)

    Kyu-Sik Park

    2015-01-01

    Hanger cables in suspension bridges are partly constrained by horizontal clamps, so existing tension estimation methods based on a single cable model are prone to higher errors as the cable gets shorter and becomes more sensitive to flexural rigidity. Therefore, inverse analysis and system identification methods based on finite element models have been suggested recently. In this paper, the applicability of system identification methods is investigated using the hanger cables of Gwang-An bridge. The test results show that the inverse analysis and system identification methods based on finite element models are more reliable than the existing string theory and the linear regression method for calculating the tension, in terms of natural frequency errors. However, the estimation error of tension can vary according to the accuracy of the finite element model in model-based methods. In particular, the boundary conditions affect the results more profoundly as the cable gets shorter. Therefore, it is important to identify the boundary conditions through experiment where possible. The FE model-based tension estimation method using system identification can take various boundary conditions into account. Also, since it is not sensitive to the number of natural frequency inputs, the availability of this method is high.
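
    For contrast with the FE-model-based approach, the classical taut-string estimate mentioned in the abstract can be sketched directly; the cable length, mass per unit length and frequencies below are hypothetical.

        # Taut-string tension estimate: mode-n natural frequency
        # f_n = (n / 2L) * sqrt(T / m), so T = 4 * m * (L * f_n / n)**2.

        def string_tension(f_n, n, L, m):
            """Tension (N) from the n-th natural frequency of a taut string."""
            return 4.0 * m * (L * f_n / n) ** 2

        L, m = 12.0, 47.0                  # short hanger cable: length (m), kg/m
        for n, f in enumerate([6.1, 12.3, 18.7], start=1):
            print(f"mode {n}: T ~ {string_tension(f, n, L, m) / 1e3:.0f} kN")
        # For short cables, bending stiffness and clamped ends bias these
        # estimates, which motivates the FE-model-based identification above.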

  20. Death Certification Errors and the Effect on Mortality Statistics.

    Science.gov (United States)

    McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth

    Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician), but did find significant differences in major errors by place of death. Errors of this prevalence distort national mortality statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.

  1. Error Detection and Error Classification: Failure Awareness in Data Transfer Scheduling

    Energy Technology Data Exchange (ETDEWEB)

    Louisiana State University; Balman, Mehmet; Kosar, Tevfik

    2010-10-27

    Data transfer in distributed environments is prone to frequent failures resulting from back-end system level problems, like connectivity failures, which are technically untraceable by users. Error messages are not logged efficiently, and are sometimes not relevant or useful from the users' point of view. Our study explores the possibility of an efficient error detection and reporting system for such environments. Prior knowledge about the environment and awareness of the actual reason behind a failure would enable higher level planners to make better and more accurate decisions. It is necessary to have well defined error detection and error reporting methods to increase the usability and serviceability of existing data transfer protocols and data management systems. We investigate the applicability of early error detection and error classification techniques and propose an error reporting framework and a failure-aware data transfer life cycle to improve arrangement of data transfer operations and to enhance decision making of data transfer schedulers.

  2. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    Science.gov (United States)

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall level and source-specific work-related stressors and the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate, based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for documented incident-report medication errors by HCPs, together with self-reported sources of stress from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month. HCPs with high overall stress showed a trend towards being two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors, rather than self-report, to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors, whereas overall stress revealed a 2-fold higher trend.

  3. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  4. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed‐loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before‐and‐after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed‐loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre‐intervention and 2.0% of 2353 orders afterwards. Medical staff required 15 s to prescribe a regular inpatient drug pre‐intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre‐intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed‐loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication‐related tasks increased. PMID:17693676

  5. Identification of proteomic biomarkers predicting prostate cancer aggressiveness and lethality despite biopsy-sampling error

    OpenAIRE

    Shipitsin, M; Small, C; Choudhury, S; Giladi, E; Friedlander, S; Nardone, J; Hussain, S; Hurley, A D; Ernst, C; Huang, Y E; Chang, H; Nifong, T P; Rimm, D L; Dunyak, J; Loda, M

    2014-01-01

    Background: Key challenges of biopsy-based determination of prostate cancer aggressiveness include tumour heterogeneity, biopsy-sampling error, and variations in biopsy interpretation. The resulting uncertainty in risk assessment leads to significant overtreatment, with associated costs and morbidity. We developed a performance-based strategy to identify protein biomarkers predictive of prostate cancer aggressiveness and lethality regardless of biopsy-sampling variation. Methods: Prostatectom...

  6. Associations between communication climate and the frequency of medical error reporting among pharmacists within an inpatient setting.

    Science.gov (United States)

    Patterson, Mark E; Pace, Heather A; Fincham, Jack E

    2013-09-01

    Although error-reporting systems enable hospitals to accurately track safety climate through the identification of adverse events, these systems may be underused within a work climate of poor communication. The objective of this analysis is to identify the extent to which perceived communication climate among hospital pharmacists impacts medical error reporting rates. This cross-sectional study used survey responses from more than 5000 pharmacists responding to the 2010 Hospital Survey on Patient Safety Culture (HSOPSC). Two composite scores were constructed for "communication openness" and "feedback and communication about error," respectively. Error reporting frequency was defined from the survey question, "In the past 12 months, how many event reports have you filled out and submitted?" Multivariable logistic regressions were used to estimate the likelihood of medical error reporting conditional upon communication openness or feedback levels, controlling for pharmacist years of experience, hospital geographic region, and ownership status. Pharmacists with higher communication openness scores compared with lower scores were 40% more likely to have filed or submitted a medical error report in the past 12 months (OR, 1.4; 95% CI, 1.1-1.7; P = 0.004). In contrast, pharmacists with higher communication feedback scores were not any more likely than those with lower scores to have filed or submitted a medical error report in the past 12 months (OR, 1.0; 95% CI, 0.8-1.3; P = 0.97). A hospital work climate that encourages pharmacists to freely communicate about problems related to patient safety is conducive to medical error reporting. The presence of feedback infrastructures about error may not be sufficient to induce error-reporting behavior.
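
    A hedged sketch of the analysis pattern, not the authors' code: a logistic regression of "filed at least one report" on a communication-openness composite, with the odds ratio and 95% CI read off the fitted coefficients. All data are synthetic.

        # Odds ratio for error reporting per point of communication openness.
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(5)
        n = 2000
        openness = rng.normal(3.5, 0.8, n)              # 1-5 composite score
        years = rng.integers(1, 30, n)
        logit = -3.0 + 0.45 * openness + 0.01 * years   # true model (invented)
        reported = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        X = sm.add_constant(np.column_stack([openness, years]))
        fit = sm.Logit(reported, X).fit(disp=0)
        or_, (lo, hi) = np.exp(fit.params[1]), np.exp(fit.conf_int()[1])
        print(f"OR per point of openness: {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")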

  7. Learning from Errors in Dual Vocational Education: Video-Enhanced Instructional Strategies

    Science.gov (United States)

    Cattaneo, Alberto A. P.; Boldrini, Elena

    2017-01-01

    Purpose: Starting from the identification of some theoretically driven instructional principles, this paper presents a set of empirical cases based on strategies to learn from errors. The purpose of this paper is to provide first evidence about the feasibility and the effectiveness for learning of video-enhanced error-based strategies in…

  8. A power law global error model for the identification of differentially expressed genes in microarray data

    Directory of Open Access Journals (Sweden)

    Granucci Francesca

    2004-12-01

    Background: High-density oligonucleotide microarray technology enables the discovery of genes that are transcriptionally modulated in different biological samples due to physiology, disease or intervention. Methods for the identification of these so-called "differentially expressed genes" (DEG) would largely benefit from a deeper knowledge of the intrinsic measurement variability. Though it is clear that variance of repeated measures is highly dependent on the average expression level of a given gene, there is still a lack of consensus on how signal reproducibility is linked to signal intensity. The aim of this study was to empirically model the variance versus mean dependence in microarray data to improve the performance of existing methods for identifying DEG. Results: In the present work we used data generated by our lab as well as publicly available data sets to show that dispersion of repeated measures depends on location of the measures themselves following a power law. This enables us to construct a power law global error model (PLGEM) that is applicable to various Affymetrix GeneChip data sets. A new DEG identification method is therefore proposed, consisting of a statistic designed to make explicit use of model-derived measurement spread estimates and a resampling-based hypothesis testing algorithm. Conclusions: The new method provides control of the false positive rate, a good sensitivity vs. specificity trade-off and consistent results with varying numbers of replicates, even using single samples.
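
    A minimal sketch of the core PLGEM idea, fitting the power-law spread-versus-mean relation as a straight line in log-log space; the expression data and parameters are synthetic, not from the paper.

        # Replicate spread follows sd ~ k * mean**beta, fitted in log-log space.
        import numpy as np

        rng = np.random.default_rng(6)
        genes = 5000
        mu = 10 ** rng.uniform(1, 4, genes)           # mean expression per gene
        sd_true = 0.8 * mu ** 0.7                     # power-law spread, beta = 0.7
        replicates = mu[:, None] + rng.normal(0, sd_true[:, None], (genes, 4))

        m = replicates.mean(axis=1)
        s = replicates.std(axis=1, ddof=1)
        slope, intercept = np.polyfit(np.log10(m), np.log10(s), 1)
        print(f"fitted beta ~ {slope:.2f}, k ~ {10**intercept:.2f}")

        def model_sd(mean):   # spread predicted by the fitted power law
            return 10 ** intercept * mean ** slope

        print(f"predicted sd at mean 1000: {model_sd(1000):.1f}")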

  9. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  10. Error identification in a high-volume clinical chemistry laboratory: Five-year experience.

    Science.gov (United States)

    Jafri, Lena; Khan, Aysha Habib; Ghani, Farooq; Shakeel, Shahid; Raheem, Ahmed; Siddiqui, Imran

    2015-07-01

    Quality indicators for assessing the performance of a laboratory require a systematic and continuous approach in collecting and analyzing data. The aim of this study was to determine the frequency of errors utilizing the quality indicators in a clinical chemistry laboratory and to convert errors to the Sigma scale. Five-year quality indicator data of a clinical chemistry laboratory were evaluated to describe the frequency of errors. An 'error' was defined as a defect during the entire testing process, from the time the requisition was raised and phlebotomy was done until the result was dispatched. An indicator with a Sigma value of 4 was considered good, but a process for which the Sigma value was 5 (i.e. 99.977% error-free) was considered well controlled. In the five-year period, a total of 6,792,020 specimens were received in the laboratory. Among a total of 17,631,834 analyses, 15.5% were from within the hospital. The total error rate was 0.45%, and of all the quality indicators used in this study the average Sigma level was 5.2. Three indicators - visible hemolysis, failure of proficiency testing and delay in stat tests - were below 5 on the Sigma scale and highlight the need to rigorously monitor these processes. Using Six Sigma metrics, quality in a clinical laboratory can be monitored more effectively and benchmarks can be set for improving efficiency.
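
    A short sketch of the error-rate-to-Sigma conversion implied above, using the conventional 1.5-sigma long-term shift; conventions vary, so treat the constants as assumptions rather than the study's exact method.

        # Convert an error rate to a Sigma level via the normal quantile.
        from scipy.stats import norm

        def sigma_level(error_rate, shift=1.5):
            dpmo = error_rate * 1_000_000           # defects per million
            return norm.ppf(1 - dpmo / 1_000_000) + shift

        print(f"{sigma_level(0.0045):.2f}")   # overall 0.45% error rate -> ~4.1
        print(f"{sigma_level(0.00023):.2f}")  # 99.977% error-free -> ~5.0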

  11. Does methodology matter in eyewitness identification research? The effect of live versus video exposure on eyewitness identification accuracy.

    Science.gov (United States)

    Pozzulo, Joanna D; Crescini, Charmagne; Panton, Tasha

    2008-01-01

    The present study examined the effect of mode of target exposure (live versus video) on eyewitness identification accuracy. Adult participants (N=104) were exposed to a staged crime that they witnessed either live or on videotape. Participants were then asked to rate their stress and arousal levels prior to being presented with either a target-present or -absent simultaneous lineup. Across target-present and -absent lineups, mode of target exposure did not have a significant effect on identification accuracy. However, mode of target exposure was found to have a significant effect on stress and arousal levels. Participants who witnessed the crime live had higher levels of stress and arousal than those who were exposed to the videotaped crime. A higher level of arousal was significantly related to poorer identification accuracy for those in the video condition. For participants in the live condition, however, stress and arousal had no effect on eyewitness identification accuracy. Implications of these findings with regard to the generalizability of laboratory-based research on eyewitness testimony to real-life crime are discussed.

  12. Making Residents Part of the Safety Culture: Improving Error Reporting and Reducing Harms.

    Science.gov (United States)

    Fox, Michael D; Bump, Gregory M; Butler, Gabriella A; Chen, Ling-Wan; Buchert, Andrew R

    2017-01-30

    Reporting medical errors is a focus of the patient safety movement. As frontline physicians, residents are optimally positioned to recognize errors and flaws in systems of care. Previous work highlights the difficulty of engaging residents in identification and/or reduction of medical errors and in integrating these trainees into their institutions' cultures of safety. The authors describe the implementation of a longitudinal, discipline-based, multifaceted curriculum to enhance the reporting of errors by pediatric residents at Children's Hospital of Pittsburgh of University of Pittsburgh Medical Center. The key elements of this curriculum included providing the necessary education to identify medical errors with an emphasis on systems-based causes, modeling of error reporting by faculty, and integrating error reporting and discussion into the residents' daily activities. The authors tracked monthly error reporting rates by residents and other health care professionals, in addition to serious harm event rates at the institution. The interventions resulted in significant increases in error reports filed by residents, from 3.6 to 37.8 per month over 4 years. The rise in error reporting correlated with a decline in serious harm events, from 15.0 to 8.1 per month over 4 years (P = 0.01). Integrating patient safety into everyday resident responsibilities encourages frequent reporting and discussion of medical errors and leads to improvements in patient care. Multiple simultaneous interventions are essential to making residents part of the safety culture of their training hospitals.

  13. Guidelines for system modeling: pre-accident human errors, rev.0

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Dae Il; Jung, W. D.; Lee, Y. H.; Hwang, M. J.; Yang, J. E

    2004-01-01

    The evaluation results of the Human Reliability Analysis (HRA) of pre-accident human errors in the probabilistic safety assessment (PSA) for the Korea Standard Nuclear Power Plant (KSNP), performed using the ASME PRA standard, show that more than 50% of the 10 items to be improved are related to their identification and screening analysis. Thus, we developed a guideline for modeling pre-accident human errors that the system analyst can use to resolve these items. The developed guideline consists of modeling criteria for pre-accident human errors (identification, qualitative screening, and common restoration errors) and detailed guidelines for pre-accident human errors relating to testing, maintenance, and calibration works of nuclear power plants (NPPs). The system analyst uses the developed guideline and applies it to the system he or she takes care of; the HRA analyst reviews the application results. We applied the developed guideline to the auxiliary feed water system of the KSNP to show its usefulness. The application results show that more than 50% of the items to be improved for pre-accident human errors of the auxiliary feed water system are resolved. The guideline developed in this study can be used for other NPPs as well as the KSNP. It is expected that both the use of a detailed procedure, to be developed in the future, for the quantification of pre-accident human errors and the guideline developed in this study will greatly enhance the PSA quality in the HRA of pre-accident human errors.

  14. Guidelines for system modeling: pre-accident human errors, rev.0

    International Nuclear Information System (INIS)

    Kang, Dae Il; Jung, W. D.; Lee, Y. H.; Hwang, M. J.; Yang, J. E.

    2004-01-01

    The evaluation results of the Human Reliability Analysis (HRA) of pre-accident human errors in the probabilistic safety assessment (PSA) for the Korea Standard Nuclear Power Plant (KSNP), performed using the ASME PRA standard, show that more than 50% of the 10 items to be improved are related to their identification and screening analysis. Thus, we developed a guideline for modeling pre-accident human errors that the system analyst can use to resolve these items. The developed guideline consists of modeling criteria for pre-accident human errors (identification, qualitative screening, and common restoration errors) and detailed guidelines for pre-accident human errors relating to testing, maintenance, and calibration works of nuclear power plants (NPPs). The system analyst uses the developed guideline and applies it to the system he or she takes care of; the HRA analyst reviews the application results. We applied the developed guideline to the auxiliary feed water system of the KSNP to show its usefulness. The application results show that more than 50% of the items to be improved for pre-accident human errors of the auxiliary feed water system are resolved. The guideline developed in this study can be used for other NPPs as well as the KSNP. It is expected that both the use of a detailed procedure, to be developed in the future, for the quantification of pre-accident human errors and the guideline developed in this study will greatly enhance the PSA quality in the HRA of pre-accident human errors.

  15. Medication errors: classification of seriousness, type, and of medications involved in the reports from a university teaching hospital

    Directory of Open Access Journals (Sweden)

    Gabriella Rejane dos Santos Dalmolin

    2013-12-01

    Full Text Available Medication errors can be frequent in hospitals; these errors are multidisciplinary and occur at various stages of drug therapy. The present study evaluated the seriousness, the type and the drugs involved in medication errors reported at the Hospital de Clínicas de Porto Alegre. We analyzed written error reports for 2010-2011. The sample consisted of 165 reports. The errors identified were classified according to seriousness, type and pharmacological class. Of these, 114 reports were categorized as actual errors (medication errors) and 51 reports were categorized as potential errors. There were more medication error reports in 2011 compared to 2010, but there was no significant change in the seriousness of the reports. The most common type of error was prescribing error (48.25%). Errors that occurred during the process of drug therapy sometimes generated additional medication errors. In the 114 reports of medication errors identified, 122 drugs were cited. Reflection on medication errors, the possibility of harm resulting from these errors, and the methods for error identification and evaluation should include a broad perspective of the aspects involved in the occurrence of errors. Patient safety depends on the process of communication involving errors, on the proper recording of information, and on the monitoring itself.

  16. Minimum Probability of Error-Based Equalization Algorithms for Fading Channels

    Directory of Open Access Journals (Sweden)

    Janos Levendovszky

    2007-06-01

    Full Text Available Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE and guarantee better performance than the traditional zero forcing (ZF or minimum mean square error (MMSE algorithms. The new equalization methods require channel state information which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
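
    A minimal numerical sketch of the baseline the paper compares against: a linear MMSE equalizer computed from a known channel impulse response and noise variance. The channel taps, filter length, and delay below are illustrative assumptions, not values from the paper, and the novel PE-minimizing algorithms themselves are not reproduced here.

    ```python
    import numpy as np

    def mmse_equalizer(h, noise_var, n_taps=7, delay=3):
        """Linear MMSE equalizer taps for a known channel impulse response h
        (baseline sketch only; not the paper's PE-minimizing algorithm)."""
        # Channel convolution matrix H: n_taps received samples vs. symbols
        n_out = n_taps + len(h) - 1
        H = np.zeros((n_taps, n_out))
        for i in range(n_taps):
            H[i, i:i + len(h)] = h
        # Desired response: a unit impulse at the chosen equalization delay
        d = np.zeros(n_out)
        d[delay] = 1.0
        # Wiener solution: w = (H H^T + sigma^2 I)^(-1) H d
        R = H @ H.T + noise_var * np.eye(n_taps)
        return np.linalg.solve(R, H @ d)

    # Example: two-path channel with noise variance 0.01 (assumed values)
    w = mmse_equalizer(np.array([1.0, 0.5]), noise_var=0.01)
    ```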

  17. Antiretroviral medication prescribing errors are common with hospitalization of HIV-infected patients.

    Science.gov (United States)

    Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel

    2014-01-01

    Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never corrected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
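
    For reference, relative risks like those quoted above can be reproduced from 2x2 counts with the standard log-scale (Katz) confidence interval. The group sizes below are hypothetical, since the abstract reports rates rather than the full counts.

    ```python
    import math

    def relative_risk(a, n1, c, n2):
        """RR of an event in an exposed group (a events out of n1) versus a
        comparison group (c out of n2), with a 95% CI on the log scale."""
        rr = (a / n1) / (c / n2)
        se_log = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
        lo = math.exp(math.log(rr) - 1.96 * se_log)
        hi = math.exp(math.log(rr) + 1.96 * se_log)
        return rr, (lo, hi)

    # e.g. admission error rates of 43% vs 28%, with hypothetical group
    # sizes of 100 each (the abstract gives only the rates):
    print(relative_risk(43, 100, 28, 100))  # RR ~ 1.54
    ```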

  18. High cortisol awakening response is associated with impaired error monitoring and decreased post-error adjustment.

    Science.gov (United States)

    Zhang, Liang; Duan, Hongxia; Qin, Shaozheng; Yuan, Yiran; Buchanan, Tony W; Zhang, Kan; Wu, Jianhui

    2015-01-01

    The cortisol awakening response (CAR), a rapid increase in cortisol levels following morning awakening, is an important aspect of hypothalamic-pituitary-adrenocortical axis activity. Alterations in the CAR have been linked to a variety of mental disorders and cognitive function. However, little is known regarding the relationship between the CAR and error processing, a phenomenon that is vital for cognitive control and behavioral adaptation. Using high-temporal resolution measures of event-related potentials (ERPs) combined with behavioral assessment of error processing, we investigated whether and how the CAR is associated with two key components of error processing: error detection and subsequent behavioral adjustment. Sixty university students performed a Go/No-go task while their ERPs were recorded. Saliva samples were collected at 0, 15, 30 and 60 min after awakening on the two consecutive days following ERP data collection. The results showed that a higher CAR was associated with slowed latency of the error-related negativity (ERN) and a higher post-error miss rate. The CAR was not associated with other behavioral measures such as the false alarm rate and the post-correct miss rate. These findings suggest that high CAR is a biological factor linked to impairments of multiple steps of error processing in healthy populations, specifically, the automatic detection of error and post-error behavioral adjustment. A common underlying neural mechanism of physiological and cognitive control may be crucial for engaging in both CAR and error processing.

  19. SU-E-T-377: Inaccurate Positioning Might Introduce Significant MapCheck Calibration Error in Flatten Filter Free Beams

    International Nuclear Information System (INIS)

    Wang, S; Chao, C; Chang, J

    2014-01-01

    Purpose: This study investigates the calibration error of detector sensitivity for MapCheck due to inaccurate positioning of the device, which is not taken into account by the current commercial iterative calibration algorithm. We hypothesize that the calibration is more vulnerable to positioning error for flattening filter free (FFF) beams than for conventional flattened beams. Methods: MapCheck2 was calibrated with 10MV conventional and FFF beams, with careful alignment and with 1cm positioning error during calibration, respectively. Open fields of 37cm x 37cm were delivered to gauge the impact of the resultant calibration errors. The local calibration error was modeled as a detector-independent multiplication factor, with which propagation error was estimated for positioning errors from 1mm to 1cm. The calibrated sensitivities, without positioning error, were compared between the conventional and FFF beams to evaluate the dependence on beam type. Results: The 1cm positioning error leads to 0.39% and 5.24% local calibration error in the conventional and FFF beams, respectively. After propagating to the edges of MapCheck, the calibration errors become 6.5% and 57.7%, respectively. The propagation error increases almost linearly with the positioning error. The difference of sensitivities between the conventional and FFF beams was small (0.11 ± 0.49%). Conclusion: The results demonstrate that positioning error is not handled by the current commercial calibration algorithm of MapCheck. In particular, the calibration errors for the FFF beams are ~9 times greater than those for the conventional beams with identical positioning error, and a small 1mm positioning error might lead to up to 8% calibration error. Since the sensitivities are only slightly dependent on beam type and the conventional beam is less affected by positioning error, it is advisable to cross-check the sensitivities between the conventional and FFF beams to detect

  20. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop, mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams, and devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery, to minimize clinical errors. This will increase fixed costs, especially in the shorter time frame. This paper draws the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run while also ensuring patient safety.

  1. Error in the delivery of radiation therapy: Results of a quality assurance review

    International Nuclear Information System (INIS)

    Huang, Grace; Medlam, Gaylene; Lee, Justin; Billingsley, Susan; Bissonnette, Jean-Pierre; Ringash, Jolie; Kane, Gabrielle; Hodgson, David C.

    2005-01-01

    Purpose: To examine error rates in the delivery of radiation therapy (RT), technical factors associated with RT errors, and the influence of a quality improvement intervention on the RT error rate. Methods and materials: We undertook a review of all RT errors that occurred at the Princess Margaret Hospital (Toronto) from January 1, 1997, to December 31, 2002. Errors were identified according to incident report forms that were completed at the time the error occurred. Error rates were calculated per patient, per treated volume (≥1 volume per patient), and per fraction delivered. The association between tumor site and error was analyzed. Logistic regression was used to examine the association between technical factors and the risk of error. Results: Over the study interval, there were 555 errors among 28,136 patient treatments delivered (error rate per patient = 1.97%, 95% confidence interval [CI], 1.81-2.14%) and among 43,302 treated volumes (error rate per volume = 1.28%, 95% CI, 1.18-1.39%). The proportion of fractions with errors from July 1, 2000, to December 31, 2002, was 0.29% (95% CI, 0.27-0.32%). Patients with sarcoma or head-and-neck tumors experienced error rates significantly higher than average (5.54% and 4.58%, respectively); however, when the number of treated volumes was taken into account, the head-and-neck error rate was no longer higher than average (1.43%). The use of accessories was associated with an increased risk of error, and internal wedges were more likely to be associated with an error than external wedges (relative risk = 2.04; 95% CI, 1.11-3.77). Eighty-seven errors (15.6%) were directly attributed to incorrect programming of the 'record and verify' system. Changes to planning and treatment processes aimed at reducing errors within the head-and-neck site group produced a substantial reduction in the error rate. Conclusions: Errors in the delivery of RT are uncommon and usually of little clinical significance. Patient subgroups and
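
    A short sketch of the interval estimate behind figures like "1.97% (95% CI, 1.81-2.14%)": a Wilson score interval for a binomial error proportion, which at these sample sizes closely matches the normal-approximation intervals typically reported.

    ```python
    import math

    def wilson_ci(errors, total, z=1.96):
        """95% Wilson score interval for an error proportion."""
        p = errors / total
        denom = 1 + z**2 / total
        centre = (p + z**2 / (2 * total)) / denom
        half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
        return centre - half, centre + half

    # 555 errors among 28,136 treatments -> ~1.97% (reported CI 1.81-2.14%)
    print(wilson_ci(555, 28136))
    ```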

  2. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  3. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...

  4. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals' psychological well-being.

    Science.gov (United States)

    Sirriyeh, Reema; Lawton, Rebecca; Gardner, Peter; Armitage, Gerry

    2010-12-01

    Previous research has established health professionals as secondary victims of medical error, with the identification of a range of emotional and psychological repercussions that may occur as a result of involvement in error [2, 3]. Due to the vast range of emotional and psychological outcomes, research to date has been inconsistent in the variables measured and tools used. Therefore, differing conclusions have been drawn as to the nature of the impact of error on professionals and the subsequent repercussions for their team, patients and healthcare institution. A systematic review was conducted. Data sources were identified using database searches, with additional reference and hand searching. Eligibility criteria were applied to all studies identified, resulting in a total of 24 included studies. Quality assessment was conducted with the included studies using a tool that was developed as part of this research, but due to the limited number and diverse nature of studies, no exclusions were made on this basis. Review findings suggest that there is consistent evidence for the widespread impact of medical error on health professionals. Psychological repercussions may include negative states such as shame, self-doubt, anxiety and guilt. Despite much attention devoted to the assessment of negative outcomes, the potential for positive outcomes resulting from error also became apparent, with increased assertiveness, confidence and improved colleague relationships reported. It is evident that involvement in a medical error can elicit a significant psychological response from the health professional involved. However, a lack of literature around coping and support, coupled with inconsistencies and weaknesses in methodology, may need to be addressed in future work.

  5. NDE errors and their propagation in sizing and growth estimates

    International Nuclear Information System (INIS)

    Horn, D.; Obrutsky, L.; Lakhan, R.

    2009-01-01

    The accuracy attributed to eddy current flaw sizing determines the amount of conservatism required in setting tube-plugging limits. Several sources of error contribute to the uncertainty of the measurements, and the way in which these errors propagate and interact affects the overall accuracy of the flaw size and flaw growth estimates. An example of this calculation is the determination of an upper limit on flaw growth over one operating period, based on the difference between two measurements. Signal-to-signal comparison involves a variety of human, instrumental, and environmental error sources; of these, some propagate additively and some multiplicatively. In a difference calculation, specific errors in the first measurement may be correlated with the corresponding errors in the second; others may be independent. Each of the error sources needs to be identified and quantified individually, as does its distribution in the field data. A mathematical framework for the propagation of the errors can then be used to assess the sensitivity of the overall uncertainty to each individual error component. This paper quantifies error sources affecting eddy current sizing estimates and presents analytical expressions developed for their effect on depth estimates. A simple case study is used to model the analysis process. For each error source, the distribution of the field data was assessed and propagated through the analytical expressions. While the sizing error obtained was consistent with earlier estimates and with deviations from ultrasonic depth measurements, the error on growth was calculated as significantly smaller than that obtained assuming uncorrelated errors. An interesting result of the sensitivity analysis in the present case study is the quantification of the error reduction available from post-measurement compensation of magnetite effects. With the absolute and difference error equations, variance-covariance matrices, and partial derivatives developed in
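
    The core point about correlated errors in a difference calculation can be illustrated in a few lines: when the two measurements share error components, the growth uncertainty shrinks relative to the uncorrelated case. The variances and correlation below are illustrative, not the paper's values.

    ```python
    import numpy as np

    def growth_uncertainty(var1, var2, correlation):
        """Standard deviation of a flaw-growth estimate formed as the
        difference of two sizing measurements:
        Var(d2 - d1) = var1 + var2 - 2*rho*sd1*sd2."""
        cov = correlation * np.sqrt(var1 * var2)
        return np.sqrt(var1 + var2 - 2 * cov)

    # Independent errors vs. strongly correlated errors (e.g. same probe,
    # same analyst) for two measurements each with 0.10 depth-units sd:
    print(growth_uncertainty(0.10**2, 0.10**2, 0.0))  # ~0.141
    print(growth_uncertainty(0.10**2, 0.10**2, 0.8))  # ~0.063
    ```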

  6. An assessment of the risk significance of human errors in selected PSAs and operating events

    International Nuclear Information System (INIS)

    Palla, R.L. Jr.; El-Bassioni, A.

    1991-01-01

    Sensitivity studies based on Probabilistic Safety Assessments (PSAs) for a pressurized water reactor and a boiling water reactor are described. In each case human errors modeled in the PSAs were categorized according to such factors as error type, location, timing, and plant personnel involved. Sensitivity studies were then conducted by varying the error rates in each category and evaluating the corresponding change in total core damage frequency and accident sequence frequency. Insights obtained are discussed and reasons for differences in risk sensitivity between plants are explored. A separate investigation into the role of human error in risk-important operating events is also described. This investigation involved the analysis of data from the USNRC Accident Sequence Precursor program to determine the effect of operator-initiated events on accident precursor trends, and to determine whether improved training can be correlated to current trends. The findings of this study are also presented. 5 refs., 15 figs., 1 tab

  7. Compensation of significant parametric uncertainties using sliding mode online learning

    Science.gov (United States)

    Schnetter, Philipp; Kruger, Thomas

    An augmented nonlinear inverse dynamics (NID) flight control strategy using sliding mode online learning for a small unmanned aircraft system (UAS) is presented. Because parameter identification for this class of aircraft often is not valid throughout the complete flight envelope, aerodynamic parameters used for model based control strategies may show significant deviations. For the concept of feedback linearization this leads to inversion errors that in combination with the distinctive susceptibility of small UAS towards atmospheric turbulence pose a demanding control task for these systems. In this work an adaptive flight control strategy using feedforward neural networks for counteracting such nonlinear effects is augmented with the concept of sliding mode control (SMC). SMC-learning is derived from variable structure theory. It considers a neural network and its training as a control problem. It is shown that by the dynamic calculation of the learning rates, stability can be guaranteed and thus increase the robustness against external disturbances and system failures. With the resulting higher speed of convergence a wide range of simultaneously occurring disturbances can be compensated. The SMC-based flight controller is tested and compared to the standard gradient descent (GD) backpropagation algorithm under the influence of significant model uncertainties and system failures.
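
    A heavily simplified sketch of the underlying idea, treating network training as a control problem: the learning rate is computed dynamically from the current tracking error and regressor magnitude rather than fixed as in plain gradient descent. The adaptation law, gain, and toy model below are placeholder assumptions, not the paper's controller.

    ```python
    import numpy as np

    def smc_style_update(w, phi, error, k=0.5, eps=1e-6):
        """One online weight update with an error-driven learning rate.
        Normalizing by ||phi||^2 bounds the step size (a placeholder
        adaptation law inspired by sliding-mode learning)."""
        eta = k * abs(error) / (phi @ phi + eps)  # dynamic learning rate
        return w + eta * error * phi

    # Track a linear-in-features signal despite an abrupt parameter
    # change (a crude stand-in for a sudden inversion error):
    rng = np.random.default_rng(0)
    w = np.zeros(3)
    true_w = np.array([1.0, -2.0, 0.5])
    for t in range(200):
        if t == 100:
            true_w = np.array([2.0, -1.0, 0.0])  # abrupt model change
        phi = rng.normal(size=3)
        e = true_w @ phi - w @ phi               # tracking error
        w = smc_style_update(w, phi, e)
    print(w)  # close to the post-change parameters
    ```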

  8. Rib fractures and their association With solid organ injury: higher rib fractures have greater significance for solid organ injury screening.

    Science.gov (United States)

    Rostas, Jack W; Lively, Timothy B; Brevard, Sidney B; Simmons, Jon D; Frotan, Mohammad A; Gonzalez, Richard P

    2017-04-01

    The purpose of this study was to identify patients with rib injuries who were at risk for solid organ injury. A retrospective chart review was performed of all blunt trauma patients with rib fractures during the period from July 2007 to July 2012. Data were analyzed for association of rib fractures and solid organ injury. In all, 1,103 rib fracture patients were identified; 142 patients had liver injuries with 109 (77%) associated right rib fractures. Right-sided rib fractures with highest sensitivity for liver injury were middle rib segment (5 to 8) and lower segment (9 to 12) with liver injury sensitivities of 68% and 43%, respectively (P rib fractures. Left middle segment rib fractures and lower segment rib fractures had sensitivities of 80% and 63% for splenic injury, respectively (P Rib fractures higher in the thoracic cage have significant association with solid organ injury. Using rib fractures from middle plus lower segments as indication for abdominal screening will significantly improve rib fracture sensitivity for identification of solid organ injury. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. Subclinical naming errors in mild cognitive impairment: A semantic deficit?

    Directory of Open Access Journals (Sweden)

    Indra F. Willers

    Full Text Available Abstract Mild cognitive impairment (MCI) is the transitional stage between normal aging and Alzheimer's disease (AD). Impairments in semantic memory have been demonstrated to be a critical factor in early AD. The Boston Naming Test (BNT) is a straightforward method of examining semantic or visuo-perceptual processing and therefore represents a potential diagnostic tool. The objective of this study was to examine naming ability and identify error types in patients with amnestic mild cognitive impairment (aMCI). Methods: Twenty aMCI patients, twenty AD patients and twenty-one normal controls, matched by age, sex and education level, were evaluated. As part of a further neuropsychological evaluation, all subjects performed the BNT. A comprehensive classification of error types was devised in order to compare performance and ascertain the semantic or perceptual origin of errors. Results: AD patients obtained significantly lower total scores on the BNT than aMCI patients and controls. aMCI patients did not differ significantly in total scores, but showed significantly more semantic errors compared to controls. Conclusion: This study reveals that semantic processing is impaired during confrontation naming in aMCI.

  10. Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.

    Science.gov (United States)

    Baldwin, Abigail; Rodriguez, Elizabeth S

    2016-02-01

    The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors that may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.

  11. Consequences of exposure measurement error for confounder identification in environmental epidemiology

    DEFF Research Database (Denmark)

    Budtz-Jørgensen, Esben; Keiding, Niels; Grandjean, Philippe

    2003-01-01

    Non-differential measurement error in the exposure variable is known to attenuate the dose-response relationship. The amount of attenuation introduced in a given situation is not only a function of the precision of the exposure measurement but also depends on the conditional variance of the true exposure given the other independent variables. In addition, confounder effects may also be affected by the exposure measurement error. These difficulties in statistical model development are illustrated by examples from an epidemiological study performed in the Faroe Islands to investigate the adverse...

  12. ERF/ERFC, Calculation of Error Function, Complementary Error Function, Probability Integrals

    International Nuclear Information System (INIS)

    Vogel, J.E.

    1983-01-01

    1 - Description of problem or function: ERF and ERFC are used to compute values of the error function and complementary error function for any real number. They may be used to compute other related functions such as the normal probability integrals. 4. Method of solution: The error function and complementary error function are approximated by rational functions. Three such rational approximations are used, selected according to the magnitude of x (the last region being x ≥ 4.0). In the first region the error function is computed directly and the complementary error function is computed via the identity erfc(x)=1.0-erf(x). In the other two regions the complementary error function is computed directly and the error function is computed from the identity erf(x)=1.0-erfc(x). The error function and complementary error function are real-valued functions of any real argument. The range of the error function is (-1,1). The range of the complementary error function is (0,2). 5. Restrictions on the complexity of the problem: The user is cautioned against using ERF to compute the complementary error function via the identity erfc(x)=1.0-erf(x). This subtraction may cause partial or total loss of significance for certain values of x.
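
    The cautioned-against loss of significance is easy to demonstrate in any language with a direct erfc implementation; a minimal sketch using Python's standard library:

    ```python
    import math

    # Computing erfc(x) as 1 - erf(x) loses significance for large x,
    # because erf(x) rounds to exactly 1.0 in double precision:
    x = 6.0
    naive = 1.0 - math.erf(x)  # 0.0: all significance lost
    direct = math.erfc(x)      # ~2.15e-17, computed directly
    print(naive, direct)
    ```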

  13. Blood transfusion sampling and a greater role for error recovery.

    Science.gov (United States)

    Oldham, Jane

    Patient identification errors in pre-transfusion blood sampling ('wrong blood in tube') are a persistent area of risk. These errors can potentially result in life-threatening complications. Current measures to address root causes of incidents and near misses have not resolved this problem and there is a need to look afresh at this issue. PROJECT PURPOSE: This narrative review of the literature is part of a wider system-improvement project designed to explore and seek a better understanding of the factors that contribute to transfusion sampling error as a prerequisite to examining current and potential approaches to error reduction. A broad search of the literature was undertaken to identify themes relating to this phenomenon. KEY DISCOVERIES: Two key themes emerged from the literature. Firstly, despite multi-faceted causes of error, the consistent element is the ever-present potential for human error. Secondly, current focus on error prevention could potentially be augmented with greater attention to error recovery. Exploring ways in which clinical staff taking samples might learn how to better identify their own errors is proposed to add to current safety initiatives.

  14. Error-related negativity varies with the activation of gender stereotypes.

    Science.gov (United States)

    Ma, Qingguo; Shu, Liangchao; Wang, Xiaoyi; Dai, Shenyi; Che, Hongmin

    2008-09-19

    The error-related negativity (ERN) was suggested to reflect the response-performance monitoring process. The purpose of this study is to investigate how the activation of gender stereotypes influences the ERN. Twenty-eight male participants were asked to complete a tool or kitchenware identification task. The prime stimulus is a picture of a male or female face and the target stimulus is either a kitchen utensil or a hand tool. The ERN amplitude on male-kitchenware trials is significantly larger than that on female-kitchenware trials, which reveals the low-level, automatic activation of gender stereotypes. The ERN that was elicited in this task has two sources--operation errors and the conflict between the gender stereotype activation and the non-prejudice beliefs. And the gender stereotype activation may be the key factor leading to this difference of ERN. In other words, the stereotype activation in this experimental paradigm may be indexed by the ERN.

  15. Organizational identification moderates the impact of organizational justice on job satisfaction.

    Science.gov (United States)

    Yuan, Guo; Jia, Libin; Zhao, Jian

    2016-03-09

    Few studies have examined the moderating effect of organizational identification on the relationship between organizational justice and job satisfaction. This study aimed to examine the trilateral relationship among organizational identification, organizational justice and job satisfaction, with a particular focus on the moderating effect of organizational identification. A total of 354 staff members completed measures of organizational justice, organizational identification and job satisfaction. Hierarchical regression analysis showed that organizational identification moderated the association between organizational justice and job satisfaction. When staff reported a low level of organizational identification, those with high organizational justice reported higher job satisfaction scores than those with low organizational justice. However, the impact of organizational justice on job satisfaction was not significant in the high organizational identification group. Organizational identification can significantly moderate the impact of organizational justice on job satisfaction. The significance and limitations of the results are discussed.

  16. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    Science.gov (United States)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and baseline operations.
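
    A sketch of the kind of data reduction such a comparative FTE analysis implies: summary statistics over sampled path deviations for each procedure. The deviation series, units, and sample sizes below are synthetic placeholders, not experiment data.

    ```python
    import numpy as np

    def flight_technical_error(deviations):
        """Summary statistics for FTE from a series of sampled path
        deviations (lateral, vertical, or airspeed)."""
        d = np.asarray(deviations, dtype=float)
        return {"mean": d.mean(), "sd": d.std(ddof=1),
                "rms": np.sqrt(np.mean(d**2)),
                "p95": np.percentile(np.abs(d), 95)}

    # Compare baseline vs HVO lateral deviations (synthetic, in n.mi.):
    baseline = np.random.default_rng(0).normal(0.0, 0.30, 500)
    hvo = np.random.default_rng(1).normal(0.0, 0.25, 500)
    print(flight_technical_error(baseline))
    print(flight_technical_error(hvo))
    ```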

  17. Agricultural injuries in Korea and errors in systems of safety

    Directory of Open Access Journals (Sweden)

    Hyocher Kim

    2016-07-01

    It was found that most agricultural injuries were caused by a complex layer of root causes which were classified as errors in the systems of safety. This result indicates that not only training and personal protective equipment, but also regulation of safety design, mitigation devices, inspection/maintenance of workplaces, and other factors play an important role in preventing agricultural injuries. The identification of errors will help farmers to implement easily an effective prevention programme.

  18. PARADOXES IN ORGANIZATIONAL IDENTIFICATION WITH AN HIGHER EDUCATION INSTITUTION

    Directory of Open Access Journals (Sweden)

    Edson Roberto Scharf

    2016-05-01

    Full Text Available The aim of this study was to understand the organizational identification of master's students at a higher education institution in Brazil, in support of educational management. Based on the study by Celsi and Gilli (2010), and with an exploratory, inductive character, the survey included all students in the course. The results point to the understanding that communication raises the visibility of the institution; that students perceive their future success as depending on the success of the organization; that students feel a sense of well-being about the values and beliefs conveyed in the communication; and that students are willing to make efforts beyond what is normally expected for the organization. Nevertheless, this same communication generates a low level of well-being and pride when students are asked whether they would comment on it to those close to them. In this study, this effect was called the 'Black Mirror' effect: students absorb important information from the communication, understanding and agreeing with its contents, but are unwilling to share it. The results allow the institution's managers to plan their communication with the community of potential students more adequately.

  19. Can the Bruckner test be used as a rapid screening test to detect significant refractive errors in children?

    Directory of Open Access Journals (Sweden)

    Kothari Mihir

    2007-01-01

    Full Text Available Purpose: To assess the suitability of the Brückner test as a screening test to detect significant refractive errors in children. Materials and Methods: A pediatric ophthalmologist prospectively observed the size and location of the pupillary crescent on the Brückner test and classified each eye as hyperopic, myopic or astigmatic. This was compared with the cycloplegic refraction. Detailed ophthalmic examination was done for all. Sensitivity, specificity, positive predictive value and negative predictive value of the Brückner test were determined for the defined cutoff levels of ametropia. Results: Ninety-six subjects were examined. Mean age was 8.6 years (range, 1 to 16 years). The Brückner test could be completed for all; the time taken to complete this test was 10 seconds per subject. The ophthalmologist identified 131 eyes as ametropic and 61 as emmetropic. The Brückner test had sensitivity 91%, specificity 72.8%, positive predictive value 85.5% and negative predictive value 83.6%. Of the 10 false negatives, four had compound hypermetropic astigmatism and three had myopia. Conclusions: The Brückner test can be used to rapidly screen children for significant refractive errors. The potential benefits from such use may be maximized if programs use the test with lower crescent measurement cutoffs, a crescent measurement ruler and a distance fixation target.
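
    The reported figures can be checked against a 2x2 table; the counts below are a hypothetical reconstruction roughly consistent with the abstract (131 eyes called ametropic, 61 emmetropic, 10 false negatives), since the full table is not given.

    ```python
    def screening_metrics(tp, fp, fn, tn):
        """Sensitivity, specificity, and predictive values for a
        screening test from its 2x2 confusion counts."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),
            "npv": tn / (tn + fn),
        }

    # Hypothetical counts (tp + fp = 131 eyes called ametropic,
    # tn + fn = 61 called emmetropic, fn = 10 false negatives):
    print(screening_metrics(tp=112, fp=19, fn=10, tn=51))
    ```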

  20. Practical error estimates for Reynolds' lubrication approximation and its higher order corrections

    Energy Technology Data Exchange (ETDEWEB)

    Wilkening, Jon

    2008-12-10

    Reynolds' lubrication approximation is used extensively to study flows between moving machine parts, in narrow channels, and in thin films. The solution of Reynolds' equation may be thought of as the zeroth-order term in an expansion of the solution of the Stokes equations in powers of the aspect ratio ε of the domain. In this paper, we show how to compute the terms in this expansion to arbitrary order on a two-dimensional, x-periodic domain and derive rigorous, a-priori error bounds for the difference between the exact solution and the truncated expansion solution. Unlike previous studies of this sort, the constants in our error bounds are either independent of the function h(x) describing the geometry, or depend on h and its derivatives in an explicit, intuitive way. Specifically, if the expansion is truncated at order 2k, the error is O(ε^(2k+2)) and h enters into the error bound only through its first and third inverse moments ∫₀¹ h(x)^(−m) dx, m = 1, 3, and via the max norms ‖(1/ℓ!) h^(ℓ−1) ∂ₓ^ℓ h‖_∞, 1 ≤ ℓ ≤ 2k + 2. We validate our estimates by comparing with finite element solutions and present numerical evidence that suggests that even when h is real analytic and periodic, the expansion solution forms an asymptotic series rather than a convergent series.

  1. Error identification and recovery by student nurses using human patient simulation: opportunity to improve patient safety.

    Science.gov (United States)

    Henneman, Elizabeth A; Roche, Joan P; Fisher, Donald L; Cunningham, Helene; Reilly, Cheryl A; Nathanson, Brian H; Henneman, Philip L

    2010-02-01

    This study examined types of errors that occurred or were recovered in a simulated environment by student nurses. Errors occurred in all four rule-based error categories, and all students committed at least one error. The most frequent errors occurred in the verification category. Another common error was related to physician interactions. The least common errors were related to coordinating information with the patient and family. Our finding that 100% of student subjects committed rule-based errors is cause for concern. To decrease errors and improve safe clinical practice, nurse educators must identify effective strategies that students can use to improve patient surveillance. Copyright 2010 Elsevier Inc. All rights reserved.

  2. Elucidation of cross-species proteomic effects in human and hominin bone proteome identification through a bioinformatics experiment

    DEFF Research Database (Denmark)

    Welker, F.

    2018-01-01

    Background: The study of ancient protein sequences is increasingly focused on the analysis of older samples, including those of ancient hominins. The analysis of such ancient proteomes thereby potentially suffers from "cross-species proteomic effects": the loss of peptide and protein identifications due to mismatches between the sample proteome and the modern reference database, and whether error-tolerant searches overcome this effect has not been demonstrated. If error-tolerant searches do not overcome the cross-species proteomic issue then there might be inherent biases in the identified proteomes. Here, a bioinformatics experiment is performed to test this using a set of modern human bone proteomes and three independent searches against different reference databases. Error-tolerant searching recovered part of the affected identifications, but roughly half of the mutable PSMs were not recovered. As a result, peptide and protein identification rates are higher in error-tolerant mode compared to non-error-tolerant searches but did not recover protein identifications completely. Data indicates that peptide length and the number of mutations...

  3. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

  4. Post-error expression of speed and force while performing a simple, monotonous task with a haptic pen

    NARCIS (Netherlands)

    Bruns, M.; Keyson, D.V.; Jabon, M.E.; Hummels, C.C.M.; Hekkert, P.P.M.; Bailenson, J.N.

    2013-01-01

    Control errors often occur in repetitive and monotonous tasks, such as manual assembly tasks. Much research has been done in the area of human error identification; however, most existing systems focus solely on the prediction of errors, not on increasing worker accuracy. The current study examines

  5. Fast, efficient error reconciliation for quantum cryptography

    International Nuclear Information System (INIS)

    Buttler, W.T.; Lamoreaux, S.K.; Torgerson, J.R.; Nickel, G.H.; Donahue, C.H.; Peterson, C.G.

    2003-01-01

    We describe an error-reconciliation protocol, which we call Winnow, based on the exchange of parity and Hamming's 'syndrome' for N-bit subunits of a large dataset. The Winnow protocol was developed in the context of quantum-key distribution and offers significant advantages and net higher efficiency compared to other widely used protocols within the quantum cryptography community. A detailed mathematical analysis of the Winnow protocol is presented in the context of practical implementations of quantum-key distribution; in particular, the information overhead required for secure implementation is one of the most important criteria in the evaluation of a particular error-reconciliation protocol. The increase in efficiency for the Winnow protocol is largely due to the reduction in authenticated public communication required for its implementation
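
    A toy sketch of the syndrome-exchange step on a single 7-bit block using the Hamming(7,4) parity-check matrix: the XOR of the two parties' syndromes locates a single-bit discrepancy. This illustrates only the error-locating step; the parity pre-screening, bit discarding for privacy, and the N-bit subunit generalization of the full Winnow protocol are omitted.

    ```python
    import numpy as np

    # Hamming parity-check matrix: column j is the binary encoding of j+1.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def syndrome(block):
        return H @ np.asarray(block) % 2

    alice = np.array([1, 0, 1, 1, 0, 0, 1])
    bob = alice.copy()
    bob[4] ^= 1                                   # one transmission error
    diff = (syndrome(alice) + syndrome(bob)) % 2  # XOR of exchanged syndromes
    pos = diff[0] + 2 * diff[1] + 4 * diff[2]     # 1-based error position
    bob[pos - 1] ^= 1                             # Bob flips the located bit
    assert (alice == bob).all()
    ```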

  6. SIMulation of Medication Error induced by Clinical Trial drug labeling: the SIMME-CT study.

    Science.gov (United States)

    Dollinger, Cecile; Schwiertz, Vérane; Sarfati, Laura; Gourc-Berthod, Chloé; Guédat, Marie-Gabrielle; Alloux, Céline; Vantard, Nicolas; Gauthier, Noémie; He, Sophie; Kiouris, Elena; Caffin, Anne-Gaelle; Bernard, Delphine; Ranchon, Florence; Rioufol, Catherine

    2016-06-01

    To assess the impact of investigational drug labels on the risk of medication error in drug dispensing. A simulation-based learning program focusing on investigational drug dispensing was conducted. The study was undertaken in an Investigational Drugs Dispensing Unit of a University Hospital of Lyon, France. Sixty-three pharmacy workers (pharmacists, residents, technicians or students) were enrolled. Ten risk factors were selected concerning label information or the risk of confusion with another clinical trial. Each risk factor was scored independently out of 5: the higher the score, the greater the risk of error. Of the 400 labels analyzed, two groups were selected for the dispensing simulation: 27 labels with high risk (score ≥3) and 27 with low risk (score ≤2). Each question in the learning program was displayed as a simulated clinical trial prescription. Medication error was defined as at least one erroneous answer (i.e. error in drug dispensing). For each question, response times were collected. High-risk investigational drug labels correlated with medication error and slower response time. Error rates were significantly higher, by 5.5-fold, for the high-risk series. Error frequency was not significantly affected by occupational category or experience in clinical trials. SIMME-CT is the first simulation-based learning tool to focus on investigational drug labels as a risk factor for medication error. SIMME-CT was also used as a training tool for staff involved in clinical research, to develop medication error risk awareness and to validate competence in continuing medical education. © The Author 2016. Published by Oxford University Press in association with the International Society for Quality in Health Care; all rights reserved.

  7. THERP and HEART integrated methodology for human error assessment

    Science.gov (United States)

    Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio

    2015-11-01

    An integrated THERP-HEART methodology is proposed to investigate accident scenarios that involve operator errors during high-dose-rate (HDR) treatments. The new approach incorporates fuzzy set concepts with the aim of prioritizing an exhaustive list of erroneous tasks that can lead to patient radiological overexposures. The results allow for the identification of the human errors that must be understood to achieve a better picture of the health hazards in the radiotherapy treatment process, so that the process can be properly monitored and appropriately managed.

  8. Accounting for measurement error: a critical but often overlooked process.

    Science.gov (United States)

    Harris, Edward F; Smith, Richard N

    2009-12-01

    Due to instrument imprecision and human inconsistencies, measurements are not free of error. Technical error of measurement (TEM) is the variability encountered between dimensions when the same specimens are measured at multiple sessions. A goal of a data collection regimen is to minimise TEM. The few studies that actually quantify TEM, regardless of discipline, report that it is substantial and can affect results and inferences. This paper reviews some statistical approaches for identifying and controlling TEM. Statistically, TEM is part of the residual ('unexplained') variance in a statistical test, so accounting for TEM, which requires repeated measurements, enhances the chances of finding a statistically significant difference if one exists. The aim of this paper was to review and discuss common statistical designs relating to types of error and statistical approaches to error accountability. This paper addresses issues of landmark location, validity, technical and systematic error, analysis of variance, scaled measures and correlation coefficients in order to guide the reader towards correct identification of true experimental differences. Researchers commonly infer characteristics about populations from comparatively restricted study samples. Most inferences are statistical and, aside from concerns about adequate accounting for known sources of variation with the research design, an important source of variability is measurement error. Variability in locating landmarks that define variables is obvious in odontometrics, cephalometrics and anthropometry, but the same concerns about measurement accuracy and precision extend to all disciplines. With increasing accessibility to computer-assisted methods of data collection, the ease of incorporating repeated measures into statistical designs has improved. Accounting for this technical source of variation increases the chance of finding biologically true differences when they exist.
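
    As a concrete anchor for the review's point about repeated measurements, Dahlberg's formula is the classic estimate of TEM from two measurement sessions; the sample values below are hypothetical.

    ```python
    import math

    def technical_error_of_measurement(session1, session2):
        """Dahlberg's TEM for duplicate measurements of the same
        specimens: TEM = sqrt(sum(d_i^2) / (2n)), where d_i is the
        between-session difference for specimen i."""
        diffs = [a - b for a, b in zip(session1, session2)]
        return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

    # Hypothetical repeated crown-diameter measurements (mm):
    m1 = [8.4, 9.1, 7.8, 8.9, 8.2]
    m2 = [8.3, 9.3, 7.7, 8.8, 8.4]
    print(technical_error_of_measurement(m1, m2))  # ~0.10 mm
    ```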

  9. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
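
    The hypothesis test behind a claim like "a statistically significant difference between the amounts of errors" in two concept groups can be sketched as a chi-square test on a 2x2 contingency table; the counts here are hypothetical, not the paper's tallies.

    ```python
    from scipy.stats import chi2_contingency

    # Hypothetical QA counts: erroneous vs. clean concepts audited in
    # the complex and simple groups.
    table = [[45, 155],   # complex concepts: 45 of 200 had errors
             [22, 178]]   # simple concepts: 22 of 200 had errors
    chi2, p, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p={p:.4f}")  # small p: rates differ significantly
    ```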

  10. Patient identification and tube labelling

    DEFF Research Database (Denmark)

    van Dongen-Lases, Edmée C; Cornes, Michael P; Grankvist, Kjell

    2016-01-01

    of phlebotomy procedures with the CLSI H3-A6 guideline was unacceptably low, and that patient identification and tube labelling are amongst the most critical steps in need of immediate attention and improvement. The process of patient identification and tube labelling is an essential safety barrier to prevent patient identity mix-up. Therefore, the EFLM Working Group aims to encourage and support worldwide harmonisation of patient identification and tube labelling procedures in order to reduce the risk of preanalytical errors and improve patient safety. With this Position paper we wish to raise awareness and provide recommendations for proper patient and sample identification procedures.

  11. Exploring Senior Residents' Intraoperative Error Management Strategies: A Potential Measure of Performance Improvement.

    Science.gov (United States)

    Law, Katherine E; Ray, Rebecca D; D'Angelo, Anne-Lise D; Cohen, Elaine R; DiMarco, Shannon M; Linsmeier, Elyse; Wiegmann, Douglas A; Pugh, Carla M

    The study aim was to determine whether residents' error management strategies changed across 2 simulated laparoscopic ventral hernia (LVH) repair procedures after receiving feedback on their initial performance. We hypothesize that error detection and recovery strategies would improve during the second procedure without hands-on practice. Retrospective review of participant procedural performances of simulated laparoscopic ventral herniorrhaphy. A total of 3 investigators reviewed procedure videos to identify surgical errors. Errors were deconstructed. Error management events were noted, including error identification and recovery. Residents performed the simulated LVH procedures during a course on advanced laparoscopy. Participants had 30 minutes to complete a LVH procedure. After verbal and simulator feedback, residents returned 24 hours later to perform a different, more difficult simulated LVH repair. Senior (N = 7; postgraduate year 4-5) residents in attendance at the course participated in this study. In the first LVH procedure, residents committed 121 errors (M = 17.14, standard deviation = 4.38). Although the number of errors increased to 146 (M = 20.86, standard deviation = 6.15) during the second procedure, residents progressed further in the second procedure. There was no significant difference in the number of errors committed for both procedures, but errors shifted to the late stage of the second procedure. Residents changed the error types that they attempted to recover (χ²(5) = 24.96, p < …); recovery attempts increased for … errors, but decreased for strategy errors. Residents also recovered the most errors in the late stage of the second procedure (p < …). Residents' error management strategies changed between procedures following verbal feedback on their initial performance and feedback from the simulator. Errors and recovery attempts shifted to later steps during the second procedure. This may reflect residents' error management success in the earlier stages, which allowed further progression in the

  12. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates for future satellite communications and space observations since they are lightweight, low-cost, and compact when packaged. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors that negatively affect their performance. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (Root Mean Square) of the shape error is a random quantity with an exponential probability distribution and features great dispersion; with the increase of F/D and D, both the mean value and standard deviation of shape errors increase; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect, with a much higher weight than the others; pressure variation ranks second; errors in the thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors, and allowable values of error sources are proposed from the perspective of reliability.
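
    A compact sketch of the sampling-plus-regression workflow described: draw a Latin hypercube design over the error sources, evaluate a model, and rank inputs by standardized regression coefficients. The surrogate model and normalized ranges are placeholders standing in for the manufacture simulations.

    ```python
    import numpy as np
    from scipy.stats import qmc

    # Latin hypercube sample over four error sources, normalized to [0, 1]:
    # thickness, modulus, boundary deviation, pressure (placeholder ranges).
    sampler = qmc.LatinHypercube(d=4, seed=42)
    X = sampler.random(n=500)

    def rms_shape_error(x):
        # Placeholder surrogate in which boundary deviation dominates,
        # mirroring the paper's qualitative ranking.
        return 0.1*x[0] + 0.1*x[1] + 1.0*x[2] + 0.3*x[3] + 0.05*x[2]*x[3]

    y = np.apply_along_axis(rms_shape_error, 1, X)

    # Rank inputs by |standardized regression coefficient| as a simple
    # global sensitivity measure:
    A = np.column_stack([np.ones(len(y)), X])
    beta = np.linalg.lstsq(A, y, rcond=None)[0][1:]
    src = beta * X.std(axis=0) / y.std()
    print(np.argsort(-np.abs(src)))  # expected order: boundary, pressure, ...
    ```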

  13. Human error mode identification for NPP main control room operations using soft controls

    International Nuclear Information System (INIS)

    Lee, Seung Jun; Kim, Jaewhan; Jang, Seung-Cheol

    2011-01-01

    The operation environment of main control rooms (MCRs) in modern nuclear power plants (NPPs) has changed considerably over the years. Advanced MCRs, which have been designed by adapting digital and computer technologies, have simpler interfaces using large display panels, computerized displays, soft controls, computerized procedure systems, and so on. Actions for NPP operations are performed using soft controls in advanced MCRs. Soft controls have different features from conventional controls: operators need to navigate screens to find indicators and controls, and manipulate controls using a mouse, touch screens, and so on. Because of these different interfaces, different human errors should be considered in the human reliability analysis (HRA) for advanced MCRs. In this work, human errors that could occur during operation executions using soft controls were analyzed and classified into six types, and the factors that affect the occurrence of these human errors were also analyzed. (author)

  14. Enhancement of chemical entity identification in text using semantic similarity validation.

    Directory of Open Access Journals (Sweden)

    Tiago Grego

    Full Text Available With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to efficiently retrieve this information. To tackle this issue text mining tools have been applied, but despite their good performance they still produce many errors that we believe can be filtered out by using semantic similarity. Thus, this paper proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities also identified in the text. Then, by using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering out a significant number of correctly identified entities. This means that the method can effectively discriminate the correctly identified chemical entities, while discarding a significant number of identification errors. For example, selecting a validation set with 75% of all identified entities, we were able to increase the precision by 28% for one of the chemical entity identification tools (Whatizit), maintaining in that subset 97% of the correctly identified entities. Our method can be directly used as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve its results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/.
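
    A minimal sketch of the scoring idea follows. The entity names, similarity values and threshold are hypothetical; the paper computes similarities over ChEBI with ontology-based measures, whereas here a precomputed matrix stands in for them.

```python
# Minimal sketch of the validation idea (not the authors' exact scoring):
# each identified entity gets a score from its semantic similarity to the
# other entities found in the same text; low scorers are flagged as outliers.
import numpy as np

entities = ["glucose", "fructose", "ethanol", "zxq-17"]   # hypothetical hits
# Hypothetical pairwise ChEBI-based similarity values.
sim = np.array([
    [1.00, 0.82, 0.35, 0.05],
    [0.82, 1.00, 0.33, 0.04],
    [0.35, 0.33, 1.00, 0.06],
    [0.05, 0.04, 0.06, 1.00],
])

# Score: mean similarity to all *other* entities in the text.
n = len(entities)
scores = (sim.sum(1) - 1.0) / (n - 1)

threshold = 0.2   # assumed cut-off separating validated entities and outliers
for name, s in zip(entities, scores):
    label = "validated" if s >= threshold else "outlier"
    print(f"{name:10s} score={s:.2f} -> {label}")
```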

  15. Maximizing the sensitivity and reliability of peptide identification in large-scale proteomic experiments by harnessing multiple search engines.

    Science.gov (United States)

    Yu, Wen; Taylor, J Alex; Davis, Michael T; Bonilla, Leo E; Lee, Kimberly A; Auger, Paul L; Farnsworth, Chris C; Welcher, Andrew A; Patterson, Scott D

    2010-03-01

    Despite recent advances in qualitative proteomics, the automatic identification of peptides with optimal sensitivity and accuracy remains a difficult goal. To address this deficiency, a novel algorithm, Multiple Search Engines, Normalization and Consensus, is described. The method employs six search engines and a re-scoring engine to search MS/MS spectra against protein and decoy sequences. After the peptide hits from each engine are normalized to error rates estimated from the decoy hits, peptide assignments are deduced using a minimum consensus model. These assignments are produced in a series of progressively relaxed false-discovery rates, thus enabling a comprehensive interpretation of the data set. Additionally, the estimated false-discovery rate was found to have good concordance with the observed false-positive rate calculated from known identities. Benchmarking against standard protein data sets (ISBv1, sPRG2006) and their published analyses demonstrated that the Multiple Search Engines, Normalization and Consensus algorithm consistently achieved significantly higher sensitivity in peptide identifications, which led to increased or more robust protein identifications in all data sets compared with prior methods. The sensitivity and the false-positive rate of peptide identification exhibit an inverse-proportional and linear relationship with the number of participating search engines.
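
    The normalize-and-consensus idea can be illustrated as follows. This is a simplified sketch, not the published algorithm: each engine's raw ranking is reduced to decoy-estimated error rates, and a peptide is accepted when a minimum number of engines place it under the error-rate cut-off.

```python
# Simplified sketch of decoy-based normalization plus minimum consensus.
from collections import defaultdict

def decoy_fdr(sorted_hits):
    """sorted_hits: list of (peptide, is_decoy), best-scoring first.
    Returns {peptide: estimated FDR at that peptide's rank}."""
    fdr, targets, decoys = {}, 0, 0
    for pep, is_decoy in sorted_hits:
        decoys += is_decoy
        targets += not is_decoy
        fdr[pep] = decoys / max(targets, 1)
    return fdr

def consensus(engine_hits, fdr_cut=0.01, min_votes=2):
    """Keep target peptides that at least `min_votes` engines accept."""
    votes = defaultdict(int)
    for hits in engine_hits:                 # one ranked list per engine
        fdr = decoy_fdr(hits)
        for pep, is_decoy in hits:
            if not is_decoy and fdr[pep] <= fdr_cut:
                votes[pep] += 1
    return {pep for pep, v in votes.items() if v >= min_votes}

# Toy input: two engines, best-scoring first, decoy flags from a reversed DB.
e1 = [("PEPTIDER", False), ("QWERTYK", False), ("REVDECOY", True)]
e2 = [("PEPTIDER", False), ("REVDECOY", True), ("QWERTYK", False)]
print(consensus([e1, e2], fdr_cut=0.5, min_votes=2))
```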

  16. Application of Barcoding to Reduce Error of Patient Identification and to Increase Patient's Information Confidentiality of Test Tube Labelling in a Psychiatric Teaching Hospital.

    Science.gov (United States)

    Liu, Hsiu-Chu; Li, Hsing; Chang, Hsin-Fei; Lu, Mei-Rou; Chen, Feng-Chuan

    2015-01-01

    Learning from the experience of another medical center in Taiwan, Kaohsiung Municipal Kai-Suan Psychiatric Hospital has changed its nursing informatics system step by step over the past year and a half. Ethical considerations motivated the original idea of implementing barcodes on test tube labels to handle the identification of psychiatric patients. The main aims of this project are to maintain the confidentiality of patient information and to transport samples efficiently. The primary nurses used different work sheets for this project to ensure the acceptance of the new barcode system. In the past two years, errors in the blood testing process were as high as 11,000 in 14,000 events per year, resulting in wastage of resources. The actions taken by the nurses and the new barcode system implementation can improve clinical nursing care quality, patient safety, and efficiency, while decreasing the costs due to human error.

  17. A Bayesian approach for the stochastic modeling error reduction of magnetic material identification of an electromagnetic device

    International Nuclear Information System (INIS)

    Abdallh, A; Crevecoeur, G; Dupré, L

    2012-01-01

    Magnetic material properties of an electromagnetic device can be recovered by solving an inverse problem where measurements are adequately interpreted by a mathematical forward model. The accuracy of these forward models dramatically affects the accuracy of the material properties recovered by the inverse problem: the more accurate the forward model is, the more accurate the recovered data are. However, the more accurate 'fine' models demand a high computational time and large memory storage. Alternatively, less accurate 'coarse' models can be used, at the cost of higher expected recovery errors. This paper uses the Bayesian approximation error approach for improving the inverse problem results when coarse models are utilized. The proposed approach adapts the objective function to be minimized with the a priori misfit between fine and coarse forward model responses. In this paper, two different electromagnetic devices, namely a switched reluctance motor and an EI core inductor, are used as case studies. The proposed methodology is validated on both purely numerical and real experimental results. The results show a significant reduction in the recovery error within an acceptable computational time. (paper)
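
    The approach can be sketched as follows. The toy forward models, prior range and noise levels are assumptions, not the paper's models; the essential steps are estimating the mean and covariance of the fine-coarse misfit offline from prior samples, then folding them into the data misfit that the inversion minimizes.

```python
# Hedged sketch of the Bayesian approximation-error idea: the misfit between
# fine and coarse forward models is treated as a random variable whose mean
# and covariance, estimated offline, correct the coarse-model inversion.
import numpy as np

rng = np.random.default_rng(1)

def fine_model(mu):      # stand-in for an accurate but slow model
    return np.array([2.0 * mu + 0.05 * mu**2, np.sqrt(mu)])

def coarse_model(mu):    # stand-in for a fast approximate model
    return np.array([2.0 * mu, 0.9 * np.sqrt(mu)])

# Offline: sample the prior on the material parameter, collect the misfit.
prior = rng.uniform(0.5, 3.0, size=200)
eps = np.array([fine_model(m) - coarse_model(m) for m in prior])
eps_mean, eps_cov = eps.mean(0), np.cov(eps.T)

noise_cov = np.diag([1e-4, 1e-4])           # assumed measurement noise
W = np.linalg.inv(eps_cov + noise_cov)      # combined error weighting

def objective(mu, y_meas):
    # Coarse-model residual, corrected by the mean approximation error.
    r = y_meas - coarse_model(mu) - eps_mean
    return r @ W @ r

# Online: recover mu from data generated by the fine model.
y_meas = fine_model(1.7) + rng.normal(0, 1e-2, 2)
grid = np.linspace(0.5, 3.0, 1000)
mu_hat = grid[np.argmin([objective(m, y_meas) for m in grid])]
print(f"recovered mu = {mu_hat:.3f}")
```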

  18. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  19. A method for analysing incidents due to human errors on nuclear installations

    International Nuclear Information System (INIS)

    Griffon, M.

    1980-01-01

    This paper deals with the development of a methodology adapted to a detailed analysis of incidents considered to be due to human errors. An identification of human errors and a search for their possibly multiple causes is then needed. The causes are categorized into eight classes: education and training of personnel, installation design, work organization, time and work duration, physical environment, social environment, history of the plant and performance of the operator. The method is illustrated by the analysis of a handling incident generated by multiple human errors. (author)

  20. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes must be minimized.

  1. Improved Landau gauge fixing and discretisation errors

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.; Richards, D.G.; Williams, A.G.

    2000-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition displays the secondary benefit of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition.
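
    For concreteness, a hedged sketch of what such an improvement can look like (the exact functional and coefficients used in the paper are not quoted here; the weights below follow the standard 4/3 and -1/12 pattern of O(a²)-improved lattice derivatives, with u₀ the mean link):

```latex
% Standard one-link Landau gauge-fixing functional (maximized over gauge
% transformations g); its stationarity enforces the gauge condition up to O(a^2):
F_1[g] \;=\; \sum_{x,\mu} \mathrm{Re}\,\mathrm{tr}\!\left[\, g(x)\, U_\mu(x)\, g^\dagger(x+\hat\mu) \,\right]

% Sketch of an improved combination adding a two-link term so that the
% stationary point satisfies the gauge condition with O(a^2) errors removed:
F_{\mathrm{imp}}[g] \;=\; \sum_{x,\mu} \left\{ \frac{4}{3}\,\mathrm{Re}\,\mathrm{tr}\!\left[\, g(x)\,U_\mu(x)\,g^\dagger(x+\hat\mu) \,\right]
 \;-\; \frac{1}{12\,u_0}\,\mathrm{Re}\,\mathrm{tr}\!\left[\, g(x)\,U_\mu(x)\,U_\mu(x+\hat\mu)\,g^\dagger(x+2\hat\mu) \,\right] \right\}
```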

  2. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery, in order to identify gaps in medication-related information exchange that lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of the data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) design of medication charts, which complicates order processing and record keeping; (2) lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted communication bandwidth channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange.

  3. Crystalline lens power and refractive error.

    Science.gov (United States)

    Iribarren, Rafael; Morgan, Ian G; Nangia, Vinay; Jonas, Jost B

    2012-02-01

    To study the relationships between the refractive power of the crystalline lens, overall refractive error of the eye, and degree of nuclear cataract. All phakic participants of the population-based Central India Eye and Medical Study with an age of 50+ years were included. Calculation of the refractive lens power was based on distance noncycloplegic refractive error, corneal refractive power, anterior chamber depth, lens thickness, and axial length according to Bennett's formula. The study included 1885 subjects. Mean refractive lens power was 25.5 ± 3.0 D (range, 13.9-36.6). After adjustment for age and sex, the standardized correlation coefficients (β) of the association with ocular refractive error were highest for crystalline lens power (β = -0.41) and nuclear lens opacity grade (β = -0.42). A more myopic refractive error was associated with higher lens power (β = -0.95), lower corneal refractive power (β = -0.76), higher lens thickness (β = 0.30), deeper anterior chamber (β = 0.28), and less marked nuclear lens opacity (β = -0.05). Lens thickness was significantly lower in eyes with greater nuclear opacity. Variations in refractive error in adults aged 50+ years were mostly influenced by variations in axial length and in crystalline lens refractive power, followed by variations in corneal refractive power, and, to a minor degree, by variations in lens thickness and anterior chamber depth.
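
    A simplified, thin-lens version of this back-calculation can be written down directly from vergence propagation. This is a sketch only, not Bennett's full formula (which additionally uses lens thickness and principal-plane constants and therefore yields somewhat higher powers); the example biometry values are hypothetical.

```python
# Thin-lens sketch of back-calculating crystalline lens power from
# refraction and ocular biometry (NOT Bennett's exact thick-lens formula).
n_aqueous = 1.336          # refractive index of aqueous/vitreous

def lens_power(sph_err_D, corneal_power_D, acd_m, axial_len_m):
    """Approximate lens power (D); spherical error taken at the corneal plane."""
    # Vergence just behind the cornea for a corrected distant object:
    v_cornea = sph_err_D + corneal_power_D
    # Propagate through the anterior chamber (reduced distance acd/n):
    v_lens = v_cornea / (1.0 - (acd_m / n_aqueous) * v_cornea)
    # Vergence needed after the lens to focus on the retina:
    v_out = n_aqueous / (axial_len_m - acd_m)
    return v_out - v_lens

# Example: -1 D myope, K = 43 D, ACD = 3.1 mm, AL = 23.5 mm (assumed values);
# the thin-lens simplification underestimates the thick-lens result.
print(f"{lens_power(-1.0, 43.0, 0.0031, 0.0235):.1f} D")
```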

  4. Using lexical variables to predict picture-naming errors in jargon aphasia

    Directory of Open Access Journals (Sweden)

    Catherine Godbold

    2015-04-01

    Full Text Available Introduction Individuals with jargon aphasia produce fluent output which often comprises high proportions of non-word errors (e.g., maf for dog). Research has been devoted to identifying the underlying mechanisms behind such output. Some accounts posit a reduced flow of spreading activation between levels in the lexical network (e.g., Robson et al., 2003). If activation level differences across the lexical network are a cause of non-word outputs, we would predict improved performance when target items reflect an increased flow of activation between levels (e.g., more frequently-used words are often represented by higher resting levels of activation). This research investigates the effect of lexical properties of targets (e.g., frequency, imageability) on accuracy, error type (real word vs. non-word) and target-error overlap of non-word errors in a picture naming task by individuals with jargon aphasia. Method Participants were 17 individuals with Wernicke's aphasia, who produced a high proportion of non-word errors (>20% of errors) on the Philadelphia Naming Test (PNT; Roach et al., 1996). The data were retrieved from the Moss Aphasic Psycholinguistic Database Project (MAPPD; Mirman et al., 2010). We used a series of mixed models to test whether lexical variables predicted accuracy, error type (real word vs. non-word) and target-error overlap for the PNT data. As lexical variables tend to be highly correlated, we performed a principal components analysis to reduce the variables into five components representing variables associated with phonology (length, phonotactic probability, neighbourhood density and neighbourhood frequency), semantics (imageability and concreteness), usage (frequency and age-of-acquisition), name agreement and visual complexity. Results and Discussion Table 1 shows the components that made a significant contribution to each model. Individuals with jargon aphasia produced more correct responses and fewer non-word errors relative to
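
    The dimension-reduction step can be sketched as follows. The column names and data are illustrative stand-ins for the MAPPD lexical variables; the resulting component scores would then enter the mixed-effects models as predictors.

```python
# Sketch of collapsing correlated lexical variables into principal
# components before regression (variable names are illustrative).
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 175  # e.g., picture-naming items
items = pd.DataFrame({
    "length":        rng.integers(3, 12, n),
    "phonotactic_p": rng.random(n),
    "imageability":  rng.normal(5, 1, n),
    "concreteness":  rng.normal(5, 1, n),
    "frequency":     rng.lognormal(2, 1, n),
    "aoa":           rng.normal(6, 2, n),
})

# Standardize, extract components, inspect loadings to name the components.
Z = StandardScaler().fit_transform(items)
pca = PCA(n_components=3).fit(Z)
components = pca.transform(Z)      # scores usable as regression predictors
print(pca.explained_variance_ratio_.round(2))
loadings = pd.DataFrame(pca.components_.T, index=items.columns,
                        columns=["PC1", "PC2", "PC3"])
print(loadings.round(2))
```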

  5. Study on the methodology for predicting and preventing errors to improve reliability of maintenance task in nuclear power plant

    International Nuclear Information System (INIS)

    Hanafusa, Hidemitsu; Iwaki, Toshio; Embrey, D.

    2000-01-01

    The objective of this study was to develop an effective methodology for predicting and preventing errors in nuclear power plant maintenance tasks. A method was established by which chief maintenance personnel can predict and reduce errors when reviewing maintenance procedures, while referring to maintenance support systems and methods in other industries, including the aviation and chemical plant industries. The method involves the following seven steps: 1. Identification of maintenance tasks. 2. Specification of important tasks affecting safety. 3. Assessment of human errors occurring during important tasks. 4. Identification of performance degrading factors. 5. Dividing important tasks into sub-tasks. 6. Extraction of errors using Predictive Human Error Analysis (PHEA). 7. Development of strategies for reducing errors and for recovering from errors. By way of a trial, this method was applied to the pump maintenance procedure in nuclear power plants. This method is believed to be capable of identifying the expected errors in important tasks and supporting the development of error reduction measures. By applying this method, the number of accidents resulting from human errors during maintenance can be reduced. Moreover, a computer-based maintenance support system was developed. (author)

  6. Self-reported medical, medication and laboratory error in eight countries: risk factors for chronically ill adults.

    Science.gov (United States)

    Scobie, Andrea

    2011-04-01

    To identify risk factors associated with self-reported medical, medication and laboratory error in eight countries. The Commonwealth Fund's 2008 International Health Policy Survey of chronically ill patients in eight countries. None. A multi-country telephone survey was conducted between 3 March and 30 May 2008 with patients in Australia, Canada, France, Germany, the Netherlands, New Zealand, the UK and the USA who self-reported being chronically ill. A bivariate analysis was performed to determine significant explanatory variables of medical, medication and laboratory error. The following risk factors were associated with self-reported error: age 65 and under, education level of some college or less, presence of two or more chronic conditions, high prescription drug use (four or more drugs), four or more doctors seen within 2 years, a care coordination problem, poor doctor-patient communication and use of an emergency department. Risk factors with the greatest ability to predict experiencing an error encompassed issues with coordination of care and provider knowledge of a patient's medical history. The identification of these risk factors could help policymakers and organizations to proactively reduce the likelihood of error through greater examination of system- and organization-level practices.

  7. [Prospective assessment of medication errors in critically ill patients in a university hospital].

    Science.gov (United States)

    Salazar L, Nicole; Jirón A, Marcela; Escobar O, Leslie; Tobar, Eduardo; Romero, Carlos

    2011-11-01

    Critically ill patients are especially vulnerable to medication errors (ME) due to their severe clinical situation and the complexities of their management. The aims were to determine the frequency and characteristics of ME and to identify shortcomings in the processes of medication management in an intensive care unit. During a 3-month period, a prospective, randomized observational study was carried out in the ICU of a university hospital. Every step of the patients' medication management (prescription, transcription, dispensation, preparation and administration) was evaluated by an external trained professional. The steps with the highest frequency of ME and the therapeutic groups involved were identified. Medication errors were classified according to the National Coordinating Council for Medication Error Reporting and Prevention. In 52 of 124 patients evaluated, 66 ME were found among 194 drugs prescribed. For 34% of prescribed drugs, there was at least 1 ME during their use. Half of the ME occurred during medication administration, mainly due to problems in infusion rates and schedule times. Antibacterial drugs had the highest rate of ME. We found a 34% rate of ME per drug prescribed, which is in concordance with international reports. The identification of the steps most prone to ME in the ICU will allow the implementation of an intervention program to improve the quality and safety of medication management.

  8. Mass measurement errors of Fourier-transform mass spectrometry (FTMS): distribution, recalibration, and application.

    Science.gov (United States)

    Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu

    2009-02-01

    The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
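
    The LDSF idea can be illustrated with a toy example (this is not the FTDR implementation; the error model, offset and cut-offs below are assumptions): search with a wide tolerance, estimate the systematic offset and spread from high-confidence hits, recalibrate, and filter the remainder with the statistical MET.

```python
# Illustrative LDSF-style recalibration and filtering on toy ppm errors.
import numpy as np

rng = np.random.default_rng(3)

# Toy PSMs: true hits with a 2 ppm systematic bias, plus random matches.
good = rng.normal(loc=2.0, scale=1.5, size=300)
bad = rng.uniform(-20, 20, size=60)
ppm = np.concatenate([good, bad])
# Stand-in for "high-confidence search results" used to estimate statistics.
high_conf = np.concatenate([np.abs(good) < 4, np.zeros(60, bool)])

# Recalibrate: subtract the systematic offset estimated from confident hits,
# then set a statistical MET (3 SD) and filter everything else with it.
mu = ppm[high_conf].mean()
sd = ppm[high_conf].std()
recal = ppm - mu
met = 3 * sd
accepted = np.abs(recal) <= met
print(f"offset={mu:.2f} ppm, MET=+/-{met:.2f} ppm, "
      f"accepted {accepted.sum()}/{len(ppm)} PSMs")
```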

  9. Learning Data Set Influence on Identification Accuracy of Gas Turbine Neural Network Model

    Science.gov (United States)

    Kuznetsov, A. V.; Makaryants, G. M.

    2018-01-01

    Many studies have addressed gas turbine engine identification via dynamic neural network models; the identification process should minimize errors between the model and the real object. Questions about the processing of neural network training data sets are usually overlooked. This article presents a study of the influence of data set type on the accuracy of a gas turbine neural network model. The identification object is a thermodynamic model of a micro gas turbine engine. The thermodynamic model's input signal is the fuel consumption and its output signal is the engine rotor rotation frequency. Four types of input signal were used for creating the training and testing data sets of the dynamic neural network models: step, fast, slow and mixed. Four dynamic neural networks were created based on these types of training data sets. Each neural network was tested using the four types of test data sets. As a result, 16 transition processes from the four neural networks and four test data sets were compared against the corresponding solutions of the thermodynamic model. Errors were compared across all neural networks for each test data set, yielding the error value ranges of each test data set. The error value ranges are small; therefore, the influence of data set type on identification accuracy is low.
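
    The core experimental loop can be sketched as follows (the engine dynamics, network size and signal definitions are toy stand-ins, not the study's thermodynamic model): train a lagged-input (NARX-style) network on one excitation type and evaluate it on all four, giving the 16 train/test combinations.

```python
# Toy sketch: influence of training-signal type on a dynamic NN model.
import numpy as np
from sklearn.neural_network import MLPRegressor

def engine(u):
    """Toy first-order 'thermodynamic model': fuel flow -> rotor speed."""
    y = np.zeros_like(u)
    for k in range(1, len(u)):
        y[k] = 0.9 * y[k - 1] + 0.1 * np.sqrt(np.abs(u[k - 1]))
    return y

def make_xy(u, y, lags=3):
    """NARX-style regressors: past inputs and outputs -> next output."""
    X = np.column_stack([np.roll(u, i) for i in range(1, lags + 1)] +
                        [np.roll(y, i) for i in range(1, lags + 1)])
    return X[lags:], y[lags:]

t = np.linspace(0, 50, 2000)
signals = {
    "step": (t > 10).astype(float),
    "slow": 0.5 + 0.5 * np.sin(0.1 * t),
    "fast": 0.5 + 0.5 * np.sin(2.0 * t),
}
signals["mixed"] = np.concatenate([v[:666] for v in signals.values()])

results = {}
for train_name, u_tr in signals.items():
    X, y = make_xy(u_tr, engine(u_tr))
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                       random_state=0).fit(X, y)
    for test_name, u_te in signals.items():
        Xt, yt = make_xy(u_te, engine(u_te))
        results[(train_name, test_name)] = np.sqrt(
            np.mean((net.predict(Xt) - yt) ** 2))
for k, v in results.items():   # 4 networks x 4 test sets = 16 comparisons
    print(k, f"RMSE={v:.4f}")
```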

  10. Refractive errors in children and adolescents in Bucaramanga (Colombia).

    Science.gov (United States)

    Galvis, Virgilio; Tello, Alejandro; Otero, Johanna; Serrano, Andrés A; Gómez, Luz María; Castellanos, Yuly

    2017-01-01

    The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. The frequency of refractive errors we found (36.7%) is moderate compared to global data. The rates and parameters differed statistically by sex and age group. Our findings are useful for establishing refractive error rate benchmarks in low- and middle-income countries and as a baseline for following their variation by sociodemographic factors.

  11. Refractive errors in children and adolescents in Bucaramanga (Colombia

    Directory of Open Access Journals (Sweden)

    Virgilio Galvis

    Full Text Available ABSTRACT Purpose: The aim of this study was to establish the frequency of refractive errors in children and adolescents aged between 8 and 17 years old, living in the metropolitan area of Bucaramanga (Colombia). Methods: This study was a secondary analysis of two descriptive cross-sectional studies that applied sociodemographic surveys and assessed visual acuity and refraction. Ametropias were classified as myopic errors, hyperopic errors, and mixed astigmatism. Eyes were considered emmetropic if none of these classifications were made. The data were collated using free software and analyzed with STATA/IC 11.2. Results: One thousand two hundred twenty-eight individuals were included in this study. Girls showed a higher rate of ametropia than boys. Hyperopic refractive errors were present in 23.1% of the subjects, and myopic errors in 11.2%. Only 0.2% of the eyes had high myopia (≤-6.00 D). Mixed astigmatism and anisometropia were uncommon, and myopia frequency increased with age. There were statistically significant steeper keratometric readings in myopic compared to hyperopic eyes. Conclusions: The frequency of refractive errors that we found of 36.7% is moderate compared to the global data. The rates and parameters statistically differed by sex and age groups. Our findings are useful for establishing refractive error rate benchmarks in low-middle-income countries and as a baseline for following their variation by sociodemographic factors.

  12. Poor vision, refractive errors and barriers to treatment among ...

    African Journals Online (AJOL)

    Poor vision, refractive errors and barriers to treatment among commercial vehicle drivers in the Cape Coast municipality. ... were also administered to the participants to collect demographic data, history of driving and RTAs and utilization of eye care services as well as identification of the colours of the traffic light. Results: A ...

  13. A higher order depletion perturbation theory with application to in-core fuel management optimization

    International Nuclear Information System (INIS)

    Kropaczek, D.J.; Turinsky, P.J.

    1990-01-01

    Perturbation techniques utilized in reactor analysis have recently been applied to the solution of the in-core nuclear fuel management optimization problem. The use of such methods is motivated by the need to evaluate, many times over, the core physics characteristics of loading pattern solutions obtained through an optimization process, which is typically iterative. Perturbation theory provides an efficient alternative to the prohibitively expensive, repetitive solutions of the system few-group neutron diffusion equations required in solving the fuel placement problem. A primary concern in the use of such methods is the control of perturbation errors arising during the fuel shuffling process. First-order accurate models inevitably resort to undue restriction of fuel movement during the optimization process to control these errors. Higher order perturbation theory models have the potential to overcome such limitations, which may determine whether local or global optima are identified. An accurate, computationally efficient reactor physics model based on higher order perturbation theory and geared toward the needs of large-scale in-core fuel management optimization is presented in this paper.
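
    For orientation, the textbook first-order estimate that higher-order schemes extend is shown below (this is the generic form, not the paper's model). With the diffusion eigenproblem Mφ = (1/k)Fφ and adjoint flux φ†, first order ignores the flux perturbation δφ, which is exactly what limits its accuracy for large fuel shuffles; higher-order theory retains terms in δφ.

```latex
% First-order perturbation estimate for the multiplication factor of
% M\phi = (1/k)\,F\phi (textbook form, using the adjoint flux \phi^\dagger):
\delta\!\left(\frac{1}{k}\right) \;\approx\;
  \frac{\left\langle \phi^\dagger,\ \left(\delta M - \tfrac{1}{k}\,\delta F\right)\phi \right\rangle}
       {\left\langle \phi^\dagger,\ F\,\phi \right\rangle}
```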

  14. Blood sample collection and patient identification demand improvement: a questionnaire study of preanalytical practices in hospital wards and laboratories.

    Science.gov (United States)

    Wallin, Olof; Söderberg, Johan; Van Guelpen, Bethany; Stenlund, Hans; Grankvist, Kjell; Brulin, Christine

    2010-09-01

    Scand J Caring Sci; 2010; 24; 581-591. Most errors in venous blood testing result from human mistakes occurring before the sample reaches the laboratory. The aim was to survey venous blood sampling (VBS) practices in hospital wards and to compare practices with those of hospital laboratories. Staff in two hospitals (all wards) and two hospital laboratories (314 respondents, response rate 94%) completed a questionnaire addressing issues relevant to the collection of venous blood samples for clinical chemistry testing. The findings suggest that instructions for patient identification and the collection of venous blood samples were not always followed. For example, 79% of the respondents reported the undesirable practice (UDP) of not always using wristbands for patient identification. Similarly, 87% of the respondents noted the UDP of removing venous stasis after the sampling is finished. Compared with the ward staff, a significantly higher proportion of the laboratory staff reported desirable practices regarding the collection of venous blood samples. Neither education nor the existence of established sampling routines was clearly associated with VBS practices among the ward staff. The results of this study, the first of its kind, suggest that a clinically important risk of error is associated with VBS in the surveyed wards. Most important is the risk of misidentification of patients. Quality improvement of blood sample collection is clearly needed, particularly in hospital wards. © 2009 The Authors. Journal compilation © 2009 Nordic College of Caring Science.

  15. Learning from your mistakes: The functional value of spontaneous error monitoring in aphasia

    Directory of Open Access Journals (Sweden)

    Erica L. Middleton

    2014-04-01

    Ex. 4. (T = umbrella → "umbelella, umbrella": phonological error; DetCorr). We used mixed effects logistic regression to assess whether the log odds of changing from error to correct was predicted by monitoring status of the error (DetCorr vs. NoDet; DetNoCorr vs. NoDet); whether the monitoring benefit interacted with direction of change (forward, backward); and whether effects varied by error type. Figure 1 (top) shows that the proportion of accuracy change was higher for DetCorr relative to NoDet, consistent with a monitoring benefit. The difference in log odds was significant for semantic errors in both directions (forward: coeff. = -1.73; z = -7.78; p < .001; backward: coeff. = -0.92; z = -3.60; p < .001), and for phonological errors in both directions (forward: coeff. = -0.74; z = -2.73; p = .006; backward: coeff. = -0.76; z = -2.73; p = .006). The difference between DetNoCorr and NoDet was not significant in any condition. Figure 1 (bottom) shows that for semantic errors, there was a directional asymmetry favoring the forward condition (interaction: coeff. = 0.79; z = 2.32; p = .02). Phonological errors, in contrast, produced comparable effects in the forward and backward directions. The results demonstrated a benefit for errors that were detected and corrected. This monitoring benefit was present in both the forward and backward direction, supporting the Strength hypothesis. Of greatest interest, the monitoring benefit for semantic errors was greater in the forward than backward direction, indicating a role for learning.

  16. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking
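
    The essence of BATEA can be conveyed with a toy sampler. Everything below, from the one-parameter runoff model to the lognormal multiplier prior and the noise levels, is an illustrative assumption, not the authors' code: storm-by-storm rainfall multipliers are treated as latent variables and sampled jointly with the model parameter.

```python
# Toy sketch of joint inference over a model parameter and latent rainfall
# multipliers (random-walk Metropolis; all models and priors are assumed).
import numpy as np

rng = np.random.default_rng(4)

rain_true = np.array([10.0, 25.0, 5.0, 40.0])       # true storm depths (mm)
rain_obs = rain_true * rng.lognormal(0.0, 0.2, 4)   # sparse-gauge estimates

def runoff(theta, rain):
    """Toy one-parameter hydrologic model: runoff coefficient * rainfall."""
    return theta * rain

q_obs = runoff(0.6, rain_true) + rng.normal(0.0, 0.5, 4)

def log_post(theta, mult):
    # Latent multipliers correct the uncertain forcing: rain = rain_obs * mult.
    if theta <= 0.0 or np.any(mult <= 0.0):
        return -np.inf
    resid = q_obs - runoff(theta, rain_obs * mult)
    return (-0.5 * np.sum(resid**2) / 0.5**2            # response error model
            - 0.5 * np.sum(np.log(mult)**2) / 0.2**2)   # lognormal prior

theta, mult = 1.0, np.ones(4)
lp = log_post(theta, mult)
samples = []
for _ in range(20000):
    th_p = theta + 0.05 * rng.standard_normal()
    m_p = mult + 0.05 * rng.standard_normal(4)
    lp_p = log_post(th_p, m_p)
    if np.log(rng.random()) < lp_p - lp:                # Metropolis accept
        theta, mult, lp = th_p, m_p, lp_p
    samples.append(theta)
print(f"posterior mean runoff coefficient: {np.mean(samples[5000:]):.3f}")
```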

  17. Testing the algorithms for automatic identification of errors on the measured quantities of the nuclear power plant. Verification tests

    International Nuclear Information System (INIS)

    Svatek, J.

    1999-12-01

    During the development and implementation of supporting software for the control room and emergency control centre at the Dukovany nuclear power plant it appeared necessary to validate the input quantities in order to assure operating reliability of the software tools. Therefore, the development of software for validation of the measured quantities of the plant data sources was initiated, and the software had to be debugged and verified. The report contains the proposal for and description of the verification tests for testing the algorithms of automatic identification of errors on the observed quantities of the NPP by means of homemade validation software. In particular, the algorithms treated serve the validation of the hot leg temperature at primary circuit loop no. 2 or 4 at the Dukovany-2 reactor unit using data from the URAN and VK3 information systems, recorded during 3 different days. (author)

  18. Optics measurement algorithms and error analysis for the proton energy frontier

    Directory of Open Access Journals (Sweden)

    A. Langner

    2015-03-01

    Full Text Available Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; due to the improved algorithms, the derived optical parameters are significantly more precise, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding emittance evolution during the energy ramp.

  19. Optics measurement algorithms and error analysis for the proton energy frontier

    Science.gov (United States)

    Langner, A.; Tomás, R.

    2015-03-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions (β*). A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is demonstrated to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed; due to the improved algorithms, the derived optical parameters are significantly more precise, with average error bars decreased by a factor of three to four. This allowed the calculation of β* values and proved fundamental to understanding emittance evolution during the energy ramp.

  20. Speak Up: Help Prevent Errors in Your Care: Ambulatory Care

    Science.gov (United States)

    ... Your Care Ambulatory Care To prevent health care errors, patients are urged to... SpeakUp™ Everyone has a ... he or she has confused you with another patient. Pay attention to the ... for their identification (ID) badges. • Notice whether your caregivers have washed ...

  1. Influence of errors in prescriptions on the security of medicine

    Directory of Open Access Journals (Sweden)

    Puke K.

    2016-01-01

    Full Text Available All types of medication errors, including missed doses, incorrect dosage forms, time intervals, and routes, are essential encumbrances for qualitative pharmaceutical care and the safe use of medicines [1]. Problems related to prescription errors are common in the healthcare profession, and are responsible for significant increases in costs, morbidity and mortality [2]. The aim of the study was to analyze the common errors in prescriptions received in pharmacies and their effect on the safe use of medicines. A retrospective study was conducted between December 2013 and January 2014 in a pharmacy in Riga, Latvia. Prescriptions were analyzed to identify errors in the Inscriptio, Praescriptio and Signatura parts. Of 200 prescriptions, only 14 (7%) were filled correctly according to the legislative requirements in Latvia. The most common drug therapeutic class in the prescriptions was non-steroidal anti-inflammatory drugs (NSAIDs) and other analgesics (21.1%). Unclear handwriting was observed in more than one third of all studied prescriptions (n=72; 36.0%). Mean age values of physicians were higher, but not significantly different, for the unclear compared to the clear prescriptions (59.5 ± 8.5 vs. 57.8 ± 10.6, respectively; p=0.253). Omission of the quantity of drug in the prescription was the most frequent type of error (n=112, 56.0%). A high level of incorrect prescriptions was found during the study period in the pharmacy. Overall, approximately 27% of prescriptions had significant failures, which could negatively affect the therapeutic effect and safety of drug use.

  2. Advancing the research agenda for diagnostic error reduction.

    Science.gov (United States)

    Zwaan, Laura; Schiff, Gordon D; Singh, Hardeep

    2013-10-01

    Diagnostic errors remain an underemphasised and understudied area of patient safety research. We briefly summarise the methods that have been used to conduct research on epidemiology, contributing factors and interventions related to diagnostic error and outline directions for future research. Research methods that have studied epidemiology of diagnostic error provide some estimate on diagnostic error rates. However, there appears to be a large variability in the reported rates due to the heterogeneity of definitions and study methods used. Thus, future methods should focus on obtaining more precise estimates in different settings of care. This would lay the foundation for measuring error rates over time to evaluate improvements. Research methods have studied contributing factors for diagnostic error in both naturalistic and experimental settings. Both approaches have revealed important and complementary information. Newer conceptual models from outside healthcare are needed to advance the depth and rigour of analysis of systems and cognitive insights of causes of error. While the literature has suggested many potentially fruitful interventions for reducing diagnostic errors, most have not been systematically evaluated and/or widely implemented in practice. Research is needed to study promising intervention areas such as enhanced patient involvement in diagnosis, improving diagnosis through the use of electronic tools and identification and reduction of specific diagnostic process 'pitfalls' (eg, failure to conduct appropriate diagnostic evaluation of a breast lump after a 'normal' mammogram). The last decade of research on diagnostic error has made promising steps and laid a foundation for more rigorous methods to advance the field.

  3. The Influence of Guided Error-Based Learning on Motor Skills Self-Efficacy and Achievement.

    Science.gov (United States)

    Chien, Kuei-Pin; Chen, Sufen

    2018-01-01

    The authors investigated the role of errors in motor skills teaching, specifically the influence of errors on skills self-efficacy and achievement. The participants were 75 undergraduate students enrolled in pétanque courses. The experimental group (guided error-based learning, n = 37) received a 6-week period of instruction based on the students' errors, whereas the control group (correct motion instruction, n = 38) received a 6-week period of instruction emphasizing correct motor skills. The experimental group had significantly higher scores in motor skills self-efficacy and outcomes than did the control group. Novices' errors reflect their schema in motor skills learning, which provides a basis for instructors to implement student-centered instruction and to facilitate the learning process. Guided error-based learning can effectively enhance beginners' skills self-efficacy and achievement in precision sports such as pétanque.

  4. Influence of the signer's psychophysiological state on the results of his identification using handwritten pattern by natural and artificial intelligence

    Directory of Open Access Journals (Sweden)

    Alexey E. Sulavko

    2017-11-01

    Full Text Available At present, while various mechanisms to ensure information security are actively being improved, particular attention is paid to preventing unauthorized access to information resources. The human factor and the processes of identification and user authentication still remain the most problematic. Progress in the technology of protecting information resources from internal security threats is paving the way towards biometric systems for hidden identification of computer users and their psychophysiological state. A change in psychophysiological state is reflected in the person's handwriting. The influence of the signer's state of fatigue and excitation on the results of identification from reproduced signatures, both by humans and by pattern recognition methods, is studied. The capabilities of human and artificial intelligence are compared under equal conditions. When the state of the signer changes, the probability of erroneous recognition by artificial intelligence increases by a factor of 3.3 to 3.7. A person identifies a handwritten image with fewer errors when the signer is agitated, and with a higher error rate when the signer is tired.

  5. Self-assessment of human performance errors in nuclear operations

    International Nuclear Information System (INIS)

    Chambliss, K.V.

    1996-01-01

    One of the most important approaches to improving nuclear safety is to have an effective self-assessment process in place, whose cornerstone is the identification and improvement of human performance errors. Experience has shown that significant events usually have had precursors of human performance errors. If these precursors are left uncorrected or not understood, the symptoms recur and result in unanticipated events of greater safety significance. The Institute of Nuclear Power Operations (INPO) has been championing the cause of promoting excellence in human performance in the nuclear industry. INPO's report, "Excellence in Human Performance," emphasizes the importance of several factors that play a role in human performance. They include individual, supervisory, and organizational behaviors; real-time feedback that results in specific behavior to produce safe and reliable performance; and proactive measures that remove obstacles from excellent human performance. Zack Pate, chief executive officer and president of INPO, in his report, "The Control Room," provides an excellent discussion of serious events in the nuclear industry since 1994 and compares them with the results from a recent study by the National Transportation Safety Board of airline accidents in the 12-yr period from 1978 to 1990 to draw some common themes that relate to human performance issues in the control room.

  6. Speak Up: Help Prevent Errors in Your Care: Home Care

    Science.gov (United States)

    ... Your Care Home Care To prevent health care errors, patients are urged to... SpeakUp™ Everyone has a ... you think they have confused you with another patient. Pay attention to the care you ... for their identification (ID) badges. • Make sure you or family members ...

  7. Addressee Errors in ATC Communications: The Call Sign Problem

    Science.gov (United States)

    Monan, W. P.

    1983-01-01

    Communication errors involving aircraft call signs were portrayed in reports of 462 hazardous incidents voluntarily submitted to the ASRS during an approximately four-year period. These errors resulted in confusion, disorder, and uncoordinated traffic conditions and produced the following types of operational anomalies: altitude deviations, wrong-way headings, aborted takeoffs, go-arounds, runway incursions, missed crossing altitude restrictions, descents toward high terrain, and traffic conflicts in flight and on the ground. Analysis of the report set resulted in identification of five categories of errors involving call signs: (1) faulty radio usage techniques, (2) call sign loss or smearing due to frequency congestion, (3) confusion resulting from similar sounding call signs, (4) airmen misses of call signs leading to failures to acknowledge or readback, and (5) controller failures regarding confirmation of acknowledgements or readbacks. These error categories are described in detail and several associated hazard-mitigating measures that might be taken are considered.

  8. Proportion of medication error reporting and associated factors among nurses: a cross sectional study.

    Science.gov (United States)

    Jember, Abebaw; Hailu, Mignote; Messele, Anteneh; Demeke, Tesfaye; Hassen, Mohammed

    2018-01-01

    A medication error (ME) is any preventable event that may cause or lead to inappropriate medication use or patient harm. Voluntary reporting has a principal role in appreciating the extent and impact of medication errors. Thus, exploration of the proportion of medication error reporting and associated factors among nurses is important to inform service providers and program implementers so as to improve the quality of healthcare services. An institution-based quantitative cross-sectional study was conducted among 397 nurses from March 6 to May 10, 2015. Stratified sampling followed by a simple random sampling technique was used to select the study participants. The data were collected using a structured self-administered questionnaire which was adapted from studies conducted in Australia and Jordan. A pilot study was carried out to validate the questionnaire before data collection for this study. Bivariate and multivariate logistic regression models were fitted to identify factors associated with the proportion of medication error reporting among nurses. An adjusted odds ratio with 95% confidence interval was computed to determine the level of significance. The proportion of medication error reporting among nurses was found to be 57.4%. Regression analysis showed that sex, marital status, having made a medication error and medication error experience were significantly associated with medication error reporting. The proportion of medication error reporting among nurses in this study was found to be higher than in other studies.
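
    The final analysis step is common to several studies in this list. A generic sketch follows (variable names and data are illustrative, not the study's data set): fit a multivariate logistic model and convert the coefficients to adjusted odds ratios with 95% confidence intervals.

```python
# Generic sketch: adjusted odds ratios (AOR) with 95% CIs from a
# multivariate logistic regression (illustrative simulated data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 397
df = pd.DataFrame({
    "female":     rng.integers(0, 2, n),
    "married":    rng.integers(0, 2, n),
    "made_error": rng.integers(0, 2, n),
})
# Simulate the binary outcome (reported an error) from a known model.
logit = 0.3 * df["female"] + 0.8 * df["made_error"] - 0.5
df["reported"] = rng.random(n) < 1 / (1 + np.exp(-logit))

X = sm.add_constant(df[["female", "married", "made_error"]].astype(float))
fit = sm.Logit(df["reported"].astype(float), X).fit(disp=0)

# Exponentiate coefficients and confidence bounds to get AORs.
aor = pd.DataFrame({
    "AOR":   np.exp(fit.params),
    "2.5%":  np.exp(fit.conf_int()[0]),
    "97.5%": np.exp(fit.conf_int()[1]),
}).drop("const")
print(aor.round(2))
```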

  9. Identification of pleasant, neutral, and unpleasant odors in schizophrenia.

    Science.gov (United States)

    Kamath, Vidyulata; Turetsky, Bruce I; Moberg, Paul J

    2011-05-15

    Recent work on odor hedonics in schizophrenia has indicated that patients display abnormalities in hedonic judgments of odors in comparison to healthy comparison participants. In the current study, identification accuracy for pleasant, neutral, and unpleasant odors in individuals with schizophrenia and healthy controls was examined. Thirty-three schizophrenia patients (63% male) and thirty-one healthy volunteers (65% male) were recruited. The groups were well matched on age, sex, and smoking status. Participants were administered the University of Pennsylvania Smell Identification Test, which was subsequently divided into 16 pleasant, 15 neutral, and 9 unpleasant items. Analysis of identification z-scores for pleasant, neutral, and unpleasant odors revealed a significant diagnosis by valence interaction. Post-hoc analysis revealed that schizophrenia participants made more identification errors on pleasant and neutral odors compared to healthy controls, with no differences observed for unpleasant odors. No effect was seen for sex. The findings from the current investigation suggest that odor identification accuracy in patients is influenced by odor valence. This pattern of results parallels a growing body of literature indicating that patients display aberrant pleasantness ratings for pleasant odors and highlights the need for additional research on the influence of odor valence on olfactory identification performance in individuals with schizophrenia. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. Some problems and errors in cytogenetic biodosimetry

    International Nuclear Information System (INIS)

    Mosse, Irma; Kilchevsky, Alexander; Nikolova, Nevena; Zhelev, Nikolai

    2017-01-01

    Human radiosensitivity is a quantitative trait that is generally subject to binomial distribution. Individual radiosensitivity, however, may deviate significantly from the mean (by 2-3 standard deviations). Thus, the same dose of radiation may result in different levels of genotoxic damage (commonly measured as chromosome aberration rates) in different individuals. There is a significant genetic component in individual radiosensitivity. It is related to carriage of variant alleles of various single-nucleotide polymorphisms (most of these in genes coding for proteins functioning in DNA damage identification and repair); carriage of a different number of alleles producing cumulative effects; amplification of gene copies coding for proteins responsible for radioresistance; mobile genetic elements, and others. Among the other factors influencing individual radioresistance are: the radioadaptive response; the bystander effect; the levels of endogenous substances with radioprotective and antimutagenic properties; and environmental factors such as lifestyle and diet, physical activity, psycho-emotional state, hormonal state, certain drugs, infections and others. These factors may have radioprotective or sensitizing effects. Apparently, there are too many factors that may significantly modulate the biological effects of ionizing radiation. Thus, conventional methodologies for biodosimetry (specifically, cytogenetic methods) may produce significant errors if personal traits that may affect radioresistance are not accounted for.

  11. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    Science.gov (United States)

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and that is important to avoid to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: cross-sectional; setting: population-based. A sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea. The participants were 4,634 drivers who completed all questionnaires (response rate 84.6%). Interventions: none. Measurements: the Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Of the 4,634 train drivers, 349 (7.5%) reported more than one human error per 5 years. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, or work stress after adjusting for age, sex, years of education, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  12. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), in those aged 40-50 years (67.6%), in less-experienced personnel (58.7%), in those with an educational level of MSc (87.5%), and in staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component of patient safety enhancement.

  13. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    International Nuclear Information System (INIS)

    Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-01-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, ''A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

  14. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  15. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Effects of image enhancement on reliability of landmark identification in digital cephalometry

    Directory of Open Access Journals (Sweden)

    M Oshagh

    2013-01-01

    Full Text Available Introduction: Although digital cephalometric radiography is gaining popularity in orthodontic practice, the most important source of error in its tracing is uncertainty in landmark identification. Therefore, efforts to improve accuracy in landmark identification have been directed primarily toward improvement in image quality. One of the more useful techniques in this process is digital image enhancement, which can increase the overall visual quality of the image, but this does not necessarily mean better identification of landmarks. The purpose of this study was to evaluate the effectiveness of digital image enhancement on the reliability of landmark identification. Materials and Methods: Fifteen common landmarks, including 10 skeletal and 5 soft-tissue points, were selected on the cephalograms of 20 randomly selected patients, prepared in Natural Head Position (NHP). Two observers (orthodontists) identified the landmarks on the 20 original photostimulable phosphor (PSP) digital cephalogram images and 20 enhanced digital images twice, with an intervening time interval of at least 4 weeks. The x and y coordinates were further analyzed to evaluate the pattern of recording differences in horizontal and vertical directions. Reliability of landmark identification was analyzed by paired t test. Results: There was a significant difference between original and enhanced digital images in the reliability of points Ar and N in the vertical and horizontal dimensions, with enhanced images significantly more reliable than original images. Identification of A point, Pogonion and Pronasale in the vertical dimension of enhanced images was significantly more reliable than in the originals. Reliability of Menton identification in the horizontal dimension was greater in enhanced images than in originals. Conclusion: Direct digital image enhancement by altering brightness and contrast can increase the reliability of some landmark identification, and this may lead to more

  17. Simple Sample Preparation Method for Direct Microbial Identification and Susceptibility Testing From Positive Blood Cultures.

    Science.gov (United States)

    Pan, Hong-Wei; Li, Wei; Li, Rong-Guo; Li, Yong; Zhang, Yi; Sun, En-Hua

    2018-01-01

    Rapid identification and determination of the antibiotic susceptibility profiles of the infectious agents in patients with bloodstream infections are critical steps in choosing an effective targeted antibiotic for treatment. However, there has been minimal effort focused on developing combined methods for the simultaneous direct identification and antibiotic susceptibility determination of bacteria in positive blood cultures. In this study, we constructed a lysis-centrifugation-wash procedure to prepare a bacterial pellet from positive blood cultures, which can be used directly for identification by matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) and antibiotic susceptibility testing by the Vitek 2 system. The method was evaluated using a total of 129 clinical bacteria-positive blood cultures. The whole sample preparation process could be completed in <15 min. The correct rate of direct MALDI-TOF MS identification was 96.49% for gram-negative bacteria and 97.22% for gram-positive bacteria. Vitek 2 antimicrobial susceptibility testing of gram-negative bacteria showed an agreement rate of antimicrobial categories of 96.89%, with minor error, major error, and very major error rates of 2.63, 0.24, and 0.24%, respectively. Category agreement of antimicrobials against gram-positive bacteria was 92.81%, with minor error, major error, and very major error rates of 4.51, 1.22, and 1.46%, respectively. These results indicated that our direct antibiotic susceptibility analysis method worked well compared to the conventional culture-dependent laboratory method. Overall, this fast, easy, and accurate method can facilitate the direct identification and antibiotic susceptibility testing of bacteria in positive blood cultures.

  18. The effectiveness of risk management program on pediatric nurses' medication error.

    Science.gov (United States)

    Dehghan-Nayeri, Nahid; Bayat, Fariba; Salehi, Tahmineh; Faghihzadeh, Soghrat

    2013-09-01

    Medication therapy is one of the most complex and high-risk clinical processes that nurses deal with. Medication error is the most common type of error that brings damage and death to patients, especially pediatric ones. However, these errors are preventable. Identifying and preventing undesirable events leading to medication errors are the main risk management activities. The aim of this study was to investigate the effectiveness of a risk management program on the pediatric nurses' medication error rate. This study is a quasi-experimental one with a comparison group. In this study, 200 nurses were recruited from two main pediatric hospitals in Tehran. In the experimental hospital, we applied the risk management program for a period of 6 months. Nurses of the control hospital followed the hospital's routine schedule. A pre- and post-test was performed to measure the frequency of medication error events. SPSS software, the t-test, and regression analysis were used for data analysis. After the intervention, the medication error rate of nurses at the experimental hospital was significantly lower, and the error-reporting rate significantly higher, than at the control hospital. In the medical environment, applying quality-control programs such as risk management can effectively prevent the occurrence of undesirable hospital events. Nursing managers can reduce the medication error rate by applying risk management programs. However, this program cannot succeed without nurses' cooperation.

  19. Identification of significant features by the Global Mean Rank test.

    Science.gov (United States)

    Klammer, Martin; Dybowski, J Nikolaj; Hoffmann, Daniel; Schaab, Christoph

    2014-01-01

    With the introduction of omics-technologies such as transcriptomics and proteomics, numerous methods for the reliable identification of significantly regulated features (genes, proteins, etc.) have been developed. Experimental practice requires these tests to successfully deal with conditions such as small numbers of replicates, missing values, non-normally distributed expression levels, and non-identical distributions of features. With the MeanRank test, we aimed to develop a test that performs robustly under these conditions while scaling favorably with the number of replicates. The test proposed here is a global one-sample location test, which is based on the mean ranks across replicates, and internally estimates and controls the false discovery rate. Furthermore, missing data are accounted for without the need for imputation. In extensive simulations comparing MeanRank to other frequently used methods, we found that it performs well with small and large numbers of replicates, feature-dependent variance between replicates, and variable regulation across features on simulation data and a recent two-color microarray spike-in dataset. The tests were then used to identify significant changes in the phosphoproteomes of cancer cells induced by the kinase inhibitors erlotinib and 3-MB-PP1 in two independently published mass spectrometry-based studies. MeanRank outperformed the other global rank-based methods applied in this study. Compared to the popular Significance Analysis of Microarrays and Linear Models for Microarray methods, MeanRank performed similarly or better. Furthermore, MeanRank exhibits more consistent behavior regarding the degree of regulation and is robust against the choice of preprocessing methods. MeanRank does not require any imputation of missing values, is easy to understand, and yields results that are easy to interpret. The software implementing the algorithm is freely available for academic and commercial use.
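
    As a rough illustration of the mean-rank idea, a simplified sketch in Python (not the published software: it assumes complete data, uses a sign-flip permutation null, and leaves multiple-testing control to a separate step rather than the internal FDR estimate described above):

        # Simplified mean-rank location test: rows are features (log-ratios),
        # columns are replicates; H0 is symmetry of each feature's values
        # around zero. This is a sketch, not the authors' implementation.
        import numpy as np
        from scipy import stats

        def mean_rank_test(data, n_perm=2000, seed=0):
            rng = np.random.default_rng(seed)
            n_feat = data.shape[0]
            center = (n_feat + 1) / 2.0                      # expected mean rank under H0
            obs = stats.rankdata(data, axis=0).mean(axis=1)  # mean rank per feature
            exceed = np.zeros(n_feat)
            for _ in range(n_perm):
                flipped = data * rng.choice([-1.0, 1.0], size=data.shape)
                null = stats.rankdata(flipped, axis=0).mean(axis=1)
                exceed += np.abs(null - center) >= np.abs(obs - center)
            pvals = (exceed + 1) / (n_perm + 1)              # two-sided permutation p
            return obs, pvals

    The resulting p-values could then be fed to any standard false discovery rate procedure, such as Benjamini-Hochberg.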

  20. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    Science.gov (United States)

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
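
    A minimal sketch of the probability calculus behind such fault trees, assuming independent basic events; the gates, factors and probabilities below are hypothetical and are not taken from the ten reviewed cases:

        # Top-event probability from AND/OR gates over independent basic events.
        def p_and(*ps):                 # AND gate: every input event occurs
            out = 1.0
            for p in ps:
                out *= p
            return out

        def p_or(*ps):                  # OR gate: at least one input event occurs
            out = 1.0
            for p in ps:
                out *= (1.0 - p)
            return 1.0 - out

        # Hypothetical contributing factors for a missed diagnosis:
        p_history_not_taken = 0.05
        p_test_not_ordered = 0.10
        p_result_not_followed_up = 0.02
        p_knowledge_gap = 0.03

        p_top = p_or(p_or(p_history_not_taken, p_test_not_ordered),
                     p_and(p_result_not_followed_up, p_knowledge_gap))
        print(f"P(diagnostic error) = {p_top:.4f}")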

  1. Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing

    Directory of Open Access Journals (Sweden)

    Wadim L. Matochko

    2013-01-01

    Full Text Available Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low-abundance clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N×1 frequency vector n = (n_i), where n_i is the copy number of the i-th sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation of the library is an operator acting on n. Selection, amplification, or sequencing can be described as a product of an N×N matrix and a stochastic sampling operator (Sa). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of Sa and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = Sa·I_N, where I_N is the N×N identity matrix. Any bias in sequencing changes I_N to a non-identity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process.
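
    A toy numerical sketch of this operator framework (the diversity, copy numbers, sequencing depth and censorship factor below are illustrative; a real 7-mer peptide library has N = 20^7):

        # Library as frequency vector n; sampling operator Sa as a multinomial
        # draw; censorship matrix CEN as a diagonal downweighting of reads.
        import numpy as np

        rng = np.random.default_rng(1)
        N = 8                                            # toy diversity
        n = rng.integers(1, 100, size=N).astype(float)   # copy numbers n_i

        def sample_operator(vec, depth, rng):
            # Sa: draw `depth` reads with probabilities proportional to vec
            return rng.multinomial(depth, vec / vec.sum()).astype(float)

        cen = np.ones(N)
        cen[2] = 0.1                        # sequence 2 is censored in sequencing
        reads = sample_operator(cen * n, depth=500, rng=rng)   # Seq ~ Sa . CEN
        print(n, reads, sep="\n")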

  2. Identification of inhaler technique errors with a routine procedure in Portuguese community pharmacy

    Directory of Open Access Journals (Sweden)

    Castel-Branco MM

    2017-12-01

    Full Text Available Background: A correct selection of the drugs prescribed, but also the choice of the appropriate inhaler device, is crucial for the control of respiratory diseases. Objective: To evaluate inhaler technique and identify potential errors of patients treated with inhalers, by testing a routine procedure that could be implemented in any community pharmacy. Methods: Adults with asthma/COPD under inhalation therapy were invited to demonstrate how they use their inhalers. After direct observation, it was recorded whether all the sequential steps included in the summary of product characteristics (SmPC) were performed. Results: The study involved 67 patients from 4 community pharmacies (Portugal central region): 34 (50.7%) males, mean age 65.4 (SD=18.28) years, 42 (62.7%) with COPD, and 23 (34.3%) using more than one inhaler. The 67 patients used 95 inhalers, comprising 57 (60.0%) multiple-dose DPIs (dry powder inhalers), 18 (18.9%) single-dose DPIs, 16 (16.8%) pMDIs (pressurized metered dose inhalers), 2 (2.1%) pMDI+spacer and 2 (2.1%) SMIs (soft mist inhalers). Only 9 (13.4%) patients made no errors. In the 75 DPI techniques, the most frequent errors were ‘no previous forced expiration’ (46; 61.3%) and ‘no 10 s apnea after inhalation’ (51; 68.0%); in the 16 pMDI techniques, common errors were ‘lack of hand-lung coordination’ (7; 43.8%), ‘no previous forced exhalation’ (8; 50.0%) and ‘no apnea after inhalation’ (10; 62.5%). After inhaling from the 56 devices containing corticosteroids, 34 (60.7%) of the patients did not wash their mouth. Conclusion: The study demonstrated the feasibility of performing this procedure routinely in Portuguese community pharmacies and also its utility, since 58 (87%) of patients made at least one error during inhaler use.

  3. Maximum Likelihood Approach for RFID Tag Set Cardinality Estimation with Detection Errors

    DEFF Research Database (Denmark)

    Nguyen, Chuyen T.; Hayashi, Kazunori; Kaneko, Megumi

    2013-01-01

    Estimation schemes for Radio Frequency IDentification (RFID) tag set cardinality are studied in this paper using a Maximum Likelihood (ML) approach. We consider the estimation problem under the model of multiple independent reader sessions with detection errors due to unreliable radio ... is evaluated under different system parameters and compared with that of the conventional method via computer simulations assuming flat Rayleigh fading environments and a framed-slotted ALOHA based protocol. Keywords: RFID; tag cardinality estimation; maximum likelihood; detection error ...
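
    A sketch of the ML idea for a single error-free reader session under framed-slotted ALOHA (the paper's estimator additionally models detection errors across multiple sessions; the slot counts below are hypothetical, and slot outcomes are approximated as independent):

        # ML estimate of tag count from empty/singleton/collision slot counts.
        import math

        def log_lik(n, L, n_empty, n_single):
            # each of n tags independently picks one of L slots
            p_empty = (1 - 1 / L) ** n
            p_single = n / L * (1 - 1 / L) ** (n - 1)
            p_coll = max(1e-12, 1 - p_empty - p_single)
            n_coll = L - n_empty - n_single
            return (n_empty * math.log(max(p_empty, 1e-12))
                    + n_single * math.log(max(p_single, 1e-12))
                    + n_coll * math.log(p_coll))

        def ml_estimate(L, n_empty, n_single, n_max=5000):
            return max(range(1, n_max), key=lambda n: log_lik(n, L, n_empty, n_single))

        print(ml_estimate(L=128, n_empty=40, n_single=55))   # hypothetical frame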

  4. New DOI identification approach for high-resolution PET detectors

    International Nuclear Information System (INIS)

    Choghadi, Amin; Takahashi, Hiroyuki; Shimazoe, Kenji

    2016-01-01

    Depth-of-interaction (DOI) identification in positron emission tomography (PET) detectors is gaining importance as it improves spatial resolution in both conventional and time-of-flight (TOF) PET, and coincidence time resolution (CTR) in TOF-PET. In both detector types, spatial resolution is degraded by the parallax error caused by the length of the scintillator crystals. This length also contributes substantial timing uncertainty to the time resolution of TOF-PET. Through DOI identification, both the parallax error and the timing uncertainty caused by the crystal length can be resolved. In this work, a novel approach to estimating DOI was investigated, exploiting the overlap of the absorbance spectrum of scintillator crystals with their emission spectrum. Because the absorption length is close to zero for the shorter wavelengths of the crystal emission spectrum, the counts in this range of the spectrum depend strongly on DOI; that is, higher counts correspond to deeper interactions. The ratio of counts in this range to the total counts is a good measure for estimating DOI. In order to extract this ratio, two photodetectors are used for each crystal, and an optical filter is mounted on top of only one of them. The ratio of the filtered output to the non-filtered output can be utilized as a DOI estimator. For a 2×2×20 mm³ GAGG:Ce scintillator, an 8-mm DOI resolution was achieved in our simulations. (author)

  5. Identification of Error Terms in Regional Water Budgets

    Science.gov (United States)

    Saxe, S.; Farmer, W. H.; Hay, L.

    2017-12-01

    A water balance requires that storage increase in a watershed or some specified domain whenever the inputs exceed the outputs. A simple model will assign the change in groundwater storage to the change in storage (ΔS), consider only precipitation (P) as input, and consider only actual evapotranspiration (AET) and runoff (Q) as outputs. However, for each component (ΔS, P, AET, and Q) there are errors in measurements and uncertainty in the conceptual and numerical models used to extrapolate these measurements to a wide range of temporal and spatial scales. This talk will present our attempt to close the water budget for all regions of the nation using moderate-resolution estimates of ΔS, P, AET, and Q. For each component, a variety of gridded products were compiled and aggregated at the Hydrologic Unit Code (HUC)-2 (hundreds of thousands of square kilometers) through HUC-8 (hundreds of square kilometers) levels. For each region, a water balance was computed to estimate changes in storage and assign levels of confidence to these estimates. For a simple baseline budget calculated for regions across the continental United States, using estimates of ΔS, P, AET, and Q over a 13-year period from 2002 to 2015, the storage of water is increasing in the southwestern United States at the same time that it is decreasing in the Pacific Northwest, the northern Rockies, the Midwest, and most of the east coast. Whereas ΔS shows statistically significant increasing or decreasing trends (depending on region) over the 13-year period, the trends are not always supported by the corresponding estimates of P, AET, and Q. This simple water budget excludes several potentially important components, including soil moisture, recharge, snow-water equivalent, etc., that may account for the discrepancy. This research finds that there are substantial volumes of water missing from a simple, baseline water budget and that inclusion of more components may be necessary to close the budget.
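
    A minimal sketch of the budget-closure check described above, with illustrative annual depths in millimetres rather than values from the study:

        # Residual of the simple budget: the fluxes imply a storage change that
        # should match the independently estimated dS; a large gap flags error
        # in one or more components (or a missing component).
        P, AET, Q, dS = 900.0, 550.0, 280.0, 30.0   # illustrative mm/yr

        implied_dS = P - AET - Q
        missing = implied_dS - dS
        print(f"implied dS = {implied_dS:.0f} mm, estimated dS = {dS:.0f} mm, "
              f"unaccounted water = {missing:.0f} mm")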

  6. Discretisation errors in Landau gauge on the lattice

    International Nuclear Information System (INIS)

    Bonnet, Frederic D.R.; Bowman, Patrick O.; Leinweber, Derek B.; Williams, Anthony G.; Richards, David G.

    1999-01-01

    Lattice discretization errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition improves comparison with continuum Landau gauge in two ways: (1) through the elimination of O(a²) errors and (2) through a secondary effect of reducing the size of higher-order errors. These results emphasize the importance of implementing an improved gauge fixing condition.

  7. Visual Antipriming Effect: Evidence from Chinese Character Identification

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    2017-10-01

    Full Text Available Marsolek et al. (2006) differentiated antipriming effects from priming effects by adopting a novel priming paradigm comprising four phases, including a baseline measurement. The general concept of antipriming supports the overlapping representation theory of knowledge. This study extended examination of the Marsolek et al. (2006) paradigm by investigating antipriming and priming effects in a series of Chinese character identification tasks. Results showed that identification accuracy of old characters was significantly higher than baseline measurements (i.e., the priming effect), while identification accuracy of novel characters was significantly lower than baseline measurements (i.e., the antipriming effect). This study demonstrates for the first time the effect of visual antipriming in Chinese character identification. It further provides new evidence for the overlapping representation theory of knowledge, and supports generalizability of the phenomenon to Chinese characters.

  8. Customization of user interfaces to reduce errors and enhance user acceptance.

    Science.gov (United States)

    Burkolter, Dina; Weyers, Benjamin; Kluge, Annette; Luther, Wolfram

    2014-03-01

    Customization is assumed to reduce error and increase user acceptance in the human-machine relation. Reconfiguration gives the operator the option to customize a user interface according to his or her own preferences. An experimental study with 72 computer science students using a simulated process control task was conducted. The reconfiguration group (RG) interactively reconfigured their user interfaces and used the reconfigured user interface in the subsequent test whereas the control group (CG) used a default user interface. Results showed significantly lower error rates and higher acceptance of the RG compared to the CG while there were no significant differences between the groups regarding situation awareness and mental workload. Reconfiguration seems to be promising and therefore warrants further exploration. Copyright © 2013 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Mobbing, Organizational Identification, and Perceived Support: Evidence from a Higher Education Institution

    Science.gov (United States)

    Coskuner, Selda; Costur, Recai; Bayhan-Karapinar, Pinar; Metin-Camgoz, Selin; Ceylan, Savas; Demirtas-Zorbaz, Selen; Aktas, Emine Feyza; Ciffiliz, Gonca

    2018-01-01

    Purpose: The aim of the current study is twofold. First, it investigates the relationship between mobbing and organizational identification (OI) as an organizational attitude. Second, it explores the moderating effect of perceived organizational support (POS) on the relationship between mobbing and organizational identification. We proposed that…

  10. The Preisach hysteresis model: Error bounds for numerical identification and inversion

    Czech Academy of Sciences Publication Activity Database

    Krejčí, Pavel

    2013-01-01

    Roč. 6, č. 1 (2013), s. 101-119 ISSN 1937-1632 R&D Projects: GA ČR GAP201/10/2315 Institutional support: RVO:67985840 Keywords : hysteresis * Preisach model * error bounds Subject RIV: BA - General Mathematics http://www.aimsciences.org/journals/displayArticlesnew.jsp?paperID=7779

  11. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    Full Text Available This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  12. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors carry a higher possibility of an unfair decision, they should receive more attention than false-negative errors.

  13. Teamwork and clinical error reporting among nurses in Korean hospitals.

    Science.gov (United States)

    Hwang, Jee-In; Ahn, Jeonghoon

    2015-03-01

    To examine levels of teamwork and its relationships with clinical error reporting among Korean hospital nurses. The study employed a cross-sectional survey design. We distributed a questionnaire to 674 nurses in two teaching hospitals in Korea. The questionnaire included items on teamwork and the reporting of clinical errors. We measured teamwork using the Teamwork Perceptions Questionnaire, which has five subscales including team structure, leadership, situation monitoring, mutual support, and communication. Using logistic regression analysis, we determined the relationships between teamwork and error reporting. The response rate was 85.5%. The mean score of teamwork was 3.5 out of 5. At the subscale level, mutual support was rated highest, while leadership was rated lowest. Of the participating nurses, 522 responded that they had experienced at least one clinical error in the last 6 months. Among those, only 53.0% responded that they always or usually reported clinical errors to their managers and/or the patient safety department. Teamwork was significantly associated with better error reporting. Specifically, nurses with a higher team communication score were more likely to report clinical errors to their managers and the patient safety department (odds ratio = 1.82, 95% confidence intervals [1.05, 3.14]). Teamwork was rated as moderate and was positively associated with nurses' error reporting performance. Hospital executives and nurse managers should make substantial efforts to enhance teamwork, which will contribute to encouraging the reporting of errors and improving patient safety. Copyright © 2015. Published by Elsevier B.V.

  14. Key Concept Identification: A Comprehensive Analysis of Frequency and Topical Graph-Based Approaches

    Directory of Open Access Journals (Sweden)

    Muhammad Aman

    2018-05-01

    Full Text Available Automatic key concept extraction from text is a central challenge in information extraction, information retrieval and digital libraries, ontology learning, and text analysis. Statistical frequency and topical graph-based ranking are two potentially powerful and leading kinds of unsupervised approaches in this area, devised to address the problem. To utilize the potential of these approaches and improve key concept identification, a comprehensive performance analysis of these approaches on datasets from different domains is needed. The objective of the study presented in this paper is to perform a comprehensive empirical analysis of selected frequency and topical graph-based algorithms for key concept extraction on three different datasets, in order to identify the major sources of error in these approaches. For the experimental analysis, we selected TF-IDF, KP-Miner and TopicRank. Three major sources of error, i.e., frequency errors, syntactical errors and semantic errors, and the factors that contribute to these errors were identified. Analysis of the results reveals that the performance of the selected approaches is significantly degraded by these errors. These findings can help develop an intelligent solution for key concept extraction in the future.
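
    As a rough illustration of the frequency-based end of this comparison, a TF-IDF ranking sketch using scikit-learn (the corpus is illustrative; KP-Miner and TopicRank add candidate filtering and graph construction not shown here):

        # Rank n-gram candidates of one document by TF-IDF weight.
        from sklearn.feature_extraction.text import TfidfVectorizer

        docs = [
            "fault tree analysis of diagnostic error",
            "key concept extraction from scientific text collections",
            "graph based ranking for concept extraction",
        ]
        vec = TfidfVectorizer(ngram_range=(1, 2), stop_words="english")
        tfidf = vec.fit_transform(docs)

        scores = tfidf[1].toarray().ravel()          # weights for document 2
        terms = vec.get_feature_names_out()
        for s, t in sorted(zip(scores, terms), reverse=True)[:5]:
            if s > 0:
                print(f"{t}: {s:.3f}")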

  15. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions of a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation, and especially not to the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  16. Discretisation errors in Landau gauge on the lattice

    International Nuclear Information System (INIS)

    Bonnet, F.D.R.; Bowman, P.O.; Leinweber, D.B.

    1999-01-01

    Lattice discretisation errors in the Landau gauge condition are examined. An improved gauge fixing algorithm in which O(a²) errors are removed is presented. O(a²) improvement of the gauge fixing condition improves comparison with the continuum Landau gauge in two ways: (1) through the elimination of O(a²) errors and (2) through a secondary effect of reducing the size of higher-order errors. These results emphasise the importance of implementing an improved gauge fixing condition. Copyright (1999) CSIRO Australia

  17. Simple Sample Preparation Method for Direct Microbial Identification and Susceptibility Testing From Positive Blood Cultures

    Directory of Open Access Journals (Sweden)

    Hong-wei Pan

    2018-03-01

    Full Text Available Rapid identification and determination of the antibiotic susceptibility profiles of the infectious agents in patients with bloodstream infections are critical steps in choosing an effective targeted antibiotic for treatment. However, there has been minimal effort focused on developing combined methods for the simultaneous direct identification and antibiotic susceptibility determination of bacteria in positive blood cultures. In this study, we constructed a lysis-centrifugation-wash procedure to prepare a bacterial pellet from positive blood cultures, which can be used directly for identification by matrix-assisted laser desorption/ionization-time-of-flight mass spectrometry (MALDI-TOF MS) and antibiotic susceptibility testing by the Vitek 2 system. The method was evaluated using a total of 129 clinical bacteria-positive blood cultures. The whole sample preparation process could be completed in <15 min. The correct rate of direct MALDI-TOF MS identification was 96.49% for gram-negative bacteria and 97.22% for gram-positive bacteria. Vitek 2 antimicrobial susceptibility testing of gram-negative bacteria showed an agreement rate of antimicrobial categories of 96.89%, with minor error, major error, and very major error rates of 2.63, 0.24, and 0.24%, respectively. Category agreement of antimicrobials against gram-positive bacteria was 92.81%, with minor error, major error, and very major error rates of 4.51, 1.22, and 1.46%, respectively. These results indicated that our direct antibiotic susceptibility analysis method worked well compared to the conventional culture-dependent laboratory method. Overall, this fast, easy, and accurate method can facilitate the direct identification and antibiotic susceptibility testing of bacteria in positive blood cultures.

  18. Using a higher criticism statistic to detect modest effects in a genome-wide study of rheumatoid arthritis

    Science.gov (United States)

    2009-01-01

    In high-dimensional studies such as genome-wide association studies, the correction for multiple testing needed to control the total type I error results in decreased power to detect modest effects. We present a new analytical approach based on the higher criticism statistic that allows identification of the presence of modest effects. We apply our method to the genome-wide study of rheumatoid arthritis provided in the Genetic Analysis Workshop 16 Problem 1 data set. There is evidence for unknown bias in this study that could be explained by the presence of undetected modest effects. We compared the asymptotic and empirical thresholds for the higher criticism statistic. Using the asymptotic threshold we detected the presence of modest effects genome-wide. We also detected modest effects using the 90th percentile of the empirical null distribution as a threshold; however, there was no such evidence when the 95th and 99th percentiles were used. While the higher criticism method suggests that there is some evidence for modest effects, interpreting individual single-nucleotide polymorphisms with significant higher criticism statistics is of undetermined value. The goal of higher criticism is to alert the researcher that genetic effects remain to be discovered and to promote the use of more targeted and powerful studies to detect the remaining effects. PMID:20018032
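
    A sketch of the higher criticism statistic itself, in the Donoho-Jin form over the smaller half of the sorted p-values (the study's asymptotic and empirical-null thresholds are not reproduced; the inputs are simulated):

        # HC = max over i of sqrt(n) * (i/n - p_(i)) / sqrt(p_(i) * (1 - p_(i)))
        import numpy as np

        def higher_criticism(pvals, alpha0=0.5):
            p = np.sort(np.asarray(pvals))
            n = p.size
            i = np.arange(1, n + 1)
            hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
            return hc[: max(1, int(alpha0 * n))].max()

        rng = np.random.default_rng(0)
        null_p = rng.uniform(size=10000)                           # no signal
        mixed_p = np.concatenate([null_p,
                                  rng.uniform(0, 1e-3, size=50)])  # few modest effects
        print(higher_criticism(null_p), higher_criticism(mixed_p))

    A markedly larger value for the mixed sample signals that some effects remain to be discovered, even though no single p-value need survive a genome-wide correction.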

  19. Patient and Sample Identification. Out of the Maze?

    Directory of Open Access Journals (Sweden)

    Lippi Giuseppe

    2017-04-01

    Full Text Available Background: Patient and sample misidentification may cause significant harm or discomfort to patients, especially when incorrect data are used for performing specific healthcare activities. It is hence obvious that efficient and high-quality care can only start from accurate patient identification. There are many opportunities for misidentification in healthcare and laboratory medicine, including homonymy, incorrect patient registration, reliance on wrong patient data, mistakes in order entry, collection of biological specimens from the wrong patients, inappropriate sample labeling, and inaccurate entry or erroneous transmission of test results through the laboratory information system. Many ongoing efforts are being made to prevent this important healthcare problem, entailing streamlined strategies for identifying patients throughout the healthcare industry by means of traditional and innovative identifiers, as well as technologic tools that may enhance both the quality and efficiency of blood tube labeling. The aim of this article is to provide an overview of the liability of identification errors in healthcare, thus providing a pragmatic approach for finding a way out of the so-called patient identification crisis.

  20. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean: 1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean: 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean: 1.2 errors/case), all of which were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  1. Indirect inference with time series observed with error

    DEFF Research Database (Denmark)

    Rossi, Eduardo; Santucci de Magistris, Paolo

    We analyze the properties of the indirect inference estimator when the observed series are contaminated by measurement error. We show that the indirect inference estimates are asymptotically biased when the nuisance parameters of the measurement error distribution are neglected in the indirect estimation. We propose to solve this inconsistency by jointly estimating the nuisance and the structural parameters. Under standard assumptions, this estimator is consistent and asymptotically normal. A condition for the identification of ARMA plus noise is obtained. The proposed methodology is used to estimate the parameters of continuous-time stochastic volatility models with auxiliary specifications based on realized volatility measures. Monte Carlo simulations show the bias reduction of the indirect estimates obtained when the microstructure noise is explicitly modeled. Finally, an empirical ...

  2. The Sustained Influence of an Error on Future Decision-Making.

    Science.gov (United States)

    Schiffler, Björn C; Bengtsson, Sara L; Lundqvist, Daniel

    2017-01-01

    Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants' response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters' role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.
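
    A toy drift-diffusion simulation of the threshold adjustment reported above, with illustrative rather than fitted parameters, showing the speed-accuracy trade-off of a raised post-error threshold:

        # Simulate a drift-diffusion process to two symmetric bounds.
        import numpy as np

        def simulate_ddm(drift, threshold, n_trials=2000, dt=0.001, noise=1.0, seed=0):
            rng = np.random.default_rng(seed)
            rts, correct = [], []
            for _ in range(n_trials):
                x, t = 0.0, 0.0
                while abs(x) < threshold:
                    x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
                    t += dt
                rts.append(t)
                correct.append(x > 0)          # upper bound = correct response
            return np.mean(rts), np.mean(correct)

        for label, a in [("baseline threshold", 0.8), ("post-error threshold", 1.1)]:
            rt, acc = simulate_ddm(drift=1.5, threshold=a)
            print(f"{label}: mean RT = {rt:.3f} s, accuracy = {acc:.3f}")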

  3. The Sustained Influence of an Error on Future Decision-Making

    Directory of Open Access Journals (Sweden)

    Björn C. Schiffler

    2017-06-01

    Full Text Available Post-error slowing (PES) is consistently observed in decision-making tasks after negative feedback. Yet, findings are inconclusive as to whether PES supports performance accuracy. We addressed the role of PES by employing drift diffusion modeling which enabled us to investigate latent processes of reaction times and accuracy on a large-scale dataset (>5,800 participants) of a visual search experiment with emotional face stimuli. In our experiment, post-error trials were characterized by both adaptive and non-adaptive decision processes. An adaptive increase in participants’ response threshold was sustained over several trials post-error. Contrarily, an initial decrease in evidence accumulation rate, followed by an increase on the subsequent trials, indicates a momentary distraction of task-relevant attention and resulted in an initial accuracy drop. Higher values of decision threshold and evidence accumulation on the post-error trial were associated with higher accuracy on subsequent trials which further gives credence to these parameters’ role in post-error adaptation. Finally, the evidence accumulation rate post-error decreased when the error trial presented angry faces, a finding suggesting that the post-error decision can be influenced by the error context. In conclusion, we demonstrate that error-related response adaptations are multi-component processes that change dynamically over several trials post-error.

  4. Control of human error and comparison of risk levels after corrective action with the SHERPA method in a control room of the petrochemical industry

    Directory of Open Access Journals (Sweden)

    A. Zakerian

    2011-12-01

    Full Text Available Background and aims: Today, in many settings such as the nuclear, military and chemical industries, human errors may result in disaster; accidents around the world emphasize this point, for example the Chernobyl disaster (1986), the Three Mile Island accident (1979) and the Flixborough explosion (1974). Identification of human errors, especially in important and intricate systems, is therefore necessary and unavoidable for devising predictive and control methods. Methods: This research is a case study performed in the Zagross Methanol Company in Asalouye (South Pars). A walking-talking-through method with process experts and control room operators, together with inspection of technical documents, was used to collect the required information and complete the Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: Analysis of the SHERPA worksheets indicated that 71.25% of identified errors were unacceptable, 26.75% were undesirable, 2% were acceptable with revision and 0% were acceptable; the forecast risk levels after corrective action were 0% unacceptable, 4.35% undesirable, 58.55% acceptable with revision and 37.1% acceptable. Conclusion: The conclusion drawn is that this method is applicable and useful in different industries, especially chemical industries, for identifying human errors that may lead to accidents.

  5. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of any human action that should have been performed but did not occur. Errors of commission are those related to human actions that should not be performed but in fact are. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that enter an operational task without being part of the normal sequence of that task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the operational performance of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of THERP tables for error quantification. (author)

  6. Comparing Absolute Error with Squared Error for Evaluating Empirical Models of Continuous Variables: Compositions, Implications, and Consequences

    Science.gov (United States)

    Gao, J.

    2014-12-01

    model with higher bias but more stability. Further, my experiments showed that the two metrics can and will lead to different conclusions about the impacts of certain operations and may suggest different strategies for model improvement. Therefore, SQ error is a poor substitute for ABS error, and the use of the two metrics should be clearly differentiated.
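
    A minimal numerical sketch of why the two metrics can rank models differently (the residuals are invented for illustration): squaring weights a single large residual far more heavily than absolute error does.

        import numpy as np

        residuals_a = np.full(10, 1.0)                 # steady, modest errors
        residuals_b = np.array([0.2] * 9 + [4.0])      # mostly tiny, one large error

        for name, r in [("model A", residuals_a), ("model B", residuals_b)]:
            mae = np.mean(np.abs(r))                   # ABS-based metric
            rmse = np.sqrt(np.mean(r ** 2))            # SQ-based metric
            print(f"{name}: MAE = {mae:.2f}, RMSE = {rmse:.2f}")
        # Model B wins on MAE (0.58 vs 1.00) but loses on RMSE (1.28 vs 1.00),
        # so the two metrics suggest opposite conclusions about which model is better.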

  7. A Multipoint Method for Detecting Genotyping Errors and Mutations in Sibling-Pair Linkage Data

    OpenAIRE

    Douglas, Julie A.; Boehnke, Michael; Lange, Kenneth

    2000-01-01

    The identification of genes contributing to complex diseases and quantitative traits requires genetic data of high fidelity, because undetected errors and mutations can profoundly affect linkage information. The recent emphasis on the use of the sibling-pair design eliminates or decreases the likelihood of detection of genotyping errors and marker mutations through apparent Mendelian incompatibilities or close double recombinants. In this article, we describe a hidden Markov method for detect...

  8. Error Estimation for Indoor 802.11 Location Fingerprinting

    DEFF Research Database (Denmark)

    Lemelson, Hendrik; Kjærgaard, Mikkel Baun; Hansen, Rene

    2009-01-01

    providers could adapt their delivered services based on the estimated position error to achieve a higher service quality. Finally, system operators could use the information to inspect whether a location system provides satisfactory positioning accuracy throughout the covered area. For position error...

  9. AGAPE-ET for human error analysis of emergency tasks and its application

    International Nuclear Information System (INIS)

    Kim, J. H.; Jeong, W. D.

    2002-01-01

    The paper presents a proceduralised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), covering both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. The AGAPE-ET method is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified, considering the characteristics of the performance of each cognitive function and the influencing mechanism of the performance influencing factors (PIFs) on that function. Error analysis items have then been determined from the identified error causes or error-likely situations, and a human error analysis procedure based on these items is organised to cue and guide analysts through the overall human error analysis. The basic scheme for the quantification of HEP consists of multiplying the BHEP assigned to the error analysis item by the weight from the influencing factors decision tree (IFDT) constructed for each cognitive function. The method is characterised by structured identification of the weak points of the task to be performed and by an efficient analysis process in which analysts need only consider the relevant cognitive functions. The paper also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results.
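
    A minimal sketch of this quantification scheme (the BHEP multiplied by a weight accumulated along an influencing-factors decision tree); the numbers and factor names below are illustrative, not values from the method's tables:

        bhep = 3.0e-3                       # basic HEP for the error analysis item

        ifdt_weights = {                    # hypothetical IFDT branch outcomes
            "time_pressure_high": 2.0,
            "procedure_ambiguous": 1.5,
            "training_adequate": 0.8,
        }
        weight = 1.0
        for factor, w in ifdt_weights.items():
            weight *= w                     # multiply weights along traversed branches

        hep = bhep * weight
        print(f"HEP = {bhep:.1e} x {weight:.2f} = {hep:.2e}")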

  10. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave ''significantly'' better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs
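
    As a rough illustration of such a propagation-of-error comparison (the component uncertainties below are invented, not the study's values), independent random and systematic components combine in quadrature:

        import math

        def total_rel_error(random_pct, systematic_pct):
            # independent error components add in quadrature
            return math.sqrt(random_pct ** 2 + systematic_pct ** 2)

        for name, rnd, syst in [("calorimeter", 0.3, 0.4), ("HLNC", 0.8, 1.0)]:
            print(f"{name}: total relative error = {total_rel_error(rnd, syst):.2f}%")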

  11. Errors generated with the use of rectangular collimation

    International Nuclear Information System (INIS)

    Parks, E.T.

    1991-01-01

    This study was designed to determine whether various techniques for achieving rectangular collimation generate different numbers and types of errors and remakes and to determine whether operator skill level influences errors and remakes. Eighteen students exposed full-mouth series of radiographs on manikins with the use of six techniques. The students were grouped according to skill level. The radiographs were evaluated for errors and remakes resulting from errors in the following categories: cone cutting, vertical angulation, and film placement. Significant differences were found among the techniques in cone cutting errors and remakes, vertical angulation errors and remakes, and total errors and remakes. Operator skill did not appear to influence the number or types of errors or remakes generated. Rectangular collimation techniques produced more errors than did the round collimation techniques. However, only one rectangular collimation technique generated significantly more remakes than the other techniques

  12. Identification of factors which affect the tendency towards and attitudes of emergency unit nurses to make medical errors.

    Science.gov (United States)

    Kiymaz, Dilek; Koç, Zeliha

    2018-03-01

    To determine individual and professional factors affecting the tendency of emergency unit nurses to make medical errors and their attitudes towards these errors in Turkey. Compared with other units, the emergency unit is an environment with an increased tendency for medical errors due to its intensive and rapid pace, noise, and complex and dynamic structure. A descriptive cross-sectional study. The study was carried out from 25 July 2014-16 September 2015 with the participation of 284 nurses who volunteered to take part. Data were gathered using the data collection survey for nurses, the Medical Error Tendency Scale and the Medical Error Attitude Scale. It was determined that 40.1% of the nurses had previously witnessed medical errors, 19.4% had made a medical error in the last year, 17.6% of medical errors were medication errors in which the wrong medication was administered in the wrong dose, and none of the nurses had filled out a case report form about the medical errors they made. Regarding the factors that caused medical errors in the emergency unit, 91.2% of the nurses cited excessive workload; 85.1% an insufficient number of nurses; and 75.4% fatigue, exhaustion and burnout. The study showed that nurses who loved their job, were satisfied with their unit and always worked day shifts had a lower medical error tendency. It is suggested to consider the following actions: increase awareness of medical errors, organise training to reduce errors in medication administration, develop procedures and protocols specific to emergency unit health care, and create a non-punitive environment in which nurses can safely report medical errors. © 2017 John Wiley & Sons Ltd.

  13. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  14. Forecast errors in IEA-countries' energy consumption

    DEFF Research Database (Denmark)

    Linderoth, Hans

    2002-01-01

    Every year Policy of IEA Countries includes a forecast of the energy consumption in the member countries. Forecasts concerning the years 1985, 1990 and 1995 can now be compared to actual values. The second oil crisis resulted in big positive forecast errors. The oil price drop in 1986 did not have... the small value is often the sum of large positive and negative errors. Almost no significant correlation is found between forecast errors in the 3 years. Correspondingly, no significant correlation coefficient is found between forecast errors in the 3 main energy sectors. Therefore, a relatively small...

  15. Comparing different error-conditions in film dosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2007-01-01

    In the evaluation of a film used as a personal dosemeter it may be necessary to mark the dosemeters when possible error-conditions are recognised, such as errors that influence the ability to make a correct evaluation of the dose value. In this project a comparison was carried out to examine how two individual monitoring services (IMS) from two different EU countries, the National Inst. of Radiation Hygiene (Denmark) (NIRH) and the National Research Centre for Environment and Health (Germany) (GSF), mark their dosemeters. The IMS differ in size, type of customers and issuing period, but both use films as their primary dosemeters. The error-conditions examined are dosemeters exposed to moisture or light, contaminated dosemeters, films exposed outside the badge, missing filters in the badge, films inserted incorrectly in the badge and dosemeters not returned or returned too late to the IMS. The data were collected for the year 2003, in which NIRH evaluated ∼50,000 and GSF ∼1.4 million film dosemeters. The percentage of film dosemeters is calculated for each error-condition as well as the distribution among eight different employee categories, i.e. medicine, nuclear medicine, nuclear industry, industry, radiography, laboratories, veterinary and others. It turned out that incorrect insertion of the film in the badge was the most common error-condition observed at both IMS and that veterinarians, as an employee category, generally have the highest number of errors. NIRH has a significantly higher relative number of dosemeters in most error-conditions than GSF, which perhaps reflects that such a comparison is difficult owing to systematic and methodological differences between the IMS and the countries, e.g. regulations and monitoring programs. Also the lack of a common categorisation method for employee categories makes a comparison like this difficult. (authors)

  16. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial to identify the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.
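
    The aliasing mechanism can be illustrated in one dimension. The sketch below is a simplified analogue only, assuming a Fourier basis at unevenly sampled points in place of spherical harmonics at GPS sites; it is not the SSM approach itself.

      # 1-D analogue of the aliasing problem: fitting only low-degree terms
      # to unevenly sampled data that also contains higher-degree signal
      # biases the low-degree estimates.
      import numpy as np

      rng = np.random.default_rng(0)
      x = np.sort(rng.uniform(0, 2 * np.pi, 40))      # uneven "site" locations
      signal = 1.0 * np.sin(x) + 0.5 * np.sin(9 * x)  # low + unresolved high degree

      A = np.column_stack([np.sin(x), np.cos(x)])     # truncated (degree-1) basis
      coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
      print("estimated sin coefficient:", coef[0])    # deviates from 1.0: aliasing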

  17. Measuring Identification and Quantification Errors in Spectral CT Material Decomposition

    Directory of Open Access Journals (Sweden)

    Aamir Younis Raja

    2018-03-01

    Material decomposition methods are used to identify and quantify multiple tissue components in spectral CT, but there is no published method to quantify the misidentification of materials. This paper describes a new method for assessing misidentification and mis-quantification in spectral CT. We scanned a phantom containing gadolinium (1, 2, 4, 8 mg/mL), hydroxyapatite (54.3, 211.7, 808.5 mg/mL), water and vegetable oil using a MARS spectral scanner equipped with a poly-energetic X-ray source operated at 118 kVp and a CdTe Medipix3RX camera. Two imaging protocols were used, with and without a 0.375 mm external brass filter. A proprietary material decomposition method identified voxels as gadolinium, hydroxyapatite, lipid or water. Sensitivity and specificity information was used to evaluate material misidentification. Biological samples were also scanned. There were marked differences in identification and quantification between the two protocols even though spectral and linear correlation of gadolinium and hydroxyapatite in the reconstructed images was high and no qualitative segmentation differences in the material-decomposed images were observed. At 8 mg/mL, gadolinium was correctly identified for both protocols, but its concentration was underestimated by over half for the unfiltered protocol. At 1 mg/mL, gadolinium was misidentified in 38% of voxels for the filtered protocol and 58% of voxels for the unfiltered protocol. Hydroxyapatite was correctly identified at the two higher concentrations for both protocols, but mis-quantified for the unfiltered protocol. Gadolinium concentration as measured in the biological specimen showed a two-fold difference between protocols. In future, this methodology could be used to compare and optimize scanning protocols, image reconstruction methods, and methods for material differentiation in spectral CT.
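
    The sensitivity/specificity bookkeeping behind such an assessment reduces to a per-material confusion matrix over voxels. A minimal sketch, assuming hypothetical label arrays in place of real phantom data:

      # Per-material sensitivity and specificity computed from true vs.
      # decomposed voxel labels. The label arrays are hypothetical stand-ins.
      import numpy as np

      true_lbl = np.array(["Gd", "Gd", "HA", "water", "lipid", "Gd", "HA"])
      deco_lbl = np.array(["Gd", "water", "HA", "water", "lipid", "Gd", "water"])

      for material in ["Gd", "HA"]:
          tp = np.sum((true_lbl == material) & (deco_lbl == material))
          fn = np.sum((true_lbl == material) & (deco_lbl != material))
          tn = np.sum((true_lbl != material) & (deco_lbl != material))
          fp = np.sum((true_lbl != material) & (deco_lbl == material))
          print(material,
                "sensitivity:", tp / (tp + fn),
                "specificity:", tn / (tn + fp))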

  18. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. A multiobserver study of the effects of including point-of-care patient photographs with portable radiography: a means to detect wrong-patient errors.

    Science.gov (United States)

    Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela

    2014-08-01

    To evaluate whether the presence of facial photographs obtained at the point of care of portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and a prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches identified and the interpretation times were recorded. The 90 participating radiologists had 21 ± 10 (mean ± standard deviation) years of experience. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There was no significant difference in interpretation time between studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the identification of wrong-patient errors without a substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.
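
    The reported odds ratio follows directly from the stated 2x2 counts (23 of 30 errors detected with photographs, 9 of 29 without):

      # Worked check of the reported odds ratio from the 2x2 table.
      a, b = 23, 30 - 23   # with photographs: detected, missed
      c, d = 9, 29 - 9     # without photographs: detected, missed
      odds_ratio = (a * d) / (b * c)
      print(f"OR = {odds_ratio:.1f}")  # 7.3, matching the reported value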

  20. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage human errors likely to occur in such operations. Methods: This study was performed at a demining site in war zones located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, job tasks related to clearing landmines were specified. Next, these tasks were studied using hierarchical task analysis (HTA) and the related possible errors were assessed using ATHEANA. Results: The de-mining task was composed of four main operations, including primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on the mines, leaving mines with no action, errors in the neutralizing operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in de-mining operations can be attributed to various factors such as poor weather and operating conditions like outdoor work, inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in de-mining operations, the aforementioned factors should be managed properly.

  1. A procedure for the analysis of errors of commission in a Probabilistic Safety Assessment of a nuclear power plant at full power

    International Nuclear Information System (INIS)

    Julius, J.; Jorgenson, E.; Parry, G.W.; Mosleh, A.M.

    1995-01-01

    This paper describes an analytical procedure that has been developed to facilitate the identification of errors of commission for inclusion in a Probabilistic Safety Assessment (PSA) of a nuclear power plant operating at full power. The procedure first identifies the opportunities for error by determining when operators are required to intervene to bring the plant to a safe condition following a transient, and then identifying under what conditions this is likely to occur using a model of the causes of error. In order to make the analysis practicable, a successive screening approach is used to identify those errors with the highest potential of occurrence. The procedure has been applied as part of a PSA study, and the results of that application are summarized. For the particular plant to which the procedure was applied, the conclusion was that, because of the nature of the procedures, the high degree of redundancy in the instrumentation, the operating practices, and the control board layouts, the potential for significant errors of commission is low

  2. Online identification of linear loudspeakers parameters

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde; Rubak, Per

    2007-01-01

    Feed-forward nonlinear error correction of loudspeakers can improve sound quality. To create a realistic feed-forward strategy, identification of the loudspeaker parameters is needed. The strategy of the compensator is that the nonlinear behaviour of the loudspeakers has relatively small drift...

  3. Identification and paleoclimatic significance of magnetite nanoparticles in soils

    Science.gov (United States)

    Ahmed, Imad A. M.; Maher, Barbara A.

    2018-02-01

    In the world-famous sediments of the Chinese Loess Plateau, fossil soils alternate with windblown dust layers to record monsoonal variations over the last ∼3 My. The less-weathered, weakly magnetic dust layers reflect drier, colder glaciations. The fossil soils (paleosols) contain variable concentrations of nanoscale, strongly magnetic iron oxides, formed in situ during the wetter, warmer interglaciations. Mineralogical identification of the magnetic soil oxides is essential for deciphering these key paleoclimatic records. Formation of magnetite, a mixed Fe2+/Fe3+ ferrimagnet, has been linked to soil redox oscillations, and thence to paleorainfall. An opposite hypothesis states that magnetite can only form if the soil is water saturated for significant periods in order for Fe3+ to be reduced to Fe2+, and suggests instead the temperature-dependent formation of maghemite, an Fe3+-oxide, much of which ages subsequently into hematite, typically aluminum substituted. This latter, oxidizing pathway would have been temperature, but not rainfall dependent. Here, through structural fingerprinting and scanning transmission electron microscopy and electron energy loss spectroscopy analysis, we prove that magnetite is the dominant soil-formed ferrite. Maghemite is present in lower concentrations, and shows no evidence of aluminum substitution, negating its proposed precursor role for the aluminum-substituted hematite prevalent in the paleosols. Magnetite dominance demonstrates that magnetite formation occurs in well-drained, generally oxidizing soils, and that soil wetting/drying oscillations drive the degree of soil magnetic enhancement. The magnetic variations of the Chinese Loess Plateau paleosols thus record changes in monsoonal rainfall, over timescales of millions of years.

  4. Bacterial rapid identification with matrix assisted laser desorption/ionization time-of-flight mass spectrometry: development of an 'in-house method' and comparison with Bruker Sepsityper(®) kit.

    Science.gov (United States)

    Frédéric Ric, S; Antoine, M; Bodson, A; Lissoir, B

    2015-10-01

    The objective of this study was to compare an in-house matrix-assisted laser desorption ionization with time of flight (MALDI-TOF) method and a commercial MALDI-TOF kit (Sepsityper(®) kit) for direct bacterial identification in positive blood cultures. We also evaluated the time saved and the cost associated with the rapid identification techniques. We used the BACTEC(®) automated system for detecting positive blood cultures. Direct identification using the Sepsityper kit and the in-house method was compared with conventional identification by MALDI-TOF using pure bacterial culture on the solid phase. We also evaluated different cut-off scores for rapid bacterial identification. In total, 127 positive blood vials were selected. The rate of rapid identification with the MALDI Sepsityper kit was 25.2% with the standard cut-off and 33.9% with the enlarged cut-off, while the results for the in-house method were 44.1 and 61.4%, respectively. Error rates with the enlarged cut-off were 6.98% (n = 3) and 2.56% (n = 2) for Sepsityper and the in-house method, respectively. Identification rates were higher for gram-negative bacteria. Direct bacterial identification succeeded in supplying rapid identification of the causative organism in cases of sepsis. The time taken to obtain a result was nearly 24 hours shorter for the direct bacterial identification methods than for conventional MALDI-TOF on solid phase culture. Compared with the Sepsityper kit, the in-house method offered better results and fewer errors, was more cost-effective and easier to use.

  5. Local Use-Dependent Sleep in Wakefulness Links Performance Errors to Learning.

    Science.gov (United States)

    Quercia, Angelica; Zappasodi, Filippo; Committeri, Giorgia; Ferrara, Michele

    2018-01-01

    Sleep and wakefulness are no longer to be considered as discrete states. During wakefulness brain regions can enter a sleep-like state (off-periods) in response to a prolonged period of activity (local use-dependent sleep). Similarly, during nonREM sleep the slow-wave activity, the hallmark of sleep plasticity, increases locally in brain regions previously involved in a learning task. Recent studies have demonstrated that behavioral performance may be impaired by off-periods in wake in task-related regions. However, the relation between off-periods in wake, related performance errors and learning is still untested in humans. Here, by employing high density electroencephalographic (hd-EEG) recordings, we investigated local use-dependent sleep in wake, asking participants to repeat continuously two intensive spatial navigation tasks. Critically, one task relied on previous map learning (Wayfinding) while the other did not (Control). Behaviorally awake participants, who were not sleep deprived, showed progressive increments of delta activity only during the learning-based spatial navigation task. As shown by source localization, delta activity was mainly localized in the left parietal and bilateral frontal cortices, all regions known to be engaged in spatial navigation tasks. Moreover, during the Wayfinding task, these increments of delta power were specifically associated with errors, whose probability of occurrence was significantly higher compared to the Control task. Unlike the Wayfinding task, during the Control task neither delta activity nor the number of errors increased progressively. Furthermore, during the Wayfinding task, both the number and the amplitude of individual delta waves, as indexes of neuronal silence in wake (off-periods), were significantly higher during errors than hits. Finally, a path analysis linked the use of the spatial navigation circuits undergone to learning plasticity to off periods in wake. In conclusion, local sleep regulation in

  6. Assessing the association between thinking dispositions and clinical error.

    Science.gov (United States)

    Kinnear, John; Wilson, Nick

    2017-08-09

    Dual-process theory suggests that type 1 thinking results in a propensity to make 'intuitive' decisions based on limited information. Type 2 processes, on the other hand, are able to analyse these initial responses and replace them with rationalised decisions. Individuals may have a preference for different modes of rationalisation, on a continuum from careful to cursory. These 'dispositions' of thinking reside in type 2 processes and may result in error when the preference is for 'quick and casual' decision-making. We asked clinicians to answer a cognitive puzzle to which there was an obvious, but incorrect, answer, to measure their propensity for 'quick and casual' decision-making. The same clinicians were also asked to report the number of clinical errors they had committed in the previous two weeks. We hypothesised an association between committing error and settling for an incorrect answer, and that the cognitive puzzle would have predictive capability. 90 of 153 (59%) clinicians reported that they had committed error, while 103 (67%) gave the incorrect 'intuitive' answer to the cognitive puzzle. There was no statistically significant difference between clinicians who committed error and answered incorrectly, and those who did not and answered correctly (χ²(1, n=153)=0.021, p=0.885). The prevalence of clinical error in our study was higher than previously reported in the literature, and the propensity for accepting intuitive solutions was high. Although the cognitive puzzle was unable to predict who was more likely to commit error, the study offers insights into developing other predictive models for error. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  7. Human errors identification using the human factors analysis and classification system technique (HFACS)

    Directory of Open Access Journals (Sweden)

    G. A. Shirali

    2013-12-01

    Results: In this study, 158 reports of accidents in the Ahvaz steel industry were analyzed using the HFACS technique. The analysis showed where most of the human errors lay: at the first level, skill-based errors; at the second, the physical environment; at the third, inadequate supervision; and at the fourth, the management of resources. Conclusion: Studying and analyzing past events using the HFACS technique can identify the major and root causes of accidents and can be effective in preventing the repetition of such mishaps. It can also be used as a basis for developing strategies to prevent future events in steel industries.

  8. A biometric identification system based on eigenpalm and eigenfinger features.

    Science.gov (United States)

    Ribaric, Slobodan; Fratric, Ivan

    2005-11-01

    This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
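
    The equal error rate quoted for such systems is read off the crossing point of the false accept and false reject curves. A minimal sketch, assuming synthetic normally distributed matching scores rather than the paper's hand-image database:

      # Sweep a threshold over genuine and impostor matching scores and find
      # where the false accept rate (FAR) crosses the false reject rate (FRR).
      import numpy as np

      rng = np.random.default_rng(0)
      genuine = rng.normal(0.8, 0.08, 2000)    # scores for true identities
      impostor = rng.normal(0.5, 0.10, 20000)  # scores for other identities

      ths = np.linspace(0, 1, 1001)
      far = np.array([(impostor >= t).mean() for t in ths])
      frr = np.array([(genuine < t).mean() for t in ths])
      i = np.argmin(np.abs(far - frr))
      print(f"EER ~ {(far[i] + frr[i]) / 2:.2%} at threshold {ths[i]:.2f}")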

  9. Dependence of fluence errors in dynamic IMRT on leaf-positional errors varying with time and leaf number

    International Nuclear Information System (INIS)

    Zygmanski, Piotr; Kung, Jong H.; Jiang, Steve B.; Chin, Lee

    2003-01-01

    In d-MLC based IMRT, leaves move along a trajectory that lies within a user-defined tolerance (TOL) about the ideal trajectory specified in a d-MLC sequence file. The MLC controller measures leaf positions multiple times per second and corrects them if they deviate from ideal positions by a value greater than TOL. The magnitude of leaf-positional errors resulting from finite mechanical precision depends on the performance of the MLC motors executing leaf motions and is generally larger if leaves are forced to move at higher speeds. The maximum value of leaf-positional errors can be limited by decreasing TOL. However, due to the inherent time delay in the MLC controller, this may not happen at all times. Furthermore, decreasing the leaf tolerance results in a larger number of beam hold-offs, which in turn leads to a longer delivery time and, paradoxically, to higher chances of leaf-positional errors (≤TOL). On the other hand, the magnitude of leaf-positional errors depends on the complexity of the fluence map to be delivered. Recently, it has been shown that it is possible to determine the actual distribution of leaf-positional errors either by the imaging of moving MLC apertures with a digital imager or by analysis of a MLC log file saved by a MLC controller. This leads next to an important question: what is the relation between the distribution of leaf-positional errors and fluence errors? In this work, we introduce an analytical method to determine this relation in dynamic IMRT delivery. We model MLC errors as Random-Leaf Positional (RLP) errors described by a truncated normal distribution defined by two characteristic parameters: a standard deviation σ and a cut-off value Δx0 (Δx0 ∼ TOL). We quantify fluence errors for two cases: (i) Δx0 >> σ (unrestricted normal distribution) and (ii) Δx0 << σ (Δx0-limited normal distribution). We show that the average fluence error of an IMRT field is proportional to (i) σ/ALPO and (ii) Δx0/ALPO, respectively, where ALPO is the average leaf-pair opening of the field.
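
    The RLP error model is straightforward to simulate. A minimal sketch, assuming illustrative values of σ and Δx0 and plain rejection sampling for the truncated normal:

      # Leaf-positional errors drawn from a normal distribution truncated at
      # +/- dx0 (the tolerance-like cut-off). sigma and dx0 are illustrative.
      import numpy as np

      def rlp_errors(n, sigma, dx0, seed=0):
          rng = np.random.default_rng(seed)
          out = np.empty(0)
          while out.size < n:                    # rejection sampling
              draw = rng.normal(0.0, sigma, size=2 * n)
              out = np.concatenate([out, draw[np.abs(draw) <= dx0]])
          return out[:n]

      errs = rlp_errors(100_000, sigma=0.5, dx0=2.0)  # both in mm
      print("mean |leaf error|:", np.abs(errs).mean())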

  10. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    Science.gov (United States)

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling
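
    The contrast between the two error types can be reproduced in a few lines. The sketch below is a simplified linear-model analogue, not the Poisson time-series model of the study; the attenuation under classical error and its absence under Berkson error are the textbook behaviour the study builds on.

      # Classical error attenuates the slope estimate; Berkson error leaves
      # it roughly unbiased (linear-regression analogue, synthetic data).
      import numpy as np

      rng = np.random.default_rng(1)
      n, beta = 100_000, 0.5
      x_true = rng.normal(0, 1, n)
      y = beta * x_true + rng.normal(0, 1, n)

      # Classical: observed = truth + noise (instrument imprecision)
      x_classical = x_true + rng.normal(0, 1, n)

      # Berkson: truth = assigned + noise (central monitor vs. true exposure)
      x_assigned = rng.normal(0, 1, n)
      x_berkson_truth = x_assigned + rng.normal(0, 1, n)
      y_b = beta * x_berkson_truth + rng.normal(0, 1, n)

      def slope(x, y):
          return np.cov(x, y)[0, 1] / np.var(x)

      print("classical:", slope(x_classical, y))   # ~0.25 (attenuated)
      print("Berkson:  ", slope(x_assigned, y_b))  # ~0.50 (unbiased)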

  11. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10⁻⁶-10⁻²), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10⁻¹-2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10⁻²-10⁻¹, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  12. Diagnostic Error in Stroke-Reasons and Proposed Solutions.

    Science.gov (United States)

    Bakradze, Ekaterina; Liberman, Ava L

    2018-02-13

    We discuss the frequency of stroke misdiagnosis and identify subgroups of stroke at high risk for specific diagnostic errors. In addition, we review common reasons for misdiagnosis and propose solutions to decrease error. According to a recent report by the National Academy of Medicine, most people in the USA are likely to experience a diagnostic error during their lifetimes. Nearly half of such errors result in serious disability and death. Stroke misdiagnosis is a major health care concern, with initial misdiagnosis estimated to occur in 9% of all stroke patients in the emergency setting. Under- or missed diagnosis (false negative) of stroke can result in adverse patient outcomes due to the preclusion of acute treatments and failure to initiate secondary prevention strategies. On the other hand, the overdiagnosis of stroke can result in inappropriate treatment, delayed identification of actual underlying disease, and increased health care costs. Young patients, women, minorities, and patients presenting with non-specific, transient, or posterior circulation stroke symptoms are at increased risk of misdiagnosis. Strategies to decrease diagnostic error in stroke have largely focused on early stroke detection via bedside examination strategies and clinical decision rules. Targeted interventions to improve the diagnostic accuracy of stroke diagnosis among high-risk groups, as well as symptom-specific clinical decision support, are needed. There are a number of open questions in the study of stroke misdiagnosis. To improve patient outcomes, existing strategies to improve stroke diagnostic accuracy should be more broadly adopted and novel interventions devised and tested to reduce diagnostic errors.

  13. Medication errors : the impact of prescribing and transcribing errors on preventable harm in hospitalised patients

    NARCIS (Netherlands)

    van Doormaal, J.E.; van der Bemt, P.M.L.A.; Mol, P.G.M.; Egberts, A.C.G.; Haaijer-Ruskamp, F.M.; Kosterink, J.G.W.; Zaal, Rianne J.

    Background: Medication errors (MEs) affect patient safety to a significant extent. Because these errors can lead to preventable adverse drug events (pADEs), it is important to know what type of ME is the most prevalent cause of these pADEs. This study determined the impact of the various types of

  14. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed, based on the LMS algorithm, for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is done for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately with the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition and data processing systems as a general error analysis method.
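
    The LMS identification step is generic and can be sketched independently of the FBG hardware. The example below assumes a hypothetical three-parameter linear model and synthetic signals:

      # Generic LMS parameter identification: weights w_hat are updated
      # sample-by-sample to track a linear relation d = w_true . x.
      # The true weights and signals are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      w_true = np.array([0.8, -0.3, 0.1])
      w_hat = np.zeros(3)
      mu = 0.01                                 # LMS step size

      for _ in range(20_000):
          x = rng.normal(0, 1, 3)               # input (e.g. strain readings)
          d = w_true @ x + rng.normal(0, 0.05)  # measured output
          e = d - w_hat @ x                     # a-priori error
          w_hat += mu * e * x                   # LMS update

      print("identified weights:", np.round(w_hat, 3))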

  15. Ocular wavefront aberration and refractive error in pre-school children

    Science.gov (United States)

    Thapa, Damber; Fleck, Andre; Lakshminarayanan, Vasudevan; Bobier, William R.

    2011-11-01

    Hartmann-Shack images taken from an archived collection of SureSight refractive measurements of pre-school children in Oxford County, Ontario, Canada were retrieved and re-analyzed. Higher-order aberrations were calculated over the age range of 3 to 6 years. These higher-order aberrations were compared with respect to magnitudes of ametropia. Subjects were classified as emmetropic (range -0.5 to + 0.5D), low hyperopic (+ 0.5 to +2D) and high hyperopic (+2D or more) based upon the resulting spherical equivalent. Higher-order aberrations were found to increase with higher levels of hyperopia (p < 0.01). The strongest effect was for children showing more than +2.00D of hyperopia. The correlation coefficients were small in all of the higher-order aberrations; however, they were significant (p < 0.01). These analyses indicate a weak association between refractive error and higher-order aberrations in pre-school children.

  16. Judgment of line orientation depends on gender, education, and type of error.

    Science.gov (United States)

    Caparelli-Dáquer, Egas M; Oliveira-Souza, Ricardo; Moreira Filho, Pedro F

    2009-02-01

    Visuospatial tasks are particularly proficient at eliciting gender differences during neuropsychological performance. Here we tested the hypothesis that gender and education are related to different types of visuospatial errors on a task of line orientation that allowed the independent scoring of correct responses ("hits", or H) and one type of incorrect responses ("commission errors", or CE). We studied 343 volunteers of roughly comparable ages and with different levels of education. Education and gender were significantly associated with H scores, which were higher in men and in the groups with higher education. In contrast, the differences between men and women on CE depended on education. We concluded that (I) the ability to find the correct responses differs from the ability to avoid the wrong responses amidst an array of possible alternatives, and that (II) education interacts with gender to promote a stable performance on CE earlier in men than in women.

  17. Links between N-modular redundancy and the theory of error-correcting codes

    Science.gov (United States)

    Bobin, V.; Whitaker, S.; Maki, G.

    1992-01-01

    N-Modular Redundancy (NMR) is one of the best known fault tolerance techniques. Replication of a module to achieve fault tolerance is in some ways analogous to the use of a repetition code where an information symbol is replicated as parity symbols in a codeword. Linear Error-Correcting Codes (ECC) use linear combinations of information symbols as parity symbols which are used to generate syndromes for error patterns. These observations indicate links between the theory of ECC and the use of hardware redundancy for fault tolerance. In this paper, we explore some of these links and show examples of NMR systems where identification of good and failed elements is accomplished in a manner similar to error correction using linear ECC's.
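
    The analogy can be made concrete with triple modular redundancy (TMR) viewed as a length-3 repetition code, where disagreement with the majority vote plays the role of a syndrome identifying the failed module:

      # TMR as a length-3 repetition code: the majority vote is the decoded
      # bit, and any module disagreeing with it is flagged as failed.
      def tmr_decode(outputs):
          """outputs: bits from the three redundant modules."""
          decoded = 1 if sum(outputs) >= 2 else 0
          failed = [i for i, bit in enumerate(outputs) if bit != decoded]
          return decoded, failed

      print(tmr_decode([1, 1, 1]))  # (1, [])   all modules agree
      print(tmr_decode([1, 0, 1]))  # (1, [1])  module 1 flagged as failed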

  18. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, workplace factors, communication and training practices are the primary root causes, while omission, transposition and quantitative mistakes are the most frequent error modes. Recommendations about domestic research on human performance problems in NPPs are suggested

  19. Improving substructure identification accuracy of shear structures using virtual control system

    Science.gov (United States)

    Zhang, Dongyu; Yang, Yang; Wang, Tingqiang; Li, Hui

    2018-02-01

    Substructure identification is a powerful tool to identify the parameters of a complex structure. Previously, the authors developed an inductive substructure identification method for shear structures. The identification error analysis showed that the identification accuracy of this method is significantly influenced by the magnitudes of two key structural responses near a certain frequency; if these responses are unfavorable, the method cannot provide accurate estimation results. In this paper, a novel method is proposed to improve the substructure identification accuracy by introducing a virtual control system (VCS) into the structure. A virtual control system is a self-balanced system, which consists of some control devices and a set of self-balanced forces. The self-balanced forces counterbalance the forces that the control devices apply on the structure. The control devices are combined with the structure to form a controlled structure used to replace the original structure in the substructure identification, and the self-balanced forces are treated as known external excitations to the controlled structure. By optimally tuning the VCS’s parameters, the dynamic characteristics of the controlled structure can be changed such that the original structural responses become more favorable for the substructure identification and, thus, the identification accuracy is improved. A numerical example of a 6-story shear structure is utilized to verify the effectiveness of the VCS-based controlled substructure identification method. Finally, shake table tests are conducted on a 3-story structural model to verify the efficacy of the VCS in enhancing the identification accuracy of the structural parameters.

  20. Error and discrepancy in radiology: inevitable or avoidable?

    Science.gov (United States)

    Brady, Adrian P

    2017-02-01

    Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and their relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised. • Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate to negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.

  1. Profile of drug administration errors in anesthesia among anesthesiologists from Santa Catarina

    Directory of Open Access Journals (Sweden)

    Thomas Rolf Erdmann

    2016-02-01

    INTRODUCTION: Anesthesiology is the only medical specialty in which the physician prescribes, dilutes, and administers drugs without a check by another professional. Combined with the high frequency of drug administration, this creates a scenario propitious to errors. OBJECTIVE: To assess the prevalence of drug administration errors during anesthesia among anesthesiologists from Santa Catarina, the circumstances in which they occurred, and possible associated factors. MATERIALS AND METHODS: An electronic questionnaire was sent to all anesthesiologists of the Sociedade de Anestesiologia do Estado de Santa Catarina, with direct or multiple-choice questions on responder demographics and anesthesia practice profile; prevalence of errors, type and consequence of error; and factors that may have contributed to the errors. RESULTS: Of the respondents, 91.8% reported that they had committed administration errors, for a total of 274 errors and a mean of 4.7 (6.9) errors per respondent. The most common error was replacement (68.4%), followed by dose error (49.1%) and omission (35%). Only 7% of respondents reported a neuraxial administration error. Regarding the circumstances of the errors, they mainly occurred in the morning (32.7%) and during anesthesia maintenance (49%), with 47.8% causing no harm to the patient, 1.75% with the highest morbidity and irreversible damage, and 87.3% of cases identified immediately. As for possible contributing factors, the most frequent were distraction and fatigue (64.9%) and misreading of labels, ampoules, or syringes (54.4%). CONCLUSION: Most respondents had committed more than one error in anesthesia administration, mainly attributed to distraction or fatigue, and mostly of low severity.

  2. The error analysis of the reverse saturation current of the diode in the modeling of photovoltaic modules

    International Nuclear Information System (INIS)

    Wang, Gang; Zhao, Ke; Qiu, Tian; Yang, Xinsheng; Zhang, Yong; Zhao, Yong

    2016-01-01

    In the modeling and simulation of photovoltaic modules, especially in calculating the reverse saturation current of the diode, the series and parallel resistances are often neglected, causing certain errors. We analyzed the errors at the open circuit point, and proposed an iterative algorithm to calculate the modified values of the reverse saturation current of the diode and the series and parallel resistances, in order to reduce the errors. Assuming independent irradiation and temperature effects, the irradiation dependence and the temperature dependence of the open circuit voltage were introduced to obtain a modified formula for the open circuit voltage under any condition. Experimental results show that this modified formula has high accuracy, even at irradiance as low as 40 W/m². The errors in open circuit voltage were significantly reduced, indicating that this modified model is suitable for simulations of photovoltaic modules. - Highlights: • We propose a new method for modeling PV modules with higher accuracy. • The errors of open circuit voltage are significantly reduced. • I_o under any condition is calculated.
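
    The effect of retaining the parallel resistance at the open-circuit point can be sketched with the standard single-diode model (at open circuit, 0 = Iph - Io·(exp(Voc/(n·Vt)) - 1) - Voc/Rp). All parameter values below are illustrative, not taken from the paper:

      # Reverse saturation current Io at the open-circuit point, with and
      # without the parallel resistance Rp (Iph approximated by Isc).
      import math

      Isc, Voc = 8.21, 0.60      # A, V (per cell, hypothetical)
      n, Vt = 1.3, 0.02585       # ideality factor, thermal voltage at 25 C
      Rp = 15.0                  # ohm, hypothetical parallel resistance

      denom = math.exp(Voc / (n * Vt)) - 1.0
      Io_simple = Isc / denom                 # Rs, Rp neglected
      Io_with_Rp = (Isc - Voc / Rp) / denom   # Rp retained

      print(f"Io neglecting Rp: {Io_simple:.3e} A")
      print(f"Io with Rp:       {Io_with_Rp:.3e} A")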

  3. Response to "Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses"
.

    Science.gov (United States)

    Zhu, Ling-Ling; Lv, Na; Zhou, Quan

    2016-12-01

    We read, with great interest, the study by Baldwin and Rodriguez (2016), which described the role of the verification nurse and details the verification process in identifying errors related to chemotherapy orders. We strongly agree with their findings that a verification nurse, collaborating closely with the prescribing physician, pharmacist, and treating nurse, can better identify errors and maintain safety during chemotherapy administration.

  4. Identification problems in linear transformation system

    International Nuclear Information System (INIS)

    Delforge, Jacques.

    1975-01-01

    An attempt was made to solve the theoretical and numerical difficulties involved in the identification problem relative to the linear part of P. Delattre's theory of transformation systems. The theoretical difficulties are due to the very important problem of the uniqueness of the solution, which must be demonstrated in order to justify the value of the solution found. Simple criteria have been found when measurements are possible on all the equivalence classes, but the problem remains imperfectly solved when certain evolution curves are unknown. The numerical difficulties are of two kinds: a slow convergence of iterative methods and a strong repercussion of numerical and experimental errors on the solution. In the former case a fast convergence was obtained by transformation of the parametric space, while in the latter it was possible, from sensitivity functions, to estimate the errors, to define and measure the conditioning of the identification problem then to minimize this conditioning as a function of the experimental conditions [fr

  5. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    Science.gov (United States)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency rescue, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established, using the basic genetic algorithm (BGA) as the optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts or positions, whether for a single pollution source or for multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5 %. For cases with multi-point sources and multiple variables, the computed results contain some error because many possible combinations of the pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5 %, which proves that the established source identification model can be used to direct emergency responses.
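
    The structure of the identification model can be sketched compactly. The example below uses the standard analytic solution for an instantaneous point source in a uniform 1-D channel as the forward model, with a plain random search standing in for the BGA; all parameter values are hypothetical.

      # Identify source mass M and position x0 by minimizing the misfit
      # between observed concentrations and the 1-D analytic solution.
      import numpy as np

      u, D, A = 0.3, 5.0, 100.0   # velocity m/s, dispersion m2/s, area m2

      def conc(M, x0, x, t):
          return (M / (A * np.sqrt(4 * np.pi * D * t))
                  * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t)))

      x_obs, t_obs = np.array([500.0, 800.0, 1100.0]), 3000.0
      c_obs = conc(50_000.0, 100.0, x_obs, t_obs)   # "true" source: M, x0

      rng = np.random.default_rng(0)
      best, best_err = None, np.inf
      for _ in range(50_000):       # random search in place of the BGA
          M, x0 = rng.uniform(1e3, 1e5), rng.uniform(0.0, 400.0)
          err = np.sum((conc(M, x0, x_obs, t_obs) - c_obs) ** 2)
          if err < best_err:
              best, best_err = (M, x0), err
      print("identified (M, x0):", best)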

  6. Mobile location with NLOS identification and mitigation based on modified Kalman filtering.

    Science.gov (United States)

    Ke, Wei; Wu, Lenan

    2011-01-01

    In order to enhance accuracy and reliability of wireless location in the mixed line-of-sight (LOS) and non-line-of-sight (NLOS) environments, a robust mobile location algorithm is presented to track the position of a mobile node (MN). An extended Kalman filter (EKF) modified in the updating phase is utilized to reduce the NLOS error in rough wireless environments, in which the NLOS bias contained in each measurement range is estimated directly by the constrained optimization method. To identify the change of channel situation between NLOS and LOS, a low complexity identification method based on innovation vectors is proposed. Numerical results illustrate that the location errors of the proposed algorithm are all significantly smaller than those of the iterated NLOS EKF algorithm and the conventional EKF algorithm in different LOS/NLOS conditions. Moreover, this location method does not require any statistical distribution knowledge of the NLOS error. In addition, complexity experiments suggest that this algorithm supports real-time applications.
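
    The innovation-based identification idea reduces to gating the Kalman innovation against its predicted spread. A simplified scalar sketch, not the paper's exact algorithm:

      # Flag a range measurement as NLOS when its Kalman innovation exceeds
      # a gate tied to the innovation variance S. Values are hypothetical.
      def nlos_flag(z_meas, z_pred, S, gate=3.0):
          """gate: threshold in standard deviations of the innovation."""
          innovation = z_meas - z_pred
          return abs(innovation) > gate * S ** 0.5, innovation

      flag, innov = nlos_flag(z_meas=52.4, z_pred=47.1, S=1.2)
      print(flag, innov)   # True: a +5.3 m bias suggests an NLOS range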

  7. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered in our laboratory practice, their hazards for patient health care, and some measures and recommendations to minimize or eliminate them. Laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of a private hospital in Egypt. Errors were classified according to the laboratory phases and according to their implications for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 erroneous tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, test errors in reports already submitted to patients and reaching the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  8. Facial expressions of emotion (KDEF): identification under different display-duration conditions.

    Science.gov (United States)

    Calvo, Manuel G; Lundqvist, Daniel

    2008-02-01

    Participants judged which of seven facial expressions (neutrality, happiness, anger, sadness, surprise, fear, and disgust) were displayed by a set of 280 faces corresponding to 20 female and 20 male models of the Karolinska Directed Emotional Faces database (Lundqvist, Flykt, & Ohman, 1998). Each face was presented under free-viewing conditions (to 63 participants) and also for 25, 50, 100, 250, and 500 msec (to 160 participants), to examine identification thresholds. Measures of identification accuracy, types of errors, and reaction times were obtained for each expression. In general, happy faces were identified more accurately, earlier, and faster than other faces, whereas judgments of fearful faces were the least accurate, the latest, and the slowest. Norms for each face and expression regarding level of identification accuracy, errors, and reaction times may be downloaded from www.psychonomic.org/archive/.

  9. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    “Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human: people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they were made, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982, p. 1). Thus the importance and the aim of this paper lie in analyzing errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve while giving proper feedback.

  10. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  11. Error free physically unclonable function with programmed resistive random access memory using reliable resistance states by specific identification-generation method

    Science.gov (United States)

    Tseng, Po-Hao; Hsu, Kai-Chieh; Lin, Yu-Yu; Lee, Feng-Min; Lee, Ming-Hsiu; Lung, Hsiang-Lan; Hsieh, Kuang-Yeu; Chung Wang, Keh; Lu, Chih-Yuan

    2018-04-01

    A high performance physically unclonable function (PUF) implemented with WO3 resistive random access memory (ReRAM) is presented in this paper. This robust ReRAM-PUF eliminates the bit-flipping problem at very high temperature (up to 250 °C) thanks to the plentiful read margin obtained by using the initial resistance state and the set resistance state. Ten-year retention is also projected at temperatures up to 210 °C. These two stable resistance states enable stable operation in automotive environments from -40 to 125 °C without the need for a temperature compensation circuit. High uniqueness of the PUF is achieved by implementing a proposed identification (ID)-generation method: an optimized forming condition moves 50% of the cells to the low resistance state while the remaining 50% stay at the initial high resistance state. Inter- and intra-PUF evaluations with unlimited separation of the Hamming distance (HD) distributions are successfully demonstrated even under corner conditions. The number of reproductions was measured to exceed 10^7 with a 0% bit error rate (BER) at read voltages from 0.4 to 0.7 V.
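
    A standard way to evaluate such a PUF is to compute fractional Hamming distances within and between devices: the intra-PUF distance measures reproducibility (the paper reports 0% BER), while the inter-PUF distance measures uniqueness (ideally near 0.5). A minimal sketch on synthetic responses, with a small nonzero flip rate so the metric is visible; all sizes and names are illustrative:

    ```python
    import numpy as np

    def fractional_hd(a: np.ndarray, b: np.ndarray) -> float:
        """Fractional Hamming distance between two equal-length bit arrays."""
        return np.count_nonzero(a != b) / a.size

    # Hypothetical readouts: responses[d, r] holds the r-th readout of device d
    rng = np.random.default_rng(0)
    n_dev, n_reads, n_bits = 8, 20, 256
    signatures = rng.integers(0, 2, size=(n_dev, n_bits))        # per-device IDs
    flips = rng.random((n_dev, n_reads, n_bits)) < 0.001         # per-read bit flips
    responses = np.bitwise_xor(signatures[:, None, :], flips.astype(np.int64))

    # Intra-PUF HD (reproducibility / bit error rate): ideally 0
    intra = [fractional_hd(responses[d, 0], responses[d, r])
             for d in range(n_dev) for r in range(1, n_reads)]

    # Inter-PUF HD (uniqueness): ideally ~0.5
    inter = [fractional_hd(responses[i, 0], responses[j, 0])
             for i in range(n_dev) for j in range(i + 1, n_dev)]

    print(f"mean intra-HD (BER): {np.mean(intra):.4f}")
    print(f"mean inter-HD:       {np.mean(inter):.4f}")
    ```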

  12. Medication prescribing errors in a public teaching hospital in India: A prospective study.

    Directory of Open Access Journals (Sweden)

    Pote S

    2007-03-01

    Full Text Available Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting. Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of the Micromedex Drug-Reax database. Results: Of 312 patients, 304 were included in the study. Of the 304 cases, 103 (34%) had at least one error; the total number of errors found was 157. Drug-drug interactions were the most frequently occurring type of error (68.2%), followed by incorrect dosing interval (12%) and dosing errors (9.5%). The medication classes most often involved were antimicrobial agents (29.4%), cardiovascular agents (15.4%), GI agents (8.6%) and CNS agents (8.2%). Moderate errors contributed the most (61.8%) to the total when compared to major (25.5%) and minor (12.7%) errors. The results showed that the number of errors increases with age and with the number of medicines prescribed. Conclusion: The results point to the need to establish medication error reporting at each hospital and to share the data with other hospitals. The role of the clinical pharmacist appears to be a strong intervention here; initially, the clinical pharmacist could confine themselves to the identification of medication errors.

  13. Optics measurement algorithms and error analysis for the proton energy frontier

    CERN Document Server

    Langner, A

    2015-01-01

    Optics measurement algorithms have been improved in preparation for the commissioning of the LHC at higher energy, i.e., with an increased damage potential. Due to machine protection considerations, the higher energy sets tighter limits on the maximum excitation amplitude and the total beam charge, reducing the signal-to-noise ratio of optics measurements. Furthermore, the precision in 2012 (4 TeV) was insufficient to understand beam size measurements and determine interaction point (IP) β-functions. A new, more sophisticated algorithm has been developed which takes into account both the statistical and systematic errors involved in this measurement. This makes it possible to combine more beam position monitor measurements for deriving the optical parameters and is shown to significantly improve the accuracy and precision. Measurements from the 2012 run have been reanalyzed and, due to the improved algorithms, yield a significantly higher precision of the derived optical parameters and decreased...

  14. Effects of measurement noise on modal parameter identification

    International Nuclear Information System (INIS)

    Dorvash, S; Pakzad, S N

    2012-01-01

    In the past decade, much research has been conducted on data-driven structural health monitoring (SHM) algorithms with use of sensor measurements. A fundamental step in this SHM application is to identify the dynamic characteristics of structures. Despite the significant efforts devoted to development and enhancement of the modal parameter identification algorithms, there are still substantial uncertainties in the results obtained in real-life deployments. One of the sources of uncertainties in the results is the existence of noise in the measurement data. Depending on the subsequent application of the system identification, the level of uncertainty in the results and, consequently, the level of noise contamination can be very important. As an effort towards understanding the effect of measurement noise on the modal identification, this paper presents parameters that quantify the effects of measurement noise on the modal identification process and determine their influence on the accuracy of results. The performance of these parameters is validated by a numerically simulated example. They are then used to investigate the accuracy of identified modal properties of the Golden Gate Bridge using ambient data collected by wireless sensors. The vibration monitoring tests of the Golden Gate Bridge provided two synchronized data sets collected by two different sensor types. The influence of the sensor noise level on the accuracy of results is investigated throughout this work and it is shown that high quality sensors provide more accurate results as the physical contribution of response in their measured data is significantly higher. Additionally, higher purity and consistency of modal parameters, identified by higher quality sensors, is observed in the results. (paper)
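
    The sensitivity of identified modal parameters to measurement noise can be illustrated with a toy experiment: a simulated single-degree-of-freedom free-decay response is contaminated with white noise at several signal-to-noise ratios, and the natural frequency is re-identified from the FFT peak. This is a sketch of the general effect, not the paper's identification algorithm; all parameters are illustrative:

    ```python
    import numpy as np

    fs, T = 100.0, 60.0                       # sample rate [Hz], duration [s]
    t = np.arange(0, T, 1 / fs)
    fn, zeta = 2.5, 0.01                      # true natural frequency [Hz], damping
    fd = fn * np.sqrt(1 - zeta**2)            # damped frequency
    y = np.exp(-2 * np.pi * fn * zeta * t) * np.sin(2 * np.pi * fd * t)

    rng = np.random.default_rng(1)
    for snr_db in (40, 20, 10, 0):
        noise_std = y.std() / 10 ** (snr_db / 20)
        y_noisy = y + rng.normal(0, noise_std, y.size)
        spec = np.abs(np.fft.rfft(y_noisy))
        freqs = np.fft.rfftfreq(y.size, 1 / fs)
        f_hat = freqs[spec.argmax()]          # peak-picking identification
        print(f"SNR {snr_db:3d} dB -> identified fn = {f_hat:.3f} Hz "
              f"(error {abs(f_hat - fn) / fn * 100:.2f}%)")
    ```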

  15. The impact of transmission errors on progressive 720 lines HDTV coded with H.264

    Science.gov (United States)

    Brunnström, Kjell; Stålenbring, Daniel; Pettersson, Martin; Gustafsson, Jörgen

    2010-02-01

    TV delivered over networks based on the Internet Protocol, i.e., IPTV, is moving towards high definition (HDTV). There has been quite a lot of work on how HDTV is affected by different codecs and bitrates, but the impact of transmission errors over IP networks has been studied less. This study focused on the H.264-encoded 1280x720 progressive HDTV format and compared three concealment methods at different packet loss rates: one included in a proprietary decoder, one that is part of FFMPEG, and freezing of different lengths. The target is to simulate what typical IPTV set-top boxes will do when encountering packet loss. Another aim was to study whether presentation upscaled to the full HDTV screen, or presented pixel-mapped in a smaller area in the center of the screen, would have an effect on quality. The results show that there were differences between the packet loss concealment methods in FFMPEG and in the proprietary decoder. Freezing seemed to have an effect similar to what has been reported before. For low rates of transmission errors the coding impairments have an impact on quality, but for higher rates of transmission errors they do not affect quality, since they become overshadowed by the transmission errors. An interesting effect was discovered whereby higher-bitrate videos go from having higher quality at low packet loss rates to having lower quality than the lower-bitrate video at higher packet loss rates. The different ways of presenting the video, i.e., upscaled or not upscaled, were significant at the 95% level, but only marginally.

  16. Measurement of tokamak error fields using plasma response and its applicability to ITER

    International Nuclear Information System (INIS)

    Strait, E.J.; Buttery, R.J.; Chu, M.S.; Garofalo, A.M.; La Haye, R.J.; Schaffer, M.J.; Casper, T.A.; Gribov, Y.; Hanson, J.M.; Reimerdes, H.; Volpe, F.A.

    2014-01-01

    The nonlinear response of a low-beta tokamak plasma to non-axisymmetric fields offers an alternative to direct measurement of the non-axisymmetric part of the vacuum magnetic fields, often termed ‘error fields’. Possible approaches are discussed for determination of error fields and the required current in non-axisymmetric correction coils, with an emphasis on two relatively new methods: measurement of the torque balance on a saturated magnetic island, and measurement of the braking of plasma rotation in the absence of an island. The former is well suited to ohmically heated discharges, while the latter is more appropriate for discharges with a modest amount of neutral beam heating to drive rotation. Both can potentially provide continuous measurements during a discharge, subject to the limitation of a minimum averaging time. The applicability of these methods to ITER is discussed, and an estimate is made of their uncertainties in light of the specifications of ITER's diagnostic systems. The use of plasma response-based techniques in normal ITER operational scenarios may allow identification of the error field contributions by individual central solenoid coils, but identification of the individual contributions by the outer poloidal field coils or other sources is less likely to be feasible. (paper)

  17. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Prevalence, Nature, Severity and Risk Factors for Prescribing Errors in Hospital Inpatients: Prospective Study in 20 UK Hospitals.

    Science.gov (United States)

    Ashcroft, Darren M; Lewis, Penny J; Tully, Mary P; Farragher, Tracey M; Taylor, David; Wass, Valerie; Williams, Steven D; Dornan, Tim

    2015-09-01

    It has been suggested that doctors in their first year of post-graduate training make a disproportionate number of prescribing errors. This study aimed to compare the prevalence of prescribing errors made by first-year post-graduate doctors with that of errors by senior doctors and non-medical prescribers and to investigate the predictors of potentially serious prescribing errors. Pharmacists in 20 hospitals over 7 prospectively selected days collected data on the number of medication orders checked, the grade of prescriber and details of any prescribing errors. Logistic regression models (adjusted for clustering by hospital) identified factors predicting the likelihood of prescribing erroneously and the severity of prescribing errors. Pharmacists reviewed 26,019 patients and 124,260 medication orders; 11,235 prescribing errors were detected in 10,986 orders. The mean error rate was 8.8 % (95 % confidence interval [CI] 8.6-9.1) errors per 100 medication orders. Rates of errors for all doctors in training were significantly higher than rates for medical consultants. Doctors who were 1 year (odds ratio [OR] 2.13; 95 % CI 1.80-2.52) or 2 years in training (OR 2.23; 95 % CI 1.89-2.65) were more than twice as likely to prescribe erroneously. Prescribing errors were 70 % (OR 1.70; 95 % CI 1.61-1.80) more likely to occur at the time of hospital admission than when medication orders were issued during the hospital stay. No significant differences in severity of error were observed between grades of prescriber. Potentially serious errors were more likely to be associated with prescriptions for parenteral administration, especially for cardiovascular or endocrine disorders. The problem of prescribing errors in hospitals is substantial and not solely a problem of the most junior medical prescribers, particularly for those errors most likely to cause significant patient harm. Interventions are needed to target these high-risk errors by all grades of staff and hence

  19. MSblender: A probabilistic approach for integrating peptide identifications from multiple database search engines.

    Science.gov (United States)

    Kwon, Taejoon; Choi, Hyungwon; Vogel, Christine; Nesvizhskii, Alexey I; Marcotte, Edward M

    2011-07-01

    Shotgun proteomics using mass spectrometry is a powerful method for protein identification but suffers from limited sensitivity in complex samples. Integrating peptide identifications from multiple database search engines is a promising strategy to increase the number of peptide identifications and reduce the volume of unassigned tandem mass spectra. Existing methods pool statistical significance scores such as p-values or posterior probabilities of peptide-spectrum matches (PSMs) from multiple search engines after high scoring peptides have been assigned to spectra, but these methods lack reliable control of identification error rates as data are integrated from different search engines. We developed a statistically coherent method for integrative analysis, termed MSblender. MSblender converts raw search scores from search engines into a probability score for every possible PSM and properly accounts for the correlation between search scores. The method reliably estimates false discovery rates and identifies more PSMs than any single search engine at the same false discovery rate. The increased identifications increment spectral counts for most proteins and allow quantification of proteins that would not have been quantified by individual search engines. We also demonstrate that the enhanced quantification contributes to improved sensitivity in differential expression analyses.
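
    The central ingredients, a common probabilistic scale for heterogeneous search scores and reliable false discovery rate (FDR) control, can be illustrated with a generic target-decoy FDR computation. This is not MSblender's actual model; the column names and score distributions are invented for the sketch:

    ```python
    import numpy as np
    import pandas as pd

    def add_qvalues(psms: pd.DataFrame) -> pd.DataFrame:
        """Estimate FDR and q-values for PSMs via the target-decoy approach.

        Expects columns: 'score' (higher = better) and 'is_decoy' (bool).
        """
        df = psms.sort_values("score", ascending=False).copy()
        n_decoy = df["is_decoy"].cumsum()
        n_target = (~df["is_decoy"]).cumsum()
        df["fdr"] = (n_decoy / n_target.clip(lower=1)).astype(float)
        # q-value: smallest FDR achievable at or below this score threshold
        df["qvalue"] = df["fdr"][::-1].cummin()[::-1]
        return df

    # Hypothetical merged results from several search engines
    rng = np.random.default_rng(2)
    psms = pd.DataFrame({
        "score": np.concatenate([rng.normal(3, 1, 800),    # target PSMs
                                 rng.normal(0, 1, 800)]),  # decoy PSMs
        "is_decoy": [False] * 800 + [True] * 800,
    })
    scored = add_qvalues(psms)
    print((scored["qvalue"] <= 0.01).sum(), "PSMs accepted at 1% FDR")
    ```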

  20. Error and uncertainty in scientific practice

    NARCIS (Netherlands)

    Boumans, M.; Hon, G.; Petersen, A.C.

    2014-01-01

    Assessment of error and uncertainty is a vital component of both natural and social science. Empirical research involves dealing with all kinds of errors and uncertainties, yet there is significant variance in how such results are dealt with. Contributors to this volume present case studies of

  1. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller and actuator models, and assumes perfect sensor and estimator models. In this simulation study, the spacecraft dynamics results from the ADAMS software are used because the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error on the overall spacecraft dynamics. This signal generation model is then included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. The signal generation model has characteristics (mean, variance and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling of the CAST software.
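
    Reproducing a signal's mean, variance, and power spectral density is commonly done by filtering white noise and rescaling. A minimal sketch of that idea (the one-pole AR(1) spectrum and all numbers are illustrative stand-ins, not the SMAP error model):

    ```python
    import numpy as np
    from scipy import signal

    rng = np.random.default_rng(3)

    # Target statistics of the estimation error to be reproduced (illustrative)
    target_mean, target_std = 0.002, 0.015    # e.g., attitude error [deg]
    rho = 0.95                                # AR(1) pole sets the PSD roll-off

    # Shape white noise with a one-pole low-pass filter: x[k] = rho*x[k-1] + w[k]
    w = rng.standard_normal(100_000)
    x = signal.lfilter([1.0], [1.0, -rho], w)

    # Rescale to the target mean and variance
    x = (x - x.mean()) / x.std() * target_std + target_mean

    # Verify the statistics and the PSD shape of the generated signal
    f, psd = signal.welch(x, fs=10.0, nperseg=4096)
    print(f"mean={x.mean():.4f}, std={x.std():.4f}, "
          f"PSD peak at f={f[psd.argmax()]:.3f} Hz")
    ```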

  2. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Science.gov (United States)

    Spüler, Martin; Niethammer, Christian

    2015-01-01

    When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG. PMID:25859204

  3. Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity

    Directory of Open Access Journals (Sweden)

    Martin eSpüler

    2015-03-01

    Full Text Available When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that a continuous, asynchronous detection of errors is also possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG.

  4. Human error in remote Afterloading Brachytherapy

    International Nuclear Information System (INIS)

    Quinn, M.L.; Callan, J.; Schoenfeld, I.; Serig, D.

    1994-01-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error

  5. Dissociable genetic contributions to error processing: a multimodal neuroimaging study.

    Directory of Open Access Journals (Sweden)

    Yigal Agam

    Full Text Available Neuroimaging studies reliably identify two markers of error commission: the error-related negativity (ERN), an event-related potential, and functional MRI activation of the dorsal anterior cingulate cortex (dACC). While theorized to reflect the same neural process, recent evidence suggests that the ERN arises from the posterior cingulate cortex, not the dACC. Here, we tested the hypothesis that these two error markers also have different genetic mediation. We measured both error markers in a sample of 92 participants comprising healthy individuals and those with diagnoses of schizophrenia, obsessive-compulsive disorder or autism spectrum disorder. Participants performed the same task during functional MRI and simultaneously acquired magnetoencephalography and electroencephalography. We examined the mediation of the error markers by two single nucleotide polymorphisms: dopamine D4 receptor (DRD4) C-521T (rs1800955), which has been associated with the ERN, and methylenetetrahydrofolate reductase (MTHFR) C677T (rs1801133), which has been associated with error-related dACC activation. We then compared the effects of each polymorphism on the two error markers modeled as a bivariate response. We replicated our previous report of a posterior cingulate source of the ERN in healthy participants in the schizophrenia and obsessive-compulsive disorder groups. The effect of genotype on error markers did not differ significantly by diagnostic group. DRD4 C-521T allele load had a significant linear effect on ERN amplitude, but not on dACC activation, and this difference was significant. MTHFR C677T allele load had a significant linear effect on dACC activation but not on ERN amplitude, but the difference in effects on the two error markers was not significant. DRD4 C-521T, but not MTHFR C677T, had a significant differential effect on two canonical error markers. Together with the anatomical dissociation between the ERN and error-related dACC activation, these findings suggest that

  6. Outlier Removal and the Relation with Reporting Errors and Quality of Psychological Research

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M.

    2014-01-01

    Background The removal of outliers to acquire a significant result is a questionable research practice that appears to be commonly used in psychology. In this study, we investigated whether the removal of outliers in psychology papers is related to weaker evidence (against the null hypothesis of no effect), a higher prevalence of reporting errors, and smaller sample sizes in these papers compared to papers in the same journals that did not report the exclusion of outliers from the analyses. Methods and Findings We retrieved a total of 2667 statistical results of null hypothesis significance tests from 153 articles in main psychology journals, and compared results from articles in which outliers were removed (N = 92) with results from articles that reported no exclusion of outliers (N = 61). We preregistered our hypotheses and methods and analyzed the data at the level of articles. Results show no significant difference between the two types of articles in median p value, sample sizes, or prevalence of all reporting errors, large reporting errors, and reporting errors that concerned the statistical significance. However, we did find a discrepancy between the reported degrees of freedom of t tests and the reported sample size in 41% of articles that did not report removal of any data values. This suggests common failure to report data exclusions (or missingness) in psychological articles. Conclusions We failed to find that the removal of outliers from the analysis in psychological articles was related to weaker evidence (against the null hypothesis of no effect), sample size, or the prevalence of errors. However, our control sample might be contaminated due to nondisclosure of excluded values in articles that did not report exclusion of outliers. Results therefore highlight the importance of more transparent reporting of statistical analyses. PMID:25072606
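
    The degrees-of-freedom consistency check described above is mechanical enough to automate: for an independent-samples t test the reported df should equal N minus the number of groups. A hedged sketch with hypothetical records (a real screen would have to parse reported statistics out of article text):

    ```python
    from dataclasses import dataclass

    @dataclass
    class ReportedTest:
        article: str
        df: int          # degrees of freedom as reported for the t test
        n_total: int     # sample size as reported in the method section

    def df_consistent(t: ReportedTest, groups: int = 2) -> bool:
        """Independent-samples t test: df = N - number of groups."""
        return t.df == t.n_total - groups

    reports = [
        ReportedTest("smith2010", df=38, n_total=40),   # consistent
        ReportedTest("jones2012", df=31, n_total=40),   # suggests exclusions
    ]
    for r in reports:
        status = ("OK" if df_consistent(r)
                  else "mismatch (possible undisclosed exclusions)")
        print(f"{r.article}: {status}")
    ```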

  7. Investigating the relationship between foveal morphology and refractive error in a population with infantile nystagmus syndrome.

    Science.gov (United States)

    Healey, Natasha; McLoone, Eibhlin; Mahon, Gerald; Jackson, A Jonathan; Saunders, Kathryn J; McClelland, Julie F

    2013-04-26

    We explored associations between refractive error and foveal hypoplasia in infantile nystagmus syndrome (INS). We recruited 50 participants with INS (albinism n = 33, nonalbinism infantile nystagmus [NAIN] n = 17) aged 4 to 48 years. Cycloplegic refractive error and logMAR acuity were obtained. Spherical equivalent (SER), most ametropic meridian (MAM) refractive error, and better eye acuity (VA) were used for analyses. High resolution spectral-domain optical coherence tomography (SD-OCT) was used to obtain foveal scans, which were graded using the Foveal Hypoplasia Grading Scale. Associations between grades of severity of foveal hypoplasia, and refractive error and VA were explored. Participants with more severe foveal hypoplasia had significantly higher MAMs and SERs (Kruskal-Wallis H test P = 0.005 and P = 0.008, respectively). There were no statistically significant associations between foveal hypoplasia and cylindrical refractive error (Kruskal-Wallis H test P = 0.144). Analyses demonstrated significant differences between participants with albinism or NAIN in terms of SER and MAM (Mann-Whitney U test P = 0.001). There were no statistically significant differences between astigmatic errors between participants with albinism and NAIN. Controlling for the effects of albinism, results demonstrated no significant associations between SER, and MAM and foveal hypoplasia (partial correlation P > 0.05). Poorer visual acuity was associated statistically significantly with more severe foveal hypoplasia (Kruskal-Wallis H test P = 0.001) and with a diagnosis of albinism (Mann-Whitney U test P = 0.001). Increasing severity of foveal hypoplasia is associated with poorer VA, reflecting reduced cone density in INS. Individuals with INS also demonstrate a significant association between more severe foveal hypoplasia and increasing hyperopia. However, in the absence of albinism, there is no significant relation between refractive outcome and degree of foveal hypoplasia

  8. Errorful and errorless learning: The impact of cue-target constraint in learning from errors.

    Science.gov (United States)

    Bridger, Emma K; Mecklinger, Axel

    2014-08-01

    The benefits of testing on learning are well described, and attention has recently turned to what happens when errors are elicited during learning: Is testing nonetheless beneficial, or can errors hinder learning? Whilst recent findings have indicated that tests boost learning even if errors are made on every trial, other reports, emphasizing the benefits of errorless learning, have indicated that errors lead to poorer later memory performance. The possibility that this discrepancy is a function of the materials that must be learned-in particular, the relationship between the cues and targets-was addressed here. Cued recall after either a study-only errorless condition or an errorful learning condition was contrasted across cue-target associations, for which the extent to which the target was constrained by the cue was either high or low. Experiment 1 showed that whereas errorful learning led to greater recall for low-constraint stimuli, it led to a significant decrease in recall for high-constraint stimuli. This interaction is thought to reflect the extent to which retrieval is constrained by the cue-target association, as well as by the presence of preexisting semantic associations. The advantage of errorful retrieval for low-constraint stimuli was replicated in Experiment 2, and the interaction with stimulus type was replicated in Experiment 3, even when guesses were randomly designated as being either correct or incorrect. This pattern provides support for inferences derived from reports in which participants made errors on all learning trials, whilst highlighting the impact of material characteristics on the benefits and disadvantages that accrue from errorful learning in episodic memory.

  9. Design of an error-free nondestructive plutonium assay facility

    International Nuclear Information System (INIS)

    Moore, C.B.; Steward, W.E.

    1987-01-01

    An automated, at-line nondestructive assay (NDA) laboratory is installed in facilities recently constructed at the Savannah River Plant. The laboratory will enhance nuclear materials accounting in new plutonium scrap and waste recovery facilities. The advantages of at-line NDA operations will not be realized if results are clouded by errors in analytical procedures, sample identification, record keeping, or techniques for extracting samples from process streams. Minimization of such errors has been a primary design objective for the new facility. Concepts for achieving that objective include mechanizing the administrative tasks of scheduling activities in the laboratory, identifying samples, recording and storing assay data, and transmitting results information to process control and materials accounting functions. These concepts have been implemented in an analytical computer system that is programmed to avoid the obvious sources of error encountered in laboratory operations. The laboratory computer exchanges information with process control and materials accounting computers, transmitting results information and obtaining process data and accounting information as required to guide process operations and maintain current records of materials flow through the new facility

  10. [Medication error management climate and perception for system use according to construction of medication error prevention system].

    Science.gov (United States)

    Kim, Myoung Soo

    2012-08-01

    The purpose of this cross-sectional study was to examine the current status of construction of IT-based medication error prevention systems and the relationships among system construction, medication error management climate and perception of system use. The participants were 124 patient safety chief managers working for 124 hospitals with over 300 beds in Korea. The characteristics of the participants, the construction status and perception of the systems (electronic pharmacopoeia, electronic drug dosage calculation systems, computer-based patient safety reporting and bar-code systems) and the medication error management climate were measured in this study. The data were collected between June and August 2011. Descriptive statistics, partial Pearson correlation and MANCOVA were used for data analysis. Electronic pharmacopoeia were constructed in 67.7% of participating hospitals, computer-based patient safety reporting systems in 50.8%, and electronic drug dosage calculation systems in 32.3%. Bar-code systems showed the lowest construction rate, at 16.1% of Korean hospitals. Higher rates of construction of IT-based medication error prevention systems were associated with a more positive error management climate. Supportive strategies for improving the perception of the use of IT-based systems would encourage further system construction, and a positive error management climate would then be more easily promoted.

  11. OpWise: Operons aid the identification of differentially expressed genes in bacterial microarray experiments

    Directory of Open Access Journals (Sweden)

    Arkin Adam P

    2006-01-01

    Full Text Available Background: Differentially expressed genes are typically identified by analyzing the variation between replicate measurements. These procedures implicitly assume that there are no systematic errors in the data even though several sources of systematic error are known. Results: OpWise estimates the amount of systematic error in bacterial microarray data by assuming that genes in the same operon have matching expression patterns. OpWise then performs a Bayesian analysis of a linear model to estimate significance. In simulations, OpWise corrects for systematic error and is robust to deviations from its assumptions. In several bacterial data sets, significant amounts of systematic error are present, and replicate-based approaches dramatically overstate the confidence of the changers, while OpWise does not. Finally, OpWise can identify additional changers by assigning genes higher confidence if they are consistent with other genes in the same operon. Conclusion: Although microarray data can contain large amounts of systematic error, operons provide an external standard and allow for reasonable estimates of significance. OpWise is available at http://microbesonline.org/OpWise.

  12. Preanalytical errors in medical laboratories: a review of the available methodologies of data collection and analysis.

    Science.gov (United States)

    West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael

    2017-01-01

    Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is a wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group review the various data collection methods available. Our recommendation is the use of the laboratory information management systems as a recording mechanism for preanalytical errors as this provides the easiest and most standardized mechanism of data capture.
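
    Of the quality management tools mentioned, Pareto analysis is the most readily automated: rank error categories by frequency and flag the 'vital few' that account for roughly 80% of events. A minimal sketch with invented monthly counts:

    ```python
    from collections import Counter

    # Hypothetical monthly tally of preanalytical nonconformances
    errors = Counter({
        "hemolyzed sample": 120,
        "insufficient volume": 75,
        "mislabeled tube": 40,
        "wrong container": 22,
        "clotted sample": 18,
        "missing request form": 9,
    })

    total = sum(errors.values())
    cumulative = 0.0
    print(f"{'category':24s} {'count':>5s} {'cum %':>7s}")
    for category, count in errors.most_common():
        cumulative += 100 * count / total
        marker = " <- vital few" if cumulative <= 80 else ""
        print(f"{category:24s} {count:5d} {cumulative:6.1f}%{marker}")
    ```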

  13. Activity-Based Costing Systems for Higher Education.

    Science.gov (United States)

    Day, Dennis H.

    1993-01-01

    Examines traditional costing models utilized in higher education and pinpoints shortcomings related to proper identification of costs. Describes activity-based costing systems as a superior alternative for cost identification, measurement, and allocation. (MLF)

  14. Genetic Algorithm-Based Identification of Fractional-Order Systems

    Directory of Open Access Journals (Sweden)

    Shengxi Zhou

    2013-05-01

    Full Text Available Fractional calculus has become an increasingly popular tool for modeling the complex behaviors of physical systems from diverse domains. One of the key issues in applying fractional calculus to engineering problems is achieving parameter identification of fractional-order systems. A time-domain identification algorithm based on a genetic algorithm (GA) is proposed in this paper. The multi-variable parameter identification is converted into a parameter optimization problem by applying the GA to the identification of fractional-order systems. To evaluate identification accuracy and stability, the time-domain output error considering the condition variation is designed as the fitness function for parameter optimization. The identification process is established under various noise levels and excitation levels. The effects of external excitation and noise level on identification accuracy are analyzed in detail. The simulation results show that the proposed method can identify the parameters of both commensurate and non-commensurate fractional-order systems from noisy data. It is also observed that the excitation signal is an important factor influencing the identification accuracy of fractional-order systems.
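
    The core loop, simulate candidate parameters, score them by time-domain output error, and evolve the population, is sketched below for an ordinary first-order plant as a stand-in (the paper's fractional-order simulation step is not reproduced here); all GA settings are illustrative:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    dt, n = 0.01, 500
    t = np.arange(n) * dt
    u = np.sign(np.sin(2 * np.pi * 0.5 * t))          # excitation: square wave

    def simulate(a: float, b: float) -> np.ndarray:
        """Euler simulation of y' = -a*y + b*u (first-order stand-in plant)."""
        y = np.zeros(n)
        for k in range(n - 1):
            y[k + 1] = y[k] + dt * (-a * y[k] + b * u[k])
        return y

    true_a, true_b = 2.0, 1.5
    y_meas = simulate(true_a, true_b) + rng.normal(0, 0.01, n)   # noisy "data"

    def fitness(pop: np.ndarray) -> np.ndarray:
        """Negative mean squared time-domain output error."""
        return np.array([-np.mean((simulate(a, b) - y_meas) ** 2) for a, b in pop])

    pop = rng.uniform(0.1, 5.0, size=(30, 2))                    # initial population
    for gen in range(40):
        f = fitness(pop)
        elite = pop[np.argmax(f)].copy()
        # Tournament selection, arithmetic crossover, Gaussian mutation
        i, j = rng.integers(0, len(pop), (2, len(pop)))
        parents = np.where((f[i] > f[j])[:, None], pop[i], pop[j])
        alpha = rng.random((len(pop), 1))
        pop = alpha * parents + (1 - alpha) * parents[::-1]
        pop += rng.normal(0, 0.05, pop.shape)
        pop = np.clip(pop, 0.01, 10.0)
        pop[0] = elite                                           # elitism

    best = pop[np.argmax(fitness(pop))]
    print(f"identified a={best[0]:.3f}, b={best[1]:.3f} (true {true_a}, {true_b})")
    ```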

  15. Learning from mistakes: errors in approaches to melanoma and the urgent need for updated national guidelines.

    Science.gov (United States)

    Simionescu, Olga; Blum, Andreas; Grigore, Mariana; Costache, Mariana; Avram, Alina; Testori, Alessandro

    2016-09-01

    The tracking and identification of errors in the detection and follow-up of melanoma are important because there is huge potential to increase awareness about the most vulnerable aspects of diagnosis and treatment, and to improve both from the perspective of healthcare economics. The present study was designed to identify where errors occur and to propose a minimum set of rules for the routine guidance of any specialist in melanoma management. This report describes the evaluation of a unique series of 33 cases in which errors at many steps in the diagnosis and treatment of melanoma were detected. Cases were collected at two centers in Romania, one public and one private, as part of a process of obtaining patient-requested second opinions. A total of 166 errors were identified across the 33 patients, most of which were treatment errors. The errors fell into six categories: clinical diagnostic errors (36 errors among 30 patients); primary surgical errors (31 errors among 16 patients); pathology errors (24 errors among 17 patients); sentinel lymph node biopsy errors (13 errors among 13 patients); staging errors (17 errors among 13 patients); and treatment or management errors (45 errors among 33 patients). Based on the present results, we propose that in countries lacking national guidelines, clinicians should adhere to international evidence-based guidelines for the diagnosis and treatment of melanoma. © 2015 The International Society of Dermatology.

  16. A navigation system for percutaneous needle interventions based on PET/CT images: design, workflow and error analysis of soft tissue and bone punctures.

    Science.gov (United States)

    Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan

    2011-01-01

    as highlighting the differences between bone and soft tissue punctures. An overall average error of 4.23 mm and 3.07 mm for bone and soft tissue punctures, respectively, demonstrated the feasibility of using this system for such interventions. The proposed system workflow was shown to be effective in separating the preparation from the sterile phase, as well as in keeping the system manageable by a single operator. Among the distinct sources of error, the user error based on the system accuracy (defined as the distance from the planned target to the actual needle tip) appeared to be the most significant. Bone punctures showed higher user error, whereas soft tissue punctures showed higher tissue deformation error.

  17. Identification and Evaluation of Human Errors in the Medication Process Using the Extended CREAM Technique

    Directory of Open Access Journals (Sweden)

    Iraj Mohammadfam

    2017-10-01

    Full Text Available Background: The medication process is a powerful instrument for curing patients, and executing it correctly plays an important role in treatment and the provision of care. Medication error, as a complicated process, can occur at any stage, and avoiding it requires appropriate decision-making, cognition, and performance by hospital staff. Objectives: The present study aimed at identifying and evaluating the nature and causes of human errors in the medication process in a hospital using the extended CREAM method. Methods: This was a qualitative, cross-sectional study conducted in a hospital in Hamadan. First, the medication process was selected as a critical issue based on the opinions of experts, specialists, and experienced individuals in the nursing and medical departments. Then, the process was decomposed into steps and substeps using hierarchical task analysis (HTA), and the probability of human error was evaluated using the extended CREAM technique. Results: Based on the findings obtained through the basic CREAM method, the highest CFPt was in the step of administering medicine to patients (0.056). Moreover, the results revealed that the highest CFPt values were in the substeps of calculating the dose of medicine and determining the method of prescription, and identifying the patient (0.0796 and 0.0785, respectively). The lowest CFPt was related to transcribing the prescribed medicine from the file to the medicine worksheet (0.0106). Conclusions: Considering the critical consequences of human errors in the medication process, holding pharmacological retraining classes, applying the principles of executing pharmaceutical orders, increasing medical personnel, reducing overtime work, organizing work shifts, and using error reporting systems are of paramount importance.
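
    In basic CREAM, a nominal cognitive failure probability for each cognitive activity is multiplied by weighting factors attached to the assessed common performance conditions (CPCs). A minimal sketch of that bookkeeping; the nominal values and weights below are placeholders, not the study's figures:

    ```python
    # Basic CREAM-style adjustment: CFP = CFP0 * product of CPC weighting factors
    from math import prod

    NOMINAL_CFP = {            # placeholder nominal failure probabilities
        "observation": 0.003,
        "interpretation": 0.01,
        "planning": 0.01,
        "execution": 0.003,
    }

    def adjusted_cfp(activity: str, cpc_weights: list[float]) -> float:
        """Adjust the nominal CFP of a cognitive activity by its CPC weights."""
        return NOMINAL_CFP[activity] * prod(cpc_weights)

    # Example: an execution substep under poor working conditions (weight 2.0),
    # adequate training (1.0) and high time pressure (5.0); weights illustrative
    print(f"CFP = {adjusted_cfp('execution', [2.0, 1.0, 5.0]):.4f}")
    ```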

  18. Pediatric crisis resource management training improves emergency medicine trainees' perceived ability to manage emergencies and ability to identify teamwork errors.

    Science.gov (United States)

    Bank, Ilana; Snell, Linda; Bhanji, Farhan

    2014-12-01

    Improved pediatric crisis resource management (CRM) training is needed in emergency medicine residencies because of the variable nature of exposure to critically ill pediatric patients during training. We created a short, needs-based pediatric CRM simulation workshop with postactivity follow-up to determine retention of CRM knowledge. Our aims were to provide a realistic learning experience for residents and to help the learners recognize common errors in teamwork and improve their perceived abilities to manage ill pediatric patients. Residents participated in a 4-hour objectives-based workshop derived from a formal needs assessment. To quantify their subjective abilities to manage pediatric cases, the residents completed a postworkshop survey (with a retrospective precomponent to assess perceived change). Ability to identify CRM errors was determined via a written assessment of scripted errors in a prerecorded video observed before and 1 month after completion of the workshop. Fifteen of the 16 eligible emergency medicine residents (postgraduate year 1-5) attended the workshop and completed the surveys. There were significant differences in 15 of 16 retrospective pre to post survey items using the Wilcoxon rank sum test for non-parametric data. These included ability to be an effective team leader in general (P < 0.008), delegating tasks appropriately (P < 0.009), and ability to ensure closed-loop communication (P < 0.008). There was a significant improvement in identification of CRM errors through the use of the video assessment from 3 of the 12 CRM errors to 7 of the 12 CRM errors (P < 0.006). The pediatric CRM simulation-based workshop improved the residents' self-perceptions of their pediatric CRM abilities and improved their performance on a video assessment task.

  19. Error Characterisation and Merging of Active and Passive Microwave Soil Moisture Data Sets

    Science.gov (United States)

    Wagner, Wolfgang; Gruber, Alexander; de Jeu, Richard; Parinussa, Robert; Chung, Daniel; Dorigo, Wouter; Reimer, Christoph; Kidd, Richard

    2015-04-01

    As part of the Climate Change Initiative (CCI) programme of the European Space Agency (ESA), a data fusion system has been developed which is capable of ingesting surface soil moisture data derived from active and passive microwave sensors (ASCAT, AMSR-E, etc.) flown on different satellite platforms and merging them to create long and consistent time series of soil moisture suitable for use in climate change studies. The soil moisture data records so created (latest version: ESA CCI SM v02.1, released on 5/12/2014) are freely available and can be obtained from http://www.esa-soilmoisture-cci.org/. As described by Wagner et al. (2012), the principal steps of the data fusion process are: 1) error characterisation, 2) matching to account for data set specific biases, and 3) merging. In this presentation we present the current data fusion process and discuss how new error characterisation methods, such as the increasingly popular triple collocation method as discussed for example by Zwieback et al. (2012), may be used to improve it. The main benefit of an improved error characterisation would be a more reliable identification of the best performing microwave soil moisture retrieval(s) for each grid point and each point in time. If two or more satellite data sets provide useful information, the estimated errors can be used to define the weights with which each satellite data set is merged, i.e., the lower its error the higher its weight. This is expected to bring a significant improvement over the current data fusion scheme, which is not yet based on quantitative estimates of the retrieval errors but on a proxy measure, namely the vegetation optical depth (Dorigo et al., 2015): over areas with low vegetation passive soil moisture retrievals are used, while over areas with moderate vegetation density active retrievals are used. In transition areas, where both products correlate well, both products are being used in a synergistic way: on time steps where only one of
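
    Triple collocation estimates the random error variance of each of three collocated, independently erring data sets of the same variable. A sketch of the common covariance formulation on synthetic data (sensor labels and noise levels are illustrative):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 5000
    truth = rng.normal(0.25, 0.08, n)             # "true" soil moisture

    # Three measurement systems with independent, zero-mean errors
    x = truth + rng.normal(0, 0.02, n)            # e.g., active (ASCAT-like)
    y = truth + rng.normal(0, 0.04, n)            # e.g., passive (AMSR-E-like)
    z = truth + rng.normal(0, 0.03, n)            # e.g., model reference

    def tc_error_variances(x, y, z):
        """Covariance notation of triple collocation (uncorrelated errors)."""
        c = np.cov(np.vstack([x, y, z]))
        var_ex = c[0, 0] - c[0, 1] * c[0, 2] / c[1, 2]
        var_ey = c[1, 1] - c[0, 1] * c[1, 2] / c[0, 2]
        var_ez = c[2, 2] - c[0, 2] * c[1, 2] / c[0, 1]
        return var_ex, var_ey, var_ez

    for name, v in zip("xyz", tc_error_variances(x, y, z)):
        print(f"estimated error std of {name}: {np.sqrt(max(v, 0)):.4f}")
    ```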

  20. Propagation of resist heating mask error to wafer level

    Science.gov (United States)

    Babin, S. V.; Karklin, Linard

    2006-10-01

    As technology approaches 45 nm and below, the IC industry is experiencing a severe product yield hit due to rapidly shrinking process windows and unavoidable manufacturing process variations. Current EDA tools are unable by their nature to deliver optimized and process-centered designs, which calls for 'post-design' localized layout optimization DFM tools. To evaluate the impact of different manufacturing process variations on the final product, it is important to trace and evaluate all errors through the design-to-manufacturing flow. The photomask is one of the critical parts of this flow, and special attention should be paid to the photomask manufacturing process and especially to tight mask CD control. Electron beam lithography (EBL) is a major technique used for fabrication of high-end photomasks. During the writing process, resist heating is one of the sources of mask CD variation. Electron energy is released in the mask body mainly as heat, leading to significant temperature fluctuations in local areas. The temperature fluctuations cause changes in resist sensitivity, which in turn lead to CD variations. These CD variations depend on mask writing speed, order of exposure, pattern density and its distribution. Recent measurements revealed up to 45 nm CD variation on the mask when using ZEP resist. The resist heating problem with CAR resists is significantly smaller compared to other types of resists, partially due to higher resist sensitivity and the lower exposure dose required. However, there are no data yet showing CD errors on the wafer induced by CAR resist heating on the mask. This effect can be amplified by high MEEF values and should be carefully evaluated at 45 nm and below technology nodes, where tight CD control is required. In this paper, we simulated CD variation on the mask due to resist heating; then a mask pattern with the heating error was transferred onto the wafer. Thus, a CD error on the wafer was evaluated subject to only one term of the

  1. Flight Test Results of a GPS-Based Pitot-Static Calibration Method Using Output-Error Optimization for a Light Twin-Engine Airplane

    Science.gov (United States)

    Martos, Borja; Kiszely, Paul; Foster, John V.

    2011-01-01

    As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2-sigma error bounds with a significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.
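
    The estimation step can be approximated by an output-error fit: assume a low-order position-error model for the pitot-static system and choose its coefficients to minimize the mismatch between GPS-derived true airspeed and model-corrected indicated airspeed. A hedged sketch; the model form and every number are invented for illustration:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(6)

    # Synthetic flight data: indicated airspeed (IAS) and GPS-derived true
    # airspeed, both in m/s, assumed reduced to the same density altitude.
    ias = np.linspace(40, 90, 120)
    true_err = 1.5 + 0.02 * ias                   # "real" position error model
    tas_gps = ias + true_err + rng.normal(0, 0.3, ias.size)

    def residuals(theta):
        """Output error: GPS speed minus model-corrected airspeed."""
        c0, c1 = theta
        return tas_gps - (ias + c0 + c1 * ias)

    fit = least_squares(residuals, x0=[0.0, 0.0])
    c0, c1 = fit.x
    print(f"estimated error model: dV = {c0:.2f} + {c1:.4f} * IAS")

    # Rough 2-sigma confidence bounds from the Jacobian (linear approximation)
    J = fit.jac
    cov = np.linalg.inv(J.T @ J) * np.var(fit.fun, ddof=2)
    print("2-sigma bounds:", 2 * np.sqrt(np.diag(cov)))
    ```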

  2. Medication errors: the role of the patient.

    Science.gov (United States)

    Britten, Nicky

    2009-06-01

    1. Patients and their carers will usually be the first to notice any observable problems resulting from medication errors. They will probably be unable to distinguish between medication errors, adverse drug reactions, or 'side effects'. 2. Little is known about how patients understand drug related problems or how they make attributions of adverse effects. Some research suggests that patients' cognitive models of adverse drug reactions bear a close relationship to models of illness perception. 3. Attributions of adverse drug reactions are related to people's previous experiences and to their level of education. The evidence suggests that on the whole patients' reports of adverse drug reactions are accurate. However, patients do not report all the problems they perceive and are more likely to report those that they do perceive as severe. Patients may not report problems attributed to their medications if they are fearful of doctors' reactions. Doctors may respond inappropriately to patients' concerns, for example by ignoring them. Some authors have proposed the use of a symptom checklist to elicit patients' reports of suspected adverse drug reactions. 4. Many patients want information about adverse drug effects, and the challenge for the professional is to judge how much information to provide and the best way of doing so. Professionals' inappropriate emphasis on adherence may be dangerous when a medication error has occurred. 5. Recent NICE guidelines recommend that professionals should ask patients if they have any concerns about their medicines, and this approach is likely to yield information conducive to the identification of medication errors.

  3. System Identification for Integrated Aircraft Development and Flight Testing (l’Identification des systemes pour le developpement integre des aeronefs et les essais en vol)

    Science.gov (United States)

    1999-03-01

    aerodynamics to affect load motions. The effects include a load trail angle in proportion to the drag specific force, and modification of the load pendulum modes. The remaining recoverable fragments of this record refer to an equations-based algorithm for flight data filtering and data consistency checking; SCIDNT, an output-error identification code; accelerations at the seven sensor locations; and an identified system whose order grows in proportion to the number of flexible modes when system identification is performed.

  4. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  5. Pitch Correlogram Clustering for Fast Speaker Identification

    Directory of Open Access Journals (Sweden)

    Nitin Jhanwar

    2004-12-01

    Full Text Available Gaussian mixture models (GMMs) are commonly used in text-independent speaker identification systems. However, for large speaker databases, their high computational run-time limits their use in online or real-time speaker identification situations. Two-stage identification systems, in which the database is partitioned into clusters based on some proximity criterion and only the GMMs of a single cluster are run in every test, have been suggested in the literature to speed up the identification process. However, most clustering algorithms used have shown limited success, apparently because the clustering and GMM feature spaces used are derived from similar speech characteristics. This paper presents a new clustering approach based on the concept of a pitch correlogram that captures frame-to-frame pitch variations of a speaker rather than short-time spectral characteristics like cepstral coefficients, spectral slopes, and so forth. The effectiveness of this two-stage identification process is demonstrated on the IVIE corpus of 110 speakers. The overall system achieves a run-time advantage of 500% as well as a 10% reduction of error in overall speaker identification.
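
    As a rough illustration of the two-stage scheme described above, the sketch below clusters speakers first and then scores only the GMMs inside the selected cluster. The feature dimensions, cluster count, and synthetic data are placeholders, not the authors' pitch-correlogram implementation.

```python
# Two-stage speaker identification sketch: cluster the speaker database,
# then evaluate only the GMMs of speakers inside the predicted cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_speakers, n_clusters = 20, 4

# Placeholder data: one "cluster feature" vector (e.g. a pitch-correlogram
# summary) and one matrix of short-time frames per speaker.
cluster_feats = rng.normal(size=(n_speakers, 8))
frames = [rng.normal(loc=i * 0.1, size=(200, 12)) for i in range(n_speakers)]

# Stage 1: partition the database by proximity of the cluster features.
km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(cluster_feats)

# Stage 2 models: one GMM per speaker.
gmms = [GaussianMixture(n_components=4, random_state=0).fit(X) for X in frames]

def identify(test_frames, test_cluster_feat):
    """Run only the GMMs belonging to the predicted cluster."""
    cluster = km.predict(test_cluster_feat.reshape(1, -1))[0]
    candidates = [i for i in range(n_speakers) if km.labels_[i] == cluster]
    # The speaker whose GMM gives the highest average log-likelihood wins.
    return max(candidates, key=lambda i: gmms[i].score(test_frames))

print(identify(frames[7], cluster_feats[7]))  # ideally prints 7
```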

  6. Identification and Assessment of Human Errors in Postgraduate Endodontic Students of Kerman University of Medical Sciences by Using the SHERPA Method

    Directory of Open Access Journals (Sweden)

    Saman Dastaran

    2016-03-01

    Full Text Available Introduction: Human errors are the cause of many accidents, both industrial and medical; therefore, finding an approach for identifying and reducing them is very important. Since no study had been done on human errors in the dental field, this study aimed to identify and assess human errors in postgraduate endodontic students of Kerman University of Medical Sciences by using the SHERPA method. Methods: This cross-sectional study was performed during 2014. Data were collected by observing tasks and interviewing postgraduate endodontic students. Overall, 10 critical tasks, which were most likely to cause harm to patients, were determined. Next, a Hierarchical Task Analysis (HTA) was conducted and human errors in each task were identified using Systematic Human Error Reduction and Prediction Approach (SHERPA) worksheets. Results: After analyzing the SHERPA worksheets, 90 human errors were identified, comprising action errors (67.7%), checking errors (13.3%), selection errors (8.8%), retrieval errors (5.5%), and communication errors (4.4%); action errors were thus the most frequent and communication errors the least frequent. Conclusions: The results of the study showed that the highest percentage of errors and the highest level of risk were associated with action errors; therefore, to reduce the occurrence of such errors and limit their consequences, control measures including periodic training in work procedures, provision of work checklists, development of guidelines, and establishment of a systematic and standardized reporting system should be put in place. Regarding the results of this study, the control of recovery errors, with the highest percentage of undesirable risk, and action errors, with the highest frequency of errors, should be the priority of control …

  7. Structural damage detection robust against time synchronization errors

    International Nuclear Information System (INIS)

    Yan, Guirong; Dyke, Shirley J

    2010-01-01

    Structural damage detection based on wireless sensor networks can be affected significantly by time synchronization errors among sensors. Precise time synchronization of sensor nodes has been viewed as crucial for addressing this issue. However, precise time synchronization over a long period of time is often impractical in large wireless sensor networks due to two inherent challenges. First, time synchronization needs to be performed periodically, requiring frequent wireless communication among sensors at significant energy cost. Second, significant time synchronization errors may result from node failures which are likely to occur during long-term deployment over civil infrastructures. In this paper, a damage detection approach is proposed that is robust against time synchronization errors in wireless sensor networks. The paper first examines the ways in which time synchronization errors distort identified mode shapes, and then proposes a strategy for reducing distortion in the identified mode shapes. Modified values for these identified mode shapes are then used in conjunction with flexibility-based damage detection methods to localize damage. This alternative approach relaxes the need for frequent sensor synchronization and can tolerate significant time synchronization errors caused by node failures. The proposed approach is successfully demonstrated through numerical simulations and experimental tests in a lab
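
    A minimal numerical sketch of the distortion mechanism underlying this approach: a clock offset tau at a sensor multiplies that sensor's identified mode-shape component by a frequency-dependent phase factor, which can be undone once the offsets are estimated. This illustrates the general idea only; it is not the paper's specific correction strategy.

```python
# A synchronization offset tau_j at sensor j multiplies that sensor's
# identified (complex) mode-shape component by exp(-1j*w*tau_j) at the
# modal frequency w; with estimated offsets the distortion can be removed
# before flexibility-based damage localization.
import numpy as np

w = 2 * np.pi * 3.2                           # modal frequency [rad/s], placeholder
phi_true = np.array([0.3, 0.8, 1.0, 0.7])     # true real-valued mode shape
tau = np.array([0.0, 0.004, -0.002, 0.006])   # per-sensor clock offsets [s]

phi_meas = phi_true * np.exp(-1j * w * tau)   # distorted identification
phi_corr = phi_meas * np.exp(1j * w * tau)    # phase-corrected estimate

print(np.allclose(phi_corr.real, phi_true))   # True
```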

  8. Two-component model application for error calculus in the environmental monitoring data analysis

    International Nuclear Information System (INIS)

    Carvalho, Maria Angelica G.; Hiromoto, Goro

    2002-01-01

    Analysis and interpretation of the results of an environmental monitoring program are often based on evaluation of the mean value of a particular set of data, which is strongly affected by the analytical errors associated with each measurement. A model proposed by Rocke and Lorenzato assumes two error components, one additive and one multiplicative, to deal with lower and higher concentration values in a single model. In this communication, an application of this method to the re-evaluation of the errors reported for a large set of total alpha measurements in an environmental sample is presented. The results show that the mean value calculated taking the new errors into account is higher than that obtained with the original errors, indicating that the analytical errors reported previously were underestimated in the region of lower concentrations. (author)
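
    The Rocke-Lorenzato model is commonly written as y = alpha + mu*exp(eta) + eps, where alpha is a background term, eta ~ N(0, s_eta^2) is the multiplicative component, and eps ~ N(0, s_eps^2) is the additive one. The sketch below (with placeholder parameter values, not the paper's fitted ones) shows the implied measurement standard deviation: nearly constant at low concentration, nearly proportional to concentration at high concentration.

```python
# Two-component error model: SD of y at true concentration mu.
# Var(mu*exp(eta)) = mu^2 * exp(s_eta^2) * (exp(s_eta^2) - 1); the
# background term alpha does not affect the SD.
import math

def two_component_sd(mu, s_eta, s_eps):
    mult_var = mu**2 * math.exp(s_eta**2) * (math.exp(s_eta**2) - 1.0)
    return math.sqrt(s_eps**2 + mult_var)

# Placeholder parameters, not values fitted to the monitoring data.
for mu in (0.0, 0.1, 1.0, 10.0, 100.0):
    print(f"mu={mu:7.1f}  sd={two_component_sd(mu, 0.1, 0.05):8.3f}")
```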

  9. Implementation of an RFID-Based Sequencing-Error-Proofing System for Automotive Manufacturing Logistics

    Directory of Open Access Journals (Sweden)

    Yong-Shin Kang

    2018-01-01

    Full Text Available Serialized tracing provides the ability to track and trace the lifecycle of products and parts. Unlike barcodes, radio frequency identification (RFID), an important building block of the internet of things (IoT), does not require a line of sight and has the advantages of recognizing many objects simultaneously and rapidly, and of storing more information than barcodes. Therefore, RFID has been used in a variety of application domains such as logistics, distribution, and manufacturing, significantly improving traceability and process efficiency. In this study, we applied RFID to improve the just-in-sequence operation of an automotive inbound logistics process. First, we implemented an RFID-based visibility system for real-time traceability and control of part supply from the production lines of suppliers to the assembly line of a car manufacturer. Second, we developed an RFID-based sequence-error-proofing system to avoid accidental line stops due to incorrect part sequencing. The whole system has been successfully installed in a rear-axle inbound logistics process of GM Korea. We achieved a significant amount of cost savings, especially due to the prevention of sequencing errors and part shortages, and the reduction of manual operations. A thorough cost-benefit analysis demonstrates the clear economic feasibility of using RFID technologies for just-in-sequence inbound logistics in an automobile manufacturing environment.
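
    A minimal sketch of the sequence-error-proofing logic: each scanned tag ID is checked against the planned build order, so a wrong part raises an alert before it can stop the line. The part IDs and data layout are hypothetical, not the deployed system's schema.

```python
# Compare scanned RFID part IDs against the planned assembly sequence and
# flag any out-of-sequence part before it reaches the line.
from collections import deque

planned = deque(["AXLE-0012", "AXLE-0013", "AXLE-0014"])  # planned build order

def on_tag_read(tag_id: str) -> bool:
    """Return True if the scanned part matches the next planned part."""
    expected = planned[0]
    if tag_id != expected:
        print(f"SEQUENCE ERROR: expected {expected}, scanned {tag_id}")
        return False
    planned.popleft()   # part confirmed in sequence
    return True

on_tag_read("AXLE-0012")   # OK
on_tag_read("AXLE-0014")   # out of sequence -> alert
```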

  10. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Often times, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair, and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy such as the Patient's Experience of Attunement and Responsiveness scale (PEAR) can be used in conjunction with treatment outcome measures such as the Outcome Questionnaire 45.2 (OQ-45.2) to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  11. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article is focused on research into the notion of medical error; the medical and legal aspects of this notion have been considered. The necessity of legislative consolidation of the notion of «medical error» and criteria for its legal assessment have been substantiated. In writing the article, we used the empirical method and general scientific and comparative legal methods. A comparison of the concept of medical error in its civil and legal aspects was made from the point of view of Ukrainian, European, and American scholars. It is noted that the problem of medical errors has been known since ancient times and throughout the world; regardless of the level of development of medicine, there is no country where doctors never make errors. According to statistics, medical errors are among the top five causes of death worldwide. At the same time, the provision of medical services concerns practically everyone. As human life and health are recognized in Ukraine as the highest social values, medical services must be of high quality and effective. The provision of poor-quality medical services causes harm to health, and sometimes to the lives of people; it may result in injury or even death. The right to health protection is one of the fundamental human rights guaranteed by the Constitution of Ukraine; therefore, the issue of medical errors and liability for them is extremely relevant. The authors conclude that the definition of the notion of «medical error» must be given legal consolidation. Besides, the legal assessment of medical errors must be based on uniform principles enshrined in the legislation and confirmed by judicial practice.

  12. Significance of FDG-PET in Identification of Diseases of the Appendix – Based on Experience of Two Cases Falsely Positive for FDG Accumulation

    Directory of Open Access Journals (Sweden)

    Shimpei Ogawa

    2009-04-01

    Full Text Available A discussion of the significance of 18F-fluorodeoxyglucose positron emission tomography (FDG-PET) in the identification of diseases of the appendix is presented, based on two cases falsely positive for FDG accumulation. In both cases a tumor was palpable in the lower right abdominal region and a prominently enlarged appendix was depicted by CT. The patients underwent ileocecal resection based on a strong suspicion of appendiceal cancer rather than appendicitis, since abnormal accumulation exhibiting maximum standardized uptake values (SUVs) of 7.27 and 17.11, respectively, was observed at the same site on FDG-PET examination. As no malignant findings were observed histologically, the patients were diagnosed with appendicitis. FDG accumulates specifically not only in malignant tumors but also in conditions such as acute or chronic inflammation, abscesses, and lymphadenitis, and although identification based on SUVs has been reported as a discrimination method, the two cases reported here were both false positives exhibiting high maximum SUVs. At present, the significance of FDG-PET in the identification of diseases of the appendix is somewhat limited and there are restrictions on its application, but various research is being conducted with the aim of improving diagnostic accuracy, and it is hoped that additional studies will be conducted in the future.

  13. Automatic error compensation in dc amplifiers

    International Nuclear Information System (INIS)

    Longden, L.L.

    1976-01-01

    When operational amplifiers are exposed to high levels of neutron fluence or total ionizing dose, significant changes may be observed in input voltages and currents. These changes may produce large errors at the output of direct-coupled amplifier stages. Therefore, the need exists for automatic compensation techniques. However, previously introduced techniques compensate only for errors in the main amplifier and neglect the errors induced by the compensating circuitry. In this paper, the techniques introduced compensate not only for errors in the main operational amplifier, but also for errors induced by the compensation circuitry. Included in the paper is a theoretical analysis of each compensation technique, along with advantages and disadvantages of each. Important design criteria and information necessary for proper selection of semiconductor switches will also be included. Introduced in this paper will be compensation circuitry for both resistive and capacitive feedback networks

  14. Accounting for model error in Bayesian solutions to hydrogeophysical inverse problems using a local basis approach

    Science.gov (United States)

    Irving, J.; Koepke, C.; Elsheikh, A. H.

    2017-12-01

    Bayesian solutions to geophysical and hydrological inverse problems are dependent upon a forward process model linking subsurface parameters to measured data, which is typically assumed to be known perfectly in the inversion procedure. However, in order to make the stochastic solution of the inverse problem computationally tractable using, for example, Markov-chain-Monte-Carlo (MCMC) methods, fast approximations of the forward model are commonly employed. This introduces model error into the problem, which has the potential to significantly bias posterior statistics and hamper data integration efforts if not properly accounted for. Here, we present a new methodology for addressing the issue of model error in Bayesian solutions to hydrogeophysical inverse problems that is geared towards the common case where these errors cannot be effectively characterized globally through some parametric statistical distribution or locally based on interpolation between a small number of computed realizations. Rather than focusing on the construction of a global or local error model, we instead work towards identification of the model-error component of the residual through a projection-based approach. In this regard, pairs of approximate and detailed model runs are stored in a dictionary that grows at a specified rate during the MCMC inversion procedure. At each iteration, a local model-error basis is constructed for the current test set of model parameters using the K-nearest neighbour entries in the dictionary, which is then used to separate the model error from the other error sources before computing the likelihood of the proposed set of model parameters. We demonstrate the performance of our technique on the inversion of synthetic crosshole ground-penetrating radar traveltime data for three different subsurface parameterizations of varying complexity. The synthetic data are generated using the eikonal equation, whereas a straight-ray forward model is assumed in the inversion
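
    A compressed sketch of the projection idea as described: a dictionary stores pairs of approximate and detailed runs; for each MCMC proposal, the K nearest entries in parameter space supply a local orthonormal basis for the model error, and the residual's projection onto that basis is removed before the likelihood is computed. All dimensions and data here are synthetic placeholders.

```python
# Local model-error basis via K-nearest-neighbour dictionary entries.
import numpy as np

rng = np.random.default_rng(1)
dict_thetas = rng.normal(size=(50, 3))    # stored parameter sets
dict_errors = rng.normal(size=(50, 40))   # detailed-run minus approximate-run data

def corrected_residual(theta, residual, K=8):
    # K nearest dictionary entries in parameter space.
    idx = np.argsort(np.linalg.norm(dict_thetas - theta, axis=1))[:K]
    # Orthonormal local basis spanning the model-error component.
    U, _, _ = np.linalg.svd(dict_errors[idx].T, full_matrices=False)
    # Remove the part of the residual explained by model error.
    return residual - U @ (U.T @ residual)

theta = rng.normal(size=3)
residual = rng.normal(size=40)
r = corrected_residual(theta, residual)
print(np.linalg.norm(residual), np.linalg.norm(r))  # corrected norm is smaller
```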

  15. On minimizing the influence of the noise tail of correlation functions in operational modal analysis

    DEFF Research Database (Denmark)

    Tarpø, Marius; Olsen, Peter; Amador, Sandro

    2017-01-01

    In operational modal analysis (OMA), correlation functions are used by all classical time-domain modal identification techniques that use the impulse response function (free decays) as primary data. However, the main difference between the impulse response and the correlation functions estimated from the operational responses is that the latter present a higher noise level. This is due to statistical errors in the estimation of the correlation function, which cause random noise at the end of the function; this is called the noise tail. This noise might have significant influence on the identification results (random errors) when the noise tail is included in the identification. On the other hand, if the correlation function is truncated too much, then important information is lost. In order to minimize this error, a suitable truncation based on manual inspection of the correlation function …
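
    As an illustration of the truncation problem, the sketch below replaces manual inspection with a simple rule (an illustrative choice, not the authors' criterion): cut the correlation function at the lag beyond which its envelope stays within a few standard deviations of the noise tail.

```python
# Truncate a noisy correlation-function estimate before its noise tail.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 2000)
R = np.exp(-0.5 * t) * np.cos(2 * np.pi * 1.5 * t)  # ideal decaying correlation
R += 0.02 * rng.normal(size=t.size)                 # statistical estimation noise

noise_sd = R[-200:].std()        # noise level estimated from the far tail
below = np.abs(R) < 3 * noise_sd
# First lag after which the envelope stays below 3x the noise level.
k_trunc = next((k for k in range(len(R)) if below[k:].all()), len(R))

R_trunc = R[:k_trunc]            # keep only this part for identification
print(k_trunc, len(R))
```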

  16. Compensation for positioning error of industrial robot for flexible vision measuring system

    Science.gov (United States)

    Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui

    2013-01-01

    Positioning error of the robot is a main factor in the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Present compensation methods for positioning error, based on the kinematic model of the robot, have a significant limitation: they are not effective over the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach sets global control points in the measured field and attaches an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach sets control points on the vision sensor and places two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show that the RMS spatial positioning error is 3.422 mm with a single camera and 0.031 mm with dual cameras. The conclusion is that the algorithm of the single-camera method needs improvement for higher accuracy, while the accuracy of the dual-camera method is suitable for application.
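
    The core computation in both approaches is recovering the sensor pose from measured control points. One standard way to do this, sketched below, is an SVD-based rigid-transform fit (the Kabsch method); it is not necessarily the authors' exact algorithm.

```python
# Fit rotation R and translation t mapping control points from the sensor
# frame to the global frame: P_global ~ R @ P_sensor + t (Kabsch method).
import numpy as np

def fit_rigid_transform(P_sensor, P_global):
    cs, cg = P_sensor.mean(axis=0), P_global.mean(axis=0)
    H = (P_sensor - cs).T @ (P_global - cg)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cg - R @ cs

rng = np.random.default_rng(3)
P = rng.normal(size=(6, 3))                       # control points, sensor frame
a = 0.4
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])     # same points, global frame

R_est, t_est = fit_rigid_transform(P, Q)
print(np.allclose(R_est, R_true), np.round(t_est, 3))
```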

  17. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

    Directory of Open Access Journals (Sweden)

    Evelyn Porter

    2014-07-01

    Full Text Available Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by trainees and to identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: The results of the simulated resuscitations are described. We analyzed data from the simulated resuscitations for the occurrence of a prescribing medication error. We performed univariate analyses of each variable against the medication error rate, and a separate multiple logistic regression analysis on the significant univariate variables to assess the associations between the selected variables. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7%-39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping more than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed that only the presence of a pharmacist remained significantly associated with decreased medication error, with an odds ratio of 0.09 (95% CI 0.01-0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.

  18. Apps for Angiosperms: The Usability of Mobile Computers and Printed Field Guides for UK Wild Flower and Winter Tree Identification

    Science.gov (United States)

    Stagg, Bethan C.; Donkin, Maria E.

    2017-01-01

    We investigated usability of mobile computers and field guide books with adult botanical novices, for the identification of wildflowers and deciduous trees in winter. Identification accuracy was significantly higher for wildflowers using a mobile computer app than field guide books but significantly lower for deciduous trees. User preference…

  19. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2, or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in written production of English. The significance of the study is to know what errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are found more frequently than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it will be advantageous for teachers to know what errors students frequently make in their learning, so that they can offer solutions to the students for better English language learning achievement. DOI: https://doi.org/10.24071/llt.2015.180205

  20. Theory of mind in schizophrenia: error types and associations with symptoms.

    Science.gov (United States)

    Fretland, Ragnhild A; Andersson, Stein; Sundet, Kjetil; Andreassen, Ole A; Melle, Ingrid; Vaskinn, Anja

    2015-03-01

    Social cognition is an important determinant of functioning in schizophrenia. However, how social cognition relates to the clinical symptoms of schizophrenia is still unclear. The aim of this study was to explore the relationship between a social cognition domain, Theory of Mind (ToM), and the clinical symptoms of schizophrenia. Specifically, we investigated the associations between three ToM error types, 1) "overmentalizing", 2) "reduced ToM", and 3) "no ToM", and positive, negative, and disorganized symptoms. Fifty-two participants with a diagnosis of schizophrenia or schizoaffective disorder were assessed with the Movie for the Assessment of Social Cognition (MASC), a video-based ToM measure. An empirically validated five-factor model of the Positive and Negative Syndrome Scale (PANSS) was used to assess clinical symptoms. There was a significant, small-to-moderate association between overmentalizing and positive symptoms (rho=.28, p=.04). Disorganized symptoms correlated at a trend level with "reduced ToM" (rho=.27, p=.05). There were no other significant correlations between ToM impairments and symptom levels. Positive/disorganized symptoms did not contribute significantly to explaining total ToM performance, whereas IQ did (B=.37, p=.01). Within the undermentalizing domain, participants made more "reduced ToM" errors than "no ToM" errors. Overmentalizing was associated with positive symptoms. The undermentalizing error types were unrelated to symptoms, but "reduced ToM" was somewhat associated with disorganization. The higher number of "reduced ToM" responses suggests that schizophrenia is characterized by accuracy problems rather than a fundamental lack of a mental state concept. The findings call for the use of more sensitive measures when investigating ToM in schizophrenia, to avoid the "right/wrong ToM" dichotomy. Copyright © 2015 Elsevier B.V. All rights reserved.

  1. Music identification skills of children with specific language impairment.

    Science.gov (United States)

    Mari, Giorgia; Scorpecci, Alessandro; Reali, Laura; D'Alatri, Lucia

    2016-03-01

    To date very few studies have investigated the musical skills of children with specific language impairment (SLI). There is growing evidence that SLI affects areas other than language, and it is therefore reasonable to hypothesize that children with this disorder may have difficulties in perceiving musical stimuli appropriately. To compare melody and song identification skills in a group of children with SLI and in a control group of children with typical language development (TD); and to study possible correlations between music identification skills and language abilities in the SLI group. This is a prospective case control study. Two groups of children were enrolled: one meeting DSM-IV-TR(®) diagnostic criteria for SLI and the other comprising an age-matched group of children with TD. All children received a melody and a song identification test, together with a test battery assessing receptive and productive language abilities. 30 children with SLI (mean age = 56 ± 9 months) and 23 with TD (mean age = 60 ± 10 months) were included. Melody and song identification scores among SLI children were significantly lower than those of TD children, and in both groups song identification scores were significantly higher than melody identification scores. Song identification skills bore a significant correlation to chronological age in both groups (TD: r = 0.529, p = 0.009; SLI: r = 0.506, p = 0.004). Whereas no other variables were found explaining the variability of melody or song identification scores in either group, the correlation between language comprehension and song identification in the SLI group approached significance (r = 0.166, p = 0.076). The poorer music perception skills of SLI children as compared with TD ones suggests that SLI may also affect music perception. Therefore, training programmes that simultaneously stimulate via language and music may prove useful in the rehabilitation of children affected by SLI. © 2015 Royal College of Speech and

  2. Analytical modeling for thermal errors of motorized spindle unit

    OpenAIRE

    Liu, Teng; Gao, Weiguo; Zhang, Dawei; Zhang, Yifan; Chang, Wenfen; Liang, Cunman; Tian, Yanling

    2017-01-01

    Investigation of modeling methods for spindle thermal errors is significant for spindle thermal optimization in the design phase. To accurately analyze the thermal errors of a motorized spindle unit, this paper assumes approximately that 1) the spindle's linear thermal error in the axial direction is ascribed to shaft thermal elongation due to heat transfer from the bearings, and 2) the spindle's linear thermal errors in the radial directions and its angular thermal errors are attributed to thermal variations of the bearing relati...

  3. Significance of acceleration period in a dynamic strength testing study.

    Science.gov (United States)

    Chen, W L; Su, F C; Chou, Y L

    1994-06-01

    The acceleration period that occurs during isokinetic tests may provide valuable information regarding neuromuscular readiness to produce maximal contraction. The purpose of this study was to collect the normative data of acceleration time during isokinetic knee testing, to calculate the acceleration work (Wacc), and to determine the errors (ERexp, ERwork, ERpower) due to ignoring Wacc during explosiveness, total work, and average power measurements. Seven male and 13 female subjects attended the test by using the Cybex 325 system and electronic stroboscope machine for 10 testing speeds (30-300 degrees/sec). A three-way ANOVA was used to assess gender, direction, and speed factors on acceleration time, Wacc, and errors. The results indicated that acceleration time was significantly affected by speed and direction; Wacc and ERexp by speed, direction, and gender; and ERwork and ERpower by speed and gender. The errors appeared to increase when testing the female subjects, during the knee flexion test, or when speed increased. To increase validity in clinical testing, it is important to consider the acceleration phase effect, especially in higher velocity isokinetic testing or for weaker muscle groups.

  4. A study of redundancy management strategy for tetrad strap-down inertial systems. [error detection codes

    Science.gov (United States)

    Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.

    1979-01-01

    Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.

  5. The Nature of Error in Adolescent Student Writing

    Science.gov (United States)

    Wilcox, Kristen Campbell; Yagelski, Robert; Yu, Fang

    2014-01-01

    This study examined the nature and frequency of error in high school native English speaker (L1) and English learner (L2) writing. Four main research questions were addressed: Are there significant differences in students' error rates in English language arts (ELA) and social studies? Do the most common errors made by students differ in ELA…

  6. Extraction and identification of the hepatoprotective bio-active ...

    African Journals Online (AJOL)

    DR. NJ TONUKARI

    …deliciosa root led us to the identification of the active fraction, hereafter called 'TF'. … The data were expressed as mean ± standard error of the mean (S.E.M.; n = 6). … soluble antioxidant vitamin status in patients with long-term ileal pouch-anal …

  7. Neurochemical enhancement of conscious error awareness.

    Science.gov (United States)

    Hester, Robert; Nandam, L Sanjay; O'Connell, Redmond G; Wagner, Joe; Strudwick, Mark; Nathan, Pradeep J; Mattingley, Jason B; Bellgrove, Mark A

    2012-02-22

    How the brain monitors ongoing behavior for performance errors is a central question of cognitive neuroscience. Diminished awareness of performance errors limits the extent to which humans engage in corrective behavior and has been linked to loss of insight in a number of psychiatric syndromes (e.g., attention deficit hyperactivity disorder, drug addiction). These conditions share alterations in monoamine signaling that may influence the neural mechanisms underlying error processing, but our understanding of the neurochemical drivers of these processes is limited. We conducted a randomized, double-blind, placebo-controlled, cross-over design of the influence of methylphenidate, atomoxetine, and citalopram on error awareness in 27 healthy participants. The error awareness task, a go/no-go response inhibition paradigm, was administered to assess the influence of monoaminergic agents on performance errors during fMRI data acquisition. A single dose of methylphenidate, but not atomoxetine or citalopram, significantly improved the ability of healthy volunteers to consciously detect performance errors. Furthermore, this behavioral effect was associated with a strengthening of activation differences in the dorsal anterior cingulate cortex and inferior parietal lobe during the methylphenidate condition for errors made with versus without awareness. Our results have implications for the understanding of the neurochemical underpinnings of performance monitoring and for the pharmacological treatment of a range of disparate clinical conditions that are marked by poor awareness of errors.

  8. Influence of binary mask estimation errors on robust speaker identification

    DEFF Research Database (Denmark)

    May, Tobias

    2017-01-01

    Missing-data strategies have been developed to improve the noise-robustness of automatic speech recognition systems in adverse acoustic conditions. This is achieved by classifying time-frequency (T-F) units into reliable and unreliable components, as indicated by a so-called binary mask. Different … approaches have been proposed to handle unreliable feature components, each with distinct advantages. The direct masking (DM) approach attenuates unreliable T-F units in the spectral domain, which allows the extraction of conventionally used mel-frequency cepstral coefficients (MFCCs). Instead of attenuating … Since each of these approaches utilizes the knowledge about reliable and unreliable feature components in a different way, they will respond differently to estimation errors in the binary mask. The goal of this study was to identify the most effective strategy to exploit knowledge about reliable …

  9. Proposal of procedures to prevent errors in radiotherapy based in learned lessons of accidental expositions

    International Nuclear Information System (INIS)

    Bueno, Giselle Oliveira Vieira

    2007-01-01

    occur, such as: performing in vivo 'off-axis' dosimetry with a diode to reduce use of the wedge in the incorrect direction; using vertical distance values together with the optical distance indicator (SSD) to prevent treatment-distance errors; using DICOM-RT data transfer between the treatment planning and simulation workstations, increasing treatment efficiency and accuracy; providing manual redundancy checks of computer-performed calculations; using a computerized record-and-verify system for treatment, preventing errors in daily treatments due to incorrect selection of treatment parameters; and introducing a magnetic patient identification card with photo, identification number, institution name, department name, date of first issue, and responsible physician, to prevent identification and registration errors throughout the treatment process. In this way, these procedures can prevent further incidents in radiotherapy and increase patient safety. (author)

  10. Similarities between the target and the intruder in naturally-occurring repeated person naming errors

    Directory of Open Access Journals (Sweden)

    Serge eBredart

    2015-09-01

    Full Text Available The present study investigated an intriguing phenomenon that has received little attention so far: repeatedly calling a familiar person by someone else's name. From participants' responses to a questionnaire, these repeated naming errors were characterized with respect to a number of properties (e.g., type of names being substituted, error frequency, error longevity) and different features of similarity (e.g., age, gender, type of relationship with the participant, face resemblance, and similarity of the contexts of encounter) between the bearer of the target name and the bearer of the wrong name. Moreover, it was evaluated whether the phonological similarity between names, the participants' age, the difference in age between the two persons whose names were substituted, and face resemblance between the two persons predicted the frequency of errors. Regression analyses indicated that phonological similarity between the target name and the wrong name predicted the frequency of repeated person naming errors. The age of the participant was also a significant predictor of error frequency: the older the participant, the higher the frequency of errors. Consistent with previous research stressing the importance of the age of acquisition of words in lexical access during speech production, the results indicated that the bearer of the wrong name was, on average, known for longer than the bearer of the target name.

  11. Reliability of System Identification Techniques to Assess Standing Balance in Healthy Elderly.

    Directory of Open Access Journals (Sweden)

    Jantsje H Pasma

    Full Text Available System identification techniques have the potential to assess the contribution of the underlying systems involved in standing balance by applying well-known disturbances. We investigated the reliability of standing balance parameters obtained with multivariate closed-loop system identification techniques. In twelve healthy elderly participants, balance tests were performed twice a day during three days. Body sway was measured during two minutes of standing with eyes closed, and the Balance test Room (BalRoom) was used to apply four disturbances simultaneously: two sensory disturbances, to the proprioceptive and the visual system, and two mechanical disturbances, applied at the leg and trunk segments. Using system identification techniques, sensitivity functions of the sensory disturbances and the neuromuscular controller were estimated. Based on generalizability theory (G theory), systematic errors and sources of variability were assessed using linear mixed models, and reliability was assessed by computing indexes of dependability (ID), standard error of measurement (SEM), and minimal detectable change (MDC). A systematic error was found between the first and second trials in the sensitivity functions. No systematic error was found in the neuromuscular controller and body sway. The reliability of 15 of 25 parameters and body sway was moderate to excellent when the results of two trials on three days were averaged. To reach an excellent reliability on one day in 7 of 25 parameters, it was predicted that at least seven trials must be averaged. This study shows that system identification techniques are a promising method to assess the underlying systems involved in standing balance in elderly people. However, most of the parameters do not appear to be reliable unless a large number of trials are collected across multiple days. To reach an excellent reliability in one third of the parameters, a training session for participants is needed and at least seven trials of two …

  12. Teamwork and error in the operating room: analysis of skills and roles.

    Science.gov (United States)

    Catchpole, K; Mishra, A; Handa, A; McCulloch, P

    2008-04-01

    To analyze the effects of surgical, anesthetic, and nursing teamwork skills on technical outcomes. The value of team skills in reducing adverse events in the operating room is presently receiving considerable attention. Current work has not yet identified in detail how the teamwork and communication skills of surgeons, anesthetists, and nurses affect the course of an operation. Twenty-six laparoscopic cholecystectomies and 22 carotid endarterectomies were studied using direct observation methods. For each operation, teams' skills were scored for the whole team, and for nursing, surgical, and anesthetic subteams, on 4 dimensions (leadership and management [LM]; teamwork and cooperation; problem solving and decision making; and situation awareness). Operating time, errors in surgical technique, and other procedural problems and errors were measured as outcome parameters for each operation. The relationships between teamwork scores and these outcome parameters within each operation were examined using analysis of variance and linear regression. Surgical (F(2,42) = 3.32, P = 0.046) and anesthetic (F(2,42) = 3.26, P = 0.048) LM had significant but opposite relationships with operating time in each operation: operating time increased significantly with higher anesthetic but decreased with higher surgical LM scores. Errors in surgical technique had a strong association with surgical situation awareness (F(2,42) = 7.93, P …) … skills of the nurses (F(5,1) = 3.96, P = 0.027). Detailed analysis of team interactions and dimensions is feasible and valuable, yielding important insights into relationships between nontechnical skills, technical performance, and operative duration. These results support the concept that interventions designed to improve teamwork and communication may have beneficial effects on technical performance and patient outcome.

  13. Error Control in Distributed Node Self-Localization

    Directory of Open Access Journals (Sweden)

    Ying Zhang

    2008-03-01

    Full Text Available Location information of nodes in an ad hoc sensor network is essential to many tasks such as routing, cooperative sensing, and service delivery. Distributed node self-localization is lightweight and requires little communication overhead, but often suffers from the adverse effects of error propagation. Unlike other localization papers which focus on designing elaborate localization algorithms, this paper takes a different perspective, focusing on the error propagation problem, addressing questions such as where localization error comes from and how it propagates from node to node. To prevent error from propagating and accumulating, we develop an error-control mechanism based on characterization of node uncertainties and discrimination between neighboring nodes. The error-control mechanism uses only local knowledge and is fully decentralized. Simulation results have shown that the active selection strategy significantly mitigates the effect of error propagation for both range and directional sensors. It greatly improves localization accuracy and robustness.
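
    A minimal sketch of the discrimination idea: when localizing a node, use only neighbors whose own position uncertainty is small, so that error does not propagate through poorly localized references. The threshold, ranges, and uncertainty bookkeeping are illustrative choices.

```python
# Localize a node from low-uncertainty neighbors only (2-D multilateration).
import numpy as np

def trilaterate(anchors, ranges):
    """Linearized least squares: subtract the first range equation."""
    (x0, y0), r0 = anchors[0], ranges[0]
    A = [[2 * (x - x0), 2 * (y - y0)] for x, y in anchors[1:]]
    b = [r0**2 - r**2 + x**2 - x0**2 + y**2 - y0**2
         for (x, y), r in zip(anchors[1:], ranges[1:])]
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos

# Neighbors: (position estimate, position uncertainty in meters).
neighbors = [((0.0, 0.0), 0.1), ((10.0, 0.0), 0.2),
             ((0.0, 10.0), 2.5), ((10.0, 10.0), 0.15)]
true_pos = np.array([4.0, 6.0])

# Error control: keep only well-localized neighbors as references
# (ranges are synthesized noise-free here just for the demonstration).
kept = [(p, float(np.linalg.norm(true_pos - np.array(p))))
        for p, u in neighbors if u < 1.0]
print(trilaterate([p for p, _ in kept], [r for _, r in kept]))  # ~ [4. 6.]
```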

  14. A basic framework for the analysis of the human error potential due to the computerization in nuclear power plants

    International Nuclear Information System (INIS)

    Lee, Y. H.

    1999-01-01

    Computerization and its vivid benefits expected in nuclear power plant design cannot be realized without verifying the inherent safety problems. The human error aspect is also included in the verification issues. The verification spans from the perception of changes in operating functions, such as automation, to the unfamiliar experience of operators due to the interface change. Therefore, a new framework for human error analysis might capture both the positive and the negative effects of the computerization. This paper suggests a basic framework for error identification through a review of the existing human error studies and the experience of computerizations in nuclear power plants

  15. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    The focus in human reliability analysis (HRA) relating to nuclear power plants has traditionally been on human performance in disturbance conditions. On the other hand, some studies and incidents have shown that also maintenance errors, which have taken place earlier in plant history, may have an impact on the severity of a disturbance, e.g. if they disable safety related equipment. Especially common cause and other dependent failures of safety systems may significantly contribute to the core damage risk. The first aim of the study was to identify and give examples of multiple human errors which have penetrated the various error detection and inspection processes of plant safety barriers. Another objective was to generate numerical safety indicators to describe and forecast the effectiveness of maintenance. A more general objective was to identify needs for further development of maintenance quality and planning. In the first phase of this operational experience feedback analysis, human errors recognisable in connection with maintenance were looked for by reviewing about 4400 failure and repair reports and some special reports which cover two nuclear power plant units on the same site during 1992-94. A special effort was made to study dependent human errors since they are generally the most serious ones. An in-depth root cause analysis was made for 14 dependent errors by interviewing plant maintenance foremen and by thoroughly analysing the errors. A more simple treatment was given to maintenance-related single errors. The results were shown as a distribution of errors among operating states i.a. as regards the following matters: in what operational state the errors were committed and detected; in what operational and working condition the errors were detected, and what component and error type they were related to. These results were presented separately for single and dependent maintenance-related errors. As regards dependent errors, observations were also made

  16. A universal and reliable assay for molecular sex identification of three-spined sticklebacks (Gasterosteus aculeatus).

    Science.gov (United States)

    Toli, E-A; Calboli, F C F; Shikano, T; Merilä, J

    2016-11-01

    In heterogametic species, biological differences between the two sexes are ubiquitous, and hence, errors in sex identification can be a significant source of noise and bias in studies where sex-related sources of variation are of interest or need to be controlled for. We developed and validated a universal multimarker assay for reliable sex identification of three-spined sticklebacks (Gasterosteus aculeatus). The assay makes use of genotype scores from three sex-linked loci and utilizes Bayesian probabilistic inference to identify sex of the genotyped individuals. The results, validated with 286 phenotypically sexed individuals from six populations of sticklebacks representing all major genetic lineages (cf. Pacific, Atlantic and Japan Sea), indicate that in contrast to commonly used single-marker-based sex identification assays, the developed multimarker assay should be 100% accurate. As the markers in the assay can be scored from agarose gels, it provides a quick and cost-efficient tool for universal sex identification of three-spined sticklebacks. The general principle of combining information from multiple markers to improve the reliability of sex identification is transferable and can be utilized to develop and validate similar assays for other species. © 2016 John Wiley & Sons Ltd.
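
    A small sketch of the multimarker inference: genotype likelihoods from the three sex-linked loci are multiplied with a prior and normalized into a posterior over sex. The likelihood tables below are invented placeholders, not the published marker frequencies.

```python
# Bayesian sex assignment from three sex-linked loci.
prior = {"male": 0.5, "female": 0.5}

# P(genotype | sex) per locus, scored as heterozygous vs homozygous.
lik = [
    {"male": {"het": 0.95, "hom": 0.05}, "female": {"het": 0.02, "hom": 0.98}},
    {"male": {"het": 0.90, "hom": 0.10}, "female": {"het": 0.05, "hom": 0.95}},
    {"male": {"het": 0.97, "hom": 0.03}, "female": {"het": 0.03, "hom": 0.97}},
]

def posterior_sex(genotypes):
    """genotypes: one 'het'/'hom' call per locus -> P(sex | data)."""
    post = {}
    for sex in prior:
        p = prior[sex]
        for locus, g in zip(lik, genotypes):
            p *= locus[sex][g]
        post[sex] = p
    z = sum(post.values())
    return {s: p / z for s, p in post.items()}

# One discordant locus no longer flips the call, unlike a single-marker assay.
print(posterior_sex(["het", "het", "hom"]))   # ~96% male
```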

  17. PERFORMANCE OF OPPORTUNISTIC SPECTRUM ACCESS WITH SENSING ERROR IN COGNITIVE RADIO AD HOC NETWORKS

    Directory of Open Access Journals (Sweden)

    N. ARMI

    2012-04-01

    Full Text Available Sensing in opportunistic spectrum access (OSA) is responsible for detecting channel availability by performing a binary hypothesis test: busy or idle. If the channel is busy, the secondary user (SU) cannot access it and must refrain from data transmission. The SU is allowed access when the primary user (PU) is not using it (idle state). However, the channel is sensed over an imperfect communication link: fading, noise, and obstacles can cause sensing errors in PU signal detection. A false alarm detects an idle state as a busy channel, while miss-identification detects a busy state as an idle channel. False detection makes the SU refrain from transmission and reduces the number of bits transmitted; miss-identification causes the SU to collide with the PU transmission. This paper studies the performance of OSA based on the greedy approach with sensing errors, under a restriction on the maximum collision probability allowed (collision threshold) by the PU network. SU throughput and a spectrum capacity metric are used to evaluate OSA performance and to make comparisons with the error-free case, as a function of slot number under the greedy approach. The relations between throughput and signal-to-noise ratio (SNR) for different collision probabilities, as well as between false detection and SNR, are presented. The obtained results show that CR users can gain the reward from the previous slot both with and without sensing errors, as indicated by the throughput improvement as the slot number increases. However, sensing on an imperfect channel with sensing errors degrades throughput performance. Moreover, SU throughput and spectrum capacity improve as the maximum collision probability allowed by the PU network increases; due to frequent collisions with the PU, however, they decrease beyond a certain collision threshold. Computer simulation is used to evaluate and validate this work.
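
    A textbook-style sketch of how the two sensing-error types enter the per-slot performance: false alarms waste truly idle slots, while miss-identifications create collisions with the PU. This is a simplified approximation, not the paper's greedy-access model.

```python
# Per-slot SU throughput and collision probability under sensing errors.
def su_performance(p_idle, p_fa, p_md, rate=1.0):
    throughput = p_idle * (1 - p_fa) * rate   # idle slot, correctly sensed idle
    collision = (1 - p_idle) * p_md           # busy slot, wrongly sensed idle
    return throughput, collision

for p_fa, p_md in [(0.0, 0.0), (0.1, 0.05), (0.2, 0.10)]:
    thr, col = su_performance(p_idle=0.7, p_fa=p_fa, p_md=p_md)
    print(f"P_fa={p_fa:.2f} P_md={p_md:.2f} -> "
          f"throughput={thr:.3f}, collision={col:.3f}")
```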

  18. Volterra Filtering for ADC Error Correction

    Directory of Open Access Journals (Sweden)

    J. Saliga

    2001-09-01

    Full Text Available Dynamic non-linearity of analog-to-digital converters (ADCs) contributes significantly to the distortion of digitized signals. This paper introduces a new, effective method for compensating for such distortion based on the application of Volterra filtering. Considering an a priori error model of the ADC allows an efficient inverse Volterra model to be found for error correction. The efficiency of the proposed method is demonstrated on experimental results.
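
    One common way to realize such an inverse Volterra corrector is a memory-polynomial model fitted by least squares against a reference signal. The distortion model, polynomial order, and memory depth below are illustrative assumptions, not the paper's a priori ADC error model.

```python
# Post-correct ADC samples with a small memory-polynomial (Volterra) model.
import numpy as np

rng = np.random.default_rng(4)
x = np.sin(2 * np.pi * 0.013 * np.arange(4000))   # reference ("true") signal
y = x + 0.05 * x**2 + 0.02 * x**3                 # static nonlinearity
y = y + 0.01 * np.roll(y, 1) * y                  # weak memory effect

def features(sig, taps=2, order=3):
    """Columns sig[n-m]^k for m < taps, 1 <= k <= order."""
    return np.column_stack([np.roll(sig, m) ** k
                            for m in range(taps) for k in range(1, order + 1)])

Phi = features(y)
coef, *_ = np.linalg.lstsq(Phi, x, rcond=None)    # fit the inverse model
x_hat = Phi @ coef                                # corrected samples

print(np.std(y - x), np.std(x_hat - x))           # residual error shrinks
```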

  19. Reduced phase error through optimized control of a superconducting qubit

    International Nuclear Information System (INIS)

    Lucero, Erik; Kelly, Julian; Bialczak, Radoslaw C.; Lenander, Mike; Mariantoni, Matteo; Neeley, Matthew; O'Connell, A. D.; Sank, Daniel; Wang, H.; Weides, Martin; Wenner, James; Cleland, A. N.; Martinis, John M.; Yamamoto, Tsuyoshi

    2010-01-01

    Minimizing phase and other errors in experimental quantum gates allows higher fidelity quantum processing. To quantify and correct for phase errors, in particular, we have developed an experimental metrology - amplified phase error (APE) pulses - that amplifies and helps identify phase errors in general multilevel qubit architectures. In order to correct for both phase and amplitude errors specific to virtual transitions and leakage outside of the qubit manifold, we implement 'half derivative', an experimental simplification of derivative reduction by adiabatic gate (DRAG) control theory. The phase errors are lowered by about a factor of five using this method to ∼1.6 deg. per gate, and can be tuned to zero. Leakage outside the qubit manifold, to the qubit |2> state, is also reduced to ∼10 -4 for 20% faster gates.

  20. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error [image omitted] attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  1. Application of identification techniques to remote manipulator system flight data

    Science.gov (United States)

    Shepard, G. D.; Lepanto, J. A.; Metzinger, R. W.; Fogel, E.

    1983-01-01

    This paper addresses the application of identification techniques to flight data from the Space Shuttle Remote Manipulator System (RMS). A description of the remote manipulator, including structural and control system characteristics, sensors, and actuators is given. A brief overview of system identification procedures is presented, and the practical aspects of implementing system identification algorithms are discussed. In particular, the problems posed by desampling rate, numerical error, and system nonlinearities are considered. Simulation predictions of damping, frequency, and system order are compared with values identified from flight data to support an evaluation of RMS structural and control system models. Finally, conclusions are drawn regarding the application of identification techniques to flight data obtained from a flexible space structure.

  2. Error and discrepancy in radiology: inevitable or avoidable?

    OpenAIRE

    Brady, Adrian P.

    2016-01-01

    Abstract Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3–5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms 'error' and 'discrepancy' and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and …

  3. Error estimation and adaptivity for incompressible hyperelasticity

    KAUST Repository

    Whiteley, J.P.

    2014-04-30

    SUMMARY: A Galerkin FEM is developed for nonlinear, incompressible (hyper) elasticity that takes account of nonlinearities in both the strain tensor and the relationship between the strain tensor and the stress tensor. By using suitably defined linearised dual problems with appropriate boundary conditions, a posteriori error estimates are then derived for both linear functionals of the solution and linear functionals of the stress on a boundary, where Dirichlet boundary conditions are applied. A second, higher order method for calculating a linear functional of the stress on a Dirichlet boundary is also presented together with an a posteriori error estimator for this approach. An implementation for a 2D model problem with known solution, where the entries of the strain tensor exhibit large, rapid variations, demonstrates the accuracy and sharpness of the error estimators. Finally, using a selection of model problems, the a posteriori error estimate is shown to provide a basis for effective mesh adaptivity. © 2014 John Wiley & Sons, Ltd.

  4. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, and two-dimensional errors; a bibliography is also included. Appendices address significant figures in measurement; basic concepts of probability and the normal probability curve; writing a sample specification for a procedure; classification, standards of accuracy, and general specifications of geodetic control surveys; the geoid; the frequency distribution curve; and the computer and calculator solution of problems

  5. Robust model identification applied to type 1 diabetes

    DEFF Research Database (Denmark)

    Finan, Daniel Aaron; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad

    2010-01-01

    In many realistic applications, process noise is known to be neither white nor normally distributed. When identifying models in these cases, it may be more effective to minimize a different penalty function than the standard sum of squared errors (as in a least-squares identification method). This …

  6. Analogue particle identifier and test unit for automatic measuring of errors

    International Nuclear Information System (INIS)

    Boden, A.; Lauch, J.

    1979-04-01

    A high accuracy analogue particle identifier is described. The unit is used for particle identification or data correction of experimental based errors in magnetic spectrometers. Signals which are proportional to the energy, the time-of-flight or the position of absorption of the particles are supplied to an analogue computation circuit (multifunction converter). Three computation functions are available for different applications. The output of the identifier produces correction signals or pulses whose amplitudes are proportional to the mass of the particles. Particle identification and data correction can be optimized by the adjustment of variable parameters. An automatic test unit has been developed for adjustment and routine checking of particle identifiers. The computation functions can be tested by this unit with an accuracy of 1%. (orig.) [de

  7. Performance of an optical identification and interrogation system

    Science.gov (United States)

    Venugopalan, A.; Ghosh, A. K.; Verma, P.; Cheng, S.

    2008-04-01

    A free space optics based identification and interrogation system has been designed. The applications of the proposed system lie primarily in areas which require a secure means of mutual identification and information exchange between optical readers and tags. Conventional RFIDs raise issues regarding security threats, electromagnetic interference and health safety. The security of RF-ID chips is low due to the wide spatial spread of radio waves: malicious nodes can read data being transmitted on the network if they are in the receiving range. The proposed system provides an alternative which utilizes the narrow paraxial beams of lasers and an RSA-based authentication scheme. These provide enhanced security to communication between a tag and the base station or reader. The optical reader can also perform remote identification, and the tag can be read from a far-off distance, given line of sight. The free space optical identification and interrogation system can be used for inventory management, security systems at airports, port security, and communication with high-security systems, among other applications. The proposed system was implemented with low-cost, off-the-shelf components and its performance in terms of throughput and bit error rate has been measured and analyzed. The range of operation with a bit-error-rate lower than 10⁻⁹ was measured to be about 4.5 m. The security of the system is based on the strengths of the RSA encryption scheme implemented using more than 1024 bits.

  8. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DEFF Research Database (Denmark)

    Kertzscher Schwencke, Gustavo Adolfo Vladimir; Andersen, Claus E.; Tanderup, Kari

    2014-01-01

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT) where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most ... and the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance in decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  9. Internal Error Propagation in Explicit Runge-Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge-Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
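
    For the linear test problem, the kind of analysis described can be sketched as follows (generic notation for the usual internal-stability framework; not the paper's exact formulas):

```latex
% Internal error propagation in an s-stage explicit RK step for y' = \lambda y,
% z = h\lambda: each stage value is computed with a perturbation \epsilon_i
% (roundoff, algebraic solver error, ...).
\tilde{y}_i = u^n + z \sum_{j<i} a_{ij}\, \tilde{y}_j + \epsilon_i, \qquad
\tilde{u}^{n+1} = R(z)\, u^n + \sum_{j=1}^{s} Q_j(z)\, \epsilon_j
% R(z) is the stability polynomial; the internal stability polynomials Q_j(z)
% measure how strongly the stage perturbations are amplified within one step.
```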

  10. Identification of subgroups of patients with tension type headache with higher widespread pressure pain hyperalgesia.

    Science.gov (United States)

    Fernández-de-Las-Peñas, César; Benito-González, Elena; Palacios-Ceña, María; Wang, Kelun; Castaldo, Matteo; Arendt-Nielsen, Lars

    2017-12-01

    Identification of subgroups of patients with different levels of sensitization and clinical features can help to identify groups at risk and support the development of better therapeutic strategies. The aim of this study was to identify subgroups of patients with tension type headache (TTH) with different levels of sensitization, clinical pain features, and psychological outcomes. A total of 197 individuals with TTH participated. Headache intensity, frequency, and duration and medication intake were collected with a 4-week diary. Pressure pain thresholds were assessed bilaterally over the temporalis muscle, C5-C6 joint, second metacarpal and tibialis anterior muscle to determine widespread pressure pain hyperalgesia. The Hospital Anxiety and Depression Scale assessed anxiety and depression. The State-Trait Anxiety Inventory evaluated the state and trait levels of anxiety. The Headache Disability Inventory evaluated the burden of headache. Health-related quality of life was determined with the SF-36 questionnaire. Groups were considered as positive (three or more criteria) or negative (less than three criteria) on a clinical prediction rule: headache duration … The positive group comprised patients with TTH with higher sensitization, higher chronicity of headaches and worse quality of life, but lower frequency and duration of headache episodes. This subgroup of individuals with TTH may need particular attention and specific therapeutic programs for avoiding potential chronification.

  11. A Systematic Approach for Identifying Level-1 Error Covariance Structures in Latent Growth Modeling

    Science.gov (United States)

    Ding, Cherng G.; Jane, Ten-Der; Wu, Chiu-Hui; Lin, Hang-Rung; Shen, Chih-Kang

    2017-01-01

    It has been pointed out in the literature that misspecification of the level-1 error covariance structure in latent growth modeling (LGM) has detrimental impacts on the inferences about growth parameters. Since correct covariance structure is difficult to specify by theory, the identification needs to rely on a specification search, which,…

  12. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Moreover, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  13. Errors as a Means of Reducing Impulsive Food Choice.

    Science.gov (United States)

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-06-05

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.

  14. Commission errors of active intentions: the roles of aging, cognitive load, and practice.

    Science.gov (United States)

    Boywitt, C Dennis; Rummel, Jan; Meiser, Thorsten

    2015-01-01

    Performing an intended action when it needs to be withheld, for example, when temporarily prescribed medication is incompatible with the other medication, is referred to as commission errors of prospective memory (PM). While recent research indicates that older adults are especially prone to commission errors for finished intentions, there is a lack of research on the effects of aging on commission errors for still active intentions. The present research investigates conditions which might contribute to older adults' propensity to perform planned intentions under inappropriate conditions. Specifically, disproportionally higher rates of commission errors for still active intentions were observed in older than in younger adults with both salient (Experiment 1) and non-salient (Experiment 2) target cues. Practicing the PM task in Experiment 2, however, helped execution of the intended action in terms of higher PM performance at faster ongoing-task response times but did not increase the rate of commission errors. The results have important implications for the understanding of older adults' PM commission errors and the processes involved in these errors.

  15. Relating physician's workload with errors during radiation therapy planning.

    Science.gov (United States)

    Mazur, Lukasz M; Mosaly, Prithima R; Hoyle, Lesley M; Jones, Ellen L; Chera, Bhishamjit S; Marks, Lawrence B

    2014-01-01

    To relate subjective workload (WL) levels to errors for routine clinical tasks. Nine physicians (4 faculty and 5 residents) each performed 3 radiation therapy planning cases. The WL levels were subjectively assessed using National Aeronautics and Space Administration Task Load Index (NASA-TLX). Individual performance was assessed objectively based on the severity grade of errors. The relationship between the WL and performance was assessed via ordinal logistic regression. There was an increased rate of severity grade of errors with increasing WL (P value = .02). As the majority of the higher NASA-TLX scores, and the majority of the performance errors were in the residents, our findings are likely most pertinent to radiation oncology centers with training programs. WL levels may be an important factor contributing to errors during radiation therapy planning tasks. Published by Elsevier Inc.
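
    The kind of ordinal regression described can be sketched as follows. The data are simulated and all names are ours; the OrderedModel class from statsmodels (an assumption about the available tooling) stands in for whatever implementation the authors used:

```python
# Ordinal logistic regression: does a higher NASA-TLX workload score predict
# a higher (ordinal) severity grade of planning errors?
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(1)
tlx = rng.uniform(20, 90, size=27)                 # 9 physicians x 3 cases
latent = 0.05 * tlx + rng.logistic(size=tlx.size)  # hypothetical latent severity
severity = pd.Series(pd.cut(latent, bins=[-np.inf, 2.0, 3.5, np.inf],
                            labels=["minor", "moderate", "severe"]))

model = OrderedModel(severity, tlx.reshape(-1, 1), distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())                            # positive slope: severity rises with WL
```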

  16. Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES

    Science.gov (United States)

    Sarkar, B.; Bhunia, C. T.; Maulik, U.

    2012-06-01

    The advanced encryption standard (AES) is a great research challenge. It was developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has the significant advantage over the second of correcting any error deterministically, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that speeds up the process of reliable encryption and hence secured communication.

  17. Patient and tissue identification in the assisted reproductive technology laboratory.

    Science.gov (United States)

    Pomeroy, Kimball O; Racowsky, Catherine

    2012-06-01

    Several high-profile cases involving in vitro fertilization have recently received considerable media attention and highlight the importance of assuring patient and tissue identification. Within the assisted reproductive technology (ART) laboratory, there are many steps where wrong patient or tissue identity could have drastic results. Erroneous identity can result in tragic consequences for the patient, the laboratory, and for those working in the program as a whole. Such errors can result in enormous psychological and financial costs, as well as a loss in confidence. There are several critical steps that should be taken every single time and for each specific procedure performed in the ART laboratory to ensure the correct identification of patients and their tissue. These steps should be detailed in protocols that include the method of identification, the two unique identifiers that will be used, the sources of these identifiers, and often a system in which more than one person is involved in the identification. Each protocol should ideally include a checklist that is actively used for the implementation of each procedure. The protocol should also indicate what to do if the identification does not match up, including rapid handling and notification of the patient involved in the error. All ART laboratories should instill in their employees an atmosphere of full and open disclosure for cases where mistakes are made.

  18. Fixturing error measurement and analysis using CMMs

    International Nuclear Information System (INIS)

    Wang, Y; Chen, X; Gindy, N

    2005-01-01

    The influence of the fixture on the errors of a machined surface can be very significant. The machined surface errors generated during machining can be measured using a coordinate measurement machine (CMM) through the displacements of three coordinate systems on a fixture-workpiece pair in relation to the deviation of the machined surface. The surface errors consist of component movement, component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study.

  19. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.
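
    For orientation, the prototypical nice error basis is the single-qubit Pauli basis (a textbook example, not drawn from the report itself):

```latex
% A nice error basis for d = 2: unitaries indexed by the group
% G = Z_2 x Z_2, with U_{(0,0)} = I and tr(U_g^\dagger U_h) = 2 \delta_{g,h}:
U_{(a,b)} = X^a Z^b, \quad
X = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \quad
Z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}
% Closure up to phase: U_g U_h = \omega(g,h)\, U_{g+h} with \omega(g,h) = \pm 1,
% since Z X = -X Z.
```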

  20. Advanced error-prediction LDPC with temperature compensation for highly reliable SSDs

    Science.gov (United States)

    Tokutomi, Tsukasa; Tanakamaru, Shuhei; Iwasaki, Tomoko Ogura; Takeuchi, Ken

    2015-09-01

    To improve the reliability of NAND Flash memory based solid-state drives (SSDs), error-prediction LDPC (EP-LDPC) has been proposed for multi-level-cell (MLC) NAND Flash memory (Tanakamaru et al., 2012, 2013), which is effective for long retention times. However, EP-LDPC is not as effective for triple-level cell (TLC) NAND Flash memory, because TLC NAND Flash has higher error rates and is more sensitive to program-disturb error. Therefore, advanced error-prediction LDPC (AEP-LDPC) has been proposed for TLC NAND Flash memory (Tokutomi et al., 2014). AEP-LDPC can correct errors more accurately by precisely describing the error phenomena. In this paper, the effects of AEP-LDPC are investigated in a 2×nm TLC NAND Flash memory with temperature characterization. Compared with LDPC-with-BER-only, the SSD's data-retention time is increased by 3.4× and 9.5× at room-temperature (RT) and 85 °C, respectively. Similarly, the acceptable BER is increased by 1.8× and 2.3×, respectively. Moreover, AEP-LDPC can correct errors with pre-determined tables made at higher temperatures to shorten the measurement time before shipping. Furthermore, it is found that one table can cover behavior over a range of temperatures in AEP-LDPC. As a result, the total table size can be reduced to 777 kBytes, which makes this approach more practical.

  1. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Simulation experiments performed while solving multidisciplinary engineering and scientific problems require joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange is usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement to the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next level subtasks. The cases where workflow behavior may be different, depending on the user's purposes, when an error takes place, and the possible error handling options that can be specified by the user, are also noted in the work.

  2. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    Science.gov (United States)

    Najat, Dereen

    2017-01-01

    Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The proportion of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes at Sulaimani

  3. Prevalence of Pre-Analytical Errors in Clinical Chemistry Diagnostic Labs in Sulaimani City of Iraqi Kurdistan.

    Directory of Open Access Journals (Sweden)

    Dereen Najat

    Laboratory testing is roughly divided into three phases: a pre-analytical phase, an analytical phase and a post-analytical phase. Most analytical errors have been attributed to the analytical phase. However, recent studies have shown that up to 70% of analytical errors reflect the pre-analytical phase. The pre-analytical phase comprises all processes from the time a laboratory request is made by a physician until the specimen is analyzed at the lab. Generally, the pre-analytical phase includes patient preparation, specimen transportation, specimen collection and storage. In the present study, we report the first comprehensive assessment of the frequency and types of pre-analytical errors at the Sulaimani diagnostic labs in Iraqi Kurdistan. Over 2 months, 5500 venous blood samples were observed in 10 public diagnostic labs of Sulaimani City. The percentages of rejected samples and types of sample inappropriateness were evaluated. The percentage of each of the following pre-analytical errors was recorded: delay in sample transportation, clotted samples, expired reagents, hemolyzed samples, samples not on ice, incorrect sample identification, insufficient sample, tube broken in centrifuge, request procedure errors, sample mix-ups, communication conflicts, misinterpreted orders, lipemic samples, contaminated samples and missed physician's request orders. The difference between the relative frequencies of errors observed in the hospitals considered was tested using a proportional Z test. In particular, the survey aimed to discover whether analytical errors were recorded and to examine the types of platforms used in the selected diagnostic labs. The analysis showed a high prevalence of improper sample handling during the pre-analytical phase. The proportion of inappropriate samples was as high as 39%. The major reasons for rejection were hemolyzed samples (9%), incorrect sample identification (8%) and clotted samples (6%). Most quality control schemes
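
    The proportional Z test mentioned here can be reproduced with standard tooling; the sketch below uses invented counts and assumes statsmodels' proportions_ztest:

```python
# Two-proportion z-test: compare the sample-rejection rate between two labs.
from statsmodels.stats.proportion import proportions_ztest

rejected = [48, 31]    # rejected samples in lab A and lab B (made-up counts)
observed = [550, 540]  # total samples observed in each lab
stat, pvalue = proportions_ztest(count=rejected, nobs=observed)
print(f"z = {stat:.2f}, p = {pvalue:.4f}")
```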

  4. The refractive index in electron microscopy and the errors of its approximations

    Energy Technology Data Exchange (ETDEWEB)

    Lentzen, M.

    2017-05-15

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. - Highlights: • The standard model for the refractive index in electron microscopy is investigated. • The error of the standard model is proportional to the electric potential squared. • Relativistically correct error terms are derived from the energy-momentum relation. • The errors are assessed for Coulomb scattering varying energy and atomic number. • Errors of scattering cross-sections are pronounced at large angles and attain 10%.

  5. The refractive index in electron microscopy and the errors of its approximations

    International Nuclear Information System (INIS)

    Lentzen, M.

    2017-01-01

    In numerical calculations for electron diffraction often a simplified form of the electron-optical refractive index, linear in the electric potential, is used. In recent years improved calculation schemes have been proposed, aiming at higher accuracy by including higher-order terms of the electric potential. These schemes start from the relativistically corrected Schrödinger equation, and use a second simplified form, now for the refractive index squared, being linear in the electric potential. The second and higher-order corrections thus determined have, however, a large error, compared to those derived from the relativistically correct refractive index. The impact of the two simplifications on electron diffraction calculations is assessed through numerical comparison of the refractive index at high-angle Coulomb scattering and of cross-sections for a wide range of scattering angles, kinetic energies, and atomic numbers. - Highlights: • The standard model for the refractive index in electron microscopy is investigated. • The error of the standard model is proportional to the electric potential squared. • Relativistically correct error terms are derived from the energy-momentum relation. • The errors are assessed for Coulomb scattering varying energy and atomic number. • Errors of scattering cross-sections are pronounced at large angles and attain 10%.

  6. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  7. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
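
    The two schemes can be mimicked numerically. The toy sketch below assumes the observable shifts linearly in the systematic parameters; the sensitivities and noise level are invented:

```python
# Toy comparison of unisim vs. multisim estimation of a total systematic variance.
import numpy as np

rng = np.random.default_rng(42)
c = np.array([0.5, -0.3, 0.8, 0.2])  # true linear sensitivities (unknown in practice)
mc_noise = 0.3                       # statistical error of a single MC run
true_var = np.sum(c**2)              # total systematic variance to be estimated

# Unisim: one MC run per parameter, each varied by +1 sigma; sum squared shifts.
shifts = c + rng.normal(0.0, mc_noise, size=c.size)
unisim_var = np.sum(shifts**2)       # biased upward by the MC noise of each run

# Multisim: every run varies all parameters at once; the spread of the observed
# shifts estimates the total systematic variance directly.
n_runs = 100
s = rng.normal(size=(n_runs, c.size))
deltas = s @ c + rng.normal(0.0, mc_noise, size=n_runs)
multisim_var = np.var(deltas, ddof=1) - mc_noise**2  # subtract known MC noise

print(f"true {true_var:.3f}  unisim {unisim_var:.3f}  multisim {multisim_var:.3f}")
```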

  8. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. Decision variables of the linked ANN-Optimization model contain the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. The formulation of the objective function requires the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and the lag time is obtained as the output. Performance of the proposed model is evaluated for the two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration
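
    The objective function described can be written schematically as follows (our notation, under the least-squares formulation the abstract implies):

```latex
% Source identification as least-squares inversion: find the source location
% x_s, strength q, and release period (t_0, t_1) minimizing the misfit between
% observed and simulated concentrations over wells k and sampling times t:
\min_{x_s,\; q,\; t_0,\; t_1} \;
  \sum_{k=1}^{N_{\mathrm{wells}}} \sum_{t=1}^{N_{\mathrm{times}}}
  \left[ C^{\mathrm{obs}}_{k,t} - C^{\mathrm{sim}}_{k,t}(x_s, q, t_0, t_1) \right]^2
% The trained ANN supplies the unknown lag time that fixes the simulation's
% time origin before this objective is evaluated.
```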

  9. Odor identification: perceptual and semantic dimensions.

    Science.gov (United States)

    Cain, W S; de Wijk, R; Lulejian, C; Schiet, F; See, L C

    1998-06-01

    Five studies explored identification of odors as an aspect of semantic memory. All dealt in one way or another with the accessibility of acquired olfactory information. The first study examined stability and showed that, consistent with personal reports, people can fail to identify an odor one day yet succeed another. Failure turned more commonly to success than vice versa, and once success occurred it tended to recur. Confidence ratings implied that subjects generally knew the quality of their answers. Even incorrect names, though, often carried considerable information which sometimes reflected a semantic and sometimes a perceptual source of errors. The second study showed that profiling odors via the American Society of Testing and Materials list of attributes, an exercise in depth of processing, effected no increment in the identifiability/accessibility beyond an unelaborated second attempt at retrieval. The third study showed that subjects had only a weak ability to predict the relative recognizability of odors they had failed to identify. Whereas the strength of the feeling that they would 'know' an answer if offered choices did not associate significantly with performance for odors, it did for trivia questions. The fourth study demonstrated an association between ability to discriminate among one set of odors and to identify another, but this emerged only after subjects had received feedback about identity, which essentially changed the task to one of recognition and effectively stabilized access. The fifth study illustrated that feedback improves performance dramatically only for odors involved with it, but that mere retrieval leads to some improvement. The studies suggest a research agenda that could include supplemental use of confidence judgments both retrospectively and prospectively in the same subjects to indicate the amount of accessible semantic information; use of second and third guesses to examine subjects' simultaneously held hypotheses about

  10. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  11. Technological Advancements and Error Rates in Radiation Therapy Delivery

    International Nuclear Information System (INIS)

    Margalit, Danielle N.; Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K.

    2011-01-01

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  12. Sequential blind identification of underdetermined mixtures using a novel deflation scheme.

    Science.gov (United States)

    Zhang, Mingjian; Yu, Simin; Wei, Gang

    2013-09-01

    In this brief, we consider the problem of blind identification in underdetermined instantaneous mixture cases, where there are more sources than sensors. A new blind identification algorithm, which estimates the mixing matrix in a sequential fashion, is proposed. By using the rank-1 detecting device, blind identification is reformulated as a constrained optimization problem. The identification of one column of the mixing matrix hence reduces to an optimization task for which an efficient iterative algorithm is proposed. The identification of the other columns of the mixing matrix is then carried out by a generalized eigenvalue decomposition-based deflation method. The key merit of the proposed deflation method is that it does not suffer from error accumulation. The proposed sequential blind identification algorithm provides more flexibility and better robustness than its simultaneous counterpart. Comparative simulation results demonstrate the superior performance of the proposed algorithm over the simultaneous blind identification algorithm.

  13. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.

  14. Burnout, engagement and resident physicians' self-reported errors.

    Science.gov (United States)

    Prins, J T; van der Heijden, F M M A; Hoekstra-Weebers, J E H M; Bakker, A B; van de Wiel, H B M; Jacobs, B; Gazendam-Donofrio, S M

    2009-12-01

    Burnout is a work-related syndrome that may negatively affect more than just the resident physician. On the other hand, engagement has been shown to protect employees; it may also positively affect the patient care that the residents provide. Little is known about the relationship between residents' self-reported errors and burnout and engagement. In our national study that included all residents and physicians in The Netherlands, 2115 questionnaires were returned (response rate 41.1%). The residents reported on burnout (Maslach Burnout Inventory-Health and Social Services), engagement (Utrecht Work Engagement Scale) and self-assessed patient care practices (six items, two factors: errors in action/judgment, errors due to lack of time). Ninety-four percent of the residents reported making one or more mistakes without negative consequences for the patient during their training. Seventy-one percent reported performing procedures for which they did not feel properly trained. More than half (56%) of the residents stated they had made a mistake with a negative consequence. Seventy-six percent felt they had fallen short in the quality of care they provided on at least one occasion. Men reported more errors in action/judgment than women. Significant effects of specialty and clinical setting were found on both types of errors. Residents with burnout reported significantly more errors, while engaged residents reported fewer errors. These findings underline the importance of preventing burnout and of keeping residents engaged in their work.

  15. An Investigation into Soft Error Detection Efficiency at Operating System Level

    OpenAIRE

    Asghari, Seyyed Amir; Kaynak, Okyay; Taheri, Hassan

    2014-01-01

    Electronic equipment operating in harsh environments such as space is subjected to a range of threats. The most important of these is radiation that gives rise to permanent and transient errors on microelectronic components. The occurrence rate of transient errors is significantly more than permanent errors. The transient errors, or soft errors, emerge in two formats: control flow errors (CFEs) and data errors. Valuable research results have already appeared in literature at hardware and soft...

  16. Field experiments on eyewitness identification: towards a better understanding of pitfalls and prospects.

    Science.gov (United States)

    Wells, Gary L

    2008-02-01

    The Illinois pilot program on lineup procedures has helped sharpen the focus on the types of controls that are needed in eyewitness field experiments and the limits that exist for interpreting outcome measures (rates of suspect and filler identifications). A widely-known limitation of field experiments is that, unlike simulated crime experiments, the guilt or innocence of the suspects is not easily known independently of the behavior of the eyewitnesses. Less well appreciated is that the rate of identification of lineup fillers, although clearly errors, can be a misleading measure if the filler identification rate is used to assess which of two or more lineup procedures is the better procedure. Several examples are used to illustrate that there are clearly improper procedures that would yield fewer identifications of fillers than would their proper counterparts. For example, biased lineup structure (e.g., using poorly matched fillers) as well as suggestive lineup procedures (that can result from non-blind administration of lineups) would reduce filler identification errors compared to unbiased and non-suggestive procedures. Hence, under many circumstances filler identification rates can be misleading indicators of preferred methods. Comparisons of lineup procedures in future field experiments will not be easily accepted in the absence of double-blind administration methods in all conditions plus true random assignment to conditions.

  17. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into nanoscale regime, it is impossible to guarantee the perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...
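
    As a minimal concrete instance of the ECC machinery such links rely on, the classic single-error-correcting Hamming(7,4) code can be implemented in a few lines (our sketch; the book's codes, such as Hamming product codes with type-II HARQ, are considerably more elaborate):

```python
# Hamming(7,4): encode 4 data bits with 3 parity bits; any single-bit error
# on the link can then be located and corrected at the receiving router.
import numpy as np

G = np.array([[1,1,0,1],[1,0,1,1],[1,0,0,0],[0,1,1,1],
              [0,1,0,0],[0,0,1,0],[0,0,0,1]])                     # 7x4 generator
H = np.array([[1,0,1,0,1,0,1],[0,1,1,0,0,1,1],[0,0,0,1,1,1,1]])   # 3x7 parity checks

def encode(data4):
    return (G @ data4) % 2

def correct(word7):
    syndrome = (H @ word7) % 2
    pos = int("".join(map(str, syndrome[::-1])), 2)   # 1-based error position
    if pos:
        word7 = word7.copy()
        word7[pos - 1] ^= 1                           # flip the faulty bit
    return word7

sent = encode(np.array([1, 0, 1, 1]))
received = sent.copy()
received[2] ^= 1                                      # single-bit link error
assert np.array_equal(correct(received), sent)
```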

  18. Parental Cognitive Errors Mediate Parental Psychopathology and Ratings of Child Inattention.

    Science.gov (United States)

    Haack, Lauren M; Jiang, Yuan; Delucchi, Kevin; Kaiser, Nina; McBurnett, Keith; Hinshaw, Stephen; Pfiffner, Linda

    2017-09-01

    We investigate the Depression-Distortion Hypothesis in a sample of 199 school-aged children with ADHD-Predominantly Inattentive presentation (ADHD-I) by examining relations and cross-sectional mediational pathways between parental characteristics (i.e., levels of parental depressive and ADHD symptoms) and parental ratings of child problem behavior (inattention, sluggish cognitive tempo, and functional impairment) via parental cognitive errors. Results demonstrated a positive association between parental factors and parental ratings of inattention, as well as a mediational pathway between parental depressive and ADHD symptoms and parental ratings of inattention via parental cognitive errors. Specifically, higher levels of parental depressive and ADHD symptoms predicted higher levels of cognitive errors, which in turn predicted higher parental ratings of inattention. Findings provide evidence for core tenets of the Depression-Distortion Hypothesis, which state that parents with high rates of psychopathology hold negative schemas for their child's behavior and subsequently, report their child's behavior as more severe. © 2016 Family Process Institute.

  19. Processing graded feedback: electrophysiological correlates of learning from small and large errors.

    Science.gov (United States)

    Luft, Caroline Di Bernardi; Takase, Emilio; Bhattacharya, Joydeep

    2014-05-01

    Feedback processing is important for learning and therefore may affect the consolidation of skills. Considerable research demonstrates electrophysiological differences between correct and incorrect feedback, but how we learn from small versus large errors is usually overlooked. This study investigated electrophysiological differences when processing small or large error feedback during a time estimation task. Data from high-learners and low-learners were analyzed separately. In both high- and low-learners, large error feedback was associated with higher feedback-related negativity (FRN) and small error feedback was associated with a larger P300 and increased amplitude over the motor related areas of the left hemisphere. In addition, small error feedback induced larger desynchronization in the alpha and beta bands with distinctly different topographies between the two learning groups: The high-learners showed a more localized decrease in beta power over the left frontocentral areas, and the low-learners showed a widespread reduction in the alpha power following small error feedback. Furthermore, only the high-learners showed an increase in phase synchronization between the midfrontal and left central areas. Importantly, this synchronization was correlated to how well the participants consolidated the estimation of the time interval. Thus, although large errors were associated with higher FRN, small errors were associated with larger oscillatory responses, which was more evident in the high-learners. Altogether, our results suggest an important role of the motor areas in the processing of error feedback for skill consolidation.

  20. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 µm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  1. Individual identification via electrocardiogram analysis.

    Science.gov (United States)

    Fratini, Antonio; Sansone, Mario; Bifulco, Paolo; Cesarelli, Mario

    2015-08-14

    During the last decade the use of ECG recordings in biometric recognition studies has increased. ECG characteristics make it suitable for subject identification: it is unique, present in all living individuals, and hard to forge. However, in spite of the great number of approaches found in the literature, no agreement exists on the most appropriate methodology. This study aimed at providing a survey of the techniques used so far in ECG-based human identification. Specifically, a pattern recognition perspective is proposed here, providing a unifying framework to appreciate previous studies and, hopefully, guide future research. We searched for papers on the subject from the earliest available date using relevant electronic databases (Medline, IEEEXplore, Scopus, and Web of Knowledge). The following terms were used in different combinations: electrocardiogram, ECG, human identification, biometric, authentication and individual variability. The electronic sources were last searched on 1st March 2015. In our selection we included published research in peer-reviewed journals, book chapters and conference proceedings. The search was performed for English language documents. 100 pertinent papers were found. The number of subjects involved in the journal studies ranges from 10 to 502, ages range from 16 to 86, and male and female subjects are generally present. The number of analysed leads varies, as do the recording conditions. Identification performance differs widely, as does the verification rate. Many studies refer to publicly available databases (the Physionet ECG databases repository) while others rely on proprietary recordings, making them difficult to compare. As a measure of overall accuracy we computed a weighted average of the identification rate and of the equal error rate in authentication scenarios. The identification rate was 94.95%, while the equal error rate was 0.92%. Biometric recognition is a mature field of research. Nevertheless, the use of physiological signals

  2. Open quantum systems and error correction

    Science.gov (United States)

    Shabani Barzegar, Alireza

    Quantum effects can be harnessed to manipulate information in a desired way. Quantum systems designed for this purpose suffer from harmful interaction with their surrounding environment or inaccuracy in control forces. Engineering methods to combat errors in quantum devices is highly demanding. In this thesis, I focus on realistic formulations of quantum error correction methods; a realistic formulation is one that incorporates experimental challenges. This thesis is presented in two sections: open quantum systems and quantum error correction. Chapters 2 and 3 cover the material on open quantum system theory. It is essential to first study a noise process and then to contemplate methods to cancel its effect. In the second chapter, I present the non-completely positive formulation of quantum maps. Most of these results are published in [Shabani and Lidar, 2009b,a], except a subsection on the geometric characterization of the positivity domain of a quantum map. The real-time formulation of the dynamics is the topic of the third chapter. After introducing the concept of the Markovian regime, a new post-Markovian quantum master equation is derived, published in [Shabani and Lidar, 2005a]. The section on quantum error correction is presented in chapters 4, 5, 6 and 7. In chapter 4, we introduce a generalized theory of decoherence-free subspaces and subsystems (DFSs), which do not require accurate initialization (published in [Shabani and Lidar, 2005b]). In Chapter 5, we present a semidefinite program optimization approach to quantum error correction that yields codes and recovery procedures that are robust against significant variations in the noise channel. Our approach allows us to optimize the encoding, recovery, or both, and is amenable to approximations that significantly improve computational cost while retaining fidelity (see [Kosut et al., 2008] for a published version). Chapter 6 is devoted to a theory of quantum error correction (QEC

  3. Forbearance coping, identification with heritage culture, acculturative stress, and psychological distress among Chinese international students.

    Science.gov (United States)

    Wei, Meifen; Liao, Kelly Yu-Hsin; Heppner, Puncky Paul; Chao, Ruth Chu-Lien; Ku, Tsun-Yao

    2012-01-01

    Based on Berry's (1997) theoretical framework for acculturation, our goal in this study was to examine whether the use of a culturally relevant coping strategy (i.e., forbearance coping, a predictor) would be associated with a lower level of psychological distress (a psychological outcome), for whom (i.e., those with weaker vs. stronger identification with heritage culture, a moderator), and under what situations (i.e., lower vs. higher acculturative stress, a moderator). A total of 188 Chinese international students completed an online survey. Results from a hierarchical regression indicated a significant 3-way interaction of forbearance coping, identification with heritage culture, and acculturative stress on psychological distress. For those with a weaker identification with their heritage culture, when acculturative stress was higher, the use of forbearance coping was positively associated with psychological distress. However, this was not the case when acculturative stress was lower. In other words, the use of forbearance coping was not significantly associated with psychological distress when acculturative stress was lower. Moreover, for those with a stronger cultural heritage identification, the use of forbearance coping was not significantly associated with psychological distress regardless of whether acculturative stress was high or low. Future research and implications are discussed. (c) 2012 APA, all rights reserved.

  4. Search-based model identification of smart-structure damage

    Science.gov (United States)

    Glass, B. J.; Macalou, A.

    1991-01-01

    This paper describes the use of a combined model and parameter identification approach, based on modal analysis and artificial intelligence (AI) techniques, for identifying damage or flaws in a rotating truss structure incorporating embedded piezoceramic sensors. This smart-structure example is representative of a class of structures commonly found in aerospace systems and next-generation space structures. AI techniques of classification, heuristic search, and an object-oriented knowledge base are used in an AI-based model identification approach. A finite model space is classified into a search tree, over which a variant of best-first search is used to identify the model whose stored response most closely matches that of the input; a minimal sketch of this search step follows below. Newly encountered models can be incorporated into the model space; this adaptiveness demonstrates the potential for learning control. Following this output-error model identification, numerical parameter identification is used to further refine the identified model. Given the rotating truss example in this paper, noisy data corresponding to various damage configurations are input to both this approach and a conventional parameter identification method. The combination of AI-based model identification with parameter identification is shown to lead to smaller parameter corrections than required by the use of parameter identification alone.
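
    As a hedged illustration of the search component only (not the authors' implementation; the model names, stored responses, tolerance, and scoring function below are all hypothetical), a best-first search over a tree of candidate damage models might look like this:

    import heapq
    import math

    # Hypothetical sketch: best-first search over a tree of candidate damage
    # models. Each model stores a precomputed modal response; the score is the
    # output error between stored and measured responses (lower is better).

    def output_error(stored, measured):
        """Sum-of-squares error between stored and measured modal responses."""
        return sum((s - m) ** 2 for s, m in zip(stored, measured))

    def best_first_search(root, measured, tolerance=1e-3):
        """Expand the most promising model first; stop when error < tolerance.

        `root` is a dict: {"name": str, "response": [...], "children": [...]}.
        """
        counter = 0  # tie-breaker so heapq never compares dicts
        frontier = [(output_error(root["response"], measured), counter, root)]
        while frontier:
            err, _, model = heapq.heappop(frontier)
            if err < tolerance:
                return model["name"], err
            for child in model.get("children", []):
                counter += 1
                heapq.heappush(
                    frontier,
                    (output_error(child["response"], measured), counter, child),
                )
        return None, math.inf  # no model matched within tolerance

    # Toy model space: an undamaged truss and two damage hypotheses.
    tree = {
        "name": "undamaged", "response": [1.00, 2.50, 4.10],
        "children": [
            {"name": "joint-3 flaw", "response": [0.97, 2.41, 3.95], "children": []},
            {"name": "member-7 crack", "response": [0.92, 2.30, 3.80], "children": []},
        ],
    }
    print(best_first_search(tree, measured=[0.93, 2.31, 3.79], tolerance=0.05))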

  5. Medication administration errors in Eastern Saudi Arabia

    International Nuclear Information System (INIS)

    Mir Sadat-Ali

    2010-01-01

    To assess the prevalence and characteristics of medication errors (MEs) in patients admitted to King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia. Medication errors were documented by nurses and physicians on standard reporting forms (Hospital Based Incident Report). The study was carried out in King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia, and all incident reports were collected during the period from January 2008 to December 2009. The incident reports were analyzed for age, gender, nationality, nursing unit, and the time when the ME was reported. The data were analyzed and the statistical significance of differences between groups was determined by Student's t-test; p-values of <0.05, using a confidence interval of 95%, were considered significant. There were 38 MEs reported for the study period. The youngest patient was 5 days old and the oldest 70 years. There were 31 Saudi and 7 non-Saudi patients involved. The most common error was missed medication, which was seen in 15 (39.5%) patients. Fifteen (39.5%) of the errors occurred in 2 units (pediatric medicine, and obstetrics and gynecology). Nineteen (50%) of the errors occurred during the 3-11 pm shift. Our study shows that the prevalence of ME in our institution is low in comparison with the world literature. This could be due to underreporting of errors, and we believe that ME reporting should be made less punitive so that MEs can be studied and preventive measures implemented (Author).

  6. FMLRC: Hybrid long read error correction using an FM-index.

    Science.gov (United States)

    Wang, Jeremy R; Holt, James; McMillan, Leonard; Jones, Corbin D

    2018-02-09

    Long read sequencing is changing the landscape of genomic research, especially de novo assembly. Despite the high error rate inherent to long read technologies, increased read lengths dramatically improve the continuity and accuracy of genome assemblies. However, the cost and throughput of these technologies limit their application to complex genomes. One solution is to decrease the cost and time to assemble novel genomes by leveraging "hybrid" assemblies that use long reads for scaffolding and short reads for accuracy. We describe a novel method leveraging a multi-string Burrows-Wheeler Transform with an auxiliary FM-index to correct errors in long read sequences using a set of complementary short reads. We demonstrate that our method efficiently produces significantly more high-quality corrected sequence than existing hybrid error-correction methods. We also show that our method produces more contiguous assemblies, in many cases, than existing state-of-the-art hybrid and long-read-only de novo assembly methods. Our method accurately corrects long read sequence data using complementary short reads. We demonstrate higher total throughput of corrected long reads and a corresponding increase in contiguity of the resulting de novo assemblies. Improved throughput and computational efficiency compared with existing methods will help to better and more economically utilize emerging long read sequencing technologies.
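
    The core primitive behind this kind of corrector is exact substring counting over the short-read set via a BWT with an FM-index. The following minimal sketch illustrates that primitive only: it builds the BWT by naive suffix sorting (toy inputs only), not the multi-string BWT used by FMLRC, and the read strings are made up.

    # Minimal FM-index sketch: count occurrences of a pattern (e.g., a k-mer
    # drawn from a long read) in a text built from short reads. Naive
    # construction -- a toy illustration, not the FMLRC implementation.

    def bwt(text):
        text += "$"  # unique terminator, lexicographically smallest
        sa = sorted(range(len(text)), key=lambda i: text[i:])
        return "".join(text[i - 1] for i in sa)

    def fm_index(text):
        b = bwt(text)
        # C[c]: number of characters in the text strictly smaller than c.
        chars = sorted(set(b))
        C, total = {}, 0
        for c in chars:
            C[c] = total
            total += b.count(c)
        # occ[c][i]: occurrences of c in b[:i] (simple prefix counts).
        occ = {c: [0] * (len(b) + 1) for c in chars}
        for i, ch in enumerate(b):
            for c in chars:
                occ[c][i + 1] = occ[c][i] + (ch == c)
        return b, C, occ

    def count(pattern, C, occ, n):
        """Backward search: number of occurrences of `pattern` in the text."""
        lo, hi = 0, n
        for c in reversed(pattern):
            if c not in C:
                return 0
            lo = C[c] + occ[c][lo]
            hi = C[c] + occ[c][hi]
            if lo >= hi:
                return 0
        return hi - lo

    short_reads = "ACGTACGGACGTT"
    b, C, occ = fm_index(short_reads)
    print(count("ACG", C, occ, len(b)))  # 3 -- frequent, likely-correct k-mer
    print(count("AGG", C, occ, len(b)))  # 0 -- absent, likely a long-read error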

  7. A human error analysis methodology, AGAPE-ET, for emergency tasks in nuclear power plants and its application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    This report presents a procedurised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. AGAPE-ET is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified, considering the characteristics of the performance of each cognitive function and the influencing mechanism of performance influencing factors (PIFs) on the cognitive function. Error analysis items have then been determined from the identified error causes or error-likely situations to cue or guide the analysts in the overall human error analysis. A human error analysis procedure based on the error analysis items is organised. The basic scheme for the quantification of HEP consists of multiplying the basic HEP (BHEP) assigned to the error analysis item by the weight obtained from the influencing factors decision tree (IFDT) constituted for each cognitive function. The method is characterised by the structured identification of the weak points of the task to be performed and an efficient analysis process in which the analysts need only work through the relevant cognitive functions. The report also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results. 42 refs., 7 figs., 36 tabs. (Author)
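
    As a hedged reading of that quantification scheme (all probabilities and factor names below are invented for illustration, not values from the report), the final HEP for an error analysis item is the basic HEP multiplied by a weight read off the influencing factors decision tree:

    # Toy sketch of AGAPE-ET-style quantification: HEP = BHEP * IFDT weight.
    # All numbers below are made up for illustration, not taken from the report.

    # Basic human error probabilities assigned per error analysis item.
    BHEP = {
        "misread_indicator": 3e-3,
        "skip_procedure_step": 1e-3,
    }

    def ifdt_weight(stress_high, interface_poor):
        """Tiny stand-in for an influencing factors decision tree: each branch
        maps a combination of PIF states to a multiplicative weight."""
        if stress_high and interface_poor:
            return 10.0
        if stress_high or interface_poor:
            return 3.0
        return 1.0

    def hep(item, stress_high, interface_poor):
        return BHEP[item] * ifdt_weight(stress_high, interface_poor)

    print(hep("misread_indicator", stress_high=True, interface_poor=True))     # 0.03
    print(hep("skip_procedure_step", stress_high=False, interface_poor=False)) # 0.001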

  8. Vitek 2 ANC card versus BBL Crystal Anaerobe and RapID ANA II for identification of clinical anaerobic bacteria.

    Science.gov (United States)

    Blairon, Laurent; Maza, Mengi L; Wybo, Ingrid; Piérard, Denis; Dediste, Anne; Vandenberg, Olivier

    2010-08-01

    The Vitek 2 Anaerobe and Corynebacterium Identification Card (ANC) was recently evaluated in a multicentre study. In the present work, this system was compared with the BBL Crystal Anaerobe and RapID ANA II panels. These kits were tested using 196 strains of anaerobes that had previously been identified by gas-liquid chromatography. Identification to the species or genus level was achieved for 75.0%, 81.1% and 70.9% of strains by Crystal, RapID and Vitek, respectively. Vitek ANC failed to provide any identification for 20.4% of the strains, but it had fewer misidentifications than RapID. The confidence factors provided on the results report of each kit were not always correlated with a lower risk of major errors, with the exception of Vitek 2, in which a confidence factor higher than 0.86 excluded the risk of misidentification in more than 87% of isolates. The lower rate of identification by the Vitek and Crystal panels is mostly due to the lower ability of these systems to identify the clostridia. Overall, the three panels are comparable but need improvement to achieve better accuracy. Copyright (c) 2010 Elsevier Ltd. All rights reserved.

  9. Medical Errors in Cyprus: The 2005 Eurobarometer Survey

    Directory of Open Access Journals (Sweden)

    Andreas Pavlakis

    2012-01-01

    Full Text Available Background: Medical errors have been highlighted in recent years by different agencies, scientific bodies and research teams alike. We sought to explore the issue of medical errors in Cyprus using data from the Eurobarometer survey. Methods: Data from the special Eurobarometer survey conducted in 2005 across all European Union countries (EU-25) and the acceding countries were obtained from the corresponding EU office. Statistical analyses, including logistic regression models, were performed using SPSS. Results: A total of 502 individuals participated in the Cyprus survey. About 90% reported that they had often or sometimes heard about medical errors, while 22% reported that they or a family member had suffered a serious medical error in a local hospital. In addition, 9.4% reported a serious problem from a prescribed medicine. We also found statistically significant differences across ages and gender and between rural and urban residents. Finally, using multivariable-adjusted logistic regression models, we found that residents of rural areas were more likely to have suffered a serious medical error in a local hospital or from a prescribed medicine. Conclusion: Our study shows that the vast majority of residents of Cyprus, like other Europeans, worry about medical errors, and a significant percentage report having suffered a serious medical error at a local hospital or from a prescribed medicine. The results of our study could help the medical community in Cyprus, and society at large, to enhance its vigilance with respect to medical errors in order to improve medical care.

  10. Recursive Subspace Identification of AUV Dynamic Model under General Noise Assumption

    Directory of Open Access Journals (Sweden)

    Zheping Yan

    2014-01-01

    Full Text Available A recursive subspace identification algorithm for autonomous underwater vehicles (AUVs) is proposed in this paper. Owing to its advantages in handling nonlinearities and couplings, the AUV model investigated here is, for the first time, constructed as a Hammerstein model with nonlinear feedback in the linear part. To better take environment and sensor noises into consideration, the identification problem is formulated as an errors-in-variables (EIV) one, which means that the identification procedure operates under a general noise assumption. In order to make the algorithm recursive, the propagator method (PM) based subspace approach is extended into the EIV framework to form the recursive identification method called the PM-EIV algorithm. With several identification experiments carried out on the AUV simulation platform, the proposed algorithm demonstrates its effectiveness and feasibility.

  11. Refractive Error in a Sample of Black High School Children in South Africa.

    Science.gov (United States)

    Wajuihian, Samuel Otabor; Hansraj, Rekha

    2017-12-01

    This study focused on a cohort that has not been studied and that currently has limited access to eye care services. The findings, while improving the understanding of the distribution of refractive errors, also enabled identification of children requiring intervention and provided a guide for future resource allocation. The aim of the study was to determine the prevalence and distribution of refractive error and its association with gender, age, and school grade level. Using multistage random cluster sampling, 1586 children were selected: 632 males (40%) and 954 females (60%). Their ages ranged between 13 and 18 years, with a mean of 15.81 ± 1.56 years. The visual functions evaluated included visual acuity, using the logarithm of minimum angle of resolution chart, and refractive error, measured using an autorefractor and then refined subjectively. The astigmatism axis was expressed in vector notation, where positive values of J0 indicated with-the-rule astigmatism, negative values indicated against-the-rule astigmatism, and J45 represented oblique astigmatism. Overall, the sample was slightly myopic, with a mean spherical power for the right eye of -0.02 ± 0.47; the mean astigmatic cylinder power was -0.09 ± 0.27, with mainly with-the-rule astigmatism (J0 = 0.01 ± 0.11). The prevalence estimates were as follows: myopia (at least -0.50) 7% (95% confidence interval [CI], 6 to 9%), hyperopia (at least 0.5) 5% (95% CI, 4 to 6%), astigmatism (at least -0.75 cylinder) 3% (95% CI, 2 to 4%), and anisometropia 3% (95% CI, 2 to 4%). There was no significant association between refractive error and any of the categories (gender, age, and grade level). The prevalence of refractive error in this sample of high school children was relatively low. Myopia was the most prevalent, and findings on its association with age suggest that the prevalence of myopia may stabilize in the late teenage years.

  12. Bandwagon effects and error bars in particle physics

    Science.gov (United States)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.

  13. Bandwagon effects and error bars in particle physics

    International Nuclear Information System (INIS)

    Jeng, Monwhea

    2007-01-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit 'bandwagon effects': reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations
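
    A hedged sketch of the kind of comparison described, on synthetic numbers rather than the paper's historical records: normalize the difference between two measurements by their combined error bars and inspect the tail of the resulting distribution.

    import numpy as np
    from scipy import stats

    # Synthetic illustration of comparing inter-experiment deviations with
    # reported error bars. Real analyses would use historical measurement records.
    rng = np.random.default_rng(0)
    sigma1, sigma2 = 1.0, 1.5
    x1 = rng.normal(0.0, sigma1, 100_000)
    x2 = rng.normal(0.0, sigma2, 100_000)
    z = (x1 - x2) / np.hypot(sigma1, sigma2)  # honest errors => z ~ N(0, 1)

    for k in (2, 3, 5):
        observed = np.mean(np.abs(z) > k)
        expected = 2 * stats.norm.sf(k)  # two-sided Gaussian tail
        print(f"|z| > {k}: observed {observed:.2e}, Gaussian {expected:.2e}")
    # An excess of large |z| in real records (exponential rather than Gaussian
    # tails) is the signature reported in the abstract.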

  14. An Error Estimate for Symplectic Euler Approximation of Optimal Control Problems

    KAUST Repository

    Karlsson, Jesper; Larsson, Stig; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2015-01-01

    This work focuses on numerical solutions of optimal control problems. A time discretization error representation is derived for the approximation of the associated value function. It concerns symplectic Euler solutions of the Hamiltonian system connected with the optimal control problem. The error representation has a leading-order term consisting of an error density that is computable from symplectic Euler solutions. Under an assumption of the pathwise convergence of the approximate dual function as the maximum time step goes to zero, we prove that the remainder is of higher order than the leading-error density part in the error representation. With the error representation, it is possible to perform adaptive time stepping. We apply an adaptive algorithm originally developed for ordinary differential equations. The performance is illustrated by numerical tests.
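
    For readers unfamiliar with the integrator itself, the following minimal sketch applies symplectic Euler to the separable Hamiltonian H(p, q) = p^2/2 + q^2/2 (a generic textbook test case, not the paper's optimal control setting) and shows the first-order behavior of the global error:

    import math

    # Symplectic Euler for H(p, q) = p^2/2 + q^2/2 (harmonic oscillator):
    # update p using the old q, then q using the new p.
    def symplectic_euler(p, q, h, steps):
        for _ in range(steps):
            p = p - h * q      # p_{n+1} = p_n - h * dH/dq(q_n)
            q = q + h * p      # q_{n+1} = q_n + h * dH/dp(p_{n+1})
        return p, q

    p0, q0, T = 0.0, 1.0, 2 * math.pi
    for h in (0.1, 0.05, 0.025):
        steps = round(T / h)
        p, q = symplectic_euler(p0, q0, h, steps)
        # The exact solution returns to (p0, q0) after one period.
        err = math.hypot(p - p0, q - q0)
        print(f"h = {h:<6} global error ~ {err:.3e}")
    # The error shrinks roughly linearly in h, consistent with a first-order
    # method; an adaptive scheme would instead distribute h using an error density.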

  15. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Science.gov (United States)

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  16. An Investigation of effective factors on nurses\\' speech errors

    Directory of Open Access Journals (Sweden)

    Maryam Tafaroji yeganeh

    2017-03-01

    Full Text Available Background: The study of speech errors is a branch of psycholinguistics. A speech error, or slip of the tongue, is a natural process that happens to everyone. The importance of this research stems from the sensitivity and importance of nursing, in which speech errors may interfere with the treatment of patients; unfortunately, no research had yet been done in this field. This research was carried out to study the factors (personality, stress, fatigue and insomnia) that cause speech errors among nurses of Ilam province. Materials and Methods: The sample of this correlational-descriptive research consisted of 50 nurses working in Mustafa Khomeini Hospital of Ilam province, who were selected randomly. Our data were collected using the Minnesota Multiphasic Personality Inventory, the NEO Five-Factor Inventory and the Expanded Nursing Stress Scale, and were analyzed using SPSS version 20 with descriptive, inferential and multivariate linear regression or two-variable statistical methods (significance level: p ≤ 0.05). Results: 30 (60%) of the nurses participating in the study were female and 19 (38%) were male. All three factors (personality type, stress and fatigue) had significant effects on nurses' speech errors. Conclusion: Personality type, stress and fatigue have significant effects on nurses' speech errors.

  17. Orthonormal filters for identification in active control systems

    International Nuclear Information System (INIS)

    Mayer, Dirk

    2015-01-01

    Many active noise and vibration control systems require models of the control paths. When the controlled system changes slightly over time, adaptive digital filters for the identification of the models are useful. This paper aims at the investigation of a special class of adaptive digital filters: orthonormal filter banks possess the robust and simple adaptation of the widely applied finite impulse response (FIR) filters, but at a lower model order, which is important when considering implementation on embedded systems. However, the filter banks require prior knowledge about the resonance frequencies and damping of the structure. This knowledge can be supposed to be of limited precision, since in many practical systems, uncertainties in the structural parameters exist. In this work, a procedure using a number of training systems to find the fixed parameters for the filter banks is applied. The effect of uncertainties in the prior knowledge on the model error is examined both with a basic example and in an experiment. Furthermore, the possibilities to compensate for the imprecise prior knowledge by a higher filter order are investigated. Also comparisons with FIR filters are implemented in order to assess the possible advantages of the orthonormal filter banks. Numerical and experimental investigations show that significantly lower computational effort can be reached by the filter banks under certain conditions. (paper)
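
    For contrast with the filter banks, here is a hedged baseline sketch of the alternative the paper compares against: identifying an unknown path with an adaptive FIR filter trained by LMS (a generic textbook algorithm; the path coefficients and step size below are arbitrary).

    import numpy as np

    # Baseline for comparison: adaptive FIR identification of an unknown control
    # path using LMS. Orthonormal filter banks would replace the tapped delay
    # line with fixed resonant filters, trading prior knowledge for lower order.
    rng = np.random.default_rng(1)

    true_path = np.array([0.5, -0.3, 0.2, 0.1, -0.05])   # unknown FIR system
    n_taps, mu, n_samples = 8, 0.05, 5000

    w = np.zeros(n_taps)                 # adaptive filter weights
    x = rng.standard_normal(n_samples)   # excitation signal
    d = np.convolve(x, true_path)[:n_samples]  # measured system output

    buf = np.zeros(n_taps)
    for n in range(n_samples):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        y = w @ buf              # filter output
        e = d[n] - y             # estimation error
        w += mu * e * buf        # LMS weight update

    print("estimated:", np.round(w[:5], 3))
    print("true path:", true_path)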

  18. A methodology for analysing human errors of commission in accident scenarios for risk assessment

    International Nuclear Information System (INIS)

    Kim, J. H.; Jung, W. D.; Park, J. K

    2003-01-01

    As concern about the impact of operators' inappropriate interventions, so-called Errors Of Commission (EOCs), on plant safety has been raised, interest in the identification and analysis of EOC events from the risk assessment perspective has increased accordingly. To this end, we propose a new methodology for identifying and analysing human errors of commission that might be caused by failures in situation assessment and decision making during accident progression, given an initiating event. The proposed methodology was applied to the accident scenarios of the YGN 3 and 4 NPPs, which resulted in about 10 EOC situations that need careful attention

  19. Dynamic Modeling Accuracy Dependence on Errors in Sensor Measurements, Mass Properties, and Aircraft Geometry

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2013-01-01

    A nonlinear simulation of the NASA Generic Transport Model was used to investigate the effects of errors in sensor measurements, mass properties, and aircraft geometry on the accuracy of dynamic models identified from flight data. Measurements from a typical system identification maneuver were systematically and progressively deteriorated and then used to estimate stability and control derivatives within a Monte Carlo analysis. Based on the results, recommendations were provided for maximum allowable errors in sensor measurements, mass properties, and aircraft geometry to achieve desired levels of dynamic modeling accuracy. Results using other flight conditions, parameter estimation methods, and a full-scale F-16 nonlinear aircraft simulation were compared with these recommendations.
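
    A hedged miniature of that Monte Carlo idea, using a linear toy model in place of the GTM simulation: progressively inflate the sensor noise and watch the spread of least-squares parameter estimates grow.

    import numpy as np

    # Toy Monte Carlo study: how sensor noise degrades identified parameters.
    # A linear regression stands in for stability/control derivative estimation.
    rng = np.random.default_rng(2)

    theta_true = np.array([1.2, -0.7, 0.3])      # "derivatives" to identify
    X = rng.standard_normal((200, 3))            # regressors from a maneuver

    for noise_std in (0.01, 0.05, 0.1, 0.2):
        errs = []
        for _ in range(500):                     # Monte Carlo replications
            y = X @ theta_true + rng.normal(0.0, noise_std, 200)
            theta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
            errs.append(np.linalg.norm(theta_hat - theta_true))
        print(f"noise std {noise_std:.2f}: mean estimate error {np.mean(errs):.4f}")
    # The error grows with the noise level; a requirements table can then be
    # read off by fixing a maximum acceptable parameter error.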

  20. Identification of Needs for Higher Education of Adults

    Directory of Open Access Journals (Sweden)

    Darka Podmenik

    1997-12-01

    Full Text Available The paper, which is a summary of a longer study, presents the most important conceptual and methodological approaches to the identification of needs for adult education. The conceptual approaches are related to the theoretical treatment of needs, such as the behaviourist and developmental-humanist theories of needs applied in psychology, the class-strata needs theory developed by sociologists, and the understanding of needs characteristic of marketing. A special chapter considers the applicability of individual conceptual approaches to the identification of needs for adult education within higher education. A consequence of the multiplicity of conceptual approaches to the identification of needs is a methodological pluralism that becomes evident in empirical work. In the subsequent chapter the author outlines the established methods for identifying needs and a case study carried out by researchers of the former Centre for the Development of the University. In the conclusion the author makes a few proposals for improving the identification of needs for adult education in Slovenian higher education.

  1. A Novel Artificial Fish Swarm Algorithm for Recalibration of Fiber Optic Gyroscope Error Parameters

    Directory of Open Access Journals (Sweden)

    Yanbin Gao

    2015-05-01

    Full Text Available The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, and it is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and with the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of a FOG-based strapdown inertial navigation system (SINS) degrades. This research mainly concerns the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits of the standard AFSA during the optimization process (e.g., failure to use the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost). To address these weak points, the functional behaviors and overall procedures of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm is proposed based on the NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulations and experiments, and its performance is compared with that of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.

  2. A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.

    Science.gov (United States)

    Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong

    2015-05-05

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, and it is widely utilized for optimization purposes. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and with the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of a FOG-based strapdown inertial navigation system (SINS) degrades. This research mainly concerns the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the demerits (e.g., failure to use the artificial fishes' previous experiences, lack of balance between exploration and exploitation, and high computational cost) of the standard AFSA during the optimization process. To address these weak points, the functional behaviors and overall procedures of AFSA have been improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm is proposed based on the NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for FOG error coefficient recalibration. After that, the NAFSA is verified with simulations and experiments, and its performance is compared with that of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.
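
    To make the underlying metaphor concrete, here is a hedged, minimal fish-swarm-style optimizer implementing only the generic prey and swarm behaviors from standard AFSA descriptions (the follow behavior is omitted; this is not the paper's NAFSA, and the calibration objective and coefficient values are invented):

    import numpy as np

    # Simplified illustration of the AFSA family, not the paper's NAFSA.
    # The objective is an invented stand-in for a calibration residual over two
    # FOG error coefficients (scale factor, bias) with hypothetical true values.
    rng = np.random.default_rng(3)
    TRUE = np.array([1.001, 0.02])

    def residual(p):
        return float(np.sum((p - TRUE) ** 2))

    n_fish, dim, visual, step, tries, iters = 20, 2, 0.5, 0.1, 5, 200
    crowding = 0.6

    fish = rng.uniform(-1.0, 2.0, (n_fish, dim))
    best = min(fish, key=residual).copy()

    for _ in range(iters):
        for i in range(n_fish):
            xi = fish[i]
            moved = False
            for _ in range(tries):                       # prey: scan visual range
                xj = xi + visual * rng.uniform(-1, 1, dim)
                if residual(xj) < residual(xi):
                    xi = xi + rng.random() * (xj - xi)   # move toward better spot
                    moved = True
                    break
            if not moved:                                # swarm: join neighbors
                dists = np.linalg.norm(fish - fish[i], axis=1)
                mates = fish[(dists > 0) & (dists < visual)]
                center = mates.mean(axis=0) if len(mates) else xi
                if len(mates) and residual(center) < residual(xi) \
                        and len(mates) < crowding * n_fish:
                    xi = xi + rng.random() * (center - xi)
                else:
                    xi = xi + step * rng.uniform(-1, 1, dim)  # random walk
            fish[i] = xi
            if residual(xi) < residual(best):
                best = xi.copy()

    print("recovered coefficients:", np.round(best, 3))  # close to [1.001, 0.02]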

  3. CORRECTING ACCOUNTING ERRORS AND ACKNOWLEDGING THEM IN THE EARNINGS TO THE PERIOD

    Directory of Open Access Journals (Sweden)

    BUSUIOCEANU STELIANA

    2013-08-01

    Full Text Available The accounting information is reliable when it does not contain significant errors, is not biased and accurately represents the transactions and events. In the light of the regulations complying with European directives, the information is significant if its omission or wrong presentation may influence the decisions users make based on annual financial statements. Given that the professional practice sees errors in registering or interpreting information, as well as omissions and wrong calculations, the Romanian accounting regulations stipulate treatments for correcting errors in compliance with international references. Thus, the correction of the errors corresponding to the current period is accomplished based on the retained earnings in the case of significant errors, or on the current earnings when the errors are insignificant. The different situations in the professional practice triggered by errors require both knowledge of regulations and professional rationale to be addressed.

  4. The problem of assessing landmark error in geometric morphometrics: theory, methods, and modifications.

    Science.gov (United States)

    von Cramon-Taubadel, Noreen; Frazier, Brenda C; Lahr, Marta Mirazón

    2007-09-01

    Geometric morphometric methods rely on the accurate identification and quantification of landmarks on biological specimens. As in any empirical analysis, the assessment of inter- and intra-observer error is desirable. A review of methods currently being employed to assess measurement error in geometric morphometrics was conducted and three general approaches to the problem were identified. One such approach employs Generalized Procrustes Analysis to superimpose repeatedly digitized landmark configurations, thereby establishing whether repeat measures fall within an acceptable range of variation. The potential problem of this error assessment method (the "Pinocchio effect") is demonstrated and its effect on error studies discussed. An alternative approach involves employing Euclidean distances between the configuration centroid and repeat measures of a landmark to assess the relative repeatability of individual landmarks. This method is also potentially problematic as the inherent geometric properties of the specimen can result in misleading estimates of measurement error. A third approach involved the repeated digitization of landmarks with the specimen held in a constant orientation to assess individual landmark precision. This latter approach is an ideal method for assessing individual landmark precision, but is restrictive in that it does not allow for the incorporation of instrumentally defined or Type III landmarks. Hence, a revised method for assessing landmark error is proposed and described with the aid of worked empirical examples. (c) 2007 Wiley-Liss, Inc.
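
    The superimposition-based error check described in the first approach can be sketched with scipy's two-configuration Procrustes routine (synthetic landmarks; a real study would run Generalized Procrustes Analysis over many repeats and observers):

    import numpy as np
    from scipy.spatial import procrustes

    # Sketch of a repeatability check: digitize the same specimen's landmarks
    # twice, superimpose the repeats, and report per-landmark deviations.
    rng = np.random.default_rng(4)

    true_landmarks = np.array([[0.0, 0.0], [2.0, 0.1], [1.0, 1.5], [0.2, 2.0]])
    repeat1 = true_landmarks + rng.normal(0.0, 0.02, true_landmarks.shape)
    repeat2 = true_landmarks + rng.normal(0.0, 0.02, true_landmarks.shape)

    # procrustes() removes location, scale, and rotation, then returns the
    # standardized configurations and a total disparity (sum of squared error).
    m1, m2, disparity = procrustes(repeat1, repeat2)
    per_landmark = np.linalg.norm(m1 - m2, axis=1)

    print("total disparity:", round(disparity, 6))
    for i, d in enumerate(per_landmark):
        print(f"landmark {i}: deviation {d:.4f}")
    # Caveat from the abstract: with few landmarks, least-squares superimposition
    # can smear one bad landmark's error across the others (the Pinocchio effect).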

  5. LAT Onboard Science: Gamma-Ray Burst Identification

    International Nuclear Information System (INIS)

    Kuehn, Frederick; Hughes, Richard; Smith, Patrick; Winer, Brian; Bonnell, Jerry; Norris, Jay; Ritz, Steven; Russell, James

    2007-01-01

    The main goal of the Large Area Telescope (LAT) onboard science program is to provide quick identification and localization of gamma-ray bursts (GRBs) onboard the LAT for follow-up observations by other observatories. The GRB identification and localization algorithm will provide celestial coordinates with an error region that will be distributed via the Gamma-ray burst Coordinate Network (GCN). We present results that show our sensitivity to bursts, as characterized using Monte Carlo simulations of the GLAST observatory. We describe and characterize the method of onboard track determination and the GRB identification and localization algorithm. Onboard track determination is considerably different from the on-ground case, resulting in a substantially altered point spread function. The algorithm contains tunable parameters that may be adjusted after launch, once the characteristics of real bursts at very high energies have been identified

  6. Merits of using color and shape differentiation to improve the speed and accuracy of drug strength identification on over-the-counter medicines by laypeople.

    Science.gov (United States)

    Hellier, Elizabeth; Tucker, Mike; Kenny, Natalie; Rowntree, Anna; Edworthy, Judy

    2010-09-01

    This study aimed to examine the utility of using color and shape to differentiate drug strength information on over-the-counter medicine packages. Medication errors are an important threat to patient safety, and confusions between drug strengths are a significant source of medication error. A visual search paradigm required laypeople to search for medicine packages of a particular strength from among distracter packages of different strengths, and measures of reaction time and error were recorded. Using color to differentiate drug strength information conferred an advantage on search times and accuracy. Shape differentiation did not improve search times and had only a weak effect on search accuracy. Using color to differentiate drug strength information improves drug strength identification performance. Color differentiation of drug strength information may be a useful way of reducing medication errors and improving patient safety.

  7. 'Galileo Galilei-GG': design, requirements, error budget and significance of the ground prototype

    International Nuclear Information System (INIS)

    Nobili, A.M.; Bramanti, D.; Comandi, G.L.; Toncelli, R.; Polacco, E.; Chiofalo, M.L.

    2003-01-01

    'Galileo Galilei-GG' is a proposed experiment in low orbit around the Earth aiming to test the equivalence principle to the level of 1 part in 10^17 at room temperature. A unique feature of GG, which is pivotal to achieving high accuracy at room temperature, is fast rotation in the supercritical regime around the symmetry axis of the test cylinders, with very weak coupling in the plane perpendicular to it. Another unique feature of GG is the possibility of flying 2 concentric pairs of test cylinders, the outer pair being made of the same material for the detection of spurious effects. GG was originally designed for an equatorial orbit. The much lower launching cost for higher inclinations has made it worth redesigning the experiment for a sun-synchronous orbit. We report the main conclusions of this study, which confirms the feasibility of the original goal of the mission also at high inclination, and conclude by stressing the significance of the ground-based prototype of the apparatus proposed for space

  8. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  9. A constrained robust least squares approach for contaminant release history identification

    Science.gov (United States)

    Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.

    2006-04-01

    Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solver. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, CRLS gives much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
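
    To make the classical baseline concrete, the sketch below solves a toy nonnegative release-history recovery with scipy's NNLS, plus a Tikhonov-regularized nonnegative variant as a crude stand-in for robustness; CRLS itself solves a different, worst-case formulation not reproduced here, and all numbers are invented.

    import numpy as np
    from scipy.optimize import nnls, lsq_linear

    # Recover a nonnegative release history s from observations d = G s + noise,
    # where G is an ill-conditioned response matrix built by superposition.
    rng = np.random.default_rng(5)

    n = 40
    # Lower-triangular smoothing kernel: each column is a delayed, decaying response.
    G = np.array([[np.exp(-0.15 * (i - j)) if i >= j else 0.0 for j in range(n)]
                  for i in range(n)])
    s_true = np.zeros(n)
    s_true[8:12] = [1.0, 2.0, 1.5, 0.5]          # a short release pulse
    d = G @ s_true + rng.normal(0.0, 0.05, n)    # noisy observations

    s_nnls, _ = nnls(G, d)                        # classical nonnegative LS

    # Tikhonov-regularized nonnegative LS: min ||Gs - d||^2 + lam ||s||^2, s >= 0,
    # solved by stacking the regularizer into an augmented system.
    lam = 0.1
    G_aug = np.vstack([G, np.sqrt(lam) * np.eye(n)])
    d_aug = np.concatenate([d, np.zeros(n)])
    s_reg = lsq_linear(G_aug, d_aug, bounds=(0.0, np.inf)).x

    print("NNLS error:       ", np.linalg.norm(s_nnls - s_true))
    print("regularized error:", np.linalg.norm(s_reg - s_true))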

  10. Driving error and anxiety related to iPod mp3 player use in a simulated driving experience.

    Science.gov (United States)

    Harvey, Ashley R; Carden, Randy L

    2009-08-01

    Driver distraction due to cellular phone usage has repeatedly been shown to increase the risk of vehicular accidents; however, the literature regarding the use of other personal electronic devices while driving is relatively sparse. It was hypothesized that the usage of an mp3 player would result in an increase in not only driving error while operating a driving simulator, but driver anxiety scores as well. It was also hypothesized that anxiety scores would be positively related to driving errors when using an mp3 player. 32 participants drove through a set course in a driving simulator twice, once with and once without an iPod mp3 player, with the order counterbalanced. Number of driving errors per course, such as leaving the road, impacts with stationary objects, loss of vehicular control, etc., and anxiety were significantly higher when an iPod was in use. Anxiety scores were unrelated to number of driving errors.

  11. Ontology-based specification, identification and analysis of perioperative risks.

    Science.gov (United States)

    Uciteli, Alexandr; Neumann, Juliane; Tahar, Kais; Saleh, Kutaiba; Stucke, Stephan; Faulbrück-Röhr, Sebastian; Kaeding, André; Specht, Martin; Schmidt, Tobias; Neumuth, Thomas; Besting, Andreas; Stegemann, Dominik; Portheine, Frank; Herre, Heinrich

    2017-09-06

    Medical personnel in hospitals often work under great physical and mental strain. In medical decision-making, errors can never be completely ruled out. Several studies have shown that between 50 and 60% of adverse events could have been avoided through better organization, more attention or more effective safety procedures. Critical situations especially arise during interdisciplinary collaboration and the use of complex medical technology, for example during surgical interventions and in perioperative settings (the period of time before, during and after surgical intervention). In this paper, we present an ontology and an ontology-based software system that can identify risks across medical processes and support the avoidance of errors, in particular in the perioperative setting. We developed a practicable definition of the risk notion, which is easily understandable by medical staff and usable by the software tools. Based on this definition, we developed a Risk Identification Ontology (RIO) and used it for the specification and identification of perioperative risks. An agent system was developed that gathers risk-relevant data during the whole perioperative treatment process from various sources and provides it for risk identification and analysis in a centralized fashion. The results of such an analysis are provided to the medical personnel in the form of context-sensitive hints and alerts. For the identification of the ontologically specified risks, we developed an ontology-based software module called the Ontology-based Risk Detector (OntoRiDe). About 20 risks relating to cochlear implantation (CI) have already been implemented. Comprehensive testing has indicated the correctness of the data acquisition, risk identification and analysis components, as well as the web-based visualization of results.

  12. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k^2. The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
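
    A hedged toy version of the comparison (a linear observable with made-up sensitivities, not the paper's models) can be simulated directly:

    import numpy as np

    # Toy comparison of unisim vs multisim estimation of a total systematic error.
    # Observable: f(s) = sum(a_k * s_k) for systematic parameters s_k ~ N(0, 1);
    # the true total systematic variance is sum(a_k^2).
    rng = np.random.default_rng(6)

    a = np.array([0.5, 0.3, 0.2, 0.4])        # sensitivities to each systematic
    true_var = np.sum(a**2)
    mc_stat = 0.1                             # statistical noise per MC run

    def observable(s):
        return a @ s + rng.normal(0.0, mc_stat)   # MC run with finite statistics

    # Unisim: one run per parameter, shifted by +1 sigma; variance estimated as
    # the sum of squared shifts relative to a nominal run.
    nominal = observable(np.zeros(len(a)))
    shifts = [observable(np.eye(len(a))[k]) - nominal for k in range(len(a))]
    unisim_var = float(np.sum(np.square(shifts)))

    # Multisim: many runs with all parameters thrown at once; variance of outputs.
    m = 1000
    outs = [observable(rng.standard_normal(len(a))) for _ in range(m)]
    multisim_var = float(np.var(outs, ddof=1)) - mc_stat**2  # subtract stat part

    print(f"true {true_var:.3f}  unisim {unisim_var:.3f}  multisim {multisim_var:.3f}")
    # With MC statistical noise comparable to individual systematics, the unisim
    # estimate is visibly contaminated, mirroring the abstract's conclusion.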

  13. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Full Text Available Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee the correctness of the analysis because they utilize approximate real-number representations and pseudo-random numbers, and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order-logic theorem-proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties, and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  14. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    Science.gov (United States)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  15. Learning a locomotor task: with or without errors?

    Science.gov (United States)

    Marchal-Crespo, Laura; Schneider, Jasmin; Jaeger, Lukas; Riener, Robert

    2014-03-04

    Error strategies have a great potential to evoke higher muscle activation and provoke better motor learning of simple tasks. Neuroimaging evaluation of the brain regions involved in learning can provide valuable information on observed behavioral outcomes related to learning processes. The impact of these strategies on neurological patients needs further investigation.

  16. Heuristics and Cognitive Error in Medical Imaging.

    Science.gov (United States)

    Itri, Jason N; Patel, Sohil H

    2018-05-01

    The field of cognitive science has provided important insights into mental processes underlying the interpretation of imaging examinations. Despite these insights, diagnostic error remains a major obstacle in the goal to improve quality in radiology. In this article, we describe several types of cognitive bias that lead to diagnostic errors in imaging and discuss approaches to mitigate cognitive biases and diagnostic error. Radiologists rely on heuristic principles to reduce complex tasks of assessing probabilities and predicting values into simpler judgmental operations. These mental shortcuts allow rapid problem solving based on assumptions and past experiences. Heuristics used in the interpretation of imaging studies are generally helpful but can sometimes result in cognitive biases that lead to significant errors. An understanding of the causes of cognitive biases can lead to the development of educational content and systematic improvements that mitigate errors and improve the quality of care provided by radiologists.

  17. Reducing Error Rates for Iris Image using higher Contrast in Normalization process

    Science.gov (United States)

    Aminu Ghali, Abdulrahman; Jamel, Sapiee; Abubakar Pindar, Zahraddeen; Hasssan Disina, Abdulkadir; Mat Daris, Mustafa

    2017-08-01

    Iris recognition is among the most secure and fastest means of identification and authentication. However, iris recognition systems suffer a setback from blurring, low contrast and poor illumination due to low-quality images, which compromises the accuracy of the system. The acceptance or rejection rate of a verified user depends largely on the quality of the image. In many cases, an iris recognition system with low image contrast could falsely accept or reject a user. This paper therefore adopts a histogram equalization technique to address the problems of False Rejection Rate (FRR) and False Acceptance Rate (FAR) by enhancing the contrast of the iris image. The histogram equalization technique enhances the image quality and neutralizes the low contrast of the image at the normalization stage. The experimental results show that the histogram equalization technique reduced FRR and FAR compared with existing techniques.
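
    Histogram equalization itself is a standard transform; the sketch below is a generic numpy implementation applied to a synthetic low-contrast strip, not the authors' pipeline:

    import numpy as np

    # Plain histogram equalization for an 8-bit grayscale image, as would be
    # applied to a normalized (unrolled) iris strip before feature encoding.
    def equalize_histogram(img):
        """Map gray levels through the normalized cumulative histogram."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum().astype(np.float64)
        cdf_min = cdf[hist.nonzero()[0][0]]            # first occupied bin
        # Classic equalization formula, rescaled to the full 0..255 range.
        lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0)
        lut = np.clip(lut, 0, 255).astype(np.uint8)
        return lut[img]

    # Low-contrast toy "iris strip": values squeezed into a narrow band.
    rng = np.random.default_rng(7)
    strip = rng.integers(100, 140, size=(32, 256), dtype=np.uint8)
    out = equalize_histogram(strip)
    print("input range: ", strip.min(), "-", strip.max())   # ~100-139
    print("output range:", out.min(), "-", out.max())       # ~0-255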

  18. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  19. Understanding errors in EIA projections of energy demand

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Carolyn; Herrnstadt, Evan; Morgenstern, Richard [Resources for the Future, 1616 P St. NW, Washington, DC 20036 (United States)

    2009-08-15

    This paper investigates the potential for systematic errors in the Energy Information Administration's (EIA) widely used Annual Energy Outlook, focusing on the near- to mid-term projections of energy demand. Based on analysis of the EIA's 22-year projection record, we find a fairly modest but persistent tendency to underestimate total energy demand by an average of 2 percent per year after controlling for projection errors in gross domestic product, oil prices, and heating/cooling degree days. For 14 individual fuels/consuming sectors routinely reported by the EIA, we observe a great deal of directional consistency in the errors over time, ranging up to 7 percent per year. Electric utility renewables, electric utility natural gas, transportation distillate, and residential electricity show significant biases on average. Projections for certain other sectors have significant unexplained errors for selected time horizons. Such independent evaluation can be useful for validating analytic efforts and for prioritizing future model revisions. (author)

  20. Improving Type Error Messages in OCaml

    Directory of Open Access Journals (Sweden)

    Arthur Charguéraud

    2015-12-01

    Full Text Available Cryptic type error messages are a major obstacle to learning OCaml or other ML-based languages. In many cases, error messages cannot be interpreted without a sufficiently precise model of the type inference algorithm. The problem of improving type error messages in ML has received quite a bit of attention over the past two decades, and many different strategies have been considered. The challenge is not only to produce error messages that are both sufficiently concise and systematically useful to the programmer, but also to handle a full-blown programming language and to cope with large programs efficiently. In this work, we present a modification to the traditional ML type inference algorithm implemented in OCaml that, by significantly reducing the left-to-right bias, allows us to report error messages that are more helpful to the programmer. Our algorithm remains fully predictable and continues to produce fairly concise error messages that always help to make some progress towards fixing the code. We implemented our approach as a patch to the OCaml compiler in just a few hundred lines of code. We believe that this patch should benefit not just beginners, but also experienced programmers developing large-scale OCaml programs.

  1. [The effectiveness of error reporting promoting strategy on nurse's attitude, patient safety culture, intention to report and reporting rate].

    Science.gov (United States)

    Kim, Myoungsoo

    2010-04-01

    The purpose of this study was to examine the impact of strategies to promote the reporting of errors on nurses' attitude to reporting errors, organizational culture related to patient safety, intention to report, and reporting rate in hospital nurses. A nonequivalent control group non-synchronized design was used for this study. The program was developed and then administered to the experimental group for 12 weeks. Data were analyzed using descriptive analysis, X(2)-test, t-test, and ANCOVA with the SPSS 12.0 program. After the intervention, the experimental group showed significantly higher scores for nurses' attitude to reporting errors (experimental: 20.73 vs control: 20.52, F=5.483, p=.021) and reporting rate (experimental: 3.40 vs control: 1.33, F=1998.083, p<.001); no significant differences were found for organizational culture and intention to report. The study findings indicate that strategies that promote the reporting of errors play an important role in producing positive attitudes to reporting errors and improving reporting behavior. Further advanced strategies for reporting errors that can lead to improved patient safety should be developed and applied in a broad range of hospitals.

  2. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, the definition of medication error, the scale of the medication error problem, types of medication errors, common causes of medication errors, monitoring of medication errors, consequences of medication errors, and the prevention and management of medication errors are explained clearly, with tables that are easy to understand.

  3. English word frequency and recognition in bilinguals: Inter-corpus comparison and error analysis.

    Science.gov (United States)

    Shi, Lu-Feng

    2015-01-01

    This study is the second of a two-part investigation on lexical effects on bilinguals' performance on a clinical English word recognition test. Focus is on word-frequency effects using counts provided by four corpora. Frequency of occurrence was obtained for 200 NU-6 words from the Hoosier mental lexicon (HML) and three contemporary corpora, American National Corpora, Hyperspace analogue to language (HAL), and SUBTLEX(US). Correlation analysis was performed between word frequency and error rate. Ten monolinguals and 30 bilinguals participated. Bilinguals were further grouped according to their age of English acquisition and length of schooling/working in English. Word frequency significantly affected word recognition in bilinguals who acquired English late and had limited schooling/working in English. When making errors, bilinguals tended to replace the target word with a word of a higher frequency. Overall, the newer corpora outperformed the HML in predicting error rate. Frequency counts provided by contemporary corpora predict bilinguals' recognition of English monosyllabic words. Word frequency also helps explain top replacement words for misrecognized targets. Word-frequency effects are especially prominent for bilinguals foreign born and educated.

  4. The prevalence of coeliac disease is significantly higher in children compared with adults.

    Science.gov (United States)

    Mariné, M; Farre, C; Alsina, M; Vilar, P; Cortijo, M; Salas, A; Fernández-Bañares, F; Rosinach, M; Santaolalla, R; Loras, C; Marquès, T; Cusí, V; Hernández, M I; Carrasco, A; Ribes, J; Viver, J M; Esteve, M

    2011-02-01

    Some limited studies of coeliac disease have shown higher frequency of coeliac disease in infancy and adolescence than in adulthood. This finding has remained unnoticed and not adequately demonstrated. To assess whether there are age and gender differences in coeliac disease prevalence. A total of 4230 subjects were included consecutively (1 to ≥80 years old) reproducing the reference population by age and gender. Sample size was calculated assuming a population-based coeliac disease prevalence of 1:250. After an interim analysis, the paediatric sample was expanded (2010 children) due to high prevalence in this group. Anti-transglutaminase and antiendomysial antibodies were determined and duodenal biopsy was performed if positive. Log-linear models were fitted to coeliac disease prevalence by age allowing calculation of percentage change of prevalence. Differences between groups were compared using Chi-squared test. Twenty-one subjects had coeliac disease (male/female 1:2.5). Coeliac disease prevalence in the total population was 1:204. Coeliac disease prevalence was higher in children (1:71) than in adults (1:357) (P = 0.00005). A significant decrease of prevalence in older generations was observed [change of prevalence by age of -5% (95% CI: -7.58 to -2.42%)]. In the paediatric expanded group (1-14 years), a decrease of coeliac disease prevalence was also observed [prevalence change: -17% (95% CI: -25.02 to -6.10)]. The prevalence of coeliac disease in childhood was five times higher than in adults. Whether this difference is due to environmental factors influencing infancy, or latency of coeliac disease in adulthood, remains to be demonstrated in prospective longitudinal studies. © 2010 Blackwell Publishing Ltd.

  5. Identification of time-varying nonlinear systems using differential evolution algorithm

    DEFF Research Database (Denmark)

    Perisic, Nevena; Green, Peter L; Worden, Keith

    2013-01-01

    Identification of time-varying systems with nonlinearities can be a very challenging task. In order to avoid conventional least squares and gradient identification methods, which require uni-modal and doubly differentiable objective functions, this work proposes a modified differential evolution (DE) algorithm for the identification of time-varying systems. DE is an evolutionary optimisation method developed to perform direct search in a continuous space without requiring any derivative estimation. DE is modified so that the objective function changes with time to account for the continuing inclusion of new data within an error metric. This paper presents results of identification of a time-varying SDOF system with Coulomb friction using simulated noise-free and noisy data for the case of time-varying friction coefficient, stiffness and damping. The obtained results are promising...
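
    A rough sketch of the general approach under simplifying assumptions: constant (rather than time-varying) parameters, explicit Euler simulation, and SciPy's stock differential_evolution in place of the authors' modified DE. All parameter values are invented.

        import numpy as np
        from scipy.optimize import differential_evolution

        def simulate(params, t, f, m=1.0):
            # Explicit Euler integration of m*x'' + c*x' + k*x + mu*sign(x') = f(t)
            c, k, mu = params
            dt = t[1] - t[0]
            x, v = 0.0, 0.0
            xs = np.empty_like(t)
            for i in range(t.size):
                xs[i] = x
                a = (f[i] - c * v - k * x - mu * np.sign(v)) / m
                x, v = x + v * dt, v + a * dt
            return xs

        rng = np.random.default_rng(0)
        t = np.linspace(0.0, 10.0, 1000)
        f = np.sin(2 * np.pi * 1.5 * t)
        true_params = (0.4, 25.0, 0.2)   # damping, stiffness, friction coefficient
        data = simulate(true_params, t, f) + 1e-4 * rng.standard_normal(t.size)

        # DE needs only objective values, no gradients -- the sign() nonlinearity
        # makes the error surface non-smooth, which defeats gradient methods
        def objective(params):
            return np.mean((simulate(params, t, f) - data) ** 2)

        result = differential_evolution(objective, bounds=[(0, 2), (1, 100), (0, 1)], seed=1)
        print(result.x)   # should approach (0.4, 25.0, 0.2)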

  6. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word, the error vector of minimum Hamming weight is typically chosen (nearest-neighbour decoding).
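
    A toy illustration of the decoding step described above, assuming nearest-neighbour (minimum Hamming weight) decoding; the five-bit code is made up for the example.

        import numpy as np

        # Code words of a toy binary code
        code = np.array([[0, 0, 0, 0, 0],
                         [1, 1, 1, 0, 0],
                         [0, 0, 1, 1, 1],
                         [1, 1, 0, 1, 1]])

        def decode(received):
            # XOR gives the error vector for each code word; choose the
            # code word whose error vector has minimum Hamming weight
            errors = code ^ received
            idx = int(np.argmin(errors.sum(axis=1)))
            return code[idx], errors[idx]

        word, err = decode(np.array([1, 1, 1, 1, 0]))
        print(word, err)   # decodes to [1 1 1 0 0] with error [0 0 0 1 0]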

  7. How does aging affect the types of error made in a visual short-term memory 'object-recall' task?

    Science.gov (United States)

    Sapkota, Raju P; van der Linde, Ian; Pardhan, Shahina

    2014-01-01

    This study examines how normal aging affects the occurrence of different types of incorrect responses in a visual short-term memory (VSTM) object-recall task. Seventeen young (Mean = 23.3 years, SD = 3.76), and 17 normally aging older (Mean = 66.5 years, SD = 6.30) adults participated. Memory stimuli comprised two or four real world objects (the memory load) presented sequentially, each for 650 ms, at random locations on a computer screen. After a 1000 ms retention interval, a test display was presented, comprising an empty box at one of the previously presented two or four memory stimulus locations. Participants were asked to report the name of the object presented at the cued location. Error rates wherein participants reported the names of objects that had been presented in the memory display but not at the cued location (non-target errors) vs. objects that had not been presented at all in the memory display (non-memory errors) were compared. Significant effects of aging, memory load and target recency on error type and absolute error rates were found. Non-target error rate was higher than non-memory error rate in both age groups, indicating that VSTM may have been more often than not populated with partial traces of previously presented items. At high memory load, non-memory error rate was higher in young participants (compared to older participants) when the memory target had been presented at the earliest temporal position. However, non-target error rates exhibited a reversed trend, i.e., greater error rates were found in older participants when the memory target had been presented at the two most recent temporal positions. Data are interpreted in terms of proactive interference (earlier examined non-target items interfering with more recent items), false memories (non-memory items which have a categorical relationship to presented items, interfering with memory targets), slot and flexible resource models, and spatial coding deficits.

  8. Study of Error Propagation in the Transformations of Dynamic Thermal Models of Buildings

    Directory of Open Access Journals (Sweden)

    Loïc Raillon

    2017-01-01

    Dynamic behaviour of a system may be described by models with different forms: thermal (RC) networks, state-space representations, transfer functions, and ARX models. These models, which describe the same process, are used in design, simulation, optimal predictive control, parameter identification, fault detection and diagnosis, and so on. Since several forms are available, it is of interest to know which one is the most suitable, by estimating the sensitivity of each model form to transformation into the physical model, which is represented by a thermal network. A procedure for the study of error propagation by Monte Carlo simulation and for factor prioritization is exemplified on a simple, but representative, thermal model of a building. The analysis of the propagation of errors and of their influence on parameter estimation shows that the transformation from state-space representation to transfer function is more robust than the other way around. Therefore, if only one model form is chosen, the state-space representation is preferable.
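
    A minimal sketch of one such transformation and of Monte Carlo error propagation, using SciPy on a single-node RC network; the parameter values and the 5% parameter uncertainty are illustrative assumptions, not taken from the paper.

        import numpy as np
        from scipy import signal

        # Single-node RC model of a wall: C * dT/dt = (T_out - T)/R + q
        R, C = 0.05, 1.0e6   # illustrative thermal resistance [K/W], capacitance [J/K]
        A = np.array([[-1.0 / (R * C)]])
        B = np.array([[1.0 / (R * C), 1.0 / C]])   # inputs: outdoor temperature, heat flux
        Cm = np.array([[1.0]])
        D = np.zeros((1, 2))

        # State-space -> transfer function, one input channel at a time
        num, den = signal.ss2tf(A, B, Cm, D, input=0)
        print(num, den)

        # Monte Carlo error propagation: perturb R and C by 5% and observe the
        # spread of the pole location, i.e. how parameter errors propagate
        rng = np.random.default_rng(0)
        poles = -1.0 / (R * (1 + 0.05 * rng.standard_normal(1000))
                        * C * (1 + 0.05 * rng.standard_normal(1000)))
        print(poles.mean(), poles.std())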

  9. At the cross-roads: an on-road examination of driving errors at intersections.

    Science.gov (United States)

    Young, Kristie L; Salmon, Paul M; Lenné, Michael G

    2013-09-01

    A significant proportion of road trauma occurs at intersections. Understanding the nature of driving errors at intersections therefore has the potential to lead to significant injury reductions. To further understand how the complexity of modern intersections shapes driver behaviour, errors made at intersections are compared to errors made mid-block, and the role of wider systems failures in intersection error causation is investigated in an on-road study. Twenty-five participants drove a pre-determined urban route incorporating 25 intersections. Two in-vehicle observers recorded the errors made while a range of other data was collected, including driver verbal protocols, video, driver eye glance behaviour and vehicle data (e.g., speed, braking and lane position). Participants also completed a post-trial cognitive task analysis interview. Participants were found to make 39 specific error types, with speeding violations the most common. Participants made significantly more errors at intersections compared to mid-block, with misjudgement, action and perceptual/observation errors more commonly observed at intersections. Traffic signal configuration was found to play a key role in intersection error causation, with drivers making more errors at partially signalised compared to fully signalised intersections. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Ozone Production in Global Tropospheric Models: Quantifying Errors due to Grid Resolution

    Science.gov (United States)

    Wild, O.; Prather, M. J.

    2005-12-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the Western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes at a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63 and T106 resolution is likewise monotonic but still indicates large errors at 120 km scales, suggesting that T106 resolution is still too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over East Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution, but subsequent ozone production in the free troposphere is less significantly affected.
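
    The geometric convergence claim can be checked with a quick power-law fit of the quoted overestimates against grid spacing; treating the three percentages as samples of err ~ C * h^p is our simplification, not the paper's analysis.

        import numpy as np

        h = np.array([5.6, 2.8, 1.1])      # grid spacing in degrees (T21, T42, T106)
        err = np.array([27.0, 13.0, 5.0])  # overestimate of regional production, %

        # Power-law convergence err = C * h^p is a straight line in log-log space
        p, logC = np.polyfit(np.log(h), np.log(err), 1)
        print(f"estimated convergence order p = {p:.2f}")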

  11. Modeling of Geometric Error in Linear Guide Way to Improve the Vertical Three-axis CNC Milling Machine's Accuracy

    Science.gov (United States)

    Kwintarini, Widiyanti; Wibowo, Agung; Arthaya, Bagus M.; Yuwana Martawirya, Yatna

    2018-03-01

    The purpose of this study was to improve the accuracy of three-axis vertical CNC milling machines through a general approach based on mathematical modeling of machine tool geometric errors. The inaccuracy of CNC machines can be caused by geometric errors, which are an important factor during both the manufacturing process and the assembly phase, and which determine whether a machine can be built with high accuracy. The accuracy of a three-axis vertical milling machine can be improved by knowing the geometric errors and identifying the error position parameters in the machine tool through mathematical modeling. The geometric error in the machine tool consists of twenty-one error parameters: nine linear error parameters, nine angular error parameters and three perpendicularity error parameters. The mathematical model relates the alignment and angular errors in the components supporting the machine motion, namely the linear guide way and linear motion. The purpose of this mathematical modeling approach is the identification of geometric errors, which can serve as a reference during the design, assembly and maintenance stages to improve the accuracy of CNC machines. Mathematically modeling geometric errors in CNC machine tools can illustrate the relationship between alignment error, position and angle on a linear guide way of three-axis vertical milling machines.
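
    As a sketch of how such a model is commonly assembled (not necessarily the authors' exact formulation), each axis's six error motions can be written as a small-angle homogeneous transformation matrix and composed along the kinematic chain; all numerical error values below are invented.

        import numpy as np

        def htm(dx, dy, dz, ex, ey, ez):
            """Small-angle homogeneous transform for one axis: three linear
            errors (dx, dy, dz) [m] and three angular errors (ex, ey, ez) [rad]."""
            return np.array([[1.0, -ez,  ey,  dx],
                             [ ez, 1.0, -ex,  dy],
                             [-ey,  ex, 1.0,  dz],
                             [0.0, 0.0, 0.0, 1.0]])

        # Compose the X, Y and Z carriage error motions; perpendicularity errors
        # can be folded in as additional small fixed rotations between the stages
        T = (htm(2e-6, 1e-6, 0.0, 1e-6, 2e-6, 5e-6)
             @ htm(0.0, 3e-6, 1e-6, 0.0, 1e-6, 0.0)
             @ htm(1e-6, 0.0, 2e-6, 2e-6, 0.0, 1e-6))

        p = np.array([0.0, 0.0, 0.1, 1.0])   # nominal tool point, 100 mm below spindle
        print((T @ p - p)[:3])               # resulting volumetric error at the tool [m]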

  12. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) the CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way of identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, should however pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions; (3) orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  14. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving an overall error rate of 22.6% and 8.2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving an overall error rate of 16.9% and 4.4% during paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistan public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.

  15. Resisting attraction: Individual differences in executive control are associated with subject-verb agreement errors in production.

    Science.gov (United States)

    Veenstra, Alma; Antoniou, Kyriakos; Katsos, Napoleon; Kissine, Mikhail

    2018-04-19

    We propose that attraction errors in agreement production (e.g., the key to the cabinets are missing) are related to two components of executive control: working memory and inhibitory control. We tested 138 children aged 10 to 12, an age when children are expected to produce high rates of errors. To increase the potential of individual variation in executive control skills, participants came from monolingual, bilingual, and bidialectal language backgrounds. Attraction errors were elicited with a picture description task in Dutch and executive control was measured with a digit span task, Corsi blocks task, switching task, and attentional networks task. Overall, higher rates of attraction errors were negatively associated with higher verbal working memory and, independently, with higher inhibitory control. To our knowledge, this is the first demonstration of the role of both working memory and inhibitory control in attraction errors in production. Implications for memory- and grammar-based models are discussed. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  17. State Space identification of Civil Engineering Structures from Output Measurements

    DEFF Research Database (Denmark)

    Kirkegaard, Poul Henning; Andersen, P.

    1997-01-01

    The Stochastic Subspace Technique (SST) is considered for identification of civil engineering structures. The SST is compared with the stochastic realization estimator Matrix Block Hankel (MBH) and a prediction error method (PEM). The results show that the investigated techniques give good results in terms of estimated modal parameters and mode shapes. Especially...

  18. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
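
    A minimal sketch contrasting the two update rules on a compound of two cues that always precede the outcome; the learning rate and trial structure are arbitrary. Under TER the cues share the prediction, while under LER each cue independently approaches the outcome.

        import numpy as np

        def train(trials, n_cues, lr=0.1, local=False):
            """Delta-rule learning. TER: one error term from the summed prediction
            of all present cues. LER: each cue's error uses its own prediction."""
            w = np.zeros(n_cues)
            for cues, outcome in trials:                   # cues: 0/1 presence vector
                if local:
                    w += lr * cues * (outcome - w)         # local error reduction
                else:
                    w += lr * cues * (outcome - w @ cues)  # total error reduction
            return w

        # Two cues always reinforced in compound: TER shares the prediction
        # (weights sum to 1), LER drives each weight to the outcome separately
        trials = [(np.array([1.0, 1.0]), 1.0)] * 100
        print(train(trials, 2))              # ~[0.5, 0.5]
        print(train(trials, 2, local=True))  # ~[1.0, 1.0]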

  19. Sex-Specific Effects of Gender Identification on Pain Study Recruitment.

    Science.gov (United States)

    Mattos Feijó, Larissa; Tarman, Guliz Zeynep; Fontaine, Charlotte; Harrison, Richard; Johnstone, Tom; Salomons, Tim

    2018-02-01

    Epidemiological, clinical, and laboratory studies show sex differences in pain responses, with women more sensitive to nociceptive stimulation and more vulnerable to long-term pain conditions than men. Because of evidence that men are culturally reinforced for the ability to endure (or under-report) pain, some of these findings might be explained by sociocultural beliefs about gender-appropriate behavior. One potential manifestation of these effects might be differential participation in pain studies, with men adhering to stereotypical masculine roles viewing participation as a way to demonstrate their masculinity. To test this possibility, we assessed gender identification in 137 healthy participants. At the end of the assessment, they were asked if they would like to participate in other research studies. Interested participants were then asked to participate in a study involving administration of pain-evoking stimulation. We compared individuals who agreed to participate in the pain study with those who declined. We observed a significant Sex × Participation interaction in masculine gender identification, such that men (but not women) who agreed to participate identified significantly more with masculine gender. Among masculine gender traits examined, we found that high levels of aggression and competitiveness were the strongest predictors of pain study participation. Our results suggest that men in pain studies might have higher levels of masculine gender identification than the wider male population. Taken together with previous findings of lower levels of pain sensitivity (or reporting) in masculine-identifying male participants, these results suggest an explanation for some of the sex-related differences observed in pain responses. To examine whether sex and gender affect willingness to participate in pain studies, we assessed gender identification in men and women, then attempted to recruit them to participate in a pain study. Men who agree to participate

  20. Prevalence and risk factors for refractive errors in the South Indian adult population: The Andhra Pradesh Eye disease study

    Directory of Open Access Journals (Sweden)

    Sannapaneni Krishnaiah

    2008-12-01

    Sannapaneni Krishnaiah, Marmamula Srinivas, Rohit C Khanna, Gullapalli N Rao. L V Prasad Eye Institute, Banjara Hills, Hyderabad, India; International Center for Advancement of Rural Eye Care, L V Prasad Eye Institute, Banjara Hills, Hyderabad, India; Vision CRC, University of New South Wales, Sydney, NSW, Australia. Aim: To report the prevalence, risk factors and associated population attributable risk percentage (PAR) for refractive errors in the South Indian adult population. Methods: A population-based cross-sectional epidemiologic study was conducted in the Indian state of Andhra Pradesh. A multistage cluster, systematic, stratified random sampling method was used to obtain participants (n = 10293) for this study. Results: The age-gender-area-adjusted prevalence rates in those ≥40 years of age were determined for myopia (spherical equivalent [SE] < −0.5 D) 34.6% (95% confidence interval [CI]: 33.1–36.1), high myopia (SE < −5.0 D) 4.5% (95% CI: 3.8–5.2), hyperopia (SE > +0.5 D) 18.4% (95% CI: 17.1–19.7), astigmatism (cylinder < −0.5 D) 37.6% (95% CI: 36–39.2), and anisometropia (SE difference between right and left eyes > 0.5 D) 13.0% (95% CI: 11.9–14.1). The prevalence of myopia, astigmatism, high myopia, and anisometropia significantly increased with increasing age (all p < 0.0001). There was no gender difference in prevalence rates for any type of refractive error, though women had a significantly higher rate of hyperopia than men (p < 0.0001). Hyperopia was significantly higher among those with a higher educational level (odds ratio [OR] 2.49; 95% CI: 1.51–3.95) and among the hypertensive group (OR 1.24; 95% CI: 1.03–1.49). The severity of lens nuclear opacity was positively associated with myopia and negatively associated with hyperopia. Conclusions: The prevalence of myopia in this adult Indian population is much higher than in similarly aged white populations. These results confirm the previously...

  1. Characteristics of pediatric chemotherapy medication errors in a national error reporting database.

    Science.gov (United States)

    Rinke, Michael L; Shore, Andrew D; Morlock, Laura; Hicks, Rodney W; Miller, Marlene R

    2007-07-01

    Little is known regarding chemotherapy medication errors in pediatrics despite studies suggesting high rates of overall pediatric medication errors. In this study, the authors examined patterns in pediatric chemotherapy errors. The authors queried the United States Pharmacopeia MEDMARX database, a national, voluntary, Internet-accessible error reporting system, for all error reports from 1999 through 2004 that involved chemotherapy medications and patients aged <18 years. Of these error reports, 85% reached the patient, and 15.6% required additional patient monitoring or therapeutic intervention. Forty-eight percent of errors originated in the administering phase of medication delivery, and 30% originated in the drug-dispensing phase. Of the 387 medications cited, 39.5% were antimetabolites, 14.0% were alkylating agents, 9.3% were anthracyclines, and 9.3% were topoisomerase inhibitors. The most commonly involved chemotherapeutic agents were methotrexate (15.3%), cytarabine (12.1%), and etoposide (8.3%). The most common error types were improper dose/quantity (22.9% of 327 cited error types), wrong time (22.6%), omission error (14.1%), and wrong administration technique/wrong route (12.2%). The most common error causes were performance deficit (41.3% of 547 cited error causes), equipment and medication delivery devices (12.4%), communication (8.8%), knowledge deficit (6.8%), and written order errors (5.5%). Four of the 5 most serious errors occurred at community hospitals. Pediatric chemotherapy errors often reached the patient, potentially were harmful, and differed in quality between outpatient and inpatient areas. This study indicated which chemotherapeutic agents most often were involved in errors and that administering errors were common. Investigation is needed regarding targeted medication administration safeguards for these high-risk medications. Copyright (c) 2007 American Cancer Society.

  2. Working memory load impairs the evaluation of behavioral errors in the medial frontal cortex.

    Science.gov (United States)

    Maier, Martin E; Steinhauser, Marco

    2017-10-01

    Early error monitoring in the medial frontal cortex enables error detection and the evaluation of error significance, which helps prioritize adaptive control. This ability has been assumed to be independent from central capacity, a limited pool of resources assumed to be involved in cognitive control. The present study investigated whether error evaluation depends on central capacity by measuring the error-related negativity (Ne/ERN) in a flanker paradigm while working memory load was varied on two levels. We used a four-choice flanker paradigm in which participants had to classify targets while ignoring flankers. Errors could be due to responding either to the flankers (flanker errors) or to none of the stimulus elements (nonflanker errors). With low load, the Ne/ERN was larger for flanker errors than for nonflanker errors-an effect that has previously been interpreted as reflecting differential significance of these error types. With high load, no such effect of error type on the Ne/ERN was observable. Our findings suggest that working memory load does not impair the generation of an Ne/ERN per se but rather impairs the evaluation of error significance. They demonstrate that error monitoring is composed of capacity-dependent and capacity-independent mechanisms. © 2017 Society for Psychophysiological Research.

  3. Nuclear power plant personnel errors in decision-making as an object of probabilistic risk assessment

    International Nuclear Information System (INIS)

    Reer, B.

    1993-09-01

    The integration of human error - also called man-machine system analysis (MMSA) - is an essential part of probabilistic risk assessment (PRA). A new method is presented which allows for a systematic and comprehensive PRA inclusion of decision-based errors due to conflicts or similarities. For the error identification procedure, new questioning techniques are developed. These errors are shown to be identifiable by looking at retroactions caused by subordinate goals as components of the overall safety-relevant goal. New quantification methods for estimating situation-specific probabilities are developed. The factors conflict and similarity are operationalized in a way that allows their quantification based on information which is usually available in PRA. The quantification procedure uses extrapolations and interpolations based on a sparse set of data related to decision-based errors. Moreover, for passive errors in decision-making a completely new approach is presented where errors are quantified via a delay in initiating the required action rather than via error probabilities. The practicability of this dynamic approach is demonstrated by a probabilistic analysis of the actions required during the total loss of feedwater event at the Davis-Besse plant in 1985. The extensions of the ''classical'' PRA method developed in this work are applied to a MMSA of the decay heat removal (DHR) of the ''HTR-500''. Errors in decision-making - as potential roots of extraneous acts - are taken into account in a comprehensive and systematic manner. Five additional errors are identified. However, the probabilistic quantification results in a nonsignificant increase of the DHR failure probability. (orig.)

  4. Optimization of intelligent infusion pump technology to minimize vasopressor pump programming errors.

    Science.gov (United States)

    Vadiei, Nina; Shuman, Carrie A; Murthy, Manasa S; Daley, Mitchell J

    2017-08-01

    There is a lack of data evaluating the impact of hard limit implementation into intelligent infusion pump technology (IIPT). The purpose of this study was to determine if incorporation of vasopressor upper hard limits (UHL) into IIPT increases efficacy of alerts by preventing pump programming errors. Retrospective review from five hospitals within a single healthcare network between April 1, 2013 and May 31, 2014. A total of 65,680 vasopressor data entries were evaluated; 19,377 prior to hard limit implementation and 46,303 after hard limit implementation. The primary outcome was the percent of effective alerts. The secondary outcome was the proportional dose increase from the soft limit provided. A reduction in alert rate occurred after incorporation of hard limits to the IIPT drug library (pre-UHL 4.7% vs. post-UHL 4.0%) with a subsequent increase in the number of errors prevented as represented by a higher effective alert rate (pre-UHL 23.0% vs. post-UHL 37.3%; p < 0.001). The proportional dose increase was significantly reduced (pre-UHL 188% ± 380% vs. post-UHL 95% ± 128%; p < 0.001). Incorporation of UHLs into IIPT in a multi-site health system with varying intensive care unit and emergency department acuity increases alert effectiveness, reduces dosing errors, and reduces the magnitude of dosing errors that reach the patient.

  5. Effects of learning climate and registered nurse staffing on medication errors.

    Science.gov (United States)

    Chang, YunKyung; Mark, Barbara

    2011-01-01

    Despite increasing recognition of the significance of learning from errors, little is known about how learning climate contributes to error reduction. The purpose of this study was to investigate whether learning climate moderates the relationship between error-producing conditions and medication errors. A cross-sectional descriptive study was done using data from 279 nursing units in 146 randomly selected hospitals in the United States. Error-producing conditions included work environment factors (work dynamics and nurse mix), team factors (communication with physicians and nurses' expertise), personal factors (nurses' education and experience), patient factors (age, health status, and previous hospitalization), and medication-related support services. Poisson models with random effects were used with the nursing unit as the unit of analysis. A significant negative relationship was found between learning climate and medication errors. It also moderated the relationship between nurse mix and medication errors: When learning climate was negative, having more registered nurses was associated with fewer medication errors. However, no relationship was found between nurse mix and medication errors at either positive or average levels of learning climate. Learning climate did not moderate the relationship between work dynamics and medication errors. The way nurse mix affects medication errors depends on the level of learning climate. Nursing units with fewer registered nurses and frequent medication errors should examine their learning climate. Future research should be focused on the role of learning climate as related to the relationships between nurse mix and medication errors.

  6. Closed loop identification of a piezoelectrically controlled radial gas bearing: Theory and experiment

    DEFF Research Database (Denmark)

    Sekunda, André Krabdrup; Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2018-01-01

    Gas bearing systems have extremely small damping properties. Feedback control is thus employed to increase the damping of gas bearings. Such a feedback loop correlates the input with the measurement noise, which in turn makes the assumptions for direct identification invalid. The originality of this article lies in the investigation of the impact of using different identification methods to identify a rotor-bearing system's dynamic model when a feedback loop is active. Two different identification methods are employed. The first method is the open-loop Prediction Error Method, while the other method...
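
    The input-noise correlation that invalidates direct identification can be seen in a few lines: a toy first-order plant under proportional feedback on a noisy measurement. The plant and gain values are arbitrary illustrations.

        import numpy as np

        rng = np.random.default_rng(0)
        K, a, b = 0.8, 0.9, 1.0       # feedback gain and first-order plant parameters
        y = 0.0
        us, es = [], []
        for _ in range(10_000):
            e = rng.standard_normal()  # measurement noise
            u = -K * (y + e)           # controller acts on the noisy measurement
            us.append(u)
            es.append(e)
            y = a * y + b * u          # plant update
        print(np.corrcoef(us, es)[0, 1])  # clearly nonzero: input correlates with noise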

  7. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of medication errors that actually takes place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included, poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

    ...by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural...

  9. Phonological errors predominate in Arabic spelling across grades 1-9.

    Science.gov (United States)

    Abu-Rabia, Salim; Taha, Haitham

    2006-03-01

    Most spelling error analysis has been conducted in Latin orthographies, and rarely in other orthographies such as Arabic. Two hundred and eighty-eight students in grades 1-9 participated in the study. They were presented with nine lists of words to test their spelling skills. Their spelling errors were analyzed by error category. The most frequent errors were phonological. The results did not indicate any significant differences in the percentages of phonological errors across grades one to nine. Thus, phonology probably presents the greatest challenge to students developing spelling skills in Arabic.

  10. Multi-function radar emitter identification based on stochastic syntax-directed translation schema

    OpenAIRE

    Liu, Haijun; Yu, Hongqi; Sun, Zhaolin; Diao, Jietao

    2014-01-01

    To cope with the problem of emitter identification caused by the uncertainty of radar words measured from multi-function radar emitters, this paper proposes a new identification method based on stochastic syntax-directed translation schema (SSDTS). This method, which is deduced from the syntactic modeling of multi-function radars, considers the probabilities of radar phrase appearance in different radar modes as well as the probabilities of radar word errors occurring in different radar phrases...

  11. Accurate recapture identification for genetic mark–recapture studies with error-tolerant likelihood-based match calling and sample clustering

    Science.gov (United States)

    Sethi, Suresh; Linden, Daniel; Wenburg, John; Lewis, Cara; Lemons, Patrick R.; Fuller, Angela K.; Hare, Matthew P.

    2016-01-01

    Error-tolerant likelihood-based match calling presents a promising technique to accurately identify recapture events in genetic mark–recapture studies by combining probabilities of latent genotypes and probabilities of observed genotypes, which may contain genotyping errors. Combined with clustering algorithms to group samples into sets of recaptures based upon pairwise match calls, these tools can be used to reconstruct accurate capture histories for mark–recapture modelling. Here, we assess the performance of a recently introduced error-tolerant likelihood-based match-calling model and sample clustering algorithm for genetic mark–recapture studies. We assessed both biallelic (i.e. single nucleotide polymorphisms; SNP) and multiallelic (i.e. microsatellite; MSAT) markers using a combination of simulation analyses and case study data on Pacific walrus (Odobenus rosmarus divergens) and fishers (Pekania pennanti). A novel two-stage clustering approach is demonstrated for genetic mark–recapture applications. First, repeat captures within a sampling occasion are identified. Subsequently, recaptures across sampling occasions are identified. The likelihood-based matching protocol performed well in simulation trials, demonstrating utility for use in a wide range of genetic mark–recapture studies. Moderately sized SNP (64+) and MSAT (10–15) panels produced accurate match calls for recaptures and accurate non-match calls for samples from closely related individuals in the face of low to moderate genotyping error. Furthermore, matching performance remained stable or increased as the number of genetic markers increased, genotyping error notwithstanding.
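
    A greatly simplified, haploid toy version of likelihood-based match calling (not the authors' model): mismatching loci are attributed to genotyping error under the same-individual hypothesis, and per-locus likelihood ratios are summed in log space. The allele frequencies and error rate are invented.

        import numpy as np

        def match_llr(g1, g2, freqs, err=0.02):
            """Log-likelihood ratio that two single-allele genotype vectors come
            from the same individual versus two unrelated individuals."""
            llr = 0.0
            for a1, a2 in zip(g1, g2):
                # Under H_same a mismatch is explained by genotyping error;
                # under H_diff the second genotype is an independent random draw
                p_same = (1.0 - err) if a1 == a2 else err * freqs[a2]
                p_diff = freqs[a2]
                llr += np.log(p_same / p_diff)
            return llr

        freqs = {0: 0.5, 1: 0.3, 2: 0.2}
        print(match_llr([0, 1, 2, 0], [0, 1, 2, 0], freqs))  # large positive: recapture
        print(match_llr([0, 1, 2, 0], [0, 2, 1, 2], freqs))  # negative: different animals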

  12. Technology and medication errors: impact in nursing homes.

    Science.gov (United States)

    Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis

    2014-01-01

    The purpose of this paper is to study a medication distribution technology's (MDT) impact on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed using: total errors; medication error type; severity; and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technology such as electronic prescribers or bar code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.

  13. Counting OCR errors in typeset text

    Science.gov (United States)

    Sandberg, Jonathan S.

    1995-03-01

    Frequently object recognition accuracy is a key component in the performance analysis of pattern matching systems. In the past three years, the results of numerous excellent and rigorous studies of OCR system typeset-character accuracy (henceforth OCR accuracy) have been published, encouraging performance comparisons between a variety of OCR products and technologies. These published figures are important; OCR vendor advertisements in the popular trade magazines lead readers to believe that published OCR accuracy figures affect market share in the lucrative OCR market. Curiously, a detailed review of many of these OCR error occurrence counting results reveals that they are not reproducible as published and they are not strictly comparable due to larger variances in the counts than would be expected from the sampling variance. Naturally, since OCR accuracy is based on a ratio of the number of OCR errors over the size of the text searched for errors, imprecise OCR error accounting leads to similar imprecision in OCR accuracy. Some published papers use informal, non-automatic, or intuitively correct OCR error accounting. Still other published results present OCR error accounting methods based on string matching algorithms such as dynamic programming using Levenshtein (edit) distance but omit critical implementation details (such as the existence of suspect markers in the OCR generated output or the weights used in the dynamic programming minimization procedure). The problem with not specifically revealing the accounting method is that the numbers of errors found by different methods are significantly different. This paper identifies the basic accounting methods used to measure OCR errors in typeset text and offers an evaluation and comparison of the various accounting methods.
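
    A standard dynamic-programming edit-distance implementation of the kind the paper discusses; real OCR accounting must also handle suspect markers and weighting choices, which this sketch omits. The sample strings are invented.

        def levenshtein(a, b):
            """Dynamic-programming edit distance: minimum number of insertions,
            deletions and substitutions turning string a into string b."""
            prev = list(range(len(b) + 1))
            for i, ca in enumerate(a, 1):
                curr = [i]
                for j, cb in enumerate(b, 1):
                    curr.append(min(prev[j] + 1,                # deletion
                                    curr[j - 1] + 1,            # insertion
                                    prev[j - 1] + (ca != cb)))  # substitution
                prev = curr
            return prev[-1]

        truth = "the quick brown fox"
        ocr_output = "tbe qu1ck brown f0x."
        errors = levenshtein(truth, ocr_output)
        print(errors, 1 - errors / len(truth))  # error count and a naive accuracy figure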

  14. Radiologic errors, past, present and future.

    Science.gov (United States)

    Berlin, Leonard

    2014-01-01

    During the 10-year period beginning in 1949 with publication of five articles in two radiology journals and the UK's The Lancet, a California radiologist named L.H. Garland almost single-handedly shocked the entire medical and especially the radiologic community. He focused their attention on the fact now known and accepted by all, but at that time not previously recognized and acknowledged only with great reluctance, that a substantial degree of observer error was prevalent in radiologic interpretation. In the more than half-century that followed, Garland's pioneering work has been affirmed and reaffirmed by numerous researchers. Retrospective studies disclosed then and still disclose today that diagnostic errors in radiologic interpretations of plain radiographic (as well as CT, MR, ultrasound, and radionuclide) images hover in the 30% range, not too dissimilar to the error rates in clinical medicine. Seventy percent of these errors are perceptual in nature, i.e., the radiologist does not "see" the abnormality on the imaging exam, perhaps due to poor conspicuity, satisfaction of search, or simply the "inexplicable psycho-visual phenomena of human perception." The remainder are cognitive errors: the radiologist sees an abnormality but fails to render a correct diagnosis by attaching the wrong significance to what is seen, perhaps due to inadequate knowledge, or an alliterative or judgmental error. Computer-assisted detection (CAD), a technology that for the past two decades has been utilized primarily in mammographic interpretation, increases sensitivity but at the same time decreases specificity; whether it reduces errors is debatable. Efforts to reduce diagnostic radiological errors continue, but the degree to which they will be successful remains to be determined.

  15. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modeling heteroscedastic residual errors

    Science.gov (United States)

    McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George

    2017-03-01

    Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
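
    A minimal sketch of the transformation step shared by the Pareto optimal schemes: residuals are computed on Box-Cox-transformed flows, with λ = 0 reducing to the log scheme. The observed and simulated values below are invented.

        import numpy as np

        def boxcox(q, lam):
            """Box-Cox transform of flow q; lam = 0 is the log scheme."""
            return np.log(q) if lam == 0 else (q**lam - 1.0) / lam

        obs = np.array([0.5, 2.0, 10.0, 80.0])   # observed flows
        sim = np.array([0.6, 1.5, 12.0, 60.0])   # simulated flows

        # Residuals on the transformed scale shrink high-flow errors relative
        # to low-flow errors, counteracting heteroscedasticity
        for lam in (0.0, 0.2, 0.5, 1.0):
            z = boxcox(obs, lam) - boxcox(sim, lam)
            print(lam, np.round(z, 3))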

  16. Improving probabilistic prediction of daily streamflow by identifying Pareto optimal approaches for modelling heteroscedastic residual errors

    Science.gov (United States)

    David, McInerney; Mark, Thyer; Dmitri, Kavetski; George, Kuczera

    2017-04-01

    This study provides guidance to hydrological researchers which enables them to provide probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g. high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is commonly known that hydrological model residual errors are heteroscedastic, i.e. there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts on predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.

  17. Statistical methods for biodosimetry in the presence of both Berkson and classical measurement error

    Science.gov (United States)

    Miller, Austin

    In radiation epidemiology, the true dose received by those exposed cannot be assessed directly. Physical dosimetry uses a deterministic function of the source term, distance and shielding to estimate dose. For the atomic bomb survivors, the physical dosimetry system is well established. The classical measurement errors plaguing the location and shielding inputs to the physical dosimetry system are well known. Adjusting for the associated biases requires an estimate for the classical measurement error variance, for which no data-driven estimate exists. In this case, an instrumental variable solution is the most viable option to overcome the classical measurement error indeterminacy. Biological indicators of dose may serve as instrumental variables. Specification of the biodosimeter dose-response model requires identification of the radiosensitivity variables, for which we develop statistical definitions and variables. More recently, researchers have recognized Berkson error in the dose estimates, introduced by averaging assumptions for many components in the physical dosimetry system. We show that Berkson error induces a bias in the instrumental variable estimate of the dose-response coefficient, and then address the estimation problem. This model is specified by developing an instrumental variable mixed measurement error likelihood function, which is then maximized using a Monte Carlo EM Algorithm. These methods produce dose estimates that incorporate information from both physical and biological indicators of dose, as well as the first instrumental variable based data-driven estimate for the classical measurement error variance.
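
    The contrasting effects of the two error types on a linear dose response can be reproduced in a short simulation (standard textbook results, not specific to this dissertation): classical error attenuates the estimated slope, while Berkson error leaves it unbiased but inflates residual scatter. All numerical values are invented.

        import numpy as np

        rng = np.random.default_rng(1)
        n, beta = 100_000, 2.0

        # Classical error: observed dose w = true dose x + noise -> slope attenuates
        x = rng.normal(0.0, 1.0, n)
        y = beta * x + rng.normal(0.0, 1.0, n)
        w = x + rng.normal(0.0, 1.0, n)
        print(np.polyfit(w, y, 1)[0])    # ~1.0 = beta * var(x) / (var(x) + var(u))

        # Berkson error: true dose x = assigned dose w_b + noise (e.g., a group
        # average) -> slope stays unbiased, residual scatter grows
        wb = rng.normal(0.0, 1.0, n)
        xb = wb + rng.normal(0.0, 0.5, n)
        yb = beta * xb + rng.normal(0.0, 1.0, n)
        print(np.polyfit(wb, yb, 1)[0])  # ~2.0 = beta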

  18. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll)

  19. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
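
    A Monte Carlo sketch of the two constructs named above, for an assumed correlated bivariate normal error: the 50% probability circle (circular error probable, CEP) and the 50% probability ellipse via the Mahalanobis distance, whose squared threshold for two degrees of freedom is -2 ln 0.5. The covariance values are invented.

        import numpy as np

        rng = np.random.default_rng(0)
        sx, sy, rho = 3.0, 1.5, 0.4
        cov = np.array([[sx**2, rho * sx * sy],
                        [rho * sx * sy, sy**2]])
        pts = rng.multivariate_normal([0.0, 0.0], cov, 100_000)

        # Circular error probable: radius of the circle containing 50% of the fixes
        cep = np.median(np.hypot(pts[:, 0], pts[:, 1]))

        # 50% probability ellipse: squared Mahalanobis distance below the
        # chi-square(2 dof) median, which equals -2 * ln(0.5)
        d2 = np.einsum('ij,jk,ik->i', pts, np.linalg.inv(cov), pts)
        print(cep, np.mean(d2 <= -2.0 * np.log(0.5)))  # second value ~0.5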

  20. Sleep, mental health status, and medical errors among hospital nurses in Japan.

    Science.gov (United States)

    Arimura, Mayumi; Imai, Makoto; Okawa, Masako; Fujimura, Toshimasa; Yamada, Naoto

    2010-01-01

    Medical error involving nurses is a critical issue since nurses' actions will have a direct and often significant effect on the prognosis of their patients. To investigate the significance of nurse health in Japan and its potential impact on patient services, a questionnaire-based survey amongst nurses working in hospitals was conducted, with the specific purpose of examining the relationship between shift work, mental health and self-reported medical errors. Multivariate analysis revealed significant associations between the shift work system, General Health Questionnaire (GHQ) scores and nurse errors: the odds ratios for shift system and GHQ were 2.1 and 1.1, respectively. It was confirmed that both sleep and mental health status among hospital nurses were relatively poor, and that shift work and poor mental health were significant factors contributing to medical errors.