WorldWideScience

Sample records for errors prescribing faults

  1. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  2. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of an electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving overall error rates of 22.6% and 8.2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving overall error rates of 16.9% and 4.4% during paper-based and electronic prescribing, respectively. Electronic prescribing had a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistani public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing on the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.
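
    As a quick arithmetic check of the reported rates, here is a minimal Python sketch: the counts come from the abstract above, while the script itself is purely illustrative.

    ```python
    def error_rate(errors: int, orders: int) -> float:
        """Prescribing error rate, in errors per 100 medication orders."""
        return 100.0 * errors / orders

    # Inpatient orders: paper-based vs electronic (counts from the abstract)
    print(f"inpatient, paper:      {error_rate(3008, 13328):.1f}%")  # ~22.6%
    print(f"inpatient, electronic: {error_rate(1147, 14064):.1f}%")  # ~8.2%

    # Discharge prescriptions
    print(f"discharge, paper:      {error_rate(418, 2480):.1f}%")    # ~16.9%
    print(f"discharge, electronic: {error_rate(123, 2790):.1f}%")    # ~4.4%
    ```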

  3. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  4. E-Prescribing Errors in Community Pharmacies: Exploring Consequences and Contributing Factors

    Science.gov (United States)

    Stone, Jamie A.; Chui, Michelle A.

    2014-01-01

    Objective To explore the types of e-prescribing errors in community pharmacies and their potential consequences, as well as the factors that contribute to e-prescribing errors. Methods Data collection involved 45 total hours of direct observation in five pharmacies. Follow-up interviews were conducted with 20 study participants. Transcripts from observations and interviews were subjected to content analysis using NVivo 10. Results Pharmacy staff detected 75 e-prescription errors during the 45 hours of observation. The most common e-prescribing errors were wrong drug quantity, wrong dosing directions, wrong duration of therapy, and wrong dosage formulation. Participants estimated that 5 in 100 e-prescriptions have errors. Drug classes implicated in e-prescribing errors were anti-infectives, inhalers, and ophthalmic and topical agents. The potential consequences of e-prescribing errors included an increased likelihood of the patient receiving incorrect drug therapy, poor disease management for patients, additional work for pharmacy personnel, increased costs for pharmacies and patients, and frustration for patients and pharmacy staff. Factors that contributed to errors included technology incompatibility between pharmacy and clinic systems; technology design issues, such as the use of auto-populate features and dropdown menus; and inadvertent entry of incorrect information. Conclusion Study findings suggest that a wide range of e-prescribing errors are encountered in community pharmacies. Pharmacists and technicians perceive the causes of e-prescribing errors to be multidisciplinary and multifactorial; that is, e-prescribing errors can originate from technology used in both prescriber offices and pharmacies. PMID:24657055

  5. The impact of a closed-loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before-and-after study.

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-08-01

    To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p < 0.001; χ2 test). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased.

  6. The impact of a closed‐loop electronic prescribing and administration system on prescribing errors, administration errors and staff time: a before‐and‐after study

    Science.gov (United States)

    Franklin, Bryony Dean; O'Grady, Kara; Donyai, Parastou; Jacklin, Ann; Barber, Nick

    2007-01-01

    Objectives To assess the impact of a closed-loop electronic prescribing, automated dispensing, barcode patient identification and electronic medication administration record (EMAR) system on prescribing and administration errors, confirmation of patient identity before administration, and staff time. Design, setting and participants Before-and-after study in a surgical ward of a teaching hospital, involving patients and staff of that ward. Intervention Closed-loop electronic prescribing, automated dispensing, barcode patient identification and EMAR system. Main outcome measures Percentage of new medication orders with a prescribing error, percentage of doses with medication administration errors (MAEs) and percentage given without checking patient identity. Time spent prescribing and providing a ward pharmacy service. Nursing time on medication tasks. Results Prescribing errors were identified in 3.8% of 2450 medication orders pre-intervention and 2.0% of 2353 orders afterwards (p < 0.001; χ2 test). Medical staff required 15 s to prescribe a regular inpatient drug pre-intervention and 39 s afterwards (p = 0.03; t test). Time spent providing a ward pharmacy service increased from 68 min to 98 min each weekday (p = 0.001; t test); 22% of drug charts were unavailable pre-intervention. Time per drug administration round decreased from 50 min to 40 min (p = 0.006; t test); nursing time on medication tasks outside of drug rounds increased from 21.1% to 28.7% (p = 0.006; χ2 test). Conclusions A closed-loop electronic prescribing, dispensing and barcode patient identification system reduced prescribing errors and MAEs, and increased confirmation of patient identity before administration. Time spent on medication-related tasks increased. PMID:17693676

  7. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’, is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing, and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  8. Using Fault Trees to Advance Understanding of Diagnostic Errors.

    Science.gov (United States)

    Rogith, Deevakar; Iyengar, M Sriram; Singh, Hardeep

    2017-11-01

    Diagnostic errors annually affect at least 5% of adults in the outpatient setting in the United States. Formal analytic techniques are only infrequently used to understand them, in part because of the complexity of diagnostic processes and clinical work flows involved. In this article, diagnostic errors were modeled using fault tree analysis (FTA), a form of root cause analysis that has been successfully used in other high-complexity, high-risk contexts. How factors contributing to diagnostic errors can be systematically modeled by FTA to inform error understanding and error prevention is demonstrated. A team of three experts reviewed 10 published cases of diagnostic error and constructed fault trees. The fault trees were modeled according to currently available conceptual frameworks characterizing diagnostic error. The 10 trees were then synthesized into a single fault tree to identify common contributing factors and pathways leading to diagnostic error. FTA is a visual, structured, deductive approach that depicts the temporal sequence of events and their interactions in a formal logical hierarchy. The visual FTA enables easier understanding of causative processes and cognitive and system factors, as well as rapid identification of common pathways and interactions in a unified fashion. In addition, it enables calculation of empirical estimates for causative pathways. Thus, fault trees might provide a useful framework for both quantitative and qualitative analysis of diagnostic errors. Future directions include establishing validity and reliability by modeling a wider range of error cases, conducting quantitative evaluations, and undertaking deeper exploration of other FTA capabilities. Copyright © 2017 The Joint Commission. Published by Elsevier Inc. All rights reserved.
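
    To illustrate the mechanics of the fault tree analysis referenced here, the following is a minimal Python sketch with AND/OR gates over independent basic events; the event names and probabilities are hypothetical and are not taken from the reviewed cases.

    ```python
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Event:
        """Node in a fault tree: a basic event, or a gate over child events."""
        name: str
        prob: float = 0.0            # probability, used for basic events only
        gate: str = "BASIC"          # "BASIC", "AND", or "OR"
        children: List["Event"] = field(default_factory=list)

        def probability(self) -> float:
            if self.gate == "BASIC":
                return self.prob
            child_p = [c.probability() for c in self.children]
            if self.gate == "AND":   # all children must occur
                p = 1.0
                for q in child_p:
                    p *= q
                return p
            # OR gate: at least one child occurs (independence assumed)
            p = 1.0
            for q in child_p:
                p *= (1.0 - q)
            return 1.0 - p

    # Illustrative (made-up) contributing factors for a missed diagnosis
    top = Event("diagnostic error", gate="AND", children=[
        Event("information gap", gate="OR", children=[
            Event("history not taken", prob=0.10),
            Event("test not ordered", prob=0.05),
        ]),
        Event("follow-up failure", prob=0.20),
    ])
    print(f"P(top event) = {top.probability():.4f}")
    ```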

  9. A Technological Innovation to Reduce Prescribing Errors Based on Implementation Intentions: The Acceptability and Feasibility of MyPrescribe.

    Science.gov (United States)

    Keyworth, Chris; Hart, Jo; Thoong, Hong; Ferguson, Jane; Tully, Mary

    2017-08-01

    Although prescribing of medication in hospitals is rarely an error-free process, prescribers receive little feedback on their mistakes and ways to change future practices. Audit and feedback interventions may be an effective approach to modifying the clinical practice of health professionals, but these may pose logistical challenges when used in hospitals. Moreover, such interventions are often labor intensive. Consequently, there is a need to develop effective and innovative interventions to overcome these challenges and to improve the delivery of feedback on prescribing. Implementation intentions, which have been shown to be effective in changing behavior, link critical situations with an appropriate response; however, these have rarely been used in the context of improving prescribing practices. Semistructured qualitative interviews were conducted to evaluate the acceptability and feasibility of providing feedback on prescribing errors via MyPrescribe, a mobile-compatible website informed by implementation intentions. Data relating to 200 prescribing errors made by 52 junior doctors were collected by 11 hospital pharmacists. These errors were populated into MyPrescribe, where prescribers were able to construct their own personalized action plans. Qualitative interviews with a subsample of 15 junior doctors were used to explore issues regarding feasibility and acceptability of MyPrescribe and their experiences of using implementation intentions to construct prescribing action plans. Framework analysis was used to identify prominent themes, with findings mapped to the behavioral components of the COM-B model (capability, opportunity, motivation, and behavior) to inform the development of future interventions. MyPrescribe was perceived to be effective in providing opportunities for critical reflection on prescribing errors and to complement existing training (such as junior doctors' e-portfolio). The participants were able to provide examples of how they would use

  10. [Medication reconciliation errors according to patient risk and type of physician prescriber identified by prescribing tool used].

    Science.gov (United States)

    Bilbao Gómez-Martino, Cristina; Nieto Sánchez, Ángel; Fernández Pérez, Cristina; Borrego Hernando, Mª Isabel; Martín-Sánchez, Francisco Javier

    2017-01-01

    To study the frequency of medication reconciliation errors (MREs) in hospitalized patients and explore the profiles of patients at greater risk. To compare the rates of errors in prescriptions written by emergency physicians and ward physicians, who each used a different prescribing tool. Prospective cross-sectional study of a convenience sample of patients admitted to medical, geriatric, and oncology wards over a period of 6 months. A pharmacist undertook the medication reconciliation report, and data were analyzed for possible associations with risk factors or prescriber type (emergency vs ward physician). A total of 148 patients were studied. Emergency physicians had prescribed for 68 (45.9%) and ward physicians for 80 (54.1%). A total of 303 MREs were detected; 113 (76.4%) patients had at least 1 error. No statistically significant differences were found between prescriber types. Factors that conferred risk for a medication error were polypharmacy (odds ratio [OR], 3.4; 95% CI, 1.2-9.0; P=.016) and multiple chronic conditions in patients under the age of 80 years (OR, 3.9; 95% CI, 1.1-14.7; P=.039). The incidence of MREs is high regardless of whether the prescriber is an emergency or ward physician. The patients who are most at risk are those taking several medications and those under the age of 80 years who have multiple chronic conditions.
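
    For readers who want the arithmetic behind odds ratios like those reported here, this is a minimal sketch computing an OR and a Wald 95% CI from a 2×2 table; the counts are hypothetical, chosen only so the point estimate lands near the reported 3.4.

    ```python
    from math import exp, log, sqrt

    def odds_ratio_ci(a: int, b: int, c: int, d: int, z: float = 1.96):
        """OR and Wald 95% CI for a 2x2 table:
        a = exposed with error, b = exposed without,
        c = unexposed with error, d = unexposed without."""
        or_ = (a * d) / (b * c)
        se = sqrt(1/a + 1/b + 1/c + 1/d)   # standard error of log(OR)
        return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

    # Hypothetical counts for polypharmacy vs reconciliation error
    or_, lo, hi = odds_ratio_ci(60, 20, 53, 60)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```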

  11. Prescribing Errors in Cardiovascular Diseases in a Tertiary Health ...

    African Journals Online (AJOL)

    Prescription errors are now known to contribute to a large number of deaths during the treatment of cardiovascular diseases. However, there is a paucity of information about these errors in health facilities in Nigeria. The objective of this study was to investigate the prevalence of prescribing errors in ...

  12. The causes of prescribing errors in English general practices: a qualitative study.

    Science.gov (United States)

    Slight, Sarah P; Howard, Rachel; Ghaleb, Maisoon; Barber, Nick; Franklin, Bryony Dean; Avery, Anthony J

    2013-10-01

    Few detailed studies exist of the underlying causes of prescribing errors in the UK. To examine the causes of prescribing and monitoring errors in general practice and provide recommendations for how they may be overcome. Qualitative interview and focus group study with purposive sampling of English general practices. General practice staff from 15 general practices across three PCTs in England participated in a combination of semi-structured interviews (n = 34) and six focus groups (n = 46). Thematic analysis informed by Reason's Accident Causation Model was used. Seven categories of high-level error-producing conditions were identified: the prescriber, the patient, the team, the working environment, the task, the computer system, and the primary-secondary care interface. These were broken down to reveal various error-producing conditions: the prescriber's therapeutic training, drug knowledge and experience, knowledge of the patient, perception of risk, and their physical and emotional health; the patient's characteristics and the complexity of the individual clinical case; the importance of feeling comfortable within the practice team was highlighted, as well as the safety implications of GPs signing prescriptions generated by nurses when they had not seen the patient for themselves; the working environment with its extensive workload, time pressures, and interruptions; and computer-related issues associated with mis-selecting drugs from electronic pick-lists and overriding alerts were all highlighted as possible causes of prescribing errors and were often interconnected. Complex underlying causes of prescribing and monitoring errors in general practices were highlighted, several of which are amenable to intervention.

  13. Relating faults in diagnostic reasoning with diagnostic errors and patient harm.

    NARCIS (Netherlands)

    Zwaan, L.; Thijs, A.; Wagner, C.; Wal, G. van der; Timmermans, D.R.M.

    2012-01-01

    Purpose: The relationship between faults in diagnostic reasoning, diagnostic errors, and patient harm has hardly been studied. This study examined suboptimal cognitive acts (SCAs; i.e., faults in diagnostic reasoning), related them to the occurrence of diagnostic errors and patient harm, and studied

  14. Chronology of prescribing error during the hospital stay and prediction of pharmacist's alerts overriding: a prospective analysis

    Directory of Open Access Journals (Sweden)

    Bruni Vanida

    2010-01-01

    Background: Drug prescribing errors are frequent in the hospital setting, and pharmacists play an important role in detecting them. The objectives of this study were (1) to describe the drug prescribing error rate during the patient's stay and (2) to determine which characteristics of a prescribing error are most predictive of its reproduction the next day despite a pharmacist's alert (i.e., the alert being overridden). Methods: We prospectively collected all medication order lines and prescribing errors during 18 days in 7 medical wards using computerized physician order entry. We described and modelled the error rate according to the chronology of the hospital stay. We performed a classification and regression tree analysis to determine which characteristics of alerts were predictive of their being overridden (i.e., the prescribing error repeated). Results: 12,533 order lines were reviewed and 117 errors (error rate 0.9%) were observed; 51% of these errors occurred on the first day of the hospital stay. The risk of a prescribing error decreased over time. 52% of the alerts were overridden (i.e., the error was left uncorrected by prescribers on the following day). Drug omissions were the errors most frequently acted on by prescribers. The classification and regression tree analysis showed that overriding of pharmacist's alerts is related first to the prescriber's ward and then to either the Anatomical Therapeutic Chemical class of the drug or the type of error. Conclusions: Since 51% of prescribing errors occurred on the first day of stay, pharmacists should concentrate their analysis of drug prescriptions on that day. The differences in overriding behaviour between wards, and according to drug Anatomical Therapeutic Chemical class or error type, could also guide validation tasks and the programming of electronic alerts.
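
    As a sketch of the classification-and-regression-tree idea used in this study, the following fits a small decision tree predicting alert overriding from ward, ATC class, and error type. The data are synthetic, the feature values are invented, and the encoder call assumes scikit-learn ≥ 1.2 (`sparse_output`).

    ```python
    import numpy as np
    from sklearn.preprocessing import OneHotEncoder
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic alert records: ward, ATC class, error type -> overridden?
    X_raw = np.array([
        ["cardiology", "C", "omission"],
        ["cardiology", "N", "wrong dose"],
        ["oncology",   "L", "omission"],
        ["oncology",   "C", "wrong dose"],
        ["geriatrics", "N", "omission"],
        ["geriatrics", "C", "wrong dose"],
    ])
    y = np.array([1, 1, 0, 0, 1, 0])  # 1 = alert overridden the next day

    enc = OneHotEncoder(sparse_output=False)
    X = enc.fit_transform(X_raw)

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
    print(tree.predict(enc.transform([["cardiology", "C", "omission"]])))
    ```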

  15. Prevalence, Nature, Severity and Risk Factors for Prescribing Errors in Hospital Inpatients: Prospective Study in 20 UK Hospitals.

    Science.gov (United States)

    Ashcroft, Darren M; Lewis, Penny J; Tully, Mary P; Farragher, Tracey M; Taylor, David; Wass, Valerie; Williams, Steven D; Dornan, Tim

    2015-09-01

    It has been suggested that doctors in their first year of post-graduate training make a disproportionate number of prescribing errors. This study aimed to compare the prevalence of prescribing errors made by first-year post-graduate doctors with that of errors by senior doctors and non-medical prescribers and to investigate the predictors of potentially serious prescribing errors. Pharmacists in 20 hospitals over 7 prospectively selected days collected data on the number of medication orders checked, the grade of prescriber and details of any prescribing errors. Logistic regression models (adjusted for clustering by hospital) identified factors predicting the likelihood of prescribing erroneously and the severity of prescribing errors. Pharmacists reviewed 26,019 patients and 124,260 medication orders; 11,235 prescribing errors were detected in 10,986 orders. The mean error rate was 8.8 % (95 % confidence interval [CI] 8.6-9.1) errors per 100 medication orders. Rates of errors for all doctors in training were significantly higher than rates for medical consultants. Doctors who were 1 year (odds ratio [OR] 2.13; 95 % CI 1.80-2.52) or 2 years in training (OR 2.23; 95 % CI 1.89-2.65) were more than twice as likely to prescribe erroneously. Prescribing errors were 70 % (OR 1.70; 95 % CI 1.61-1.80) more likely to occur at the time of hospital admission than when medication orders were issued during the hospital stay. No significant differences in severity of error were observed between grades of prescriber. Potentially serious errors were more likely to be associated with prescriptions for parenteral administration, especially for cardiovascular or endocrine disorders. The problem of prescribing errors in hospitals is substantial and not solely a problem of the most junior medical prescribers, particularly for those errors most likely to cause significant patient harm. Interventions are needed to target these high-risk errors by all grades of staff and hence
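
    Below is a minimal sketch of the kind of model this abstract describes (logistic regression with hospital-clustered standard errors), using synthetic data and statsmodels; the variable names are illustrative, not the study's own.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "error": rng.binomial(1, 0.09, n),                     # ~9 per 100 orders
        "grade": rng.choice(["FY1", "FY2", "consultant"], n),  # prescriber grade
        "admission": rng.binomial(1, 0.3, n),                  # order written at admission?
        "hospital": rng.integers(0, 20, n),                    # 20 hospitals (clusters)
    })

    model = smf.logit("error ~ C(grade, Treatment('consultant')) + admission", data=df)
    res = model.fit(disp=False, cov_type="cluster",
                    cov_kwds={"groups": df["hospital"]})  # cluster-robust SEs
    print(np.exp(res.params))  # exponentiated coefficients = odds ratios
    ```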

  16. Barriers and facilitators to recovering from e-prescribing errors in community pharmacies.

    Science.gov (United States)

    Odukoya, Olufunmilola K; Stone, Jamie A; Chui, Michelle A

    2015-01-01

    To explore barriers and facilitators to recovery from e-prescribing errors in community pharmacies and to explore practical solutions for work system redesign to ensure successful recovery from errors. Cross-sectional qualitative design using direct observations, interviews, and focus groups. Five community pharmacies in Wisconsin. 13 pharmacists and 14 pharmacy technicians. Observational field notes and transcribed interviews and focus groups were subjected to thematic analysis guided by the Systems Engineering Initiative for Patient Safety (SEIPS) work system and patient safety model. Barriers and facilitators to recovering from e-prescription errors in community pharmacies. Organizational factors, such as communication, training, teamwork, and staffing levels, play an important role in recovering from e-prescription errors. Other factors that could positively or negatively affect recovery of e-prescription errors include level of experience, knowledge of the pharmacy personnel, availability or usability of tools and technology, interruptions and time pressure when performing tasks, and noise in the physical environment. The SEIPS model sheds light on key factors that may influence recovery from e-prescribing errors in pharmacies, including the environment, teamwork, communication, technology, tasks, and other organizational variables. To be successful in recovering from e-prescribing errors, pharmacies must provide the appropriate working conditions that support recovery from errors.

  17. Prescribing errors in a Brazilian neonatal intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Paula Cezar Machado

    2015-12-01

    Pediatric patients, especially those admitted to the neonatal intensive care unit (ICU), are highly vulnerable to medication errors. This study aimed to measure the prescription error rate in a university hospital neonatal ICU and to identify susceptible patients, types of errors, and the medicines involved. The variables related to the medicines prescribed were compared with the Neofax prescription protocol. The study enrolled 150 newborns and analyzed 489 prescription order forms, with 1,491 medication items corresponding to 46 drugs. The prescription error rate was 43.5%. Errors were found in dosage, intervals, diluents, and infusion time, distributed across 7 therapeutic classes. Errors were more frequent in preterm newborns. Diluent and dosing were the most frequent sources of error. The therapeutic classes most involved in errors were antimicrobial agents and drugs acting on the nervous and cardiovascular systems.

  18. Medication prescribing errors in a public teaching hospital in India: A prospective study.

    Directory of Open Access Journals (Sweden)

    Pote S

    2007-03-01

    Background: To prevent medication errors in prescribing, one needs to know their types and relative occurrence. Such errors are a great cause of concern as they have the potential to cause patient harm. The aim of this study was to determine the nature and types of medication prescribing errors in an Indian setting. Methods: The medication errors were analyzed in a prospective observational study conducted in 3 medical wards of a public teaching hospital in India. The medication errors were analyzed by means of the Micromedex Drug-Reax database. Results: Of 312 patients, 304 were included in the study. Of the 304 cases, 103 (34%) had at least one error. The total number of errors found was 157. Drug-drug interactions were the most frequent type of error (68.2%), followed by incorrect dosing interval (12%) and dosing errors (9.5%). The medication classes most involved were antimicrobial agents (29.4%), cardiovascular agents (15.4%), GI agents (8.6%) and CNS agents (8.2%). Moderate errors contributed the most (61.8%) to the total, compared with major (25.5%) and minor (12.7%) errors. The results showed that the number of errors increases with age and with the number of medicines prescribed. Conclusion: The results point to the need to establish medication error reporting at each hospital and to share the data with other hospitals. The role of the clinical pharmacist here appears to be a strong intervention; initially, the clinical pharmacist could confine themselves to identifying medication errors.

  19. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    Science.gov (United States)

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (a median of 9 pharmacists (range 4-17) collecting data per day) during routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at patient admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. The number of errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  20. Measuring the relationship between interruptions, multitasking and prescribing errors in an emergency department: a study protocol.

    Science.gov (United States)

    Raban, Magdalena Z; Walter, Scott R; Douglas, Heather E; Strumpman, Dana; Mackenzie, John; Westbrook, Johanna I

    2015-10-13

    Interruptions and multitasking are frequent in clinical settings and have been shown in the cognitive psychology literature to affect performance, increasing the risk of error. However, comparatively less is known about their impact on errors in clinical work. This study will assess the relationship between prescribing errors, interruptions and multitasking in an emergency department (ED) using direct observations and chart review. The study will be conducted in the ED of a 440-bed teaching hospital in Sydney, Australia. Doctors will be shadowed at close proximity by observers for 2-hour intervals while working day shifts (between 0800 and 1800). Time-stamped data on tasks, interruptions and multitasking will be recorded on a handheld computer using the validated Work Observation Method by Activity Timing (WOMBAT) tool. The prompts leading to interruptions and multitasking will also be recorded. When doctors prescribe medication, the type of chart and chart sections written on, along with the patient's medical record number (MRN), will be recorded. A clinical pharmacist will access patient records and assess the medication orders for prescribing errors. The prescribing error rate will be calculated per prescribing task and is defined as the number of errors divided by the number of medication orders written during the prescribing task. The association between prescribing error rates and rates of prompts, interruptions and multitasking will be assessed using statistical modelling. Ethics approval has been obtained from the hospital research ethics committee. Eligible doctors will be provided with written information sheets, and written consent will be obtained if they agree to participate. Doctor details and MRNs will be kept separate from the data on prescribing errors and will not appear in the final data set for analysis. Study results will be disseminated in publications and as feedback to the ED. Published by the BMJ Publishing Group Limited.
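
    The protocol's error-rate definition is simple enough to state in code. Below is a small illustrative sketch; the task records are hypothetical, not study data.

    ```python
    from dataclasses import dataclass

    @dataclass
    class PrescribingTask:
        orders_written: int   # medication orders written during this task
        errors_found: int     # errors identified at chart review
        interruptions: int    # prompts observed during the task

    # Hypothetical observation records
    tasks = [
        PrescribingTask(orders_written=4, errors_found=1, interruptions=2),
        PrescribingTask(orders_written=2, errors_found=0, interruptions=0),
        PrescribingTask(orders_written=5, errors_found=2, interruptions=3),
    ]

    for i, t in enumerate(tasks, 1):
        rate = t.errors_found / t.orders_written  # per the protocol's definition
        print(f"task {i}: {rate:.2f} errors per order, {t.interruptions} interruptions")
    ```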

  1. Medication errors with electronic prescribing (eP): Two views of the same picture

    Science.gov (United States)

    2010-01-01

    Background Quantitative prospective methods are widely used to evaluate the impact of new technologies such as electronic prescribing (eP) on medication errors. However, they are labour-intensive and it is not always feasible to obtain pre-intervention data. Our objective was to compare the eP medication error picture obtained with retrospective quantitative and qualitative methods. Methods The study was carried out at one English district general hospital approximately two years after implementation of an integrated electronic prescribing, administration and records system. Quantitative: A structured retrospective analysis was carried out of clinical records and medication orders for 75 randomly selected patients admitted to three wards (medicine, surgery and paediatrics) six months after eP implementation. Qualitative: 8 doctors, 6 nurses, 8 pharmacy staff and 4 other staff at senior, middle and junior grades, and 19 adult patients on acute surgical and medical wards were interviewed. Staff interviews explored experiences of developing and working with the system; patient interviews focused on experiences of medicine prescribing and administration on the ward. Interview transcripts were searched systematically for accounts of medication incidents. A classification scheme was developed and applied to the errors identified in the records review. Results The two approaches produced similar pictures of the drug use process. Interviews identified the types of error found in the retrospective notes review, plus two eP-specific errors that were not detected by record review. Interview data took less time to collect than record review, and provided rich data on the prescribing process and on reasons for delays or non-administration of medicines, including "once only" orders and "as required" medicines. Conclusions The qualitative approach provided more understanding of processes, and some insights into why medication errors can happen. The method is cost-effective and

  2. Antiretroviral medication prescribing errors are common with hospitalization of HIV-infected patients.

    Science.gov (United States)

    Commers, Tessa; Swindells, Susan; Sayles, Harlan; Gross, Alan E; Devetten, Marcel; Sandkovsky, Uriel

    2014-01-01

    Errors in prescribing antiretroviral therapy (ART) often occur with the hospitalization of HIV-infected patients. The rapid identification and prevention of errors may reduce patient harm and healthcare-associated costs. A retrospective review of hospitalized HIV-infected patients was carried out between 1 January 2009 and 31 December 2011. Errors were documented as omission, underdose, overdose, duplicate therapy, incorrect scheduling and/or incorrect therapy. The time to error correction was recorded. Relative risks (RRs) were computed to evaluate patient characteristics and error rates. A total of 289 medication errors were identified in 146/416 admissions (35%). The most common was drug omission (69%). At an error rate of 31%, nucleoside reverse transcriptase inhibitors were associated with an increased risk of error when compared with protease inhibitors (RR 1.32; 95% CI 1.04-1.69) and co-formulated drugs (RR 1.59; 95% CI 1.19-2.09). Of the errors, 31% were corrected within the first 24 h, but over half (55%) were never remedied. Admissions with an omission error were 7.4 times more likely to have all errors corrected within 24 h than were admissions without an omission. Drug interactions with ART were detected on 51 occasions. For the study population (n = 177), an increased risk of admission error was observed for black (43%) compared with white (28%) individuals (RR 1.53; 95% CI 1.16-2.03) but no significant differences were observed between white patients and other minorities or between men and women. Errors in inpatient ART were common, and the majority were never detected. The most common errors involved omission of medication, and nucleoside reverse transcriptase inhibitors had the highest rate of prescribing error. Interventions to prevent and correct errors are urgently needed.
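
    A minimal sketch of the relative-risk computation behind figures like RR 1.53 (43% vs 28%) follows; the group counts are hypothetical, chosen only to approximate the reported percentages.

    ```python
    from math import exp, log, sqrt

    def relative_risk(a: int, n1: int, b: int, n2: int, z: float = 1.96):
        """RR and Wald 95% CI: a/n1 events in group 1 vs b/n2 in group 2."""
        rr = (a / n1) / (b / n2)
        se = sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error of log(RR)
        return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

    # Hypothetical counts consistent with ~43% vs ~28% admission-error rates
    rr, lo, hi = relative_risk(34, 79, 27, 98)
    print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
    ```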

  3. Medication prescribing errors in the medical intensive care unit of Tikur Anbessa Specialized Hospital, Addis Ababa, Ethiopia.

    Science.gov (United States)

    Sada, Oumer; Melkie, Addisu; Shibeshi, Workineh

    2015-09-16

    Medication errors (MEs) are important problems in all hospitalized populations, especially in the intensive care unit (ICU). Little is known about the prevalence of medication prescribing errors in the ICUs of hospitals in Ethiopia. The aim of this study was to assess medication prescribing errors in the ICU of Tikur Anbessa Specialized Hospital using a retrospective cross-sectional analysis of patient cards and medication charts. About 220 patient charts were reviewed, covering a total of 1311 patient-days and 882 prescription episodes. 359 MEs were detected, a prevalence of 40 per 100 orders. Common prescribing errors were omission (154; 42.89%), wrong combination (101; 28.13%), wrong abbreviation (48; 13.37%), wrong dose (30; 8.36%), wrong frequency (18; 5.01%) and wrong indication (8; 2.23%). The present study shows that medication errors are common in the medical ICU of Tikur Anbessa Specialized Hospital. These results suggest future targets for prevention strategies to reduce the rate of medication error.

  4. Quantum Error Correction and Fault Tolerant Quantum Computing

    CERN Document Server

    Gaitan, Frank

    2008-01-01

    It was once widely believed that quantum computation would never become a reality. However, the discovery of quantum error correction and the proof of the accuracy threshold theorem nearly ten years ago gave rise to extensive development and research aimed at creating a working, scalable quantum computer. Over a decade has passed since this monumental accomplishment, yet no book-length pedagogical presentation of this important theory exists. Quantum Error Correction and Fault Tolerant Quantum Computing offers the first full-length exposition on the realization of a theory once thought impossible.

  5. Novel prescribed performance neural control of a flexible air-breathing hypersonic vehicle with unknown initial errors.

    Science.gov (United States)

    Bu, Xiangwei; Wu, Xiaoyan; Zhu, Fujing; Huang, Jiaqi; Ma, Zhen; Zhang, Rui

    2015-11-01

    A novel prescribed performance neural controller with unknown initial errors is presented for the longitudinal dynamic model of a flexible air-breathing hypersonic vehicle (FAHV) subject to parametric uncertainties. Unlike traditional prescribed performance control (PPC), which requires the initial errors to be known accurately, this paper investigates tracking control without accurate initial errors by exploiting a new performance function. A combined neural back-stepping and minimal learning parameter (MLP) technique is employed to derive a prescribed performance controller that provides robust tracking of velocity and altitude reference trajectories. The highlight is that the transient performance of the velocity and altitude tracking errors is satisfactory while the computational load of the neural approximation remains low. Finally, numerical simulation results from a nonlinear FAHV model demonstrate the efficacy of the proposed strategy. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
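
    For intuition about the performance-function machinery, here is a minimal sketch of the classic exponentially decaying bound used in traditional PPC; the paper's novel function differs precisely in removing the |e(0)| < ρ(0) requirement, and the constants below are illustrative only.

    ```python
    import numpy as np

    def perf_envelope(t, rho0=5.0, rho_inf=0.1, kappa=1.0):
        """Classic exponentially decaying performance bound rho(t).
        Traditional PPC requires |e(0)| < rho(0); the paper's new
        function relaxes this by not constraining the initial error."""
        return (rho0 - rho_inf) * np.exp(-kappa * t) + rho_inf

    t = np.linspace(0.0, 5.0, 6)
    for ti, r in zip(t, perf_envelope(t)):
        print(f"t = {ti:.1f} s: |e(t)| must stay below {r:.3f}")
    ```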

  6. Prescribing error at hospital discharge: a retrospective review of medication information in an Irish hospital.

    Science.gov (United States)

    Michaelson, M; Walsh, E; Bradley, C P; McCague, P; Owens, R; Sahm, L J

    2017-08-01

    Prescribing error may result in adverse clinical outcomes, leading to increased patient morbidity and mortality and an increased economic burden. Many errors occur during transitional care as patients move between different stages and settings of care. To conduct a review of medication information and identify prescribing error among an adult population in an urban hospital. A retrospective review of medication information was conducted. Part 1: an audit of discharge prescriptions which assessed legibility, compliance with legal requirements, therapeutic errors (strength, dose and frequency) and drug interactions. Part 2: a review of all sources of medication information (namely the pre-admission medication list, drug Kardex, discharge prescription and discharge letter) for 15 inpatients to identify unintentional prescription discrepancies, defined as "undocumented and/or unjustified medication alteration" throughout the hospital stay. Part 1: of the 5910 prescribed items, 53 (0.9%) were deemed illegible. Of the controlled drug prescriptions, 11.1% (n = 167) met all the legal requirements. Therapeutic errors occurred in 41% of prescriptions (n = 479). More than 1 in 5 patients (21.9%) received a prescription containing a drug interaction. Part 2: 175 discrepancies were identified across all sources of medication information, of which 78 were deemed unintentional. Of these, 10.2% (n = 8) occurred at the point of admission, while 76.9% (n = 60) occurred at the point of discharge. The study identified the time of discharge as a point at which prescribing errors are likely to occur. This has implications for patient safety and provider workload in both primary and secondary care.

  7. Learning curves, taking instructions, and patient safety: using a theoretical domains framework in an interview study to investigate prescribing errors among trainee doctors

    Directory of Open Access Journals (Sweden)

    Duncan Eilidh M

    2012-09-01

    Background: Prescribing errors are a major source of morbidity and mortality and represent a significant patient safety concern. Evidence suggests that trainee doctors are responsible for most prescribing errors. Understanding the factors that influence prescribing behavior may lead to effective interventions to reduce errors. Existing investigations of prescribing errors have been based on Human Error Theory but not on other relevant behavioral theories. The aim of this study was to apply a broad theory-based approach using the Theoretical Domains Framework (TDF) to investigate prescribing in the hospital context among a sample of trainee doctors. Method: Semistructured interviews, based on 12 theoretical domains, were conducted with 22 trainee doctors to explore views, opinions, and experiences of prescribing and prescribing errors. Content analysis was conducted, followed by the application of relevance criteria and a novel stage of critical appraisal, to identify which theoretical domains could be targeted in interventions to improve prescribing. Results: Seven theoretical domains met the criteria of relevance: “social professional role and identity,” “environmental context and resources,” “social influences,” “knowledge,” “skills,” “memory, attention, and decision making,” and “behavioral regulation.” From critical appraisal of the interview data, “beliefs about consequences” and “beliefs about capabilities” were also identified as potentially important domains. Interrelationships between domains were evident. Additionally, the data supported theoretical elaboration of the domain behavioral regulation. Conclusions: In this investigation of hospital-based prescribing, participants’ attributions about causes of errors were used to identify domains that could be targeted in interventions to improve prescribing. In a departure from previous TDF practice, critical appraisal was used to identify additional domains

  8. Learning curves, taking instructions, and patient safety: using a theoretical domains framework in an interview study to investigate prescribing errors among trainee doctors.

    Science.gov (United States)

    Duncan, Eilidh M; Francis, Jill J; Johnston, Marie; Davey, Peter; Maxwell, Simon; McKay, Gerard A; McLay, James; Ross, Sarah; Ryan, Cristín; Webb, David J; Bond, Christine

    2012-09-11

    Prescribing errors are a major source of morbidity and mortality and represent a significant patient safety concern. Evidence suggests that trainee doctors are responsible for most prescribing errors. Understanding the factors that influence prescribing behavior may lead to effective interventions to reduce errors. Existing investigations of prescribing errors have been based on Human Error Theory but not on other relevant behavioral theories. The aim of this study was to apply a broad theory-based approach using the Theoretical Domains Framework (TDF) to investigate prescribing in the hospital context among a sample of trainee doctors. Semistructured interviews, based on 12 theoretical domains, were conducted with 22 trainee doctors to explore views, opinions, and experiences of prescribing and prescribing errors. Content analysis was conducted, followed by applying relevance criteria and a novel stage of critical appraisal, to identify which theoretical domains could be targeted in interventions to improve prescribing. Seven theoretical domains met the criteria of relevance: "social professional role and identity," "environmental context and resources," "social influences," "knowledge," "skills," "memory, attention, and decision making," and "behavioral regulation." From critical appraisal of the interview data, "beliefs about consequences" and "beliefs about capabilities" were also identified as potentially important domains. Interrelationships between domains were evident. Additionally, the data supported theoretical elaboration of the domain behavioral regulation. In this investigation of hospital-based prescribing, participants' attributions about causes of errors were used to identify domains that could be targeted in interventions to improve prescribing. In a departure from previous TDF practice, critical appraisal was used to identify additional domains that should also be targeted, despite participants' perceptions that they were not relevant to

  9. Detection and Localization of Tooth Breakage Fault on Wind Turbine Planetary Gear System considering Gear Manufacturing Errors

    Directory of Open Access Journals (Sweden)

    Y. Gui

    2014-01-01

    Sidebands of the vibration spectrum are sensitive to the fault degree and have proved useful for tooth fault detection and localization. However, the amplitude and frequency modulation due to manufacturing errors (which are inevitable in an actual planetary gear system) lead to much more complex sidebands. Thus, in this paper, a lumped parameter model for a typical planetary gear system with various types of errors is established. In the model, the influences of tooth faults on time-varying mesh stiffness and tooth impact force are derived analytically. Numerical methods are then utilized to obtain the response spectra of the system with tooth faults, with and without errors. Three system components with tooth faults (the sun, planet, and ring gears) are considered in the discussion. Through detailed comparisons of the spectral sidebands, the fault characteristic frequencies of the system are acquired. Dynamic experiments on a planetary gearbox test rig were carried out to verify the simulation results; these results are of great significance for the detection and localization of tooth faults in wind turbines.
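
    As background for the sideband analysis, this sketch computes commonly used characteristic fault frequencies for a planetary set with a fixed ring gear. The formulas are the standard textbook forms, not expressions taken from the paper, and the tooth counts and input speed are illustrative.

    ```python
    def planetary_fault_freqs(Zs: int, Zr: int, fs_hz: float, n_planets: int):
        """Common characteristic fault frequencies for a planetary gear set
        with a fixed ring gear. Expressions vary in the literature; treat
        these as the usual textbook forms, not the paper's own model."""
        fc = fs_hz * Zs / (Zs + Zr)   # carrier rotation frequency
        fm = Zr * fc                  # gear mesh frequency
        Zp = (Zr - Zs) // 2           # planet tooth count (integer for standard sets)
        return {
            "mesh": fm,
            "sun fault": n_planets * fm / Zs,     # faulty sun tooth meets each planet
            "planet fault": fm / Zp,              # per mesh face
            "ring fault": n_planets * fm / Zr,    # each planet passes the faulty tooth
        }

    # Illustrative geometry: 20-tooth sun, 70-tooth ring, 3 planets, 20 Hz sun speed
    print(planetary_fault_freqs(Zs=20, Zr=70, fs_hz=20.0, n_planets=3))
    ```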

  10. Tracking error constrained robust adaptive neural prescribed performance control for flexible hypersonic flight vehicle

    Directory of Open Access Journals (Sweden)

    Zhonghua Wu

    2017-02-01

    A robust adaptive neural control scheme based on a back-stepping technique is developed for the longitudinal dynamics of a flexible hypersonic flight vehicle, able to ensure that the state tracking error remains within prescribed bounds in spite of the existing model uncertainties and actuator constraints. Minimal learning parameter neural networks are used to estimate the model uncertainties; thus, the number of online-updated parameters is greatly reduced, and prior information about the aerodynamic parameters is dispensable. With the use of an assistant compensation system, the problem of actuator constraints is overcome. By combining the prescribed performance function and a sliding mode differentiator into the neural back-stepping control design procedure, a composite state-tracking-error-constrained adaptive neural control approach is presented, and a new type of adaptive law is constructed. Compared with other adaptive neural control designs for hypersonic flight vehicles, the proposed composite control scheme exhibits not only a low-computation property but also strong robustness. Finally, two comparative simulations are performed to demonstrate the robustness of the neural prescribed performance controller.

  11. Ground Motion Synthetics for Spontaneous Versus Prescribed Rupture on a 45° Thrust Fault

    Science.gov (United States)

    Gottschämmer, E.; Olsen, K. B.

    We have compared prescribed (kinematic) and spontaneous dynamic rupture propagation on a 45°-dipping thrust fault buried up to 5 km in a half-space model, as well as ground motions on the free surface for frequencies less than 1 Hz. The computations are carried out using a 3D finite-difference method with rate-and-state friction on a planar, 20 km by 20 km fault. We use a slip-weakening distance of 15 cm and a slip-velocity weakening distance of 9.2 cm/s, similar to those for the dynamic study of the 1994 M6.7 Northridge earthquake by Nielsen and Olsen (2000), which generated satisfactory fits to selected strong motion data in the San Fernando Valley. The prescribed rupture propagation was designed to mimic that of the dynamic simulation at depth in order to isolate the dynamic free-surface effects. In this way, the results reflect the dynamic (normal-stress) interaction with the free surface for various depths of burial of the fault. We find that the moment, peak slip and peak sliprate for the rupture breaking the surface are increased by up to 60%, 80%, and 10%, respectively, compared to the values for the scenario buried 5 km. The inclusion of these effects increases the peak displacements and velocities above the fault by factors of up to 3.4 and 2.9 including the increase in moment due to normal-stress effects at the free surface, and by factors of up to 2.1 and 2.0 when scaled to a Northridge-size event with surface rupture. Similar differences were found by Aagaard et al. (2001). Significant dynamic effects on the ground motions include earlier arrival times caused by super-shear rupture velocities (break-out phases), in agreement with the dynamic finite-element simulations by Oglesby et al. (1998, 2000). The presence of shallow low-velocity layers tends to increase the rupture time and the sliprate. In particular, they promote earlier transitions to super-shear velocities and decrease the rupture velocity within the layers. Our results suggest that dynamic

  12. Incidence and Severity of Prescribing Errors in Parenteral Nutrition for Pediatric Inpatients at a Neonatal and Pediatric Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    Theresa Hermanspann

    2017-06-01

    Objectives: Pediatric inpatients are particularly vulnerable to medication errors (MEs), especially in highly individualized preparations like parenteral nutrition (PN). Aside from prescribing via a computerized physician order entry system (CPOE), we evaluated the effect of cross-checking by a clinical pharmacist to prevent harm from PN order errors in a neonatal and pediatric intensive care unit (NICU/PICU). Methods: The incidence of prescribing errors in PN in a tertiary level NICU/PICU was surveyed prospectively between March 2012 and July 2013 (n = 3,012 orders). A pharmacist cross-checked all PN orders prior to preparation. Errors were assigned to seven different error-type categories. Three independent experts from different academic tertiary level NICUs judged the severity of each error according to the National Coordinating Council for Medication Error Reporting and Prevention (NCC MERP) Index (categories A–I). Results: The error rate was 3.9% for all 3,012 orders (118 prescribing errors in 111 orders). 77 errors (6.0% of 1,277 orders) occurred in the concentration-range category, all concerning a relative overdose of calcium gluconate for peripheral infusion. The majority of all events (60%) were assigned to categories C and D (without major harmful consequences), while 28% could not be assigned due to a missing majority decision. Potentially harmful consequences requiring interventions (category E) could have occurred in 12% of assessments. Conclusion: Next to systematic application of clinical guidelines and prescribing via CPOE, order review by a clinical pharmacist is still required to effectively reduce MEs and thus prevent minor and major adverse drug events, with the aim of enhancing medication safety.

  13. Synchronization of multiple 3-DOF helicopters under actuator faults and saturations with prescribed performance.

    Science.gov (United States)

    Yang, Huiliao; Jiang, Bin; Yang, Hao; Liu, Hugh H T

    2018-04-01

    A distributed cooperative control strategy is proposed to make networked nonlinear 3-DOF helicopters achieve attitude synchronization in the presence of actuator faults and saturations. Based on robust adaptive control, the proposed method can both compensate for the uncertain partial loss of control effectiveness and deal with system uncertainties. To address the actuator saturation problem, the control scheme is designed to ensure that the saturation constraint on the actuation is not violated during operation in spite of the actuator faults. It is shown that with the proposed control strategy, both the tracking errors of the leading helicopter and the attitude synchronization errors of each following helicopter remain bounded in the presence of faulty actuators and actuator saturations. Moreover, the state responses of the entire group do not exceed the predesigned performance functions, which are totally independent of the underlying interaction topology. Simulation results illustrate the effectiveness of the proposed control scheme. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  14. Fault diagnosis of generation IV nuclear HTGR components – Part II: The area error enthalpy–entropy graph approach

    International Nuclear Information System (INIS)

    Rand, C.P. du; Schoor, G. van

    2012-01-01

    Highlights: ► Different uncorrelated fault signatures are derived for HTGR component faults. ► A multiple classifier ensemble increases confidence in classification accuracy. ► A detailed simulation model of the system is not required for fault diagnosis. - Abstract: The second paper in a two-part series presents the area error method for generating representative enthalpy–entropy (h–s) fault signatures to classify malfunctions in generation IV nuclear high temperature gas-cooled reactor (HTGR) components. The second classifier is devised to ultimately address the fault diagnosis (FD) problem via the proposed methods in a multiple classifier (MC) ensemble. FD is realized by way of different input feature sets to the classification algorithm, based on the area and trajectory of the residual shift between the fault-free and the actual operating h–s graph models. The application of the proposed technique is demonstrated for 24 single-fault transients considered in the main power system (MPS) of the Pebble Bed Modular Reactor (PBMR). The results show that the area error technique produces distinct fault signatures with low correlation for all the examined component faults. A brief evaluation of the two fault signature generation techniques is presented, and the performance of the area error method is documented using the fault classification index (FCI) presented in Part I of the series. The final part of this work reports the application of the proposed approach to classification of an emulated fault transient in data from the prototype Pebble Bed Micro Model (PBMM) plant. Reference data values are calculated for the plant via a thermo-hydraulic simulation model of the MPS. The results show that the correspondence between the fault signatures generated via experimental plant data and simulated reference values is generally good. The work presented in the two-part series relates to the classification of component faults in the MPS of different
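
    To make the "area error" idea concrete, here is a small numerical sketch integrating the absolute residual between a fault-free and a measured enthalpy–entropy trajectory; the curves are synthetic stand-ins, not PBMR data, and the helper avoids NumPy version differences around trapezoidal integration.

    ```python
    import numpy as np

    def trapezoid(y: np.ndarray, x: np.ndarray) -> float:
        """Trapezoidal integral of y over x."""
        return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

    def hs_area_error(s, h_ref, h_meas):
        """Area between the fault-free and measured h-s trajectories,
        integrated over entropy: a simple stand-in for the residual-area
        fault signature described in the abstract."""
        return trapezoid(np.abs(h_meas - h_ref), s)

    s = np.linspace(6.0, 7.5, 200)                    # entropy grid, kJ/(kg*K)
    h_ref = 3000.0 + 400.0 * (s - 6.0)                # illustrative fault-free h(s), kJ/kg
    h_meas = h_ref + 25.0 * np.sin(2.0 * (s - 6.0))   # shifted trajectory under a fault
    print(f"area error = {hs_area_error(s, h_ref, h_meas):.2f} (arbitrary units)")
    ```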

  15. Impact of Internally Developed Electronic Prescription on Prescribing Errors at Discharge from the Emergency Department.

    Science.gov (United States)

    Hitti, Eveline; Tamim, Hani; Bakhti, Rinad; Zebian, Dina; Mufarrij, Afif

    2017-08-01

    Medication errors are common, with studies reporting at least one error per patient encounter. At hospital discharge, medication error rates vary from 15%-38%. However, few studies have assessed the effect of an internally developed electronic (E)-prescription system at discharge from an emergency department (ED), and commercially available electronic solutions are cost-prohibitive in many resource-limited settings. We assessed the impact of introducing an internally developed, low-cost E-prescription system, with a list of commonly prescribed medications, on prescription error rates at discharge from the ED, compared to handwritten prescriptions. We conducted a pre- and post-intervention study comparing error rates in a randomly selected sample of discharge prescriptions (handwritten versus electronic) five months before and four months after the introduction of the E-prescription. The internally developed E-prescription system included a list of 166 commonly prescribed medications with the generic name, strength, dose, frequency and duration. We included a total of 2,883 prescriptions in this study: 1,475 in the pre-intervention phase were handwritten (HW) and 1,408 in the post-intervention phase were electronic. We calculated rates of 14 different errors and compared them between the pre- and post-intervention periods. Overall, E-prescriptions included fewer prescription errors than HW prescriptions; in particular, they reduced missing-dose errors (11.3% to 4.3%). E-prescriptions, however, were associated with a significant increase in duplication errors, specifically with home medication (1.7% to 3%, p=0.02). A basic, internally developed E-prescription system, featuring commonly used medications, effectively reduced medication errors in a low-resource setting where the costs of sophisticated commercial electronic solutions are prohibitive.
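
    The pre/post comparison of error rates can be checked with a standard two-proportion z-test, sketched below. The counts are reconstructed from the rounded percentages quoted in the abstract, so the statistic is only approximate.

    ```python
    from math import sqrt, erfc

    # Missing-dose error counts reconstructed from the abstract's rounded percentages
    x1, n1 = round(0.113 * 1475), 1475   # handwritten prescriptions
    x2, n2 = round(0.043 * 1408), 1408   # electronic prescriptions

    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    p_value = erfc(abs(z) / sqrt(2))     # two-sided p-value
    print(f"z = {z:.2f}, two-sided p = {p_value:.1e}")
    ```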

  16. Fault-tolerant quantum computing in the Pauli or Clifford frame with slow error diagnostics

    Directory of Open Access Journals (Sweden)

    Christopher Chamberland

    2018-01-01

    We consider the problem of fault-tolerant quantum computation in the presence of slow error diagnostics, caused either by measurement latencies or by slow decoding algorithms. Our scheme offers a few improvements over previously existing solutions; for instance, it does not require active error correction and results in a reduced error-correction overhead when error diagnostics is much slower than the gate time. In addition, we adapt our protocol to cases where the underlying error-correction strategy chooses the optimal correction amongst all Clifford gates instead of the usual Pauli gates. The resulting Clifford frame protocol is of independent interest, as it can increase error thresholds and could find applications in other areas of quantum computation.

  17. Automation bias in electronic prescribing.

    Science.gov (United States)

    Lyell, David; Magrabi, Farah; Raban, Magdalena Z; Pont, L G; Baysari, Melissa T; Day, Richard O; Coiera, Enrico

    2017-03-16

    Clinical decision support (CDS) in e-prescribing can improve safety by alerting prescribers to potential errors, but it introduces new sources of risk. Automation bias (AB) occurs when users over-rely on CDS, reducing vigilance in information seeking and processing. Evidence of AB has been found in other clinical tasks, but it had not yet been tested in e-prescribing. This study tests for the presence of AB in e-prescribing and the impact of task complexity and interruptions on AB. One hundred and twenty students in the final two years of a medical degree prescribed medicines for nine clinical scenarios using a simulated e-prescribing system. Quality of CDS (correct, incorrect and no CDS) and task complexity (low, low + interruption and high) were varied between conditions. Omission errors (failure to detect prescribing errors) and commission errors (acceptance of false positive alerts) were measured. Compared to scenarios with no CDS, correct CDS reduced omission errors by 38.3% (p < .0001, n = 120), 46.6% (p < .0001, n = 70), and 39.2% (p < .0001, n = 120) for low, low + interrupt and high complexity scenarios respectively. Incorrect CDS increased omission errors by 33.3% (p < .0001, n = 120), 24.5% (p < .009, n = 82), and 26.7% (p < .0001, n = 120). Participants also made commission errors: 65.8% (p < .0001, n = 120), 53.5% (p < .0001, n = 82), and 51.7% (p < .0001, n = 120) across the three conditions. Task complexity and interruptions had no impact on AB. This study found evidence of AB omission and commission errors in e-prescribing. Verification of CDS alerts is key to avoiding AB errors; however, interventions focused on this have had limited success to date. Clinicians should remain vigilant to the risks of CDS failures and verify CDS.

  18. FPGAs and parallel architectures for aerospace applications soft errors and fault-tolerant design

    CERN Document Server

    Rech, Paolo

    2016-01-01

    This book introduces the concepts of soft errors in FPGAs, as well as the motivation for using commercial, off-the-shelf (COTS) FPGAs in mission-critical and remote applications, such as aerospace. The authors describe the effects of radiation in FPGAs, present a large set of soft-error mitigation techniques that can be applied in these circuits, as well as methods for qualifying these circuits under radiation. Coverage includes radiation effects in FPGAs, fault-tolerant techniques for FPGAs, use of COTS FPGAs in aerospace applications, experimental data of FPGAs under radiation, FPGA embedded processors under radiation, and fault injection in FPGAs. Since dedicated parallel processing architectures such as GPUs have become more desirable in aerospace applications due to high computational power, GPU analysis under radiation is also discussed. · Discusses features and drawbacks of reconfigurability methods for FPGAs, focused on aerospace applications; · Explains how radia...

  19. Safe prescribing: a titanic challenge

    Science.gov (United States)

    Routledge, Philip A

    2012-01-01

    The challenge to achieve safe prescribing merits the adjective ‘titanic’. The organisational and human errors leading to poor prescribing (e.g. underprescribing, overprescribing, misprescribing or medication errors) have parallels in the organisational and human errors that led to the loss of the Titanic 100 years ago this year. Prescribing can be adversely affected by communication failures, critical conditions, complacency, corner cutting, callowness and a lack of courage of conviction, all of which were also factors leading to the Titanic tragedy. These issues need to be addressed by a commitment to excellence, the final component of the ‘Seven C's’. Optimal prescribing is dependent upon close communication and collaborative working between highly trained health professionals, whose role is to ensure maximum clinical effectiveness, whilst also protecting their patients from avoidable harm. Since humans are prone to error, and the environments in which they work are imperfect, it is not surprising that medication errors are common, occurring more often during the prescribing stage than during dispensing or administration. A commitment to excellence in prescribing includes a continued focus on lifelong learning (including interprofessional learning) in pharmacology and therapeutics. This should be accompanied by improvements in the clinical working environment of prescribers, and the encouragement of a strong safety culture (including reporting of adverse incidents as well as suspected adverse drug reactions whenever appropriate). Finally, members of the clinical team must be prepared to challenge each other, when necessary, to ensure that prescribing combines the highest likelihood of benefit with the lowest potential for harm. PMID:22738396

  20. Who Do Hospital Physicians and Nurses Go to for Advice About Medications? A Social Network Analysis and Examination of Prescribing Error Rates.

    Science.gov (United States)

    Creswick, Nerida; Westbrook, Johanna Irene

    2015-09-01

    To measure the weekly medication advice-seeking networks of hospital staff, to compare patterns across professional groups, and to examine these in the context of prescribing error rates. A social network analysis was conducted. All 101 staff in 2 wards in a large, academic teaching hospital in Sydney, Australia, were surveyed (response rate, 90%) using a detailed social network questionnaire. The extent of weekly medication advice seeking was measured by density of connections, reciprocity (the proportion of reciprocal relationships), in-degree (the number of colleagues to whom each person provided advice), and perceptions of the amount and impact of advice seeking between physicians and nurses. Data on prescribing error rates from the 2 wards were compared. Weekly medication advice-seeking networks were sparse (density: 7% ward A and 12% ward B). Information sharing across professional groups was modest, and rates of reciprocation of advice were low (9% ward A, 14% ward B). Pharmacists provided advice to most people, and junior physicians also played central roles. Senior physicians provided medication advice to few people. Many staff perceived that physicians rarely sought advice from nurses when prescribing, but almost all believed that an increase in communication between physicians and nurses about medications would improve patient safety. The medication networks in ward B had higher measures for density and reciprocation, and fewer senior physicians who were isolates. Ward B had a significantly lower rate of both procedural and clinical prescribing errors than ward A (0.63 clinical prescribing errors per admission [95% CI, 0.47-0.79] versus 1.81/admission [95% CI, 1.49-2.13]). Medication advice-seeking networks among staff on hospital wards are limited. Hubs of advice provision include pharmacists, junior physicians, and senior nurses. Senior physicians are poorly integrated into medication advice networks. Strategies to improve the advice-giving networks between senior
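
    Density and reciprocity, the two network measures used above, are straightforward to compute from a directed edge list, as in the sketch below. The staff roles and advice ties are hypothetical.

    ```python
    # Directed "who asks whom for medication advice" ties (hypothetical ward staff)
    edges = {("nurse_a", "pharmacist"), ("jr_doc", "pharmacist"),
             ("pharmacist", "jr_doc"), ("nurse_b", "nurse_a"),
             ("jr_doc", "sr_doc")}
    nodes = {n for edge in edges for n in edge}

    # Density: observed directed ties over all possible ordered pairs
    density = len(edges) / (len(nodes) * (len(nodes) - 1))

    # Reciprocity: fraction of ties that are returned
    reciprocated = sum(1 for a, b in edges if (b, a) in edges)
    reciprocity = reciprocated / len(edges)
    print(f"density = {density:.2f}, reciprocity = {reciprocity:.2f}")
    ```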

  2. Software error masking effect on hardware faults

    International Nuclear Information System (INIS)

    Choi, Jong Gyun; Seong, Poong Hyun

    1999-01-01

    Based on the Very High Speed Integrated Circuit (VHSIC) Hardware Description Language (VHDL), in this work a simulation model for fault injection is developed to estimate the dependability of a digital system in its operational phase. We investigated the software masking effect on hardware faults through single bit-flip and stuck-at-x fault injection into the internal registers of the processor and memory cells. The fault locations cover all registers and memory cells, with the distribution of faults over locations chosen randomly from a uniform probability distribution. Using this model, we have predicted the reliability and masking effect of application software in a digital system - the Interposing Logic System (ILS) in a nuclear power plant - under four software operational profiles. The results show that the software masking effect on hardware faults should be properly considered to predict system dependability accurately in the operational phase, because the masking effect takes different values according to the operational profile.
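
    A toy version of this kind of fault-injection experiment is sketched below: a single bit flip is injected into an 8-bit "register" feeding a simple threshold check (loosely inspired by interposing logic), and the fraction of injections that leave the output unchanged estimates the software masking probability. The logic and numbers are illustrative assumptions, not the paper's VHDL model.

    ```python
    import random

    def trip_logic(sensor: int) -> bool:
        """Toy application logic: trip when an 8-bit sensor value exceeds a setpoint."""
        return sensor > 200

    def inject_bit_flip(value: int, width: int = 8) -> int:
        """Single bit-flip fault at a uniformly chosen register bit."""
        return value ^ (1 << random.randrange(width))

    trials, masked = 100_000, 0
    for _ in range(trials):
        sensor = random.randrange(256)
        golden = trip_logic(sensor)
        faulty = trip_logic(inject_bit_flip(sensor))
        masked += (faulty == golden)      # fault did not propagate to the output
    print(f"estimated software masking probability: {masked / trials:.3f}")
    ```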

  3. Neuroadaptive Fault-Tolerant Control of Nonlinear Systems Under Output Constraints and Actuation Faults.

    Science.gov (United States)

    Zhao, Kai; Song, Yongduan; Shen, Zhixi

    2018-02-01

    In this paper, a neuroadaptive fault-tolerant tracking control method is proposed for a class of time-delay pure-feedback systems in the presence of external disturbances and actuation faults. The proposed controller achieves prescribed transient and steady-state performance, despite uncertain time delays, output constraints, and actuation faults. By combining a tangent barrier Lyapunov-Krasovskii function with the dynamic surface control technique, the neural network unit in the developed control scheme is able to act from the very beginning and play its learning/approximating role safely during the entire system operational envelope, leading to enhanced control performance without the danger of violating the compact-set precondition. Furthermore, prescribed transient performance and output constraints are strictly ensured in the presence of nonaffine uncertainties, external disturbances, and undetectable actuation faults. The control strategy is validated by numerical simulation.

  4. Neural network-based robust actuator fault diagnosis for a non-linear multi-tank system.

    Science.gov (United States)

    Mrugalski, Marcin; Luzar, Marcel; Pazera, Marcin; Witczak, Marcin; Aubrun, Christophe

    2016-03-01

    The paper is devoted to the problem of robust actuator fault diagnosis for dynamic non-linear systems. In the proposed method, it is assumed that the diagnosed system can be modelled by a recurrent neural network, which can be transformed into a linear parameter varying form. Such a system description allows the design of a robust unknown input observer within the H∞ framework for a class of non-linear systems. The proposed approach is designed in such a way that a prescribed disturbance attenuation level is achieved with respect to the actuator fault estimation error, while guaranteeing the convergence of the observer. The robust unknown input observer enables actuator fault estimation, which allows the developed approach to be applied to fault-tolerant control tasks. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  5. Fault tolerant computing systems

    International Nuclear Information System (INIS)

    Randell, B.

    1981-01-01

    Fault tolerance involves the provision of strategies for error detection, damage assessment, fault treatment and error recovery. A survey is given of the different sorts of strategies used in highly reliable computing systems, together with an outline of recent research on the problems of providing fault tolerance in parallel and distributed computing systems. (orig.)

  6. An efficient diagnostic technique for distribution systems based on under fault voltages and currents

    Energy Technology Data Exchange (ETDEWEB)

    Campoccia, A.; Di Silvestre, M.L.; Incontrera, I.; Riva Sanseverino, E. [Dipartimento di Ingegneria Elettrica elettronica e delle Telecomunicazioni, Universita degli Studi di Palermo, viale delle Scienze, 90128 Palermo (Italy); Spoto, G. [Centro per la Ricerca Elettronica in Sicilia, Monreale, Via Regione Siciliana 49, 90046 Palermo (Italy)

    2010-10-15

    Service continuity is one of the major aspects defining the quality of electrical energy; for this reason, research in the field of fault diagnostics for distribution systems continues to expand. Moreover, the increasing interest in modern distribution system automation for management purposes gives fault diagnostics more tools to detect outages precisely and quickly. In this paper, the applicability of an efficient fault location and characterization methodology within a centralized monitoring system is discussed. The methodology, appropriate for any kind of fault, is based on an analytical model of the network lines and uses the rms values of the fundamental components extracted from transient measurements of line currents and voltages at the MV/LV substations. The fault location and identification algorithm, proposed by the authors and suitably restated, has been implemented on a microprocessor-based device that can be installed at each MV/LV substation. The speed and precision of the algorithm have been tested against the errors deriving from the fundamental extraction within the prescribed fault clearing times, and against the inherent precision of the electronic device used for computation. The tests were carried out using Matlab Simulink to simulate the faulted system. (author)
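
    The core signal-processing step, extracting the rms value of the fundamental component from a transient record, can be sketched with a DFT taken over an integer number of cycles. The waveform (fundamental plus a decaying DC offset and a harmonic) and all parameters below are illustrative assumptions.

    ```python
    import numpy as np

    f0, fs, cycles = 50.0, 3200.0, 4            # fundamental, sampling rate, window
    t = np.arange(int(cycles * fs / f0)) / fs

    # Hypothetical faulted-phase voltage: fundamental + decaying DC + 3rd harmonic
    v = (230.0 * np.sqrt(2) * np.sin(2 * np.pi * f0 * t)
         + 40.0 * np.exp(-t / 0.02)
         + 15.0 * np.sin(2 * np.pi * 3 * f0 * t))

    # DFT bin of the fundamental: bin index equals the number of cycles in the window
    spectrum = np.fft.rfft(v) / len(v)
    amplitude = 2 * abs(spectrum[cycles])
    print(f"fundamental rms: {amplitude / np.sqrt(2):.1f} V")
    ```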

  7. The mechanics of fault-bend folding and tear-fault systems in the Niger Delta

    Science.gov (United States)

    Benesh, Nathan Philip

    This dissertation investigates the mechanics of fault-bend folding using the discrete element method (DEM) and explores the nature of tear-fault systems in the deep-water Niger Delta fold-and-thrust belt. In Chapter 1, we employ the DEM to investigate the development of growth structures in anticlinal fault-bend folds. This work was inspired by observations that growth strata in active folds show a pronounced upward decrease in bed dip, in contrast to traditional kinematic fault-bend fold models. Our analysis shows that the modeled folds grow largely by parallel folding as specified by the kinematic theory; however, the process of folding over a broad axial surface zone yields a component of fold growth by limb rotation that is consistent with the patterns observed in natural folds. This result has important implications for how growth structures can be used to constrain slip and paleo-earthquake ages on active blind-thrust faults. In Chapter 2, we expand our DEM study to investigate the development of a wider range of fault-bend folds. We examine the influence of mechanical stratigraphy and quantitatively compare our models with the relationships between fold and fault shape prescribed by the kinematic theory. While the synclinal fault-bend models closely match the kinematic theory, the modeled anticlinal fault-bend folds show robust behavior that is distinct from the kinematic theory. Specifically, we observe that modeled structures maintain a linear relationship between fold shape (gamma) and fault-horizon cutoff angle (theta), rather than expressing the non-linear relationship, with two distinct modes of anticlinal folding, that is prescribed by the kinematic theory. These observations lead to a revised quantitative relationship for fault-bend folds that can serve as a useful interpretation tool. Finally, in Chapter 3, we examine the 3D relationships of tear- and thrust-fault systems in the western, deep-water Niger Delta. Using 3D seismic reflection data and new

  8. Physician Acceptance of Pharmacist Recommendations about Medication Prescribing Errors in Iraqi Hospitals

    Directory of Open Access Journals (Sweden)

    Ali Azeez Ali Al-Jumaili

    2016-08-01

    The objectives of this study were to measure the incidence and types of medication prescribing errors (MPEs) in Iraqi hospitals, to calculate for the first time the percentage of physician agreement with pharmacist medication regimen review (MRR) recommendations regarding MPEs, and to identify the factors influencing the physician agreement rate with these recommendations. Methods: Fourteen pharmacists (10 female and 4 male) reviewed each hand-written physician order for 1,506 patients who were admitted to two public hospitals in Al-Najaf, Iraq, during August 2015. The pharmacists identified medication prescribing errors using the Medscape WebMD, LLC phone application as a reference, and contacted the physicians (2 female and 34 male) in person to address the MPEs identified. Results: The pharmacists identified 78 physician orders containing 99 MPEs, an incidence of 6.57 percent of all physician orders reviewed. The patients with MPEs were taking 4.8 medications on average. The MPEs included drug-drug interactions (65.7%), incorrect doses (16.2%), unnecessary medications (8.1%), contra-indications (7.1%), incorrect drug duration (2%), and untreated conditions (1%). The physicians implemented 37 (37.4%) of the pharmacist recommendations. Three factors were significantly related to physician acceptance of pharmacist recommendations: physician specialty, pharmacist gender, and patient gender. Pediatricians were less likely (OR = 0.1) to accept pharmacist recommendations than internal medicine physicians. Male pharmacists received more positive responses from physicians (OR = 7.11) than female pharmacists. Lastly, recommendations were significantly more likely to be accepted (OR = 3.72) when the patients were female. Conclusions: The incidence of MPEs is higher in Iraqi hospitalized patients than in the U.S. and U.K., but lower than in Brazil, Ethiopia, India, and Croatia. Drug-drug interactions were the most common type of

  9. H∞ Integrated Fault Estimation and Fault Tolerant Control of Discrete-time Piecewise Linear Systems

    DEFF Research Database (Denmark)

    Tabatabaeipour, Seyed Mojtaba; Bak, Thomas

    2012-01-01

    In this paper we consider the problem of fault estimation and accommodation for discrete time piecewise linear systems. A robust fault estimator is designed to estimate the fault such that the estimation error converges to zero and H∞ performance of the fault estimation is minimized. Then, the es...

  10. A fault-tolerant software strategy for digital systems

    Science.gov (United States)

    Hitt, E. F.; Webb, J. J.

    1984-01-01

    Techniques developed for producing fault-tolerant software are described. Tolerance is required because of the impossibility of defining fault-free software. Faults are caused by humans and can appear anywhere in the software life cycle. Tolerance is effected through error detection, damage assessment, recovery, and fault treatment, followed by return of the system to service. Multiversion software comprises two or more versions of the software yielding solutions which are examined by a decision algorithm. Errors can also be detected by extrapolation from previous results or by the acceptability of results. Violations of timing specifications can reveal errors, or the system can roll back to an error-free state when a defect is detected. The software, when used in flight control systems, must not impinge on time-critical responses. Efforts are still needed to reduce the costs of developing the fault-tolerant systems.
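
    A minimal sketch of the multiversion idea: run independently developed versions of the same computation and let a decision algorithm (here, a majority vote) select the result or flag a detected error. The three "versions" below are toy functions, one of them deliberately faulty.

    ```python
    from collections import Counter

    def vote(results):
        """Majority decision over independently developed software versions."""
        value, count = Counter(results).most_common(1)[0]
        if count <= len(results) // 2:
            raise RuntimeError("no majority: treat as a detected error and recover")
        return value

    version_a = lambda x: x * x
    version_b = lambda x: x ** 2
    version_c = lambda x: x * x + 1   # faulty implementation

    print(vote([f(9) for f in (version_a, version_b, version_c)]))  # -> 81
    ```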

  11. A Novel Design for Drug-Drug Interaction Alerts Improves Prescribing Efficiency.

    Science.gov (United States)

    Russ, Alissa L; Chen, Siying; Melton, Brittany L; Johnson, Elizabette G; Spina, Jeffrey R; Weiner, Michael; Zillich, Alan J

    2015-09-01

    Drug-drug interactions (DDIs) are common in clinical care and pose serious risks for patients. Electronic health records display DDI alerts that can influence prescribers, but the interface design of DDI alerts has largely been unstudied. In this study, the objective was to apply human factors engineering principles to alert design. It was hypothesized that redesigned DDI alerts would significantly improve prescribers' efficiency and reduce prescribing errors. In a counterbalanced, crossover study with prescribers, two DDI alert designs were evaluated. Department of Veterans Affairs (VA) prescribers were video recorded as they completed fictitious patient scenarios, which included DDI alerts of varying severity. Efficiency was measured from time-stamped recordings. Prescribing errors were evaluated against predefined criteria. Efficiency and prescribing errors were analyzed with the Wilcoxon signed-rank test. Other usability data were collected on the adequacy of alert content, prescribers' use of the DDI monograph, and alert navigation. Twenty prescribers completed patient scenarios for both designs. Prescribers resolved redesigned alerts in about half the time (redesign: 52 seconds versus original design: 97 seconds; p<.001). Prescribing errors were not significantly different between the two designs. Usability results indicate that DDI alerts might be enhanced by facilitating easier access to laboratory data and dosing information and by allowing prescribers to cancel either interacting medication directly from the alert. Results also suggest that neither design provided adequate information for decision making via the primary interface. Applying human factors principles to DDI alerts improved overall efficiency. Aspects of DDI alert design that could be further enhanced prior to implementation were also identified.

  12. Optimization of electronic prescribing in pediatric patients

    NARCIS (Netherlands)

    Maat, B.

    2014-01-01

    Improving pediatric patient safety by preventing medication errors, which may result in adverse drug events and consequent healthcare expenditure, is a worldwide challenge to healthcare. In pediatrics, reported medication error rates in general, and prescribing error rates in particular, vary between

  13. Fault Analysis of Wind Turbines Based on Error Messages and Work Orders

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Larsen, Jesper Abildgaard; Stoustrup, Jakob

    2012-01-01

    In this paper, data describing the operation and maintenance of an offshore wind farm are presented and analysed. Two different sets of data are presented: the first is auto-generated error messages from the Supervisory Control and Data Acquisition (SCADA) system; the other is the work orders describing the service performed at the individual turbines. The auto-generated alarms are analysed by applying a cleaning procedure to identify the alarms related to components. A severity, occurrence, and detection analysis is performed on the work orders. The outcomes of the two analyses are then compared to identify common fault types and areas where further data analysis would be beneficial for improving the operation and maintenance of wind turbines in the future.

  14. Medication error detection in two major teaching hospitals: What are the types of errors?

    Directory of Open Access Journals (Sweden)

    Fatemeh Saghafi

    2014-01-01

    Background: The increasing number of reports on medication errors and the subsequent damage they cause, especially in medical centers, has become a growing concern for patient safety in recent decades. Patient safety, and in particular medication safety, is a major concern and challenge for health care professionals around the world. Our prospective study was designed to detect prescribing, transcribing, dispensing, and administering medication errors in two major university hospitals. Materials and Methods: After choosing 20 similar hospital wards in two large teaching hospitals in the city of Isfahan, Iran, the sequence was randomly selected. Diagrams for drug distribution were drawn with the help of the pharmacy directors. Direct observation was chosen as the method for detecting errors. A total of 50 doses were studied in each ward to detect prescribing, transcribing and administering errors. Dispensing errors were studied in 1,000 doses dispensed in each hospital pharmacy. Results: A total of 8,162 doses of medications were studied during the four stages, of which 8,000 yielded complete data for analysis. 73% of prescribing orders were incomplete and did not have all six parameters (name, dosage form, dose and measuring unit, administration route, and intervals of administration). We found 15% transcribing errors. On average, one-third of medication administrations were erroneous in both hospitals. Dispensing errors ranged between 1.4% and 2.2%. Conclusion: Although prescribing and administering account for most of the medication errors, improvements are needed in all four stages. Clear guidelines must be written and executed in both hospitals to reduce the incidence of medication errors.

  15. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    Science.gov (United States)

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    One of the predominant causes of medication errors is drug administration error; a previous study related to our investigations and reviews estimated that medication errors occurred in 6.7 out of 100 administered medication doses. We therefore aimed, using the Six Sigma approach, to propose a way to reduce these errors to fewer than 1 per 100 administered doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a general government hospital. First, we systematically studied the current medication use process. Second, we used the Six Sigma approach, utilizing the five-step DMAIC process (Define, Measure, Analyze, Improve, Control), to find out the real reasons behind such errors and to identify a workable solution for avoiding medication error incidences in daily healthcare professional practice. Data sheets were used for data collection and Pareto diagrams for analysis. Our investigation identified the real cause behind administered medication errors: the Pareto diagrams showed that the fault percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that mistakes in the prescribing phase, especially poor handwritten prescriptions (17.6% of errors in this phase), are responsible for the consequent mistakes later in the treatment process. We therefore proposed an effective, low-cost strategy based on the behavior of healthcare workers, in the form of guideline recommendations to be followed by physicians. This approach can act as a prior caution to decrease errors in the prescribing phase, which may in turn decrease administered medication error incidences to less than 1%. This improvement in behavior can be efficient in improving handwritten prescriptions and decreasing the consequent errors related to administered
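
    The Pareto step of such a DMAIC analysis amounts to ranking error categories and accumulating their percentage contributions until the dominant causes stand out. The counts below are illustrative stand-ins, not the study's data.

    ```python
    # Hypothetical error counts by medication-use phase
    counts = {"prescribing": 428, "administration": 248, "transcribing": 176,
              "dispensing": 92, "other": 56}

    total = sum(counts.values())
    cumulative = 0.0
    for phase, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        share = 100.0 * n / total
        cumulative += share
        print(f"{phase:15s} {n:4d}  {share:5.1f}%  cumulative {cumulative:5.1f}%")
    ```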

  16. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)
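
    The simplest active technique usually shown in such introductions is the three-bit repetition code, sketched below for a classical bit-flip channel: a majority vote corrects any single flip, so the logical error rate drops from p to roughly 3p² − 2p³. This is an illustrative classical analogue, not the full quantum treatment in the review.

    ```python
    import random

    def encode(bit):            # logical 0 -> [0,0,0], logical 1 -> [1,1,1]
        return [bit] * 3

    def noisy(codeword, p):     # independent bit-flip channel on each bit
        return [b ^ (random.random() < p) for b in codeword]

    def decode(codeword):       # majority vote corrects any single flip
        return int(sum(codeword) >= 2)

    p, trials = 0.05, 100_000
    fails = sum(decode(noisy(encode(0), p)) != 0 for _ in range(trials))
    print(f"logical error rate {fails / trials:.4f} vs physical {p}")
    # Expect about 3*p**2 - 2*p**3 = 0.00725, well below p = 0.05
    ```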

  17. Concatenated codes for fault tolerant quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.; Laflamme, R.; Zurek, W.

    1995-05-01

    The application of concatenated codes to fault tolerant quantum computing is discussed. We have previously shown that for quantum memories and quantum communication, a state can be transmitted with error ε provided each gate has error at most cε. We show how this can be used with Shor's fault tolerant operations to reduce the accuracy requirements when maintaining states not currently participating in the computation. Viewing Shor's fault tolerant operations as a method for reducing the error of operations, we give a concatenated implementation which promises to propagate the reduction hierarchically. This has the potential of reducing the accuracy requirements in long computations.
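
    The hierarchical error reduction can be illustrated with the standard threshold-theorem scaling, in which each concatenation level squares the normalized error; this generic form is assumed here, not necessarily the report's exact expression.

    ```python
    def concatenated_error(eps, eps_th, levels):
        """Logical error after `levels` of concatenation: ~ eps_th*(eps/eps_th)**(2**levels)."""
        return eps_th * (eps / eps_th) ** (2 ** levels)

    for k in range(4):
        print(k, f"{concatenated_error(1e-3, 1e-2, k):.1e}")
    # 0 -> 1.0e-03, 1 -> 1.0e-04, 2 -> 1.0e-06, 3 -> 1.0e-10 (doubly exponential)
    ```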

  19. A survey of the criteria for prescribing in cases of borderline refractive errors

    Directory of Open Access Journals (Sweden)

    Einat Shneor

    2016-01-01

    Conclusions: The prescribing criteria found in this study are broadly comparable with those in previous studies and with published prescribing guidelines. Subtle indications suggest that optometrists may become more conservative in their prescribing criteria with experience.

  20. On-ward participation of a hospital pharmacist in a Dutch intensive care unit reduces prescribing errors and related patient harm: an intervention study

    NARCIS (Netherlands)

    Klopotowska, Joanna E.; Kuiper, Rob; van Kan, Hendrikus J.; de Pont, Anne-Cornelie; Dijkgraaf, Marcel G.; Lie-A-Huen, Loraine; Vroom, Margreeth B.; Smorenburg, Susanne M.

    2010-01-01

    Patients admitted to an intensive care unit (ICU) are at high risk for prescribing errors and related adverse drug events (ADEs). An effective intervention to decrease this risk, based on studies conducted mainly in North America, is on-ward participation of a clinical pharmacist in an ICU team. As

  1. Measurement and analysis of operating system fault tolerance

    Science.gov (United States)

    Lee, I.; Tang, D.; Iyer, R. K.

    1992-01-01

    This paper demonstrates a methodology to model and evaluate the fault tolerance characteristics of operational software. The methodology is illustrated through case studies on three different operating systems: the Tandem GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Measurements are made on these systems for substantial periods to collect software error and recovery data. In addition to investigating basic dependability characteristics such as major software problems and error distributions, we develop two levels of models to describe error and recovery processes inside an operating system and on multiple instances of an operating system running in a distributed environment. Based on the models, reward analysis is conducted to evaluate the loss of service due to software errors and the effect of the fault-tolerance techniques implemented in the systems. Software error correlation in multicomputer systems is also investigated.

  2. Non-intercepted dose errors in prescribing anti-neoplastic treatment

    DEFF Research Database (Denmark)

    Mattsson, T O; Holm, B; Michelsen, H

    2015-01-01

    BACKGROUND: The incidence of non-intercepted prescription errors and the risk factors involved, including the impact of computerised order entry (CPOE) systems on such errors, are unknown. Our objective was to determine the incidence, type, severity, and related risk factors of non-intercepted pr....... Strategies to prevent future prescription errors could usefully focus on integrated computerised systems that can aid dose calculations and reduce transcription errors between databases....

  3. A model of methods for influencing prescribing: Part I. A review of prescribing models, persuasion theories, and administrative and educational methods.

    Science.gov (United States)

    Raisch, D W

    1990-04-01

    The purpose of this literature review is to develop a model of methods to be used to influence prescribing. Four bodies of literature were identified as being important for developing the model: (1) Theoretical prescribing models furnish information concerning factors that affect prescribing and how prescribing decisions are made. (2) Theories of persuasion provide insight into important components of educational communications. (3) Research articles of programs to improve prescribing identify types of programs that have been found to be successful. (4) Theories of human inference describe how judgments are formulated and identify errors in judgment that can play a role in prescribing. This review is presented in two parts. This article reviews prescribing models, theories of persuasion, studies of administrative programs to control prescribing, and sub-optimally designed studies of educational efforts to influence drug prescribing.

  4. Error Mitigation of Point-to-Point Communication for Fault-Tolerant Computing

    Science.gov (United States)

    Akamine, Robert L.; Hodson, Robert F.; LaMeres, Brock J.; Ray, Robert E.

    2011-01-01

    Fault tolerant systems require the ability to detect and recover from physical damage caused by the hardware's environment, faulty connectors, and system degradation over time. This ability applies to military, space, and industrial computing applications. The integrity of Point-to-Point (P2P) communication, between two microcontrollers for example, is an essential part of fault tolerant computing systems. In this paper, different methods of fault detection and recovery are presented and analyzed.
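
    One common detect-and-recover pattern for P2P links is an appended checksum with retransmission on mismatch, sketched below. The one-byte additive checksum is a deliberately minimal assumption; practical systems typically use CRCs or error-correcting codes.

    ```python
    def checksum(payload: bytes) -> int:
        """One-byte additive checksum appended to each frame."""
        return sum(payload) & 0xFF

    def send(payload: bytes) -> bytes:
        return payload + bytes([checksum(payload)])

    def receive(frame: bytes):
        payload, check = frame[:-1], frame[-1]
        if checksum(payload) != check:
            return None                  # corruption detected: request retransmission
        return payload

    frame = bytearray(send(b"sensor=42"))
    frame[3] ^= 0x08                     # single bit error on the "wire"
    print(receive(bytes(frame)))         # None -> error detected, trigger recovery
    ```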

  5. Beyond the basics: refills by electronic prescribing.

    Science.gov (United States)

    Goldman, Roberta E; Dubé, Catherine; Lapane, Kate L

    2010-07-01

    E-prescribing is part of a new generation of electronic solutions for the medical industry that may have great potential for improving work flow and communication between medical practices and pharmacies. In the US, it has been introduced with minimal monitoring of errors and general usability. This paper examines refill functionality in e-prescribing software. A mixed method study including focus groups and surveys was conducted. Qualitative data were collected in on-site focus groups or individual interviews with clinicians and medical office staff at 64 physician office practices. Focus group participants described their experiences with the refill functionality of e-prescribing software, provided suggestions for improving it, and suggested improvements in office procedures and software functionality. Overall, approximately 50% reduction in time spent each day on refills was reported. Overall reports of refill functionality were positive; but clinicians and staff identified numerous difficulties and glitches associated managing prescription refills. These glitches diminished over time. Benefits included time saved as well as patient convenience. Potential for refilling without thought because of the ease of use was noted. Clinicians and staff appreciated the ability to track whether patients are filling and refilling prescriptions. E-prescribing software for managing medication refills has not yet reached its full potential. To reduce work flow barriers and medication errors, software companies need to develop error reporting systems and response teams to deal effectively with problems experienced by users. Examining usability issues on both the medical office and pharmacy ends is required to identify the behavioral and cultural changes that accompany technological innovation and ease the transition to full use of e-prescribing software. 2010 Elsevier Ireland Ltd. All rights reserved.

  6. Which non-technical skills do junior doctors require to prescribe safely? A systematic review.

    Science.gov (United States)

    Dearden, Effie; Mellanby, Edward; Cameron, Helen; Harden, Jeni

    2015-12-01

    Prescribing errors are a major source of avoidable morbidity and mortality. Junior doctors write most in-hospital prescriptions and are the least experienced members of the healthcare team. This puts them at high risk of error and makes them attractive targets for interventions to improve prescription safety. Error analysis has shown a background of complex environments with multiple contributory conditions. Similar conditions in other high risk industries, such as aviation, have led to an increased understanding of so-called human factors and the use of non-technical skills (NTS) training to try to reduce error. To date no research has examined the NTS required for safe prescribing. The aim of this review was to develop a prototype NTS taxonomy for safe prescribing, by junior doctors, in hospital settings. A systematic search identified 14 studies analyzing prescribing behaviours and errors by junior doctors. Framework analysis was used to extract data from the studies and identify behaviours related to categories of NTS that might be relevant to safe and effective prescribing performance by junior doctors. Categories were derived from existing literature and inductively from the data. A prototype taxonomy of relevant categories (situational awareness, decision making, communication and team working, and task management) and elements was constructed. This prototype will form the basis of future work to create a tool that can be used for training and assessment of medical students and junior doctors to reduce prescribing error in the future. © 2015 The British Pharmacological Society.

  7. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated, and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
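
    The first-order (Taylor) approximation examined here can be compared directly against Monte Carlo for a two-input OR gate, as sketched below. The lognormal input distributions are a conventional choice in fault tree quantification and are assumed, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Two basic-event probabilities treated as lognormal random variables
    mu, sigma = np.log(1e-3), 0.5
    p1 = rng.lognormal(mu, sigma, 100_000)
    p2 = rng.lognormal(mu, sigma, 100_000)

    top = 1 - (1 - p1) * (1 - p2)        # OR-gate top event probability
    mc_var = top.var()

    # First-order propagation: Var(T) ~ sum_i (dT/dp_i)^2 Var(p_i)
    m1, v1 = p1.mean(), p1.var()
    m2, v2 = p2.mean(), p2.var()
    taylor_var = (1 - m2) ** 2 * v1 + (1 - m1) ** 2 * v2
    print(f"Monte Carlo {mc_var:.3e} vs first-order {taylor_var:.3e}")
    ```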

  8. Fault tree analysis: concepts and techniques

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1976-01-01

    Concepts and techniques of fault tree analysis have been developed over the past decade, and predictions from this type of analysis are now important considerations in the design of many systems such as aircraft, ships and their electronic systems, missiles, and nuclear reactor systems. Routine, hardware-oriented fault tree construction can be automated; however, considerable effort is needed in this area to bring the methodology to production status. When this status is achieved, the entire analysis of hardware systems will be automated except for the system definition step. Automated analysis is not undesirable; on the contrary, when verified on adequately complex systems, automated analysis could well become routine. It could also provide an excellent start for a more in-depth fault tree analysis that includes environmental effects, common mode failures, and human errors. Automated analysis is extremely fast and frees the analyst from routine hardware-oriented fault tree construction, as well as eliminating logic errors and errors of oversight in this part of the analysis. Automated analysis thus affords the analyst a powerful tool, allowing his prime efforts to be devoted to unearthing more subtle aspects of the modes of failure of the system
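
    For independent basic events, quantifying a fault tree reduces to nested AND/OR probability combinations, as in the hand-built sketch below; this illustrates quantification only, not the automated construction discussed in the abstract, and all event probabilities are invented.

    ```python
    def AND(*ps):    # all inputs must fail
        out = 1.0
        for p in ps:
            out *= p
        return out

    def OR(*ps):     # at least one input fails
        out = 1.0
        for p in ps:
            out *= 1.0 - p
        return 1.0 - out

    pump_a, pump_b, operator_error, power = 1e-2, 1e-2, 5e-3, 1e-4
    top = OR(AND(pump_a, pump_b), operator_error, power)
    print(f"top event probability: {top:.2e}")   # ~5.2e-03
    ```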

  9. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  10. Unit of measurement used and parent medication dosing errors.

    Science.gov (United States)

    Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L

    2014-08-01

    Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between the unit used and parent medication errors, and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error was defined as an error in knowledge of the prescribed dose or an error in observed dose measurement (compared to the intended or prescribed dose), with a >20% deviation threshold for error. Multiple logistic regression was performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; and site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose, and 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only units, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, p = .02; adjusted odds ratio = 2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, p = .04; adjusted odds ratio = 1.9; 95% confidence interval, 1.03-3.5) dose; associations were greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon- and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.
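
    The >20% deviation threshold used to define a measurement error is easy to make concrete, as below; the 7 mL kitchen-teaspoon volume is an illustrative assumption.

    ```python
    def dose_error(measured_ml: float, prescribed_ml: float, threshold: float = 0.20) -> bool:
        """Flag a dosing error when the measured dose deviates >20% from prescribed."""
        return abs(measured_ml - prescribed_ml) / prescribed_ml > threshold

    # A 5 mL (one-teaspoon) dose measured with a kitchen teaspoon holding ~7 mL
    print(dose_error(measured_ml=7.0, prescribed_ml=5.0))   # True: 40% deviation
    print(dose_error(measured_ml=5.5, prescribed_ml=5.0))   # False: 10% deviation
    ```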

  11. Microcomputer applications of, and modifications to, the modular fault trees

    International Nuclear Information System (INIS)

    Zimmerman, T.L.; Graves, N.L.; Payne, A.C. Jr.; Whitehead, D.W.

    1994-10-01

    The LaSalle Probabilistic Risk Assessment was the first major application of the modular logic fault trees after the IREP program. In the process of performing the analysis, many errors were discovered in the fault tree modules that led to difficulties in combining the modules to form the final system fault trees. These errors are corrected in the revised modules listed in this report. In addition, the application of the modules, in terms of editing them and forming them into the system fault trees, was inefficient. Originally, the editing had to be done line by line and no error checking was performed by the computer. This led to many typos and other logic errors in the construction of the modular fault tree files. Two programs were written to help alleviate this problem: (1) MODEDIT - this program allows an operator to retrieve a file for editing, edit the file for the plant-specific application, perform some general error checking while the file is being modified, and store the file for later use; and (2) INDEX - this program checks that the modules that are supposed to form one fault tree all link up appropriately before the files are loaded onto the mainframe computer. Lastly, the modules were not designed for the relay-type logic common in BWR designs but for solid-state logic. Some additional modules were defined for modeling relay logic, and an explanation and example of their use are included in this report.

  12. Fault-tolerant measurement-based quantum computing with continuous-variable cluster states.

    Science.gov (United States)

    Menicucci, Nicolas C

    2014-03-28

    A long-standing open question about Gaussian continuous-variable cluster states is whether they enable fault-tolerant measurement-based quantum computation. The answer is yes. Initial squeezing in the cluster above a threshold value of 20.5 dB ensures that errors from finite squeezing acting on encoded qubits are below the fault-tolerance threshold of known qubit-based error-correcting codes. By concatenating with one of these codes and using ancilla-based error correction, fault-tolerant measurement-based quantum computation of theoretically indefinite length is possible with finitely squeezed cluster states.

  13. The pattern of the discovery of medication errors in a tertiary hospital in Hong Kong.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2013-06-01

    The primary goal of reducing medication errors is to eliminate those that reach the patient. We aimed to study the pattern of interceptions to tackle medication errors along the medication use processes. Tertiary care hospital in Hong Kong. The 'Swiss Cheese Model' was used to explain the interceptions targeting medication error reporting over 5 years (2006-2010). Proportions of prescribing, dispensing and drug administration errors intercepted by pharmacists and nurses; proportions of prescribing, dispensing and drug administration errors that reached the patient. Our analysis included 1,268 in-patient medication errors, of which 53.4% were related to prescribing, 29.0% to administration and 17.6% to dispensing. 34.1% of all medication errors (4.9% prescribing, 26.8% drug administration and 2.4% dispensing) were not intercepted. Pharmacy staff intercepted 85.4% of the prescribing errors. Nurses detected 83.0% of dispensing and 5.0% of prescribing errors. However, 92.4% of all drug administration errors reached the patient. Having a preventive measure at each stage of the medication use process helps to prevent most errors. Most drug administration errors reach the patient as there is no defense against these. Therefore, more interventions to prevent drug administration errors are warranted.

  14. Spent fuel bundle counter sequence error manual - BRUCE NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpret the various sequence error messages that can occur and suggests probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included however where experience has indicated that several faults can occur simultaneously

  15. Spent fuel bundle counter sequence error manual - DARLINGTON NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpret the various sequence error messages that can occur and suggests probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore the card file contains sequences with only one fault at a time. Some exceptions have been included however where experience has indicated that several faults can occur simultaneously

  16. Medication errors with the use of allopurinol and colchicine: a retrospective study of a national, anonymous Internet-accessible error reporting system.

    Science.gov (United States)

    Mikuls, Ted R; Curtis, Jeffrey R; Allison, Jeroan J; Hicks, Rodney W; Saag, Kenneth G

    2006-03-01

    To more closely assess medication errors in gout care, we examined data from a national, Internet-accessible error reporting program over a 5-year reporting period. We examined data from the MEDMARX database, covering the period from January 1, 1999 through December 31, 2003. For allopurinol and colchicine, we examined error severity, source, type, contributing factors, and healthcare personnel involved in errors, and we detailed errors resulting in patient harm. Causes of error and the frequency of other error characteristics were compared for gout medications versus other musculoskeletal treatments using the chi-square statistic. Gout medication errors occurred in 39% (n = 273) of facilities participating in the MEDMARX program. Reported errors were predominantly from the inpatient hospital setting and related to the use of allopurinol (n = 524), followed by colchicine (n = 315), probenecid (n = 50), and sulfinpyrazone (n = 2). Compared to errors involving other musculoskeletal treatments, allopurinol and colchicine errors were more often ascribed to problems with physician prescribing (7% for other therapies versus 23-39% for allopurinol and colchicine, p < 0.0001) and less often due to problems with drug administration or nursing error (50% vs 23-27%, p < 0.0001). Our results suggest that inappropriate prescribing practices are characteristic of errors occurring with the use of allopurinol and colchicine. Physician prescribing practices are a potential target for quality improvement interventions in gout care.

  17. Unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in the outpatient pharmacy.

    Science.gov (United States)

    Nanji, Karen C; Rothschild, Jeffrey M; Boehne, Jennifer J; Keohane, Carol A; Ash, Joan S; Poon, Eric G

    2014-01-01

    Electronic prescribing systems have often been promoted as a tool for reducing medication errors and adverse drug events. Recent evidence has revealed that adoption of electronic prescribing systems can lead to unintended consequences such as the introduction of new errors. The purpose of this study is to identify and characterize the unrealized potential and residual consequences of electronic prescribing on pharmacy workflow in an outpatient pharmacy. A multidisciplinary team conducted direct observations of workflow in an independent pharmacy and semi-structured interviews with pharmacy staff members about their perceptions of the unrealized potential and residual consequences of electronic prescribing systems. We used qualitative methods to iteratively analyze text data using a grounded theory approach, and derive a list of major themes and subthemes related to the unrealized potential and residual consequences of electronic prescribing. We identified the following five themes: Communication, workflow disruption, cost, technology, and opportunity for new errors. These contained 26 unique subthemes representing different facets of our observations and the pharmacy staff's perceptions of the unrealized potential and residual consequences of electronic prescribing. We offer targeted solutions to improve electronic prescribing systems by addressing the unrealized potential and residual consequences that we identified. These recommendations may be applied not only to improve staff perceptions of electronic prescribing systems but also to improve the design and/or selection of these systems in order to optimize communication and workflow within pharmacies while minimizing both cost and the potential for the introduction of new errors.

  18. A fault-tolerant one-way quantum computer

    International Nuclear Information System (INIS)

    Raussendorf, R.; Harrington, J.; Goyal, K.

    2006-01-01

    We describe a fault-tolerant one-way quantum computer on cluster states in three dimensions. The presented scheme uses methods of topological error correction resulting from a link between cluster states and surface codes. The error threshold is 1.4% for local depolarizing error and 0.11% for each source in an error model with preparation, gate, storage, and measurement errors.

  19. Medication errors reported to the National Medication Error Reporting System in Malaysia: a 4-year retrospective review (2009 to 2012).

    Science.gov (United States)

    Samsiah, A; Othman, Noordin; Jamshed, Shazia; Hassali, Mohamed Azmi; Wan-Mohaina, W M

    2016-12-01

    Reporting and analysing data on medication errors (MEs) is important and contributes to a better understanding of the error-prone environment. This study aims to examine the characteristics of errors submitted to the National Medication Error Reporting System (MERS) in Malaysia. A retrospective review of reports received from 1 January 2009 to 31 December 2012 was undertaken, and descriptive statistics were applied. A total of 17,357 reported MEs were reviewed. The majority of errors were from public-funded hospitals. Near misses accounted for 86.3% of the errors, and the majority of errors (98.1%) had no harmful effects on the patients. Prescribing contributed to more than three-quarters of the overall errors (76.1%). Pharmacists detected and reported the majority of errors (92.1%). Erroneous dosage or strength of medicine (30.75%) was the leading type of error, whilst cardiovascular drugs (25.4%) were the most common category of drug involved. MERS provides rich information on the characteristics of reported MEs. The low contribution to reporting from healthcare facilities other than government hospitals and from non-pharmacists requires further investigation. Thus, a feasible approach to promote MERS among healthcare providers in both the public and private sectors needs to be formulated and strengthened. Preventive measures to minimise MEs should be directed at improving prescribing competency among the fallible prescribers identified.

  20. Fault latency in the memory - An experimental study on VAX 11/780

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1986-01-01

    Fault latency is the time between the physical occurrence of a fault and its corruption of data, causing an error. This time is difficult to measure because neither the time of occurrence of a fault nor the exact moment of generation of an error is known. This paper describes an experiment to accurately study fault latency in the memory subsystem. The experiment employs real memory data from a VAX 11/780 at the University of Illinois. Fault latency distributions are generated for s-a-0 and s-a-1 permanent fault models. Results show that the mean fault latency of an s-a-0 fault is nearly 5 times that of an s-a-1 fault. Large variations in fault latency are found for different regions of memory. An analysis-of-variance model to quantify the relative influence of various workload measures on the evaluated latency is also given.

  1. Active Fault-Tolerant Control for Wind Turbine with Simultaneous Actuator and Sensor Faults

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    The purpose of this paper is to present a novel fault-tolerant tracking control (FTC) strategy with robust fault estimation and compensation for simultaneous actuator and sensor faults. Within the fault-tolerant control framework, an FTC design method is developed for wind turbines so that they can tolerate simultaneous pitch actuator and pitch sensor faults whose first time derivatives are bounded. The paper's key contribution is a descriptor sliding-mode method: an auxiliary descriptor state vector composed of the system state vector, the actuator fault vector, and the sensor fault vector is introduced to establish a novel augmented descriptor system, from which the system state can be estimated and the faults reconstructed by designing a descriptor sliding-mode observer. Stability conditions for the estimation error dynamics are established via LMI optimization, which facilitates the determination of the design parameters. With these estimates, a fault-tolerant controller is designed that maintains the stability of the system. The effectiveness of the design strategy is verified by implementing the controller on the National Renewable Energy Laboratory's 5-MW nonlinear, high-fidelity wind turbine model (FAST) and simulating it in MATLAB/Simulink.

  2. Prescribing practices for pediatric out-patients: A case study of two ...

    African Journals Online (AJOL)

    Purpose: The objective of this study was to evaluate drug utilization pattern in the pediatric ... Medication error can affect ... medication error may be caused by many factors ... pharmacokinetic .... prescriber's performance, patients experience at.

  3. Software fault tolerance in computer operating systems

    Science.gov (United States)

    Iyer, Ravishankar K.; Lee, Inhwan

    1994-01-01

    This chapter provides data and analysis of the dependability and fault tolerance for three operating systems: the Tandem/GUARDIAN fault-tolerant system, the VAX/VMS distributed system, and the IBM/MVS system. Based on measurements from these systems, basic software error characteristics are investigated. Fault tolerance in operating systems resulting from the use of process pairs and recovery routines is evaluated. Two levels of models are developed to analyze error and recovery processes inside an operating system and interactions among multiple instances of an operating system running in a distributed environment. The measurements show that the use of process pairs in Tandem systems, which was originally intended for tolerating hardware faults, allows the system to tolerate about 70% of defects in system software that result in processor failures. The loose coupling between processors which results in the backup execution (the processor state and the sequence of events occurring) being different from the original execution is a major reason for the measured software fault tolerance. The IBM/MVS system fault tolerance almost doubles when recovery routines are provided, in comparison to the case in which no recovery routines are available. However, even when recovery routines are provided, there is almost a 50% chance of system failure when critical system jobs are involved.

  4. Design of passive fault-tolerant flight controller against actuator failures

    Directory of Open Access Journals (Sweden)

    Xiang Yu

    2015-02-01

    The problem of designing a passive fault-tolerant flight controller is addressed for prescribed normal and faulty cases. First, the considered fault-free and fault cases are formed as polytopes. Since the safety of a post-fault system is directly related to the maximum values of physical variables in the system, the peak-to-peak gain is selected to represent the relationships among the amplitudes of actuator outputs, system outputs, and reference commands. Based on parameter-dependent Lyapunov and slack methods, passive fault-tolerant flight controllers in the absence and presence of system uncertainty are designed for the actuator failure cases, respectively. Case studies of an airplane under actuator failures are carried out to validate the effectiveness of the proposed approach.

  5. Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2

    Science.gov (United States)

    2016-06-01

    specification of fault propagation in EMV2 corresponds to the Fault Propagation and Transformation Calculus (FPTC) [Paige 2009]. The following concepts... definition of security includes accidental malicious indication of anomalous behavior either from outside a system or by unauthorized crossing of a

  6. Analyzing Software Errors in Safety-Critical Embedded Systems

    Science.gov (United States)

    Lutz, Robyn R.

    1994-01-01

    This paper analyzes the root causes of safety-related software faults. Faults identified as potentially hazardous to the system are found to be distributed somewhat differently over the set of possible error causes than non-safety-related software faults.

  7. DYNAMIC SOFTWARE TESTING MODELS WITH PROBABILISTIC PARAMETERS FOR FAULT DETECTION AND ERLANG DISTRIBUTION FOR FAULT RESOLUTION DURATION

    Directory of Open Access Journals (Sweden)

    A. D. Khomonenko

    2016-07-01

    Subject of Research. Software reliability and test planning models are studied, taking into account the probabilistic nature of error detection and discovery. Modeling of software testing enables the planning of resources and final quality at early stages of project execution. Methods. Two dynamic models of testing processes (strategies) are suggested, using an error detection probability for each software module. The Erlang distribution is used to approximate arbitrary distributions of the fault resolution duration, and the exponential distribution is used to approximate fault detection. For each strategy, modified labeled graphs are built, along with differential equation systems and their numerical solutions. The latter make it possible to compute probabilistic characteristics of the test processes and states: state probabilities, distribution functions for fault detection and elimination, mathematical expectations of random variables, and the number of detected or fixed errors. Evaluation of Results. Probabilistic characteristics for software development projects were calculated using the suggested models. The strategies were compared by their quality indexes, and the debugging time required to achieve the specified quality goals was calculated. The calculation results are used for time and resource planning in new projects. Practical Relevance. The proposed models make it possible to use reliability estimates for each individual module. The Erlang approximation removes restrictions on the time distribution of the fault resolution duration. It improves the accuracy of software test process modeling and helps to take into account the power of the tests. With these models, ways to improve software reliability can be sought by generating tests that detect errors with the highest probability.
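
    The pipeline sketched in this abstract (labeled graph, differential equation system, numerical solution) can be illustrated with a toy model. The sketch below is our illustration, not the authors' model: it solves the state equations for a single fault detection/resolution cycle with an Erlang-3 resolution time, and the rates `lam`, `mu` and the stage count `k` are hypothetical.

    ```python
    # Minimal sketch (not the paper's model): state probabilities for a fault
    # detection/resolution cycle where detection is exponential (rate lam) and
    # the resolution duration is Erlang-k (k stages, each of rate k*mu),
    # following the abstract's idea of turning the labeled graph into an ODE
    # system and solving it numerically.  All rates are hypothetical.
    import numpy as np
    from scipy.integrate import solve_ivp

    lam, mu, k = 2.0, 1.0, 3        # detection rate, resolution rate, Erlang stages

    def odes(t, p):
        # p[0]: "testing" state; p[1..k]: Erlang stages of fault resolution
        dp = np.zeros(k + 1)
        dp[0] = -lam * p[0] + k * mu * p[k]     # leave testing; return after stage k
        dp[1] = lam * p[0] - k * mu * p[1]      # a detected fault enters stage 1
        for i in range(2, k + 1):
            dp[i] = k * mu * (p[i - 1] - p[i])  # progress through the Erlang stages
        return dp

    p0 = np.zeros(k + 1); p0[0] = 1.0           # start in the testing state
    sol = solve_ivp(odes, (0.0, 10.0), p0)
    print("P(testing) at t=10:", sol.y[0, -1])
    print("P(resolving) at t=10:", sol.y[1:, -1].sum())
    ```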

  8. A study on quantification of unavailability of DPPS with fault tolerant techniques considering fault tolerant techniques' characteristics

    International Nuclear Information System (INIS)

    Kim, B. G.; Kang, H. G.; Kim, H. E.; Seung, P. H.; Kang, H. G.; Lee, S. J.

    2012-01-01

    With the improvement of digital technologies, digital I and C systems have come to include a wider variety of fault tolerant techniques than conventional analog I and C systems, in order to increase fault detection and to help the system safely perform the required functions in spite of the presence of faults. In the reliability evaluation of digital systems, therefore, the fault tolerant techniques (FTTs) and their fault coverage must be considered. Several studies have modeled digital system reliability to account for the effects of FTTs. Building on a literature survey, this research attempts to develop a model to evaluate the plant reliability of the digital plant protection system (DPPS) with fault tolerant techniques, considering detection and process characteristics and human errors. Sensitivity analysis is performed to identify the important variables affecting fault management coverage and unavailability based on the proposed model.

  9. Homogeneity of small-scale earthquake faulting, stress, and fault strength

    Science.gov (United States)

    Hardebeck, J.L.

    2006-01-01

    Small-scale faulting at seismogenic depths in the crust appears to be more homogeneous than previously thought. I study three new high-quality focal-mechanism datasets of small (M angular difference between their focal mechanisms. Closely spaced earthquakes (interhypocentral distance faults of many orientations may or may not be present, only similarly oriented fault planes produce earthquakes contemporaneously. On these short length scales, the crustal stress orientation and fault strength (coefficient of friction) are inferred to be homogeneous as well, to produce such similar earthquakes. Over larger length scales (∼2-50 km), focal mechanisms become more diverse with increasing interhypocentral distance (differing on average by 40-70°). Mechanism variability on ∼2-50 km length scales can be explained by relatively small variations (∼30%) in stress or fault strength. It is possible that most of this small apparent heterogeneity in stress or strength comes from measurement error in the focal mechanisms, as negligible variation in stress or fault strength (<10%) is needed if each earthquake is assigned the optimally oriented focal mechanism within the 1-sigma confidence region. This local homogeneity in stress orientation and fault strength is encouraging, implying it may be possible to measure these parameters with enough precision to be useful in studying and modeling large earthquakes.

  10. Spent fuel bundle counter sequence error manual - RAPPS (200 MW) NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of each. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults; the card file therefore contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously.

  11. Spent fuel bundle counter sequence error manual - KANUPP (125 MW) NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message may contain adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of each. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults; the card file therefore contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously.

  12. Multi-link faults localization and restoration based on fuzzy fault set for dynamic optical networks.

    Science.gov (United States)

    Zhao, Yongli; Li, Xin; Li, Huadong; Wang, Xinbo; Zhang, Jie; Huang, Shanguo

    2013-01-28

    Based on a distributed method of bit-error-rate (BER) monitoring, a novel multi-link fault restoration algorithm is proposed for dynamic optical networks. The concept of the fuzzy fault set (FFS) is first introduced for multi-link fault localization; it includes all possible optical equipment or fiber links, each with a membership describing the possibility of a fault. Such a set is characterized by a membership function which assigns each object a grade of membership ranging from zero to one. An OSPF protocol extension is designed for flooding the BER information in the network, and the BER information can be correlated to link faults through the FFS. Based on the BER information and the FFS, the multi-link fault localization mechanism and restoration algorithm are implemented and experimentally demonstrated on a GMPLS-enabled optical network testbed with 40 wavelengths in each fiber link. Experimental results show that the novel localization mechanism performs better than the extended limited perimeter vector matching (LVM) protocol and that the restoration algorithm can improve the restoration success rate under multi-link fault scenarios.
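
    As an illustration of the fuzzy fault set idea, the sketch below assigns each monitored link a fault-membership grade computed from its flooded BER and ranks the suspects. The membership function, link names, and BER values are hypothetical stand-ins, not the authors' design.

    ```python
    # Minimal sketch of a fuzzy fault set (FFS): each monitored link gets a
    # membership grade in [0, 1] describing how strongly its measured BER
    # suggests a fault.  Thresholds, link names and BER values are hypothetical.
    import math

    def membership(ber, ber_ok=1e-12, ber_fail=1e-6):
        """Map a measured BER to a fault-membership grade in [0, 1] (log scale)."""
        if ber <= ber_ok:
            return 0.0
        if ber >= ber_fail:
            return 1.0
        return (math.log10(ber) - math.log10(ber_ok)) / (
            math.log10(ber_fail) - math.log10(ber_ok))

    monitored = {"link-A": 2e-13, "link-B": 4e-9, "link-C": 3e-7}  # flooded BERs
    ffs = {link: membership(b) for link, b in monitored.items()}

    # localize: inspect/restore links in decreasing order of fault membership
    for link, grade in sorted(ffs.items(), key=lambda kv: -kv[1]):
        print(f"{link}: fault membership {grade:.2f}")
    ```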

  13. Electronic prescribing in pediatrics: toward safer and more effective medication management.

    Science.gov (United States)

    Johnson, Kevin B; Lehmann, Christoph U

    2013-04-01

    This technical report discusses recent advances in electronic prescribing (e-prescribing) systems, including the evidence base supporting their limitations and potential benefits. Specifically, this report acknowledges that there are limited but positive pediatric data supporting the role of e-prescribing in mitigating medication errors, improving communication with dispensing pharmacists, and improving medication adherence. On the basis of these data and on the basis of federal statutes that provide incentives for the use of e-prescribing systems, the American Academy of Pediatrics recommends the adoption of e-prescribing systems with pediatric functionality. This report supports the accompanying policy statement from the American Academy of Pediatrics recommending the adoption of e-prescribing by pediatric health care providers.

  14. Recurrent fuzzy neural network backstepping control for the prescribed output tracking performance of nonlinear dynamic systems.

    Science.gov (United States)

    Han, Seong-Ik; Lee, Jang-Myung

    2014-01-01

    This paper proposes a backstepping control system that uses a tracking error constraint and recurrent fuzzy neural networks (RFNNs) to achieve a prescribed tracking performance for a strict-feedback nonlinear dynamic system. A new constraint variable was defined to generate the virtual control that forces the tracking error to fall within prescribed boundaries. An adaptive RFNN was also used to obtain the required improvement on the approximation performances in order to avoid calculating the explosive number of terms generated by the recursive steps of traditional backstepping control. The boundedness and convergence of the closed-loop system was confirmed based on the Lyapunov stability theory. The prescribed performance of the proposed control scheme was validated by using it to control the prescribed error of a nonlinear system and a robot manipulator. © 2013 ISA. Published by Elsevier Ltd. All rights reserved.

  15. Medication errors : the impact of prescribing and transcribing errors on preventable harm in hospitalised patients

    NARCIS (Netherlands)

    van Doormaal, J.E.; van der Bemt, P.M.L.A.; Mol, P.G.M.; Egberts, A.C.G.; Haaijer-Ruskamp, F.M.; Kosterink, J.G.W.; Zaal, Rianne J.

    Background: Medication errors (MEs) affect patient safety to a significant extent. Because these errors can lead to preventable adverse drug events (pADEs), it is important to know what type of ME is the most prevalent cause of these pADEs. This study determined the impact of the various types of

  16. Error rates and resource overheads of encoded three-qubit gates

    Science.gov (United States)

    Takagi, Ryuji; Yoder, Theodore J.; Chuang, Isaac L.

    2017-10-01

    A non-Clifford gate is required for universal quantum computation, and, typically, this is the most error-prone and resource-intensive logical operation on an error-correcting code. Small, single-qubit rotations are popular choices for this non-Clifford gate, but certain three-qubit gates, such as Toffoli or controlled-controlled-Z (ccz), are equivalent options that are also more suited for implementing some quantum algorithms, for instance, those with coherent classical subroutines. Here, we calculate error rates and resource overheads for implementing logical ccz with pieceable fault tolerance, a nontransversal method for implementing logical gates. We provide a comparison with a nonlocal magic-state scheme on a concatenated code and a local magic-state scheme on the surface code. We find the pieceable fault-tolerance scheme particularly advantaged over magic states on concatenated codes and in certain regimes over magic states on the surface code. Our results suggest that pieceable fault tolerance is a promising candidate for fault tolerance in a near-future quantum computer.

  17. Near field communications technology and the potential to reduce medication errors through multidisciplinary application.

    Science.gov (United States)

    O'Connell, Emer; Pegler, Joe; Lehane, Elaine; Livingstone, Vicki; McCarthy, Nora; Sahm, Laura J; Tabirca, Sabin; O'Driscoll, Aoife; Corrigan, Mark

    2016-01-01

    Patient safety requires optimal management of medications. Electronic systems are encouraged to reduce medication errors. Near field communication (NFC) is an emerging technology that may be used to develop novel medication management systems. An NFC-based system was designed to facilitate prescribing, administration and review of medications commonly used on surgical wards. Final year medical, nursing, and pharmacy students were recruited to test the electronic system in a cross-over observational setting on a simulated ward, and medication errors were compared against errors recorded using a paper-based system. A significant difference in the commission of medication errors was seen when NFC and paper-based medication systems were compared. Paper use resulted in a mean of 4.09 errors per prescribing round, while NFC prescribing resulted in a mean of 0.22 errors per simulated prescribing round (P=0.000). Likewise, medication administration errors were reduced from a mean of 2.30 per drug round with the paper system to a mean of 0.80 errors per round using NFC. An NFC-based medication system may be used to effectively reduce medication errors in a simulated ward environment.

  18. Managing systems faults on the commercial flight deck: Analysis of pilots' organization and prioritization of fault management information

    Science.gov (United States)

    Rogers, William H.

    1993-01-01

    In rare instances, flight crews of commercial aircraft must manage complex systems faults in addition to all their normal flight tasks. Pilot errors in fault management have been attributed, at least in part, to an incomplete or inaccurate awareness of the fault situation. The current study is part of a program aimed at assuring that the types of information potentially available from an intelligent fault management aiding concept developed at NASA Langley called 'Faultfinder' (see Abbott, Schutte, Palmer, and Ricks, 1987) are an asset rather than a liability: additional information should improve pilot performance and aircraft safety, but it should not confuse, distract, overload, mislead, or generally exacerbate already difficult circumstances.

  19. Fault diagnosis system of electromagnetic valve using neural network filter

    International Nuclear Information System (INIS)

    Hayashi, Shoji; Odaka, Tomohiro; Kuroiwa, Jousuke; Ogura, Hisakazu

    2008-01-01

    This paper is concerned with gas leakage fault detection in electromagnetic valves using a neural network filter. In modern plants, the ability to detect and identify gas leakage faults is becoming increasingly important. The main difficulty in detecting gas leakage faults from sound signals lies in the fact that practical plants are usually very noisy. To overcome this difficulty, a neural network filter is used to eliminate background noise and raise the signal-to-noise ratio of the sound signal. The background noise is treated as a dynamic system, and an accurate mathematical model of this system can be established with a neural network filter. The prediction error between predicted and measured values constitutes the output of the filter. If the prediction error is zero, there is no leakage; if it exceeds a certain value, there is a leakage fault. Through application to practical pneumatic systems, it is verified that the neural network filter is effective in gas leakage detection. (author)
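
    The residual logic described in this record is easy to illustrate. In the sketch below, a one-step linear predictor stands in for the paper's neural network filter: it is fitted on known leak-free sound data, and an alarm is raised when the prediction error exceeds a threshold calibrated on healthy data. All signal parameters and the threshold rule are hypothetical.

    ```python
    # Minimal sketch of residual-based leak detection: a predictor models the
    # background noise, and the prediction error drives the alarm.  A linear
    # one-step predictor stands in for the paper's neural network.
    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.normal(0.0, 0.1, 500)       # plant background noise (healthy)
    signal[300:] += 0.5                      # a leak component appears at t=300

    # fit a one-step predictor on known leak-free data (stand-in for the NN)
    a = np.dot(signal[:299], signal[1:300]) / np.dot(signal[:299], signal[:299])

    residual = signal[1:] - a * signal[:-1]  # the "prediction error"

    threshold = 4 * residual[:250].std()     # calibrated on healthy samples
    alarms = np.flatnonzero(np.abs(residual) > threshold)
    print("first alarm at sample:", alarms[0] if alarms.size else "none")
    ```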

  20. A fault tolerant system by using distributed RTOS

    International Nuclear Information System (INIS)

    Ge Yingan; Liu Songqiang; Wang Yanfang

    1999-01-01

    The author describes the design and implementation of a prototype distributed fault-tolerant system, developed under the QNX RTOS by networking two standard PCs. By using a watchdog timer for error detection, the system can tolerate fail-silent and transient faults of a single node.

  1. Gearbox fault diagnosis based on time-frequency domain synchronous averaging and feature extraction technique

    Science.gov (United States)

    Zhang, Shengli; Tang, Jiong

    2016-04-01

    The gearbox is one of the most vulnerable subsystems in a wind turbine; its health status significantly affects the efficiency and function of the entire system. Vibration-based fault diagnosis methods are prevalently applied nowadays. However, vibration signals are always contaminated by noise that comes from data acquisition errors, structural geometric errors, operation errors, etc. As a result, it is difficult to identify potential gear failures directly from vibration signals, especially for early-stage faults. This paper utilizes a synchronous averaging technique in the time-frequency domain to remove the non-synchronous noise and enhance the fault-related time-frequency features. The enhanced time-frequency information is further employed in gear fault classification and identification through feature extraction algorithms including Kernel Principal Component Analysis (KPCA), Multilinear Principal Component Analysis (MPCA), and Locally Linear Embedding (LLE). Results show that the LLE approach is the most effective at classifying and identifying different gear faults.
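
    The synchronous-averaging step can be sketched in the plain time domain (the paper applies it in the time-frequency domain). Assuming a tachometer gives a fixed number of samples per shaft revolution, averaging across revolutions cancels noise that is not synchronous with rotation; all signal parameters below are hypothetical.

    ```python
    # Minimal sketch of time-synchronous averaging: samples are segmented per
    # shaft revolution and averaged, cancelling non-synchronous noise.
    import numpy as np

    rev_len, n_revs = 512, 50    # samples per revolution (from a tachometer), revs
    rng = np.random.default_rng(1)

    # deterministic gear-mesh component: 20 cycles per revolution (20 teeth)
    gear_mesh = np.sin(2 * np.pi * 20 * np.arange(rev_len) / rev_len)
    revolutions = gear_mesh + rng.normal(0, 1.0, (n_revs, rev_len))  # noisy revs

    tsa = revolutions.mean(axis=0)           # the synchronous average

    snr_raw = gear_mesh.std() / (revolutions[0] - gear_mesh).std()
    snr_tsa = gear_mesh.std() / (tsa - gear_mesh).std()
    print(f"SNR raw ~ {snr_raw:.2f}, after TSA ~ {snr_tsa:.2f} (~ sqrt(50) gain)")
    ```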

  2. A Review Of Fault Tolerant Scheduling In Multicore Systems

    Directory of Open Access Journals (Sweden)

    Shefali Malhotra

    2015-05-01

    In this paper we discuss various fault tolerant task scheduling algorithms for multicore systems, based on hardware and software. One hardware-based algorithm is a blend of Triple Modular Redundancy and Double Modular Redundancy, in which the Architectural Vulnerability Factor is considered in scheduling decisions in addition to the EDF and LLF scheduling algorithms. In most real-time systems the dominant part is shared memory. A low-overhead software-based fault tolerance approach can be implemented at user-space level so that it requires no changes at the application level. Here redundant multi-threaded processes are used, with which soft errors can be detected and recovered from. This method gives low-overhead, fast error detection and recovery; the overhead incurred ranges from 0 to 18% for the selected benchmarks. The hybrid scheduling method is another scheduling approach for real-time systems: dynamic fault tolerant scheduling gives a high feasibility rate, while task criticality is used to select the type of fault recovery method in order to tolerate the maximum number of faults.
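
    The redundant multi-threading idea summarized in this entry can be sketched as follows: the same computation is submitted to two threads, the results are compared to detect a soft error, and a mismatch triggers re-execution. The fault injection and all numbers are purely illustrative, not the reviewed method itself.

    ```python
    # Minimal sketch of software-only redundant execution: duplicate the
    # computation in two threads, compare, and retry on mismatch.
    import random
    from concurrent.futures import ThreadPoolExecutor

    def compute(x, flaky=False):
        result = sum(i * i for i in range(x))
        if flaky and random.random() < 0.1:     # simulated transient bit flip
            result ^= 1 << random.randrange(32)
        return result

    def protected(x, max_retries=5):
        with ThreadPoolExecutor(max_workers=2) as pool:
            for _ in range(max_retries):
                a = pool.submit(compute, x, True)
                b = pool.submit(compute, x, True)
                if a.result() == b.result():    # agreement: accept the result
                    return a.result()
            raise RuntimeError("persistent mismatch: possible hard fault")

    print(protected(10_000))
    ```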

  3. Multiple Embedded Processors for Fault-Tolerant Computing

    Science.gov (United States)

    Bolotin, Gary; Watson, Robert; Katanyoutanant, Sunant; Burke, Gary; Wang, Mandy

    2005-01-01

    A fault-tolerant computer architecture has been conceived in an effort to reduce vulnerability to single-event upsets (spurious bit flips caused by impingement of energetic ionizing particles or photons). As in some prior fault-tolerant architectures, the redundancy needed for fault tolerance is obtained by use of multiple processors in one computer. Unlike prior architectures, the multiple processors are embedded in a single field-programmable gate array (FPGA). What makes this new approach practical is the recent commercial availability of FPGAs that are capable of having multiple embedded processors. A working prototype (see figure) consists of two embedded IBM PowerPC 405 processor cores and a comparator built on a Xilinx Virtex-II Pro FPGA. This relatively simple instantiation of the architecture implements an error-detection scheme. A planned future version, incorporating four processors and two comparators, would correct some errors in addition to detecting them.

  4. Medical Errors in Cyprus: The 2005 Eurobarometer Survey

    Directory of Open Access Journals (Sweden)

    Andreas Pavlakis

    2012-01-01

    Background: Medical errors have been highlighted in recent years by different agencies, scientific bodies and research teams alike. We sought to explore the issue of medical errors in Cyprus using data from the Eurobarometer survey. Methods: Data from the special Eurobarometer survey conducted in 2005 across all European Union countries (EU-25) and the acceding countries were obtained from the corresponding EU office. Statistical analyses including logistic regression models were performed using SPSS. Results: A total of 502 individuals participated in the Cyprus survey. About 90% reported that they had often or sometimes heard about medical errors, while 22% reported that they or a family member had suffered a serious medical error in a local hospital. In addition, 9.4% reported a serious problem from a prescribed medicine. We also found statistically significant differences across age groups and gender and between rural and urban residents. Finally, using multivariable-adjusted logistic regression models, we found that residents of rural areas were more likely to have suffered a serious medical error in a local hospital or from a prescribed medicine. Conclusion: Our study shows that the vast majority of residents in Cyprus, in parallel with other Europeans, worry about medical errors, and a significant percentage report having suffered a serious medical error at a local hospital or from a prescribed medicine. The results of our study could help the medical community in Cyprus and society at large to enhance its vigilance with respect to medical errors in order to improve medical care.

  5. Residents' numeric inputting error in computerized physician order entry prescription.

    Science.gov (United States)

    Wu, Xue; Wu, Changxu; Zhang, Kan; Wei, Dong

    2016-04-01

    Computerized physician order entry (CPOE) systems with embedded clinical decision support (CDS) can significantly reduce certain types of prescription error. However, prescription errors still occur. Various factors, such as the numeric inputting methods used in human-computer interaction (HCI), produce different error rates and types, but have received relatively little attention. This study aimed to examine the effects of numeric inputting methods and urgency levels on numeric inputting errors in prescriptions, as well as to categorize the types of errors. Thirty residents participated in four prescribing tasks in which two factors were manipulated: numeric inputting method (numeric row in the main keyboard vs. numeric keypad) and urgency level (urgent vs. non-urgent situation). Multiple aspects of participants' prescribing behavior were measured in sober prescribing situations. The results revealed that in urgent situations, participants were prone to make mistakes when using the numeric row in the main keyboard. With control of performance in the sober prescribing situation, the effects of the input methods disappeared, and urgency was found to play a significant role in the generalized linear model. Most errors were either omission or substitution types, but the proportions of transposition and intrusion error types were significantly higher than in previous research. Among the numbers 3, 8, and 9, which were the less common digits used in prescriptions, the error rate was higher, posing a great risk to patient safety. Urgency played a more important role in CPOE numeric typing errors than typing skills and typing habits. It is recommended that the numeric keypad, which had lower error rates, be used for inputting in urgent situations. An alternative design could consider increasing the sensitivity of the keys with lower frequency of occurrence and decimals. To improve the usability of CPOE, numeric keyboard design and error detection could benefit from spatial

  6. Enhanced fault-tolerant quantum computing in d-level systems.

    Science.gov (United States)

    Campbell, Earl T

    2014-12-05

    Error-correcting codes protect quantum information and form the basis of fault-tolerant quantum computing. Leading proposals for fault-tolerant quantum computation require codes with an exceedingly rare property, a transversal non-Clifford gate. Codes with the desired property are presented for d-level qudit systems with prime d. The codes use n=d-1 qudits and can detect up to ∼d/3 errors. We quantify the performance of these codes for one approach to quantum computation known as magic-state distillation. Unlike prior work, we find performance is always enhanced by increasing d.

  7. Fault-tolerant architectures for superconducting qubits

    International Nuclear Information System (INIS)

    DiVincenzo, David P

    2009-01-01

    In this short review, I draw attention to new developments in the theory of fault tolerance in quantum computation that may give concrete direction to future work in the development of superconducting qubit systems. The basics of quantum error-correction codes, which I will briefly review, have not significantly changed since their introduction 15 years ago. But an interesting picture has emerged of an efficient use of these codes that may put fault-tolerant operation within reach. It is now understood that two-dimensional surface codes, close relatives of the original toric code of Kitaev, can be adapted as shown by Raussendorf and Harrington to effectively perform logical gate operations in a very simple planar architecture, with error thresholds for fault-tolerant operation simulated to be 0.75%. This architecture uses topological ideas in its functioning, but it is not 'topological quantum computation': there are no non-abelian anyons in sight. I offer some speculations on the crucial pieces of superconducting hardware that could be demonstrated in the next couple of years that would be clear stepping stones towards this surface-code architecture.

  8. Common errors of drug administration in infants: causes and avoidance.

    Science.gov (United States)

    Anderson, B J; Ellis, J F

    1999-01-01

    Drug administration errors are common in infants. Although the infant population has a high exposure to drugs, there are few data concerning pharmacokinetics or pharmacodynamics, or the influence of paediatric diseases on these processes. Children remain therapeutic orphans. Formulations are often suitable only for adults; in addition, the lack of maturation of drug elimination processes, alteration of body composition and influence of size render the calculation of drug doses complex in infants. The commonest drug administration error in infants is one of dose, and the commonest hospital site for this error is the intensive care unit. Drug errors are a consequence of system error, and preventive strategies are possible through system analysis. The goal of a zero drug error rate should be aggressively sought, with systems in place that aim to eliminate the effects of inevitable human error. This involves review of the entire system from drug manufacture to drug administration. The nuclear industry, telecommunications and air traffic control services all practise error reduction policies with zero error as a clear goal, not by finding fault in the individual, but by identifying faults in the system and building into that system mechanisms for picking up faults before they occur. Such policies could be adapted to medicine using interventions both specific (the production of formulations which are for children only and clearly labelled, regular audit by pharmacists, legible prescriptions, standardised dose tables) and general (paediatric drug trials, education programmes, nonpunitive error reporting) to reduce the number of errors made in giving medication to infants.

  9. Algorithmic fault tree construction by component-based system modeling

    International Nuclear Information System (INIS)

    Majdara, Aref; Wakabayashi, Toshio

    2008-01-01

    Computer-aided fault tree generation can be easier, faster and less vulnerable to errors than conventional manual fault tree construction. In this paper, a new approach for algorithmic fault tree generation is presented. The method mainly consists of a component-based system modeling procedure and a trace-back algorithm for fault tree synthesis. Components, as the building blocks of systems, are modeled using function tables and state transition tables. The proposed method can be used for a wide range of systems with various kinds of components, if an inclusive component database is developed. (author)
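
    A toy version of the trace-back synthesis may help: each component's function table maps an output deviation to its possible causes, and the tree is grown by tracing backward from the top event. The pump/valve model and the table format below are hypothetical illustrations, not the authors' notation.

    ```python
    # Minimal sketch of trace-back fault tree synthesis from function tables.
    import json

    # (component, deviation) -> list of causes; a cause is either a basic
    # event (a string) or another (component, deviation) pair to expand
    FUNCTION_TABLES = {
        ("pump", "no_flow_out"): [("valve", "no_flow_out"), "pump_fails_stopped"],
        ("valve", "no_flow_out"): ["valve_stuck_closed", ("source", "no_supply")],
        ("source", "no_supply"): ["tank_empty"],
    }

    def trace_back(node):
        """Return a nested OR-tree of causes for a (component, deviation) event."""
        causes = FUNCTION_TABLES.get(node, [])
        return {"event": node,
                "OR": [trace_back(c) if isinstance(c, tuple) else c for c in causes]}

    print(json.dumps(trace_back(("pump", "no_flow_out")), indent=2))
    ```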

  10. Objective Function and Learning Algorithm for the General Node Fault Situation.

    Science.gov (United States)

    Xiao, Yi; Feng, Rui-Bin; Leung, Chi-Sing; Sum, John

    2016-04-01

    Fault tolerance is one interesting property of artificial neural networks. However, the existing fault models are able to describe only limited node fault situations, such as stuck-at-zero and stuck-at-one; there is no general model that is able to describe a large class of node fault situations. This paper studies the performance of faulty radial basis function (RBF) networks for the general node fault situation. We first propose a general node fault model that is able to describe a large class of node fault situations, such as stuck-at-zero, stuck-at-one, and stuck-at levels with an arbitrary distribution. Afterward, we derive an expression describing the performance of faulty RBF networks. An objective function is then identified from the formula, and with it a training algorithm for the general node fault situation is developed. Finally, a mean prediction error (MPE) formula that is able to estimate the test set error of faulty networks is derived, and its application in the selection of basis width is elucidated. Simulation experiments are then performed to demonstrate the effectiveness of the proposed method.
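
    The general node fault model can be illustrated empirically. In the sketch below, a least-squares-trained RBF network is evaluated with each hidden node independently stuck, with some probability, at a level drawn from an arbitrary (here uniform) distribution. The toy task, basis widths, and fault rate are hypothetical, and the error is measured by simulation rather than by the paper's MPE formula.

    ```python
    # Minimal sketch: measure an RBF network's error under general node faults
    # (nodes stuck at levels drawn from an arbitrary distribution).
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(-3, 3, 200); y = np.sin(x)               # toy regression task
    centers = np.linspace(-3, 3, 20); width = 0.5

    Phi = np.exp(-(x[:, None] - centers[None, :])**2 / (2 * width**2))
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]               # train output weights

    def faulty_mse(p_fault=0.1, trials=500):
        errs = []
        for _ in range(trials):
            h = Phi.copy()
            mask = rng.random(len(centers)) < p_fault        # which nodes fail
            h[:, mask] = rng.uniform(0, 1, mask.sum())       # stuck at a random level
            errs.append(np.mean((h @ w - y) ** 2))
        return np.mean(errs)

    print("fault-free MSE:", np.mean((Phi @ w - y) ** 2))
    print("faulty MSE (p=0.1):", faulty_mse())
    ```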

  11. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

    Directory of Open Access Journals (Sweden)

    Evelyn Porter

    2014-07-01

    Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by trainees and to identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: We analyzed data from the simulated resuscitations for the occurrence of prescribing medication errors. We performed univariate analysis of each variable against the medication error rate and a separate multiple logistic regression analysis on the significant univariate variables to assess the associations between the selected variables. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7%-39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping more than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed that only the presence of a pharmacist remained significantly associated with decreased medication error, with an odds ratio of 0.09 (95% CI 0.01-0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.

  12. Medication errors in the Middle East countries: a systematic review of the literature.

    Science.gov (United States)

    Alsulami, Zayed; Conroy, Sharon; Choonara, Imti

    2013-04-01

    Medication errors are a significant global concern and can cause serious medical consequences for patients. Little is known about medication errors in Middle Eastern countries. The objectives of this systematic review were to review studies of the incidence and types of medication errors in Middle Eastern countries and to identify the main contributory factors involved. A systematic review of the literature related to medication errors in Middle Eastern countries was conducted in October 2011 using the following databases: Embase, Medline, Pubmed, the British Nursing Index and the Cumulative Index to Nursing & Allied Health Literature. The search strategy included all ages and languages. Inclusion criteria were that the studies assessed or discussed the incidence of medication errors and contributory factors to medication errors during the medication treatment process in adults or in children. Forty-five studies from 10 of the 15 Middle Eastern countries met the inclusion criteria. Nine (20%) studies focused on medication errors in paediatric patients. Twenty-one focused on prescribing errors, 11 measured administration errors, 12 were interventional studies and one assessed transcribing errors. Dispensing and documentation errors were inadequately evaluated. Error rates varied from 7.1% to 90.5% for prescribing and from 9.4% to 80% for administration. The most common types of prescribing errors reported were incorrect dose (with an incidence rate from 0.15% to 34.8% of prescriptions), wrong frequency and wrong strength. Computerised physician order entry and clinical pharmacist input were the main interventions evaluated. Poor knowledge of medicines was identified as a contributory factor for errors by both doctors (prescribers) and nurses (when administering drugs). Most studies did not assess the clinical severity of the medication errors. Studies related to medication errors in the Middle Eastern countries were relatively few in number and of poor quality.

  13. Design of fault tolerant control system for steam generator using fuzzy logic

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myung Ki; Seo, Mi Ro [Korea Electric Power Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A controller and sensor fault tolerant system for a steam generator is designed with fuzzy logic. The proposed fault tolerant redundant system is composed of a supervisor and two fuzzy weighting modulators. The supervisor alternately checks the controller-induced and sensor-induced performance to identify which part, the controller or a sensor, is faulty. To analyze controller-induced performance, both the error and the change in error of the system output are chosen as fuzzy variables. The fuzzy logic for sensor-induced performance uses two variables: the deviation between two sensor outputs and its frequency. The fuzzy weighting modulator generates an output signal compensated for the faulty input signal. Simulations show that the proposed fault tolerant control scheme regulates the steam generator water level well by suppressing the effects of faults in either controllers or sensors. Therefore, by duplicating sensors and controllers with the proposed fault tolerant scheme, the reliability of both the steam generator control and sensor system and that of the power plant increases even more. 2 refs., 9 figs., 1 tab. (Author)
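
    The fuzzy weighting idea can be sketched as follows: two redundant sensor readings are blended with weights that shrink as each reading deviates from a model estimate, so a drifting sensor is smoothly voted out. The triangular membership and all numbers below are hypothetical, not the paper's design.

    ```python
    # Minimal sketch of a fuzzy weighting modulator for two redundant sensors.
    def weight(deviation, tol=2.0):
        """Triangular membership: 1 at zero deviation, 0 beyond tol."""
        return max(0.0, 1.0 - abs(deviation) / tol)

    def fused_level(s1, s2, model_estimate):
        w1 = weight(s1 - model_estimate)
        w2 = weight(s2 - model_estimate)
        if w1 + w2 == 0.0:                       # both implausible: fall back to model
            return model_estimate
        return (w1 * s1 + w2 * s2) / (w1 + w2)   # defuzzified, compensated signal

    # sensor 2 develops a +3.5 drift; the fused output stays near the true level
    print(fused_level(50.1, 53.5, model_estimate=50.0))   # ~= 50.1
    ```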

  14. Negligence, genuine error, and litigation

    Science.gov (United States)

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  15. Data Error Detection and Recovery in Embedded Systems: a Literature Review

    Directory of Open Access Journals (Sweden)

    Venu Babu Thati

    2017-06-01

    This paper presents a literature review on data flow error detection and recovery techniques in embedded systems. In recent years, embedded systems have been used more and more in an enormous number of applications, from small mobile devices to big medical devices. At the same time, it is becoming important for embedded developers to make embedded systems fault-tolerant, and error detection and recovery mechanisms are effective techniques to consider. Fault tolerance can be achieved by using both hardware and software techniques. This literature review focuses on software-based techniques, since hardware-based techniques need extra hardware and add cost per product, whereas software-based techniques need no or only limited extra hardware. A review of various existing data flow error detection and error recovery techniques is given along with their strengths and weaknesses. Such information is useful for identifying the better techniques among the others.

  16. Effect of Pharmacist Participation During Physician Rounds and Prescription Error in the Intensive Care Unit

    Directory of Open Access Journals (Sweden)

    Marlina A. Turnodihardjo

    2016-09-01

    Patient safety is now a prominent issue in pharmaceutical care because adverse drug events are common in hospitalized patients. The majority of errors are likely to occur during prescribing, which is the first stage of the pharmacy process. Prescription errors occur mostly in the Intensive Care Unit (ICU), owing to the severity of illness of its patients as well as the large number of medications prescribed. Pharmacist participation can reduce prescribing errors made by doctors. The main objective of this study was to determine the effect of pharmacist participation during physician rounds on prescription errors in the ICU. This study had a quasi-experimental design with a one-group pre-post test. A prospective study was conducted from April to May 2015 by screening 110 samples of orders. Screening was done to identify the types of prescription errors, defined as errors in the prescription writing process: incomplete information or prescriptions not according to agreement. The Mann-Whitney test was used to analyze the differences in prescribing errors. The results showed a difference between prescription errors before and during pharmacist participation (p<0.05). There was also a significant negative correlation between the frequency of pharmacist recommendations on drug ordering and prescription errors (r= -0.638; p<0.05). Pharmacist participation is thus one strategy that can be adopted to prevent prescribing errors and to implement collaboration between doctors and pharmacists. In other words, a supporting hospital management system that encourages interpersonal communication among healthcare professionals is needed.

  17. Internal Leakage Fault Detection and Tolerant Control of Single-Rod Hydraulic Actuators

    Directory of Open Access Journals (Sweden)

    Jianyong Yao

    2014-01-01

    The integration of internal leakage fault detection and tolerant control for single-rod hydraulic actuators is presented in this paper. Fault detection is a potential technique to provide efficient condition monitoring and/or preventive maintenance, and fault tolerant control is a critical method to improve the safety and reliability of hydraulic servo systems. Based on quadratic Lyapunov functions, a performance-oriented fault detection method is proposed, which has a simple structure and is easy to implement in practice. Its main feature is that, while a prescribed performance index is satisfied (even if a slight fault has occurred), no fault is alarmed; otherwise (i.e., when a severe fault has occurred), the fault is detected and a fault tolerant controller is activated. The proposed tolerant controller, which is based on the parameter adaptive methodology, is also easy to realize, and the learning mechanism is simple, since only the internal leakage is considered in parameter adaptation and thus the persistent excitation (PE) condition is easily satisfied. After activation of the fault tolerant controller, the control performance is gradually recovered. Simulation results on a hydraulic servo system with both abrupt and incipient internal leakage faults demonstrate the effectiveness of the proposed fault detection and tolerant control method.
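
    A stripped-down version of the performance-oriented detection logic: the tracking error is monitored against a prescribed, shrinking envelope, and only a violation (a severe fault) raises the alarm and activates the tolerant controller. The dynamics, envelope, and fault profile below are hypothetical, not the paper's hydraulic model.

    ```python
    # Minimal sketch: alarm only when the error leaves a prescribed envelope.
    import numpy as np

    dt, T = 0.01, 6.0
    t = np.arange(0, T, dt)
    bound = 0.05 + 0.45 * np.exp(-1.5 * t)          # prescribed, shrinking envelope

    err = 0.3 * np.exp(-2.0 * t)                    # healthy tracking error decays
    err[t > 3.0] += 0.002 * np.cumsum(t[t > 3.0] - 3.0)   # leakage degrades tracking

    tolerant_active = False
    for k, e in enumerate(np.abs(err)):
        if e > bound[k] and not tolerant_active:
            print(f"fault detected at t = {t[k]:.2f} s; activating tolerant controller")
            tolerant_active = True
            # in the real scheme, adaptation of the leakage parameter would now
            # gradually recover the tracking performance
    ```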

  18. Links between N-modular redundancy and the theory of error-correcting codes

    Science.gov (United States)

    Bobin, V.; Whitaker, S.; Maki, G.

    1992-01-01

    N-Modular Redundancy (NMR) is one of the best known fault tolerance techniques. Replication of a module to achieve fault tolerance is in some ways analogous to the use of a repetition code, where an information symbol is replicated as parity symbols in a codeword. Linear Error-Correcting Codes (ECC) use linear combinations of information symbols as parity symbols, which are used to generate syndromes for error patterns. These observations indicate links between the theory of ECC and the use of hardware redundancy for fault tolerance. In this paper, we explore some of these links and show examples of NMR systems where identification of good and failed elements is accomplished in a manner similar to error correction using linear ECCs.
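
    The correspondence can be made concrete: triple modular redundancy (TMR) is the length-3 repetition code, majority voting is its decoder, and the pattern of disagreements with the voted value acts as a syndrome identifying the failed module. The sketch below is our illustration of that link, with hypothetical module outputs.

    ```python
    # Minimal sketch of TMR as a repetition code with a syndrome.
    def tmr_vote(a, b, c):
        voted = (a & b) | (b & c) | (a & c)      # bitwise majority of 3 words
        syndrome = (a != voted, b != voted, c != voted)
        return voted, syndrome

    # module B suffers a single bit flip in an 8-bit result
    out, syndrome = tmr_vote(0b1010_0001, 0b1010_0101, 0b1010_0001)
    print(f"corrected output: {out:#010b}")      # the flip is masked by the vote
    if any(syndrome):                            # syndrome pinpoints the failure
        print("failed module:", ["A", "B", "C"][syndrome.index(True)])
    ```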

  19. A Method to Simultaneously Detect the Current Sensor Fault and Estimate the State of Energy for Batteries in Electric Vehicles.

    Science.gov (United States)

    Xu, Jun; Wang, Jing; Li, Shiying; Cao, Binggang

    2016-08-19

    Recently, state of energy (SOE) has become one of the most fundamental parameters for battery management systems in electric vehicles. Current information is critical in SOE estimation, and a current sensor is usually utilized to obtain the latest current information. However, if the current sensor fails, the SOE estimation may suffer large error. This paper therefore attempts to make the following contributions: current sensor fault detection and SOE estimation are realized simultaneously. Using a proportional integral observer (PIO) based method, the current sensor fault can be accurately estimated. By taking advantage of the accurately estimated current sensor fault, the influence caused by the fault can be eliminated and compensated, so that the SOE estimation results are influenced little by the fault. In addition, a simulation and experimental workbench is established to verify the proposed method. The results indicate that the current sensor fault can be estimated accurately. Simultaneously, the SOE can also be estimated accurately, and the estimation error is influenced little by the fault: the maximum SOE estimation error is less than 2%, even though the large current error caused by the current sensor fault still exists.

  20. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    Science.gov (United States)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties on the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem under the formulation of the misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of
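
    A plausible rendering of the misfit covariance formulation described in this record, assuming Gaussian statistics (the notation is ours, not necessarily the authors'): observational errors C_d and geometry-induced prediction errors C_p combine into a single misfit covariance, with C_p built from the sensitivity of the predictions to the uncertain geometry parameters (e.g., dip and position).

    ```latex
    % C_chi: misfit covariance; C_d: observation errors; C_p: epistemic
    % (geometry-induced) prediction errors.  The psi_i are uncertain geometry
    % parameters (dip, position) with prior variances sigma_{psi_i}^2, and
    % g(m) is the forward prediction for source model m.
    \[
      \mathbf{C}_{\chi} = \mathbf{C}_d + \mathbf{C}_p,
      \qquad
      \mathbf{C}_p \approx \sum_{i}
        \frac{\partial \mathbf{g}(\mathbf{m})}{\partial \psi_i}\,
        \sigma_{\psi_i}^{2}\,
        \left(\frac{\partial \mathbf{g}(\mathbf{m})}{\partial \psi_i}\right)^{\!\top}
    \]
    \[
      p(\mathbf{d}\mid\mathbf{m}) \propto
      \exp\!\left[-\tfrac{1}{2}
        \big(\mathbf{d}-\mathbf{g}(\mathbf{m})\big)^{\!\top}
        \mathbf{C}_{\chi}^{-1}
        \big(\mathbf{d}-\mathbf{g}(\mathbf{m})\big)\right]
    \]
    ```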

  1. Fault tolerance with noisy and slow measurements and preparation.

    Science.gov (United States)

    Paz-Silva, Gerardo A; Brennen, Gavin K; Twamley, Jason

    2010-09-03

    It is not widely appreciated that measurement-free quantum error correction protocols can be designed to achieve fault-tolerant quantum computing. Despite their potential advantages in relaxing accuracy, speed, and addressing requirements, they have usually been overlooked because they are expected to yield a very poor threshold. We show that this is not the case. We design fault-tolerant circuits for the 9-qubit Bacon-Shor code and find an error threshold for unitary gates and preparation of p_thresh^(p,g) = 3.76×10^(-5) (30% of the best known result for the same code using measurement), while admitting error rates of up to 1/3 for measurements and placing no constraints on measurement speed. We further show that demanding gate error rates sufficiently below the threshold pushes the preparation threshold up to p_thresh^(p) = 1/3.

  2. Scalable error correction in distributed ion trap computers

    International Nuclear Information System (INIS)

    Oi, Daniel K. L.; Devitt, Simon J.; Hollenberg, Lloyd C. L.

    2006-01-01

    A major challenge for quantum computation in ion trap systems is scalable integration of error correction and fault tolerance. We analyze a distributed architecture with rapid high-fidelity local control within nodes and entangled links between nodes alleviating long-distance transport. We demonstrate fault-tolerant operator measurements which are used for error correction and nonlocal gates. This scheme is readily applied to linear ion traps, which cannot be scaled up beyond a few ions per individual trap but which have access to a probabilistic entanglement mechanism. A proof-of-concept system is presented which is within the reach of current experiments.

  3. Feature-based handling of surface faults in compact disc players

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Stoustrup, Jakob; Andersen, Palle

    2006-01-01

    In this paper a novel method called feature-based control is presented. The method is designed to improve compact disc players’ handling of surface faults on the discs. The method is based on a fault-tolerant control scheme, which uses extracted features of the surface faults to remove those from...... the detector signals used for control during the occurrence of surface faults. The extracted features are coefficients of Karhunen–Loève approximations of the surface faults. The performance of the feature-based control scheme controlling compact disc players playing discs with surface faults has been...... validated experimentally. The proposed scheme reduces the control errors due to the surface faults, and in some cases where the standard fault handling scheme fails, our scheme keeps the CD-player playing....
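
    In discrete form, a Karhunen-Loève approximation of recorded fault signatures amounts to a principal-component basis. The sketch below, with entirely simulated signals and illustrative names, shows how such a basis could be learned from a library of fault records and used to estimate and remove a fault component from a detector signal; it is a schematic of the idea, not the authors' controller.

        import numpy as np

        rng = np.random.default_rng(1)

        # Library of recorded surface-fault signatures (rows: faults, cols: samples),
        # simulated here as smooth bumps of varying width and amplitude.
        t = np.linspace(0, 1, 200)
        library = np.array([a * np.exp(-((t - 0.5) / w) ** 2)
                            for a, w in zip(rng.uniform(0.5, 2, 40),
                                            rng.uniform(0.05, 0.2, 40))])

        # Karhunen-Loeve basis = leading singular vectors of the centered library.
        mean_sig = library.mean(axis=0)
        U, S, Vt = np.linalg.svd(library - mean_sig, full_matrices=False)
        basis = Vt[:3]                       # keep the 3 dominant KL modes

        # New detector signal = true servo error + an unseen surface fault.
        servo = 0.02 * np.sin(2 * np.pi * 5 * t)
        fault = 1.3 * np.exp(-((t - 0.5) / 0.12) ** 2)
        detector = servo + fault

        # Project onto the KL modes to estimate the fault, then subtract it.
        coeff = basis @ (detector - mean_sig)
        fault_hat = mean_sig + basis.T @ coeff
        cleaned = detector - fault_hat
        print("fault energy before/after removal: %.4f -> %.4f"
              % (np.sum(fault ** 2), np.sum((fault - fault_hat) ** 2)))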

  4. Medication errors detected in non-traditional databases

    DEFF Research Database (Denmark)

    Perregaard, Helene; Aronson, Jeffrey K; Dalhoff, Kim

    2015-01-01

    AIMS: We have looked for medication errors involving the use of low-dose methotrexate, by extracting information from Danish sources other than traditional pharmacovigilance databases. We used the data to establish the relative frequencies of different types of errors. METHODS: We searched four...... errors, whereas knowledge-based errors more often resulted in near misses. CONCLUSIONS: The medication errors in this survey were most often action-based (50%) and knowledge-based (34%), suggesting that greater attention should be paid to education and surveillance of medical personnel who prescribe...

  5. Fractional-order adaptive fault estimation for a class of nonlinear fractional-order systems

    KAUST Repository

    N'Doye, Ibrahima; Laleg-Kirati, Taous-Meriem

    2015-01-01

    This paper studies the problem of fractional-order adaptive fault estimation for a class of fractional-order Lipschitz nonlinear systems using fractional-order adaptive fault observer. Sufficient conditions for the asymptotical convergence of the fractional-order state estimation error, the conventional integer-order and the fractional-order faults estimation error are derived in terms of linear matrix inequalities (LMIs) formulation by introducing a continuous frequency distributed equivalent model and using an indirect Lyapunov approach where the fractional-order α belongs to 0 < α < 1. A numerical example is given to demonstrate the validity of the proposed approach.

  6. SIFT - Design and analysis of a fault-tolerant computer for aircraft control. [Software Implemented Fault Tolerant systems

    Science.gov (United States)

    Wensley, J. H.; Lamport, L.; Goldberg, J.; Green, M. W.; Levitt, K. N.; Melliar-Smith, P. M.; Shostak, R. E.; Weinstock, C. B.

    1978-01-01

    SIFT (Software Implemented Fault Tolerance) is an ultrareliable computer for critical aircraft control applications that achieves fault tolerance by the replication of tasks among processing units. The main processing units are off-the-shelf minicomputers, with standard microcomputers serving as the interface to the I/O system. Fault isolation is achieved by using a specially designed redundant bus system to interconnect the processing units. Error detection and analysis and system reconfiguration are performed by software. Iterative tasks are redundantly executed, and the results of each iteration are voted upon before being used. Thus, any single failure in a processing unit or bus can be tolerated with triplication of tasks, and subsequent failures can be tolerated after reconfiguration. Independent execution by separate processors means that the processors need only be loosely synchronized, and a novel fault-tolerant synchronization method is described.
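
    SIFT's central mechanism, independent redundant execution with a vote on each iteration's results, is easy to sketch. The toy functions below (hypothetical names, plain Python rather than SIFT's actual software) show how a single corrupted replica is masked by the vote and simultaneously identified for reconfiguration.

        from collections import Counter

        def vote(results):
            """Majority-vote the replicated results of one task iteration.
            Returns (value, disagreeing_replicas) so that disagreement can
            drive error detection and subsequent reconfiguration."""
            value, _ = Counter(results).most_common(1)[0]
            suspects = [i for i, r in enumerate(results) if r != value]
            return value, suspects

        def run_replicated(task, inputs, replicas, faulty=()):
            # Each replica executes the same task independently; replicas
            # listed in `faulty` return a corrupted result.
            results = [task(inputs) + (1 if i in faulty else 0)
                       for i in range(replicas)]
            return vote(results)

        task = lambda x: x * x
        value, suspects = run_replicated(task, 7, replicas=3, faulty={1})
        print(value, suspects)   # 49, [1] -> single failure masked and identified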

  7. Development of methods for evaluating active faults

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-08-15

    The report on long-term evaluation of active faults was published by the Headquarters for Earthquake Research Promotion in November 2010. After the 2011 Tohoku-oki earthquake, the safety review guide on the geology and ground of sites was revised by the Nuclear Safety Commission in March 2012 to incorporate scientific knowledge gained from the earthquake. The Nuclear Regulation Authority, established in September 2012, is now drafting the New Safety Design Standard related to Earthquakes and Tsunamis of Light Water Nuclear Power Reactor Facilities. With respect to these guides and standards, our investigations toward developing methods for evaluating active faults are as follows: (1) For better evaluation of offshore fault activity, we proposed a workflow for dating marine terraces (an indicator of offshore fault activity) over the last 400,000 years; we also developed an analysis of fault-related folding for evaluating blind faults. (2) To clarify the activity of active faults lacking overlying strata, we carried out color analysis of fault gouge and classified activity into timescales of thousands versus tens of thousands of years. (3) To reduce uncertainties in fault activity and earthquake frequency, we compiled the survey data and their possible errors. (4) To improve seismic hazard analysis, we compiled the activity of the Yunotake and Itozawa faults, induced by the 2011 Tohoku-oki earthquake. (author)

  8. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)

  9. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft errors effect in advanced electronics, including the fundamental physical mechanisms of radiation induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft error at various levels (including physical, electrical, netlist, event driven, RTL, and system level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing, s

  10. A SAFE approach towards early design space exploration of Fault-tolerant multimedia MPSoCs

    NARCIS (Netherlands)

    van Stralen, P.; Pimentel, A.

    2012-01-01

    With the reduction in feature size, transient errors start to play an important role in modern embedded systems. It is therefore important to make fault-tolerance a first-class citizen in embedded system design. Fault-tolerance patterns are techniques to make an application fault-tolerant. Not only

  11. A simulation of the San Andreas fault experiment

    Science.gov (United States)

    Agreen, R. W.; Smith, D. E.

    1974-01-01

    The San Andreas fault experiment (SAFE), which employs two laser tracking systems for measuring the relative motion of two points on opposite sides of the fault, has been simulated for an 8-yr observation period. The two tracking stations are located near San Diego on the western side of the fault and near Quincy on the eastern side; they are roughly 900 km apart. Both will simultaneously track laser-reflector-equipped satellites as they pass near the stations. Tracking of the Beacon Explorer C spacecraft has been simulated for these two stations during August and September for 8 consecutive years. An error analysis of the recovery of the relative location of Quincy from the data has been made, allowing for model errors in the mass of the earth, the gravity field, solar radiation pressure, atmospheric drag, errors in the position of the San Diego site, and biases and noise in the laser systems. The results of this simulation indicate that the distance of Quincy from San Diego will be determined each year with a precision of about 10 cm. Projected improvements in these model parameters and in the laser systems over the next few years will bring the precision to about 1-2 cm by 1980.

  12. Adaptive Fault-Tolerant Synchronization Control of a Class of Complex Dynamical Networks With General Input Distribution Matrices and Actuator Faults.

    Science.gov (United States)

    Li, Xiao-Jian; Yang, Guang-Hong

    2017-03-01

    This paper is concerned with the problem of adaptive fault-tolerant synchronization control of a class of complex dynamical networks (CDNs) with actuator faults and unknown coupling weights. The considered input distribution matrix is assumed to be an arbitrary matrix, instead of a unit one. Within this framework, an adaptive fault-tolerant controller is designed to achieve synchronization for the CDN. Moreover, a convex combination technique and an important graph theory result are developed, such that the rigorous convergence analysis of synchronization errors can be conducted. In particular, it is shown that the proposed fault-tolerant synchronization control approach is valid for the CDN with both time-invariant and time-varying coupling weights. Finally, two simulation examples are provided to validate the effectiveness of the theoretical results.

  13. Real-time fault tolerant full adder design for critical applications

    Directory of Open Access Journals (Sweden)

    Pankaj Kumar

    2016-09-01

    Full Text Available In complex computing systems, processing units are built from ever-smaller devices that are sensitive to transient faults. A transient fault in a circuit is caused by electromagnetic noise, cosmic rays, crosstalk or power-supply noise, and such faults are very difficult to detect during offline testing. Hence, an area-efficient fault-tolerant full adder is proposed for detecting and repairing transient and permanent faults occurring on single and multiple nets. The design incurs much lower hardware overhead than traditional hardened architectures and provides higher error detection and correction efficiency than existing designs.
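
    The paper's specific adder circuit is not reproduced here, but the general pattern it builds on, redundant computation with voting and disagreement detection, can be sketched as a small software model. The triplicated full adder below, with an injected single-net fault, is purely illustrative.

        def full_adder(a, b, cin):
            s = a ^ b ^ cin
            cout = (a & b) | (cin & (a ^ b))
            return s, cout

        def tmr_full_adder(a, b, cin, fault_replica=None):
            """Triplicated full adder with bit-wise majority voting.
            `fault_replica`, if given, has its sum output inverted to model
            a transient fault on a single net."""
            outs = []
            for i in range(3):
                s, c = full_adder(a, b, cin)
                if i == fault_replica:
                    s ^= 1                      # injected single-event upset
                outs.append((s, c))
            maj = lambda x, y, z: (x & y) | (y & z) | (x & z)
            s = maj(*(o[0] for o in outs))
            c = maj(*(o[1] for o in outs))
            error_detected = any(o != (s, c) for o in outs)
            return s, c, error_detected

        print(tmr_full_adder(1, 1, 0, fault_replica=1))
        # (0, 1, True): the fault is masked by the vote and flagged for repair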

  14. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  15. Risk Management and the Concept of Human Error

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1995-01-01

    by a stochastic coincidence of faults and human errors, but by a systemic erosion of the defenses due to decision making under competitive pressure in a dynamic environment. The presentation will discuss the nature of human error and the risk management problems found in a dynamic, competitive society facing...

  16. Fault Estimation for Fuzzy Delay Systems: A Minimum Norm Least Squares Solution Approach.

    Science.gov (United States)

    Huang, Sheng-Juan; Yang, Guang-Hong

    2017-09-01

    This paper mainly focuses on the problem of fault estimation for a class of Takagi-Sugeno fuzzy systems with state delays. A minimum norm least squares solution (MNLSS) approach is first introduced to establish a fault estimation compensator, which is able to optimize the fault estimator. Compared with most of the existing fault estimation methods, the MNLSS-based fault estimation method can effectively decrease the effect of state errors on the accuracy of fault estimation. Finally, three examples are given to illustrate the effectiveness and merits of the proposed method.
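
    For a linear snapshot of the problem, the minimum norm least squares solution is exactly what the Moore-Penrose pseudoinverse returns when the fault-estimation equations are underdetermined. A generic illustration follows (toy matrices, unrelated to the paper's fuzzy delay system design).

        import numpy as np

        # Toy static fault-estimation problem: residual r = H f + noise, with more
        # fault channels than residual measurements, so H f = r is underdetermined
        # and has infinitely many least-squares solutions.
        rng = np.random.default_rng(2)
        H = rng.standard_normal((3, 5))          # 3 residuals, 5 candidate fault signals
        f_true = np.array([0.0, 1.5, 0.0, -0.7, 0.0])
        r = H @ f_true + 0.01 * rng.standard_normal(3)

        # np.linalg.pinv gives the minimum norm least squares solution H^+ r:
        # among all f minimizing ||H f - r||, it returns the one of smallest ||f||.
        f_mnls = np.linalg.pinv(H) @ r
        print(np.round(f_mnls, 3), "||f|| =", round(np.linalg.norm(f_mnls), 3))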

  17. EXPERIMENT BASED FAULT DIAGNOSIS ON BOTTLE FILLING PLANT WITH LVQ ARTIFICIAL NEURAL NETWORK ALGORITHM

    Directory of Open Access Journals (Sweden)

    Mustafa DEMETGÜL

    2008-01-01

    Full Text Available In this study, an artificial neural network (ANN) is developed to find errors rapidly in a pneumatic system and to protect the system against failure. Faults in the experimental bottle filling plant can be identified without interference, using analog values taken from pressure sensors and linear potentiometers placed at different points of the plant. The network diagnoses the following plant faults: no bottle present, cap-closing cylinder B not working, cap-closing cylinder C not working, insufficient air pressure, water not filling, and low air pressure. The faults are diagnosed by an ANN with the learning vector quantization (LVQ) algorithm. While a failure could also be found using conventional programming or a PLC, the advantage of the ANN is that it indicates where the fault is, and the approach can be applied to other systems. Fault data from the pneumatic system are collected by a data acquisition card. It is observed that the algorithm is capable of serving many industrial plants that contain mechatronic systems.
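
    LVQ itself is a simple prototype-based classifier. Below is a minimal LVQ1 implementation on synthetic "sensor snapshot" data, assuming four analog channels and three fault classes; it illustrates the algorithm family the study uses, not the study's trained network.

        import numpy as np

        def lvq1_train(X, y, n_proto_per_class=2, lr=0.05, epochs=30, seed=0):
            """Minimal LVQ1: the winning prototype moves toward samples of its
            own class and away from samples of other classes."""
            rng = np.random.default_rng(seed)
            classes = np.unique(y)
            protos = np.vstack([X[y == c][rng.choice((y == c).sum(),
                                                     n_proto_per_class,
                                                     replace=False)]
                                for c in classes])
            labels = np.repeat(classes, n_proto_per_class)
            for _ in range(epochs):
                for i in rng.permutation(len(X)):
                    k = np.argmin(np.linalg.norm(protos - X[i], axis=1))
                    sign = 1.0 if labels[k] == y[i] else -1.0
                    protos[k] += sign * lr * (X[i] - protos[k])
            return protos, labels

        def lvq1_predict(protos, labels, X):
            d = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=2)
            return labels[np.argmin(d, axis=1)]

        # Synthetic "sensor snapshots": 4 analog channels, 3 fault classes.
        rng = np.random.default_rng(1)
        centers = rng.uniform(0, 5, (3, 4))
        X = np.vstack([c + 0.3 * rng.standard_normal((50, 4)) for c in centers])
        y = np.repeat([0, 1, 2], 50)

        protos, labels = lvq1_train(X, y)
        print("training accuracy:", (lvq1_predict(protos, labels, X) == y).mean())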

  18. STEM - software test and evaluation methods: fault detection using static analysis techniques

    International Nuclear Information System (INIS)

    Bishop, P.G.; Esp, D.G.

    1988-08-01

    STEM is a software reliability project with the objective of evaluating a number of fault detection and fault estimation methods which can be applied to high integrity software. This Report gives some interim results of applying both manual and computer-based static analysis techniques, in particular SPADE, to an early CERL version of the PODS software containing known faults. The main results of this study are that: The scope for thorough verification is determined by the quality of the design documentation; documentation defects become especially apparent when verification is attempted. For well-defined software, the thoroughness of SPADE-assisted verification for detecting a large class of faults was successfully demonstrated. For imprecisely-defined software (not recommended for high-integrity systems) the use of tools such as SPADE is difficult and inappropriate. Analysis and verification tools are helpful, through their reliability and thoroughness. However, they are designed to assist, not replace, a human in validating software. Manual inspection can still reveal errors (such as errors in specification and errors of transcription of systems constants) which current tools cannot detect. There is a need for tools to automatically detect typographical errors in system constants, for example by reporting outliers to patterns. To obtain the maximum benefit from advanced tools, they should be applied during software development (when verification problems can be detected and corrected) rather than retrospectively. (author)

  19. Fuzzy Inference System Approach for Locating Series, Shunt, and Simultaneous Series-Shunt Faults in Double Circuit Transmission Lines.

    Science.gov (United States)

    Swetapadma, Aleena; Yadav, Anamika

    2015-01-01

    Many schemes have been reported for shunt fault location estimation, but fault location estimation for series (open conductor) faults has not been dealt with so far. Existing numerical relays only detect an open conductor (series) fault and indicate the faulty phase(s); they are unable to locate the series fault, so the repair crew must patrol the complete line to find it. In this paper, fuzzy-based fault detection/classification and location schemes in the time domain are proposed for series faults, shunt faults, and simultaneous series and shunt faults. The fault simulation studies and the fault location algorithm have been developed using Matlab/Simulink. Synchronized voltage and current phasors from both ends of the line are used as input to the proposed fuzzy-based fault location scheme. The percentage error in fault location is within 1% for series faults and within 5% for shunt faults across all tested fault cases. The location error percentages are validated using a chi-square test at both the 1% and 5% levels of significance.

  1. Periodic Application of Concurrent Error Detection in Processor Array Architectures. PhD. Thesis -

    Science.gov (United States)

    Chen, Paul Peichuan

    1993-01-01

    Processor arrays can provide an attractive architecture for some applications. Featuring modularity, regular interconnection and high parallelism, such arrays are well-suited for VLSI/WSI implementations, and applications with high computational requirements, such as real-time signal processing. Preserving the integrity of results can be of paramount importance for certain applications. In these cases, fault tolerance should be used to ensure reliable delivery of a system's service. One aspect of fault tolerance is the detection of errors caused by faults. Concurrent error detection (CED) techniques offer the advantage that transient and intermittent faults may be detected with greater probability than with off-line diagnostic tests. Applying time-redundant CED techniques can reduce hardware redundancy costs. However, most time-redundant CED techniques degrade a system's performance.

  2. Data-Driven Method for Wind Turbine Yaw Angle Sensor Zero-Point Shifting Fault Detection

    Directory of Open Access Journals (Sweden)

    Yan Pei

    2018-03-01

    Full Text Available Wind turbine yaw control plays an important role in increasing the wind turbine production and also in protecting the wind turbine. Accurate measurement of yaw angle is the basis of an effective wind turbine yaw controller. The accuracy of yaw angle measurement is affected significantly by the problem of zero-point shifting. Hence, it is essential to evaluate the zero-point shifting error on wind turbines on-line in order to improve the reliability of yaw angle measurement in real time. Particularly, qualitative evaluation of the zero-point shifting error could be useful for wind farm operators to realize prompt and cost-effective maintenance on yaw angle sensors. In the aim of qualitatively evaluating the zero-point shifting error, the yaw angle sensor zero-point shifting fault is firstly defined in this paper. A data-driven method is then proposed to detect the zero-point shifting fault based on Supervisory Control and Data Acquisition (SCADA data. The zero-point shifting fault is detected in the proposed method by analyzing the power performance under different yaw angles. The SCADA data are partitioned into different bins according to both wind speed and yaw angle in order to deeply evaluate the power performance. An indicator is proposed in this method for power performance evaluation under each yaw angle. The yaw angle with the largest indicator is considered as the yaw angle measurement error in our work. A zero-point shifting fault would trigger an alarm if the error is larger than a predefined threshold. Case studies from several actual wind farms proved the effectiveness of the proposed method in detecting zero-point shifting fault and also in improving the wind turbine performance. Results of the proposed method could be useful for wind farm operators to realize prompt adjustment if there exists a large error of yaw angle measurement.
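
    The binning-and-indicator logic described above can be sketched in a few lines. In the toy example below, SCADA-like data are simulated with a +4 degree zero-point shift, binned by wind speed and reported yaw angle, and the yaw angle whose normalized power indicator is largest is taken as the shift estimate; the column names, bin edges and alarm threshold are all assumptions, not the paper's settings.

        import numpy as np
        import pandas as pd

        # Toy SCADA frame: the turbine actually performs best at yaw = +4 deg,
        # emulating a +4 deg zero-point shift of the yaw angle sensor.
        rng = np.random.default_rng(3)
        n = 20000
        wind = rng.uniform(4, 12, n)
        yaw = rng.uniform(-10, 10, n).round()          # reported misalignment (deg)
        power = wind ** 3 * np.cos(np.radians(yaw - 4)) ** 3 + rng.normal(0, 5, n)
        df = pd.DataFrame({"wind": wind, "yaw": yaw, "power": power})

        # Bin by wind speed and yaw angle, average power per bin, then use mean
        # normalized power across wind bins as the per-yaw-angle indicator.
        df["wind_bin"] = pd.cut(df["wind"], bins=np.arange(4, 13, 1))
        table = df.groupby(["wind_bin", "yaw"], observed=True)["power"].mean().unstack()
        indicator = table.div(table.max(axis=1), axis=0).mean(axis=0)

        shift = indicator.idxmax()
        print("estimated zero-point shift: %+.0f deg" % shift)
        if abs(shift) > 2.0:                           # illustrative threshold
            print("zero-point shifting fault alarm")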

  3. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan, E-mail: liushuhuan@mail.xjtu.edu.cn; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-21

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate a system-on-chip's reliability against soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC, and the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Through qualitative and quantitative fault tree analysis of the system-on-chip, the critical blocks were identified and the system reliability was evaluated.
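
    The reliability figures such tools report follow from standard constant-failure-rate formulas, which are easy to reproduce. The block names and numbers below are illustrative placeholders, not measured Zynq-7010 data.

        import math

        # Constant-failure-rate bookkeeping of the kind produced by fault tree
        # tools (illustrative numbers only).
        lambda_per_hr = {"PS_cpu": 2e-7, "PL_fabric": 5e-7, "OCM": 1e-7}

        # Series system (any block failing fails the system): rates add.
        lam_sys = sum(lambda_per_hr.values())
        mttf = 1.0 / lam_sys                            # mean time to failure

        # Steady-state unavailability with repair rate mu: q = lambda/(lambda+mu)
        mu = 1.0 / 24.0                                 # 24 h mean repair time
        q = lam_sys / (lam_sys + mu)

        mission = 1000.0                                # mission time (hours)
        print("system failure rate: %.2e /h" % lam_sys)
        print("MTTF: %.2e h" % mttf)
        print("unavailability: %.2e" % q)
        print("P(failure within %g h): %.2e" % (mission, 1 - math.exp(-lam_sys * mission)))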

  4. Simultaneous Robust Fault and State Estimation for Linear Discrete-Time Uncertain Systems

    Directory of Open Access Journals (Sweden)

    Feten Gannouni

    2017-01-01

    Full Text Available We consider the problem of robust simultaneous fault and state estimation for linear uncertain discrete-time systems with unknown faults which affect both the state and the observation matrices. Using transformation of the original system, a new robust proportional integral filter (RPIF having an error variance with an optimized guaranteed upper bound for any allowed uncertainty is proposed to improve robust estimation of unknown time-varying faults and to improve robustness against uncertainties. In this study, the minimization problem of the upper bound of the estimation error variance is formulated as a convex optimization problem subject to linear matrix inequalities (LMI for all admissible uncertainties. The proportional and the integral gains are optimally chosen by solving the convex optimization problem. Simulation results are given in order to illustrate the performance of the proposed filter, in particular to solve the problem of joint fault and state estimation.

  5. Task errors by emergency physicians are associated with interruptions, multitasking, fatigue and working memory capacity: a prospective, direct observation study.

    Science.gov (United States)

    Westbrook, Johanna I; Raban, Magdalena Z; Walter, Scott R; Douglas, Heather

    2018-01-09

    Interruptions and multitasking have been demonstrated in experimental studies to reduce individuals' task performance. These behaviours are frequently used by clinicians in high-workload, dynamic clinical environments, yet their effects have rarely been studied. To assess the relative contributions of interruptions and multitasking by emergency physicians to prescribing errors. 36 emergency physicians were shadowed over 120 hours. All tasks, interruptions and instances of multitasking were recorded. Physicians' working memory capacity (WMC) and preference for multitasking were assessed using the Operation Span Task (OSPAN) and Inventory of Polychronic Values. Following observation, physicians were asked about their sleep in the previous 24 hours. Prescribing errors were used as a measure of task performance. We performed multivariate analysis of prescribing error rates to determine associations with interruptions and multitasking, also considering physician seniority, age, psychometric measures, workload and sleep. Physicians experienced 7.9 interruptions/hour. 28 clinicians were observed prescribing 239 medication orders which contained 208 prescribing errors. While prescribing, clinicians were interrupted 9.4 times/hour. Error rates increased significantly if physicians were interrupted (rate ratio (RR) 2.82; 95% CI 1.23 to 6.49) or multitasked (RR 1.86; 95% CI 1.35 to 2.56) while prescribing. Having below-average sleep showed a >15-fold increase in clinical error rate (RR 16.44; 95% CI 4.84 to 55.81). WMC was protective against errors; for every 10-point increase on the 75-point OSPAN, a 19% decrease in prescribing errors was observed. There was no effect of polychronicity, workload, physician gender or above-average sleep on error rates. Interruptions, multitasking and poor sleep were associated with significantly increased rates of prescribing errors among emergency physicians. WMC mitigated the negative influence of these factors to an extent. These

  6. Using UAVSAR to Estimate Creep Along the Superstition Hills Fault, Southern California

    Science.gov (United States)

    Donnellan, A.; Parker, J. W.; Pierce, M.; Wang, J.

    2012-12-01

    UAVSAR data were first acquired over the Salton Trough region, just north of the Mexican border in October 2009. Second passes of data were acquired on 12 and 13 April 2010, about one week following the 5 April 2010 M 7.2 El Mayor - Cucapah earthquake. The earthquake resulted in creep on several faults north of the main rupture, including the Yuha, Imperial, and Superstition Hills faults. The UAVSAR platform acquires data about every six meters in swaths about 15 km wide. Tropospheric effects and residual aircraft motion contribute to error in the estimation of surface deformation in the Repeat Pass Interferometry products. The Superstition Hills fault shows clearly in the associated radar interferogram; however, error in the data product makes it difficult to infer deformation from long profiles that cross the fault. Using the QuakeSim InSAR Profile tool we extracted line of site profiles on either side of the fault delineated in the interferogram. We were able to remove much of the correlated error by differencing profiles 250 m on either side of the fault. The result shows right-lateral creep of 1.5±.4 mm along the northern 7 km of the fault in the interferogram. The amount of creep abruptly changes to 8.4±.4 mm of right lateral creep along at least 9 km of the fault covered in the image to the south. The transition occurs within less than 100 m along the fault. We also extracted 2 km long line of site profiles perpendicular to this section of the fault. Averaging these profiles shows a step across the fault of 14.9±.3 mm with greater creep on the order of 20 mm on the northern two profiles and lower creep of about 10 mm on the southern two profiles. Nearby GPS stations P503 and P493 are consistent with this result. They also confirm that the creep event occurred at the time of the El Mayor - Cucapah earthquake. By removing regional deformation resulting from the main rupture we were able to invert for the depth of creep from the surface. Results indicate
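
    The error-cancelling trick, differencing line-of-sight profiles extracted a fixed distance on either side of the fault trace so that correlated tropospheric and aircraft-motion errors subtract out while the creep step survives, can be mimicked on synthetic data; all signal and noise values below are invented for illustration.

        import numpy as np

        # Toy line-of-sight (LOS) displacement profiles around a creeping fault,
        # with correlated "tropospheric" noise shared by both sides.
        rng = np.random.default_rng(4)
        along = np.linspace(0, 16e3, 160)                 # along-fault coordinate (m)
        creep = np.where(along < 7e3, 1.5e-3, 8.4e-3)     # LOS step across fault (m)
        tropo = np.cumsum(rng.normal(0, 2e-4, 160))       # long-wavelength error

        # Profiles 250 m on either side of the trace: the correlated error
        # appears on both, the creep signal only on one side.
        west = tropo + rng.normal(0, 1e-4, 160)
        east = tropo + creep + rng.normal(0, 1e-4, 160)

        offset = east - west                              # differencing removes tropo
        north = offset[along < 7e3].mean()
        south = offset[along >= 7e3].mean()
        print("creep north: %.1f mm, south: %.1f mm" % (north * 1e3, south * 1e3))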

  7. An Intelligent Actuator Fault Reconstruction Scheme for Robotic Manipulators.

    Science.gov (United States)

    Xiao, Bing; Yin, Shen

    2018-02-01

    This paper investigates the difficult problem of reconstructing actuator faults for robotic manipulators. An intelligent approach with a fast reconstruction property is developed using an observer technique. The scheme is capable of precisely reconstructing the actual actuator fault; Lyapunov stability analysis shows that the reconstruction error converges to zero in finite time, so the reconstruction is both precise and fast. The most important feature of the scheme is that it does not depend on the control law, the actuator's dynamic model, the type of fault, or its time profile. This reconstruction performance and the capability of the proposed approach are further validated by simulation and experimental results.

  8. An advanced SEU tolerant latch based on error detection

    Science.gov (United States)

    Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao

    2018-05-01

    This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state. The error detection circuit can detect the upset node in the latch, and the faulty output is then corrected; an upset node in the error detection circuit itself is corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).

  9. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2±0.2) × 10^(-6). For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431±0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i

  10. The Combined Application of Fault Trees and Turbine Cycle Simulation in Generation Risk Assessment

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Park, Jin Kyun

    2009-01-01

    The paper describes a few ideas developed for a framework to quantify human errors taking place during test and maintenance (T and M) in the secondary system of nuclear power plants, which was presented at the previous meeting. GRA-HRE (Generation Risk Assessment for Human Related Events) is composed of four essential components: the human error interpreter, the frequency estimator, the risk estimator, and the derate estimator. The proposed GRA emphasizes explicitly considering human errors, performing fault tree analysis that includes the entire balance-of-plant side, and quantifying electric loss under abnormal plant configurations. Regarding the consideration of human errors, it was hard to distinguish the effects of human errors from other failure modes in conventional GRA, because human errors were implicitly folded into mechanical failure modes. Since the risk estimator in GRA-HRE separately deals with basic events representing human error modes such as control failure, wrong object, omission, and wrong action, their relative importance can be recognized in comparison with other types of mechanical failures. Other distinctive features of GRA-HRE come from the combined application of fault tree analysis and turbine cycle simulation. The previous study suggested using fault tree analysis with top events designated by system malfunctions such as 'feedwater system failure' to develop the risk estimator. However, this approach could not clearly reveal the propagation paths of human errors, and in some cases it was difficult to express the failure logic. To overcome these bottlenecks, the paper proposes a modified idea for setting up top events and explains how turbine cycle simulation can be used, in a cooperative manner, to complete the fault trees.

  11. Medication Administration Errors Involving Paediatric In-Patients in a ...

    African Journals Online (AJOL)

    Erah

    In-Patients in a Hospital in Ethiopia. Yemisirach Feleke ... Purpose: To assess the type and frequency of medication administration errors (MAEs) in the paediatric ward of .... prescribers, does not go beyond obeying ... specialists, 43 general practitioners, 2 health officers ..... Medication Errors, International Council of Nurses.

  12. Modeling and Measurement Constraints in Fault Diagnostics for HVAC Systems

    Energy Technology Data Exchange (ETDEWEB)

    Najafi, Massieh; Auslander, David M.; Bartlett, Peter L.; Haves, Philip; Sohn, Michael D.

    2010-05-30

    Many studies have shown that energy savings of five to fifteen percent are achievable in commercial buildings by detecting and correcting building faults, and optimizing building control systems. However, in spite of good progress in developing tools for determining HVAC diagnostics, methods to detect faults in HVAC systems are still generally undeveloped. Most approaches use numerical filtering or parameter estimation methods to compare data from energy meters and building sensors to predictions from mathematical or statistical models. They are effective when models are relatively accurate and data contain few errors. In this paper, we address the case where models are imperfect and data are variable, uncertain, and can contain error. We apply a Bayesian updating approach that is systematic in managing and accounting for most forms of model and data errors. The proposed method uses both knowledge of first principle modeling and empirical results to analyze the system performance within the boundaries defined by practical constraints. We demonstrate the approach by detecting faults in commercial building air handling units. We find that the limitations that exist in air handling unit diagnostics due to practical constraints can generally be effectively addressed through the proposed approach.
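
    The Bayesian updating step can be sketched for the simplest case of a binary fault hypothesis and a scalar model residual. Everything numeric below (residual distributions, prior, sample data) is an assumption for illustration, not the authors' calibrated model.

        import numpy as np

        # Bayesian updating for a binary AHU fault (e.g. a stuck cooling valve),
        # comparing sensor data against an imperfect model prediction.
        rng = np.random.default_rng(5)

        # Model residual (measured minus predicted supply-air temperature, degC):
        # healthy ~ N(0, 1.0) from model/data error, faulty ~ N(3, 1.5).
        def likelihood(residual, faulty):
            mu, sd = (3.0, 1.5) if faulty else (0.0, 1.0)
            return np.exp(-0.5 * ((residual - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

        prior_fault = 0.05
        residuals = rng.normal(3.0, 1.5, size=12)      # data from a faulty unit

        p_fault = prior_fault
        for r in residuals:                            # sequential Bayes update
            num = likelihood(r, True) * p_fault
            den = num + likelihood(r, False) * (1 - p_fault)
            p_fault = num / den
            print("residual %+5.2f -> P(fault) = %.3f" % (r, p_fault))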

  13. Approximate dynamic fault tree calculations for modelling water supply risks

    International Nuclear Information System (INIS)

    Lindhe, Andreas; Norberg, Tommy; Rosén, Lars

    2012-01-01

    Traditional fault tree analysis is not always sufficient when analysing complex systems. To overcome the limitations dynamic fault tree (DFT) analysis is suggested in the literature as well as different approaches for how to solve DFTs. For added value in fault tree analysis, approximate DFT calculations based on a Markovian approach are presented and evaluated here. The approximate DFT calculations are performed using standard Monte Carlo simulations and do not require simulations of the full Markov models, which simplifies model building and in particular calculations. It is shown how to extend the calculations of the traditional OR- and AND-gates, so that information is available on the failure probability, the failure rate and the mean downtime at all levels in the fault tree. Two additional logic gates are presented that make it possible to model a system's ability to compensate for failures. This work was initiated to enable correct analyses of water supply risks. Drinking water systems are typically complex with an inherent ability to compensate for failures that is not easily modelled using traditional logic gates. The approximate DFT calculations are compared to results from simulations of the corresponding Markov models for three water supply examples. For the traditional OR- and AND-gates, and one gate modelling compensation, the errors in the results are small. For the other gate modelling compensation, the error increases with the number of compensating components. The errors are, however, in most cases acceptable with respect to uncertainties in input data. The approximate DFT calculations improve the capabilities of fault tree analysis of drinking water systems since they provide additional and important information and are simple and practically applicable.
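
    The approximate calculations rest on standard Monte Carlo evaluation of the gates. For the static OR/AND part, the idea reduces to a few lines; the failure probabilities and the small example tree below are invented, not the paper's water-supply models.

        import numpy as np

        # Monte Carlo estimate of top-event probability for a small tree:
        #   TOP = AND(source_failure, OR(treatment_failure, reservoir_empty))
        rng = np.random.default_rng(6)
        n = 200_000
        p = {"source": 0.02, "treatment": 0.05, "reservoir": 0.01}

        source = rng.random(n) < p["source"]
        treatment = rng.random(n) < p["treatment"]
        reservoir = rng.random(n) < p["reservoir"]

        top = source & (treatment | reservoir)         # AND over an OR gate
        est = top.mean()
        exact = p["source"] * (1 - (1 - p["treatment"]) * (1 - p["reservoir"]))
        print("MC estimate %.5f vs analytic %.5f" % (est, exact))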

  14. TREDRA, Minimal Cut Sets Fault Tree Plot Program

    International Nuclear Information System (INIS)

    Fussell, J.B.

    1983-01-01

    1 - Description of problem or function: TREDRA is a computer program for drafting report-quality fault trees. The input to TREDRA is similar to input for standard computer programs that find minimal cut sets from fault trees. Output includes fault tree plots containing all standard fault tree logic and event symbols, gate and event labels, and an output description for each event in the fault tree. TREDRA contains the following features: a variety of program options that allow flexibility in the program output; capability for automatic pagination of the output fault tree, when necessary; input groups which allow labeling of gates, events, and their output descriptions; a symbol library which includes standard fault tree symbols plus several less frequently used symbols; user control of character size and overall plot size; and extensive input error checking and diagnostic oriented output. 2 - Method of solution: Fault trees are generated by user-supplied control parameters and a coded description of the fault tree structure consisting of the name of each gate, the gate type, the number of inputs to the gate, and the names of these inputs. 3 - Restrictions on the complexity of the problem: TREDRA can produce fault trees with a minimum of 3 and a maximum of 56 levels. The width of each level may range from 3 to 37. A total of 50 transfers is allowed during pagination

  15. Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses.

    Science.gov (United States)

    Baldwin, Abigail; Rodriguez, Elizabeth S

    2016-02-01

    The prevalence of medication errors associated with chemotherapy administration is not precisely known. Little evidence exists concerning the extent or nature of errors; however, some evidence demonstrates that errors are related to prescribing. This article demonstrates how the review of chemotherapy orders by a designated nurse known as a verification nurse (VN) at a National Cancer Institute-designated comprehensive cancer center helps to identify prescribing errors that may prevent chemotherapy administration mistakes and improve patient safety in outpatient infusion units. This article will describe the role of the VN and details of the verification process. To identify benefits of the VN role, a retrospective review and analysis of chemotherapy near-miss events from 2009-2014 was performed. A total of 4,282 events related to chemotherapy were entered into the Reporting to Improve Safety and Quality system. A majority of the events were categorized as near-miss events, or those that, because of chance, did not result in patient injury, and were identified at the point of prescribing.

  16. Sensor fault detection and recovery in satellite attitude control

    Science.gov (United States)

    Nasrolahi, Seiied Saeed; Abdollahi, Farzaneh

    2018-04-01

    This paper proposes an integrated sensor fault detection and recovery scheme for the satellite attitude control system. By introducing a nonlinear observer, healthy sensor measurements are provided. Considering the attitude dynamics and kinematics, a novel observer is developed to detect faults in the angular rate and attitude sensors, individually or simultaneously, with no limit on the type or configuration of the attitude sensors. By designing a state-feedback-based control signal and applying a Lyapunov stability criterion, uniform ultimate boundedness of the tracking errors in the presence of sensor faults is guaranteed. Finally, simulation results are presented to illustrate the performance of the integrated scheme.

  17. Measurement and analysis of workload effects on fault latency in real-time systems

    Science.gov (United States)

    Woodbury, Michael H.; Shin, Kang G.

    1990-01-01

    The authors demonstrate the need to address fault latency in highly reliable real-time control computer systems. It is noted that the effectiveness of all known recovery mechanisms is greatly reduced in the presence of multiple latent faults. The presence of multiple latent faults increases the possibility of multiple errors, which could result in coverage failure. The authors present experimental evidence indicating that the duration of fault latency is dependent on workload. A synthetic workload generator is used to vary the workload, and a hardware fault injector is applied to inject transient faults of varying durations. This method makes it possible to derive the distribution of fault latency duration. Experimental results obtained from the fault-tolerant multiprocessor at the NASA Airlab are presented and discussed.

  18. A method for detection and location of high resistance earth faults

    Energy Technology Data Exchange (ETDEWEB)

    Haenninen, S; Lehtonen, M [VTT Energy, Espoo (Finland); Antila, E [ABB Transmit Oy (Finland)

    1998-08-01

    In the first part of this presentation, the theory of earth faults in unearthed and compensated power systems is briefly presented. The main factors affecting high resistance fault detection are outlined, and common practices for earth fault protection in present systems are summarized. The algorithms of the new method for high resistance fault detection and location are then presented. These are based on the changes of neutral voltage and zero sequence currents, measured at the high voltage / medium voltage substation and also at the distribution line locations. The performance of the method is analyzed and the possible error sources are discussed, among them switching actions, thunderstorms and heavy snowfall. The feasibility of the method is then verified by an analysis based both on simulated data, derived using an EMTP-ATP simulator, and on real system data recorded during field tests at three substations. For the error source analysis, real case data recorded during natural power system events are also used.

  19. Line-to-Line Fault Analysis and Location in a VSC-Based Low-Voltage DC Distribution Network

    Directory of Open Access Journals (Sweden)

    Shi-Min Xue

    2018-03-01

    Full Text Available A DC cable short-circuit fault is the most severe fault type that occurs in DC distribution networks, with a negative impact on transmission equipment and on the stability of system operation. When a short-circuit fault occurs in a DC distribution network based on a voltage source converter (VSC), in-depth analysis and characterization of the fault are of great significance for establishing relay protection, devising fault current limiters and realizing fault location. However, research on short-circuit faults in VSC-based low-voltage DC (LVDC) systems, which differ greatly from high-voltage DC (HVDC) systems, has stagnated: existing findings are not conclusive, and further study is required where conclusions carried over from HVDC systems do not fit simulated LVDC results or lack thorough theoretical analysis. In this paper, faults are divided into transient and steady states, and detailed formulas are provided. The resulting theoretical analysis, more thorough and practical and with fewer errors, can be used to develop protection schemes and short-circuit fault location based on the transient- and steady-state analytic formulas. Compared to classical methods, the fault analyses in this paper provide more accurate computed fault currents, so the fault location method can rapidly evaluate the distance between the fault and the converter. An analysis of error growth and an improved handshaking method coordinating with the proposed location method are also presented.

  20. Development of direct dating methods of fault gouges: Deep drilling into Nojima Fault, Japan

    Science.gov (United States)

    Miyawaki, M.; Uchida, J. I.; Satsukawa, T.

    2017-12-01

    It is crucial to develop a direct dating method for fault gouges for the assessment of recent fault activity in site evaluation for nuclear power plants. Such a method would be useful in regions without Late Pleistocene overlying sediments. To estimate the age of the latest fault slip event, it is necessary to use fault gouges that have experienced frictional heating sufficient for age resetting. Frictional heating is expected to be greater at depth, because the heat generated by fault movement depends on the shear stress; we should therefore determine a reliable age-resetting depth, as fault gouges from the ground surface are likely to be dated older than the actual age of the latest fault movement due to incomplete resetting. In this project, we target the Nojima fault, which triggered the 1995 Kobe earthquake in Japan. Samples are collected from various depths (300-1,500 m) by trenching and drilling to investigate age-resetting conditions and depth using several methods, including electron spin resonance (ESR) and optically stimulated luminescence (OSL), which are applicable to ages from the Late Pleistocene onward. The preliminary ESR results show approx. 1.1 Ma [1] at the ground surface and 0.15-0.28 Ma [2] at 388 m depth, indicating that samples from deeper depths preserve a younger age. In contrast, the OSL method dated approx. 2,200 yr [1] at the ground surface. Although further consideration is still needed given the large margin of error, this result indicates that the age-resetting depth for OSL is relatively shallow, owing to the high thermosensitivity of OSL compared to ESR. In the future, we plan to carry out further investigations dating fault gouges from various depths up to approx. 1,500 m to verify these direct dating methods. [1] Kyoto University, 2017. FY27 Commissioned for the disaster presentation on nuclear facilities (Drilling

  1. A description of medication errors reported by pharmacists in a neonatal intensive care unit.

    Science.gov (United States)

    Pawluk, Shane; Jaam, Myriam; Hazi, Fatima; Al Hail, Moza Sulaiman; El Kassem, Wessam; Khalifa, Hanan; Thomas, Binny; Abdul Rouf, Pallivalappila

    2017-02-01

    Background Patients in the Neonatal Intensive Care Unit (NICU) are at an increased risk for medication errors. Objective The objective of this study is to describe the nature and setting of medication errors occurring in patients admitted to an NICU in Qatar based on a standard electronic system reported by pharmacists. Setting Neonatal intensive care unit, Doha, Qatar. Method This was a retrospective cross-sectional study on medication errors reported electronically by pharmacists in the NICU between January 1, 2014 and April 30, 2015. Main outcome measure Data collected included patient information, and incident details including error category, medications involved, and follow-up completed. Results A total of 201 NICU pharmacists-reported medication errors were submitted during the study period. All reported errors did not reach the patient and did not cause harm. Of the errors reported, 98.5% occurred in the prescribing phase of the medication process with 58.7% being due to calculation errors. Overall, 53 different medications were documented in error reports with the anti-infective agents being the most frequently cited. The majority of incidents indicated that the primary prescriber was contacted and the error was resolved before reaching the next phase of the medication process. Conclusion Medication errors reported by pharmacists occur most frequently in the prescribing phase of the medication process. Our data suggest that error reporting systems need to be specific to the population involved. Special attention should be paid to frequently used medications in the NICU as these were responsible for the greatest numbers of medication errors.

  2. Uncertainties related to the fault tree reliability data

    International Nuclear Information System (INIS)

    Apostol, Minodora; Nitoi, Mirela; Farcasiu, M.

    2003-01-01

    Uncertainty analyses related to fault trees evaluate the system variability that arises from uncertainties in the basic event probabilities. Given a logical model describing a system, obtaining outcomes means evaluating the model using estimates for each of its basic events. If the model has basic events that incorporate uncertainties, then the results of the model should incorporate those uncertainties as well. Estimating the uncertainty in the final result of the fault tree means first evaluating the uncertainties of the basic event probabilities and then combining them to calculate the top event uncertainty. To propagate the uncertainty, knowledge of the probability density function as well as the range of possible values of the basic event probabilities is required. The following data are defined using suitable probability density functions: the component failure rates, the human error probabilities, and the initiating event frequencies. The possible values of the basic event probabilities were assumed to follow a lognormal probability density function. To know the range of possible values of a basic event probability, the error factor (uncertainty factor) is required. The aim of this paper is to estimate the error factors for the failure rates and for the human error probabilities in the reliability database used in the Cernavoda Probabilistic Safety Evaluation. The top event chosen as an example is FEED3, from the Pressure and Inventory Control System. The quantitative evaluation of this top event was made using the EDFT code, developed at the Institute for Nuclear Research Pitesti (INR). The error factors for the component failures were assumed to be the same as for the failure rates. Uncertainty analysis was made with the INCERT application, which uses the moment method and the Monte Carlo method. The reliability database used at INR Pitesti does not contain the error factors (ef
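
    The lognormal error factor has a convenient closed form: for a lognormal distribution, EF = p95/p50 = exp(1.645·σ), so a stated median and error factor fix σ. The Monte Carlo propagation sketch below uses invented medians and error factors, not the Cernavoda database values.

        import numpy as np

        # Monte Carlo propagation of lognormal basic-event uncertainty to the
        # top event (moment and Monte Carlo methods are both standard here).
        rng = np.random.default_rng(7)
        n = 200_000

        def lognormal(median, ef):
            # EF = p95/p50 = exp(1.645*sigma)  =>  sigma = ln(EF)/1.645
            return rng.lognormal(np.log(median), np.log(ef) / 1.645, n)

        a = lognormal(1e-3, ef=3.0)      # component failure probability
        b = lognormal(5e-4, ef=10.0)     # human error probability
        top = 1 - (1 - a) * (1 - b)      # OR gate: either event fails the system

        p5, p50, p95 = np.percentile(top, [5, 50, 95])
        print("top event: median %.2e, 90%% interval [%.2e, %.2e], output EF %.1f"
              % (p50, p5, p95, p95 / p50))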

  3. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reducing these faults by caregivers such as nurses can increase patient safety. The goal of the study was to assess the rate and etiology of medication errors in pediatric and medical wards. This cross-sectional analytic study was done on 101 registered nurses who had the duty of drug administration in medical pediatric and adult wards. Data were collected by a questionnaire including demographic information, self-reported faults, etiology of medication errors, and researcher observations. The results showed that nurses' fault rates were 51.6% in pediatric wards and 47.4% in adult wards. The most common fault in adult wards was administering drugs later or earlier than scheduled (48.6%), while administering drugs without a prescription and administering the wrong drug were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low, and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for, and type of, the drug they were administering to patients. An independent t-test showed a significant result for fault observations in pediatric wards (p = 0.000) and in adult wards (p = 0.000). Several studies have reported medication errors all over the world, especially in pediatric wards; however, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  4. Cooperative Fault Tolerant Tracking Control for Multiagent Systems: An Intermediate Estimator-Based Approach.

    Science.gov (United States)

    Zhu, Jun-Wei; Yang, Guang-Hong; Zhang, Wen-An; Yu, Li

    2017-10-17

    This paper studies the observer based fault tolerant tracking control problem for linear multiagent systems with multiple faults and mismatched disturbances. A novel distributed intermediate estimator based fault tolerant tracking protocol is presented. The leader's input is nonzero and unavailable to the followers. By applying a projection technique, the mismatched disturbances are separated into matched and unmatched components. For each node, a tracking error system is established, for which an intermediate estimator driven by the relative output measurements is constructed to estimate the sensor faults and a combined signal of the leader's input, process faults, and matched disturbance component. Based on the estimation, a fault tolerant tracking protocol is designed to eliminate the effects of the combined signal. Besides, the effect of the unmatched disturbance component can be attenuated by directly adjusting some specified parameters. Finally, a simulation example of aircraft demonstrates the effectiveness of the designed tracking protocol.
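
    The core idea, estimating an unknown fault signal alongside the state and cancelling it in the control law, can be shown on a much smaller problem. The sketch below is a single-agent, scalar analogue with assumed numbers, not the paper's distributed multiagent protocol: a constant actuator fault is appended to the state vector and reconstructed by a Luenberger-type observer.

```python
import numpy as np

# x(k+1) = a*x(k) + b*(u(k) + f): scalar plant with a constant actuator fault f.
a, b = 0.95, 0.1
Aa = np.array([[a, b],
               [0.0, 1.0]])          # augmented dynamics for [x, f]
Ba = np.array([b, 0.0])
Ca = np.array([1.0, 0.0])            # only x is measured
L = np.array([0.9, 1.5])             # gain; eig(Aa - outer(L, Ca)) = {0.8, 0.25}

x, f = 0.0, 0.4                      # true state and (unknown) fault
z = np.zeros(2)                      # observer estimate of [x, f]
for k in range(300):
    u = -0.5 * z[0] - z[1]           # feedback plus cancellation of estimated fault
    y = Ca[0] * x                    # measurement
    x = a * x + b * (u + f)          # true plant update
    z = Aa @ z + Ba * u + L * (y - Ca @ z)   # Luenberger observer, augmented state

print(f"estimated fault {z[1]:.3f} (true {f}), residual state {x:.4f}")
```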

  5. Chaos Synchronization Based Novel Real-Time Intelligent Fault Diagnosis for Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Chin-Tsung Hsieh

    2014-01-01

    Full Text Available Traditional solar photovoltaic fault diagnosis systems need two to three sets of sensing elements to capture fault signals as fault features, and many fault diagnosis methods cannot be applied in real time. The fault diagnosis method proposed in this study needs only one set of sensing elements to capture the fault features of the system, which can be diagnosed in real time from the fault data of that single sensor set. These two points reduce the cost and the fault diagnosis time, and simplify the construction of the large fault database. This study used Matlab to simulate faults in the solar photovoltaic system. The maximum power point tracker (MPPT) is used to keep a stable power supply to the system when the system has faults. The characteristic signal of the system fault voltage is captured and recorded, and the dynamic error of the fault voltage signal is extracted by chaos synchronization. Then, extension engineering is used to implement the fault diagnosis. Finally, the overall fault diagnosis system only needs to capture the voltage signal of the solar photovoltaic system, and the fault type can be diagnosed instantly.

  6. A theoretical basis for the analysis of multiversion software subject to coincident errors

    Science.gov (United States)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.
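
    A minimal numerical illustration of this framework (with invented numbers, not the paper's data) treats the intensity of coincident errors as a distribution over the per-input failure probability theta shared by all versions: conditional on an input, versions fail independently with probability theta, and the variation of theta across the input space is what makes versions fail together.

```python
from math import comb

import numpy as np

rng = np.random.default_rng(0)

def majority_failure(theta, n=3):
    """P(majority of n versions fail) given the per-input failure prob theta."""
    m = n // 2 + 1
    return sum(comb(n, k) * theta**k * (1 - theta) ** (n - k) for k in range(m, n + 1))

# theta(x) drawn from a Beta distribution: mean failure probability 0.01,
# with rare "hard" inputs on which most versions fail at once.
theta = rng.beta(0.05, 4.95, size=200_000)
p_single = theta.mean()
p_voted = majority_failure(theta).mean()
print(f"single version: {p_single:.4f}, 2-of-3 voting: {p_voted:.4f}")
```

    Under independence one would expect roughly 3 x (0.01)^2 = 3e-4 for 2-of-3 voting; the spread of theta pushes the voted failure probability an order of magnitude above that, which is the coincident-error effect the framework quantifies.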

  7. Large-scale simulations of error-prone quantum computation devices

    Energy Technology Data Exchange (ETDEWEB)

    Trieu, Doan Binh

    2009-07-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10^-6. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced
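
    The threshold behaviour described here can already be seen in the simplest classical analogue: a distance-3 repetition code with majority voting. The sketch below is purely illustrative (it is unrelated to JUMPIQCS and to the 7-qubit Steane code); it shows encoding suppressing the error rate below a pseudo-threshold and amplifying it above.

```python
import numpy as np

rng = np.random.default_rng(42)

def logical_error_rate(p, shots=200_000):
    """Monte Carlo estimate of majority-vote failure for 3 noisy copies."""
    flips = rng.random((shots, 3)) < p       # independent bit flips, probability p
    return np.mean(flips.sum(axis=1) >= 2)   # voting fails if 2 or more copies flip

for p in [0.01, 0.1, 0.3, 0.5]:
    print(f"p = {p:.2f}: logical = {logical_error_rate(p):.4f}, "
          f"analytic = {3 * p**2 - 2 * p**3:.4f}")
```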

  8. Nurse prescribing in dermatology: doctors' and non-prescribing nurses' views.

    Science.gov (United States)

    Stenner, Karen; Carey, Nicola; Courtenay, Molly

    2009-04-01

    This paper is a report of a study conducted to explore doctor and non-prescribing nurse views about nurse prescribing in the light of their experience in dermatology. The cooperation of healthcare professionals and peers is of key importance in enabling and supporting nurse prescribing. Lack of understanding of and opposition to nurse prescribing are known barriers to its implementation. Given the important role they play, it is necessary to consider how the recent expansion of nurse prescribing rights in England impacts on the views of healthcare professionals. Interviews with 12 doctors and six non-prescribing nurses were conducted in 10 case study sites across England between 2006 and 2007. Participants all worked with nurses who prescribed for patients with dermatological conditions in secondary or primary care. Thematic analysis was conducted on the interview data. Participants were positive about their experiences of nurse prescribing having witnessed benefits from it, but had reservations about nurse prescribing in general. Acceptance was conditional upon the nurses' level of experience, awareness of their own limitations and the context in which they prescribed. Fears that nurses would prescribe beyond their level of competence were expected to reduce as understanding and experience of nurse prescribing increased. Indications are that nurse prescribing can be acceptable to doctors and nurses so long as it operates within recommended parameters. Greater promotion and assessment of standards and criteria are recommended to improve understanding and acceptance of nurse prescribing.

  9. Sliding Mode Fault Tolerant Control with Adaptive Diagnosis for Aircraft Engines

    Science.gov (United States)

    Xiao, Lingfei; Du, Yanbin; Hu, Jixiang; Jiang, Bin

    2018-03-01

    In this paper, a novel sliding mode fault tolerant control method is presented for aircraft engine systems with uncertainties and disturbances, on the basis of an adaptive diagnostic observer. Taking both sensor faults and actuator faults into account, a general model of aircraft engine control systems subject to uncertainties and disturbances is considered. The corresponding augmented dynamic model is then established in order to facilitate the fault diagnosis and fault tolerant controller design. Next, a suitable detection observer is designed to detect the faults effectively. By creating an adaptive diagnostic observer and using a sliding mode strategy, the sliding mode fault tolerant controller is constructed. Robust stabilization is discussed and the closed-loop system is shown to be robustly stabilized. It is also proven that the adaptive diagnostic observer output errors and the estimates of the faults converge exponentially to a set, with a convergence rate greater than a value that can be adjusted by choosing the design parameters properly. A simulation of a twin-shaft aircraft engine verifies the applicability of the proposed fault tolerant control method.

  10. A New Method for Weak Fault Feature Extraction Based on Improved MED

    Directory of Open Access Journals (Sweden)

    Junlin Li

    2018-01-01

    Full Text Available Because of the combination of weak signals and strong noise, fault feature extraction from low-speed vibration signals has long been a difficult problem in equipment fault diagnosis. The traditional minimum entropy deconvolution (MED) method has been shown to detect such fault signals. MED designs the filter coefficients with an objective function method, and an appropriate threshold value must be set during the calculation to achieve the optimal iteration effect. It should be pointed out that an improper threshold setting forces the objective function to be recalculated, and under a strong noise background the resulting error ultimately distorts the objective function. This paper presents an improved MED based method for extracting fault features from rolling bearing vibration signals that originate in high noise environments. The method uses the shuffled frog leaping algorithm (SFLA) to find the optimal set of filter coefficients, thereby avoiding the human error introduced by selecting the threshold parameter. Fault bearings at two rotating speeds, 60 rpm and 70 rpm, were selected for verification, with a typical low-speed fault bearing as the research object; the results show that SFLA-MED extracts more distinct bearing fault features and achieves a higher signal-to-noise ratio than the prior MED method.
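
    The core of deconvolution methods in this family is choosing filter coefficients that sharpen the periodic fault impulses, for which kurtosis is a standard objective. The sketch below is a toy version with an invented signal: a crude random search over FIR filters stands in for SFLA, and the filter length and parameters are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic low-speed bearing signal: a weak ~3 Hz impact train buried in noise.
fs = 2000
t = np.arange(0, 4.0, 1 / fs)
x = np.zeros(t.size)
x[::667] = 5.0                        # one impact roughly every 0.33 s
x += rng.standard_normal(t.size)      # strong broadband noise

def kurtosis(y):
    """Normalized fourth moment; large when energy is concentrated in impulses."""
    y = y - y.mean()
    return np.mean(y**4) / np.mean(y**2) ** 2

best_k, best_f = -np.inf, None
for _ in range(2000):                 # crude random search over 16-tap FIR filters
    f = rng.standard_normal(16)
    k = kurtosis(np.convolve(x, f, mode="same"))
    if k > best_k:
        best_k, best_f = k, f

print(f"kurtosis: raw = {kurtosis(x):.2f}, best filtered = {best_k:.2f}")
```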

  11. A framework for software fault tolerance in real-time systems

    Science.gov (United States)

    Anderson, T.; Knight, J. C.

    1983-01-01

    A classification scheme for errors and a technique for the provision of software fault tolerance in cyclic real-time systems are presented. The technique requires that the process structure of a system be represented by a synchronization graph, which an executive uses as a specification of the relative times at which processes will communicate during execution. Communication between concurrent processes is severely limited and may only take place between processes engaged in an exchange. A history of error occurrences is maintained by an error handler. When an error is detected, the error handler classifies it using the error history information and then initiates appropriate recovery action.

  12. Automated Classification of Phonological Errors in Aphasic Language

    Science.gov (United States)

    Ahuja, Sanjeev B.; Reggia, James A.; Berndt, Rita S.

    1984-01-01

    Using heuristically-guided state space search, a prototype program has been developed to simulate and classify phonemic errors occurring in the speech of neurologically-impaired patients. Simulations are based on an interchangeable rule/operator set of elementary errors which represent a theory of phonemic processing faults. This work introduces and evaluates a novel approach to error simulation and classification, it provides a prototype simulation tool for neurolinguistic research, and it forms the initial phase of a larger research effort involving computer modelling of neurolinguistic processes.

  13. Dependability validation by means of fault injection: method, implementation, application

    International Nuclear Information System (INIS)

    Arlat, Jean

    1990-01-01

    This dissertation presents theoretical and practical results concerning the use of fault injection as a means for testing fault tolerance in the framework of the experimental dependability validation of computer systems. The dissertation first presents the state of the art of published work on fault injection, encompassing both hardware (fault simulation, physical fault injection) and software (mutation testing) issues. Next, the major attributes of fault injection (faults and their activation, experimental readouts and measures) are characterized, taking into account: i) the abstraction levels used to represent the system during the various phases of its development (analytical, empirical and physical models), and ii) the validation objectives (verification and evaluation). An evaluation method is subsequently proposed that combines the analytical modeling approaches (Monte Carlo simulations, closed-form expressions, Markov chains) used for the representation of the fault occurrence process with the experimental fault injection approaches (fault simulation and physical injection) characterizing the error processing and fault treatment provided by the fault tolerance mechanisms. An experimental tool - MESSALINE - is then defined and presented. This tool enables physical faults to be injected in a hardware and software prototype of the system to be validated. Finally, the application of MESSALINE for testing two fault-tolerant systems possessing very dissimilar features and the utilization of the experimental results obtained - both as design feedback and for the evaluation of dependability measures - are used to illustrate the relevance of the method. (author) [fr
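
    Software-implemented fault injection, a simple relative of the physical injection MESSALINE performs, can be sketched in a few lines. The example below is an assumed toy experiment, not the tool itself: it flips one random bit in one operand of a computation and tallies whether the fault is masked, caught by a crude detector, or silently corrupts the result.

```python
import random
import struct

random.seed(3)

def flip_random_bit(x: float) -> float:
    """Flip one uniformly chosen bit of the IEEE-754 representation of x."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    bits ^= 1 << random.randrange(64)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

masked = detected = silent = 0
for _ in range(20_000):
    data = [1.0, 2.0, 3.0, 4.0]
    gold = sum(data)                        # fault-free reference result
    i = random.randrange(len(data))
    data[i] = flip_random_bit(data[i])      # inject a single-bit fault
    out = sum(data)
    if abs(out - gold) < 1e-6:
        masked += 1                          # fault had no meaningful effect
    elif out != out or not (0.0 <= out <= 100.0):
        detected += 1                        # caught by a crude NaN / range check
    else:
        silent += 1                          # silent data corruption

print(f"masked {masked}, detected {detected}, silent {silent}")
```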

  14. From experiment to design -- Fault characterization and detection in parallel computer systems using computational accelerators

    Science.gov (United States)

    Yim, Keun Soo

    This dissertation summarizes experimental validation and co-design studies conducted to optimize the fault detection capabilities and overheads in hybrid computer systems (e.g., using CPUs and Graphics Processing Units, or GPUs), and consequently to improve the scalability of parallel computer systems using computational accelerators. The experimental validation studies were conducted to help us understand the failure characteristics of CPU-GPU hybrid computer systems under various types of hardware faults. The main characterization targets were faults that are difficult to detect and/or recover from, e.g., faults that cause long latency failures (Ch. 3), faults in dynamically allocated resources (Ch. 4), faults in GPUs (Ch. 5), faults in MPI programs (Ch. 6), and microarchitecture-level faults with specific timing features (Ch. 7). The co-design studies were based on the characterization results. One of the co-designed systems has a set of source-to-source translators that customize and strategically place error detectors in the source code of target GPU programs (Ch. 5). Another co-designed system uses an extension card to learn the normal behavioral and semantic execution patterns of message-passing processes executing on CPUs, and to detect abnormal behaviors of those parallel processes (Ch. 6). The third co-designed system is a co-processor that has a set of new instructions in order to support software-implemented fault detection techniques (Ch. 7). The work described in this dissertation gains more importance because heterogeneous processors have become an essential component of state-of-the-art supercomputers. GPUs were used in three of the five fastest supercomputers that were operating in 2011. Our work included comprehensive fault characterization studies in CPU-GPU hybrid computers. In CPUs, we monitored the target systems for a long period of time after injecting faults (a temporally comprehensive experiment), and injected faults into various types of

  15. Learning from no-fault treatment injury claims to improve the safety of older patients.

    Science.gov (United States)

    Wallis, Katharine Ann

    2015-09-01

    New Zealand's treatment injury compensation claims data set provides an uncommon no-fault perspective of patient safety incidents. Analysis of primary care claims data confirmed medication as the leading threat to the safety of older patients in primary care and drew particular attention to the threat posed by antibiotics. For most injuries there was no suggestion of error. The no-fault perspective reveals the greatest threat to the safety of older patients in primary care to be, not error, but the risk posed by treatment itself. To improve patients' safety, in addition to reducing error, clinicians need to reduce patients' exposure to treatment risk, where appropriate. © 2015 Annals of Family Medicine, Inc.

  16. The incidence and types of medication errors in patients receiving antiretroviral therapy in resource-constrained settings.

    Directory of Open Access Journals (Sweden)

    Kenneth Anene Agu

    Full Text Available This study assessed the incidence and types of medication errors, the interventions provided and their outcomes in patients on antiretroviral therapy (ART) in selected HIV treatment centres in Nigeria. Of 69 health facilities that had a program for active screening of medication errors, 14 were randomly selected for prospective cohort assessment. All patients who filled/refilled their antiretroviral medications between February 2009 and March 2011 were screened for medication errors using a study-specific pharmaceutical care daily worksheet (PCDW). All potential or actual medication errors identified, the interventions provided and the outcomes were documented in the PCDW. Interventions included, among others, pharmaceutical care in HIV training for pharmacists. Chi-square was used for inferential statistics, with P<0.05 considered significant. The major medication errors identified were incorrect ART regimens prescribed (26.4%); potential drug-drug interactions or contraindications present (19.8%); and inappropriate duration and/or frequency of medication (16.6%). Interventions provided included contacting the prescriber to clarify or resolve errors (67.1% of cases) and patient counselling and education (14.7% of cases); 97.4% of potential or actual medication errors were resolved. The incidence rate of medication errors was somewhat high, and the majority of identified errors were related to prescription of incorrect ART regimens and potential drug-drug interactions; the prescriber was contacted and the errors were resolved in the majority of cases. Active screening for medication errors is feasible in resource-limited settings following a capacity building intervention.

  17. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with an 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA, and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
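
    The repeated-delivery procedure can be reproduced in a one-dimensional toy model. The sketch below uses assumed spot spacing, beam width and error magnitudes, not the Loma Linda system data: overlapping Gaussian pencil beams are delivered many times with random position and intensity errors, and the rms dose variation per voxel is reported relative to the peak dose.

```python
import numpy as np

rng = np.random.default_rng(7)

x = np.linspace(0, 80, 161)              # voxel centres along one axis, mm
spots = np.arange(5, 76, 5.0)            # nominal spot positions, mm
sigma = 5.0                              # pencil beam width, mm

def delivered(pos_err=0.5, int_err=0.02):
    """One delivery with Gaussian spot position (mm) and intensity errors."""
    dose = np.zeros_like(x)
    for s in spots:
        p = s + pos_err * rng.standard_normal()
        w = 1.0 + int_err * rng.standard_normal()
        dose += w * np.exp(-0.5 * ((x - p) / sigma) ** 2)
    return dose

nominal = delivered(0.0, 0.0)
runs = np.array([delivered() for _ in range(500)])
rms = runs.std(axis=0) / nominal.max()   # rms error as fraction of peak dose
print(f"max rms dose error = {100 * rms.max():.2f}% of peak dose")
```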

  18. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault tolerant) implementations of certain operations compatible with the error basis.
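
    A standard concrete instance of such a generalization, consistent with the abstract but chosen here as an assumed example, uses the shift matrix X (the bit flip over Z_d) and the clock matrix Z (the phase flip over Z_d). The sketch below builds them for d = 3 and checks that the d² products X^a Z^b form an orthogonal unitary error basis.

```python
import itertools

import numpy as np

d = 3
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)            # shift: |j> -> |j+1 mod d>
Z = np.diag(w ** np.arange(d))               # clock: |j> -> w^j |j>

assert np.allclose(Z @ X, w * (X @ Z))       # commutation relation ZX = wXZ

# Orthogonality: |tr((X^a Z^b)^dag (X^c Z^e))| = d exactly when (a,b) = (c,e).
basis = {(a, b): np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
         for a, b in itertools.product(range(d), repeat=2)}
for (k1, E1), (k2, E2) in itertools.product(basis.items(), repeat=2):
    ip = np.trace(E1.conj().T @ E2)
    assert np.isclose(abs(ip), d if k1 == k2 else 0.0)

print(f"the {d * d} operators X^a Z^b form a unitary error basis for dimension {d}")
```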

  19. Automatic fault tracing of active faults in the Sutlej valley (NW-Himalayas, India)

    Science.gov (United States)

    Janda, C.; Faber, R.; Hager, C.; Grasemann, B.

    2003-04-01

    In the Sutlej Valley, the Lesser Himalayan Crystalline Sequence (LHCS) is actively extruding between the Munsiari Thrust (MT) at the base and the Karcham Normal Fault (KNF) at the top. Clear evidence for ongoing deformation includes brittle faults in Holocene lake deposits, hot spring activity near the faults and dramatically younger cooling ages within the LHCS (Vannay and Grasemann, 2001). Because these brittle fault zones obviously influence the morphology in the field, we developed a new method for automatically tracing the intersections of planar fault geometries with digital elevation models (Faber, 2002). Traditional mapping techniques use structure contours (i.e. lines or curves connecting points of equal elevation on a geological structure) to construct intersections of geological structures with topographic maps. However, even if the geological structure is approximated by a plane, so that the structure contours are equally spaced lines, this technique is rather time consuming and inaccurate, because errors are cumulative. Drawing structure contours by hand also makes it impossible to slightly change the azimuth and dip direction of the favoured plane without redrawing everything from the beginning. Yet small variations in the fault position, easily caused by measurement inaccuracies in the field or by small local variations in the trend and/or dip of the fault planes, can have big effects on the intersection with topography. The developed method allows intersections to be viewed interactively in 2D and 3D modes. An unlimited number of planes can be moved separately in three dimensions (translation and rotation), and their intersections with the topography, which probably follow morphological features, can be mapped. Besides the increase in efficiency, this method underlines the shortcoming of classical lineament extraction, which ignores the dip of planar structures. Using this method, areas of active faulting influencing the morphology, can be
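
    The geometric core of the tracing idea is small: express the fault as a plane from its dip and dip direction, compute its signed distance to every cell of the elevation model, and take the near-zero band as the trace. The sketch below does this on a synthetic surface with assumed parameters; it is an illustration of the principle, not the tool of Faber (2002).

```python
import numpy as np

# Synthetic DEM: x east, y north (m), z elevation (m).
x, y = np.meshgrid(np.linspace(0, 10_000, 400), np.linspace(0, 10_000, 400))
z = 1500 + 400 * np.sin(x / 1500) * np.cos(y / 2000)

dip, dip_dir = np.radians(60), np.radians(90)   # 60 degrees, dipping due east
# Unit normal of a plane from dip angle and dip direction (azimuth from north).
n = np.array([np.sin(dip) * np.sin(dip_dir),
              np.sin(dip) * np.cos(dip_dir),
              np.cos(dip)])
p0 = np.array([5000.0, 5000.0, 1500.0])         # a point on the fault plane

# Signed distance of every DEM cell to the plane; the trace is the zero band.
dist = (x - p0[0]) * n[0] + (y - p0[1]) * n[1] + (z - p0[2]) * n[2]
trace = np.abs(dist) < 25                       # cells within 25 m of the plane
print(f"{trace.sum()} DEM cells lie on the mapped fault trace")
```

    Re-running with a slightly rotated normal or shifted p0 is immediate, which is exactly the interactive adjustment that hand-drawn structure contours make impractical.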

  20. Noise Threshold and Resource Cost of Fault-Tolerant Quantum Computing with Majorana Fermions in Hybrid Systems.

    Science.gov (United States)

    Li, Ying

    2016-09-16

    Fault-tolerant quantum computing in systems composed of both Majorana fermions and topologically unprotected quantum systems, e.g., superconducting circuits or quantum dots, is studied in this Letter. Errors caused by topologically unprotected quantum systems need to be corrected with error-correction schemes, for instance, the surface code. We find that the error-correction performance of such a hybrid topological quantum computer is not superior to a normal quantum computer unless the topological charge of Majorana fermions is insusceptible to noise. If errors changing the topological charge are rare, the fault-tolerance threshold is much higher than the threshold of a normal quantum computer and a surface-code logical qubit could be encoded in only tens of topological qubits instead of about 1,000 normal qubits.

  1. Social determinants of prescribed and non-prescribed medicine use

    Directory of Open Access Journals (Sweden)

    García-Altés Anna

    2010-05-01

    Full Text Available Abstract Background The aim of the present study was to describe the use of prescribed and non prescribed medicines in a non-institutionalised population older than 15 years of an urban area during the year 2000, in terms of age and gender, social class, employment status and type of Primary Health Care. Methods Cross-sectional study. Information came from the 2000 Barcelona Health Interview Survey. The indicators used were the prevalence of use of prescribed and non-prescribed medicines in the two weeks prior to the interview. Descriptive analyses, bivariate and multivariate logistic regression analyses were carried out. Results More women than men took medicines (75.8% vs. 60% respectively. The prevalence of use of prescribed medicines increased with age while the prevalence of non-prescribed use decreased. These age differences are smaller among those with poor perceived health. In terms of social class, a higher percentage of men with good health in the more advantaged classes took non-prescribed medicines compared with disadvantaged classes (38.7% vs 31.8%. In contrast, among the group with poor health, more people from the more advantaged classes took prescribed medicines, compared with disadvantaged classes (51.4% vs 33.3%. A higher proportion of people who were either retired, unemployed or students, with good health, used prescribed medicines. Conclusion This study shows that beside health needs, there are social determinants affecting medicine consumption in the city of Barcelona.

  2. Fault tolerant control of multivariable processes using auto-tuning PID controller.

    Science.gov (United States)

    Yu, Ding-Li; Chang, T K; Yu, Ding-Wen

    2005-02-01

    Fault tolerant control of dynamic processes is investigated in this paper using an auto-tuning PID controller. A fault tolerant control scheme is proposed composing an auto-tuning PID controller based on an adaptive neural network model. The model is trained online using the extended Kalman filter (EKF) algorithm to learn system post-fault dynamics. Based on this model, the PID controller adjusts its parameters to compensate the effects of the faults, so that the control performance is recovered from degradation. The auto-tuning algorithm for the PID controller is derived with the Lyapunov method and therefore, the model predicted tracking error is guaranteed to converge asymptotically. The method is applied to a simulated two-input two-output continuous stirred tank reactor (CSTR) with various faults, which demonstrate the applicability of the developed scheme to industrial processes.

  3. Critical Gates Identification for Fault-Tolerant Design in Math Circuits

    Directory of Open Access Journals (Sweden)

    Tian Ban

    2017-01-01

    Full Text Available Hardware redundancy at different levels of design is a common fault mitigation technique, which is well known for its efficiency to the detriment of area overhead. In order to reduce this drawback, several fault-tolerant techniques have been proposed in literature to find a good trade-off. In this paper, critical constituent gates in math circuits are detected and graded based on the impact of an error in the output of a circuit. These critical gates should be hardened first under the area constraint of design criteria. Indeed, output bits considered crucial to a system receive higher priorities to be protected, reducing the occurrence of critical errors. The 74283 fast adder is used as an example to illustrate the feasibility and efficiency of the proposed approach.

  4. Accident Fault Trees for Defense Waste Processing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Sarrack, A.G.

    1999-06-22

    The purpose of this report is to document fault tree analyses which have been completed for the Defense Waste Processing Facility (DWPF) safety analysis. Logic models for equipment failures and human error combinations that could lead to flammable gas explosions in various process tanks, or failure of critical support systems were developed for internal initiating events and for earthquakes. These fault trees provide frequency estimates for support systems failures and accidents that could lead to radioactive and hazardous chemical releases both on-site and off-site. Top event frequency results from these fault trees will be used in further APET analyses to calculate accident risk associated with DWPF facility operations. This report lists and explains important underlying assumptions, provides references for failure data sources, and briefly describes the fault tree method used. Specific commitments from DWPF to provide new procedural/administrative controls or system design changes are listed in the "Facility Commitments" section. The purpose of the "Assumptions" section is to clarify the basis for fault tree modeling, and is not necessarily a list of items required to be protected by Technical Safety Requirements (TSRs).

  5. Standardized Competencies for Parenteral Nutrition Prescribing: The American Society for Parenteral and Enteral Nutrition Model.

    Science.gov (United States)

    Guenter, Peggi; Boullata, Joseph I; Ayers, Phil; Gervasio, Jane; Malone, Ainsley; Raymond, Erica; Holcombe, Beverly; Kraft, Michael; Sacks, Gordon; Seres, David

    2015-08-01

    Parenteral nutrition (PN) provision is complex, as it is a high-alert medication and prone to a variety of potential errors. With changes in clinical practice models and recent federal rulings, the number of PN prescribers may be increasing. Safe prescribing of this therapy requires that competency for prescribers from all disciplines be demonstrated using a standardized process. A standardized model for PN prescribing competency is proposed based on a competency framework, the American Society for Parenteral and Enteral Nutrition (A.S.P.E.N.)-published interdisciplinary core competencies, safe practice recommendations, and clinical guidelines. This framework will guide institutions and agencies in developing and maintaining competency for safe PN prescription by their staff. © 2015 American Society for Parenteral and Enteral Nutrition.

  6. Fault-tolerant search algorithms reliable computation with unreliable information

    CERN Document Server

    Cicalese, Ferdinando

    2013-01-01

    Why a book on fault-tolerant search algorithms? Searching is one of the fundamental problems in computer science. Time and again algorithmic and combinatorial issues originally studied in the context of search find application in the most diverse areas of computer science and discrete mathematics. On the other hand, fault-tolerance is a necessary ingredient of computing. Due to their inherent complexity, information systems are naturally prone to errors, which may appear at any level - as imprecisions in the data, bugs in the software, or transient or permanent hardware failures. This book pr
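
    A classic problem in this area is searching with unreliable answers, the Rényi-Ulam game. The sketch below is a deliberately simple strategy with assumed parameters, not one of the book's query-optimal algorithms: if an adversary may lie at most k times in total, repeating each comparison 2k+1 times and taking the majority vote guarantees every round's decision is correct, since at most k of its 2k+1 answers can be lies.

```python
import random

random.seed(1)

def lying_oracle(secret, k):
    """Answers 'is secret <= m?' but may lie, at most k times in total."""
    budget = [k]
    def ask(m):
        truth = secret <= m
        if budget[0] > 0 and random.random() < 0.3:
            budget[0] -= 1
            return not truth          # spend one of the k allowed lies
        return truth
    return ask

def robust_search(n, ask, k):
    """Binary search over [0, n) tolerating up to k lies overall."""
    lo, hi = 0, n - 1
    while lo < hi:
        mid = (lo + hi) // 2
        votes = sum(ask(mid) for _ in range(2 * k + 1))
        if votes > k:                 # majority says secret <= mid
            hi = mid
        else:
            lo = mid + 1
    return lo

secret, k = 731, 3
assert robust_search(1024, lying_oracle(secret, k), k) == secret
print(f"found {secret} despite up to {k} lies")
```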

  7. A Framework For Evaluating Comprehensive Fault Resilience Mechanisms In Numerical Programs

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Peng, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-01-09

    As HPC systems approach exascale, their circuit feature sizes will shrink while their overall size grows, all at a fixed power limit. These trends imply that soft faults in electronic circuits will become an increasingly significant problem for applications that run on these systems, causing them to occasionally crash or, worse, silently return incorrect results. This is motivating extensive work on application resilience to such faults, ranging from generic techniques such as replication or checkpoint/restart to algorithm-specific error detection and resilience techniques. Effective use of such techniques requires a detailed understanding of (1) which vulnerable parts of the application are most worth protecting, and (2) the performance and resilience impact of fault resilience mechanisms on the application. This paper presents FaultTelescope, a tool that combines these two and generates actionable insights by presenting application vulnerabilities and the impact of fault resilience mechanisms on applications in an intuitive way.

  8. Robust approximation-free prescribed performance control for nonlinear systems and its application

    Science.gov (United States)

    Sun, Ruisheng; Na, Jing; Zhu, Bin

    2018-02-01

    This paper presents a robust prescribed performance control approach and its application to nonlinear tail-controlled missile systems with unknown dynamics and uncertainties. The idea of prescribed performance function (PPF) is incorporated into the control design, such that both the steady-state and transient control performance can be strictly guaranteed. Unlike conventional PPF-based control methods, we further tailor a recently proposed systematic control design procedure (i.e. approximation-free control) using the transformed tracking error dynamics, which provides a proportional-like control action. Hence, the function approximators (e.g. neural networks, fuzzy systems) that are widely used to address the unknown nonlinearities in the nonlinear control designs are not needed. The proposed control design leads to a robust yet simplified function approximation-free control for nonlinear systems. The closed-loop system stability and the control error convergence are all rigorously proved. Finally, comparative simulations are conducted based on nonlinear missile systems to validate the improved response and the robustness of the proposed control method.
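
    The mechanics of the error transformation can be seen in a scalar simulation. The sketch below is a minimal assumed example (a first-order plant and arbitrary funnel parameters, not the paper's missile model): the tracking error is normalized by a decaying performance function rho(t), mapped through a logarithmic transformation that blows up at the funnel boundary, and fed into a proportional-like law with no function approximator.

```python
import numpy as np

dt, T = 1e-3, 5.0
rho0, rho_inf, l, k = 2.0, 0.05, 2.0, 2.0

def rho(t):
    """Prescribed performance funnel: |e(t)| must stay below rho(t)."""
    return (rho0 - rho_inf) * np.exp(-l * t) + rho_inf

x, t = 1.5, 0.0                        # initial state; reference is x_d = sin(t)
while t < T:
    e = x - np.sin(t)                  # tracking error, |e(0)| < rho(0)
    xi = e / rho(t)                    # normalized error in (-1, 1)
    eps = np.log((1 + xi) / (1 - xi))  # transformed (unconstrained) error
    u = -k * eps                       # approximation-free, proportional-like law
    f = -0.5 * x + 0.8 * np.sin(x)     # plant nonlinearity, unknown to controller
    x += dt * (f + u)
    t += dt

print(f"final error {x - np.sin(T):+.4f}, funnel bound +/-{rho(T):.3f}")
```

    Because the transformed error grows without bound as the normalized error approaches the funnel edge, the control effort automatically stiffens there, which is what enforces the prescribed transient and steady-state bounds without approximating f.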

  9. Assessing pediatrics residents' mathematical skills for prescribing medication: a need for improved training.

    Science.gov (United States)

    Glover, Mark L; Sussmane, Jeffrey B

    2002-10-01

    To evaluate residents' skills in performing the basic mathematical calculations used for prescribing medications to pediatric patients. In 2001, a test of ten questions on basic calculations was given to first-, second-, and third-year residents at Miami Children's Hospital in Florida. Four additional questions were included to obtain the residents' levels of training, specific pediatric intensive care unit (PICU) experience, and whether or not they routinely double-checked doses and adjusted them for each patient's weight. The test was anonymous and calculators were permitted. The overall score and the score for each resident class were calculated. Twenty-one residents participated. The overall average test score and the mean test score of each resident class were less than 70%. Second-year residents had the highest mean test scores, although there was no significant difference between the classes of residents (p = .745) or relationship between the residents' PICU experience and their exam scores (p = .766). There was no significant difference between residents' levels of training and whether they double-checked their calculations (p = .633) or considered each patient's weight relative to the dose prescribed (p = .869). Seven residents committed tenfold dosing errors, and one resident committed a 1,000-fold dosing error. Pediatrics residents need to receive additional education in performing the calculations needed to prescribe medications. In addition, residents should be required to demonstrate these necessary mathematical skills before they are allowed to prescribe medications.

  10. An evaluation of the appropriateness and safety of nurse and midwife prescribing in Ireland.

    LENUS (Irish Health Repository)

    Naughton, Corina

    2012-09-19

    AIM: To evaluate the clinical appropriateness and safety of nurse and midwife prescribing practice. BACKGROUND: The number of countries introducing nurse and midwife prescribing is increasing; however, concerns over patient safety remain. DESIGN: A multi-site documentation evaluation was conducted using purposeful and random sampling. The sample included 142 patients' records and 208 medications prescribed by 25 Registered Nurse Prescribers. METHODS: Data were extracted from patient and prescription records between March-May 2009. Two expert reviewers applied the modified Medication Appropriate Index tool (8 criteria) to each drug. The percentage of appropriate or inappropriate responses for each criterion was reported. Reviewer concordance was measured using Cohen's kappa statistic (inter-rater reliability). RESULTS: Nurse or midwife prescribers from eight hospitals working in seventeen different areas of practice were included. The reviewers judged that 95-96% of medicines prescribed were indicated and effective for the diagnosed condition. Criteria relating to dosage, directions, drug-drug or disease-condition interactions, and duplication of therapy were judged appropriate in 87-92% of prescriptions. Duration of therapy received the lowest value at 76%. Overall, the reviewers indicated that between 69% (reviewer 2) and 80% (reviewer 1) of prescribing decisions met all eight criteria. CONCLUSION: The majority of nurse and midwife prescribing decisions were deemed safe and clinically appropriate. However, a risk of inappropriate prescribing with the potential for drug errors was detected. Continuing education and evaluation of prescribing practice, especially related to drug and condition interactions, is required to maximize appropriate and safe prescribing.

  11. Quality Improvement Initiative to Decrease Variability of Emergency Physician Opioid Analgesic Prescribing

    Directory of Open Access Journals (Sweden)

    John H. Burton

    2016-05-01

    Full Text Available Introduction: Addressing pain is a crucial aspect of emergency medicine. Prescription opioids are commonly prescribed for moderate to severe pain in the emergency department (ED); unfortunately, prescribing practices are variable. High variability of opioid prescribing decisions suggests a lack of consensus and an opportunity to improve care. This quality improvement (QI) initiative aimed to reduce variability in ED opioid analgesic prescribing. Methods: We evaluated the impact of a three-part QI initiative on ED opioid prescribing by physicians at seven sites. Stage 1: Retrospective baseline period (nine months). Stage 2: Physicians were informed that opioid prescribing information would be prospectively collected, and that feedback on their prescribing and that of the group would be shared at the end of the stage (three months). Stage 3: After physicians received their individual opioid prescribing data with blinded comparison to the group means (from Stage 2), they were informed that individual prescribing data would be unblinded and shared with the group after three months. The primary outcome was variability of the standard error of the mean and standard deviation of the opioid prescribing rate (defined as the number of patients discharged with an opioid divided by the total number of discharges for each provider). Secondary observations included the mean quantity of pills per opioid prescription, and the overall frequency of opioid prescribing. Results: The study group included 47 physicians with 149,884 ED patient encounters. The variability in prescribing decreased through each stage of the initiative, as represented by the distributions of the opioid prescribing rate: Stage 1 mean 20%; Stage 2 mean 13% (46% reduction, p<0.01); and Stage 3 mean 8% (60% reduction, p<0.01). The mean quantity of pills prescribed per prescription was 16 pills in Stage 1, 14 pills in Stage 2 (18% reduction, p<0.01), and 13 pills in Stage 3 (18% reduction, p<0.01). The group mean

  12. The Earth isn't flat: The (large) influence of topography on geodetic fault slip imaging.

    Science.gov (United States)

    Thompson, T. B.; Meade, B. J.

    2017-12-01

    While earthquakes both occur near and generate steep topography, most geodetic slip inversions assume that the Earth's surface is flat. We have developed a new boundary element tool, Tectosaur, with the capability to study fault and earthquake problems including complex fault system geometries, topography, material property contrasts, and millions of elements. Using Tectosaur, we study the model error induced by neglecting topography in both idealized synthetic fault models and for the cases of the MW=7.3 Landers and MW=8.0 Wenchuan earthquakes. Near the steepest topography, we find the use of flat Earth dislocation models may induce errors of more than 100% in the inferred slip magnitude and rake. In particular, neglecting topographic effects leads to an inferred shallow slip deficit. Thus, we propose that the shallow slip deficit observed in several earthquakes may be an artefact resulting from the systematic use of elastic dislocation models assuming a flat Earth. Finally, using this study as an example, we emphasize the dangerous potential for forward model errors to be amplified by an order of magnitude in inverse problems.

  13. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions of a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort to detect them, sound analysis and, where feasible, the institution of preventive measures.

  14. Overview of error-tolerant cockpit research

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objectives of research in intelligent cockpit aids and intelligent error-tolerant systems are stated. In intelligent cockpit aids research, the objective is to provide increased aid and support to the flight crew of civil transport aircraft through the use of artificial intelligence techniques combined with traditional automation. In intelligent error-tolerant systems, the objective is to develop and evaluate cockpit systems that provide flight crews with safe and effective ways and means to manage aircraft systems, plan and replan flights, and respond to contingencies. A subsystems fault management functional diagram is given. All information is in viewgraph form.

  15. The impact of pharmacy services on opioid prescribing in dental practice.

    Science.gov (United States)

    Stewart, Autumn; Zborovancik, Kelsey J; Stiely, Kara L

    To compare rates of dental opioid prescribing between periods of full and partial integration of pharmacy services and periods of no integration. This observational study used a retrospective chart review of opioid prescriptions written by dental providers practicing in a free dental clinic for the medically underserved over a period of 74 months. Pharmacy services were fully integrated into the practice model for 48 of the 74 months under study. During this time frame, all dental opioid orders required review by the pharmacy department before prescribing. Outcomes related to prescribing rates and errors were compared between groups, which were defined by the level of integrated pharmacy services. Demographic and prescription-specific data (drug name, dose, quantity, directions, professional designation of individual entering order) and clinic appointment data were collected and analyzed with the use of descriptive and inferential statistics. A total of 102 opioids were prescribed to 89 patients; hydrocodone-acetaminophen combination products were the most frequently used. Opioid prescribing rates were 5 times greater when pharmacy services were not integrated (P dental practice. Copyright © 2017 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  16. Do final‐year medical students have sufficient prescribing competencies? A systematic literature review

    Science.gov (United States)

    Tichelaar, Jelle; Graaf, Sanne; Otten, René H. J.; Richir, Milan C.; van Agtmael, Michiel A.

    2018-01-01

    Aims Prescribing errors are an important cause of patient safety incidents and are frequently caused by junior doctors. This might be because the prescribing competence of final‐year medical students is poor as a result of inadequate clinical pharmacology and therapeutic (CPT) education. We reviewed the literature to investigate which prescribing competencies medical students should have acquired in order to prescribe safely and effectively, and whether these have been attained by the time they graduate. Methods PubMed, EMBASE and ERIC databases were searched from the earliest dates up to and including January 2017, using the terms ‘prescribing’, ‘competence’ and ‘medical students’ in combination. Articles describing or evaluating essential prescribing competencies of final‐year medical students were included. Results Twenty‐five articles describing, and 47 articles evaluating, the prescribing competencies of final‐year students were included. Although there seems to be some agreement, we found no clear consensus among CPT teachers on which prescribing competencies medical students should have when they graduate. Studies showed that students had a general lack of preparedness, self‐confidence, knowledge and skills, specifically regarding general and antimicrobial prescribing and pharmacovigilance. However, the results should be interpreted with caution, given the heterogeneity and methodological weaknesses of the included studies. Conclusions There is considerable evidence that final‐year students have insufficient competencies to prescribe safely and effectively, although there is a need for a greater consensus among CPT teachers on the required competencies. Changes in undergraduate CPT education are urgently required in order to improve the prescribing of future doctors. PMID:29315721

  17. Minimizing Experimental Error in Thinning Research

    Science.gov (United States)

    C. B. Briscoe

    1964-01-01

    Many diverse approaches have been proposed for prescribing and evaluating thinnings on an objective basis. None of the techniques proposed has been widely accepted. Indeed, none has been proven superior to the others, nor even widely applicable. There are at least two possible reasons for this: none of the techniques suggested is of any general utility and/or experimental error...

  18. Reliability of Measured Data for pH Sensor Arrays with Fault Diagnosis and Data Fusion Based on LabVIEW

    OpenAIRE

    Liao, Yi-Hung; Chou, Jung-Chuan; Lin, Chin-Yi

    2013-01-01

    Fault diagnosis (FD) and data fusion (DF) technologies implemented in the LabVIEW program were used for a ruthenium dioxide pH sensor array. The purpose of the fault diagnosis and data fusion technologies is to increase the reliability of measured data. Data fusion is a very useful statistical method used for sensor arrays in many fields. Fault diagnosis is used to avoid sensor faults and measurement errors in the electrochemical measurement system; therefore, in this study, we use fault diagn...

  19. What are incident reports telling us? A comparative study at two Australian hospitals of medication errors identified at audit, detected by staff and reported to an incident system.

    Science.gov (United States)

    Westbrook, Johanna I; Li, Ling; Lehnbom, Elin C; Baysari, Melissa T; Braithwaite, Jeffrey; Burke, Rosemary; Conn, Chris; Day, Richard O

    2015-02-01

    To (i) compare medication errors identified at audit and observation with medication incident reports; (ii) identify differences between two hospitals in incident report frequency and medication error rates; (iii) identify prescribing error detection rates by staff. Audit of 3291 patient records at two hospitals to identify prescribing errors and evidence of their detection by staff. Medication administration errors were identified from a direct observational study of 180 nurses administering 7451 medications. Severity of errors was classified. Those likely to lead to patient harm were categorized as 'clinically important'. Two major academic teaching hospitals in Sydney, Australia. Rates of medication errors identified from audit and from direct observation were compared with reported medication incident reports. A total of 12 567 prescribing errors were identified at audit. Of these, 1.2/1000 errors (95% CI: 0.6-1.8) had incident reports. Clinically important prescribing errors (n = 539) were detected by staff at a rate of 218.9/1000 (95% CI: 184.0-253.8), but only 13.0/1000 (95% CI: 3.4-22.5) were reported. 78.1% (n = 421) of clinically important prescribing errors were not detected. A total of 2043 drug administrations (27.4%; 95% CI: 26.4-28.4%) contained ≥ 1 error; none had an incident report. Hospital A had a higher frequency of incident reports than Hospital B, but a lower rate of errors at audit. Prescribing errors with the potential to cause harm frequently go undetected. Reported incidents do not reflect the profile of medication errors which occur in hospitals or their underlying rates. This demonstrates the inaccuracy of using incident frequency to compare patient risk or quality performance within or across hospitals. New approaches including data mining of electronic clinical information systems are required to support more effective medication error detection and mitigation. © The Author 2015. Published by Oxford University Press in association

  20. Fault morphology of the Iyo Fault, the Median Tectonic Line Active Fault System

    OpenAIRE

    後藤, 秀昭

    1996-01-01

    In this paper, we investigated the various fault features of the Iyo fault and depicted fault lines on a detailed topographic map. The results of this paper are summarized as follows: 1) Distinct evidence of right-lateral movement is continuously discernible along the Iyo fault. 2) Active fault traces are remarkably linear, suggesting that the angle of the fault plane is high. 3) The Iyo fault can be divided into four segments by jogs between left-stepping traces. 4) The mean slip rate is 1.3 ~ ...

  1. Analysis of large fault trees based on functional decomposition

    International Nuclear Information System (INIS)

    Contini, Sergio; Matuzas, Vaidas

    2011-01-01

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.

  2. Analysis of large fault trees based on functional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Contini, Sergio, E-mail: sergio.contini@jrc.i [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy); Matuzas, Vaidas [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy)

    2011-03-15

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.
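
    What a BDD evaluates can be written down directly as Shannon (pivotal) decomposition on the basic events, sketched below for a tiny assumed tree; a BDD makes this recursion efficient by sharing identical subtrees, and the truncation discussed above corresponds to cutting off branches whose contribution falls below a threshold. This is an illustration of the underlying computation, not the decomposition method of the paper.

```python
probs = {"a": 1.0e-3, "b": 2.0e-3, "c": 5.0e-4}   # basic event probabilities

def top(state):
    """Example coherent tree: TOP = (a OR b) AND (b OR c)."""
    return (state["a"] or state["b"]) and (state["b"] or state["c"])

def probability(tree, events, state=None):
    """Exact top-event probability by pivoting on one basic event at a time."""
    state = state or {}
    pending = [e for e in events if e not in state]
    if not pending:
        return float(tree(state))
    e = pending[0]
    p = probs[e]
    return (p * probability(tree, events, {**state, e: 1})
            + (1 - p) * probability(tree, events, {**state, e: 0}))

exact = probability(top, list(probs))
print(f"top event probability = {exact:.6e}")   # b OR (a AND c): about 2.0005e-03
```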

  3. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    He, Wei; Wang, Yueke; Xing, Kefei; Yang, Jianwei

    2016-01-01

    Aiming to estimate the single-event effect (SEE) soft error performance of complex electronic systems, a soft error propagation model based on a cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main paramet...

  4. Do final-year medical students have sufficient prescribing competencies? A systematic literature review.

    Science.gov (United States)

    Brinkman, David J; Tichelaar, Jelle; Graaf, Sanne; Otten, René H J; Richir, Milan C; van Agtmael, Michiel A

    2018-04-01

    Prescribing errors are an important cause of patient safety incidents and are frequently caused by junior doctors. This might be because the prescribing competence of final-year medical students is poor as a result of inadequate clinical pharmacology and therapeutics (CPT) education. We reviewed the literature to investigate which prescribing competencies medical students should have acquired in order to prescribe safely and effectively, and whether these have been attained by the time they graduate. PubMed, EMBASE and ERIC databases were searched from the earliest dates up to and including January 2017, using the terms 'prescribing', 'competence' and 'medical students' in combination. Articles describing or evaluating essential prescribing competencies of final-year medical students were included. Twenty-five articles describing, and 47 articles evaluating, the prescribing competencies of final-year students were included. Although there seems to be some agreement, we found no clear consensus among CPT teachers on which prescribing competencies medical students should have when they graduate. Studies showed that students had a general lack of preparedness, self-confidence, knowledge and skills, specifically regarding general and antimicrobial prescribing and pharmacovigilance. However, the results should be interpreted with caution, given the heterogeneity and methodological weaknesses of the included studies. There is considerable evidence that final-year students have insufficient competencies to prescribe safely and effectively, and there is a need for greater consensus among CPT teachers on the required competencies. Changes in undergraduate CPT education are urgently required in order to improve the prescribing of future doctors. © 2018 VU University Medical Centre. British Journal of Clinical Pharmacology published by John Wiley & Sons Ltd on behalf of British Pharmacological Society.

  5. Adaptive Fault Tolerance for Many-Core Based Space-Borne Computing

    Science.gov (United States)

    James, Mark; Springer, Paul; Zima, Hans

    2010-01-01

    This paper describes an approach to providing software fault tolerance for future deep-space robotic NASA missions, which will require a high degree of autonomy supported by an enhanced on-board computational capability. Such systems have become possible as a result of the emerging many-core technology, which is expected to offer 1024-core chips by 2015. We discuss the challenges and opportunities of this new technology, focusing on introspection-based adaptive fault tolerance that takes into account the specific requirements of applications, guided by a fault model. Introspection supports runtime monitoring of the program execution with the goal of identifying, locating, and analyzing errors. Fault tolerance assertions for the introspection system can be provided by the user, domain-specific knowledge, or via the results of static or dynamic program analysis. This work is part of an on-going project at the Jet Propulsion Laboratory in Pasadena, California.

  6. RedThreads: An Interface for Application-Level Fault Detection/Correction Through Adaptive Redundant Multithreading

    International Nuclear Information System (INIS)

    Hukerikar, Saurabh; Teranishi, Keita; Diniz, Pedro C.; Lucas, Robert F.

    2017-01-01

    In the presence of accelerated fault rates, which are projected to be the norm on future exascale systems, it will become increasingly difficult for high-performance computing (HPC) applications to accomplish useful computation. Due to the fault-oblivious nature of current HPC programming paradigms and execution environments, HPC applications are insufficiently equipped to deal with errors. We believe that HPC applications should be enabled with capabilities to actively search for and correct errors in their computations. The redundant multithreading (RMT) approach offers lightweight replicated execution streams of program instructions within the context of a single application process. However, the use of complete redundancy incurs significant overhead on application performance.
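
    As a rough illustration of the replicated-execution idea, and not the RedThreads interface itself (whose API the abstract does not give), this Python sketch runs the same pure computation in two replicated threads and raises on a mismatch; in a real RMT setting, divergence would stem from a transient error in one replica.

    ```python
    from concurrent.futures import ThreadPoolExecutor

    def compute(x):
        # A pure, deterministic computation standing in for application work.
        return sum(i * i for i in range(x))

    def redundant_call(fn, *args):
        """Run fn twice in replicated threads and compare the results."""
        with ThreadPoolExecutor(max_workers=2) as pool:
            futures = [pool.submit(fn, *args), pool.submit(fn, *args)]
            r1, r2 = (f.result() for f in futures)
        if r1 != r2:
            # A mismatch would indicate a transient error in one replica.
            raise RuntimeError("RMT mismatch: transient error detected")
        return r1

    print(redundant_call(compute, 10_000))
    ```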

  7. Efficient preparation of large-block-code ancilla states for fault-tolerant quantum computation

    Science.gov (United States)

    Zheng, Yi-Cong; Lai, Ching-Yi; Brun, Todd A.

    2018-03-01

    Fault-tolerant quantum computation (FTQC) schemes that use multiqubit large block codes can potentially reduce the resource overhead to a great extent. A major obstacle is the requirement for a large number of clean ancilla states of different types without correlated errors inside each block. These ancilla states are usually logical stabilizer states of the data-code blocks, which are generally difficult to prepare if the code size is large. Previously, we have proposed an ancilla distillation protocol for Calderbank-Shor-Steane (CSS) codes by classical error-correcting codes. It was assumed that the quantum gates in the distillation circuit were perfect; however, in reality, noisy quantum gates may introduce correlated errors that are not treatable by the protocol. In this paper, we show that additional postselection by another classical error-detecting code can be applied to remove almost all correlated errors. Consequently, the revised protocol is fully fault tolerant and capable of preparing a large set of stabilizer states sufficient for FTQC using large block codes. At the same time, the yield rate can be boosted from O(t^-2) to O(1) in practice for an [[n, k, d = 2t+1]] code.

  8. About problematic peculiarities of Fault Tolerance digital regulation organization

    Science.gov (United States)

    Rakov, V. I.; Zakharova, O. V.

    2018-05-01

    Solutions to the problems of assessing the working capacity of regulation loops, and of preventing situations in which it is violated, are offered in three directions. The first direction is the development of methods for representing the regulation loop as a union of diffuse components, together with algorithmic tools for building serviceability-assessment predicates, separately for the components and for the regulation loops as a whole. The second direction is the creation of methods of fault-tolerant redundancy in the complex assessment of the current values of the control actions, closure errors and regulated parameters. The third direction is the creation of methods for comparing the processes of change of the control actions, closure errors and regulated parameters with their standard models or their surroundings. This direction allows one to develop methods and algorithmic tools aimed at preventing the loss of serviceability and effectiveness not only of a separate digital regulator, but of the whole fault-tolerant regulation complex.

  9. Fault-tolerant clock synchronization validation methodology. [in computer systems

    Science.gov (United States)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.

  10. Evaluation of Drug Interactions and Prescription Errors of Poultry Veterinarians in North of Iran

    Directory of Open Access Journals (Sweden)

    Madadi MS

    2014-03-01

    Full Text Available Drug prescription errors are a common cause of adverse incidents and may lead to adverse outcomes, sometimes in subtle ways, being compounded by circumstances or further errors. Therefore, it is important that veterinarians prescribe the correct drug at the correct dose. Using two or more prescribed drugs may lead to drug interactions. Some drug interactions are very harmful and may pose a threat to the patient's health; this is called antagonism. In a survey study, the medication errors of 750 prescriptions, including dosage errors and drug interactions, were studied. The results indicated that 20.8% of prescriptions had at least one drug interaction. Most interactions were related to antibiotics (69.1%), Sulfonamides (46.7%), Methenamine (46.7%) and Florfenicol (20.2%). Analysis of dosage errors indicated that the total amount of drugs consumed by broilers is higher in summer than in winter. Based on these results, avoiding medication errors is important for the balanced prescribing of drugs, and regular education of veterinary practitioners at regular intervals is needed.

  11. Fault-tolerant sub-lithographic design with rollback recovery

    International Nuclear Information System (INIS)

    Naeimi, Helia; DeHon, Andre

    2008-01-01

    Shrinking feature sizes and energy levels coupled with high clock rates and decreasing node capacitance lead us into a regime where transient errors in logic cannot be ignored. Consequently, several recent studies have focused on feed-forward spatial redundancy techniques to combat these high transient fault rates. To complement these studies, we analyze fine-grained rollback techniques and show that they can offer lower spatial redundancy factors with no significant impact on system performance for fault rates up to one fault per device per ten million cycles of operation (P_f = 10^-7) in systems with 10^12 susceptible devices. Further, we concretely demonstrate these claims on nanowire-based programmable logic arrays. Despite expensive rollback buffers and general-purpose, conservative analysis, we show the area overhead factor of our technique is roughly an order of magnitude lower than a gate-level feed-forward redundancy scheme.

  12. Evaluation of drug administration errors in a teaching hospital

    OpenAIRE

    Berdot, Sarah; Sabatier, Brigitte; Gillaizeau, Florence; Caruba, Thibaut; Prognon, Patrice; Durieux, Pierre

    2012-01-01

    Background: Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods: Prospective study based on a disguised observation technique in four wards in a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs...

  13. Research on bearing fault diagnosis of large machinery based on mathematical morphology

    Science.gov (United States)

    Wang, Yu

    2018-04-01

    To study the automatic diagnosis of large-machinery faults based on the support vector machine, the four common faults of large machinery are classified and identified using a support vector machine. The extracted feature vectors are used as input, and the feature vectors are trained and identified by a multi-classification method. The optimal parameters of the support vector machine are found by trial and error and by cross-validation. The support vector machine is then compared with a BP neural network. The results show that the support vector machine is fast to train and highly accurate in classification, and is therefore well suited to research on fault diagnosis in large machinery.
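
    A minimal sketch of this workflow, using scikit-learn on synthetic stand-in features (the paper's vibration features and parameter grids are not given, so everything below is assumed for illustration):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.svm import SVC

    # Synthetic stand-in for extracted feature vectors of four fault classes.
    X, y = make_classification(n_samples=400, n_features=8, n_informative=6,
                               n_classes=4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    # Cross-validated grid search over (C, gamma), echoing the trial-and-error
    # plus cross-validation parameter search described above.
    grid = GridSearchCV(SVC(kernel="rbf"),
                        {"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0]}, cv=5)
    grid.fit(X_tr, y_tr)
    print("best params  :", grid.best_params_)
    print("test accuracy:", grid.score(X_te, y_te))
    ```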

  14. Fault tree model of human error based on error-forcing contexts

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Jang, Seung Cheol; Ha, Jae Joo

    2004-01-01

    In safety-critical systems such as nuclear power plants, safety-feature actuation is fully automated. In an emergency, the human operator can also play the role of a backup for the automated systems. That is, a failure of safety-feature-actuation signal generation implies the concurrent failure of the automated systems and of manual actuation. The human operator's manual actuation failure is largely affected by error-forcing contexts (EFC), of which the failures of sensors and automated systems are the most important. The sensors, the automated actuation system and the human operators are correlated in a complex manner, which makes it hard to develop a proper model. In this paper, we explain the condition-based human reliability assessment (CBHRA) method as a practical way of treating these complicated conditions. In this study, we apply the CBHRA method to the manual actuation of safety features such as reactor trip and safety injection in Korean Standard Nuclear Power Plants.

  15. Model-based fault diagnosis approach on external short circuit of lithium-ion battery used in electric vehicles

    International Nuclear Information System (INIS)

    Chen, Zeyu; Xiong, Rui; Tian, Jinpeng; Shang, Xiong; Lu, Jiahuan

    2016-01-01

    Highlights: • The characteristics of the ESC fault of lithium-ion batteries are investigated experimentally. • The proposed method to simulate the electrical behavior of the ESC fault is viable. • Ten parameters in the presented fault model were optimized using a DPSO algorithm. • A two-layer model-based fault diagnosis approach for battery ESC is proposed. • The effectiveness and robustness of the proposed algorithm have been evaluated. - Abstract: This study investigates the external short circuit (ESC) fault characteristics of lithium-ion batteries experimentally. An experimental platform is established and ESC tests are implemented on ten 18650-type lithium cells at different state-of-charges (SOCs). Based on the experimental results, several efforts have been made. (1) The ESC process can be divided into two periods, and the electrical and thermal behaviors within these two periods are analyzed. (2) A modified first-order RC model is employed to simulate the electrical behavior of the lithium cell during the ESC fault process. The model parameters are re-identified by a dynamic-neighborhood particle swarm optimization algorithm. (3) A two-layer model-based ESC fault diagnosis algorithm is proposed. The first layer conducts preliminary fault detection and the second layer gives a precise model-based diagnosis. Four new cells are short-circuited to evaluate the proposed algorithm. It shows that the ESC fault can be diagnosed within 5 s and that the error between the model and the measured data is less than 0.36 V. The effectiveness of the fault diagnosis algorithm is not sensitive to the precision of the battery SOC; the proposed algorithm can still make the correct diagnosis even if there is a 10% error in the SOC estimation.
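
    For orientation, a first-order RC (Thevenin) cell model of the kind re-identified in the paper can be stepped forward in time as below; all parameter values are illustrative assumptions, not the identified ones.

    ```python
    import numpy as np

    # First-order RC (Thevenin) cell model stepped with forward Euler.
    OCV, R0, R1, C1 = 3.7, 0.05, 0.02, 1000.0   # volts, ohms, ohms, farads
    dt, t_end = 0.1, 5.0                        # seconds
    i_load = 30.0                               # quasi-short-circuit current, amperes

    u1 = 0.0                                    # polarization voltage over the RC pair
    for _ in np.arange(0.0, t_end, dt):
        # du1/dt = -u1 / (R1 * C1) + i_load / C1
        u1 += dt * (-u1 / (R1 * C1) + i_load / C1)

    terminal_v = OCV - R0 * i_load - u1
    print(f"terminal voltage after {t_end:.0f} s: {terminal_v:.3f} V")
    ```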

  16. Working Conditions-Aware Fault Injection Technique

    OpenAIRE

    Alouani , Ihsen; Niar , Smail; Jemai , Mohamed; Kuradi , Fadi; Abid , Mohamed

    2012-01-01

    With new integration rates, the sensitivity of circuits to environmental and working conditions has increased dramatically. Thus, providing reliable, energy-efficient and error-resilient architectures has become one of the major problems to deal with. Evaluating the robustness and effectiveness of the proposed architectures is also an urgent need. In this paper, we present an extension of the SimpleScalar simulation tool with the ability to inject faults in a given...

  17. Fault tolerant control based on active fault diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik

    2005-01-01

    An active fault diagnosis (AFD) method will be considered in this paper in connection with a Fault Tolerant Control (FTC) architecture based on the YJBK parameterization of all stabilizing controllers. The architecture consists of a fault diagnosis (FD) part and a controller reconfiguration (CR......) part. The FTC architecture can be applied for additive faults, parametric faults, and for system structural changes. Only parametric faults will be considered in this paper. The main focus in this paper is on the use of the new approach of active fault diagnosis in connection with FTC. The active fault...... diagnosis approach is based on including an auxiliary input in the system. A fault signature matrix is introduced in connection with AFD, given as the transfer function from the auxiliary input to the residual output. This can be considered as a generalization of the passive fault diagnosis case, where...

  18. A γ-ray survey along Hanaore fault

    International Nuclear Information System (INIS)

    Mino, Kazuo

    1978-01-01

    A γ-ray survey was carried out with a scintillation survey meter in the O-hara area near the Hanaore Fault Zone in the northern part of Kyoto. The survey was repeated several times along the same observational line. The static pattern of γ-ray intensity was similar from run to run, with only small differences. Strong γ-ray intensity indicates the presence of a crushed-rock zone, and a huge fault such as Hanaore consists of a structure made up of these weak zones. Fortunately for us, a fairly large earthquake among the local microearthquakes occurred during the survey period. A survey had been made on January 6, 1978, just one day before the earthquake, and the observations before the earthquake did not show large variations of γ-ray intensity. However, five days after the earthquake, on January 11, the γ-ray intensity had decreased to a low value, beyond observational error, at almost all stations; it recovered about two weeks after the earthquake. A large fault such as Hanaore is ordinarily one of the boundaries of a crustal block, and the fault zone is more sensitive to geophysical activity in the crust. Continuous observation of γ-rays may therefore contribute to the study of correlations with earthquakes and to earthquake prediction. (author)

  19. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Wei He

    2016-01-01

    Full Text Available Aiming to estimate the single-event effect (SEE) soft error performance of complex electronic systems, a soft error propagation model based on a cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main parameters for the raw soft error vulnerability of the module and the coupling factors. Results indicate that the proposed method is feasible.
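
    The propagation mechanism can be pictured with a toy one-dimensional cellular automaton in which an erroneous cell corrupts each neighbour with a probability given by a coupling factor; the grid size, step count and coupling value below are assumptions, not the paper's calibrated parameters.

    ```python
    import random

    N_CELLS, STEPS, COUPLING = 20, 5, 0.3   # illustrative assumptions
    random.seed(1)

    state = [0] * N_CELLS        # 0 = clean, 1 = erroneous
    state[N_CELLS // 2] = 1      # a single-event effect strikes one cell

    for _ in range(STEPS):
        nxt = state[:]
        for i, s in enumerate(state):
            if s:                # an erroneous cell may corrupt each neighbour
                for j in (i - 1, i + 1):
                    if 0 <= j < N_CELLS and random.random() < COUPLING:
                        nxt[j] = 1
        state = nxt

    print("erroneous cells after propagation:", sum(state))
    ```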

  20. Support vector data description for fusion of multiple health indicators for enhancing gearbox fault diagnosis and prognosis

    International Nuclear Information System (INIS)

    Wang, Dong; Tse, Peter W; Guo, Wei; Miao, Qiang

    2011-01-01

    A novel method for enhancing gearbox fault diagnosis and prognosis is developed by fusing multiple health indicators through support vector data description. First, the Comblet transform is used to identify gear residual error signals from the raw signal. Second, based on the observation of the gear residual error signals, a total of 11 gear health indicators are identified and categorized into two types. The first and second types of indicators are for fault diagnosis and prognosis, respectively. The first type has six indicators, which are sensitive to impulsive signals triggered by anomalous impacts. The second type has five indicators, which are suitable for tracking the degradation of faults. Third, through support vector data description, the first six health indicators are fused into type one indicators for fault diagnosis, and the remaining five indicators are fused into type two indicators for fault prognosis. Finally, a Gaussian kernel is designed to enhance the performance of the type one and type two indicators through an optimal choice of the kernel width. The effectiveness of the proposed method is validated through experiments. The new method has been proven to be superior to methods that use unfused indicators individually.
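
    The fusion idea, collapsing several health indicators into a single anomaly score, can be sketched with scikit-learn's OneClassSVM, which is closely related to support vector data description; the data and parameters below are assumptions.

    ```python
    import numpy as np
    from sklearn.svm import OneClassSVM

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, size=(200, 6))   # 6 indicators, healthy baseline
    faulty = rng.normal(3.0, 1.0, size=(20, 6))     # indicators shifted by a fault

    # The RBF kernel's gamma plays the role of the Gaussian kernel width above.
    model = OneClassSVM(kernel="rbf", gamma=0.1, nu=0.05).fit(healthy)
    print("healthy flagged normal :", (model.predict(healthy) == 1).mean())
    print("faulty flagged abnormal:", (model.predict(faulty) == -1).mean())
    ```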

  1. To prescribe codeine or not to prescribe codeine?

    Science.gov (United States)

    Fleming, Marc L; Wanat, Matthew A

    2014-09-01

    A recently published study in Pediatrics by Kaiser et al. (2014; Epub April 21, DOI: 10.1542/peds.2013-3171) reported that on average, over the past decade, children aged 3 to 17 received approximately 700,000 prescriptions for codeine-containing products each year in association with emergency department (ED) visits. Although guidelines from the American Academy of Pediatrics issued warnings in 1997 and reaffirmed their concerns regarding the safety and effectiveness of codeine in 2006, it is still often prescribed for pain and cough associated with upper respiratory infection. With the impending rescheduling of hydrocodone combination products to Schedule II, physicians and mid-level prescribers may be compelled to prescribe codeine-containing products (e.g., with acetaminophen) due to the reduced administrative burden and the limits on Schedule II prescriptive authority for nurse practitioners and physician assistants in some states. This commentary expounds on the safety and effectiveness concerns of codeine, with a primary focus on patients in the ED setting.

  2. Narrowing the scope of failure prediction using targeted fault load injection

    Science.gov (United States)

    Jordan, Paul L.; Peterson, Gilbert L.; Lin, Alan C.; Mendenhall, Michael J.; Sellers, Andrew J.

    2018-05-01

    As society becomes more dependent upon computer systems to perform increasingly critical tasks, ensuring that those systems do not fail becomes increasingly important. Many organizations depend heavily on desktop computers for day-to-day operations. Unfortunately, the software that runs on these computers is written by humans and, as such, is still subject to human error and consequent failure. A natural solution is to use statistical machine learning to predict failure. However, since failure is still a relatively rare event, obtaining labelled training data to train these models is not a trivial task. This work presents new simulated fault-inducing loads that extend the focus of traditional fault injection techniques to predict failure in the Microsoft enterprise authentication service and Apache web server. These new fault loads were successful in creating failure conditions that were identifiable using statistical learning methods, with fewer irrelevant faults being created.
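
    The modelling step then amounts to fitting a classifier to labelled telemetry gathered under fault-inducing and nominal loads. A hedged sketch on synthetic stand-in data:

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Hypothetical telemetry windows: class 1 collected under fault-inducing
    # loads, class 0 under nominal operation; failures are the rare class.
    X, y = make_classification(n_samples=600, n_features=12, weights=[0.9, 0.1],
                               random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("cross-validated F1:", cross_val_score(clf, X, y, cv=5, scoring="f1").mean())
    ```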

  3. Sensor-driven, fault-tolerant control of a maintenance robot

    International Nuclear Information System (INIS)

    Moy, M.M.; Davidson, W.M.

    1987-01-01

    A robot system has been designed to do routine maintenance tasks on the Sandia Pulsed Reactor (SPR). The use of this Remote Maintenance Robot (RMR) is expected to significantly reduce the occupational radiation exposure of the reactor operators. Reactor safety was a key issue in the design of the robot maintenance system. Using sensors to detect error conditions and intelligent control to recover from them, the RMR is capable of responding to error conditions without creating a hazard. This paper describes the design and implementation of a sensor-driven, fault-tolerant control for the RMR. Recovery from errors is not automatic; it relies on operator assistance. However, a key feature of the error recovery procedure is that the operator is allowed to re-enter the programmed operation after the error has been corrected. The recovery procedure guarantees that the moving components of the system will not collide with the reactor during recovery.

  4. Experimental magic state distillation for fault-tolerant quantum computing.

    Science.gov (United States)

    Souza, Alexandre M; Zhang, Jingfu; Ryan, Colm A; Laflamme, Raymond

    2011-01-25

    Any physical quantum device for quantum information processing (QIP) is subject to errors in implementation. In order to be reliable and efficient, quantum computers will need error-correcting or error-avoiding methods. Fault-tolerance achieved through quantum error correction will be an integral part of quantum computers. Of the many methods that have been discovered to implement it, a highly successful approach has been to use transversal gates and specific initial states. A critical element for its implementation is the availability of high-fidelity initial states, such as |0〉 and the 'magic state'. Here, we report an experiment, performed in a nuclear magnetic resonance (NMR) quantum processor, showing sufficient quantum control to improve the fidelity of imperfect initial magic states by distilling five of them into one with higher fidelity.

  5. Fault detection and isolation in systems with parametric faults

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Hans Henrik

    1999-01-01

    The problem of fault detection and isolation of parametric faults is considered in this paper. Parametric faults are associated with internal parameter variations in the dynamical system. A fault detection and isolation method for parametric faults is formulated...

  6. Comparison of Cenozoic Faulting at the Savannah River Site to Fault Characteristics of the Atlantic Coast Fault Province: Implications for Fault Capability

    International Nuclear Information System (INIS)

    Cumbest, R.J.

    2000-01-01

    This study compares the faulting observed on the Savannah River Site and vicinity with the faults of the Atlantic Coastal Fault Province and concludes that both sets of faults exhibit the same general characteristics and are closely associated. Based on the strength of this association, it is concluded that the faults observed on the Savannah River Site and vicinity are in fact part of the Atlantic Coastal Fault Province. Inclusion in this group means that the historical precedent established by decades of previous studies on the seismic hazard potential of the Atlantic Coastal Fault Province is relevant to faulting at the Savannah River Site. That is, since these faults are genetically related, the conclusion of ''not capable'' reached in past evaluations applies. In addition, this study establishes a set of criteria by which individual faults may be evaluated in order to assess their inclusion in the Atlantic Coastal Fault Province and the related association of the ''not capable'' conclusion.

  7. Progressive retry for software error recovery in distributed systems

    Science.gov (United States)

    Wang, Yi-Min; Huang, Yennun; Fuchs, W. K.

    1993-01-01

    In this paper, we describe a method of execution retry for bypassing software errors based on checkpointing, rollback, message reordering and replaying. We demonstrate how rollback techniques, previously developed for transient hardware failure recovery, can also be used to recover from software faults by exploiting message reordering to bypass software errors. Our approach intentionally increases the degree of nondeterminism and the scope of rollback when a previous retry fails. Examples from our experience with telecommunications software systems illustrate the benefits of the scheme.

  8. Nature and frequency of medication errors in a geriatric ward: an Indonesian experience

    Directory of Open Access Journals (Sweden)

    Ernawati DK

    2014-06-01

    Full Text Available Desak Ketut Ernawati,1,2 Ya Ping Lee,2 Jeffery David Hughes2; 1Faculty of Medicine, Udayana University, Denpasar, Bali, Indonesia; 2School of Pharmacy and Curtin Health Innovation and Research Institute, Curtin University, Perth, WA, Australia. Purpose: To determine the nature and frequency of medication errors during medication delivery processes in a public teaching hospital geriatric ward in Bali, Indonesia. Methods: A 20-week prospective study on medication errors occurring during the medication delivery process was conducted in a geriatric ward in a public teaching hospital in Bali, Indonesia. Participants selected were inpatients aged more than 60 years. Patients were excluded if they had a malignancy, were undergoing surgery, or were receiving chemotherapy treatment. The occurrence of medication errors in prescribing, transcribing, dispensing, and administration was detected by the investigator providing in-hospital clinical pharmacy services. Results: Seven hundred and seventy drug orders and 7,662 drug doses were reviewed as part of the study. There were 1,563 medication errors detected among the 7,662 drug doses reviewed, representing an error rate of 20.4%. Administration errors were the most frequent medication errors identified (59%), followed by transcription errors (15%), dispensing errors (14%), and prescribing errors (7%). Errors in documentation were the most common form of administration errors. Of these errors, 2.4% were classified as potentially serious and 10.3% as potentially significant. Conclusion: Medication errors occurred in every stage of the medication delivery process, with administration errors being the most frequent. The majority of errors identified in the administration stage were related to documentation. Provision of in-hospital clinical pharmacy services could potentially play a significant role in detecting and preventing medication errors. Keywords: geriatric, medication errors, inpatients, medication delivery process

  9. Residual Generator Fuzzy Identification for Wind TurbineBenchmark Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    Silvio Simani

    2014-11-01

    Full Text Available In order to improve the availability of wind turbines, thus improving their efficiency, it is important to detect and isolate faults at their earliest occurrence. The main problem of model-based fault diagnosis applied to wind turbines is represented by the system complexity, as well as the reliability of the available measurements. In this work, a data-driven strategy relying on fuzzy models is presented, in order to build a fault diagnosis system. Fuzzy theory jointly with the Frisch identification scheme for errors-in-variables models is exploited here, since it allows one to approximate unknown models and manage uncertain data. Moreover, the use of fuzzy models, which are directly identified from the wind turbine measurements, allows the design of the fault detection and isolation module. It is worth noting that, sometimes, the nonlinearity of a wind turbine system could lead to quite complex analytic solutions. However, IF-THEN fuzzy rules provide a simpler solution, important when on-line implementations have to be considered. The wind turbine benchmark is used to validate the achieved performances of the suggested fault detection and isolation scheme. Finally, comparisons of the proposed methodology with respect to different fault diagnosis methods serve to highlight the features of the suggested solution.

  10. Prescribed Performance Fuzzy Adaptive Output-Feedback Control for Nonlinear Stochastic Systems

    Directory of Open Access Journals (Sweden)

    Lili Zhang

    2014-01-01

    Full Text Available A prescribed performance fuzzy adaptive output-feedback control approach is proposed for a class of single-input and single-output nonlinear stochastic systems with unmeasured states. Fuzzy logic systems are used to identify the unknown nonlinear system, and a fuzzy state observer is designed for estimating the unmeasured states. Based on the backstepping recursive design technique and the predefined performance technique, a new fuzzy adaptive output-feedback control method is developed. It is shown that all the signals of the resulting closed-loop system are bounded in probability and that the tracking error remains within an adjustable neighborhood of the origin with the prescribed performance bounds. A simulation example is provided to show the effectiveness of the proposed approach.
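
    The 'prescribed performance' in such designs is typically encoded by a decaying envelope that the tracking error must respect. A common formulation from the prescribed-performance literature is shown below; the paper's exact definitions may differ.

    ```latex
    % A standard prescribed-performance funnel: a decaying envelope rho(t)
    % and an asymmetric bound on the tracking error e(t).
    \begin{align*}
      \rho(t) &= (\rho_0 - \rho_\infty)\, e^{-\ell t} + \rho_\infty,\\
      -\delta\,\rho(t) &< e(t) < \rho(t), \qquad \forall\, t \ge 0,
    \end{align*}
    % with rho_0 > rho_inf > 0 the initial and steady-state bounds,
    % ell > 0 the decay rate, and 0 <= delta <= 1 shaping the undershoot.
    ```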

  11. Fault-tolerant linear optical quantum computing with small-amplitude coherent states.

    Science.gov (United States)

    Lund, A P; Ralph, T C; Haselgrove, H L

    2008-01-25

    Quantum computing using two coherent states as a qubit basis is a proposed alternative architecture with lower overheads, but has been questioned as a practical way of performing quantum computing due to the fragility of diagonal states with large coherent amplitudes. We show that, using error correction, only small amplitudes (α > 1.2) are required for fault-tolerant quantum computing. We study fault tolerance under the effects of small amplitudes and loss using a Monte Carlo simulation. The resources at the first encoding level are orders of magnitude lower than those of the best single-photon scheme.

  12. Rationalising prescribing

    DEFF Research Database (Denmark)

    Wadmann, Sarah; Bang, Lia Evi

    2015-01-01

    Initiatives in the name of 'rational pharmacotherapy' have been launched to alter what is seen as 'inappropriate' prescribing practices of physicians. Based on observations and interviews with 20 general practitioners (GPs) in 2009-2011, we explored how attempts to rationalise prescribing interac...

  13. Analog fault diagnosis by inverse problem technique

    KAUST Repository

    Ahmed, Rania F.

    2011-12-01

    A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thus to detect and diagnose a single fault in analog circuits. The validity of the algorithm is illustrated by applying it to a Sallen-Key second-order band-pass filter; the results show that the detection efficiency was 100% and that the maximum error in estimating the parameter values was 0.7%. This technique can be applied to any other linear circuit, and it can also be extended to non-linear circuits. © 2011 IEEE.
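
    The inverse-problem idea, estimating component values by fitting a model response to measurements and flagging deviations from nominal, can be sketched as follows. The first-order RC low-pass is a toy stand-in for the paper's Sallen-Key band-pass filter, and all values are assumptions.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def gain(params, w):
        # |H(jw)| of a first-order RC low-pass, a toy stand-in for the
        # Sallen-Key band-pass filter used in the paper.
        r, c = params
        return 1.0 / np.sqrt(1.0 + (w * r * c) ** 2)

    w = np.logspace(2, 5, 40)                 # measurement frequencies (rad/s)
    true = (1_000.0, 1e-7)                    # actual (possibly drifted) R and C
    rng = np.random.default_rng(1)
    measured = gain(true, w) + 0.002 * rng.standard_normal(len(w))

    nominal = (1_200.0, 0.8e-7)               # design values as the initial guess
    fit = least_squares(lambda p: gain(p, w) - measured, x0=nominal)
    print("estimated R, C:", fit.x)           # deviation from nominal flags a soft fault
    ```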

  14. Medication reconciliation and prescribing reviews by pharmacy technicians in a geriatric ward

    DEFF Research Database (Denmark)

    Buck, Thomas Croft; Gronkjaer, Louise Smed; Duckert, Marie-Louise

    2013-01-01

    OBJECTIVE: Incomplete medication histories obtained on hospital admission are responsible for more than 25% of prescribing errors. This study aimed to evaluate whether pharmacy technicians can assist hospital physicians' in obtaining medication histories by performing medication reconciliation an...... reconciliation and focused medication reviews. Further randomized, controlled studies including a larger number of patients are required to elucidate whether these observations are of significance and of importance for securing patient safety....... and prescribing reviews. A secondary aim was to evaluate whether the interventions made by pharmacy technicians could reduce the time spent by the nurses on administration of medications to the patients. METHODS: This observational study was conducted over a 7 week period in the geriatric ward at Odense...... University Hospital, Denmark. Two pharmacy technicians conducted medication reconciliation and prescribing reviews at the time of patients' admission to the ward. The reviews were conducted according to standard operating procedures developed by a clinical pharmacist and approved by the Head of the Geriatric...

  15. Preventing treatment errors in radiotherapy by identifying and evaluating near misses and actual incidents

    LENUS (Irish Health Repository)

    Holmberg, Ola

    2002-06-01

    When preparing radiation treatment, the prescribed dose and irradiation geometry must be translated into physical machine parameters. An error in the calculations or machine settings can negatively affect the intended treatment outcome. Analysing incidents originating in the treatment preparation chain makes it possible to find weak links and prevent treatment errors. The aim of this work is to study the effectiveness of a multilayered error prevention system by analysing both near misses and actual treatment errors.

  16. Fault Detection of Aircraft Cable via Spread Spectrum Time Domain Reflectometry

    Directory of Open Access Journals (Sweden)

    Xudong SHI

    2014-03-01

    Full Text Available Airplane cable fault detection based on TDR (time domain reflectometry) is easily affected by various noise signals, which cause the reflected signal to attenuate and distort heavily, so that the fault cannot be located. In order to solve these problems, a method of spread spectrum time domain reflectometry (SSTDR) is introduced in this paper, which takes advantage of the sharp peak of the correlation function. The test signal is generated from an ML sequence (MLS) modulated by a sine wave of the same frequency. Theoretically, the test signal has very high noise immunity, so it can be applied with excellent precision to fault location on aircraft cable. In this paper, the SSTDR method was first simulated in MATLAB. Then, an experimental setup based on LabVIEW was organized to detect and locate faults on aircraft cable. It has been demonstrated that SSTDR has high noise immunity and effectively reduces detection errors.
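
    The core of SSTDR, correlating a sine-modulated pseudo-noise probe against the line response and reading the fault distance from the echo-peak lag, can be sketched as follows; all waveform parameters are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    chips = rng.integers(0, 2, 127) * 2 - 1        # stand-in for an ML (PN) sequence
    samples_per_chip = 8
    carrier = np.sin(np.pi * np.arange(samples_per_chip) / samples_per_chip)
    probe = np.concatenate([c * carrier for c in chips])   # sine-modulated sequence

    delay, reflect = 300, 0.4                      # fault echo: lag in samples
    rx = np.zeros(len(probe) + delay)
    rx[:len(probe)] += probe                       # incident wave
    rx[delay:delay + len(probe)] += reflect * probe  # attenuated reflection
    rx += 0.2 * rng.standard_normal(len(rx))       # additive noise

    corr = np.correlate(rx, probe, mode="full")[len(probe) - 1:]  # lags >= 0
    main = int(np.argmax(corr))                    # incident-wave peak (lag ~ 0)
    guard = 2 * samples_per_chip                   # skip the main-lobe neighbourhood
    echo = main + guard + int(np.argmax(corr[main + guard:]))
    print("estimated echo lag (samples):", echo - main)
    ```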

  17. [Prospective assessment of medication errors in critically ill patients in a university hospital].

    Science.gov (United States)

    Salazar L, Nicole; Jirón A, Marcela; Escobar O, Leslie; Tobar, Eduardo; Romero, Carlos

    2011-11-01

    Critically ill patients are especially vulnerable to medication errors (ME) due to their severe clinical situation and the complexities of their management. To determine the frequency and characteristics of ME and identify shortcomings in the processes of medication management in an Intensive Care Unit. During a 3-month period, an observational, prospective and randomized study was carried out in the ICU of a university hospital. Every step of the patients' medication management (prescription, transcription, dispensation, preparation and administration) was evaluated by an external trained professional. The steps with the highest frequency of ME and the therapeutic groups involved were identified. Medication errors were classified according to the National Coordinating Council for Medication Error Reporting and Prevention. In 52 of 124 patients evaluated, 66 ME were found among 194 drugs prescribed. In 34% of prescribed drugs, there was at least 1 ME during their use. Half of the ME occurred during medication administration, mainly due to problems with infusion rates and schedule times. Antibacterial drugs had the highest rate of ME. We found a 34% rate of ME per drug prescribed, which is in concordance with international reports. The identification of the steps most prone to ME in the ICU will allow the implementation of an intervention program to improve the quality and security of medication management.

  18. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    Full Text Available This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the fault-to-residual dynamics from a reference model, using the ℋ∞-norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the solution of the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.

  19. From fault classification to fault tolerance for multi-agent systems

    CERN Document Server

    Potiron, Katia; Taillibert, Patrick

    2013-01-01

    Faults are a concern for Multi-Agent Systems (MAS) designers, especially if the MAS are built for industrial or military use because there must be some guarantee of dependability. Some fault classification exists for classical systems, and is used to define faults. When dependability is at stake, such fault classification may be used from the beginning of the system's conception to define fault classes and specify which types of faults are expected. Thus, one may want to use fault classification for MAS; however, From Fault Classification to Fault Tolerance for Multi-Agent Systems argues that

  20. Summary: beyond fault trees to fault graphs

    International Nuclear Information System (INIS)

    Alesso, H.P.; Prassinos, P.; Smith, C.F.

    1984-09-01

    Fault Graphs are the natural evolutionary step over a traditional fault-tree model. A Fault Graph is a failure-oriented directed graph with logic connectives that allows cycles. We intentionally construct the Fault Graph to trace the piping and instrumentation drawing (P and ID) of the system, but with logical AND and OR conditions added. Then we evaluate the Fault Graph with computer codes based on graph-theoretic methods. Fault Graph computer codes are based on graph concepts, such as path set (a set of nodes traveled on a path from one node to another) and reachability (the complete set of all possible paths between any two nodes). These codes are used to find the cut-sets (any minimal set of component failures that will fail the system) and to evaluate the system reliability.

  1. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.

    2012-06-02

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called ‘radical meshes’. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.

  2. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.; Pardo, David; Paszynski, Maciej; Collier, Nathan; Dalcin, Lisandro; Calo, Victor M.

    2012-01-01

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called ‘radical meshes’. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.

  3. Fault-related clay authigenesis along the Moab Fault: Implications for calculations of fault rock composition and mechanical and hydrologic fault zone properties

    Science.gov (United States)

    Solum, J.G.; Davatzes, N.C.; Lockner, D.A.

    2010-01-01

    The presence of clays in fault rocks influences both the mechanical and hydrologic properties of clay-bearing faults; understanding the origin and distribution of clays in fault rocks is therefore of great importance for defining fundamental properties of faults in the shallow crust. Field mapping shows that layers of clay gouge and shale smear are common along the Moab Fault, from exposures with throws ranging from 10 to ~1000 m. Elemental analyses of four locations along the Moab Fault show that fault rocks are enriched in clays at R191 and Bartlett Wash, but that this clay enrichment occurred at different times and was associated with different fluids. Fault rocks at Corral and Courthouse Canyons show little difference in elemental composition from the adjacent protolith, suggesting that the formation of fault rocks at those locations is governed by mechanical processes. Friction tests show that these authigenic clays result in fault zone weakening, and potentially influence both the style of failure along the fault (seismogenic vs. aseismic) and the amount of fluid loss associated with coseismic dilation. Scanning electron microscopy shows that authigenesis promotes the continuity of slip surfaces, thereby enhancing seal capacity. The occurrence of this authigenesis, and its influence on the sealing properties of faults, highlights the importance of determining the processes that control this phenomenon. © 2010 Elsevier Ltd.

  4. Guidelines for system modeling: fault tree analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines, and the related content, to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of the systems modeled by Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising the fault trees. Accordingly, these guidelines will be capable of guiding the FTA to the level of capability category II of the ASME PRA standard.
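
    The deductive AND/OR logic that such a fault tree encodes can be made concrete with a toy minimal-cut-set expansion (a MOCUS-style illustration with made-up event names, not the guideline's full procedure):

    ```python
    # Gates as nested tuples; minimal cut sets by top-down expansion.
    def cut_sets(node):
        kind = node[0]
        if kind == "basic":
            return [frozenset([node[1]])]
        children = [cut_sets(child) for child in node[1:]]
        if kind == "or":                     # union of the children's cut sets
            return [cs for ch in children for cs in ch]
        if kind == "and":                    # cross-product of the children's cut sets
            acc = [frozenset()]
            for ch in children:
                acc = [a | c for a in acc for c in ch]
            return acc
        raise ValueError(f"unknown gate: {kind}")

    def minimal(sets):
        # Keep only cut sets that contain no other cut set.
        return [s for s in sets if not any(t < s for t in sets)]

    top = ("or",
           ("and", ("basic", "pump_fails"), ("basic", "valve_stuck")),
           ("basic", "operator_error"))
    print(minimal(cut_sets(top)))
    ```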

  5. Guidelines for system modeling: fault tree analysis

    International Nuclear Information System (INIS)

    Lee, Yoon Hwan; Yang, Joon Eon; Kang, Dae Il; Hwang, Mee Jeong

    2004-07-01

    This document, the guidelines for system modeling related to Fault Tree Analysis (FTA), is intended to provide the analyst with guidelines for constructing fault trees at the level of capability category II of the ASME PRA standard. In particular, it provides the essential and basic guidelines, and the related content, to be used in support of revising the Ulchin 3 and 4 PSA model for the risk monitor within capability category II of the ASME PRA standard. Normally, the main objective of system analysis is to assess the reliability of the systems modeled by Event Tree Analysis (ETA). A variety of analytical techniques can be used for system analysis; however, the FTA method is used in this procedures guide. FTA is the method used for representing the failure logic of plant systems deductively using AND, OR or NOT gates. The fault tree should reflect all possible failure modes that may contribute to the system unavailability. This should include contributions due to mechanical failures of the components, Common Cause Failures (CCFs), human errors and outages for testing and maintenance. This document identifies and describes the definitions and general procedures of FTA and the essential and basic guidelines for revising the fault trees. Accordingly, these guidelines will be capable of guiding the FTA to the level of capability category II of the ASME PRA standard.

  6. Simultaneous Event-Triggered Fault Detection and Estimation for Stochastic Systems Subject to Deception Attacks.

    Science.gov (United States)

    Li, Yunji; Wu, QingE; Peng, Li

    2018-01-23

    In this paper, a synthesized design of a fault-detection filter and fault estimator is considered for a class of discrete-time stochastic systems in the framework of an event-triggered transmission scheme subject to unknown disturbances and deception attacks. A random variable obeying the Bernoulli distribution is employed to characterize the phenomena of randomly occurring deception attacks. To obtain a fault-detection residual that is sensitive only to faults while remaining robust to disturbances, a coordinate transformation approach is exploited. This approach can transform the considered system into two subsystems, with the unknown disturbances removed from one of them. The gain of the fault-detection filter is derived by minimizing an upper bound on the filter error covariance. Meanwhile, system faults can be reconstructed by the remote fault estimator. A recursive approach is developed to obtain the fault estimator gains as well as to guarantee the fault estimator performance. Furthermore, the corresponding event-triggered sensor data transmission scheme is also presented for improving the working life of the wireless sensor node when measurement information is aperiodically transmitted. Finally, a scaled version of an industrial system consisting of a local PC, a remote estimator and a wireless sensor node is used to experimentally evaluate the proposed theoretical results. In particular, a novel fault-alarming strategy is proposed so that the real-time capacity of fault detection is guaranteed when the event condition is triggered.
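
    The event-triggered transmission rule and the Bernoulli attack model can be pictured with a toy simulation; the threshold, attack probability and signal model below are assumptions for illustration only.

    ```python
    import random

    THRESH, P_ATTACK = 0.5, 0.1     # illustrative event threshold and attack rate
    random.seed(0)

    y, last_sent, sent = 0.0, 0.0, 0
    for k in range(200):
        y += random.gauss(0.0, 0.2)              # sensor output (random walk)
        attacked = random.random() < P_ATTACK    # Bernoulli deception attack
        y_tx = y + random.uniform(-2.0, 2.0) if attacked else y
        if abs(y_tx - last_sent) > THRESH:       # event-triggered transmission
            last_sent = y_tx
            sent += 1

    print(f"transmitted {sent} of 200 samples")
    ```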

  7. Fault Modeling and Testing for Analog Circuits in Complex Space Based on Supply Current and Output Voltage

    Directory of Open Access Journals (Sweden)

    Hongzhi Hu

    2015-01-01

    Full Text Available This paper deals with the modeling of faults in analog circuits. A two-dimensional (2D) fault model is first proposed based on a collaborative analysis of supply current and output voltage. This model is a family of circle loci on the complex plane, and it greatly simplifies the algorithms for test point selection and potential fault simulations, which are the primary difficulties in fault diagnosis of analog circuits. Furthermore, in order to reduce the difficulty of fault location, an improved fault model in three-dimensional (3D) complex space is proposed, which achieves a far better fault detection ratio (FDR) against measurement error and parametric tolerance. To address the problem of fault masking in both the 2D and 3D fault models, this paper proposes an effective design for testability (DFT) method. By adding redundant bypassing components in the circuit under test (CUT), this method achieves an excellent fault isolation ratio (FIR) in ambiguity group isolation. The efficacy of the proposed model and testing method is validated through the experimental results provided in this paper.

  8. High resolution t-LiDAR scanning of an active bedrock fault scarp for palaeostress analysis

    Science.gov (United States)

    Reicherter, Klaus; Wiatr, Thomas; Papanikolaou, Ioannis; Fernández-Steeger, Tomas

    2013-04-01

    Palaeostress analysis of an active bedrock normal fault scarp based on kinematic indicators is carried out by applying terrestrial laser scanning (t-LiDAR or TLS). For this purpose, three key elements are necessary for a defined region on the fault plane: (i) the orientation of the fault plane, (ii) the orientation of the slickenside lineation or other kinematic indicators and (iii) the sense of motion of the hanging wall. We present a workflow to obtain palaeostress data from point cloud data using terrestrial laser scanning. The entire case study was performed on a continuous limestone bedrock normal fault scarp on the island of Crete, Greece, at four different locations along the WNW-ESE striking Spili fault. At each location we collected data with a mobile terrestrial light detection and ranging system and validated the calculated three-dimensional palaeostress results by comparison with the conventional compass-based palaeostress method at three of the locations. Numerous kinematic indicators of normal faulting were discovered on the fault plane surface using t-LiDAR data and traditional methods, such as Riedel shears, extensional break-outs, polished corrugations and many more. However, the kinematic indicators are more or less unidirectional and almost purely dip-slip. No oblique reactivations have been observed, but towards the tips of the fault the inclination of the striations tends to point towards the centre of the fault. When comparing all reconstructed palaeostress data obtained from t-LiDAR to those obtained through manual compass measurements, the degree of fault plane orientation divergence is around ±005/03 for dip direction and dip. The degree of slickenside lineation variation is around ±003/03 for dip direction and dip. Therefore, the percentage threshold error of the individual vector angle at the different investigation sites is lower than 3% for the dip direction and dip of planes, and lower than 6% for strike. The maximum mean variation of the complete
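
    Element (i), the fault-plane orientation, comes from fitting a plane to the point cloud; converting the fitted unit normal into the dip direction/dip convention quoted above is straightforward, as in this sketch (the normal vector is a made-up example):

    ```python
    import numpy as np

    # Unit normal of a plane fitted to the scanned scarp, in (east, north, up)
    # coordinates; the value is a made-up example.
    n = np.array([0.3, -0.2, 0.93])
    n /= np.linalg.norm(n)
    if n[2] < 0:                    # orient the normal upward
        n = -n

    dip = np.degrees(np.arccos(n[2]))                     # angle from horizontal
    dip_dir = np.degrees(np.arctan2(n[0], n[1])) % 360.0  # azimuth of steepest descent
    print(f"dip direction/dip: {dip_dir:03.0f}/{dip:02.0f}")
    ```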

  9. Modeling of outpatient prescribing process in Iran: a gateway toward electronic prescribing system.

    Science.gov (United States)

    Ahmadi, Maryam; Samadbeik, Mahnaz; Sadoughi, Farahnaz

    2014-01-01

    Implementation of electronic prescribing system can overcome many problems of the paper prescribing system, and provide numerous opportunities of more effective and advantageous prescribing. Successful implementation of such a system requires complete and deep understanding of work content, human force, and workflow of paper prescribing. The current study was designed in order to model the current business process of outpatient prescribing in Iran and clarify different actions during this process. In order to describe the prescribing process and the system features in Iran, the methodology of business process modeling and analysis was used in the present study. The results of the process documentation were analyzed using a conceptual model of workflow elements and the technique of modeling "As-Is" business processes. Analysis of the current (as-is) prescribing process demonstrated that Iran stood at the first levels of sophistication in graduated levels of electronic prescribing, namely electronic prescription reference, and that there were problematic areas including bottlenecks, redundant and duplicated work, concentration of decision nodes, and communicative weaknesses among stakeholders of the process. Using information technology in some activities of medication prescription in Iran has not eliminated the dependence of the stakeholders on paper-based documents and prescriptions. Therefore, it is necessary to implement proper system programming in order to support change management and solve the problems in the existing prescribing process. To this end, a suitable basis should be provided for reorganization and improvement of the prescribing process for the future electronic systems.

  10. Universal Fault-Tolerant Gates on Concatenated Stabilizer Codes

    Directory of Open Access Journals (Sweden)

    Theodore J. Yoder

    2016-09-01

    Full Text Available It is an oft-cited fact that no quantum code can support a set of fault-tolerant logical gates that is both universal and transversal. This no-go theorem is generally responsible for the interest in alternative universality constructions including magic state distillation. Widely overlooked, however, is the possibility of nontransversal, yet still fault-tolerant, gates that work directly on small quantum codes. Here, we demonstrate precisely the existence of such gates. In particular, we show how the limits of nontransversality can be overcome by performing rounds of intermediate error correction to create logical gates on stabilizer codes that use no ancillas other than those required for syndrome measurement. Moreover, the logical gates we construct, the most prominent examples being Toffoli and controlled-controlled-Z, often complete universal gate sets on their codes. We detail such universal constructions for the smallest quantum codes, the 5-qubit and 7-qubit codes, and then proceed to generalize the approach. One remarkable result of this generalization is that any nondegenerate stabilizer code with a complete set of fault-tolerant single-qubit Clifford gates has a universal set of fault-tolerant gates. Another is the interaction of logical qubits across different stabilizer codes, which, for instance, implies a broadly applicable method of code switching.

  11. Model-based monitoring of rotors with multiple coexisting faults

    International Nuclear Information System (INIS)

    Rossner, Markus

    2015-01-01

    Monitoring systems are applied to many rotors, but only a few can separate coexisting faults and quantify them. This research project solves the problem using a combination of signal-based and model-based monitoring. The signal-based part performs a pre-selection of possible faults; these faults are then further separated with model-based methods. The approach is demonstrated for unbalance, bow, stator-fixed misalignment, rotor-fixed misalignment and roundness errors. For the model-based part, unambiguous fault definitions and models are set up. A Ritz approach reduces the model order and therefore speeds up the diagnosis. Identification algorithms are developed for the different rotor faults. To this end, reliable damage indicators and suitable sub-steps of the diagnosis have to be defined. For several monitoring problems, measuring both deflection and bearing force is very useful. The monitoring system is verified by experiments on an academic rotor test rig. The interpretation of the measurements requires detailed knowledge of the dynamics of the rotor. Due to the model-based approach, the system can separate faults with similar signal patterns and identify bow and roundness errors online at operating speed. [de

  12. Nonuniform code concatenation for universal fault-tolerant quantum computing

    Science.gov (United States)

    Nikahd, Eesa; Sedighi, Mehdi; Saheb Zamani, Morteza

    2017-09-01

    Using transversal gates is a straightforward and efficient technique for fault-tolerant quantum computing. Since transversal gates alone cannot be computationally universal, they must be combined with other approaches such as magic state distillation, code switching, or code concatenation to achieve universality. In this paper we propose an alternative approach for universal fault-tolerant quantum computing, mainly based on the code concatenation approach proposed in [T. Jochym-O'Connor and R. Laflamme, Phys. Rev. Lett. 112, 010505 (2014), 10.1103/PhysRevLett.112.010505], but in a nonuniform fashion. The proposed approach is described based on nonuniform concatenation of the 7-qubit Steane code with the 15-qubit Reed-Muller code, as well as the 5-qubit code with the 15-qubit Reed-Muller code, which lead to 49-qubit and 47-qubit codes, respectively. These codes can correct an arbitrary single physical error while retaining the ability to perform a universal set of fault-tolerant gates, without using magic state distillation.

  13. The use of automatic programming techniques for fault tolerant computing systems

    Science.gov (United States)

    Wild, C.

    1985-01-01

    It is conjectured that the production of software for ultra-reliable computing systems such as required by Space Station, aircraft, nuclear power plants and the like will require a high degree of automation as well as fault tolerance. In this paper, the relationship between automatic programming techniques and fault tolerant computing systems is explored. Initial efforts in the automatic synthesis of code from assertions to be used for error detection as well as the automatic generation of assertions and test cases from abstract data type specifications is outlined. Speculation on the ability to generate truly diverse designs capable of recovery from errors by exploring alternate paths in the program synthesis tree is discussed. Some initial thoughts on the use of knowledge based systems for the global detection of abnormal behavior using expectations and the goal-directed reconfiguration of resources to meet critical mission objectives are given. One of the sources of information for these systems would be the knowledge captured during the automatic programming process.

  14. Fault detection for hydraulic pump based on chaotic parallel RBF network

    Directory of Open Access Journals (Sweden)

    Ma Ning

    2011-01-01

    Full Text Available In this article, a parallel radial basis function network in conjunction with chaos theory (CPRBF) is presented and applied to practical fault detection for a hydraulic pump, which is a critical component in aircraft. The CPRBF network consists of a number of radial basis function (RBF) subnets connected in parallel. The number of input nodes for each RBF subnet is determined by a different embedding dimension based on chaotic phase-space reconstruction. The output of the CPRBF is a weighted sum of all RBF subnets. The network is first trained using a dataset from the normal, fault-free state, and then a residual error generator is designed to detect failures based on the trained CPRBF network: failure detection is achieved by analysis of the residual error. Finally, two case studies are introduced to compare the proposed CPRBF network with traditional RBF networks in terms of prediction and detection accuracy.
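
    As an illustration of the residual-error idea, the sketch below (hypothetical Python, not the authors' CPRBF implementation; all names and thresholds are assumptions) trains a single RBF subnet on lag-embedded fault-free data with scikit-learn helpers and derives a detection threshold from the normal-state residuals. The parallel network would repeat this for several embedding dimensions and combine the subnet outputs with learned weights.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import Ridge

        def embed(signal, m):
            """Lag-embed a 1-D signal: row i is (s[i], ..., s[i+m-1]), target s[i+m]."""
            X = np.stack([signal[i:len(signal) - m + i] for i in range(m)], axis=1)
            return X, signal[m:]

        def rbf_features(X, centers, width):
            # Gaussian activations of each sample with respect to each center.
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            return np.exp(-(d / width) ** 2)

        def train_subnet(signal, m, n_centers=20):
            """Train one RBF subnet on fault-free data with embedding dimension m."""
            X, y = embed(signal, m)
            centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
            width = np.mean(np.linalg.norm(X[:, None] - centers[None], axis=2))
            model = Ridge(alpha=1e-3).fit(rbf_features(X, centers, width), y)
            resid = y - model.predict(rbf_features(X, centers, width))
            threshold = np.abs(resid).mean() + 3 * np.abs(resid).std()  # assumed 3-sigma rule
            return centers, width, model, threshold

    At run time, a residual exceeding the stored threshold would flag a fault, mirroring the residual error generator described above.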

  15. Feature Selection and Fault Classification of Reciprocating Compressors using a Genetic Algorithm and a Probabilistic Neural Network

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, M; Gu, F; Ball, A, E-mail: M.Ahmed@hud.ac.uk [Diagnostic Engineering Research Group, University of Huddersfield, HD1 3DH (United Kingdom)

    2011-07-19

    Reciprocating compressors are widely used in industry for various purposes and faults occurring in them can degrade their performance, consume additional energy and even cause severe damage to the machine. Vibration monitoring techniques are often used for early fault detection and diagnosis, but it is difficult to prescribe a given set of effective diagnostic features because of the wide variety of operating conditions and the complexity of the vibration signals which originate from the many different vibrating and impact sources. This paper studies the use of genetic algorithms (GAs) and neural networks (NNs) to select effective diagnostic features for the fault diagnosis of a reciprocating compressor. A large number of common features are calculated from the time and frequency domains and envelope analysis. Applying GAs and NNs to these features found that envelope analysis has the most potential for differentiating three common faults: valve leakage, inter-cooler leakage and a loose drive belt. Simultaneously, the spread parameter of the probabilistic NN was also optimised. The selected subsets of features were examined based on vibration source characteristics. The approach developed and the trained NN are confirmed as possessing general characteristics for fault detection and diagnosis.

  16. Feature Selection and Fault Classification of Reciprocating Compressors using a Genetic Algorithm and a Probabilistic Neural Network

    International Nuclear Information System (INIS)

    Ahmed, M; Gu, F; Ball, A

    2011-01-01

    Reciprocating compressors are widely used in industry for various purposes and faults occurring in them can degrade their performance, consume additional energy and even cause severe damage to the machine. Vibration monitoring techniques are often used for early fault detection and diagnosis, but it is difficult to prescribe a given set of effective diagnostic features because of the wide variety of operating conditions and the complexity of the vibration signals which originate from the many different vibrating and impact sources. This paper studies the use of genetic algorithms (GAs) and neural networks (NNs) to select effective diagnostic features for the fault diagnosis of a reciprocating compressor. A large number of common features are calculated from the time and frequency domains and envelope analysis. Applying GAs and NNs to these features found that envelope analysis has the most potential for differentiating three common faults: valve leakage, inter-cooler leakage and a loose drive belt. Simultaneously, the spread parameter of the probabilistic NN was also optimised. The selected subsets of features were examined based on vibration source characteristics. The approach developed and the trained NN are confirmed as possessing general characteristics for fault detection and diagnosis.

  17. ASCS online fault detection and isolation based on an improved MPCA

    Science.gov (United States)

    Peng, Jianxin; Liu, Haiou; Hu, Yuhui; Xi, Junqiang; Chen, Huiyan

    2014-09-01

    Multi-way principal component analysis (MPCA) has received considerable attention and been widely used in process monitoring. A traditional MPCA algorithm unfolds multiple batches of historical data into a two-dimensional matrix and cuts the matrix along the time axis to form subspaces. However, low subspace efficiency and difficult fault isolation are common disadvantages of the principal component model. This paper presents a new subspace construction method based on a kernel density estimation function that can effectively reduce the storage required for the subspace information. The MPCA model and the knowledge base are built on the new subspace. Fault detection and isolation with the squared prediction error (SPE) statistic and the Hotelling T2 statistic are then realized in process monitoring. When a fault occurs, fault isolation based on the SPE statistic is achieved by residual contribution analysis of the different variables. For fault isolation of a subspace based on the T2 statistic, the relationship between the statistic indicator and the state variables is constructed, and constraint conditions are presented to check the validity of the fault isolation. To improve the robustness of fault isolation to unexpected disturbances, a statistical method is adopted to relate single subspaces to multiple subspaces and so increase the rate of correct fault isolation. Finally, fault detection and isolation based on the improved MPCA is used to monitor the automatic shift control system (ASCS) to prove the correctness and effectiveness of the algorithm. The research proposes a new subspace construction method that reduces the required storage capacity, improves the robustness of the principal component model, and establishes the relationship between the state variables and the fault detection indicators for fault isolation.
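
    For readers unfamiliar with the two monitoring statistics, the following minimal sketch (a generic PCA monitor in Python, not the paper's improved MPCA; names and thresholds are assumptions) shows how the SPE and Hotelling T2 values are computed for a new sample.

        import numpy as np

        def fit_pca_monitor(X, n_pc):
            """Fit a PCA monitor on normal-operation data X of shape (N, m)."""
            mu, sd = X.mean(0), X.std(0)
            Z = (X - mu) / sd
            _, S, Vt = np.linalg.svd(Z, full_matrices=False)
            P = Vt[:n_pc].T                        # loadings of the retained PCs
            lam = (S[:n_pc] ** 2) / (len(X) - 1)   # variances of the retained PCs
            return mu, sd, P, lam

        def monitor(x, mu, sd, P, lam):
            """Return (SPE, T2) for a single new sample x."""
            z = (x - mu) / sd
            t = P.T @ z                  # scores in the PC subspace
            z_hat = P @ t                # projection back onto the PC subspace
            spe = np.sum((z - z_hat) ** 2)   # squared prediction error
            t2 = np.sum(t ** 2 / lam)        # Hotelling T2
            return spe, t2

    SPE-based isolation then follows the paper's logic: the variables with the largest residual contributions (z - z_hat)**2 are the most likely fault sources.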

  18. A second study of the prediction of cognitive errors using the 'CREAM' technique

    International Nuclear Information System (INIS)

    Collier, Steve; Andresen, Gisle

    2000-03-01

    Some human errors, such as errors of commission and knowledge-based errors, are not adequately modelled in probabilistic safety assessments. Even qualitative methods for handling these sorts of errors are comparatively underdeveloped. The 'Cognitive Reliability and Error Analysis Method' (CREAM) was recently developed for prediction of cognitive error modes. It has not yet been comprehensively established how reliable, valid and generally useful it could be to researchers and practitioners. A previous study of CREAM at Halden was promising, showing a relationship between errors predicted in advance and those that actually occurred in simulated fault scenarios. The present study continues this work. CREAM was used to make predictions of cognitive error modes throughout two rather difficult fault scenarios. Predictions were made of the most likely cognitive error mode, were one to occur at all, at several points throughout the expected scenarios, based upon the scenario design and description. Each scenario was then run 15 times with different operators. Error modes occurring during simulations were later scored using the task description for the scenario, videotapes of operator actions, eye-track recording, operators' verbal protocols and an expert's concurrent commentary. The scoring team had no previous substantive knowledge of the experiment or the techniques used, so as to provide a more stringent test of the data and knowledge needed for scoring. The scored error modes were then compared with the CREAM predictions to assess the degree of agreement. Some cognitive error modes were predicted successfully, but the results were generally not as encouraging as those of the previous study. Several problems were found with both the CREAM technique and the data needed to complete the analysis. It was felt that further development is needed before this kind of analysis can be reliable and valid, either in a research setting or as a practitioner's tool in a safety assessment.

  19. Understanding the determinants of antimicrobial prescribing within hospitals: the role of "prescribing etiquette".

    Science.gov (United States)

    Charani, E; Castro-Sanchez, E; Sevdalis, N; Kyratsis, Y; Drumright, L; Shah, N; Holmes, A

    2013-07-01

    There is limited knowledge of the key determinants of antimicrobial prescribing behavior (APB) in hospitals. An understanding of these determinants is required for the successful design, adoption, and implementation of quality improvement interventions in antimicrobial stewardship programs. Qualitative semistructured interviews were conducted with doctors (n = 10), pharmacists (n = 10), and nurses and midwives (n = 19) in 4 hospitals in London. Interviews were conducted until thematic saturation was reached. Thematic analysis was applied to the data to identify the key determinants of antimicrobial prescribing behaviors. The APB of healthcare professionals is governed by a set of cultural rules. Antimicrobial prescribing is performed in an environment where the behavior of clinical leaders or seniors influences practice of junior doctors. Senior doctors consider themselves exempt from following policy and practice within a culture of perceived autonomous decision making that relies more on personal knowledge and experience than formal policy. Prescribers identify with the clinical groups in which they work and adjust their APB according to the prevailing practice within these groups. A culture of "noninterference" in the antimicrobial prescribing practice of peers prevents intervention into prescribing of colleagues. These sets of cultural rules demonstrate the existence of a "prescribing etiquette," which dominates the APB of healthcare professionals. Prescribing etiquette creates an environment in which professional hierarchy and clinical groups act as key determinants of APB. To influence the antimicrobial prescribing of individual healthcare professionals, interventions need to address prescribing etiquette and use clinical leadership within existing clinical groups to influence practice.

  20. Indirect adaptive fuzzy fault-tolerant tracking control for MIMO nonlinear systems with actuator and sensor failures.

    Science.gov (United States)

    Bounemeur, Abdelhamid; Chemachema, Mohamed; Essounbouli, Najib

    2018-05-10

    In this paper, an active fuzzy fault tolerant tracking control (AFFTTC) scheme is developed for a class of multi-input multi-output (MIMO) unknown nonlinear systems in the presence of unknown actuator faults, sensor failures and external disturbance. The developed control scheme deals with four kinds of faults for both sensors and actuators. The bias, drift, and loss of accuracy additive faults are considered along with the loss of effectiveness multiplicative fault. A fuzzy adaptive controller based on back-stepping design is developed to deal with actuator failures and unknown system dynamics. However, an additional robust control term is added to deal with sensor faults, approximation errors, and external disturbances. Lyapunov theory is used to prove the stability of the closed loop system. Numerical simulations on a quadrotor are presented to show the effectiveness of the proposed approach. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  1. Technology and medication errors: impact in nursing homes.

    Science.gov (United States)

    Baril, Chantal; Gascon, Viviane; St-Pierre, Liette; Lagacé, Denis

    2014-01-01

    The purpose of this paper is to study the impact of a medication distribution technology (MDT) on medication errors reported in public nursing homes in Québec Province. The work was carried out in six nursing homes (800 patients). Medication error data were collected from nursing staff through a voluntary reporting process before and after MDT was implemented. The errors were analysed by total number of errors, medication error type, severity, and patient consequences. A statistical analysis verified whether there was a significant difference between the variables before and after introducing MDT. The results show that the MDT detected medication errors. The authors' analysis also indicates that errors are detected more rapidly, resulting in less severe consequences for patients. MDT is a step towards safer and more efficient medication processes. Our findings should convince healthcare administrators to implement technologies such as electronic prescribing or bar-code medication administration systems to improve medication processes and to provide better healthcare to patients. Few studies have been carried out in long-term healthcare facilities such as nursing homes. The authors' study extends what is known about MDT's impact on medication errors in nursing homes.

  2. Fault tolerant control for uncertain systems with parametric faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2006-01-01

    A fault tolerant control (FTC) architecture based on active fault diagnosis (AFD) and the YJBK (Youla, Jabr, Bongiorno and Kucera) parameterization is applied in this paper. Based on the FTC architecture, fault tolerant control of uncertain systems with slowly varying parametric faults...... is investigated. Conditions are given for closed-loop stability in case of false alarms or missing fault detection/isolation....

  3. LAMPF first-fault identifier for fast transient faults

    International Nuclear Information System (INIS)

    Swanson, A.R.; Hill, R.E.

    1979-01-01

    The LAMPF accelerator is presently producing 800-MeV proton beams at 0.5 mA average current. Machine protection for such a high-intensity accelerator requires a fast shutdown mechanism, which can turn off the beam within a few microseconds of the occurrence of a machine fault. The resulting beam unloading transients cause the rf systems to exceed control loop tolerances and consequently generate multiple fault indications for identification by the control computer. The problem is to isolate the primary fault, or cause of beam shutdown, while disregarding as many as 50 secondary fault indications that occur as a result of the shutdown. The LAMPF First-Fault Identifier (FFI) for fast transient faults is operational and has proven capable of first-fault identification. The FFI design utilized features of the Fast Protection System that were previously implemented for beam chopping and rf power conservation. No software changes were required.
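
    The first-fault logic can be illustrated with a toy sketch (not the LAMPF implementation; the shutdown window and channel names are made up): the earliest timestamped indication is taken as primary, and indications that follow within the beam-shutdown window are treated as secondary.

        def first_fault(events, shutdown_window_us=50.0):
            """events: list of (timestamp_us, channel) fault indications."""
            events = sorted(events)
            t0, primary = events[0]
            # Indications arriving shortly after the first one are assumed to be
            # consequences of the beam shutdown, not independent faults.
            secondary = [ch for t, ch in events[1:] if t - t0 <= shutdown_window_us]
            return primary, secondary

        primary, secondary = first_fault([(12.0, "rf-05"), (12.4, "rf-06"), (13.1, "rf-11")])
        print(primary, secondary)   # rf-05 is primary; the rest are secondary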

  4. Fault-tolerant quantum computation for local non-Markovian noise

    International Nuclear Information System (INIS)

    Terhal, Barbara M.; Burkard, Guido

    2005-01-01

    We derive a threshold result for fault-tolerant quantum computation for local non-Markovian noise models. The role of the error amplitude in our analysis is played by the product of the elementary gate time t_0 and the spectral width of the interaction Hamiltonian between system and bath. We discuss extensions of our model and the applicability of our analysis.

  5. Why the 2002 Denali fault rupture propagated onto the Totschunda fault: implications for fault branching and seismic hazards

    Science.gov (United States)

    Schwartz, David P.; Haeussler, Peter J.; Seitz, Gordon G.; Dawson, Timothy E.

    2012-01-01

    The propagation of the rupture of the Mw7.9 Denali fault earthquake from the central Denali fault onto the Totschunda fault has provided a basis for dynamic models of fault branching in which the angle of the regional or local prestress relative to the orientation of the main fault and branch plays a principal role in determining which fault branch is taken. GeoEarthScope LiDAR and paleoseismic data allow us to map the structure of the Denali-Totschunda fault intersection and evaluate controls of fault branching from a geological perspective. LiDAR data reveal the Denali-Totschunda fault intersection is structurally simple with the two faults directly connected. At the branch point, 227.2 km east of the 2002 epicenter, the 2002 rupture diverges southeast to become the Totschunda fault. We use paleoseismic data to propose that differences in the accumulated strain on each fault segment, which express differences in the elapsed time since the most recent event, was one important control of the branching direction. We suggest that data on event history, slip rate, paleo offsets, fault geometry and structure, and connectivity, especially on high slip rate-short recurrence interval faults, can be used to assess the likelihood of branching and its direction. Analysis of the Denali-Totschunda fault intersection has implications for evaluating the potential for a rupture to propagate across other types of fault intersections and for characterizing sources of future large earthquakes.

  6. Estimation of reliability on digital plant protection system in nuclear power plants using fault simulation with self-checking

    International Nuclear Information System (INIS)

    Lee, Jun Seok; Kim, Suk Joon; Seong, Poong Hyun

    2004-01-01

    Safety-critical digital systems in nuclear power plants require high design reliability. Reliable software design and accurate prediction methods for system reliability are therefore important problems. In reliability analysis, the error detection coverage of the system is one of the crucial factors; however, it is difficult to evaluate the error detection coverage of a digital instrumentation and control system in a nuclear power plant due to the complexity of the system. To evaluate the error detection coverage with high efficiency and low cost, simulation-based fault injection with self-checking is needed for digital instrumentation and control systems in nuclear power plants. The target system is the local coincidence logic in the digital plant protection system, and a simplified software model of this target system is used in this work. A C++-based hardware description of a microcomputer simulator system is used to evaluate the error detection coverage of the system. From the simulation results, it is possible to estimate the error detection coverage of the digital plant protection system in nuclear power plants using the simulation-based fault injection method with self-checking. (author)

  7. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent, and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP) and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports, including hematologic toxicity, submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product: the strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 (47%) of the errors, followed by dispensing errors, which accounted for 13 (29%). Seven reports (16%) were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%) to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  8. An Overview of Optical Network Bandwidth and Fault Management

    Directory of Open Access Journals (Sweden)

    J.A. Zubairi

    2010-09-01

    Full Text Available This paper discusses optical network management issues and identifies potential areas for focused research. A general outline of the main components in optical network management is given and specific problems in the GMPLS-based model are explained. Later, protection and restoration issues are discussed in the broader context of fault management, and the tools developed for fault detection are listed. Optical networks need efficient and reliable protection schemes that restore communications quickly on the occurrence of faults without causing failure of real-time applications using the network. A holistic approach is required that provides mechanisms for fault detection, rapid restoration and reversion in case of fault resolution. Since the role of SDH/SONET is diminishing, modern optical networks are moving towards an IP-centric model in which high-performance IP-MPLS routers manage a core intelligent network of IP over WDM. Fault management schemes are developed for both the IP layer and the WDM layer. Faults can be detected and repaired locally and also through a centralized network controller. A hybrid approach works best in detecting faults, where the domain controller verifies the established LSPs in addition to the link tests at the node level. On detecting a fault, rapid restoration can perform localized routing of traffic away from the affected port and link. The traffic may be directed to pre-assigned backup paths that are established as shared or dedicated resources. We examine the protection issues in detail, including the choice of layer for protection, implementing protection or restoration, backup path routing, backup resource efficiency, subpath protection, QoS traffic survival, and multilayer protection triggers and alarm propagation. The complete protection cycle is described and the mechanisms incorporated into RSVP-TE and other protocols for detecting and recording path errors are outlined. In addition, MPLS testbed

  9. Incipient Fault Detection and Isolation of Field Devices in Nuclear Power Systems Using Principal Component Analysis

    International Nuclear Information System (INIS)

    Kaistha, Nitin; Upadhyaya, Belle R.

    2001-01-01

    An integrated method for the detection and isolation of incipient faults in common field devices, such as sensors and actuators, using plant operational data is presented. The approach is based on the premise that data for normal operation lie on a surface and abnormal situations lead to deviations from the surface in a particular way. Statistically significant deviations from the surface result in the detection of faults, and the characteristic directions of deviations are used for isolation of one or more faults from the set of typical faults. Principal component analysis (PCA), a multivariate data-driven technique, is used to capture the relationships in the data and fit a hyperplane to the data. The fault direction for each of the scenarios is obtained using the singular value decomposition on the state and control function prediction errors, and fault isolation is then accomplished from projections on the fault directions. This approach is demonstrated for a simulated pressurized water reactor steam generator system and for a laboratory process control system under single device fault conditions. Enhanced fault isolation capability is also illustrated by incorporating realistic nonlinear terms in the PCA data matrix
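
    The fault-direction idea can be illustrated in a few lines of Python (an illustrative reconstruction under assumed names, not the authors' code): extract a characteristic direction from the residuals recorded under each known fault via singular value decomposition, then isolate a new fault by projecting its residual onto those directions.

        import numpy as np

        def fault_direction(residuals):
            """residuals: (N, m) state/control prediction errors under one fault type."""
            _, _, vt = np.linalg.svd(residuals - residuals.mean(0), full_matrices=False)
            return vt[0]   # dominant singular direction characterizes the fault

        def isolate(residual, directions):
            """Pick the known fault whose direction best matches the observed residual."""
            scores = {name: abs(residual @ d) / np.linalg.norm(residual)
                      for name, d in directions.items()}
            return max(scores, key=scores.get)

    A statistically significant deviation from the PCA hyperplane triggers detection first; isolation via the projections above is only meaningful once that detection threshold has been crossed.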

  10. Support vector machine based fault classification and location of a long transmission line

    Directory of Open Access Journals (Sweden)

    Papia Ray

    2016-09-01

    Full Text Available This paper investigates a support vector machine based fault type and distance estimation scheme for a long transmission line. The proposed technique uses post-fault single-cycle current waveforms, and pre-processing of the samples is done by the wavelet packet transform. Energy and entropy are obtained from the decomposed coefficients and a feature matrix is prepared. The redundant features are then removed from the matrix by the forward feature selection method and the remainder normalized. Test and train data are developed by varying the simulation conditions, namely fault type, fault path resistance, inception angle, and distance. In this paper 10 different types of short-circuit fault are analyzed. The test data are examined by a support vector machine whose parameters are optimized by the particle swarm optimization method. The proposed method is checked on a 400 kV, 300 km long transmission line with voltage sources at both ends. Two cases were examined with the proposed method: the first is a fault very near to either source end (front and rear), and the second is the support vector machine with and without optimized parameters. Simulation results indicate that the proposed method for fault classification gives high accuracy (99.21%) and the least fault distance estimation error (0.29%).
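
    A hedged sketch of the classification stage follows; it assumes wavelet-packet coefficients are already available, computes energy/entropy features of the kind described above with numpy, and trains a scikit-learn SVC. The particle swarm optimisation of the SVM parameters is omitted, and all names and hyperparameters are illustrative.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.preprocessing import StandardScaler

        def band_features(coeffs):
            """Energy and Shannon entropy per wavelet-packet band."""
            feats = []
            for c in coeffs:
                e = np.sum(c ** 2)
                p = c ** 2 / (e + 1e-12)
                feats += [e, -np.sum(p * np.log(p + 1e-12))]
            return np.array(feats)

        def train_classifier(X, y):
            """X: rows of band_features(...) per recorded waveform; y: fault-type labels."""
            scaler = StandardScaler().fit(X)
            clf = SVC(kernel='rbf', C=10.0, gamma='scale')  # C, gamma would be PSO-tuned
            clf.fit(scaler.transform(X), y)
            return scaler, clf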

  11. Novel neural networks-based fault tolerant control scheme with fault alarm.

    Science.gov (United States)

    Shen, Qikun; Jiang, Bin; Shi, Peng; Lim, Cheng-Chew

    2014-11-01

    In this paper, the problem of adaptive active fault-tolerant control for a class of nonlinear systems with unknown actuator faults is investigated. The actuator fault is assumed to have no traditional affine appearance in the system state variables and control input. The useful property of the basis functions of the radial basis function neural network (NN), which is used in the design of the fault-tolerant controller, is explored. Based on the analysis of the design of normal and passive fault-tolerant controllers, and by using the implicit function theorem, a novel NN-based active fault-tolerant control scheme with fault alarm is proposed. Compared with results in the literature, the scheme can minimize the time delay between fault occurrence and accommodation, called the time delay due to fault diagnosis, and reduce the adverse effect on system performance. In addition, the scheme combines the advantages of a passive fault-tolerant control scheme with the properties of the traditional active fault-tolerant control scheme. Furthermore, the scheme requires no additional fault detection and isolation model, which is necessary in the traditional active fault-tolerant control scheme. Finally, simulation results are presented to demonstrate the efficiency of the developed techniques.

  12. Adaptive Neural Networks Decentralized FTC Design for Nonstrict-Feedback Nonlinear Interconnected Large-Scale Systems Against Actuator Faults.

    Science.gov (United States)

    Li, Yongming; Tong, Shaocheng

    The problem of active fault-tolerant control (FTC) is investigated for the large-scale nonlinear systems in nonstrict-feedback form. The nonstrict-feedback nonlinear systems considered in this paper consist of unstructured uncertainties, unmeasured states, unknown interconnected terms, and actuator faults (e.g., bias fault and gain fault). A state observer is designed to solve the unmeasurable state problem. Neural networks (NNs) are used to identify the unknown lumped nonlinear functions so that the problems of unstructured uncertainties and unknown interconnected terms can be solved. By combining the adaptive backstepping design principle with the combination Nussbaum gain function property, a novel NN adaptive output-feedback FTC approach is developed. The proposed FTC controller can guarantee that all signals in all subsystems are bounded, and the tracking errors for each subsystem converge to a small neighborhood of zero. Finally, numerical results of practical examples are presented to further demonstrate the effectiveness of the proposed control strategy.

  13. A Cooperative Approach to Virtual Machine Based Fault Injection

    Energy Technology Data Exchange (ETDEWEB)

    Naughton III, Thomas J [ORNL; Engelmann, Christian [ORNL; Vallee, Geoffroy R [ORNL; Aderholdt, William Ferrol [ORNL; Scott, Stephen L [Tennessee Technological University (TTU)

    2017-01-01

    Resilience investigations often employ fault injection (FI) tools to study the effects of simulated errors on a target system. It is important to keep the target system under test (SUT) isolated from the controlling environment in order to maintain control of the experiment. Virtual machines (VMs) have been used to aid these investigations due to the strong isolation properties of system-level virtualization. A key challenge in fault injection tools is to gain proper insight and context about the SUT. In VM-based FI tools, this challenge of target context is increased due to the separation between host and guest (VM). We discuss an approach to VM-based FI that leverages virtual machine introspection (VMI) methods to gain insight into the target's context running within the VM. The key to this environment is the ability to provide basic information to the FI system that can be used to create a map of the target environment. We describe a proof-of-concept implementation and a demonstration of its use to introduce simulated soft errors into an iterative solver benchmark running in user-space of a guest VM.

  14. Holonomic surface codes for fault-tolerant quantum computation

    Science.gov (United States)

    Zhang, Jiang; Devitt, Simon J.; You, J. Q.; Nori, Franco

    2018-02-01

    Surface codes can protect quantum information stored in qubits from local errors as long as the per-operation error rate is below a certain threshold. Here we propose holonomic surface codes by harnessing the quantum holonomy of the system. In our scheme, the holonomic gates are built via auxiliary qubits rather than the auxiliary levels in multilevel systems used in conventional holonomic quantum computation. The key advantage of our approach is that the auxiliary qubits are in their ground state before and after each gate operation, so they are not involved in the operation cycles of surface codes. This provides an advantageous way to implement surface codes for fault-tolerant quantum computation.

  15. Fault detection of sensors in nuclear reactors using self-organizing maps

    Energy Technology Data Exchange (ETDEWEB)

    Barbosa, Paulo Roberto; Tiago, Graziela Marchi [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Sao Paulo, SP (Brazil); Bueno, Elaine Inacio [Instituto Federal de Educacao, Ciencia e Tecnologia de Sao Paulo (IFSP), Guarulhos, SP (Brazil); Pereira, Iraci Martinez, E-mail: martinez@ipen.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    In this work a fault detection system was developed based on the self-organizing maps methodology. The method was applied to the IEA-R1 research reactor at IPEN using a database generated by a theoretical model of the reactor. The IEA-R1 research reactor is a 5 MW pool-type reactor, cooled and moderated by light water, which uses graphite and beryllium as reflector. The theoretical model was developed using the Matlab Guide toolbox. The equations are based on the IEA-R1 mass and energy inventory balance, and physical as well as operational aspects are taken into consideration. In order to test the model's ability for fault detection, faults were artificially produced. As the maximum calibration error for special thermocouples is ±0.5 °C, faults of this magnitude were inserted into the sensor signals to produce the database considered in this work. The results show a high percentage of correct classification, encouraging the use of the technique for this type of industrial application. (author)
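
    A minimal self-organizing-map fault detector in the same spirit (not the authors' code) can be sketched with the third-party minisom package: train on normal-operation sensor vectors, then flag samples whose quantization error exceeds a threshold set on the training data. The 3-sigma rule and the grid size below are assumptions.

        import numpy as np
        from minisom import MiniSom

        def fit_som(X_normal, grid=10):
            """X_normal: (N, m) sensor vectors recorded under normal operation."""
            som = MiniSom(grid, grid, X_normal.shape[1], sigma=1.0, learning_rate=0.5)
            som.train_random(X_normal, 5000)
            # Quantization error per training sample: distance to the winning neuron.
            q = np.array([np.linalg.norm(x - som.get_weights()[som.winner(x)])
                          for x in X_normal])
            threshold = q.mean() + 3 * q.std()   # assumed 3-sigma detection rule
            return som, threshold

        def is_faulty(x, som, threshold):
            return np.linalg.norm(x - som.get_weights()[som.winner(x)]) > threshold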

  16. Fault detection of sensors in nuclear reactors using self-organizing maps

    International Nuclear Information System (INIS)

    Barbosa, Paulo Roberto; Tiago, Graziela Marchi; Bueno, Elaine Inacio; Pereira, Iraci Martinez

    2011-01-01

    In this work a fault detection system was developed based on the self-organizing maps methodology. The method was applied to the IEA-R1 research reactor at IPEN using a database generated by a theoretical model of the reactor. The IEA-R1 research reactor is a 5 MW pool-type reactor, cooled and moderated by light water, which uses graphite and beryllium as reflector. The theoretical model was developed using the Matlab Guide toolbox. The equations are based on the IEA-R1 mass and energy inventory balance, and physical as well as operational aspects are taken into consideration. In order to test the model's ability for fault detection, faults were artificially produced. As the maximum calibration error for special thermocouples is ±0.5 °C, faults of this magnitude were inserted into the sensor signals to produce the database considered in this work. The results show a high percentage of correct classification, encouraging the use of the technique for this type of industrial application. (author)

  17. Human Factors Risk Analyses of a Doffing Protocol for Ebola-Level Personal Protective Equipment: Mapping Errors to Contamination.

    Science.gov (United States)

    Mumma, Joel M; Durso, Francis T; Ferguson, Ashley N; Gipson, Christina L; Casanova, Lisa; Erukunuakpor, Kimberly; Kraft, Colleen S; Walsh, Victoria L; Zimring, Craig; DuBose, Jennifer; Jacob, Jesse T

    2018-03-05

    Doffing protocols for personal protective equipment (PPE) are critical for keeping healthcare workers (HCWs) safe during care of patients with Ebola virus disease. We assessed the relationship between errors and self-contamination during doffing. Eleven HCWs experienced with doffing Ebola-level PPE participated in simulations in which HCWs donned PPE marked with surrogate viruses (ɸ6 and MS2), completed a clinical task, and were assessed for contamination after doffing. Simulations were video recorded, and a failure modes and effects analysis and fault tree analyses were performed to identify errors during doffing, quantify their risk (risk index), and predict contamination data. Fifty-one types of errors were identified, many having the potential to spread contamination. Hand hygiene and removing the powered air purifying respirator (PAPR) hood had the highest total risk indexes (111 and 70, respectively) and the largest numbers of error types (9 and 13, respectively). ɸ6 was detected on 10% of scrubs, and the fault tree predicted a 10.4% contamination rate, likely occurring when the PAPR hood inadvertently contacted scrubs during removal. MS2 was detected on 10% of hands, 20% of scrubs, and 70% of inner gloves, and the predicted rates were 7.3%, 19.4%, and 73.4%, respectively. Fault trees for MS2 and ɸ6 contamination suggested similar pathways. Ebola-level PPE can both protect and put HCWs at risk for self-contamination throughout the doffing process, even among experienced HCWs doffing with a trained observer. Human factors methodologies can identify error-prone steps, delineate the relationship between errors and self-contamination, and suggest remediation strategies.
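
    To make the risk-index bookkeeping concrete, here is a hypothetical sketch; the paper's exact scoring scheme is not reproduced, severity times occurrence is just one common FMEA formulation, and the example entries are made up.

        from dataclasses import dataclass

        @dataclass
        class DoffingError:
            step: str
            description: str
            severity: int    # e.g. on a 1-10 scale
            occurrence: int  # e.g. on a 1-10 scale

            @property
            def risk_index(self):
                # One common FMEA formulation; some schemes also multiply by
                # a detectability score.
                return self.severity * self.occurrence

        errors = [
            DoffingError("hand hygiene", "rub skipped after glove removal", 7, 5),
            DoffingError("PAPR hood", "hood contacts scrubs during removal", 8, 4),
        ]
        totals = {}
        for e in errors:
            totals[e.step] = totals.get(e.step, 0) + e.risk_index
        print(totals)   # total risk index per doffing step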

  18. Fault Diagnosis of a Reconfigurable Crawling–Rolling Robot Based on Support Vector Machines

    Directory of Open Access Journals (Sweden)

    Karthikeyan Elangovan

    2017-10-01

    Full Text Available As robots begin to perform jobs autonomously, with minimal or no human intervention, a new challenge arises: robots also need to autonomously detect errors and recover from faults. In this paper, we present a Support Vector Machine (SVM)-based fault diagnosis system for a bio-inspired reconfigurable robot named Scorpio. The diagnosis system needs to detect and classify faults while Scorpio uses its crawling and rolling locomotion modes. Specifically, we classify between faulty and non-faulty conditions by analyzing onboard Inertial Measurement Unit (IMU) sensor data. The data capture nine different locomotion gaits, which include rolling and crawling modes, at three different speeds. Statistical methods are applied to extract features and to reduce the dimensionality of the original IMU sensor data features. These statistical features were given as inputs for training and testing. Additionally, the c-Support Vector Classification (c-SVC) and nu-SVC models of SVM, and their fault classification accuracies, were compared. The results show that the proposed SVM approach can be used to autonomously diagnose locomotion gait faults while the reconfigurable robot is in operation.

  19. Fault tree handbook

    International Nuclear Information System (INIS)

    Haasl, D.F.; Roberts, N.H.; Vesely, W.E.; Goldberg, F.F.

    1981-01-01

    This handbook describes a methodology for reliability analysis of complex systems such as those which comprise the engineered safety features of nuclear power generating stations. After an initial overview of the available system analysis approaches, the handbook focuses on a description of the deductive method known as fault tree analysis. The following aspects of fault tree analysis are covered: basic concepts for fault tree analysis; basic elements of a fault tree; fault tree construction; probability, statistics, and Boolean algebra for the fault tree analyst; qualitative and quantitative fault tree evaluation techniques; and computer codes for fault tree evaluation. Also discussed are several example problems illustrating the basic concepts of fault tree construction and evaluation.
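
    For independent basic events, the quantitative evaluation described in the handbook reduces to simple AND/OR probability rules; the toy evaluator below (with a made-up tree) illustrates them.

        # Minimal fault-tree evaluator for independent basic events.
        def evaluate(node):
            kind = node[0]
            if kind == "basic":
                return node[1]                        # basic-event probability
            probs = [evaluate(child) for child in node[1]]
            if kind == "and":                         # all inputs must fail
                p = 1.0
                for q in probs:
                    p *= q
                return p
            if kind == "or":                          # any input failing suffices
                p = 1.0
                for q in probs:
                    p *= (1.0 - q)
                return 1.0 - p
            raise ValueError(f"unknown gate: {kind}")

        # Top event: (A AND B) OR C, with hypothetical failure probabilities.
        tree = ("or", [("and", [("basic", 1e-3), ("basic", 2e-3)]),
                       ("basic", 5e-5)])
        print(evaluate(tree))   # probability of the top event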

  20. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    Science.gov (United States)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
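
    The grouped linear error model can be sketched as follows (an illustrative reconstruction under assumed names, not the authors' code): group reciprocal measurements by electrode combination and regress the reciprocal-error magnitude on transfer resistance within each group.

        import numpy as np

        def fit_error_model(R_normal, R_reciprocal, groups):
            """R_*: transfer resistances per measurement; groups: group id per measurement."""
            err = np.abs(R_normal - R_reciprocal)          # reciprocal error magnitude
            R = 0.5 * np.abs(R_normal + R_reciprocal)      # mean transfer resistance
            model = {}
            for g in np.unique(groups):
                m = groups == g
                # |error| ~ a + b * |R|: a linear error model per electrode group.
                b, a = np.polyfit(R[m], err[m], 1)
                model[g] = (a, b)
            return model

    The per-group standard errors predicted by such a model could then populate the diagonal data-weighting matrix, or the data covariance matrix in a Bayesian framework, as the abstract describes.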

  1. Fault isolability conditions for linear systems with additive faults

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Stoustrup, Jakob

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...

  2. Diagnosis of Short-Circuit Fault in Large-Scale Permanent-Magnet Wind Power Generator Based on CMAC

    Directory of Open Access Journals (Sweden)

    Chin-Tsung Hsieh

    2013-01-01

    Full Text Available This study proposes a method based on the cerebellar model arithmetic controller (CMAC) for fault diagnosis of large-scale permanent-magnet wind power generators and compares the results with error back propagation (EBP). The diagnosis is based on short-circuit faults in permanent-magnet wind power generators, magnetic field changes, and temperature changes. Since CMAC is characterized by inductive ability, associative ability, quick response, and the property that similar input signals excite similar memories, it is well suited as an intelligent fault diagnosis tool. The experimental results suggest that faults can be diagnosed effectively after training the CMAC only 10 times; in comparison with the 151 training iterations required by EBP, CMAC is better in terms of training speed.

  3. Robust model reference adaptive output feedback tracking for uncertain linear systems with actuator fault based on reinforced dead-zone modification.

    Science.gov (United States)

    Bagherpoor, H M; Salmasi, Farzad R

    2015-07-01

    In this paper, robust model reference adaptive tracking controllers are considered for single-input single-output (SISO) and multi-input multi-output (MIMO) linear systems containing modeling uncertainties, unknown additive disturbances and actuator faults. Two new lemmas are proposed, for both SISO and MIMO systems, under which the dead-zone modification rule is improved such that the tracking error for any reference signal tends to zero in such systems. In the conventional approach, adaptation of the controller parameters ceases inside the dead-zone region, which preserves system stability but results in a tracking error. In the proposed scheme, the control signal is reinforced with an additive term based on the tracking error inside the dead-zone, which results in full reference tracking. In addition, no fault detection and diagnosis (FDD) unit is needed in the proposed approach. Closed-loop system stability and zero tracking error are proved by considering a suitable Lyapunov function candidate. It is shown that the proposed control approach can assure that all the signals of the closed-loop system are bounded in faulty conditions. Finally, the validity and performance of the new schemes are illustrated through numerical simulations of SISO and MIMO systems in the presence of actuator faults, modeling uncertainty and output disturbance. Copyright © 2015 ISA. Published by Elsevier Ltd. All rights reserved.

  4. Injecting Artificial Memory Errors Into a Running Computer Program

    Science.gov (United States)

    Bornstein, Benjamin J.; Granat, Robert A.; Wagstaff, Kiri L.

    2008-01-01

    Single-event upsets (SEUs) or bitflips are computer memory errors caused by radiation. BITFLIPS (Basic Instrumentation Tool for Fault Localized Injection of Probabilistic SEUs) is a computer program that deliberately injects SEUs into another computer program, while the latter is running, for the purpose of evaluating the fault tolerance of that program. BITFLIPS was written as a plug-in extension of the open-source Valgrind debugging and profiling software. BITFLIPS can inject SEUs into any program that can be run on the Linux operating system, without needing to modify the program's source code. Further, if access to the original program source code is available, BITFLIPS offers fine-grained control over exactly when and which areas of memory (as specified via program variables) will be subjected to SEUs. The rate of injection of SEUs is controlled by specifying either a fault probability or a fault rate based on memory size and radiation exposure time, in units of SEUs per byte per second. BITFLIPS can also log each SEU that it injects and, if program source code is available, report the magnitude of the effect of the SEU on a floating-point value or other program variable.
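
    A stand-alone imitation of the idea (much simpler than the Valgrind-based BITFLIPS, with assumed names) flips random bits of a float64 buffer at a specified rate in SEUs per byte per second:

        import numpy as np

        def inject_seus(arr, rate, exposure_s, rng=None):
            """Flip bits of a contiguous float64 array in-place.

            rate: SEUs per byte per second; exposure_s: exposure time in seconds.
            """
            rng = rng or np.random.default_rng()
            raw = arr.view(np.uint8)                       # reinterpret the buffer as bytes
            n_flips = rng.poisson(rate * raw.size * exposure_s)
            for _ in range(n_flips):
                byte = rng.integers(raw.size)
                bit = rng.integers(8)
                raw[byte] ^= np.uint8(1 << bit)            # single-event upset
                print(f"SEU injected at byte {byte}, bit {bit}")  # log, as BITFLIPS does
            return n_flips

        x = np.ones(1000)
        inject_seus(x, rate=1e-4, exposure_s=10.0)   # then rerun the computation on x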

  5. Fault diagnosis and fault-tolerant control based on adaptive control approach

    CERN Document Server

    Shen, Qikun; Shi, Peng

    2017-01-01

    This book provides recent theoretical developments in and practical applications of fault diagnosis and fault tolerant control for complex dynamical systems, including uncertain systems, linear and nonlinear systems. Combining adaptive control technique with other control methodologies, it investigates the problems of fault diagnosis and fault tolerant control for uncertain dynamic systems with or without time delay. As such, the book provides readers a solid understanding of fault diagnosis and fault tolerant control based on adaptive control technology. Given its depth and breadth, it is well suited for undergraduate and graduate courses on linear system theory, nonlinear system theory, fault diagnosis and fault tolerant control techniques. Further, it can be used as a reference source for academic research on fault diagnosis and fault tolerant control, and for postgraduates in the field of control theory and engineering. .

  6. A summary of the active fault investigation in the extension sea area of Kikugawa fault and the Nishiyama fault , N-S direction fault in south west Japan

    Science.gov (United States)

    Abe, S.

    2010-12-01

    In this study, we carried out two sets of active fault investigations, at the request of the Ministry of Education, Culture, Sports, Science and Technology, in the offshore extensions of the Kikugawa fault and the Nishiyama fault. Based on those results, we want to clarify the following four matters for both active faults: (1) fault continuity between land and sea; (2) the length of each active fault; (3) the division into segments; (4) activity characteristics. In this investigation, we carried out a digital single-channel seismic reflection survey over the whole area of both active faults. In addition, a high-resolution multichannel seismic reflection survey was carried out to image the detailed structure of the shallow strata. Furthermore, vibrocoring was carried out to obtain information on sedimentation ages. The reflection profiles of both active faults were extremely clear. Characteristics of lateral (strike-slip) faulting, such as flower structures and the dispersion of fault strands, were recognized. In addition, age analysis of the strata showed that the Holocene sediment cover on the continental shelf in this sea area is extremely thin. This investigation confirmed that the Kikugawa fault extends farther offshore than reported in existing research. In addition, the zone of active faulting appears to widen toward the offing while dispersing. At present, we think the Kikugawa fault can be divided into several segments based on the distribution pattern of the fault strands. For the Nishiyama fault, reflection profiles showing the existence of the active fault were acquired in the sea between Ooshima and Kyushu. From this result and existing topographical research on Ooshima, the Nishiyama fault and the active fault off Ooshima are thought to form a continuous structure. Along the active fault off Ooshima, the uplifted side changes, and the strike changes as well. Therefore, we

  7. Ciliates learn to diagnose and correct classical error syndromes in mating strategies.

    Science.gov (United States)

    Clark, Kevin B

    2013-01-01

    Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by "rivals" and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via "power" or "refrigeration" cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social

  8. Ciliates learn to diagnose and correct classical error syndromes in mating strategies

    Directory of Open Access Journals (Sweden)

    Kevin Bradley Clark

    2013-08-01

    Full Text Available Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by rivals and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell-cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via power or refrigeration cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and nonmodal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in

  9. Fault finder

    Science.gov (United States)

    Bunch, Richard H.

    1986-01-01

    A fault finder for locating faults along a high voltage electrical transmission line. Real time monitoring of background noise and improved filtering of input signals is used to identify the occurrence of a fault. A fault is detected at both a master and remote unit spaced along the line. A master clock synchronizes operation of a similar clock at the remote unit. Both units include modulator and demodulator circuits for transmission of clock signals and data. All data is received at the master unit for processing to determine an accurate fault distance calculation.
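
    The distance calculation itself is not spelled out in this record, so the following is a minimal sketch (Python) of the standard two-ended time-of-arrival formula that such synchronized master/remote units can evaluate; the wave speed and timings are illustrative assumptions, not values from the patent.

        # Two-ended fault location from synchronized arrival times: a minimal
        # sketch of the classic time-of-arrival calculation (illustrative only;
        # the patent's actual filtering and signal processing are not shown).

        def fault_distance(line_length_km, t_master_s, t_remote_s,
                           wave_speed_km_s=2.9e5):
            """Distance from the master unit to the fault.

            A fault at distance x launches travelling waves toward both ends:
            t_master = x / v and t_remote = (L - x) / v, so
            x = (L + v * (t_master - t_remote)) / 2.
            """
            return (line_length_km
                    + wave_speed_km_s * (t_master_s - t_remote_s)) / 2.0

        # Example: 100 km line, wave reaches the master 69 us before the remote
        # -> fault is about 40 km from the master unit.
        print(fault_distance(100.0, 0.0, 69.0e-6))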

  10. Reliable channel-adapted error correction: Bacon-Shor code recovery from amplitude damping

    NARCIS (Netherlands)

    Á. Piedrafita (Álvaro); J.M. Renes (Joseph)

    2017-01-01

    textabstractWe construct two simple error correction schemes adapted to amplitude damping noise for Bacon-Shor codes and investigate their prospects for fault-tolerant implementation. Both consist solely of Clifford gates and require far fewer qubits, relative to the standard method, to achieve

  11. Online Synthesis for Error Recovery in Digital Microfluidic Biochips with Operation Variability

    DEFF Research Database (Denmark)

    Alistar, Mirela; Pop, Paul; Madsen, Jan

    2012-01-01

The droplet volumes can vary erroneously due to parametric faults, thus negatively impacting the correctness of the application. Researchers have proposed approaches that synthesize offline predetermined recovery subroutines, which are activated online when errors occur. In this paper, we propose an online...

  12. The Sorong Fault Zone, Indonesia: Mapping a Fault Zone Offshore

    Science.gov (United States)

    Melia, S.; Hall, R.

    2017-12-01

    The Sorong Fault Zone is a left-lateral strike-slip fault zone in eastern Indonesia, extending westwards from the Bird's Head peninsula of West Papua towards Sulawesi. It is the result of interactions between the Pacific, Caroline, Philippine Sea, and Australian Plates and much of it is offshore. Previous research on the fault zone has been limited by the low resolution of available data offshore, leading to debates over the extent, location, and timing of movements, and the tectonic evolution of eastern Indonesia. Different studies have shown it north of the Sula Islands, truncated south of Halmahera, continuing to Sulawesi, or splaying into a horsetail fan of smaller faults. Recently acquired high resolution multibeam bathymetry of the seafloor (with a resolution of 15-25 meters), and 2D seismic lines, provide the opportunity to trace the fault offshore. The position of different strands can be identified. On land, SRTM topography shows that in the northern Bird's Head the fault zone is characterised by closely spaced E-W trending faults. NW of the Bird's Head offshore there is a fold and thrust belt which terminates some strands. To the west of the Bird's Head offshore the fault zone diverges into multiple strands trending ENE-WSW. Regions of Riedel shearing are evident west of the Bird's Head, indicating sinistral strike-slip motion. Further west, the ENE-WSW trending faults turn to an E-W trend and there are at least three fault zones situated immediately south of Halmahera, north of the Sula Islands, and between the islands of Sanana and Mangole where the fault system terminates in horsetail strands. South of the Sula islands some former normal faults at the continent-ocean boundary with the North Banda Sea are being reactivated as strike-slip faults. The fault zone does not currently reach Sulawesi. The new fault map differs from previous interpretations concerning the location, age and significance of different parts of the Sorong Fault Zone. Kinematic

  13. Probabilistic assessment of faults

    International Nuclear Information System (INIS)

    Foden, R.W.

    1987-01-01

    Probabilistic safety analysis (PSA) is the process by which the probability (or frequency of occurrence) of reactor fault conditions which could lead to unacceptable consequences is assessed. The basic objective of a PSA is to allow a judgement to be made as to whether or not the principal probabilistic requirement is satisfied. It also gives insights into the reliability of the plant which can be used to identify possible improvements. This is explained in the article. The scope of a PSA and the PSA performed by the National Nuclear Corporation (NNC) for the Heysham II and Torness AGRs and Sizewell-B PWR are discussed. The NNC methods for hazards, common cause failure and operator error are mentioned. (UK)

  14. Improved nonlinear fault detection strategy based on the Hellinger distance metric: Plug flow reactor monitoring

    KAUST Repository

    Harrou, Fouzi

    2017-03-18

    Fault detection plays a vital role in the process industry, enhancing productivity, efficiency, and safety, and avoiding expensive maintenance. This paper proposes an innovative multivariate fault detection method that can be used for monitoring nonlinear processes. The proposed method merges the advantages of nonlinear projection to latent structures (NLPLS) modeling with those of the Hellinger distance (HD) metric to identify abnormal changes in highly correlated multivariate data. Specifically, the HD is used to quantify the dissimilarity between the current NLPLS-based residual distribution and a reference probability distribution obtained using fault-free data. Furthermore, to further enhance robustness to measurement noise and reduce false alarms due to modeling errors, wavelet-based multiscale filtering of the residuals is applied before the HD-based monitoring scheme. The performance of the developed NLPLS-HD fault detection technique is illustrated using simulated plug flow reactor data. The results show that the proposed method detects faults more effectively than the conventional NLPLS method.
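
    The record does not give the HD formula, so the sketch below (Python) shows the standard Hellinger distance between two residual histograms; the NLPLS model itself is not reproduced, and the reference and test residuals are synthetic stand-ins.

        import numpy as np

        def hellinger(p, q):
            """Hellinger distance between two discrete distributions.
            H(p, q) = (1/sqrt(2)) * ||sqrt(p) - sqrt(q)||_2, bounded in [0, 1].
            """
            p = np.asarray(p, dtype=float); p /= p.sum()
            q = np.asarray(q, dtype=float); q /= q.sum()
            return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

        # Histogram the residuals and compare against the fault-free reference.
        rng = np.random.default_rng(0)
        ref  = rng.normal(0.0, 1.0, 5000)          # fault-free residuals
        test = rng.normal(0.5, 1.0, 5000)          # residuals with a mean shift
        bins = np.linspace(-5, 5, 51)
        h_ref,  _ = np.histogram(ref,  bins=bins)
        h_test, _ = np.histogram(test, bins=bins)
        print(hellinger(h_ref, h_test))   # compared against a threshold set from fault-free data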

  15. Wind turbine fault detection and fault tolerant control

    DEFF Research Database (Denmark)

    Odgaard, Peter Fogh; Johnson, Kathryn

    2013-01-01

    In this updated edition of a previous wind turbine fault detection and fault tolerant control challenge, we present a more sophisticated wind turbine model and updated fault scenarios to enhance the realism of the challenge and therefore the value of the solutions. This paper describes...

  16. Fault-weighted quantification method of fault detection coverage through fault mode and effect analysis in digital I&C systems

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jaehyun; Lee, Seung Jun, E-mail: sjlee420@unist.ac.kr; Jung, Wondea

    2017-05-15

    Highlights: • We developed a fault-weighted quantification method of fault detection coverage. • The method has been applied to a specific digital reactor protection system. • The unavailability of the module differed 20-fold from the traditional method's estimate. • Several experimental tests can be effectively prioritized using this method. - Abstract: One of the most outstanding features of a digital I&C system is the use of fault-tolerant techniques. Recognizing the importance of quantifying the fault detection coverage of fault-tolerant techniques, several studies have developed and employed fault injection methods for this purpose. In the fault injection method, each injected fault has a different importance, because the frequency with which each injected fault is realized differs. However, no previous studies have addressed the importance and weighting factor of each injected fault. In this work, a new method for allocating a weight to each injected fault using failure mode and effect analysis (FMEA) data is proposed. The fault-weighted quantification method has also been applied to a specific digital reactor protection system to quantify its fault detection coverage. One major finding of this application was that a traditional method can estimate the unavailability of a specific module in a digital I&C system to be about 20 times smaller than the real value. The other finding was that the method can also rank the importance of the experimental cases. Therefore, this method is expected not only to provide an accurate quantification procedure for fault detection coverage by weighting the injected faults, but also to contribute to effective fault injection experiments by sorting the importance of the failure categories.
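
    As an illustration of the idea only (the paper's exact weighting scheme is not given in this record), a minimal Python sketch: each injected fault mode receives a weight derived from FMEA occurrence data, and coverage is the weight-normalized fraction of detected faults. All numbers are hypothetical.

        import numpy as np

        # Hypothetical FMEA-derived weights: relative realization frequency of
        # each injected fault mode, and whether the fault-tolerant technique
        # detected it in the fault injection experiment.
        weights  = np.array([0.50, 0.30, 0.15, 0.05])   # from FMEA occurrence data
        detected = np.array([1,    1,    0,    1   ])    # fault-injection outcomes

        unweighted_coverage = detected.mean()                     # traditional estimate
        weighted_coverage   = np.sum(weights * detected) / weights.sum()

        print(unweighted_coverage, weighted_coverage)   # 0.75 vs 0.85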

  17. Fault diagnosis

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objective of the research in this area of fault management is to develop and implement a decision aiding concept for diagnosing faults, especially faults which are difficult for pilots to identify, and to develop methods for presenting the diagnosis information to the flight crew in a timely and comprehensible manner. The requirements for the diagnosis concept were identified by interviewing pilots, analyzing actual incident and accident cases, and examining psychology literature on how humans perform diagnosis. The diagnosis decision aiding concept developed based on those requirements takes abnormal sensor readings as input, as identified by a fault monitor. Based on these abnormal sensor readings, the diagnosis concept identifies the cause or source of the fault and all components affected by the fault. This concept was implemented for diagnosis of aircraft propulsion and hydraulic subsystems in a computer program called Draphys (Diagnostic Reasoning About Physical Systems). Draphys is unique in two important ways. First, it uses models of both functional and physical relationships in the subsystems. Using both models enables the diagnostic reasoning to identify the fault propagation as the faulted system continues to operate, and to diagnose physical damage. Draphys also reasons about behavior of the faulted system over time, to eliminate possibilities as more information becomes available, and to update the system status as more components are affected by the fault. The crew interface research is examining display issues associated with presenting diagnosis information to the flight crew. One study examined issues for presenting system status information. One lesson learned from that study was that pilots found fault situations to be more complex if they involved multiple subsystems. Another was that pilots could identify the faulted systems more quickly if the system status was presented in pictorial or text format. Another study is currently under way to

  18. Investigation of the applicability of a functional programming model to fault-tolerant parallel processing for knowledge-based systems

    Science.gov (United States)

    Harper, Richard

    1989-01-01

    In a fault-tolerant parallel computer, a functional programming model can facilitate distributed checkpointing, error recovery, load balancing, and graceful degradation. Such a model has been implemented on the Draper Fault-Tolerant Parallel Processor (FTPP). When used in conjunction with the FTPP's fault detection and masking capabilities, this implementation results in a graceful degradation of system performance after faults. Three graceful degradation algorithms have been implemented and are presented. A user interface has been implemented which requires minimal cognitive overhead by the application programmer, masking such complexities as the system's redundancy, distributed nature, variable complement of processing resources, load balancing, fault occurrence and recovery. This user interface is described and its use demonstrated. The applicability of the functional programming style to the Activation Framework, a paradigm for intelligent systems, is then briefly described.

  19. Fuzzy Uncertainty Evaluation for Fault Tree Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ki Beom; Shim, Hyung Jin [Seoul National University, Seoul (Korea, Republic of); Jae, Moo Sung [Hanyang University, Seoul (Korea, Republic of)

    2015-05-15

    The traditional probabilistic approach can produce relatively accurate results. However, it requires a long time because of the repetitive computation of the Monte Carlo (MC) method. In addition, when data for statistical analysis are insufficient, or when some events are mainly caused by human error, the probabilistic approach may not be feasible, because the uncertainties of these events are difficult to express as probability distributions. To reduce the computation time and to quantify the uncertainty of top events when such basic events exist, fuzzy uncertainty propagation based on fuzzy set theory can be applied. In this paper, we develop a fuzzy uncertainty propagation code and apply it to the fault tree of the core damage accident following a large loss of coolant accident (LLOCA). The code is first implemented and tested on the fault tree of a radiation release accident. We then apply it to the fault tree of the core damage accident after the LLOCA in three cases and compare the results with those computed by probabilistic uncertainty propagation using the MC method. The fuzzy uncertainty propagation results can be calculated in a relatively short time and cover the results obtained by the probabilistic uncertainty propagation.
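
    A minimal sketch of fuzzy uncertainty propagation through a fault tree (Python), assuming triangular fuzzy probabilities and alpha-cut interval arithmetic; the toy tree below is illustrative, not the LLOCA tree from the paper.

        import numpy as np

        def alpha_cut(tri, alpha):
            """Interval of a triangular fuzzy number (a, m, b) at level alpha."""
            a, m, b = tri
            return np.array([a + alpha * (m - a), b - alpha * (b - m)])

        def and_gate(intervals):
            """AND gate: product of event probabilities (monotone in each
            argument, so interval endpoints map to endpoints)."""
            return np.array([np.prod([iv[0] for iv in intervals]),
                             np.prod([iv[1] for iv in intervals])])

        def or_gate(intervals):
            """OR gate: 1 - prod(1 - p), also monotone in each argument."""
            return np.array([1.0 - np.prod([1.0 - iv[0] for iv in intervals]),
                             1.0 - np.prod([1.0 - iv[1] for iv in intervals])])

        # Toy fault tree: TOP = OR(AND(e1, e2), e3), triangular fuzzy probabilities.
        e1, e2, e3 = (1e-3, 2e-3, 4e-3), (1e-2, 2e-2, 3e-2), (1e-4, 2e-4, 5e-4)
        for alpha in (0.0, 0.5, 1.0):
            ivs = [alpha_cut(e, alpha) for e in (e1, e2, e3)]
            top = or_gate([and_gate(ivs[:2]), ivs[2]])
            print(f"alpha={alpha}: TOP in [{top[0]:.3e}, {top[1]:.3e}]")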

  20. Adaptive extended-state observer-based fault tolerant attitude control for spacecraft with reaction wheels

    Science.gov (United States)

    Ran, Dechao; Chen, Xiaoqian; de Ruiter, Anton; Xiao, Bing

    2018-04-01

    This study presents an adaptive second-order sliding control scheme to solve the attitude fault tolerant control problem of spacecraft subject to system uncertainties, external disturbances and reaction wheel faults. A novel fast terminal sliding mode is preliminarily designed to guarantee that finite-time convergence of the attitude errors can be achieved globally. Based on this novel sliding mode, an adaptive second-order observer is then designed to reconstruct the system uncertainties and the actuator faults. One feature of the proposed observer is that its design does not require any a priori information on the upper bounds of the system uncertainties and the actuator faults. In view of the reconstructed information supplied by the designed observer, a second-order sliding mode controller is developed to accomplish attitude maneuvers with great robustness and precise tracking accuracy. Theoretical stability analysis proves that the designed fault tolerant control scheme can achieve finite-time stability of the closed-loop system, even in the presence of reaction wheel faults and system uncertainties. Numerical simulations are also presented to demonstrate the effectiveness and superiority of the proposed control scheme over existing methodologies.

  1. New constraints on slip rates and locking depths of the San Andreas Fault System from Sentinel-1A InSAR and GAGE GPS observations

    Science.gov (United States)

    Ward, L. A.; Smith-Konter, B. R.; Higa, J. T.; Xu, X.; Tong, X.; Sandwell, D. T.

    2017-12-01

    After over a decade of operation, the EarthScope (GAGE) Facility has now accumulated a wealth of GPS and InSAR data that, when successfully integrated, make it possible to image the entire San Andreas Fault System (SAFS) with unprecedented spatial coverage and resolution. Resulting surface velocity and deformation time series products provide critical boundary conditions needed for improving our understanding of how faults are loaded across a broad range of temporal and spatial scales. Moreover, our understanding of how earthquake cycle deformation is influenced by fault zone strength and crust/mantle rheology is still developing. To further study these processes, we construct a new 4D earthquake cycle model of the SAFS representing the time-dependent 3D velocity field associated with interseismic strain accumulation, co-seismic slip, and postseismic viscoelastic relaxation. This high-resolution California statewide model, spanning from the Cerro Prieto fault in the south to the Maacama fault in the north, is constructed on a 500 m spaced grid and comprises variable slip and locking depths along 42 major fault segments. Secular deep slip is prescribed from the base of the locked zone to the base of the elastic plate, while episodic shallow slip is prescribed from the historical earthquake record and geologic recurrence intervals. Locking depths and slip rates for all 42 fault segments are constrained by the newest GAGE Facility geodetic observations; 3169 horizontal GPS velocity measurements, combined with over 53,000 line-of-sight (LOS) InSAR velocity observations from Sentinel-1A, are used in a weighted least-squares inversion. To assess slip rate and locking depth sensitivity of a heterogeneous rheology model, we also implement variations in crustal rigidity throughout the plate boundary, assuming a coarse representation of shear modulus variability ranging from 20-40 GPa throughout the (low rigidity) Salton Trough and Basin and Range and the (high rigidity) Central
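
    The weighted least-squares step can be sketched in a few lines (Python/NumPy); the Green's-function matrix, uncertainties, and slip rates below are synthetic and far smaller than the real 42-segment, ~56,000-observation problem.

        import numpy as np

        # Synthetic example: recover slip rates m from mixed GPS + InSAR data,
        # d = G m + noise, with a diagonal weight matrix W built from the
        # per-datum measurement uncertainties.
        rng = np.random.default_rng(1)
        n_obs, n_seg = 200, 5
        G      = rng.normal(size=(n_obs, n_seg))        # elastic Green's functions
        m_true = np.array([25.0, 10.0, 5.0, 15.0, 2.0]) # slip rates, mm/yr
        sigma  = rng.uniform(0.5, 3.0, n_obs)           # per-datum uncertainty
        d      = G @ m_true + rng.normal(0.0, sigma)

        W = np.diag(1.0 / sigma**2)
        # Weighted normal equations: m = (G^T W G)^-1 G^T W d
        m_hat = np.linalg.solve(G.T @ W @ G, G.T @ W @ d)
        print(np.round(m_hat, 2))   # close to m_true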

  2. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
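
    As a sketch of the approach (the SORREL library itself is not shown here, and the stencil, model, and threshold are simplified stand-ins), one can fit a linear model that predicts each stencil update from its neighborhood and flag points whose computed update deviates from the prediction:

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy 1D heat-equation stencil: v[i] depends on u[i-1], u[i], u[i+1].
        def step(u, r=0.25):
            v = u.copy()
            v[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            return v

        # Training: collect (neighbourhood -> next value) pairs from clean runs,
        # then fit an inexpensive linear model by least squares.
        u = rng.random(128)
        X, y = [], []
        for _ in range(50):
            v = step(u)
            for i in range(1, len(u) - 1):
                X.append([u[i - 1], u[i], u[i + 1]])
                y.append(v[i])
            u = v
        coef, *_ = np.linalg.lstsq(np.asarray(X), np.asarray(y), rcond=None)

        # Detection: flag points whose update deviates from the model prediction.
        v = step(u)
        v_corrupt = v.copy()
        v_corrupt[40] += 0.3                      # injected soft error (bit-flip proxy)
        pred  = np.stack([u[:-2], u[1:-1], u[2:]], axis=1) @ coef
        resid = np.abs(v_corrupt[1:-1] - pred)
        print(np.argmax(resid) + 1, resid.max())  # index 40 stands out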

  3. Fault zone hydrogeology

    Science.gov (United States)

    Bense, V. F.; Gleeson, T.; Loveless, S. E.; Bour, O.; Scibek, J.

    2013-12-01

    Deformation along faults in the shallow crust (<1 km depth) introduces permeability heterogeneity and anisotropy that strongly influence fluid flow; evaluating this impact remains a challenge and requires the combined research effort of structural geologists and hydrogeologists. However, we find that these disciplines often use different methods with little interaction between them. In this review, we document the current multi-disciplinary understanding of fault zone hydrogeology. We discuss surface and subsurface observations from diverse rock types, from unlithified and lithified clastic sediments through to carbonate, crystalline, and volcanic rocks. For each rock type, we evaluate geological deformation mechanisms, hydrogeologic observations, and conceptual models of fault zone hydrogeology. Outcrop observations indicate that fault zones commonly have a permeability structure suggesting they should act as complex conduit-barrier systems in which along-fault flow is encouraged and across-fault flow is impeded. Hydrogeological observations of fault zones reported in the literature show a broad qualitative agreement with outcrop-based conceptual models of fault zone hydrogeology. Nevertheless, the specific impact of a particular fault permeability structure on fault zone hydrogeology can only be assessed when the hydrogeological context of the fault zone is considered, and not from outcrop observations alone. To gain a more integrated, comprehensive understanding of fault zone hydrogeology, we foresee numerous synergistic opportunities and challenges for the disciplines of structural geology and hydrogeology to co-evolve and address remaining challenges by co-locating study areas, sharing approaches and fusing data, developing conceptual models from hydrogeologic data, numerical modeling, and training interdisciplinary scientists.

  4. Fault detection of a spur gear using vibration signal with multivariable statistical parameters

    Directory of Open Access Journals (Sweden)

    Songpon Klinchaeam

    2014-10-01

    Full Text Available This paper presents a condition monitoring technique for spur gear fault detection using time-domain vibration signal analysis. Vibration signals were acquired from gearboxes and used to simulate various faults on spur gear teeth. In this study, vibration signals were used to monitor normal and faulty conditions of a spur gear: normal, scuffing defect, crack defect, and broken tooth. Statistical parameters of the vibration signal were used to compare and evaluate the fault conditions. The technique can be applied to set alarm limits based on statistical parameters such as variance, kurtosis, rms, and crest factor, which serve as decision boundaries on the signal condition. The results show that vibration signal analysis with a single statistical parameter cannot reliably predict spur gear faults, whereas using at least two statistical parameters clearly separates every fault case. The decision boundary, set at 99.7% certainty (±3σ) from 300 reference datasets, detected the test conditions with 99.7% (±3σ) accuracy and an error of less than 0.3% on 50 test datasets.
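
    A sketch of the indicator computation and the 3σ decision boundary (Python); the signals, features, and limits below are synthetic stand-ins for the paper's gearbox data.

        import numpy as np
        from scipy.stats import kurtosis

        def features(x):
            """Time-domain condition indicators for one vibration record."""
            rms = np.sqrt(np.mean(x ** 2))
            return {"variance": np.var(x),
                    "kurtosis": kurtosis(x, fisher=False),   # 3.0 for a Gaussian
                    "rms": rms,
                    "crest_factor": np.max(np.abs(x)) / rms}

        rng = np.random.default_rng(3)
        refs = [features(rng.normal(size=2048)) for _ in range(300)]  # healthy records

        # 99.7% (mean +/- 3 sigma) decision boundaries per indicator.
        limits = {}
        for name in refs[0]:
            vals = np.array([r[name] for r in refs])
            limits[name] = (vals.mean() - 3 * vals.std(), vals.mean() + 3 * vals.std())

        # A test record: unit noise plus sparse impacts (a crude tooth defect).
        test  = features(rng.normal(size=2048) + 3.0 * (rng.random(2048) < 0.01))
        flags = {name: not (lo <= test[name] <= hi) for name, (lo, hi) in limits.items()}
        print(flags)   # impulsive defects mainly trip kurtosis and crest factor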

  5. Study on Unified Chaotic System-Based Wind Turbine Blade Fault Diagnostic System

    Science.gov (United States)

    Kuo, Ying-Che; Hsieh, Chin-Tsung; Yau, Her-Terng; Li, Yu-Chung

    At present, vibration signals are processed and analyzed mostly in the frequency domain. The spectrum clearly shows the signal structure and the specific characteristic frequency band is analyzed, but the number of calculations required is huge, resulting in delays. Therefore, this study uses the characteristics of a nonlinear system to load the complete vibration signal to the unified chaotic system, applying the dynamic error to analyze the wind turbine vibration signal, and adopting extenics theory for artificial intelligent fault diagnosis of the analysis signal. Hence, a fault diagnostor has been developed for wind turbine rotating blades. This study simulates three wind turbine blade states, namely stress rupture, screw loosening and blade loss, and validates the methods. The experimental results prove that the unified chaotic system used in this paper has a significant effect on vibration signal analysis. Thus, the operating conditions of wind turbines can be quickly known from this fault diagnostic system, and the maintenance schedule can be arranged before the faults worsen, making the management and implementation of wind turbines smoother, so as to reduce many unnecessary costs.

  6. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan

    2017-05-31

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  7. Imaging of Subsurface Faults using Refraction Migration with Fault Flooding

    KAUST Repository

    Metwally, Ahmed Mohsen Hassan; Hanafy, Sherif; Guo, Bowen; Kosmicki, Maximillian Sunflower

    2017-01-01

    We propose a novel method for imaging shallow faults by migration of transmitted refraction arrivals. The assumption is that there is a significant velocity contrast across the fault boundary that is underlain by a refracting interface. This procedure, denoted as refraction migration with fault flooding, largely overcomes the difficulty in imaging shallow faults with seismic surveys. Numerical results successfully validate this method on three synthetic examples and two field-data sets. The first field-data set is next to the Gulf of Aqaba and the second example is from a seismic profile recorded in Arizona. The faults detected by refraction migration in the Gulf of Aqaba data were in agreement with those indicated in a P-velocity tomogram. However, a new fault is detected at the end of the migration image that is not clearly seen in the traveltime tomogram. This result is similar to that for the Arizona data where the refraction image showed faults consistent with those seen in the P-velocity tomogram, except it also detected an antithetic fault at the end of the line. This fault cannot be clearly seen in the traveltime tomogram due to the limited ray coverage.

  8. Electronic prescribing: criteria for evaluating handheld prescribing systems and an evaluation of a new, handheld, wireless wide area network (WWAN) prescribing system.

    Science.gov (United States)

    Goldblum, O M

    2001-02-01

    The objectives of this study were: 1) to establish criteria for evaluating handheld computerized prescribing systems; and 2) to evaluate out-of-box performance and features of a new, Palm Operating System (OS)-based, handheld, wireless wide area network (WWAN) prescribing system. The system consisted of a Palm Vx handheld organizer, a Novatel Minstrel V wireless modem, OmniSky wireless internet access and ePhysician ePad 1.1, the Palm OS electronic prescribing software program. A dermatologist familiar with healthcare information technology conducted an evaluation of the performance and features of a new, handheld, WWAN electronic prescribing system in an office practice during a three-month period in 2000. System performance, defined as transmission success rate, was determined from data collected during the three-month trial. Evaluation criteria consisted of an analysis of features found in electronic prescribing systems. All prescriptions written for all patients seen during a three-month period (August - November, 2000) were eligible for inclusion. Prescriptions written for patients who intended to fill them at pharmacies without known facsimile receiving capabilities were excluded from the study. The performance of the system was evaluated using data collected during the study. Criteria for evaluating features of electronic prescribing systems were developed and used to analyze the system employed in this study. During this three-month trial, 200 electronic prescriptions were generated for 132 patients included in the study. Of these prescriptions, 92.5 percent were successfully transmitted to pharmacies. Transmission failures resulted from incorrect facsimile numbers and non-functioning facsimile machines. Criteria established for evaluation of electronic prescribing systems included System (Hardware & Software), Costs, System Features, Printing & Transmission, Formulary & Insurance, Customization, Drug Safety and Security. This study is the first effort to

  9. Adjustable Parameter-Based Distributed Fault Estimation Observer Design for Multiagent Systems With Directed Graphs.

    Science.gov (United States)

    Zhang, Ke; Jiang, Bin; Shi, Peng

    2017-02-01

    In this paper, a novel adjustable parameter (AP)-based distributed fault estimation observer (DFEO) is proposed for multiagent systems (MASs) with a directed communication topology. First, a relative output estimation error is defined based on the communication topology of the MAS. Then a DFEO with an AP is constructed with the purpose of improving the accuracy of fault estimation. Based on H∞ and H2 performance with pole placement, a multiconstrained design is given to calculate the gain of the DFEO. Finally, simulation results are presented to illustrate the feasibility and effectiveness of the proposed DFEO design with an AP.

  10. Architecture of thrust faults with alongstrike variations in fault-plane dip: anatomy of the Lusatian Fault, Bohemian Massif

    Czech Academy of Sciences Publication Activity Database

    Coubal, Miroslav; Adamovič, Jiří; Málek, Jiří; Prouza, V.

    2014-01-01

    Roč. 59, č. 3 (2014), s. 183-208 ISSN 1802-6222 Institutional support: RVO:67985831 ; RVO:67985891 Keywords : fault architecture * fault plane geometry * drag structures * thrust fault * sandstone * Lusatian Fault Subject RIV: DB - Geology ; Mineralogy Impact factor: 1.405, year: 2014

  11. Inappropriate prescribing: criteria, detection and prevention.

    LENUS (Irish Health Repository)

    O'Connor, Marie N

    2012-06-01

    Inappropriate prescribing is highly prevalent in older people and is a major healthcare concern because of its association with negative healthcare outcomes including adverse drug events, related morbidity and hospitalization. With changing population demographics resulting in increasing proportions of older people worldwide, improving the quality and safety of prescribing in older people poses a global challenge. To date a number of different strategies have been used to identify potentially inappropriate prescribing in older people. Over the last two decades, a number of criteria have been published to assist prescribers in detecting inappropriate prescribing, the majority of which have been explicit sets of criteria, though some are implicit. The majority of these prescribing indicators pertain to overprescribing and misprescribing, with only a minority focussing on the underprescribing of indicated medicines. Additional interventions to optimize prescribing in older people include comprehensive geriatric assessment, clinical pharmacist review, and education of prescribers as well as computerized prescribing with clinical decision support systems. In this review, we describe the inappropriate prescribing detection tools or criteria most frequently cited in the literature and examine their role in preventing inappropriate prescribing and other related healthcare outcomes. We also discuss other measures commonly used in the detection and prevention of inappropriate prescribing in older people and the evidence supporting their use and their application in everyday clinical practice.

  12. Design and Evaluation of an Electronic Override Mechanism for Medication Alerts to Facilitate Communication Between Prescribers and Pharmacists.

    Science.gov (United States)

    Russ, Alissa L; Chen, Siying; Melton, Brittany L; Saleem, Jason J; Weiner, Michael; Spina, Jeffrey R; Daggy, Joanne K; Zillich, Alan J

    2015-07-01

    Computerized medication alerts can often be bypassed by entering an override rationale, but prescribers' override reasons are frequently ambiguous to pharmacists who review orders. To develop and evaluate a new override mechanism for adverse reaction and drug-drug interaction alerts. We hypothesized that the new mechanism would improve usability for prescribers and increase the clinical appropriateness of override reasons. A counterbalanced, crossover study was conducted with 20 prescribers in a simulated prescribing environment. We modified the override mechanism timing, navigation, and text entry. Instead of free-text entry, the new mechanism presented prescribers with a predefined set of override reasons. We assessed usability (learnability, perceived efficiency, and usability errors) and used a priori criteria to evaluate the clinical appropriateness of override reasons entered. Prescribers rated the new mechanism as more efficient (Wilcoxon signed-rank test, P = 0.032). When first using the new design, 5 prescribers had difficulty finding the new mechanism, and 3 interpreted the navigation to mean that the alert could not be overridden. The number of appropriate override reasons significantly increased with the new mechanism compared with the original mechanism (median change of 3.0; interquartile range = 3.0; P < 0.0001). When prescribers were given a menu-based choice for override reasons, clinical appropriateness of these reasons significantly improved. Further enhancements are necessary, but this study is an important first step toward a more standardized menu of override choices. Findings may be used to improve communication through e-prescribing systems between prescribers and pharmacists. © The Author(s) 2015.

  13. Response to "Improving Patient Safety With Error Identification in Chemotherapy Orders by Verification Nurses"
.

    Science.gov (United States)

    Zhu, Ling-Ling; Lv, Na; Zhou, Quan

    2016-12-01

    We read, with great interest, the study by Baldwin and Rodriguez (2016), which described the role of the verification nurse and details the verification process in identifying errors related to chemotherapy orders. We strongly agree with their findings that a verification nurse, collaborating closely with the prescribing physician, pharmacist, and treating nurse, can better identify errors and maintain safety during chemotherapy administration.

  14. Fault-tolerant cooperative output regulation for multi-vehicle systems with sensor faults

    Science.gov (United States)

    Qin, Liguo; He, Xiao; Zhou, D. H.

    2017-10-01

    This paper presents a unified framework of fault diagnosis and fault-tolerant cooperative output regulation (FTCOR) for a linear discrete-time multi-vehicle system with sensor faults. The FTCOR control law is designed through three steps. A cooperative output regulation (COR) controller is designed based on the internal model principle when there are no sensor faults. A sufficient condition on the existence of the COR controller is given based on the discrete-time algebraic Riccati equation (DARE). Then, a decentralised fault diagnosis scheme is designed to cope with sensor faults occurring in followers. A residual generator is developed to detect sensor faults of each follower, and a bank of fault-matching estimators are proposed to isolate and estimate sensor faults of each follower. Unlike current distributed fault diagnosis for multi-vehicle systems, the presented decentralised fault diagnosis scheme reduces the communication and computation load by using only the information of the vehicle itself. By combining the sensor fault estimation and the COR control law, an FTCOR controller is proposed. Finally, the simulation results demonstrate the effectiveness of the FTCOR controller.

  15. A Case for Soft Error Detection and Correction in Computational Chemistry.

    Science.gov (United States)

    van Dam, Hubertus J J; Vishnu, Abhinav; de Jong, Wibe A

    2013-09-10

    High performance computing platforms are expected to deliver 10^18 floating-point operations per second by the year 2022 through the deployment of millions of cores. Even if every core is highly reliable, the sheer number of them will mean that the mean time between failures will become so short that most application runs will suffer at least one fault. In particular, soft errors caused by intermittent incorrect behavior of the hardware are a concern as they lead to silent data corruption. In this paper we investigate the impact of soft errors on optimization algorithms using Hartree-Fock as a particular example. Optimization algorithms iteratively reduce the error in the initial guess to reach the intended solution. Therefore they may intuitively appear to be resilient to soft errors. Our results show that this is true for soft errors of small magnitudes but not for large errors. We suggest error detection and correction mechanisms for different classes of data structures. The results obtained with these mechanisms indicate that we can correct more than 95% of the soft errors at moderate increases in the computational cost.
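
    The record does not detail the suggested mechanisms, but the flavor of detect-and-correct protection for a static data structure can be sketched as follows (Python); the class name and the CRC-plus-replica scheme are illustrative, not the paper's.

        import numpy as np
        import zlib

        # A minimal detect-and-correct wrapper for a static data structure (for
        # example, basis-set constants in an SCF code): keep a golden replica
        # and a CRC; on a checksum mismatch, restore from the replica.
        class Protected:
            def __init__(self, array):
                self.data    = np.array(array)
                self.replica = self.data.copy()
                self.crc     = zlib.crc32(self.data.tobytes())

            def check_and_repair(self):
                if zlib.crc32(self.data.tobytes()) != self.crc:
                    self.data[...] = self.replica    # correct the corruption
                    return True                      # an error was found
                return False

        p = Protected(np.linspace(0.0, 1.0, 8))
        p.data[3] = 99.0                             # simulated silent corruption
        print(p.check_and_repair(), p.data[3])       # True, original value restored

    A real scheme must also protect the replica itself, for example with its own checksum or by majority vote over three copies.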

  16. Simulating and Detecting Radiation-Induced Errors for Onboard Machine Learning

    Science.gov (United States)

    Wagstaff, Kiri L.; Bornstein, Benjamin; Granat, Robert; Tang, Benyang; Turmon, Michael

    2009-01-01

    Spacecraft processors and memory are subjected to high radiation doses and therefore employ radiation-hardened components. However, these components are orders of magnitude more expensive than typical desktop components, and they lag years behind in terms of speed and size. We have integrated algorithm-based fault tolerance (ABFT) methods into onboard data analysis algorithms to detect radiation-induced errors, which ultimately may permit the use of spacecraft memory that need not be fully hardened, reducing cost and increasing capability at the same time. We have also developed a lightweight software radiation simulator, BITFLIPS, that permits evaluation of error detection strategies in a controlled fashion, including the specification of the radiation rate and selective exposure of individual data structures. Using BITFLIPS, we evaluated our error detection methods when using a support vector machine to analyze data collected by the Mars Odyssey spacecraft. We found ABFT error detection for matrix multiplication is very successful, while error detection for Gaussian kernel computation still has room for improvement.
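
    The ABFT check for matrix multiplication mentioned above is easy to sketch (Python); this is a simplified version of the classic Huang-Abraham checksum construction, not the BITFLIPS code itself.

        import numpy as np

        def abft_matmul(A, B, tol=1e-8):
            """Algorithm-based fault tolerance for C = A @ B.

            Append a column-checksum row to A and a row-checksum column to B;
            the product then carries checksums that expose silent corruption
            of C during the computation.
            """
            Ac = np.vstack([A, A.sum(axis=0)])                 # column checksums
            Br = np.hstack([B, B.sum(axis=1, keepdims=True)])  # row checksums
            Cf = Ac @ Br
            C  = Cf[:-1, :-1]
            row_ok = np.allclose(Cf[:-1, -1], C.sum(axis=1), atol=tol)
            col_ok = np.allclose(Cf[-1, :-1], C.sum(axis=0), atol=tol)
            return C, row_ok and col_ok

        rng = np.random.default_rng(4)
        A, B = rng.random((16, 8)), rng.random((8, 12))
        C, ok = abft_matmul(A, B)
        print(ok)   # True on fault-free hardware; a flipped entry in the
                    # product would break the row/column checksum agreement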

  17. Smartphone apps to support hospital prescribing and pharmacology education: a review of current provision.

    Science.gov (United States)

    Haffey, Faye; Brady, Richard R W; Maxwell, Simon

    2014-01-01

    Junior doctors write the majority of hospital prescriptions but many indicate they feel underprepared to assume this responsibility and around 10% of prescriptions contain errors. Medical smartphone apps are now widely used in clinical practice and present an opportunity to provide support to inexperienced prescribers. This study assesses the contemporary range of smartphone apps with prescribing or related content. Six smartphone app stores were searched for apps aimed at the healthcare professional with drug, pharmacology or prescribing content. Three hundred and six apps were identified. 34% appeared to be for use within the clinical environment to aid prescribing, 14% for use outside the clinical setting, and 51% of apps were deemed appropriate for both clinical and non-clinical use. Apps with drug reference material, such as textbooks, manuals or medical apps with drug information were the commonest apps found (51%), followed by apps offering drug or infusion rate dose calculation (26%). 68% of apps charged for download, with a mean price of £14.25 per app and a range of £0.62-101.90. A diverse range of pharmacology-themed apps is available and there is further potential for the development of contemporary apps to improve prescribing performance. Personalized app stores may help universities/healthcare organizations offer high quality apps to students to aid in pharmacology education. Users of prescribing apps must be aware of the lack of information regarding the medical expertise of app developers. This will enable them to make informed choices about the use of such apps in their clinical practice. © 2013 The British Pharmacological Society.

  18. Robust Mpc for Actuator–Fault Tolerance Using Set–Based Passive Fault Detection and Active Fault Isolation

    Directory of Open Access Journals (Sweden)

    Xu Feng

    2017-03-01

    Full Text Available In this paper, a fault-tolerant control (FTC) scheme is proposed for actuator faults, built upon tube-based model predictive control (MPC) as well as set-based fault detection and isolation (FDI). Within the class of MPC techniques, tube-based MPC can effectively deal with system constraints and uncertainties at relatively low computational complexity compared with other robust MPC techniques such as min-max MPC. Set-based FDI, generally considering the worst case of uncertainties, can robustly detect and isolate actuator faults. In the proposed FTC scheme, fault detection (FD) is passive, using invariant sets, while fault isolation (FI) is active, by means of MPC and tubes. The active FI method proposed in this paper is implemented by making use of the constraint-handling ability of MPC to manipulate the bounds of the inputs.
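
    Passive set-based fault detection reduces, in its simplest form, to testing whether the current residual lies in a healthy invariant set computed offline. A minimal sketch (Python), with a box standing in for the zonotopic or polytopic sets used in practice; all bounds are illustrative.

        import numpy as np

        # Under healthy operation the residual stays inside an invariant set
        # computed offline from the worst-case uncertainty; leaving the set
        # signals an actuator fault.
        HEALTHY_BOX = np.array([[-0.1, 0.1],      # bound on residual component 1
                                [-0.2, 0.2]])     # bound on residual component 2

        def fault_detected(residual):
            r = np.asarray(residual)
            return bool(np.any(r < HEALTHY_BOX[:, 0]) or np.any(r > HEALTHY_BOX[:, 1]))

        print(fault_detected([0.05, -0.15]))   # False: inside the healthy set
        print(fault_detected([0.05, -0.35]))   # True: residual left the set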

  19. Determining on-fault magnitude distributions for a connected, multi-fault system

    Science.gov (United States)

    Geist, E. L.; Parsons, T.

    2017-12-01

    A new method is developed to determine on-fault magnitude distributions within a complex and connected multi-fault system. A binary integer programming (BIP) method is used to distribute earthquakes from a 10 kyr synthetic regional catalog, with a minimum magnitude threshold of 6.0 and Gutenberg-Richter (G-R) parameters (a- and b-values) estimated from historical data. Each earthquake in the synthetic catalog can occur on any fault and at any location. In the multi-fault system, earthquake ruptures are allowed to branch or jump from one fault to another. The objective is to minimize the slip-rate misfit relative to target slip rates for each of the faults in the system. Maximum and minimum slip-rate estimates around the target slip rate are used as explicit constraints. An implicit constraint is that an earthquake can only be located on a fault (or series of connected faults) if it is long enough to contain that earthquake. The method is demonstrated in the San Francisco Bay area, using UCERF3 faults and slip-rates. We also invoke the same assumptions regarding background seismicity, coupling, and fault connectivity as in UCERF3. Using the preferred regional G-R a-value, which may be suppressed by the 1906 earthquake, the BIP problem is deemed infeasible when faults are not connected. Using connected faults, however, a solution is found in which there is a surprising diversity of magnitude distributions among faults. In particular, the optimal magnitude distribution for earthquakes that participate along the Peninsula section of the San Andreas fault indicates a deficit of magnitudes in the M6.0-7.0 range. For the Rodgers Creek-Hayward fault combination, there is a deficit in the M6.0-6.6 range. Rather than solving this as an optimization problem, we can set the objective function to zero and solve this as a constraint problem. Among the solutions to the constraint problem is one that admits many more earthquakes in the deficit magnitude ranges for both faults
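
    The constraint-problem variant described at the end of the abstract (objective set to zero) can be sketched with SciPy's mixed-integer solver; the catalog, slip contributions, and slip-rate windows below are synthetic toys, not UCERF3 data.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        rng = np.random.default_rng(5)
        N, F = 12, 3                               # earthquakes, faults (toy sizes)
        slip = rng.uniform(0.5, 2.0, size=(N, F))  # slip-rate contribution of eq i on fault j
        lo = np.array([4.0, 3.0, 2.0])             # minimum target slip rates (mm/yr)
        hi = np.array([8.0, 7.0, 6.0])             # maximum target slip rates (mm/yr)

        n = N * F                                  # binary x[i, j], flattened row-major
        A_assign = np.zeros((N, n))                # each earthquake sits on exactly one fault
        for i in range(N):
            A_assign[i, i * F:(i + 1) * F] = 1.0
        A_slip = np.zeros((F, n))                  # accumulated slip rate per fault
        for j in range(F):
            A_slip[j, j::F] = slip[:, j]

        res = milp(c=np.zeros(n),                  # zero objective: pure constraint problem
                   integrality=np.ones(n),
                   bounds=Bounds(0, 1),
                   constraints=[LinearConstraint(A_assign, 1, 1),
                                LinearConstraint(A_slip, lo, hi)])
        if res.status == 0:
            x = np.round(res.x).astype(int).reshape(N, F)
            print("earthquakes per fault:", x.sum(axis=0))
        else:
            print("infeasible under these slip-rate windows")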

  20. A New Method of PV Array Faults Diagnosis in Smart Grid

    Directory of Open Access Journals (Sweden)

    Ze Cheng

    2014-01-01

    Full Text Available A new fault diagnosis method is proposed for PV arrays with SP connection in this study, the advantages of which are that it would minimize the number of sensors needed and that the accuracy and anti-interference ability are improved with the introduction of fuzzy group decision-making theory. We considered five “decision makers” contributing to the diagnosis of PV array faults, including voltage, current, environmental temperature, panel temperature, and solar illumination. The accuracy and reliability of the proposed method were verified experimentally, and the possible factors contributing to diagnosis deviation were analyzed, based on which solutions were suggested to reduce or eliminate errors in aspects of hardware and software.
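
    The fuzzy group decision step can be illustrated with a minimal weighted-aggregation sketch (Python); the fault classes, membership values, and weights below are invented for illustration and are not taken from the paper.

        import numpy as np

        # Five "decision makers" (voltage, current, ambient temperature, panel
        # temperature, illumination) each output fuzzy memberships over the
        # candidate fault classes; a weighted aggregate picks the diagnosis.
        classes = ["normal", "short-circuit", "open-circuit", "partial shading"]
        weights = np.array([0.30, 0.30, 0.10, 0.15, 0.15])   # decision-maker weights

        memberships = np.array([
            # normal  short  open   shading
            [0.05,   0.70,  0.15,  0.10],   # voltage
            [0.10,   0.60,  0.20,  0.10],   # current
            [0.40,   0.20,  0.20,  0.20],   # ambient temperature
            [0.20,   0.50,  0.15,  0.15],   # panel temperature
            [0.30,   0.20,  0.10,  0.40],   # solar illumination
        ])

        aggregate = weights @ memberships
        print(classes[int(np.argmax(aggregate))], np.round(aggregate, 3))
        # -> "short-circuit" wins under these illustrative memberships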

  1. Medication errors in chemotherapy preparation and administration: a survey conducted among oncology nurses in Turkey.

    Science.gov (United States)

    Ulas, Arife; Silay, Kamile; Akinci, Sema; Dede, Didem Sener; Akinci, Muhammed Bulent; Sendur, Mehmet Ali Nahit; Cubukcu, Erdem; Coskun, Hasan Senol; Degirmenci, Mustafa; Utkan, Gungor; Ozdemir, Nuriye; Isikdogan, Abdurrahman; Buyukcelik, Abdullah; Inanc, Mevlude; Bilici, Ahmet; Odabasi, Hatice; Cihan, Sener; Avci, Nilufer; Yalcin, Bulent

    2015-01-01

    Medication errors in oncology may cause severe clinical problems due to the low therapeutic indices and high toxicity of chemotherapeutic agents. We aimed to investigate unintentional medication errors and their underlying factors during chemotherapy preparation and administration, based on a systematic survey designed to reflect oncology nurses' experience. The study was conducted in 18 adult chemotherapy units with the voluntary participation of 206 nurses. A survey was developed by the primary investigators; medication errors (MEs) were defined as preventable errors during the prescribing, ordering, preparation, or administration of medication. The survey consisted of four parts: demographic features of the nurses; workload of the chemotherapy units; errors, and their estimated monthly number, during chemotherapy preparation and administration; and evaluation of the possible factors responsible for MEs. The survey was conducted by face-to-face interview, and data were analyzed with descriptive statistics. Chi-square or Fisher exact tests were used for comparative analysis of categorical data. Some 83.4% of the 210 nurses reported one or more errors during chemotherapy preparation and administration. Wrong doses prescribed or ordered by physicians (65.7%) and noncompliance with administration sequences during chemotherapy administration (50.5%) were the most common errors. The most common estimated average monthly error was not following the administration sequence of the chemotherapeutic agents (4.1 times/month, range 1-20). The most important underlying reasons for medication errors were heavy workload (49.7%) and insufficient staffing (36.5%). Our findings suggest that the probability of medication error is very high during chemotherapy preparation and administration, most commonly involving prescribing and ordering errors. Further studies must address strategies to minimize medication errors in patients receiving chemotherapy, and determine sufficient protective measures

  2. Single-phased Fault Location on Transmission Lines Using Unsynchronized Voltages

    Directory of Open Access Journals (Sweden)

    ISTRATE, M.

    2009-10-01

    Full Text Available Increased accuracy in fault detection and location simplifies maintenance, which is the motivation for developing more precise fault-location methods. The literature presents many methods for fault location that use voltage and current measurements at one or both terminals of a power line. Algorithms using synchronized two-end data are very precise, but current transformers can limit the accuracy of the estimates. This paper presents an algorithm for locating single-phase faults that uses only voltage measurements at both terminals of a transmission line, thereby eliminating the error due to current transformers and avoiding the restriction of perfect data synchronization. Under these conditions, the algorithm can be used with the existing equipment of most power grids, since the installation of phasor measurement units with GPS-synchronized timers is not compulsory. Only the positive-sequence line and source parameters are used, eliminating the uncertainty in zero-sequence parameter estimation. The algorithm is tested using the results of EMTP-ATP simulations, after validation of the ATP models against results recorded in a real power grid.

  3. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks.

    Science.gov (United States)

    Abba, Sani; Lee, Jeong-A

    2015-08-18

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of the self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data, and examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques into route formation and route repair to provide resilience to errors and failures. This is achieved by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, together with multiple random functions, to attain faster routing convergence and reliable route repair despite transient and permanent node failures, and to adapt efficiently to instantaneous changes in network topology. Simulation results comparing ASAART with the SHR and SSR protocols in five different scenarios, in the presence of transient and permanent node failures, show greater resiliency to errors and failures and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC-layer packets, and packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network.

  4. The use of prescribed and non-prescribed medication by Dutch children.

    NARCIS (Netherlands)

    Dijk, L. van; Lindert, H. van

    2002-01-01

    Background: Most research on the use of medication focuses on adults. Children, however, use medication too, most of which is prescribed by GPs. Children also use non-prescribed medication (e.g., bought at the drugstore), but the extent of this use is not known. Moreover, it is not known to what extent

  5. Fault-Tolerant Approach for Modular Multilevel Converters under Submodule Faults

    DEFF Research Database (Denmark)

    Deng, Fujin; Tian, Yanjun; Zhu, Rongwu

    2016-01-01

    The modular multilevel converter (MMC) is attractive for medium- or high-power applications because of the advantages of its high modularity, availability, and high power quality. Fault-tolerant operation is one of the important issues for the MMC. This paper proposes a fault-tolerant approach for the MMC under submodule (SM) faults. The characteristic of the MMC with arms containing different numbers of healthy SMs under faults is analyzed. Based on this characteristic, the proposed approach can effectively keep the MMC operating as normal under SM faults. It can effectively improve the MMC...

  6. IAS 8, Accounting Policies, Changes in Accounting Estimates and Errors – A Closer Look

    OpenAIRE

    Muthupandian, K S

    2008-01-01

    The International Accounting Standards Board issued the revised version of the International Accounting Standard 8, Accounting Policies, Changes in Accounting Estimates and Errors. The objective of IAS 8 is to prescribe the criteria for selecting, applying and changing accounting policies, together with the accounting treatment and disclosure of changes in accounting policies, changes in accounting estimates and the corrections of errors. This article presents a closer look at the standard (o...

  7. Barriers to accepting e-prescribing in the U.S.A.

    Science.gov (United States)

    Smith, Alan D

    2006-01-01

    With the number of prescriptions rising nationally each year, it is surprising that Web-based technology is not fully embraced in the pharmacy industry as an aid to quality-assuring prescribing processes. Traditional prescription handling is done in a manual fashion with physicians hand-writing prescriptions for the patients during an office visit, giving the patient the responsibility of taking the prescription to a pharmacy or mailing the prescription to a mail order company for fulfillment. Electronic prescribing (e-prescribing) has the ability not only to streamline the prescription writing process, but also to reduce the number of errors that may be incurred with hand-written prescriptions. The purpose of this paper is to investigate these phenomena in the U.S.A. A number of hypotheses were tested using principal-components analysis (PCA) and factor analyses. As a result, a total of 55 fully employed, professional and semi-professional service management and internet users, representing a college-educated and knowledge-based sample derived from the metropolitan section of Pittsburgh, was selected. The six major constructs generated from the factor loadings in descending order of importance were: profit and risk factors, shipping and handling, saving, customer relationship management (CRM) and ethics, age, and awareness. The dependent variable chosen to be regressed against these major independent factor-based constructs was willingness to purchase prescriptions online. The overall relationship was found to be statistically significant (F = 2.971, p = 0.015) in predicting willingness to use e-prescribing options based on the various independent constructs. However, when testing the various standardized beta coefficients in the linear model, only the factor score-based construct CRM and ethics was found to significantly contribute to predicting the willingness to purchase prescriptions online (t = -3.074, p = 0.003). Although this study appears to represent the

  8. Reducing diagnostic errors in medicine: what's the goal?

    Science.gov (United States)

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee system errors will persist, when resources are just shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  9. Numerical simulations of earthquakes and the dynamics of fault systems using the Finite Element method.

    Science.gov (United States)

    Kettle, L. M.; Mora, P.; Weatherley, D.; Gross, L.; Xing, H.

    2006-12-01

    Simulations using the Finite Element method are widely used in many engineering applications and for the solution of partial differential equations (PDEs). Computational models based on the solution of PDEs play a key role in earth systems simulations. We present numerical modelling of crustal fault systems where the dynamic elastic wave equation is solved using the Finite Element method. This is achieved using a high-level computational modelling language, escript, available as open source software from ACcESS (Australian Computational Earth Systems Simulator) at the University of Queensland. Escript is an advanced geophysical simulation software package developed at ACcESS which includes parallel equation solvers, data visualisation and data analysis software. The escript library was used to develop a flexible Finite Element model which reliably simulates the mechanism of faulting and the physics of earthquakes. Both 2D and 3D elastodynamic models are being developed to study the dynamics of crustal fault systems. Our final goal is to build a flexible model which can be applied to any fault system with user-defined geometry and input parameters. To study the physics of earthquake processes, two different time scales must be modelled: firstly, the quasi-static loading phase which gradually increases stress in the system (~100 years), and secondly, the dynamic rupture process which rapidly redistributes stress in the system (~100 seconds). We will discuss the solution of the time-dependent elastic wave equation for an arbitrary fault system using escript. This involves prescribing the correct initial stress distribution in the system to simulate the quasi-static loading of faults to failure; determining a suitable frictional constitutive law which accurately reproduces the dynamics of the stick/slip instability at the faults; and using a robust time integration scheme. These dynamic models generate data and information that can be used for earthquake forecasting.
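
    A minimal sketch of the second time scale described above, assuming nothing about escript itself: an explicit central-difference time integration of the 1D elastic wave equation in plain NumPy (finite differences rather than the record's Finite Element formulation), with illustrative material constants.

    ```python
    import numpy as np

    # Explicit time stepping for rho * u_tt = E * u_xx on a 1D bar,
    # a toy stand-in for the dynamic rupture phase (~100 s) discussed above.
    # All parameter values are illustrative, not taken from the record.
    nx, nt = 400, 1200            # grid points, time steps
    dx = 10.0                     # spatial step (m)
    rho, E = 2700.0, 6.0e10       # density (kg/m^3), stiffness (Pa)
    c = np.sqrt(E / rho)          # elastic wave speed (m/s)
    dt = 0.4 * dx / c             # CFL-stable time step

    u_prev = np.zeros(nx)         # displacement at step n-1
    u = np.zeros(nx)              # displacement at step n
    u[nx // 2] = 1e-3             # initial pulse at the domain centre

    for _ in range(nt):
        lap = np.zeros(nx)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        # Central-difference update; fixed (zero-displacement) ends.
        u_next = 2.0 * u - u_prev + (c * dt) ** 2 * lap
        u_prev, u = u, u_next

    print("peak displacement after propagation:", np.abs(u).max())
    ```

    A stable explicit scheme of this kind suits the ~100 s rupture phase; the ~100 yr quasi-static loading phase is better served by an implicit or static solver, which is why the record treats the two time scales separately.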

  10. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.
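
    As a hedged illustration of the burst-error experiments described above, the sketch below exercises a single Reed-Solomon code via the third-party `reedsolo` package, a drastic simplification of the CD-ROM's four-level interleaved coding: for one non-interleaved code, correction depends only on burst length, never on position, which is exactly the baseline against which the sector-position sensitivity reported above stands out. Comparing the decoded payload with the original plays the role the CRC plays in the record: catching miscorrection.

    ```python
    # Requires the third-party `reedsolo` package (pip install reedsolo).
    from reedsolo import RSCodec, ReedSolomonError

    nsym = 32                       # parity bytes -> corrects up to 16 byte errors
    rsc = RSCodec(nsym)
    payload = bytes(range(223))     # one toy "sector" payload
    codeword = rsc.encode(payload)

    for burst_len in range(8, 25, 4):
        corrupted = bytearray(codeword)
        start = 40                  # arbitrary burst position within the codeword
        for i in range(start, start + burst_len):
            corrupted[i] ^= 0xFF    # burst: flip every bit in the span
        try:
            # Recent reedsolo versions return (message, codeword, errata positions).
            decoded = rsc.decode(bytes(corrupted))[0]
            ok = bytes(decoded) == payload
            status = "corrected" if ok else "miscorrected (caught by comparison)"
        except ReedSolomonError:
            status = "uncorrectable"
        print(f"{burst_len:2d}-byte burst: {status}")
    ```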

  11. The Supply of Prescription Opioids: Contributions of Episodic-Care Prescribers and High-Quantity Prescribers.

    Science.gov (United States)

    Schneberk, Todd; Raffetto, Brian; Kim, David; Schriger, David L

    2018-06-01

    We determine episodic and high-quantity prescribers' contribution to opioid prescriptions and total morphine milligram equivalents in California, especially among individuals prescribed large amounts of opioids. This was a cross-sectional descriptive analysis of opioid prescribing patterns during an 8-year period using the de-identified Controlled Substance Utilization Review and Evaluation System (CURES) database, the California subsection of the prescription drug monitoring program. We took a 10% random sample of all patients and stratified them by the amount of prescription opioids obtained during their maximal 90-day period. We identified "episodic prescribers" as those whose prescribing pattern included short-acting opioids on greater than 95% of all prescriptions, fewer than or equal to 31 pills on 95% of all prescriptions, only 1 prescription in the database for greater than 90% of all patients to whom they gave opioids, fewer than 6 prescriptions in the database for greater than 99% of patients given opioids, and fewer than 540 prescriptions per year. We identified the top 5% of prescribers by their morphine milligram equivalents per day in the database. We examined the relationship between patient opioid prescriptions and provider type, with the primary analysis performed on the cohort of patients who received only short-acting opioids, in an attempt to exclude guideline-concordant palliative, oncologic, and addiction care, and a secondary analysis performed on all patients. Among patients receiving only short-acting opioids, episodic prescribers (14.6% of 173,000 prescribers) wrote at least one prescription for 25% of 2.7 million individuals but were responsible for less than 9% of the 10.5 million opioid prescriptions and less than 3% of the 3.9 billion morphine milligram equivalents in our sample. Among individuals with high morphine milligram equivalents use, episodic prescribers were responsible for 2.8% of prescriptions and 0.6% of total morphine milligram equivalents.
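
    The quoted rule set is mechanical enough to state as code. Below is a hypothetical pandas sketch of the episodic-prescriber classification; the column names and toy rows are invented, since the CURES schema is not given in the record.

    ```python
    import pandas as pd

    YEARS_OBSERVED = 8  # the record covers an 8-year CURES extract

    # Toy rows standing in for the (non-public) schema; column names invented.
    rx = pd.DataFrame({
        "prescriber_id": ["A", "A", "A", "B", "B"],
        "patient_id":    [1, 2, 3, 4, 4],
        "short_acting":  [True, True, True, False, True],
        "pill_count":    [20, 30, 15, 90, 120],
    })

    def is_episodic(g: pd.DataFrame) -> bool:
        """Episodic-prescriber rules as quoted in the abstract."""
        per_patient = g.groupby("patient_id").size()
        return bool(
            g["short_acting"].mean() > 0.95             # short-acting on >95% of scripts
            and (g["pill_count"] <= 31).mean() >= 0.95  # <=31 pills on 95% of scripts
            and (per_patient == 1).mean() > 0.90        # one script for >90% of patients
            and (per_patient < 6).mean() > 0.99         # <6 scripts for >99% of patients
            and len(g) < 540 * YEARS_OBSERVED           # fewer than 540 scripts per year
        )

    print(rx.groupby("prescriber_id").apply(is_episodic))  # A: True, B: False
    ```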

  12. Rectifier Fault Diagnosis and Fault Tolerance of a Doubly Fed Brushless Starter Generator

    Directory of Open Access Journals (Sweden)

    Liwei Shi

    2015-01-01

    This paper presents a rectifier fault diagnosis method based on wavelet packet analysis to improve the reliability of a fault-tolerant four-phase doubly fed brushless starter generator (DFBLSG) system. The system components and the fault-tolerance principle of the highly reliable DFBLSG are given, and the common faults of the rectifier are analyzed. The wavelet-packet-transform fault detection/identification algorithm is introduced in detail. Fault-tolerance performance and output voltage experiments were done to gather the energy characteristics with a voltage sensor. The signal is analyzed with 5-layer wavelet packets, and the energy eigenvalue of each frequency band is obtained. Meanwhile, an energy-eigenvalue tolerance was introduced to improve the diagnostic accuracy. With the wavelet packet fault diagnosis, the fault-tolerant four-phase DFBLSG can detect the usual open-circuit fault and operate in fault-tolerant mode if there is a fault. The results indicate that the fault analysis techniques in this paper are accurate and effective.
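
    The energy-eigenvalue feature extraction described above maps naturally onto PyWavelets. The sketch below is an assumption-laden illustration: the synthetic signal, the `db4` wavelet, and the normalisation choice are all mine, not the authors'.

    ```python
    import numpy as np
    import pywt  # PyWavelets

    # 5-layer wavelet-packet decomposition of a synthetic "rectifier voltage".
    fs = 10_000                               # sample rate (Hz), illustrative
    t = np.arange(0, 0.2, 1 / fs)
    voltage = np.sign(np.sin(2 * np.pi * 50 * t)) + 0.05 * np.random.randn(t.size)

    wp = pywt.WaveletPacket(data=voltage, wavelet="db4", mode="symmetric", maxlevel=5)
    bands = wp.get_level(5, order="freq")     # the 32 level-5 frequency bands

    # Energy eigenvalue of each band: sum of squared packet coefficients,
    # normalised so the feature vector sums to one.
    energies = np.array([np.sum(band.data ** 2) for band in bands])
    features = energies / energies.sum()
    print(features.round(4))                  # feed to a thresholded fault classifier
    ```

    An open-circuit fault shifts energy between bands, so a tolerance around the healthy feature vector (the record's "energy-eigenvalue tolerance") gives a simple detection criterion.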

  13. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Franceschini, Andrea; Ferronato, Massimiliano, E-mail: massimiliano.ferronato@unipd.it; Janna, Carlo; Teatini, Pietro

    2016-06-01

    The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of the Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions. - Highlights: • A numerical model is developed for the simulation of fault and fracture mechanics. • The model is implemented in the framework of the Finite Element method and with the aid of Lagrange multipliers. • The proposed formulation introduces a new contribution due to the frictional work on the portion of activated fault. • The resulting algorithm is highly non-linear as the portion of activated fault is itself unknown. • The numerical solution is validated against analytical results and proves to be stable also in realistic applications.
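
    The record's formulation (frictional work, maximum plastic dissipation) is much richer than can be shown briefly, but the core device, imposing contact constraints with Lagrange multipliers, reduces to solving a saddle-point system. A minimal NumPy sketch with a made-up 3-dof "mesh":

    ```python
    import numpy as np

    # Enforce a linear constraint C u = g on a discretised elastic system
    # K u = f by Lagrange multipliers: solve [[K, C^T], [C, 0]] [u; lam] = [f; g].
    # K, C, f, g are tiny illustrative stand-ins, not a fault model.
    K = np.array([[ 4.0, -1.0,  0.0],
                  [-1.0,  4.0, -1.0],
                  [ 0.0, -1.0,  4.0]])
    f = np.array([0.0, 1.0, 0.0])
    C = np.array([[1.0, 0.0, -1.0]])   # tie dof 0 to dof 2 (no-slip contact)
    g = np.array([0.0])

    n, m = K.shape[0], C.shape[0]
    A = np.block([[K, C.T], [C, np.zeros((m, m))]])
    sol = np.linalg.solve(A, np.concatenate([f, g]))
    u, lam = sol[:n], sol[n:]
    print("displacements:", u)                 # u[0] == u[2] by construction
    print("contact traction (multiplier):", lam)
    ```

    In the record's frictional setting the active (slipping) portion of the fault is itself unknown, so the constrained set changes between non-linear iterations, which is what makes the algorithm, as the highlights note, highly non-linear.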

  14. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    International Nuclear Information System (INIS)

    Franceschini, Andrea; Ferronato, Massimiliano; Janna, Carlo; Teatini, Pietro

    2016-01-01

    The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of the Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with a fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions. - Highlights: • A numerical model is developed for the simulation of fault and fracture mechanics. • The model is implemented in the framework of the Finite Element method and with the aid of Lagrange multipliers. • The proposed formulation introduces a new contribution due to the frictional work on the portion of activated fault. • The resulting algorithm is highly non-linear as the portion of activated fault is itself unknown. • The numerical solution is validated against analytical results and proves to be stable also in realistic applications.

  15. Fault displacement along the Naruto-South fault, the Median Tectonic Line active fault system in the eastern part of Shikoku, southwestern Japan

    OpenAIRE

    高田, 圭太; 中田, 高; 後藤, 秀昭; 岡田, 篤正; 原口, 強; 松木, 宏彰

    1998-01-01

    The Naruto-South fault is situated about 1,000 m south of the Naruto fault, part of the Median Tectonic Line active fault system in the eastern part of Shikoku. We investigated the fault topography and subsurface geology of this fault by interpreting large-scale aerial photographs, collecting borehole data and conducting a Geo-Slicer survey. The results obtained are as follows: 1) the Naruto-South fault runs on the Yoshino River deltaic plain for at least 2.5 km with a fault scarplet; the Naruto-South fault is o...

  16. Auditing GPs' prescribing habits : Cardiovascular prescribing frequently continues medication initiated by specialists

    NARCIS (Netherlands)

    de Vries, C.S; van Diepen, N.M; de Jong-van den Berg, L T W

    Objective: To determine to what extent general practitioners' (GPs) prescribing behaviour is a result of repeat prescribing of medication which has been initiated by specialists. Method: During a 4-week period, pharmacists identified GPs' prescriptions for a large group of cardiovascular drugs.

  17. The relationships among work stress, strain and self-reported errors in UK community pharmacy.

    Science.gov (United States)

    Johnson, S J; O'Connor, E M; Jacobs, S; Hassell, K; Ashcroft, D M

    2014-01-01

    Changes in the UK community pharmacy profession including new contractual frameworks, expansion of services, and increasing levels of workload have prompted concerns about rising levels of workplace stress and overload. This has implications for pharmacist health and well-being and the occurrence of errors that pose a risk to patient safety. Despite these concerns being voiced in the profession, few studies have explored work stress in the community pharmacy context. To investigate work-related stress among UK community pharmacists and to explore its relationships with pharmacists' psychological and physical well-being, and the occurrence of self-reported dispensing errors and detection of prescribing errors. A cross-sectional postal survey of a random sample of practicing community pharmacists (n = 903) used ASSET (A Shortened Stress Evaluation Tool) and questions relating to self-reported involvement in errors. Stress data were compared to general working population norms, and regressed on well-being and self-reported errors. Analysis of the data revealed that pharmacists reported significantly higher levels of workplace stressors than the general working population, with concerns about work-life balance, the nature of the job, and work relationships being the most influential on health and well-being. Despite this, pharmacists were not found to report worse health than the general working population. Self-reported error involvement was linked to both high dispensing volume and being troubled by perceived overload (dispensing errors), and resources and communication (detection of prescribing errors). This study contributes to the literature by benchmarking community pharmacists' health and well-being, and investigating sources of stress using a quantitative approach. A further important contribution to the literature is the identification of a quantitative link between high workload and self-reported dispensing errors. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Measurement-based analysis of error latency [in computer operating systems]

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
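
    The latency-reconstruction idea is easy to restate as a toy simulation, under assumptions of mine (synthetic access traces, uniform fault injection) rather than the paper's VAX measurements: a fault injected into a page becomes a discovered error the next time that page is touched.

    ```python
    import random

    random.seed(1)
    HORIZON = 24 * 3600                        # one day, in seconds
    # Synthetic per-page access timestamps standing in for measured activity.
    pages = {p: sorted(random.uniform(0, HORIZON)
                       for _ in range(random.randint(2, 40)))
             for p in range(200)}

    latencies, undiscovered = [], 0
    for _ in range(1000):                      # inject 1000 random faults
        page = random.randrange(200)
        t_fault = random.uniform(0, HORIZON)
        later = [t for t in pages[page] if t >= t_fault]
        if later:
            latencies.append(later[0] - t_fault)   # discovery latency
        else:
            undiscovered += 1                  # still latent at end of trace

    print(f"mean latency: {sum(latencies) / len(latencies) / 3600:.2f} h, "
          f"undiscovered: {100 * undiscovered / 1000:.1f}%")
    ```

    Re-running with busier or idler traces reproduces the qualitative finding above: latency shrinks as workload (access density) grows.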

  19. Robust Fault Diagnosis Design for Linear Multiagent Systems with Incipient Faults

    Directory of Open Access Journals (Sweden)

    Jingping Xia

    2015-01-01

    The design of a robust fault estimation observer is studied for linear multiagent systems subject to incipient faults. Since incipient faults lie in the low-frequency domain, fault estimation for such faults is proposed for discrete-time multiagent systems based on a finite-frequency technique. Moreover, using a decomposition design, an equivalent conclusion is given. Simulation results for a numerical example are presented to demonstrate the effectiveness of the proposed techniques.

  20. Stafford fault system: 120 million year fault movement history of northern Virginia

    Science.gov (United States)

    Powars, David S.; Catchings, Rufus D.; Horton, J. Wright; Schindler, J. Stephen; Pavich, Milan J.

    2015-01-01

    The Stafford fault system, located in the mid-Atlantic coastal plain of the eastern United States, provides the most complete record of fault movement during the past ~120 m.y. across the Virginia, Washington, District of Columbia (D.C.), and Maryland region, including displacement of Pleistocene terrace gravels. The Stafford fault system is close to and aligned with the Piedmont Spotsylvania and Long Branch fault zones. The dominant southwest-northeast trend of strong shaking from the 23 August 2011, moment magnitude Mw 5.8 Mineral, Virginia, earthquake is consistent with the connectivity of these faults, as seismic energy appears to have traveled along the documented and proposed extensions of the Stafford fault system into the Washington, D.C., area. Some other faults documented in the nearby coastal plain are clearly rooted in crystalline basement faults, especially along terrane boundaries. These coastal plain faults are commonly assumed to have undergone relatively uniform movement through time, with average slip rates from 0.3 to 1.5 m/m.y. However, there were higher rates during the Paleocene–early Eocene and the Pliocene (4.4–27.4 m/m.y), suggesting that slip occurred primarily during large earthquakes. Further investigation of the Stafford fault system is needed to understand potential earthquake hazards for the Virginia, Maryland, and Washington, D.C., area. The combined Stafford fault system and aligned Piedmont faults are ~180 km long, so if the combined fault system ruptured in a single event, it would result in a significantly larger magnitude earthquake than the Mineral earthquake. Many structures most strongly affected during the Mineral earthquake are along or near the Stafford fault system and its proposed northeastward extension.

  1. Optimal fault signal estimation

    NARCIS (Netherlands)

    Stoorvogel, Antonie Arij; Niemann, H.H.; Saberi, A.; Sannuti, P.

    2002-01-01

    We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. On the other hand, regarding fault signal estimation, we seek either $H_2$ optimal, $H_2$ suboptimal or $H_\infty$ suboptimal estimation. By

  2. Faulting at Mormon Point, Death Valley, California: A low-angle normal fault cut by high-angle faults

    Science.gov (United States)

    Keener, Charles; Serpa, Laura; Pavlis, Terry L.

    1993-04-01

    New geophysical and fault kinematic studies indicate that late Cenozoic basin development in the Mormon Point area of Death Valley, California, was accommodated by fault rotations. Three of six fault segments recognized at Mormon Point are now inactive and have been rotated to low dips during extension. The remaining three segments are now active and moderately to steeply dipping. From the geophysical data, one active segment appears to offset the low-angle faults in the subsurface of Death Valley.

  3. Error Correction using Quantum Quasi-Cyclic Low-Density Parity-Check (LDPC) Codes

    Science.gov (United States)

    Jing, Lin; Brun, Todd; Quantum Research Team

    Quasi-cyclic LDPC codes can approach the Shannon capacity and have efficient decoders. Hagiwara et al. (2007) presented a method to calculate parity-check matrices with high girth. Two distinct, orthogonal matrices Hc and Hd are used. Using submatrices obtained from Hc and Hd by deleting rows, we can alter the code rate. The submatrix of Hc is used to correct Pauli X errors, and the submatrix of Hd to correct Pauli Z errors. We simulated this system for depolarizing noise on USC's High Performance Computing Cluster and obtained the block error rate (BER) as a function of the error weight and code rate. From the rates of uncorrectable errors under different error weights we can extrapolate the BER to any small error probability. Our results show that this code family can perform reasonably well even at high code rates, thus considerably reducing the overhead compared to concatenated and surface codes. This makes these codes promising as storage blocks in fault-tolerant quantum computation.
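
    The extrapolation step mentioned above has a standard form: weight the per-weight failure rates by the binomial probability of each error weight. A sketch with placeholder failure rates (the real values would come from the cluster simulations):

    ```python
    from math import comb

    n = 255                     # physical qubits per block, illustrative
    # P_fail(w): simulated block-failure rate at error weight w (placeholders).
    p_fail = {0: 0.0, 1: 0.0, 2: 0.0, 3: 0.001, 4: 0.02, 5: 0.15, 6: 0.5}

    def block_error_rate(p: float) -> float:
        """BER at physical error probability p via binomial weighting."""
        ber = sum(comb(n, w) * p**w * (1 - p) ** (n - w) * pf
                  for w, pf in p_fail.items())
        # Weights beyond those simulated are pessimistically taken as failures.
        tail = 1.0 - sum(comb(n, w) * p**w * (1 - p) ** (n - w) for w in p_fail)
        return ber + tail

    for p in (1e-4, 3e-4, 1e-3):
        print(f"p = {p:.0e}: BER ~ {block_error_rate(p):.3e}")
    ```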

  4. Real-time fault diagnosis and fault-tolerant control

    OpenAIRE

    Gao, Zhiwei; Ding, Steven X.; Cecati, Carlo

    2015-01-01

    This "Special Section on Real-Time Fault Diagnosis and Fault-Tolerant Control" of the IEEE Transactions on Industrial Electronics is motivated to provide a forum for academic and industrial communities to report recent theoretic/application results in real-time monitoring, diagnosis, and fault-tolerant design, and exchange the ideas about the emerging research direction in this field. Twenty-three papers were eventually selected through a strict peer-reviewed procedure, which represent the mo...

  5. Wind Power and Fault Clearance. Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Vikesjoe, Johnny; Messing, Lars (Gothia Power (Sweden))

    2011-04-15

    in case of a fault occurring elsewhere in the network. This can occur in a feeder bay connecting generation. Directional short circuit protection is proposed in these cases. - Prevention of feeder protection function. Fault current infeed from connected generation along a feeder will influence the fault current through the feeding bay. This problem probably occurs only for very long feeders with large infeed close to the feeding substation. - Clearance of busbar faults in the feeding substation. Normally used blocked overcurrent protection of busbars must be modified in case of fault current infeed from any feeder. Two solutions are possible: use of arc detection protection and/or directional short circuit protection in bays connecting feeders with generation. The impact of fault current infeed from wind generator systems during grid faults is discussed. Fault currents from the new types of generator systems, DFIG (Doubly Fed Induction Generator) and full power converter connected generators, differ from those of conventional synchronous generators. It is therefore concluded that conventional fault calculations will not give correct fault current levels. In many applications this error is negligible, but not always. Three different types of wind power applications are studied: - Protection with a limited number of wind power units connected to a distribution feeder - Protection with a small wind farm connected to one feeder in a distribution system - Protection of a wind farm connected to the sub-transmission or transmission system Short circuits and earth faults are studied for different fault locations: in the wind power plant, on a feeder in the distribution/collection grid and in the connecting subtransmission/transmission grid. For these faults, different kinds of protection are discussed. Protection against deviating voltage and frequency is also discussed. In conclusion, guidelines are given for the choice of protection for different objects: - Protection in a substation bay

  6. How doctors diagnose diseases and prescribe treatments: an fMRI study of diagnostic salience

    OpenAIRE

    Melo, Marcio; Gusso, Gustavo D. F.; Levites, Marcelo; Amaro Jr., Edson; Massad, Eduardo; Lotufo, Paulo A.; Zeidman, Peter; Price, Cathy J.; Friston, Karl J.

    2017-01-01

    Understanding the brain mechanisms involved in diagnostic reasoning may contribute to the development of methods that reduce errors in medical practice. In this study we identified similar brain systems for diagnosing diseases, prescribing treatments, and naming animals and objects using written information as stimuli. Employing time resolved modeling of blood oxygen level dependent (BOLD) responses enabled time resolved (400 milliseconds epochs) analyses. With this approach it was possible t...

  7. Fault kinematics and localised inversion within the Troms-Finnmark Fault Complex, SW Barents Sea

    Science.gov (United States)

    Zervas, I.; Omosanya, K. O.; Lippard, S. J.; Johansen, S. E.

    2018-04-01

    The areas bounding the Troms-Finnmark Fault Complex are affected by complex tectonic evolution. In this work, the history of fault growth, reactivation, and inversion of major faults in the Troms-Finnmark Fault Complex and the Ringvassøy Loppa Fault Complex is interpreted from three-dimensional seismic data, structural maps and fault displacement plots. Our results reveal eight normal faults bounding rotated fault blocks in the Troms-Finnmark Fault Complex. Both the throw-depth and displacement-distance plots show that the faults exhibit complex configurations of lateral and vertical segmentation with varied profiles. Some of the faults were reactivated by dip-linkages during the Late Jurassic and exhibit polycyclic fault growth, including radial, syn-sedimentary, and hybrid propagation. Localised positive inversion is the main mechanism of fault reactivation occurring at the Troms-Finnmark Fault Complex. The observed structural styles include folds associated with extensional faults, folded growth wedges and inverted depocentres. Localised inversion was intermittent with rifting during the Middle Jurassic-Early Cretaceous at the boundaries of the Troms-Finnmark Fault Complex to the Finnmark Platform. Additionally, tectonic inversion was more intense at the boundaries of the two fault complexes, affecting Middle Triassic to Early Cretaceous strata. Our study shows that localised folding is either a product of compressional forces or of lateral movements in the Troms-Finnmark Fault Complex. Regional stresses due to the uplift in the Loppa High and halokinesis in the Tromsø Basin are likely additional causes of inversion in the Troms-Finnmark Fault Complex.

  8. Quantum computation with topological codes from qubit to topological fault-tolerance

    CERN Document Server

    Fujii, Keisuke

    2015-01-01

    This book presents a self-consistent review of quantum computation with topological quantum codes. The book covers everything required to understand topological fault-tolerant quantum computation, ranging from the definition of the surface code to topological quantum error correction and topological fault-tolerant operations. The underlying basic concepts and powerful tools, such as universal quantum computation, quantum algorithms, stabilizer formalism, and measurement-based quantum computation, are also introduced in a self-consistent way. The interdisciplinary fields between quantum information and other fields of physics such as condensed matter physics and statistical physics are also explored in terms of the topological quantum codes. This book thus provides the first comprehensive description of the whole picture of topological quantum codes and quantum computation with them.

  9. Sliding observer-based demagnetisation fault-tolerant control in permanent magnet synchronous motors

    Directory of Open Access Journals (Sweden)

    Changfan Zhang

    2017-04-01

    This study proposes a fault-tolerant control method for permanent magnet synchronous motors (PMSMs) based on the active flux linkage concept, which addresses permanent magnet (PM) demagnetisation faults in PMSMs. First, a mathematical model for a PMSM is established based on active flux linkage, and then the effect of PM demagnetisation on the PMSM is analysed. Second, the stator current in the static coordinate frame is set as the state variable, an observer is designed based on a sliding-mode variable structure, and an equation for the active flux linkage is established for dynamic estimation based on the equivalent control principle of the sliding-mode variable structure. Finally, the active flux linkage for the next moment is predicted according to the operating conditions of the motor and the observed values of the current active flux linkage. The deadbeat control strategy is applied to eliminate errors in the active flux linkage and realise the objective of fault-tolerant control. A timely and effective control for demagnetisation faults is achieved using the proposed method, whose validity and feasibility are verified by the simulation and experimental results.

  10. Prescribed fire research in Pennsylvania

    Science.gov (United States)

    Patrick Brose

    2009-01-01

    Prescribed fire in Pennsylvania is a relatively new forestry practice because of the State's adverse experience with highly destructive wildfires in the early 1900s. The recent introduction of prescribed fire raises a myriad of questions regarding its correct and safe use. This poster briefly describes the prescribed fire research projects of the Forestry Sciences...

  11. Event-triggered decentralized adaptive fault-tolerant control of uncertain interconnected nonlinear systems with actuator failures.

    Science.gov (United States)

    Choi, Yun Ho; Yoo, Sung Jin

    2018-06-01

    This paper investigates the event-triggered decentralized adaptive tracking problem of a class of uncertain interconnected nonlinear systems with unexpected actuator failures. It is assumed that local control signals are transmitted to local actuators with time-varying faults whenever predefined conditions for triggering events are satisfied. Compared with the existing control-input-based event-triggering strategy for adaptive control of uncertain nonlinear systems, the aim of this paper is to propose a tracking-error-based event-triggering strategy in the decentralized adaptive fault-tolerant tracking framework. The proposed approach can relax drastic changes in control inputs caused by actuator faults in the existing triggering strategy. The stability of the proposed event-triggering control system is analyzed in the Lyapunov sense. Finally, simulation comparisons of the proposed and existing approaches are provided to show the effectiveness of the proposed theoretical result in the presence of actuator faults. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.
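
    A minimal, assumption-heavy sketch of tracking-error-based event triggering on a scalar plant (not the paper's interconnected multi-agent setting): the control value is only re-transmitted when the tracking error has drifted beyond a threshold since the last event.

    ```python
    import numpy as np

    a, b, k, dt = 1.0, 1.0, 3.0, 1e-3   # plant x' = a x + b u, gain, step
    threshold = 0.05                     # event-triggering threshold
    x, u_held, events, last_err = 0.0, 0.0, 0, None

    for step in range(5000):
        r = np.sin(0.002 * step)         # reference trajectory
        err = x - r                      # tracking error
        # Event condition: first step, or error drifted since last trigger.
        if last_err is None or abs(err - last_err) > threshold:
            u_held = -k * err            # transmit a fresh control value
            last_err = err
            events += 1
        x += dt * (a * x + b * u_held)   # plant integration (Euler), held input

    print(f"transmissions: {events} of 5000 steps; final error {x - r:.4f}")
    ```

    Triggering on the tracking error rather than on the control input is the point of the paper's strategy: an actuator fault that distorts the control signal no longer provokes bursts of transmissions by itself.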

  12. An improved particle filtering algorithm for aircraft engine gas-path fault diagnosis

    Directory of Open Access Journals (Sweden)

    Qihang Wang

    2016-07-01

    In this article, an improved particle filter with an electromagnetism-like mechanism algorithm is proposed for aircraft engine gas-path component abrupt fault diagnosis. In order to avoid the particle degeneracy and sample impoverishment of the normal particle filter, the electromagnetism-like mechanism optimization algorithm is introduced into the resampling procedure, which adjusts the position of the particles by simulating the attraction–repulsion mechanism between charged particles in electromagnetism theory. The improved particle filter can solve the particle degradation problem and ensure the diversity of the particle set. Meanwhile, it enhances the ability to track abrupt faults because it takes the latest measurement information into account. Comparison of the proposed method with three different filter algorithms is carried out on a univariate nonstationary growth model. Simulations on a turbofan engine model indicate that, compared to the normal particle filter, the improved particle filter can complete the fault diagnosis within a shorter sampling period, and the root mean square error of parameter estimation is reduced.
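
    For orientation, here is the baseline the paper improves on: a bootstrap particle filter on the univariate nonstationary growth model named above. The electromagnetism-like attraction–repulsion adjustment of resampled particles is not reproduced; the plain multinomial resampling below is the step it would replace.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    T, N = 60, 500                         # time steps, particles
    q, r = np.sqrt(10.0), 1.0              # process / measurement noise std

    def f(x, k):                           # UNGM state transition
        return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * k)

    xs, ys, x = [], [], 0.1                # simulate truth and y = x^2/20 + v
    for k in range(T):
        x = f(x, k) + q * rng.standard_normal()
        xs.append(x)
        ys.append(x**2 / 20 + r * rng.standard_normal())

    particles, err2 = rng.standard_normal(N), 0.0
    for k, y in enumerate(ys):
        particles = f(particles, k) + q * rng.standard_normal(N)  # propagate
        w = np.exp(-0.5 * ((y - particles**2 / 20) / r) ** 2) + 1e-300
        w /= w.sum()                                              # normalise
        est = np.dot(w, particles)                                # estimate
        # Multinomial resampling -- the step the paper replaces with an
        # attraction-repulsion particle adjustment to fight impoverishment.
        particles = rng.choice(particles, size=N, p=w)
        err2 += (est - xs[k]) ** 2

    print(f"RMSE vs. truth: {np.sqrt(err2 / T):.3f}")
    ```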

  13. Optimization of Second Fault Detection Thresholds to Maximize Mission POS

    Science.gov (United States)

    Anzalone, Evan

    2018-01-01

    In order to support manned spaceflight safety requirements, the Space Launch System (SLS) has defined program-level requirements for key systems to ensure successful operation under single fault conditions. To accommodate this with regards to Navigation, the SLS utilizes an internally redundant Inertial Navigation System (INS) with built-in capability to detect, isolate, and recover from first failure conditions and still maintain adherence to performance requirements. The unit utilizes multiple hardware- and software-level techniques to enable detection, isolation, and recovery from these events in terms of its built-in Fault Detection, Isolation, and Recovery (FDIR) algorithms. Successful operation is defined in terms of sufficient navigation accuracy at insertion while operating under worst case single sensor outages (gyroscope and accelerometer faults at launch). In addition to first fault detection and recovery, the SLS program has also levied requirements relating to the capability of the INS to detect a second fault, tracking any unacceptable uncertainty in knowledge of the vehicle's state. This detection functionality is required in order to feed abort analysis and ensure crew safety. Increases in navigation state error and sensor faults can drive the vehicle outside of its operational as-designed environments and outside of its performance envelope causing loss of mission, or worse, loss of crew. The criteria for operation under second faults allows for a larger set of achievable missions in terms of potential fault conditions, due to the INS operating at the edge of its capability. As this performance is defined and controlled at the vehicle level, it allows for the use of system level margins to increase probability of mission success on the operational edges of the design space. Due to the implications of the vehicle response to abort conditions (such as a potentially failed INS), it is important to consider a wide range of failure scenarios in terms of

  14. Prescriber and staff perceptions of an electronic prescribing system in primary care: a qualitative assessment

    Directory of Open Access Journals (Sweden)

    Sittig Dean F

    2010-11-01

    Background: The United States (US) Health Information Technology for Economic and Clinical Health Act of 2009 has spurred adoption of electronic health records. The corresponding meaningful use criteria proposed by the Centers for Medicare and Medicaid Services mandate use of computerized provider order entry (CPOE) systems. Yet, adoption in the US and other Western countries is low, and descriptions of successful implementations are primarily from the inpatient setting and, less frequently, the ambulatory setting. We describe prescriber and staff perceptions of implementation of a CPOE system for medications (electronic or e-prescribing) system in the ambulatory setting. Methods: Using a cross-sectional study design, we conducted eight focus groups at three primary care sites in an independent medical group. Each site represented a unique stage of e-prescribing implementation: pre/transition/post. We used a theoretically based, semi-structured questionnaire to elicit physician (n = 17) and staff (n = 53) perceptions of implementation of the e-prescribing system. We conducted a thematic analysis of focus group discussions using formal qualitative analytic techniques (i.e., deductive framework and grounded theory). Two coders independently coded to theoretical saturation and resolved discrepancies through discussions. Results: Ten themes emerged that describe perceptions of e-prescribing implementation: (1) improved availability of clinical information resulted in prescribing efficiencies and more coordinated care; (2) improved documentation resulted in safer care; (3) efficiencies were gained by using fewer paper charts; (4) organizational support facilitated adoption; (5) transition required time and resulted in a workload shift to staff; (6) hardware configurations and network stability were important in facilitating workflow; (7) e-prescribing was time-neutral or time-saving; (8) changes in patient interactions enhanced patient care but required education; (9) pharmacy

  15. Neuropharmacology and mental health nurse prescribers.

    Science.gov (United States)

    Skingsley, David; Bradley, Eleanor J; Nolan, Peter

    2006-08-01

    To outline the development and content of a 'top-up' neuropharmacology module for mental health nurse prescribers and consider how much pharmacology training is required to ensure effective mental health prescribing practice. Debate about the content of prescribing training courses has persisted within the United Kingdom since the mid-1980s. In early 2003 supplementary prescribing was introduced and gave mental health nurses the opportunity to become prescribers. The challenge of the nurse prescribing curriculum for universities is that they have only a short time to provide nurses from a range of backgrounds with enough knowledge to ensure that they meet agreed levels of competency for safe prescribing. There is growing concern within mental health care that the prescribing of medication in mental health services falls short of what would be deemed good practice. Over the past two decades, nurse training has increasingly adopted a psychosocial approach to nursing care raising concerns that, although nurses attending prescribing training may be able to communicate effectively with service users, they may lack the basic knowledge of biology and pharmacology to make effective decisions about medication. Following the completion of a general nurse prescribing course, mental health nurses who attended were asked to identify their specific needs during the evaluation phase. Although they had covered basic pharmacological principles in their training, they stated that they needed more specific information about drugs used in mental health; particularly how to select appropriate drug treatments for mental health conditions. This paper describes how the nurses were involved in the design of a specific module which would enable them to transfer their theoretical leaning to practice and in so doing increase their confidence in their new roles. The findings of this study suggest that the understanding and confidence of mental health nurse prescribers about the drugs they

  16. Comprehensive analysis of a medication dosing error related to CPOE.

    Science.gov (United States)

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  17. Fault Ride-Through Capability Enhancement of VSC HVDC connected Offshore Wind Power Plants

    DEFF Research Database (Denmark)

    Sharma, Ranjan; Wu, Qiuwei; Cha, Seung-Tae

    2015-01-01

    This paper presents a feed-forward direct current (DC) voltage control based fault ride-through (FRT) scheme for voltage source converter (VSC) high voltage DC (HVDC) connected offshore wind power plants (WPPs) in order to achieve active control of the WPP collector network AC voltage magnitude, and to improve the FRT capability. During steady state operation, an open loop AC voltage control is implemented at the WPP side VSC of the HVDC system such that any possible control interactions between the WPP side VSC and the wind turbine VSC are minimized, whereas during any grid fault, a dynamic AC voltage reference is applied based on both the DC voltage error and the AC active-current from the WPP collector system, which ensures fast and robust FRT of the VSC HVDC connected offshore WPPs. Under unbalanced fault conditions in the host power system, the resulting oscillatory DC voltage is directly used...

  18. Prescribing antibiotics in general practice:

    DEFF Research Database (Denmark)

    Sydenham, Rikke Vognbjerg; Pedersen, Line Bjørnskov; Plejdrup Hansen, Malene

    Objectives: The majority of antibiotics are prescribed from general practice. The use of broad-spectrum antibiotics increases the risk of development of bacteria resistant to antibiotic treatment. In spite of guidelines aiming to minimize the use of broad-spectrum antibiotics, we see an increase in the use of these agents. The overall aim of the project is to explore factors influencing the decision process and the prescribing behaviour of GPs when prescribing antibiotics. We will study the impact of microbiological testing on the choice of antibiotic. Furthermore, the project will explore how the GPs' prescribing behaviour is influenced by selected factors. Method: The study consists of a register-based study and a questionnaire study. The register-based study is based on data from the Register of Medicinal Product Statistics (prescribed antibiotics), Statistics Denmark (socio-demographic data...

  19. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. To test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate = .03, p < .05), whereas workload was associated with more errors (estimate = .04, p < .05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate = -.04, p < .05); non-integrated and patchy learning were significantly linked to higher levels of medication administration errors (estimate = -.03, p < .05), while integrated learning was not associated with it (p > .05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. Head nurses can facilitate learning from errors by "management by walking around" and monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.
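
    The reported analysis has a direct statsmodels analogue. The sketch below shows only the shape of such a mixed linear model; the column names, the tiny synthetic frame, and the hospital-level grouping are invented, not the study's data.

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Ward-level rows: error counts regressed on predictors, with a random
    # intercept per hospital (the grouping used here is an assumption).
    wards = pd.DataFrame({
        "errors":      [4, 2, 7, 3, 5, 1, 6, 2, 4, 3, 8, 2],
        "technology":  [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],
        "workload":    [30, 22, 41, 25, 33, 18, 38, 21, 29, 24, 44, 19],
        "supervisory": [2, 4, 1, 4, 3, 5, 1, 4, 2, 5, 1, 5],
        "hospital":    ["A", "A", "A", "B", "B", "B",
                        "C", "C", "C", "D", "D", "D"],
    })

    model = smf.mixedlm("errors ~ technology + workload + supervisory",
                        data=wards, groups=wards["hospital"])
    print(model.fit().summary())
    ```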

  20. Sensor Selection for Aircraft Engine Performance Estimation and Gas Path Fault Diagnostics

    Science.gov (United States)

    Simon, Donald L.; Rinehart, Aidan W.

    2016-01-01

    This paper presents analytical techniques for aiding system designers in making aircraft engine health management sensor selection decisions. The presented techniques, which are based on linear estimation and probability theory, are tailored for gas turbine engine performance estimation and gas path fault diagnostics applications. They enable quantification of the performance estimation and diagnostic accuracy offered by different candidate sensor suites. For performance estimation, sensor selection metrics are presented for two types of estimators including a Kalman filter and a maximum a posteriori estimator. For each type of performance estimator, sensor selection is based on minimizing the theoretical sum of squared estimation errors in health parameters representing performance deterioration in the major rotating modules of the engine. For gas path fault diagnostics, the sensor selection metric is set up to maximize correct classification rate for a diagnostic strategy that performs fault classification by identifying the fault type that most closely matches the observed measurement signature in a weighted least squares sense. Results from the application of the sensor selection metrics to a linear engine model are presented and discussed. Given a baseline sensor suite and a candidate list of optional sensors, an exhaustive search is performed to determine the optimal sensor suites for performance estimation and fault diagnostics. For any given sensor suite, Monte Carlo simulation results are found to exhibit good agreement with theoretical predictions of estimation and diagnostic accuracies.
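
    The maximum a posteriori variant of the selection metric is compact enough to sketch. Everything below (model sizes, random sensitivities) is an illustrative stand-in for the linear engine model, not taken from the paper:

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(3)
    n_health, n_sensors, suite_size = 4, 7, 4
    H_all = rng.standard_normal((n_sensors, n_health))  # sensor sensitivities
    R_all = np.diag(rng.uniform(0.5, 2.0, n_sensors))   # sensor noise variances
    P0 = np.eye(n_health)                               # prior covariance

    def map_sse(idx):
        """Theoretical sum of squared MAP estimation errors for a suite."""
        sel = list(idx)
        H = H_all[sel]
        Rinv = np.linalg.inv(R_all[np.ix_(sel, sel)])
        post_cov = np.linalg.inv(H.T @ Rinv @ H + np.linalg.inv(P0))
        return np.trace(post_cov)

    # Exhaustive search over candidate suites, as described in the record.
    best = min(combinations(range(n_sensors), suite_size), key=map_sse)
    print("best suite:", best, "score:", round(map_sse(best), 4))
    ```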

  1. Drug use evaluation of antibiotics prescribed in a Jordanian hospital outpatient and emergency clinics using WHO prescribing indicators

    International Nuclear Information System (INIS)

    Al-Niemat, Sahar I.; Bloukh, Diana T.; Al-Harasis, Manal D.; Al-Fanek, Alen F.; Salah, Rehab K.

    2008-01-01

    The objective was to evaluate the use of antibiotics prescribed in hospital outpatient and emergency clinics at King Hussein Medical Centre (KHMC) using WHO prescribing indicators, in an attempt to rationalize the use of antibiotics in the Royal Medical Services. We retrospectively surveyed a sample of 187,822 prescriptions obtained from 5 outpatient pharmacies in KHMC written over the period of 3 consecutive months, May 2007 to July 2007. The percentage of encounters with an antibiotic prescribed was calculated using the methodology recommended by the WHO. An additional indicator, the percentage share of different antibiotics, was also included to identify the most frequently prescribed antibiotics. The average percentage of prescriptions involving antibiotics was 35.6% out of the 187,822 prescriptions surveyed. From these, 65,500 antibiotic prescriptions were observed. Penicillins (most frequently amoxicillin) and quinolones (most frequently ciprofloxacin and norfloxacin) were the most commonly prescribed antibiotics, with average percentages of 31.8% and 27.5%, respectively. The average prescribing rates for the other antibiotic categories were as follows: macrolides 5.2%, cephalosporins 16% and amoxicillin/clavulanate 5.4%. The high percentage of prescriptions involving antibiotics observed in KHMC pharmacies requires rational use of antibiotics and judicious prescribing by Military prescribers. An insight into factors influencing antibiotic prescribing patterns and adherence to antibiotic prescribing guidelines by the Military prescribers is warranted. (author)
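
    The indicator arithmetic is worth making explicit, since the record quotes both pooled counts and an average percentage. A small worked example using the quoted figures; note the pooled share, about 34.9%, differs slightly from the quoted 35.6%, which is presumably averaged across the five pharmacies and three months rather than pooled.

    ```python
    total_rx = 187_822          # prescriptions surveyed
    antibiotic_rx = 65_500      # prescriptions involving an antibiotic

    # WHO indicator: percentage of encounters with an antibiotic prescribed.
    print(f"pooled antibiotic share: {100 * antibiotic_rx / total_rx:.1f}%")

    # Added indicator: percentage share of antibiotic classes, as quoted.
    shares = {"penicillins": 31.8, "quinolones": 27.5, "cephalosporins": 16.0,
              "amoxicillin/clavulanate": 5.4, "macrolides": 5.2}
    for drug_class, pct in sorted(shares.items(), key=lambda kv: -kv[1]):
        print(f"{drug_class:>24s}: {pct:4.1f}%")
    ```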

  2. Nurse practitioner prescribing: an international perspective

    Directory of Open Access Journals (Sweden)

    Fong J

    2015-10-01

    Jacqueline Fong,1,2 Thomas Buckley,2 Andrew Cashin3 1St George Hospital, Kogarah, 2Sydney Nursing School, University of Sydney, Camperdown, NSW, Australia; 3School of Health and Human Sciences, Southern Cross University, Lismore, NSW, Australia Background: Internationally, the delivery of care provided by nurses and midwives has undergone a significant change due to a variety of interrelated factors, including economic circumstances, a diminishing number of medical providers, the unavailability of adequate health care services in underserved and rural areas, and growing specialization among the professions. One solution to the challenges of care delivery has been the introduction of nurse practitioners (NPs) and the authorization of NPs to prescribe medicines. Aim: The aim of this paper was to review the current international literature related to NP prescribing and compare the findings to the Australian context. The review focuses on literature from the United States, Canada, Europe, Australia, and New Zealand. Methods: Databases were searched from January 2000 to January 2015. The following keywords: “nurse practitioner”, “advanced nurse”, “advanced practice nurse”, “prescri*”, “Australia”, “United States America”, “UK”, “New Zealand”, “Canada”, “Europe”, “drug prescri*”, “prescri* authority”, and “prescri* legislation” were used. Findings: NPs tend to prescribe in differing contexts of practice to provide care in underserved populations and require good systems literacy to practice across complex systems. The key themes identified internationally related to NP prescribing concern barriers to prescribing, confidence in prescribing, and the unique role of NPs in prescribing medicines, eg, the high prevalence of prescribing pain medicines in several countries, including Australia. Conclusion: Across all countries reviewed, there appears to be a need for further research into the organizational and

  3. How common are errors in the medication process in a psychiatric hospital?

    DEFF Research Database (Denmark)

    Sørensen, Ann Lykkegaard; Mainz, Jan; Lisby, Marianne

    frequency, type and potential clinical consequences of errors in all stages of the medication process in an inpatient psychiatric setting. Methods and materials: A cross-sectional study in two general psychiatric wards and one acute psychiatric ward. Participants were eligible psychiatric in-hospital patients (n=67), physicians prescribing drugs, and ward staff (nurses and nurse assistants) dispensing and administering drugs. The study was carried out using 3 methods of investigation: an observational study, an unannounced control visit and an audit of medical records. Medication errors were evaluated

  4. An Autonomous Self-Aware and Adaptive Fault Tolerant Routing Technique for Wireless Sensor Networks

    Science.gov (United States)

    Abba, Sani; Lee, Jeong-A

    2015-01-01

    We propose an autonomous self-aware and adaptive fault-tolerant routing technique (ASAART) for wireless sensor networks. We address the limitations of self-healing routing (SHR) and self-selective routing (SSR) techniques for routing sensor data. We also examine the integration of autonomic self-aware and adaptive fault detection and resiliency techniques for route formation and route repair to provide resilience to errors and failures. We achieved this by using a combined continuous and slotted prioritized transmission back-off delay to obtain local and global network state information, as well as multiple random functions for attaining faster routing convergence and reliable route repair despite transient and permanent node failure rates and efficient adaptation to instantaneous network topology changes. The results of simulations based on a comparison of the ASAART with the SHR and SSR protocols for five different simulated scenarios in the presence of transient and permanent node failure rates exhibit a greater resiliency to errors and failure and better routing performance in terms of the number of successfully delivered network packets, end-to-end delay, delivered MAC layer packets, packet error rate, as well as efficient energy conservation in a highly congested, faulty, and scalable sensor network. PMID:26295236

  5. Modeling caprock fracture, CO2 migration and time dependent fault healing: A numerical study.

    Science.gov (United States)

    MacFarlane, J.; Mukerji, T.; Vanorio, T.

    2017-12-01

    The Campi Flegrei caldera, located near Naples, Italy, is one of the highest risk volcanoes on Earth due to its recent unrest and urban setting. A unique history of surface uplift within the caldera is characterized by long duration uplift and subsidence cycles which are periodically interrupted by rapid, short period uplift events. Several models have been proposed to explain this history; in this study we will present a hydro-mechanical model that takes into account the caprock that seismic studies show to exist at 1-2 km depth. Specifically, we develop a finite element model of the caldera and use a modified version of fault-valve theory to represent fracture within the caprock. The model accounts for fault healing using a simplified, time-dependent fault sealing model. Multiple fracture events are incorporated by using previous solutions to test prescribed conditions and determine changes in rock properties, such as porosity and permeability. Although fault-valve theory has been used to model single fractures and recharge, this model is unique in its ability to model multiple fracture events. By incorporating multiple fracture events we can assess changes in both long and short-term reservoir behavior at Campi Flegrei. By varying the model inputs, we model the poro-elastic response to CO2 injection at depth and the resulting surface deformation. The goal is to enable geophysicists to better interpret surface observations and predict outcomes from observed changes in reservoir conditions.
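
    A toy restatement of the modified fault-valve loop described above, with all constants invented: pressure builds while the caprock seals, permeability jumps at a failure threshold, then heals back over a prescribed time scale, allowing repeated fracture events.

    ```python
    import math

    dt = 1.0                        # time step (days)
    inject = 0.8                    # pressure build-up rate when sealed (kPa/day)
    p_fail = 500.0                  # fracture criterion (kPa overpressure)
    k_seal, k_open = 1e-18, 1e-15   # permeability (m^2), sealed vs. fractured
    heal_tau = 200.0                # e-folding healing time (days)

    p, k, t_frac = 0.0, k_seal, None
    for day in range(4000):
        if p >= p_fail and k <= 1.01 * k_seal:
            k, t_frac = k_open, day                 # fracture event: valve opens
        elif t_frac is not None:
            # Time-dependent healing back toward the sealed permeability.
            k = k_seal + (k_open - k_seal) * math.exp(-(day - t_frac) / heal_tau)
        leak = 5.0 * (k / k_open)                   # leak-off scales with k
        p = max(p + dt * (inject - leak), 0.0)

    print(f"overpressure: {p:.1f} kPa, permeability: {k:.2e} m^2")
    ```

    The prescribed-conditions test at the top of the loop is what lets the model string together multiple fracture events, the feature the record highlights over single-event fault-valve models.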

  6. Design of fault simulator

    Energy Technology Data Exchange (ETDEWEB)

    Gabbar, Hossam A. [Faculty of Energy Systems and Nuclear Science, University of Ontario Institute of Technology (UOIT), Ontario, L1H 7K4 (Canada)], E-mail: hossam.gabbar@uoit.ca; Sayed, Hanaa E.; Osunleke, Ajiboye S. [Okayama University, Graduate School of Natural Science and Technology, Division of Industrial Innovation Sciences Department of Intelligent Systems Engineering, Okayama 700-8530 (Japan); Masanobu, Hara [AspenTech Japan Co., Ltd., Kojimachi Crystal City 10F, Kojimachi, Chiyoda-ku, Tokyo 102-0083 (Japan)

    2009-08-15

    A fault simulator is proposed to understand and evaluate all possible fault propagation scenarios, an essential part of safety design, operation design and support of chemical/production processes. Process models are constructed and integrated with fault models, which are formulated in a qualitative manner using fault semantic networks (FSN). Trend analysis techniques are used to map real-time and simulated quantitative data onto qualitative fault models for better decision support and tuning of the FSN. The design of the proposed fault simulator is described and applied to an experimental plant (G-Plant) to diagnose several fault scenarios. The proposed fault simulator will enable industrial plants to specify and validate safety requirements as part of safety system design, as well as to support recovery and shutdown operation and disaster management.

  7. Selectively Fortifying Reconfigurable Computing Device to Achieve Higher Error Resilience

    Directory of Open Access Journals (Sweden)

    Mingjie Lin

    2012-01-01

    With the advent of 10 nm CMOS devices and “exotic” nanodevices, the location and occurrence time of hardware defects and design faults become increasingly unpredictable, posing severe challenges to existing techniques for error-resilient computing, because most of them statically assign hardware redundancy and do not account for the error tolerance inherently existing in many mission-critical applications. This work proposes a novel approach to selectively fortifying a target reconfigurable computing device in order to achieve hardware-efficient error resilience for a specific target application. We intend to demonstrate that such error resilience can be significantly improved with effective hardware support. The major contributions of this work include (1) the development of a complete methodology to perform sensitivity and criticality analysis of hardware redundancy, (2) a novel problem formulation and an efficient heuristic methodology to selectively allocate hardware redundancy among a target design’s key components in order to maximize its overall error resilience, and (3) an academic prototype of an SFC computing device that illustrates a fourfold improvement in error resilience for an H.264 encoder implemented with an FPGA device.

  8. A novel Lagrangian approach for the stable numerical simulation of fault and fracture mechanics

    Science.gov (United States)

    Franceschini, Andrea; Ferronato, Massimiliano; Janna, Carlo; Teatini, Pietro

    2016-06-01

    The simulation of the mechanics of geological faults and fractures is of paramount importance in several applications, such as ensuring the safety of the underground storage of wastes and hydrocarbons or predicting the possible seismicity triggered by the production and injection of subsurface fluids. However, the stable numerical modeling of ground ruptures is still an open issue. The present work introduces a novel formulation based on the use of Lagrange multipliers to prescribe the constraints on the contact surfaces. The variational formulation is modified in order to take into account the frictional work along the activated fault portion according to the principle of maximum plastic dissipation. The numerical model, developed in the framework of the Finite Element method, provides stable solutions with fast convergence of the non-linear problem. The stabilizing properties of the proposed model are emphasized with the aid of a realistic numerical example dealing with the generation of ground fractures due to groundwater withdrawal in arid regions.
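
    In a Lagrange-multiplier formulation, the contact constraints enter the discrete system as an extra block, turning the stiffness equations into a saddle-point problem; a minimal linear sketch (the actual model is nonlinear and frictional, and the matrices below are toy values):

    ```python
    import numpy as np

    # Minimal saddle-point sketch of a Lagrange-multiplier constraint:
    #   [K  C^T] [u]   [f]
    #   [C   0 ] [l] = [g]
    # K: stiffness, C: constraint matrix (e.g. no interpenetration on a fault),
    # l: multipliers, interpretable as contact tractions. Toy 3-DOF example.
    K = np.array([[ 4., -1.,  0.],
                  [-1.,  4., -1.],
                  [ 0., -1.,  4.]])
    f = np.array([0., 0., 1.])
    C = np.array([[1., -1., 0.]])   # constrain u0 - u1 = 0 (no relative slip)
    g = np.array([0.])

    n, m = K.shape[0], C.shape[0]
    A = np.block([[K, C.T], [C, np.zeros((m, m))]])
    b = np.concatenate([f, g])
    sol = np.linalg.solve(A, b)
    u, lam = sol[:n], sol[n:]
    print("displacements:", u, "contact traction (multiplier):", lam)
    ```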

  9. Fault Management Metrics

    Science.gov (United States)

    Johnson, Stephen B.; Ghoshal, Sudipto; Haste, Deepak; Moore, Craig

    2017-01-01

    This paper describes the theory and considerations in the application of metrics to measure the effectiveness of fault management. Fault management refers here to the operational aspect of system health management and, as such, is considered a meta-control loop that operates to preserve or maximize the system's ability to achieve its goals in the face of current or prospective failure. As a suite of control loops, the metrics that estimate and measure the effectiveness of fault management resemble those of classical control loops in being divided into two major classes: state estimation and state control. State estimation metrics can be classified into lower-level subdivisions for detection coverage, detection effectiveness, fault isolation and fault identification (diagnostics), and failure prognosis. State control metrics can be classified into response determination effectiveness and response effectiveness. These metrics are applied to each and every fault management control loop in the system, for each failure to which they apply, and are probabilistically summed to determine the effectiveness of these fault management control loops in preserving the relevant system goals that they are intended to protect.
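
    The probabilistic summing described above can be sketched as a probability-weighted combination over failures, with each loop's stage effectiveness values multiplied along the chain (the failure list and all numbers are invented):

    ```python
    # Sketch: overall fault-management effectiveness as a probability-weighted
    # sum over failures, with each control loop modeled as a chain of stages
    # (detect -> isolate -> respond) whose effectiveness values multiply.
    failures = {
        # failure: (occurrence probability, P(detect), P(isolate), P(respond))
        "sensor_stuck":  (0.020, 0.95, 0.90, 0.85),
        "thruster_leak": (0.010, 0.80, 0.70, 0.90),
        "cpu_latchup":   (0.005, 0.99, 0.95, 0.60),
    }

    def overall_effectiveness(failures):
        total_p = sum(p for p, *_ in failures.values())
        protected = sum(p * d * i * r for p, d, i, r in failures.values())
        return protected / total_p   # fraction of failure probability mitigated

    print(f"fault management effectiveness: {overall_effectiveness(failures):.3f}")
    ```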

  10. Evaluation of drug administration errors in a teaching hospital

    Directory of Open Access Journals (Sweden)

    Berdot Sarah

    2012-03-01

    Background: Medication errors can occur at any of the three steps of the medication use process: prescribing, dispensing and administration. We aimed to determine the incidence, type and clinical importance of drug administration errors and to identify risk factors. Methods: Prospective study based on a disguised observation technique in four wards of a teaching hospital in Paris, France (800 beds). A pharmacist accompanied nurses and witnessed the preparation and administration of drugs to all patients during the three drug rounds on each of six days per ward. Main outcomes were the number, type and clinical importance of errors and associated risk factors. The drug administration error rate was calculated with and without wrong-time errors. Relationships between the occurrence of errors and potential risk factors were investigated using logistic regression models with random effects. Results: Twenty-eight nurses caring for 108 patients were observed. Among 1501 opportunities for error, 415 administrations with one or more errors (430 errors in total) were detected (27.6%). There were 312 wrong-time errors, ten of them simultaneous with another type of error, giving an error rate without wrong-time errors of 7.5% (113/1501). The most frequently administered drugs were cardiovascular drugs (425/1501, 28.3%). The highest risk of error in a drug administration was for dermatological drugs. No potentially life-threatening errors were witnessed, and 6% of errors were classified as having a serious or significant impact on patients (mainly omissions). In multivariate analysis, the occurrence of errors was associated with the drug administration route, the drug classification (ATC) and the number of patients under the nurse's care. Conclusion: Medication administration errors are frequent. The identification of their determinants helps in designing targeted interventions.
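
    The two reported error rates follow directly from the counts given in the abstract:

    ```python
    # Error rates recomputed from the counts stated in the abstract above.
    opportunities = 1501
    admins_with_error = 415          # administrations with >= 1 error
    wrong_time = 312                 # wrong-time errors observed
    wrong_time_with_other = 10       # wrong-time errors paired with another type

    rate_all = admins_with_error / opportunities
    admins_excl_wrong_time = admins_with_error - (wrong_time - wrong_time_with_other)
    rate_excl = admins_excl_wrong_time / opportunities
    print(f"{rate_all:.1%}, {rate_excl:.1%}")   # 27.6%, 7.5% (113/1501)
    ```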

  11. Study on seismic hazard assessment of large active fault systems. Evolution of fault systems and associated geomorphic structures: fault model test and field survey

    International Nuclear Information System (INIS)

    Ueta, Keichi; Inoue, Daiei; Miyakoshi, Katsuyoshi; Miyagawa, Kimio; Miura, Daisuke

    2003-01-01

    Sandbox experiments and field surveys were performed to investigate fault system evolution and fault-related deformation of the ground surface, the Quaternary deposits and rocks. The results are summarized below. 1) In the case of strike-slip faulting, the basic fault sequence runs from early en echelon faults and pressure ridges to a linear trough. The fault systems associated with the 2000 western Tottori earthquake show the en echelon pattern that characterizes the early stage of wrench tectonics; accordingly, no thoroughgoing surface faulting was found above the rupture defined by the main shock and aftershocks. 2) Low-angle and high-angle reverse faults commonly migrate basinward with time. With increasing normal fault displacement in the bedrock, a normal fault develops within the range after a reverse fault has formed along the range front. 3) The horizontal distance of the surface rupture from the bedrock fault, normalized by the height of the Quaternary deposits, agrees well with that of the model tests. 4) An upward-widening damage zone, in which secondary fractures develop, forms on the hanging-wall side of the high-angle reverse fault at the Kamioka mine. (author)

  12. Identifying fallacious arguments in a qualitative study of antipsychotic prescribing in dementia.

    Science.gov (United States)

    Donyai, Parastou

    2017-10-01

    Dementia can result in cognitive, noncognitive and behavioural symptoms which are difficult to manage. Formal guidelines for the care and management of dementia in the UK state that antipsychotics should only be prescribed where fully justified, because inappropriate use, particularly problematic in care-home settings, can produce severe side effects including death. The aim of this study was to explore the use of fallacious arguments in professionals' deliberations about antipsychotic prescribing in dementia in care-home settings. Fallacious arguments have the potential to become unremarkable discourses that construct and validate practices which run counter to guidelines. This qualitative study involved interviews with 28 care-home managers and health professionals involved in caring for patients with dementia. Potentially fallacious arguments were identified using qualitative content analysis and a coding framework constructed from existing explanatory models of fallacious reasoning. Fallacious arguments were identified in a range of explanations and reasons that participants gave in answer to questions about initiating, reducing doses of, and stopping antipsychotics in dementia. The dominant fallacy was the false dichotomy. Appeals to popularity, tradition, consequence, emotion, or fear, and the slippery slope argument were also identified. Fallacious arguments were often formulated to present convincing cases whereby prescribing antipsychotics or maintaining existing doses (versus not starting medication or reducing the dose, for example) appeared to be the only acceptable decision, although this is not always the case. The findings could help health professionals to recognise and mitigate the effect of logic-based errors in decisions about the prescribing of antipsychotics in dementia. © 2016 Royal Pharmaceutical Society.

  13. Research on Model-Based Fault Diagnosis for a Gas Turbine Based on Transient Performance

    Directory of Open Access Journals (Sweden)

    Detang Zeng

    2018-01-01

    It is essential to monitor and diagnose faults in rotating machinery with a high thrust–weight ratio and complex structure for a variety of industrial applications, for which reliable signal measurements are required. However, the measured values consist of the true values of the parameters, the inertia of the measurements, random errors and systematic errors. Such signals cannot accurately reflect the true performance state and health state of the rotating machinery. High-quality, steady-state measurements are necessary for most current diagnostic methods, but are hard to obtain for most rotating machinery. Diagnosis based on transient performance is a useful tool that can potentially solve this problem. A model-based fault diagnosis method for gas turbines based on transient performance is proposed in this paper. The fault diagnosis consists of a dynamic simulation model, a diagnostic scheme, and an optimization algorithm. A high-accuracy, nonlinear, dynamic gas turbine model built with a modular modeling method is presented that incorporates thermophysical properties, component characteristic charts, and system inertia. The startup process is simulated using this model. The consistency between the simulation results and the field operation data shows the validity of the model and the advantages of the transient accumulated deviation. In addition, a diagnostic scheme is designed to fulfill this process. Finally, cuckoo search is selected to solve the optimization problem in the fault diagnosis. Comparative diagnostic results for a gas turbine before and after washing indicate the improved effectiveness and accuracy of the proposed method using data from transient processes, compared with traditional methods using data from the steady state.
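
    The cuckoo search used for the optimization step is compact enough to sketch generically; the objective below is a placeholder standing in for the model-versus-measurement mismatch, and all algorithm parameters are typical defaults rather than the paper's settings:

    ```python
    import numpy as np
    from math import gamma, sin, pi

    def levy_step(dim, beta=1.5, rng=None):
        """Mantegna's algorithm for Levy-flight step lengths."""
        rng = rng if rng is not None else np.random.default_rng()
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = rng.normal(0.0, sigma, dim)
        v = rng.normal(0.0, 1.0, dim)
        return u / np.abs(v) ** (1 / beta)

    def cuckoo_search(f, lo, hi, n_nests=15, n_iter=200, pa=0.25, seed=0):
        """Minimal cuckoo search minimizing f over the box [lo, hi]."""
        rng = np.random.default_rng(seed)
        dim = len(lo)
        nests = rng.uniform(lo, hi, (n_nests, dim))
        fit = np.array([f(x) for x in nests])
        for _ in range(n_iter):
            best = nests[fit.argmin()].copy()
            # new solutions via Levy flights biased toward the current best nest
            for i in range(n_nests):
                cand = nests[i] + 0.01 * levy_step(dim, rng=rng) * (nests[i] - best)
                cand = np.clip(cand, lo, hi)
                fc = f(cand)
                if fc < fit[i]:
                    nests[i], fit[i] = cand, fc
            # abandon a fraction pa of the worst nests and rebuild them randomly
            n_bad = max(1, int(pa * n_nests))
            worst = fit.argsort()[-n_bad:]
            nests[worst] = rng.uniform(lo, hi, (n_bad, dim))
            fit[worst] = [f(x) for x in nests[worst]]
        return nests[fit.argmin()], fit.min()

    # placeholder objective standing in for the model-vs-data mismatch
    x_best, f_best = cuckoo_search(lambda x: np.sum((x - 0.3) ** 2),
                                   lo=np.zeros(3), hi=np.ones(3))
    print(x_best, f_best)
    ```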

  14. Eigenvector of gravity gradient tensor for estimating fault dips considering fault type

    Science.gov (United States)

    Kusumoto, Shigekazu

    2017-12-01

    The dips of boundaries in faults and caldera walls play an important role in understanding their formation mechanisms. The fault dip is a particularly important parameter in numerical simulations for hazard map creation, as it affects estimates of the area of disaster occurrence. In this study, I introduce a technique for estimating the fault dip using the eigenvectors of the observed or calculated gravity gradient tensor on a profile and investigate its properties through numerical simulations. The simulations show that the maximum eigenvector of the tensor points to the high-density causative body and that its dip closely follows the dip of a normal fault. Likewise, the minimum eigenvector points to the low-density causative body, and its dip closely follows the dip of a reverse fault. Which eigenvector of the gravity gradient tensor is appropriate for estimating the fault dip is thus determined by the fault type. As an application of this technique, I estimated the dip of the Kurehayama Fault located in Toyama, Japan, and obtained a result consistent with conventional fault dip estimates from geology and geomorphology. Because the gravity gradient tensor is required for this analysis, I also present a technique that estimates the gravity gradient tensor from the gravity anomaly on a profile.
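
    On a 2-D profile the method reduces to an eigen-decomposition of a symmetric 2 x 2 tensor at each station; a minimal sketch (the tensor components below are synthetic):

    ```python
    import numpy as np

    def dip_from_tensor(g_xx, g_xz, g_zz, use_max=True):
        """Dip (degrees from horizontal) of the max/min eigenvector of the
        2-D gravity gradient tensor [[g_xx, g_xz], [g_xz, g_zz]]."""
        T = np.array([[g_xx, g_xz], [g_xz, g_zz]])
        vals, vecs = np.linalg.eigh(T)          # eigenvalues in ascending order
        v = vecs[:, -1] if use_max else vecs[:, 0]
        return np.degrees(np.arctan2(abs(v[1]), abs(v[0])))

    # normal fault: use the maximum eigenvector (points to the high-density side);
    # reverse fault: use the minimum eigenvector. Values below are synthetic.
    print(dip_from_tensor(g_xx=-12.0, g_xz=35.0, g_zz=12.0, use_max=True))
    ```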

  15. Reverse fault growth and fault interaction with frictional interfaces: insights from analogue models

    Science.gov (United States)

    Bonanno, Emanuele; Bonini, Lorenzo; Basili, Roberto; Toscani, Giovanni; Seno, Silvio

    2017-04-01

    The association of faulting and folding is a common feature in mountain chains, fold-and-thrust belts, and accretionary wedges. Kinematic models are developed and widely used to explain a range of relationships between faulting and folding. However, these models may not be completely appropriate to explain shortening in mechanically heterogeneous rock bodies. Weak layers, bedding surfaces, or pre-existing faults placed ahead of a propagating fault tip may influence the fault propagation rate itself and the associated fold shape. In this work, we employed clay analogue models to investigate how mechanical discontinuities affect the propagation rate and the associated fold shape during the growth of reverse master faults. The simulated master faults dip at 30° and 45°, covering the range of the most frequent dip angles of active reverse faults in nature. The mechanical discontinuities are simulated by pre-cutting the clay pack. For both experimental setups (30° and 45° dipping faults) we analyzed three configurations: 1) isotropic, i.e. without precuts; 2) with one precut in the middle of the clay pack; and 3) with two evenly spaced precuts. To test the repeatability of the processes and to obtain a statistically valid dataset, we replicated each configuration three times. The experiments were monitored by collecting successive snapshots with a high-resolution camera pointing at the side of the model. The pictures were then processed using the Digital Image Correlation (DIC) method to extract the displacement and shear-rate fields. These two quantities effectively show both the on-fault and off-fault deformation, indicating the activity along the newly formed faults and whether, and at what stage, the discontinuities (precuts) are reactivated. To study the fault propagation and fold shape variability, we marked the position of the fault tips and the fold profiles for every successive step of deformation and then compared them across configurations.

  16. Fault Current Characteristics of the DFIG under Asymmetrical Fault Conditions

    Directory of Open Access Journals (Sweden)

    Fan Xiao

    2015-09-01

    During non-severe fault conditions, crowbar protection is not activated and the rotor windings of a doubly-fed induction generator (DFIG) are excited by the AC/DC/AC converter. Meanwhile, under asymmetrical fault conditions, the electrical variables oscillate at twice the grid frequency in the synchronous dq frame. In engineering practice, notch filters are usually used to extract the positive and negative sequence components. In these cases, the dynamic response of the rotor-side converter (RSC) and the notch filters have a large influence on the fault current characteristics of the DFIG. In this paper, the influence of the notch filters on the proportional-integral (PI) parameters is discussed and simplified calculation models of the rotor current are established. The dynamic performance of the stator flux linkage under asymmetrical fault conditions is then analyzed. On this basis, the characteristics of the stator current under asymmetrical fault conditions are studied and the corresponding analytical expressions of the stator fault current are obtained. Finally, digital simulation results validate the analytical results. The research results are helpful for meeting the requirements of practical short-circuit calculations and for the construction of relaying protection systems for power grids with penetration of DFIGs.
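
    The twice-grid-frequency oscillation mentioned above is conventionally removed with a notch filter at 2f; a minimal sketch with SciPy (grid frequency, sampling rate, and Q factor are assumed values, not the paper's):

    ```python
    import numpy as np
    from scipy import signal

    # Notch out the 2x grid-frequency (100 Hz for a 50 Hz grid) oscillation that
    # asymmetrical faults induce on dq-frame quantities. fs is an assumed rate.
    fs, f_grid = 10_000.0, 50.0
    b, a = signal.iirnotch(w0=2 * f_grid, Q=30.0, fs=fs)

    t = np.arange(0, 0.2, 1 / fs)
    dq_current = 1.0 + 0.4 * np.sin(2 * np.pi * 2 * f_grid * t)  # DC + 2f ripple
    positive_seq = signal.filtfilt(b, a, dq_current)             # ripple removed
    print(np.std(positive_seq[len(t) // 2:]))   # close to zero after settling
    ```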

  17. Scissoring Fault Rupture Properties along the Median Tectonic Line Fault Zone, Southwest Japan

    Science.gov (United States)

    Ikeda, M.; Nishizaka, N.; Onishi, K.; Sakamoto, J.; Takahashi, K.

    2017-12-01

    The Median Tectonic Line fault zone (hereinafter MTLFZ) is the longest and most active fault zone in Japan. The MTLFZ is a 400-km-long, trench-parallel, right-lateral strike-slip fault accommodating the lateral slip component of the oblique subduction of the Philippine Sea plate beneath the Eurasian plate [Fitch, 1972; Yeats, 1996]. Complex fault geometry evolves along the MTLFZ, and its geomorphic and geological characteristics change remarkably along strike. Extensional step-overs and pull-apart basins develop in the western part of the MTLFZ, and a pop-up structure in the eastern part, a contrast we describe as "scissoring" fault properties. Two main factors produce this scissoring along the MTLFZ: the regional stress condition and a preexisting fault. The direction of σ1 rotates anticlockwise from N170°E [Famin et al., 2014] in the eastern Shikoku and Kinki areas, to N100°E [Research Group for Crustal Stress in Western Japan, 1980] in central Shikoku, and to N85°E [Onishi et al., 2016] in western Shikoku. In accordance with this rotation of principal stress directions, the western and eastern parts of the MTLFZ lie in transtensional and compressional regimes, respectively. The MTLFZ formed as a terrane boundary in the Cretaceous and has evolved through a long active history, during which the fault style has changed variously: left-lateral, thrust, normal and right-lateral. Where a preexisting fault is present, rupture does not completely conform to Anderson's theory for a newly formed fault, as the theory would require either purely dip-slip motion on a 45°-dipping fault or strike-slip motion on a vertical fault. The fault rupture of the 2013 Balochistan earthquake in Pakistan is a rare example of large strike-slip reactivation on a relatively low-angle dipping (thrust) fault, although strike-slip faults generally have near-vertical planes [Avouac et al., 2014]. In this presentation, we first show the deep subsurface structure of the MTLFZ.

  18. Impact of Stewardship Interventions on Antiretroviral Medication Errors in an Urban Medical Center: A 3-Year, Multiphase Study.

    Science.gov (United States)

    Zucker, Jason; Mittal, Jaimie; Jen, Shin-Pung; Cheng, Lucy; Cennimo, David

    2016-03-01

    There is a high prevalence of HIV infection in Newark, New Jersey, with University Hospital admitting approximately 600 HIV-infected patients per year. Medication errors involving antiretroviral therapy (ART) could significantly affect treatment outcomes. The goal of this study was to evaluate the effectiveness of various stewardship interventions in reducing the prevalence of prescribing errors involving ART. This was a retrospective review of all inpatients receiving ART for HIV treatment during three distinct 6-month intervals over a 3-year period. During the first year, the baseline prevalence of medication errors was determined. During the second year, physician and pharmacist education was provided, and a computerized order entry system with drug information resources and prescribing recommendations was implemented. Prospective audit of ART orders with feedback was conducted in the third year. Analyses and comparisons were made across the three phases of this study. Of the 334 patients with HIV admitted in the first year, 45% had at least one antiretroviral medication error and 38% had uncorrected errors at the time of discharge. After education and computerized order entry, significant reductions in medication error rates were observed compared to baseline rates; 36% of 315 admissions had at least one error and 31% had uncorrected errors at discharge. While the prevalence of antiretroviral errors in year 3 was similar to that of year 2 (37% of 276 admissions), there was a significant decrease in the prevalence of uncorrected errors at discharge (12%) with the use of prospective review and intervention. Interventions, such as education and guideline development, can aid in reducing ART medication errors, but a committed stewardship program is necessary to elicit the greatest impact. © 2016 Pharmacotherapy Publications, Inc.

  19. Identifying medication error chains from critical incident reports: a new analytic approach.

    Science.gov (United States)

    Huckels-Baumgart, Saskia; Manser, Tanja

    2014-10-01

    Research into the distribution of medication errors usually focuses on isolated stages within the medication use process. Our study aimed to provide a novel process-oriented approach to medication incident analysis focusing on medication error chains. The study was conducted in a 900-bed teaching hospital in Switzerland. All 1,591 medication errors reported from 2009 to 2012 were categorized using the NCC MERP Medication Error Index and the WHO Classification for Patient Safety Methodology. In order to identify medication error chains, each reported medication incident was allocated to the relevant stage of the hospital medication use process. Only 25.8% of the reported medication errors were detected before they propagated through the medication use process. The majority of medication errors (74.2%) formed an error chain encompassing two or more stages. The most frequent error chain comprised preparation up to and including medication administration (45.2%). "Non-consideration of documentation/prescribing" during drug preparation was the most frequent contributor to "wrong dose" during medication administration. Medication error chains provide important insights for detecting and stopping medication errors before they reach the patient. Existing and new safety barriers need to be extended to interrupt error chains and to improve patient safety. © 2014, The American College of Clinical Pharmacology.

  20. Assessing the Progress of Trapped-Ion Processors Towards Fault-Tolerant Quantum Computation

    Science.gov (United States)

    Bermudez, A.; Xu, X.; Nigmatullin, R.; O'Gorman, J.; Negnevitsky, V.; Schindler, P.; Monz, T.; Poschinger, U. G.; Hempel, C.; Home, J.; Schmidt-Kaler, F.; Biercuk, M.; Blatt, R.; Benjamin, S.; Müller, M.

    2017-10-01

    A quantitative assessment of the progress of small prototype quantum processors towards fault-tolerant quantum computation is a problem of current interest in experimental and theoretical quantum information science. We introduce a necessary and fair criterion for quantum error correction (QEC), which must be achieved in the development of these quantum processors before their sizes become large enough for the well-known QEC threshold to be meaningful. We apply this criterion to benchmark the ongoing effort in implementing QEC with topological color codes using trapped-ion quantum processors and, more importantly, to guide the future hardware developments that will be required in order to demonstrate beneficial QEC with small topological quantum codes. In doing so, we present a thorough description of a realistic trapped-ion toolbox for QEC and a physically motivated error model that goes beyond standard simplifications in the QEC literature. We focus on laser-based quantum gates realized in two-species trapped-ion crystals in high-optical-aperture segmented traps. Our large-scale numerical analysis shows that, with the foreseen technological improvements described here, this platform is a very promising candidate for fault-tolerant quantum computation.

  1. An Active Fault-Tolerant Control Method of Unmanned Underwater Vehicles with Continuous and Uncertain Faults

    Directory of Open Access Journals (Sweden)

    Daqi Zhu

    2008-11-01

    This paper introduces a novel thruster fault diagnosis and accommodation system for open-frame underwater vehicles with abrupt faults. The proposed system consists of two subsystems: a fault diagnosis subsystem and a fault accommodation subsystem. In the fault diagnosis subsystem, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network is used to realize on-line fault identification and the weighting matrix computation. The fault accommodation subsystem uses a control algorithm based on the weighted pseudo-inverse to find the solution of the control allocation problem. To illustrate the effectiveness of the proposed method, a simulation example under multiple uncertain abrupt faults is given in the paper.
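
    Weighted pseudo-inverse control allocation has a compact closed form, u = W^-1 B^T (B W^-1 B^T)^-1 tau; a minimal sketch for an over-actuated vehicle (the thruster configuration matrix and weights are invented):

    ```python
    import numpy as np

    def allocate(B, tau, w):
        """Weighted pseudo-inverse allocation: distribute the commanded
        generalized force tau over thrusters, penalizing heavily weighted
        (e.g. faulty) thrusters. Solves min u^T W u  s.t.  B u = tau."""
        W_inv = np.diag(1.0 / np.asarray(w, dtype=float))
        return W_inv @ B.T @ np.linalg.solve(B @ W_inv @ B.T, tau)

    # 2 DOF (surge, yaw) produced by 4 thrusters; invented configuration matrix.
    B = np.array([[1.0,  1.0, 1.0,  1.0],
                  [0.5, -0.5, 0.3, -0.3]])
    tau = np.array([2.0, 0.1])

    u_healthy = allocate(B, tau, w=[1, 1, 1, 1])
    u_faulty = allocate(B, tau, w=[1, 1, 1e6, 1])  # thruster 3 derated after fault
    print(u_healthy, u_faulty)                     # load shifts off thruster 3
    ```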

  2. A human error taxonomy and its application to an automatic method of accident analysis

    International Nuclear Information System (INIS)

    Matthews, R.H.; Winter, P.W.

    1983-01-01

    Commentary is provided on the quantification aspects of human factors analysis in risk assessment. Methods for quantifying human error in a plant environment are discussed and their application to system quantification explored. Such a programme entails consideration of the database and of a taxonomy of factors contributing to human error. A multi-levelled approach to system quantification is proposed, with each level treated differently, drawing on the advantages of different techniques within the fault/event tree framework. Management, as controller of organization, planning and procedure, is assigned a dominant role. (author)

  3. Machine-learning-assisted correction of correlated qubit errors in a topological code

    Directory of Open Access Journals (Sweden)

    Paul Baireuther

    2018-01-01

    A fault-tolerant quantum computation requires an efficient means to detect and correct errors that accumulate in encoded quantum information. In the context of machine learning, neural networks are a promising new approach to quantum error correction. Here we show that a recurrent neural network can be trained, using only experimentally accessible data, to detect errors in a widely used topological code, the surface code, with a performance above that of the established minimum-weight perfect matching (or "blossom") decoder. The performance gain is achieved because the neural network decoder can detect correlations between bit-flip (X) and phase-flip (Z) errors. The machine learning algorithm adapts to the physical system, hence no noise model is needed. The long short-term memory layers of the recurrent neural network maintain their performance over a large number of quantum error correction cycles, making it a practical decoder for forthcoming experimental realizations of the surface code.

  4. Intraplate seismicity along the Gedi Fault in Kachchh rift basin of western India

    Science.gov (United States)

    Joshi, Vishwa; Rastogi, B. K.; Kumar, Santosh

    2017-11-01

    The Kachchh rift basin is located on the western continental margin of India and has a history of large to moderate intraplate earthquakes with M ≥ 5. During the past two centuries, two large earthquakes of Mw 7.8 (1819) and Mw 7.7 (2001) have occurred in the Kachchh region, the latter with an epicenter near Bhuj. The aftershock activity of the 2001 Bhuj earthquake is still ongoing, with migrating seismicity. Initially, epicenters migrated towards the east and northeast within the Kachchh region but, since 2007, seismicity has also migrated to the south. The triggered faults are mostly within 100 km, and some up to 200 km, of the epicentral area of the mainshock. Most of these faults trend E-W, and some are transverse. Some faults generate earthquakes down to the Moho depth, whereas others are active only within the upper crust. The Gedi Fault, situated about 50 km northeast of the 2001 mainshock epicenter, triggered the largest earthquake, of Mw 5.6, in 2006. We have carried out detailed seismological studies to evaluate the seismic potential of the Gedi Fault. We relocated 331 earthquakes with HypoDD to reduce location errors, and used the relocated events to estimate the b value, p value, and fractal correlation dimension Dc of the fault zone. The present study indicates that all the events along the Gedi Fault are shallow, with focal depths of less than 20 km. The estimated b value shows that the Gedi aftershock sequence can be classified as a Mogi type 2 sequence, and the p value suggests a relatively slow decay of aftershocks. The fault plane solutions of some selected events of Mw > 3.5 are examined, and the activity of the Gedi Fault is assessed from the results of active fault studies as well as GPS and InSAR results. All these results are critically examined to evaluate the material properties and seismic potential of the Gedi Fault, which may be useful for seismic hazard assessment of the region.
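
    The b value mentioned above is conventionally estimated with Aki's maximum-likelihood formula; a minimal sketch on a synthetic catalog (not the Gedi data):

    ```python
    import numpy as np

    def b_value_aki(mags, mc, dm=0.0):
        """Aki (1965) maximum-likelihood b value for magnitudes >= mc,
        with Utsu's correction dm/2 for binned (discretized) magnitudes."""
        m = np.asarray(mags)
        m = m[m >= mc]
        return np.log10(np.e) / (m.mean() - (mc - dm / 2))

    # synthetic Gutenberg-Richter catalog with true b = 1.0 above Mc = 1.5
    rng = np.random.default_rng(1)
    mags = 1.5 + rng.exponential(scale=1 / (1.0 * np.log(10)), size=2000)
    print(f"b = {b_value_aki(mags, mc=1.5):.2f}")   # close to 1.0
    ```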

  5. Summary: Experimental validation of real-time fault-tolerant systems

    Science.gov (United States)

    Iyer, R. K.; Choi, G. S.

    1992-01-01

    Testing and validation of real-time systems is always difficult to perform, since neither the error generation process nor the fault propagation problem is easy to comprehend. There is no substitute for results based on actual measurements and experimentation; such results are essential for developing a rational basis for the evaluation and validation of real-time systems. However, with physical experimentation, controllability and observability are limited to the external instrumentation that can be hooked up to the system under test, which is a difficult, if not impossible, task for a complex system. Also, physical hardware must exist before such measurement experiments can be set up. A simulation approach, on the other hand, allows flexibility that is unequaled by any other existing method for system evaluation. A simulation methodology for system evaluation was successfully developed and implemented, and the environment was demonstrated using existing real-time avionic systems. The research was oriented toward evaluating the impact of permanent and transient faults in aircraft control computers. Results were obtained for the Bendix BDX 930 system and the Hamilton Standard EEC131 jet engine controller. The studies showed that simulated fault injection is valuable, in the design stage, for evaluating the susceptibility of computing systems to different types of failures.

  6. Faster quantum chemistry simulation on fault-tolerant quantum computers

    International Nuclear Information System (INIS)

    Cody Jones, N; McMahon, Peter L; Yamamoto, Yoshihisa; Whitfield, James D; Yung, Man-Hong; Aspuru-Guzik, Alán; Van Meter, Rodney

    2012-01-01

    Quantum computers can in principle simulate quantum physics exponentially faster than their classical counterparts, but some technical hurdles remain. We propose methods which substantially improve the performance of a particular form of simulation, ab initio quantum chemistry, on fault-tolerant quantum computers; these methods generalize readily to other quantum simulation problems. Quantum teleportation plays a key role in these improvements and is used extensively as a computing resource. To improve execution time, we examine techniques for constructing arbitrary gates which perform substantially faster than circuits based on the conventional Solovay–Kitaev algorithm (Dawson and Nielsen 2006 Quantum Inform. Comput. 6 81). For a given approximation error ϵ, arbitrary single-qubit gates can be produced fault-tolerantly, using a restricted set of gates, in time O(log(1/ϵ)) or O(log log(1/ϵ)); with sufficient parallel preparation of ancillas, constant average depth is possible using a method we call programmable ancilla rotations. Moreover, we construct and analyze efficient implementations of first- and second-quantized simulation algorithms using the fault-tolerant arbitrary gates and other techniques, such as implementing various subroutines in constant time. A specific example we analyze is the ground-state energy calculation for lithium hydride. (paper)

  7. Fault-Tolerant Robot Programming through Simulation with Realistic Sensor Models

    Directory of Open Access Journals (Sweden)

    Axel Waggershauser

    2008-11-01

    We introduce a simulation system for mobile robots that allows a realistic interaction of multiple robots in a common environment. The simulated robots are closely modeled after robots from the EyeBot family and have an identical application programmer interface. The simulation supports driving commands at two levels of abstraction as well as numerous sensors such as shaft encoders, infrared distance sensors, and a compass. Simulation of on-board digital cameras via synthetic images allows the use of image processing routines for robot control within the simulation. Specific error models for actuators, distance sensors, the camera sensor, and wireless communication have been implemented. Progressively increasing the error levels for an application program allows its robustness and fault-tolerance to be tested and improved.

  8. Information Based Fault Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2008-01-01

    Fault detection and isolation (FDI) of parametric faults in dynamic systems is considered in this paper. An active fault diagnosis (AFD) approach is applied. The fault diagnosis is investigated with respect to different information levels, from the external inputs to the systems. These ...

  9. Human Factors Reliability Analysis for Assuring Nuclear Safety Using Fuzzy Fault Tree

    International Nuclear Information System (INIS)

    Eisawy, E.A.-F. I.; Sallam, H.

    2016-01-01

    In order to ensure effective prevention of harmful events, the risk assessment process cannot ignore the role of humans in the dynamics of accidental events, and thus the seriousness of the consequences that may derive from them. Human reliability analysis (HRA) involves the use of qualitative and quantitative methods to assess the human contribution to risk. HRA techniques have been developed to provide human error probability values associated with operators' tasks, to be included within the broader context of system risk assessment, and are aimed at reducing the probability of accidental events. Fault tree analysis (FTA) is a graphical model that displays the various combinations of equipment failures and human errors that can result in the main system failure of interest, and a risk analysis technique to assess the likelihood (in a probabilistic context) of an event. The objective data available to estimate this likelihood are often missing and, even when available, subject to incompleteness and imprecision or vagueness. Without addressing incompleteness and imprecision in the available data, FTA and the subsequent risk analysis give a false impression of precision and correctness that undermines the overall credibility of the process. To solve this problem, qualitative justification in the context of failure possibilities can be used as an alternative to quantitative justification. In this paper, we introduce the fuzzy reliability approach as a solution to these drawbacks of fault tree analysis. A new fuzzy fault tree method is proposed for the analysis of human reliability, based on fuzzy sets, fuzzy operations (t-norms, co-norms, defuzzification), and fuzzy failure probability. (author)
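
    With triangular fuzzy numbers, the classical AND/OR gate formulas can be applied component-wise because they are monotone in each argument; a minimal sketch (the event probabilities are invented):

    ```python
    import numpy as np

    # Triangular fuzzy probabilities (low, mode, high). Because AND/OR gate
    # formulas are monotone in each argument, they can be applied component-wise
    # to the triangle parameters. Event values below are invented.
    def f_and(*events):
        """AND gate: product of fuzzy failure probabilities."""
        return tuple(np.prod([e[i] for e in events]) for i in range(3))

    def f_or(*events):
        """OR gate: 1 - product(1 - p) over fuzzy failure probabilities."""
        return tuple(1 - np.prod([1 - e[i] for e in events]) for i in range(3))

    def defuzzify(tri):
        """Centroid of a triangular fuzzy number."""
        return sum(tri) / 3

    operator_slip = (0.010, 0.050, 0.100)
    alarm_missed  = (0.001, 0.010, 0.020)
    valve_fails   = (0.002, 0.005, 0.010)

    top = f_or(f_and(operator_slip, alarm_missed), valve_fails)
    print(top, defuzzify(top))
    ```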

  10. Fault Tolerant Feedback Control

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, H.

    2001-01-01

    An architecture for fault tolerant feedback controllers based on the Youla parameterization is suggested. It is shown that the Youla parameterization will give a residual vector directly in connection with the fault diagnosis part of the fault tolerant feedback controller. It turns out that there is a separation between the feedback controller and the fault tolerant part. The closed loop feedback properties are handled by the nominal feedback controller, and the fault tolerant part is handled by the design of the Youla parameter. The design of the fault tolerant part will not affect the design of the nominal feedback controller.

  11. Prescribing practices amid the OxyContin crisis: examining the effect of print media coverage on opioid prescribing among physicians.

    Science.gov (United States)

    Borwein, Alexandra; Kephart, George; Whelan, Emma; Asbridge, Mark

    2013-12-01

    The pain medication OxyContin (hereafter referred to as oxycodone extended release) has been the subject of sustained, and largely negative, media attention in recent years. We sought to determine whether media coverage of oxycodone extended release in North American newspapers has led to changes in prescribing of the drug in Nova Scotia, Canada. An interrupted time-series design examined the effect of media attention on physicians' monthly prescribing of opioids. The outcome measures were, for each physician, the monthly proportions of all opioids prescribed and the proportion of strong opioids prescribed that were for oxycodone extended release. The exposure of interest was media attention defined as the number of articles published each month in 27 North American newspapers. Variations in media effects by provider characteristics (specialty, prescribing volume, and region) were assessed. Within-provider changes in the prescribing of oxycodone extended release in Nova Scotia were observed, and they followed changes in media coverage. Oxycodone extended release prescribing rose steadily prior to receiving media attention. Following peak media attention in the United States, the prescribing of oxycodone extended release slowed. Likewise, following peak coverage in Canadian newspapers, the prescribing of oxycodone extended release declined. These patterns were observed across prescriber specialties and by prescriber volume, though the magnitude of change in prescribing varied. This study demonstrates that print media reporting of oxycodone extended release in North American newspapers, and its continued portrayal as a social problem, coincided with reductions in the prescribing of oxycodone extended release by physicians in Nova Scotia. Copyright © 2013 American Pain Society. Published by Elsevier Inc. All rights reserved.

  12. Data-driven design of fault diagnosis and fault-tolerant control systems

    CERN Document Server

    Ding, Steven X

    2014-01-01

    Data-driven Design of Fault Diagnosis and Fault-tolerant Control Systems presents basic statistical process monitoring, fault diagnosis, and control methods, and introduces advanced data-driven schemes for the design of fault diagnosis and fault-tolerant control systems catering to the needs of dynamic industrial processes. With ever increasing demands for reliability, availability and safety in technical processes and assets, process monitoring and fault-tolerance have become important issues surrounding the design of automatic control systems. This text shows the reader how, thanks to the rapid development of information technology, key techniques of data-driven and statistical process monitoring and control can now become widely used in industrial practice to address these issues. To allow for self-contained study and facilitate implementation in real applications, important mathematical and control theoretical knowledge and tools are included in this book. Major schemes are presented in algorithm form and...

  13. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    Science.gov (United States)

    Ma, Jian; Lu, Chen; Liu, Hongmei

    2015-01-01

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of the ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of heat exchanger fault diagnosis in practice. First, the actual measured parameters of the heat exchanger cannot effectively reflect the occurrence of faults, which are usually described by fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in the selection of initialization values. To solve these problems, this paper presents a fault-related parameter adaptive estimation method based on the strong tracking filter (STF) and a Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. A heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.
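
    The core strong-tracking idea, inflating the predicted covariance with a fading factor when innovations grow so the filter keeps up with abrupt parameter changes, can be sketched in a scalar linear setting (the fading rule below is a simplified stand-in for the full STF derivation):

    ```python
    import numpy as np

    def stf_scalar(zs, a=1.0, h=1.0, q=1e-4, r=0.05):
        """Scalar Kalman filter with a suboptimal fading factor: the predicted
        covariance is inflated whenever the innovation exceeds its expected
        variance, a simplified stand-in for the strong tracking filter."""
        x, p, est = 0.0, 1.0, []
        for z in zs:
            x_pred = a * x
            innov = z - h * x_pred
            s_expected = h * (a * p * a + q) * h + r
            lam = max(1.0, innov**2 / s_expected)     # fading factor >= 1
            p_pred = lam * a * p * a + q              # inflate on large innovations
            k = p_pred * h / (h * p_pred * h + r)
            x = x_pred + k * innov
            p = (1 - k * h) * p_pred
            est.append(x)
        return np.array(est)

    # fault-related parameter with an abrupt jump at t = 50 (synthetic data)
    rng = np.random.default_rng(0)
    truth = np.r_[np.full(50, 1.0), np.full(50, 1.6)]
    meas = truth + rng.normal(0, np.sqrt(0.05), 100)
    print(stf_scalar(meas)[[45, 55, 99]])   # re-converges quickly after the jump
    ```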

  14. Fault diagnosis for the heat exchanger of the aircraft environmental control system based on the strong tracking filter.

    Directory of Open Access Journals (Sweden)

    Jian Ma

    The aircraft environmental control system (ECS) is a critical aircraft system, which provides the appropriate environmental conditions to ensure the safe transport of air passengers and equipment. The functionality and reliability of the ECS have received increasing attention in recent years. The heat exchanger is a particularly significant component of the ECS, because its failure decreases the system's efficiency, which can lead to catastrophic consequences. Fault diagnosis of the heat exchanger is necessary to prevent risks. However, two problems hinder the implementation of heat exchanger fault diagnosis in practice. First, the actual measured parameters of the heat exchanger cannot effectively reflect the occurrence of faults, which are usually described by fault-related state parameters that cannot be measured directly. Second, both the traditional Extended Kalman Filter (EKF) and the EKF-based Double Model Filter have certain disadvantages, such as sensitivity to modeling errors and difficulties in the selection of initialization values. To solve these problems, this paper presents a fault-related parameter adaptive estimation method based on the strong tracking filter (STF) and a Modified Bayes classification algorithm for fault detection and failure mode classification of the heat exchanger, respectively. A heat exchanger fault simulation is conducted to generate fault data, through which the proposed methods are validated. The results demonstrate that the proposed methods are capable of providing accurate, stable, and rapid fault diagnosis of the heat exchanger.

  15. Distributed Fault-Tolerant Control of Networked Uncertain Euler-Lagrange Systems Under Actuator Faults.

    Science.gov (United States)

    Chen, Gang; Song, Yongduan; Lewis, Frank L

    2016-05-03

    This paper investigates the distributed fault-tolerant control problem of networked Euler-Lagrange systems with actuator and communication link faults. An adaptive fault-tolerant cooperative control scheme is proposed to achieve coordinated tracking control of networked uncertain Lagrange systems on a general directed communication topology which contains a spanning tree with the root node being the active target system. The proposed algorithm is capable of simultaneously compensating for actuator bias faults, partial loss-of-effectiveness actuation faults, communication link faults, model uncertainty, and external disturbances. The control scheme does not use any fault detection and isolation mechanism to detect, separate, and identify actuator faults online, which largely reduces the online computation and expedites the responsiveness of the controller. To validate the effectiveness of the proposed method, a test-bed of a multiple-robot-arm cooperative control system was developed for real-time verification. Experiments on the networked robot arms were conducted, and the results confirm the benefits and effectiveness of the proposed distributed fault-tolerant control algorithms.

  16. Reliability of measured data for pH sensor arrays with fault diagnosis and data fusion based on LabVIEW.

    Science.gov (United States)

    Liao, Yi-Hung; Chou, Jung-Chuan; Lin, Chin-Yi

    2013-12-13

    Fault diagnosis (FD) and data fusion (DF) technologies implemented in the LabVIEW program were used for a ruthenium dioxide pH sensor array. The purpose of the fault diagnosis and data fusion technologies is to increase the reliability of the measured data. Data fusion is a very useful statistical method for sensor arrays in many fields, while fault diagnosis is used to avoid sensor faults and measurement errors in the electrochemical measurement system. In this study, we therefore use fault diagnosis to remove any faulty sensors in advance and then proceed with data fusion in the sensor array. The average, self-adaptive and coefficient-of-variance data fusion methods are used. The pH electrodes are fabricated with a ruthenium dioxide (RuO2) sensing membrane deposited onto silicon substrates using a sputtering system, and eight RuO2 pH electrodes form the sensor array of this study.
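
    All three fusion rules are weighted means; a sketch of the plain average and a coefficient-of-variance-style variant that down-weights outlying sensors (the readings and the exact weighting are illustrative, not the paper's formulas):

    ```python
    import numpy as np

    # Fusion of readings from a pH sensor array after fault diagnosis has
    # removed faulty electrodes. Readings below are invented examples.
    readings = np.array([7.02, 7.05, 6.98, 7.11, 7.03, 7.00, 7.04])

    def fuse_average(x):
        return x.mean()

    def fuse_variance_weighted(x):
        """Weight each sensor inversely to its squared deviation from the
        batch mean, so outlying (higher-variance) sensors count less."""
        dev2 = (x - x.mean()) ** 2 + 1e-12    # avoid division by zero
        w = (1.0 / dev2) / np.sum(1.0 / dev2)
        return np.sum(w * x)

    print(fuse_average(readings), fuse_variance_weighted(readings))
    ```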

  17. Reliability of Measured Data for pH Sensor Arrays with Fault Diagnosis and Data Fusion Based on LabVIEW

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liao

    2013-12-01

    Fault diagnosis (FD) and data fusion (DF) technologies implemented in the LabVIEW program were used for a ruthenium dioxide pH sensor array. The purpose of the fault diagnosis and data fusion technologies is to increase the reliability of the measured data. Data fusion is a very useful statistical method for sensor arrays in many fields, while fault diagnosis is used to avoid sensor faults and measurement errors in the electrochemical measurement system. In this study, we therefore use fault diagnosis to remove any faulty sensors in advance and then proceed with data fusion in the sensor array. The average, self-adaptive and coefficient-of-variance data fusion methods are used. The pH electrodes are fabricated with a ruthenium dioxide (RuO2) sensing membrane deposited onto silicon substrates using a sputtering system, and eight RuO2 pH electrodes form the sensor array of this study.

  18. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2010-02-01

    A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculations of moment, and the result of the fault identification is used to find the solution of the control allocation problem. The approach achieves continuous fault identification for the underwater vehicle. Results from experiments are provided to illustrate the performance of the proposed method in uncertain continuous fault situations.

  19. Fault-tolerant Control of Unmanned Underwater Vehicles with Continuous Faults: Simulations and Experiments

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2009-12-01

    A novel thruster fault diagnosis and accommodation method for open-frame underwater vehicles is presented in the paper. The proposed system consists of two units: a fault diagnosis unit and a fault accommodation unit. In the fault diagnosis unit, an ICMAC (Improved Credit Assignment Cerebellar Model Articulation Controller) neural network information fusion model is used to realize fault identification of the thruster. The fault accommodation unit is based on direct calculations of moment, and the result of the fault identification is used to find the solution of the control allocation problem. The approach achieves continuous fault identification for the underwater vehicle. Results from experiments are provided to illustrate the performance of the proposed method in uncertain continuous fault situations.

  20. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    Directory of Open Access Journals (Sweden)

    Hazlee Azil Illias

    It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing maintenance costs and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions, because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults, but it is believed that their accuracy can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques had not been combined in previously reported work, this work proposes a combination of ANN and various PSO techniques to predict transformer incipient faults. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from actual fault diagnosis, an existing diagnosis method, and ANN alone. The results from the proposed methods were also compared with previously reported work to show the improvement. The proposed ANN-Evolutionary PSO method was found to yield a higher percentage of correct identification of transformer fault types than the existing diagnosis method and previously reported works.
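
    The PSO half of the proposed ANN-PSO combination can be sketched generically; the objective below is a placeholder standing in for the ANN's classification error on DGA data, and the parameters are common defaults:

    ```python
    import numpy as np

    def pso(f, lo, hi, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Basic global-best particle swarm optimisation over the box [lo, hi]."""
        rng = np.random.default_rng(seed)
        dim = len(lo)
        x = rng.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest, pbest_f = x.copy(), np.array([f(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()       # global best position
        for _ in range(iters):
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            fx = np.array([f(p) for p in x])
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    # placeholder objective standing in for ANN misclassification of DGA data
    best, err = pso(lambda p: np.sum((p - 0.5) ** 2), lo=np.zeros(4), hi=np.ones(4))
    print(best, err)
    ```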

  1. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    Science.gov (United States)

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

    It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing maintenance costs and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions, because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults, but it is believed that their accuracy can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques had not been combined in previously reported work, this work proposes a combination of ANN and various PSO techniques to predict transformer incipient faults. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from actual fault diagnosis, an existing diagnosis method, and ANN alone. The results from the proposed methods were also compared with previously reported work to show the improvement. The proposed ANN-Evolutionary PSO method was found to yield a higher percentage of correct identification of transformer fault types than the existing diagnosis method and previously reported works.

  2. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques

    Science.gov (United States)

    2015-01-01

    It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing maintenance costs and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions, because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults, but it is believed that their accuracy can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques had not been combined in previously reported work, this work proposes a combination of ANN and various PSO techniques to predict transformer incipient faults. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from actual fault diagnosis, an existing diagnosis method, and ANN alone. The results from the proposed methods were also compared with previously reported work to show the improvement. The proposed ANN-Evolutionary PSO method was found to yield a higher percentage of correct identification of transformer fault types than the existing diagnosis method and previously reported works. PMID:26103634

  3. Three Least-Squares Minimization Approaches to Interpret Gravity Data Due to Dipping Faults

    Science.gov (United States)

    Abdelrahman, E. M.; Essa, K. S.

    2015-02-01

    We have developed three different least-squares minimization approaches to determine, successively, the depth, dip angle, and amplitude coefficient (related to the thickness and density contrast) of a buried dipping fault from first moving average residual gravity anomalies. By defining the zero-anomaly distance and the anomaly value at the origin of the moving average residual profile, the problem of depth determination is transformed into a constrained nonlinear gravity inversion. After estimating the depth of the fault, the dip angle is estimated by solving a nonlinear inverse problem. Finally, with the depth and dip angle known, the amplitude coefficient is determined from a linear equation. The method can be applied to residuals as well as to measured gravity data because it uses the moving average residual gravity anomalies to estimate the model parameters of the faulted structure. The proposed method was tested on noise-corrupted synthetic and real gravity data. In the case of the synthetic data, good results are obtained even when errors are present in the zero-anomaly distance and the anomaly value at the origin, and when the origin is determined only approximately. In the case of practical data (the Bouguer anomaly over the Gazal fault, south Aswan, Egypt), the fault parameters obtained are in good agreement with the actual ones and with those given in the published literature.
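
    The flavor of the inversion can be illustrated on the simpler special case of a vertical fault, whose anomaly has the standard arctangent form g(x) = A(π/2 + arctan(x/z)); the paper's dipping-fault model and its successive three-step estimation are more general than this joint two-parameter fit:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Illustration only: gravity anomaly of a vertical fault (thin semi-infinite
    # slab at depth z, amplitude coefficient A):
    #     g(x) = A * (pi/2 + arctan(x / z))
    def g_model(params, x):
        A, z = params
        return A * (np.pi / 2 + np.arctan(x / z))

    x = np.linspace(-50, 50, 101)                    # profile coordinates (km)
    rng = np.random.default_rng(3)
    data = g_model([10.0, 6.0], x) + rng.normal(0, 0.3, x.size)  # true A=10, z=6

    fit = least_squares(lambda p: g_model(p, x) - data, x0=[5.0, 3.0],
                        bounds=([0.0, 0.1], [np.inf, np.inf]))
    print(fit.x)   # recovered (A, z), close to (10, 6) despite the added noise
    ```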

  4. Errors in veterinary practice: preliminary lessons for building better veterinary teams.

    Science.gov (United States)

    Kinnison, T; Guile, D; May, S A

    2015-11-14

    Case studies in two typical UK veterinary practices were undertaken to explore teamwork, including interprofessional working. Each study involved one week of whole-team observation based on practice locations (reception, operating theatre), one week of shadowing six focus individuals (veterinary surgeons, veterinary nurses and administrators) and a final week consisting of semistructured interviews regarding teamwork. Errors emerged as a finding of the study. The definition of errors was inclusive, pertaining to inputs or omitted actions with potential adverse outcomes for patients, clients or the practice. The 40 identified instances could be grouped into clinical errors (dosing/drugs, surgical preparation, lack of follow-up), lost-item errors and, most frequently, communication errors (records, procedures, missing face-to-face communication, mistakes within face-to-face communication). The qualitative nature of the study allowed the underlying causes of the errors to be explored. In addition to some individual mistakes, system faults were identified as a major cause of errors. Observed examples and interviews demonstrated several challenges to interprofessional teamworking which may cause errors, including lack of time, part-time staff leading to frequent handovers, branch differences and individual veterinary surgeons' work preferences. Lessons are drawn for building better veterinary teams and implications for Disciplinary Proceedings are considered. British Veterinary Association.

  5. Faults Images

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Through the study of faults and their effects, much can be learned about the size and recurrence intervals of earthquakes. Faults also teach us about crustal...

  6. Are we setting about improving the safety of computerised prescribing in the right way? A workshop report

    Directory of Open Access Journals (Sweden)

    Arash Vaziri

    2009-09-01

    Conclusion Prescribing errors remain a major source of unnecessary morbidity and mortality and current systems do not appear to have significantly reduced this problem; nor has the extensive literature about how to reduce unnecessary alerts been taken into account. We need a new and more rational basis for the selection and presentation of alerts that would help, not hinder, the clinician's performance.

  7. Uncertainty quantification in a chemical system using error estimate-based mesh adaption

    International Nuclear Information System (INIS)

    Mathelin, Lionel; Le Maitre, Olivier P.

    2012-01-01

    This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then be used to adapt the stochastic discretization. Different anisotropic refinement strategies are proposed, leading to a cost-efficient tool suitable for multi-dimensional problems of moderate stochastic dimension. The adaptive strategies allow both for refinement and coarsening of the stochastic discretization, as needed to satisfy a prescribed error tolerance. The adaptive strategies were successfully tested on a model for the hydrogen oxidation in supercritical conditions having 8 random parameters. The proposed methodologies are however general enough to be also applicable for a wide class of models such as uncertain fluid flows. (authors)
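
    The refine/coarsen loop implied by the abstract can be sketched as follows: local error estimates drive adaptation of a one-dimensional stochastic discretization until a prescribed tolerance is met. The width-squared error indicator used here is a placeholder; in the paper, the local estimate comes from solving the two dual problems.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    lo: float
    hi: float

def local_error(cell: Cell) -> float:
    """Placeholder estimator: pretend the error scales with width squared."""
    return (cell.hi - cell.lo) ** 2

def adapt(cells, tol):
    """One sweep of error-driven refinement and coarsening."""
    out, i = [], 0
    while i < len(cells):
        c = cells[i]
        if local_error(c) > tol:                      # refine: split in two
            mid = 0.5 * (c.lo + c.hi)
            out += [Cell(c.lo, mid), Cell(mid, c.hi)]
        elif (local_error(c) < tol / 10 and i + 1 < len(cells)
              and local_error(cells[i + 1]) < tol / 10):
            out.append(Cell(c.lo, cells[i + 1].hi))   # coarsen: merge a pair
            i += 1
        else:
            out.append(c)
        i += 1
    return out

cells = [Cell(0.0, 1.0)]
for _ in range(8):
    cells = adapt(cells, tol=1e-3)
print(len(cells), "cells after adaptation")
```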

  8. A Design Method for Fault Reconfiguration and Fault-Tolerant Control of a Servo Motor

    Directory of Open Access Journals (Sweden)

    Jing He

    2013-01-01

    Full Text Available A design scheme that integrates fault reconfiguration and fault-tolerant position control is proposed for a nonlinear servo system with friction. Analysis of the non-linear friction torque and fault in the system is used to guide design of a sliding mode position controller. A sliding mode observer is designed to achieve fault reconfiguration based on the equivalence principle. Thus, active fault-tolerant position control of the system can be realized. A real-time simulation experiment is performed on a hardware-in-loop simulation platform. The results show that the system reconfigures well for both incipient and abrupt faults. Under the fault-tolerant control mechanism, the output signal for the system position can rapidly track given values without being influenced by faults.
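
    A minimal sketch of fault reconstruction with a sliding mode observer, in the spirit of the equivalence principle mentioned above: once the observer reaches the sliding surface, low-pass filtering the discontinuous output injection recovers the fault signal. The scalar servo model, gains and fault profile are illustrative assumptions, not the paper's design.

```python
import numpy as np

dt, T = 1e-4, 2.0
steps = int(T / dt)
a, rho, tau = 5.0, 3.0, 5e-3   # plant pole, switching gain, filter constant

x = xhat = f_hat = 0.0
for k in range(steps):
    t = k * dt
    u = np.sin(2 * np.pi * t)             # known control input
    f = 1.0 if t > 1.0 else 0.0           # abrupt fault at t = 1 s
    y = x                                 # measured output
    nu = rho * np.sign(y - xhat)          # discontinuous output injection
    x += dt * (-a * x + u + f)            # plant (explicit Euler)
    xhat += dt * (-a * xhat + u + nu)     # sliding mode observer
    f_hat += dt / tau * (nu - f_hat)      # filtered injection ~ equivalent fault

print(f"true fault 1.0, reconstructed ~ {f_hat:.2f}")
```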

  9. Active Fault Isolation in MIMO Systems

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad

    2014-01-01

    Active fault isolation of parametric faults in closed-loop MIMO systems is considered in this paper. The fault isolation consists of two steps. The first step is group-wise fault isolation, in which a group of faults is isolated from other possible faults in the system. This group-wise isolation is based directly on the input/output signals applied for the fault detection, and it is guaranteed that the fault group includes the fault that has occurred in the system. The second step is individual fault isolation within the fault group. Both types of isolation are obtained by applying dedicated...

  10. Fault Features Extraction and Identification based Rolling Bearing Fault Diagnosis

    International Nuclear Information System (INIS)

    Qin, B; Sun, G D; Zhang, L Y; Wang, J G; Hu, J

    2017-01-01

    For fault classification models based on the extreme learning machine (ELM), the diagnostic accuracy and stability for rolling bearings are greatly influenced by a critical parameter, the number of nodes in the hidden layer of the ELM. An adaptive adjustment strategy based on variational mode decomposition, permutation entropy, and the kernel extreme learning machine is proposed to determine this tunable parameter. First, the vibration signals are measured and then decomposed into different modes by variational mode decomposition. The fault features of each mode are then assembled into a high-dimensional feature vector set using permutation entropy. Second, the ELM output function is expressed through the inner product of a Gaussian kernel function to adaptively determine the number of hidden layer nodes. Finally, the high-dimensional feature vector set is used as the input to establish the kernel ELM rolling bearing fault classification model, and the classification and identification of different fault states of rolling bearings are carried out. In comparison with fault classification methods based on the support vector machine and ELM, the experimental results show that the proposed method has higher classification accuracy and better generalization ability. (paper)
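
    One building block named above, permutation entropy, is easy to make concrete: each decomposed mode is reduced to the Shannon entropy of its ordinal patterns. The sketch below implements the standard Bandt-Pompe definition; the order m and delay tau are typical illustrative choices, not the paper's settings.

```python
import math
from itertools import permutations

import numpy as np

def permutation_entropy(x, m=4, tau=1):
    """Normalised permutation entropy (Bandt-Pompe) of a 1-D signal."""
    x = np.asarray(x)
    n = len(x) - (m - 1) * tau
    counts = {p: 0 for p in permutations(range(m))}
    for i in range(n):
        window = x[i:i + m * tau:tau]             # embedded vector
        counts[tuple(np.argsort(window).tolist())] += 1
    probs = np.array([c for c in counts.values() if c > 0]) / n
    return float(-(probs * np.log(probs)).sum() / math.log(math.factorial(m)))

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
print("sine :", permutation_entropy(np.sin(40 * np.pi * t)))   # regular -> low
print("noise:", permutation_entropy(rng.normal(size=2000)))    # random -> ~1
```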

  11. Medication Errors in Pediatric Anesthesia: A Report From the Wake Up Safe Quality Improvement Initiative.

    Science.gov (United States)

    Lobaugh, Lauren M Y; Martin, Lizabeth D; Schleelein, Laura E; Tyler, Donald C; Litman, Ronald S

    2017-09-01

    Wake Up Safe is a quality improvement initiative of the Society for Pediatric Anesthesia that contains a deidentified registry of serious adverse events occurring in pediatric anesthesia. The aim of this study was to describe and characterize reported medication errors to find common patterns amenable to preventative strategies. In September 2016, we analyzed approximately 6 years' worth of medication error events reported to Wake Up Safe. Medication errors were classified by: (1) medication category; (2) error type by phase of administration: prescribing, preparation, or administration; (3) bolus or infusion error; (4) provider type and level of training; (5) harm as defined by the National Coordinating Council for Medication Error Reporting and Prevention; and (6) perceived preventability. From 2010 to the time of our data analysis in September 2016, 32 institutions had joined and submitted data on 2087 adverse events during 2,316,635 anesthetics. These reports contained details of 276 medication errors, which comprised the third highest category of events behind cardiac- and respiratory-related events. Medication errors most commonly involved opioids and sedative/hypnotics. When categorized by phase of handling, 30 events occurred during preparation, 67 during prescribing, and 179 during administration. The most common error type was accidental administration of the wrong dose (N = 84), followed by syringe swap (accidental administration of the wrong syringe, N = 49). Fifty-seven (21%) reported medication errors involved medications prepared as infusions as opposed to one-time bolus administrations. Medication errors were committed by all types of anesthesia providers, most commonly by attendings. Over 80% of reported medication errors reached the patient and more than half of these events caused patient harm. Fifteen events (5%) required a life-sustaining intervention. Nearly all cases (97%) were judged to be either likely or certainly preventable. Our findings

  12. Improving reliability of state estimation programming and computing suite based on analyzing a fault tree

    Directory of Open Access Journals (Sweden)

    Kolosok Irina

    2017-01-01

    Full Text Available Reliable information on the current state parameters, obtained by processing measurements from the SCADA and WAMS data acquisition systems with state estimation (SE) methods, is a precondition for successful management of an electric power system (EPS). SCADA and WAMS systems themselves, like any technical systems, are subject to failures and faults that lead to distortion and loss of information. The SE procedure makes it possible to detect erroneous measurements and therefore acts as a barrier preventing distorted information from penetrating into control problems. At the same time, the programming and computing suite (PCS) implementing the SE functions may itself deliver a wrong decision due to imperfect software algorithms and errors. In this study, we propose to use a fault tree to analyze the consequences of failures and faults in SCADA and WAMS and in the SE procedure itself. Based on the analysis of the obtained measurement information and on the SE results, we determine the fault tolerance level of the state estimation PCS, which characterizes its reliability.
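
    A minimal fault tree evaluator in the spirit of the analysis above: basic events (SCADA/WAMS failures, SE software errors) combine through AND/OR gates into a top event such as "wrong state estimate delivered". The tree structure and probabilities are illustrative assumptions, not the paper's model.

```python
def prob(node):
    """Evaluate a fault tree of nested ('AND'/'OR', child...) tuples.

    Leaves are basic-event probabilities; events are assumed independent.
    """
    if isinstance(node, float):
        return node
    gate, *children = node
    ps = [prob(c) for c in children]
    if gate == "AND":
        out = 1.0
        for p in ps:
            out *= p
        return out
    if gate == "OR":
        out = 1.0
        for p in ps:
            out *= 1.0 - p
        return 1.0 - out
    raise ValueError(f"unknown gate {gate!r}")

# Illustrative tree: wrong state estimate delivered to EPS control.
tree = ("OR",
        ("AND", 1e-3, 5e-2),   # distorted telemetry AND bad-data check misses it
        2e-4,                  # software/algorithm fault in the SE suite
        ("AND", 1e-2, 1e-2))   # simultaneous SCADA and WAMS data loss
print(f"top event probability ~ {prob(tree):.2e}")
```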

  13. A RCT evaluating the effectiveness and cost-effectiveness of academic detailing versus postal prescribing feedback in changing GP antibiotic prescribing.

    LENUS (Irish Health Repository)

    Naughton, Corina

    2009-10-01

    The aim of this study is to evaluate the effectiveness of academic detailing (AD) plus postal prescribing feedback versus postal prescribing feedback alone in reducing: (i) the overall rate of antibiotic prescribing; and (ii) the proportion of second-line antibiotic prescribing. In addition, the cost-effectiveness of an outreach prescriber adviser service versus a postal prescribing feedback service was evaluated.

  14. Guaranteed Cost Fault-Tolerant Control for Networked Control Systems with Sensor Faults

    Directory of Open Access Journals (Sweden)

    Qixin Zhu

    2015-01-01

    Full Text Available Owing to the large scale and complicated structure of networked control systems, time-varying sensor faults inevitably occur when the system operates in a poor environment. A guaranteed cost fault-tolerant controller for networked control systems with time-varying sensor faults is designed in this paper. Based on the time delay of the network transmission environment, the networked control systems with sensor faults are modeled as a discrete-time system with uncertain parameters, and the model is related to the boundary values of the sensor faults. Moreover, using Lyapunov stability theory and the linear matrix inequality (LMI) approach, the guaranteed cost fault-tolerant controller is verified to render such networked control systems asymptotically stable. Finally, simulations are included to demonstrate the theoretical results.
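
    As a hedged illustration of the LMI machinery invoked above, the snippet below uses cvxpy to check a discrete-time Lyapunov inequality A'PA - P < -Q for a toy closed-loop matrix. This feasibility check is only the basic ingredient inside guaranteed cost designs, not the paper's full controller synthesis.

```python
import cvxpy as cp
import numpy as np

A = np.array([[0.8, 0.2],
              [0.0, 0.9]])      # stable closed-loop matrix (toy example)
Q = np.eye(2)

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2),                 # P positive definite
               A.T @ P @ A - P << -Q]          # guaranteed decay condition
prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, "\nP =\n", P.value)
```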

  15. Timing analysis for embedded systems using non-preemptive EDF scheduling under bounded error arrivals

    Directory of Open Access Journals (Sweden)

    Michael Short

    2017-07-01

    Full Text Available Embedded systems consist of one or more processing units which are completely encapsulated by the devices under their control, and they often have stringent timing constraints associated with their functional specification. Previous research has considered the performance of different types of task scheduling algorithm and developed associated timing analysis techniques for such systems. Although preemptive scheduling techniques have traditionally been favored, rapid increases in processor speeds combined with improved insights into the behavior of non-preemptive scheduling techniques have seen an increased interest in their use for real-time applications such as multimedia, automation and control. However when non-preemptive scheduling techniques are employed there is a potential lack of error confinement should any timing errors occur in individual software tasks. In this paper, the focus is upon adding fault tolerance in systems using non-preemptive deadline-driven scheduling. Schedulability conditions are derived for fault-tolerant periodic and sporadic task sets experiencing bounded error arrivals under non-preemptive deadline scheduling. A timing analysis algorithm is presented based upon these conditions and its run-time properties are studied. Computational experiments show it to be highly efficient in terms of run-time complexity and competitive ratio when compared to previous approaches.
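
    The classical schedulability test that such fault-tolerant conditions extend can be sketched directly. The code below implements the Jeffay-Stanat-Martel necessary-and-sufficient test for non-preemptive EDF with sporadic tasks whose deadlines equal their periods (integer time assumed); the paper's bounded error-arrival term is omitted.

```python
def np_edf_schedulable(tasks):
    """tasks: iterable of (C, T) pairs with deadline == period, integer time."""
    tasks = sorted(tasks, key=lambda ct: ct[1])       # sort by period
    if sum(c / t for c, t in tasks) > 1.0:            # utilisation bound
        return False
    t1 = tasks[0][1]
    for i, (ci, ti) in enumerate(tasks):
        for L in range(t1 + 1, ti):                   # blocking check
            demand = ci + sum(((L - 1) // tj) * cj for cj, tj in tasks[:i])
            if demand > L:
                return False
    return True

print(np_edf_schedulable([(1, 4), (2, 6), (3, 12)]))  # True: feasible set
print(np_edf_schedulable([(3, 4), (2, 6), (3, 12)]))  # False: overloaded
```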

  16. k-means algorithm and mixture distributions for locating faults in power systems

    Energy Technology Data Exchange (ETDEWEB)

    Mora-Florez, J. [The Technological University of Pereira, La Julita, Ciudad Universitaria, Pereira, Risaralda (Colombia); Cormane-Angarita, J.; Ordonez-Plata, G. [The Industrial University of Santander (Colombia)

    2009-05-15

    Enhancement of power distribution system reliability requires considerable investment in studies and equipment; however, not all utilities can afford the time and money involved. Therefore, any strategy that improves reliability should be reflected directly in reductions of the interruption duration and frequency indexes (SAIDI and SAIFI). In this paper, an alternative solution to the problem of power service continuity associated with fault location is presented. A methodology of statistical nature based on finite mixtures is proposed: a statistical model is obtained from the magnitude of the voltage sag registered during a fault event, together with the network parameters and topology. The objective is to offer an economical and easily implemented alternative for developing strategies oriented to improve reliability by reducing restoration times in power distribution systems. In an application example in a power distribution system, the faulted zones were identified with low error rates. (author)
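
    A hedged sketch of the finite-mixture idea: voltage-sag magnitudes recorded at the substation are modelled as a Gaussian mixture with one component per network zone, and a new fault is assigned to the most probable component. The per-zone sag statistics below are synthetic assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Historical sags (per-unit retained voltage) from three hypothetical zones.
zone_a = rng.normal(0.35, 0.04, 200)    # faults close to the substation
zone_b = rng.normal(0.55, 0.05, 200)
zone_c = rng.normal(0.80, 0.04, 200)    # remote faults, shallower sags
X = np.concatenate([zone_a, zone_b, zone_c]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(X)
order = np.argsort(gmm.means_.ravel())  # map components to zones A, B, C

new_sag = np.array([[0.52]])            # sag magnitude of a new fault event
comp = gmm.predict(new_sag)[0]
print("faulted zone:", "ABC"[list(order).index(comp)])
```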

  17. Nests of red wood ants (Formica rufa-group) are positively associated with tectonic faults: a double-blind test.

    Science.gov (United States)

    Del Toro, Israel; Berberich, Gabriele M; Ribbons, Relena R; Berberich, Martin B; Sanders, Nathan J; Ellison, Aaron M

    2017-01-01

    Ecological studies often are subjected to unintentional biases, suggesting that improved research designs for hypothesis testing should be used. Double-blind ecological studies are rare but necessary to minimize sampling biases and omission errors, and improve the reliability of research. We used a double-blind design to evaluate associations between nests of red wood ants (Formica rufa, RWA) and the distribution of tectonic faults. We randomly sampled two regions in western Denmark to map the spatial distribution of RWA nests. We then calculated nest proximity to the nearest active tectonic faults. Red wood ant nests were eight times more likely to be found within 60 m of known tectonic faults than were random points in the same region but without nests. This pattern paralleled the directionality of the fault system, with NNE-SSW faults having the strongest associations with RWA nests. The nest locations were collected without knowledge of the spatial distribution of active faults; thus, we are confident that the results are neither biased nor artefactual. This example highlights the benefits of double-blind designs in reducing sampling biases, testing controversial hypotheses, and increasing the reliability of the conclusions of research.

  18. Vipava fault (Slovenia

    Directory of Open Access Journals (Sweden)

    Ladislav Placer

    2008-06-01

    Full Text Available During mapping of the already completed Razdrto-Senožeče section of the motorway and geologic surveying of construction operations on the trunk road between Razdrto and Vipava, in the northwestern part of the External Dinarides on the southwestern slope of Mt. Nanos (called Rebrnice), a steep NW-SE striking fault was recognized, situated between the Predjama and Raša faults. The fault was named the Vipava fault after the town of Vipava. An analysis of subrecent gravitational slips at Rebrnice indicates that they were probably associated with the activity of this fault. Unpublished results of a repeated levelling line along the regional road passing across the Vipava fault zone suggest its possible present activity. It would be meaningful to verify this by appropriate geodetic measurements and to study the actual gravitational slips at Rebrnice. The association between tectonics and gravitational slips in this and similar extreme cases in the Alps and Dinarides points to the need for integrated study of geologic processes.

  19. Robust Sensor Faults Reconstruction for a Class of Uncertain Linear Systems Using a Sliding Mode Observer: An LMI Approach

    International Nuclear Information System (INIS)

    Iskander, Boulaabi; Faycal, Ben Hmida; Moncef, Gossa; Anis, Sellami

    2009-01-01

    This paper presents a design method for a Sliding Mode Observer (SMO) for robust sensor fault reconstruction in systems with matched uncertainty. This class of uncertainty requires a known upper bound. The basic idea is to use the H∞ concept to design the observer, minimizing the effect of the uncertainty on the reconstruction of the sensor faults. Specifically, the equivalent output error injection concept from previous work on Fault Detection and Isolation (FDI) schemes is applied. The two problems of design and reconstruction can then be expressed and solved numerically via Linear Matrix Inequality (LMI) optimization. Finally, a numerical example is given to illustrate the validity and applicability of the proposed approach.

  20. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun

    1994-02-01

    In this work, the Fuzzy Signed Digraph (FSD) method, which has been researched for the fault diagnosis of industrial process plant systems, is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious fault elimination is also suggested and applied to the diagnosis. Using these methods, we could diagnose multiple faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independence of fault pattern, direct use of sensor values, and transparency of the fault propagation to the operators.

  1. Nuclear power plant pressurizer fault diagnosis using fuzzy signed-digraph and spurious faults elimination methods

    International Nuclear Information System (INIS)

    Park, Joo Hyun; Seong, Poong Hyun

    1994-01-01

    In this work, the Fuzzy Signed Digraph (FSD) method, which has been researched for the fault diagnosis of industrial process plant systems, is improved and applied to the fault diagnosis of the Kori-2 nuclear power plant pressurizer. A method for spurious fault elimination is also suggested and applied to the diagnosis. Using these methods, we could diagnose multiple faults of the pressurizer and could also eliminate the spurious faults of the pressurizer caused by other subsystems. Besides the multi-fault diagnosis and system-wide diagnosis capabilities, the proposed method has many merits such as real-time diagnosis capability, independence of fault pattern, direct use of sensor values, and transparency of the fault propagation to the operators. (Author)

  2. Roads towards fault-tolerant universal quantum computation

    Science.gov (United States)

    Campbell, Earl T.; Terhal, Barbara M.; Vuillot, Christophe

    2017-09-01

    A practical quantum computer must not merely store information, but also process it. To prevent errors introduced by noise from multiplying and spreading, a fault-tolerant computational architecture is required. Current experiments are taking the first steps toward noise-resilient logical qubits. But to convert these quantum devices from memories to processors, it is necessary to specify how a universal set of gates is performed on them. The leading proposals for doing so, such as magic-state distillation and colour-code techniques, have high resource demands. Alternative schemes, such as those that use high-dimensional quantum codes in a modular architecture, have potential benefits, but need to be explored further.

  3. Aircraft Engine Sensor/Actuator/Component Fault Diagnosis Using a Bank of Kalman Filters

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L. (Technical Monitor)

    2003-01-01

    In this report, a fault detection and isolation (FDI) system which utilizes a bank of Kalman filters is developed for aircraft engine sensor and actuator FDI in conjunction with the detection of component faults. This FDI approach uses multiple Kalman filters, each of which is designed based on a specific hypothesis for detecting a specific sensor or actuator fault. In the event that a fault does occur, all filters except the one using the correct hypothesis will produce large estimation errors, from which a specific fault is isolated. In the meantime, a set of parameters that indicate engine component performance is estimated for the detection of abrupt degradation. The performance of the FDI system is evaluated against a nonlinear engine simulation for various engine faults at cruise operating conditions. In order to mimic the real engine environment, the nonlinear simulation is executed not only at the nominal, or healthy, condition but also at aged conditions. When the FDI system designed at the healthy condition is applied to an aged engine, the effectiveness of the FDI system is impacted by the mismatch in the engine health condition. Depending on its severity, this mismatch can cause the FDI system to generate incorrect diagnostic results, such as false alarms and missed detections. To partially recover the nominal performance, two approaches, which incorporate information regarding the engine's aging condition in the FDI system, will be discussed and evaluated. The results indicate that the proposed FDI system is promising for reliable diagnostics of aircraft engines.
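
    The filter-bank logic can be sketched on a scalar surrogate of the engine model: each Kalman filter assumes a different sensor-bias hypothesis, and the hypothesis whose filter produces the smallest normalised residual energy is declared the fault. The model constants and fault magnitudes are illustrative assumptions, not the engine simulation of the report.

```python
import numpy as np

rng = np.random.default_rng(2)
a, q, r = 0.95, 1e-4, 1e-2     # scalar dynamics, process and measurement noise
hypotheses = {"healthy": 0.0, "bias +0.5": 0.5, "bias -0.5": -0.5}

# Simulate the true plant with a +0.5 sensor-bias fault present.
n, x, ys = 300, 1.0, []
for _ in range(n):
    x = a * x + rng.normal(scale=np.sqrt(q))
    ys.append(x + 0.5 + rng.normal(scale=np.sqrt(r)))

score = {}
for name, bias in hypotheses.items():
    xhat, P, energy = 1.0, 1.0, 0.0
    for y in ys:
        xhat, P = a * xhat, a * P * a + q      # predict
        S = P + r                              # innovation covariance
        nu = y - (xhat + bias)                 # residual under this hypothesis
        K = P / S
        xhat, P = xhat + K * nu, (1 - K) * P   # update
        energy += nu * nu / S                  # normalised residual energy
    score[name] = energy / n

print("isolated fault:", min(score, key=score.get), score)
```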

  4. A validation methodology for fault-tolerant clock synchronization

    Science.gov (United States)

    Johnson, S. C.; Butler, R. W.

    1984-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating an experimental implementation of the Software Implemented Fault Tolerance (SIFT) clock synchronization algorithm. The design proof of the algorithm defines the maximum skew between any two nonfaulty clocks in the system in terms of theoretical upper bounds on certain system parameters. The quantile to which each parameter must be estimated is determined by a combinatorial analysis of the system reliability. The parameters are measured by direct and indirect means, and upper bounds are estimated. A nonparametric method based on an asymptotic property of the tail of a distribution is used to estimate the upper bound of a critical system parameter. Although the proof process is very costly, it is extremely valuable when validating the crucial synchronization subsystem.

  5. Diagnosis and fault-tolerant control

    CERN Document Server

    Blanke, Mogens; Lunze, Jan; Staroswiecki, Marcel

    2016-01-01

    Fault-tolerant control aims at a gradual shutdown response in automated systems when faults occur. It satisfies the industrial demand for enhanced availability and safety, in contrast to traditional reactions to faults, which bring about sudden shutdowns and loss of availability. The book presents effective model-based analysis and design methods for fault diagnosis and fault-tolerant control. Architectural and structural models are used to analyse the propagation of the fault through the process, to test the fault detectability and to find the redundancies in the process that can be used to ensure fault tolerance. It also introduces design methods suitable for diagnostic systems and fault-tolerant controllers for continuous processes that are described by analytical models of discrete-event systems represented by automata. The book is suitable for engineering students, engineers in industry and researchers who wish to get an overview of the variety of approaches to process diagnosis and fault-tolerant contro...

  6. Optimal design of superconducting fault detector for superconductor triggered fault current limiters

    International Nuclear Information System (INIS)

    Yim, S.-W.; Kim, H.-R.; Hyun, O.-B.; Sim, J.; Park, K.B.; Lee, B.W.

    2008-01-01

    We have designed and tested a superconducting fault detector (SFD) for a 22.9 kV superconductor triggered fault current limiter (STFCL) using Au/YBCO thin films. The SFD detects a fault and commutates the current from the primary path to the secondary path of the STFCL. First, quench characteristics of the Au/YBCO thin films were investigated for various faults of different duration. The rated voltage of the Au/YBCO thin films was determined from the results, considering the stability of the Au/YBCO elements. Second, the recovery time to superconductivity after quench was measured for each fault case. In addition, the dependence of the recovery characteristics on the number and dimensions of Au/YBCO elements was investigated. Based on the results, an SFD was designed, fabricated and tested. The SFD successfully detected a fault current and carried out the line commutation. Its recovery time was confirmed to be less than 0.5 s, satisfying the reclosing scheme of the Korea Electric Power Corporation (KEPCO) power grid.

  7. Off-fault tip splay networks: a genetic and generic property of faults indicative of their long-term propagation, and a major component of off-fault damage

    Science.gov (United States)

    Perrin, C.; Manighetti, I.; Gaudemer, Y.

    2015-12-01

    Faults grow over the long term by accumulating displacement and lengthening, i.e., propagating laterally. We use fault maps and fault propagation evidence available in the literature to examine geometrical relations between parent faults and off-fault splays. The population includes 47 worldwide crustal faults with lengths from millimeters to thousands of kilometers and of different slip modes. We show that fault splays form adjacent to any propagating fault tip, whereas they are absent at non-propagating fault ends. Independent of parent fault length, slip mode, context, etc., tip splay networks have a similar fan shape widening in the direction of long-term propagation, a similar relative length and width (~30% and ~10% of parent fault length, respectively), and a similar range of mean angles to the parent fault (10-20°). Tip splays more commonly develop on one side only of the parent fault. We infer that tip splay networks are a genetic and a generic property of faults indicative of their long-term propagation. We suggest that they represent the most recent damage off the parent fault, formed during the most recent phase of fault lengthening. The scaling relation between parent fault length and width of tip splay network implies that damage zones enlarge as parent fault length increases. Elastic properties of host rocks might thus be modified at large distances away from a fault, up to 10% of its length. During an earthquake, a significant fraction of coseismic slip and stress is dissipated into the permanent damage zone that surrounds the causative fault. We infer that coseismic dissipation might occur away from a rupture zone as far as a distance of 10% of the length of its causative fault. Coseismic deformations and stress transfers might thus be significant in broad regions about principal rupture traces. This work has been published in Comptes Rendus Geoscience under doi:10.1016/j.crte.2015.05.002 (http://www.sciencedirect.com/science/article/pii/S1631071315000528).

  8. Data-based fault-tolerant control for affine nonlinear systems with actuator faults.

    Science.gov (United States)

    Xie, Chun-Hua; Yang, Guang-Hong

    2016-09-01

    This paper investigates the fault-tolerant control (FTC) problem for unknown nonlinear systems with actuator faults including stuck, outage, bias and loss of effectiveness. The upper bounds of stuck faults, bias faults and loss of effectiveness faults are unknown. A new data-based FTC scheme is proposed. It consists of the online estimations of the bounds and a state-dependent function. The estimations are adjusted online to compensate automatically the actuator faults. The state-dependent function solved by using real system data helps to stabilize the system. Furthermore, all signals in the resulting closed-loop system are uniformly bounded and the states converge asymptotically to zero. Compared with the existing results, the proposed approach is data-based. Finally, two simulation examples are provided to show the effectiveness of the proposed approach. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  9. Development of safety analysis and constraint detection techniques for process interaction errors

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Chin-Feng, E-mail: csfanc@saturn.yzu.edu.tw [Computer Science and Engineering Dept., Yuan-Ze University, Taiwan (China); Tsai, Shang-Lin; Tseng, Wan-Hui [Computer Science and Engineering Dept., Yuan-Ze University, Taiwan (China)

    2011-02-15

    Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, which may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation by a logic process. We call them 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors by checking conflicting pre-conditions and post-conditions among interacting processes. Thus, far-fetched but highly risky interaction scenarios involving interpretation errors can be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.

  10. Development of safety analysis and constraint detection techniques for process interaction errors

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Tsai, Shang-Lin; Tseng, Wan-Hui

    2011-01-01

    Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, which may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation by a logic process. We call them 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors by checking conflicting pre-conditions and post-conditions among interacting processes. Thus, far-fetched but highly risky interaction scenarios involving interpretation errors can be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.

  11. RECENT GEODYNAMICS OF FAULT ZONES: FAULTING IN REAL TIME SCALE

    Directory of Open Access Journals (Sweden)

    Yu. O. Kuzmin

    2014-01-01

    Full Text Available Recent deformation processes taking place in real time are analyzed on the basis of data on fault zones which were collected by long-term detailed geodetic surveys using field methods and satellite monitoring. A new category of recent crustal movements is described and termed parametrically induced tectonic strain in fault zones. It is shown that in fault zones located in seismically active and aseismic regions, super-intensive displacements of the crust (5 to 7 cm per year, i.e. 5 to 7·10–5 per year) occur due to very small external impacts of natural or technogenic/industrial origin. The spatial discreteness of anomalous deformation processes is established along the strike of the regional Rechitsky fault in the Pripyat basin. It is concluded that recent anomalous activity of fault zones needs to be taken into account when defining regional regularities of geodynamic processes on the basis of real-time measurements. The paper presents results of analyses of data collected by long-term (20 to 50 years) geodetic surveys in the highly seismically active regions of Kopetdag, Kamchatka and California. Instrumental geodetic measurements of recent vertical and horizontal displacements in fault zones show deformations that 'paradoxically' deviate from the inherited movements of past geological periods. In terms of recent geodynamics, the 'paradoxes' of high and low strain velocities are related to the reliable empirical fact that extremely high local deformation velocities (about 10–5 per year and above) occur in fault zones against the background of slow regional deformations whose velocities are 2 to 3 orders of magnitude lower. Very low average annual velocities of horizontal deformation are recorded in the seismic regions of Kopetdag and Kamchatka and in the San Andreas fault zone; they amount to only 3 to 5 amplitudes of the earth tidal deformations per year. A 'fault

  12. Implementing nurse prescribing: a case study in diabetes.

    Science.gov (United States)

    Stenner, Karen; Carey, Nicola; Courtenay, Molly

    2010-03-01

    This paper is a report of a study exploring the views of nurses and team members on the implementation of nurse prescribing in diabetes services. Nurse prescribing is adopted as a means of improving service efficiency, particularly where demand outstrips resources. Although factors that support nurse prescribing have been identified, it is not known how these function within specific contexts. This is important as its uptake and use varies according to mode of prescribing and area of practice. A case study was undertaken in nine practice settings across England where nurses prescribed medicines for patients with diabetes. Thematic analysis was conducted on qualitative data from 31 semi-structured interviews undertaken between 2007 and 2008. Participants were qualified nurse prescribers, administrative staff, physicians and non-nurse prescribers. Nurses prescribed more often following the expansion of nurse independent prescribing rights in 2006. Initial implementation problems had been resolved and few current problems were reported. As nurses' roles were well-established, no major alterations to service provision were required to implement nurse prescribing. Access to formal and informal resources for support and training were available. Participants were accepting and supportive of this initiative to improve the efficiency of diabetes services. The main factors that promoted implementation of nurse prescribing in this setting were the ability to prescribe independently, acceptance of the prescribing role, good working relationships between doctors and nurses, and sound organizational and interpersonal support. The history of established nursing roles in diabetes care, and increasing service demand, meant that these diabetes services were primed to assimilate nurse prescribing.

  13. Fault-tolerant computing systems

    International Nuclear Information System (INIS)

    Dal Cin, M.; Hohl, W.

    1991-01-01

    Tests, Diagnosis and Fault Treatment were chosen as the guiding themes of the conference. However, the scope of the conference also included reliability, availability, safety and security issues in software and hardware systems. The sessions organized for the conference, which was completed by an industrial presentation, were: Keynote Address, Reconfiguration and Recovery, System Level Diagnosis, Voting and Agreement, Testing, Fault-Tolerant Circuits, Array Testing, Modelling, Applied Fault Tolerance, Fault-Tolerant Arrays and Systems, Interconnection Networks, Fault-Tolerant Software. One paper has been indexed separately in the database. (orig./HP)

  14. Commission errors of active intentions: the roles of aging, cognitive load, and practice.

    Science.gov (United States)

    Boywitt, C Dennis; Rummel, Jan; Meiser, Thorsten

    2015-01-01

    Performing an intended action when it needs to be withheld, for example, when temporarily prescribed medication is incompatible with the other medication, is referred to as commission errors of prospective memory (PM). While recent research indicates that older adults are especially prone to commission errors for finished intentions, there is a lack of research on the effects of aging on commission errors for still active intentions. The present research investigates conditions which might contribute to older adults' propensity to perform planned intentions under inappropriate conditions. Specifically, disproportionally higher rates of commission errors for still active intentions were observed in older than in younger adults with both salient (Experiment 1) and non-salient (Experiment 2) target cues. Practicing the PM task in Experiment 2, however, helped execution of the intended action in terms of higher PM performance at faster ongoing-task response times but did not increase the rate of commission errors. The results have important implications for the understanding of older adults' PM commission errors and the processes involved in these errors.

  15. Safety analyses of potential exposure in medical irradiation plants by Fuzzy Fault Tree

    International Nuclear Information System (INIS)

    Casamirra, Maddalena; Castiglia, Francesco; Giardina, Mariarosa; Tomarchio, Elio

    2008-01-01

    The results of Fuzzy Fault Tree (FFT) analyses of various accidental scenarios, which involve operators in potential exposures inside a High Dose Rate (HDR) remote after-loading system for use in brachytherapy, are reported. To carry out fault tree analyses by means of fuzzy probabilities, the TREEZZY2 computer code is used. Moreover, the HEART (Human Error Assessment and Reduction Technique) model, properly modified on the basis of the fuzzy approach, is employed to assess the impact of performance-shaping and error-promoting factors in the context of the accidental events. The assessment of potential dose values for some identified accidental scenarios allows a fuzzy uncertainty range in the potential dose estimate to be considered for a particular event. The availability of lower and upper limits makes it possible to evaluate optimization of the installation from the radiation protection point of view. The adequacy of the training and information programme for staff and patients (and their family members) and the effectiveness of behavioural rules and safety procedures were tested. Some recommendations on procedures and equipment to reduce the risk of radiological exposure are also provided. (author)
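
    A hedged sketch of the fuzzy fault tree arithmetic implied above: basic-event probabilities are triangular fuzzy numbers (low, mode, high), AND gates multiply them component-wise, and OR gates apply 1 - prod(1 - p) bound-wise. The events and numbers are illustrative, not those of the HDR study.

```python
def f_and(*ps):
    """AND gate: component-wise product of (low, mode, high) triples."""
    out = (1.0, 1.0, 1.0)
    for p in ps:
        out = tuple(a * b for a, b in zip(out, p))
    return out

def f_or(*ps):
    """OR gate: 1 - prod(1 - p), evaluated bound-wise."""
    comp = (1.0, 1.0, 1.0)
    for p in ps:
        comp = tuple(c * (1.0 - b) for c, b in zip(comp, p))
    return tuple(1.0 - c for c in comp)

# Illustrative basic events for one potential-exposure scenario.
stuck_source = (1e-4, 5e-4, 1e-3)   # source fails to retract
no_alarm     = (1e-3, 5e-3, 1e-2)   # area monitor fails to alarm
no_survey    = (5e-3, 1e-2, 5e-2)   # operator omits the survey-meter check

top = f_and(stuck_source, f_or(no_alarm, no_survey))
print("P(potential exposure) low/mode/high:", top)
```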

  16. Psychiatric Prescribers' Experiences With Doctor Shoppers.

    Science.gov (United States)

    Worley, Julie; Johnson, Mary; Karnik, Niranjan

    2015-01-01

    Doctor shopping is a primary method of prescription medication diversion. After opioids, benzodiazepines and stimulants are the next most common prescription medications used nonmedically. Studies have shown that patients who engage in doctor shopping find it fun, exciting, and easy to do. There is a lack of research on the prescriber's perspective on the phenomenon of doctor shopping. This study investigates the experiences of prescribers in psychiatry with patients who engage in doctor shopping. Fifteen prescribers including psychiatrists and psychiatric nurse practitioners working in outpatient psychiatry were interviewed to elicit detailed information about their experiences with patients who engage in doctor shopping. Themes found throughout the interview were that psychiatric prescribers' experience with patients who engage in doctor shopping includes (a) detecting red flags, (b) negative emotional responding, (c) addressing the patient and the problem, and (d) inconsistently implementing precautions. When red flags were detected when prescribing controlled drugs, prescribers in psychiatry experienced both their own negative emotional responses such as disappointment and resentment as well as the negative emotions of the patients such as anger and other extreme emotional responses. Psychiatric prescribers responded to patient's doctor shopping in a variety of ways such as changing their practice, discharging the patients or taking steps to not accept certain patients identified as being at risk for doctor shopping, as well as by talking to the patient and trying to offer them help. Despite experiencing doctor shopping, the prescribers inconsistently implemented precautionary measures such as checking prescription drug monitoring programs. © The Author(s) 2015.

  17. The San Andreas Fault and a Strike-slip Fault on Europa

    Science.gov (United States)

    1998-01-01

    The mosaic on the right of the south polar region of Jupiter's moon Europa shows the northern 290 kilometers (180 miles) of a strike-slip fault named Astypalaea Linea. The entire fault is about 810 kilometers (500 miles) long, the size of the California portion of the San Andreas fault on Earth, which runs from the California-Mexico border north to the San Francisco Bay. The left mosaic shows the portion of the San Andreas fault near California's San Francisco Bay that has been scaled to the same size and resolution as the Europa image. Each covers an area approximately 170 by 193 kilometers (105 by 120 miles). The red line marks the once active central crack of the Europan fault (right) and the line of the San Andreas fault (left). A strike-slip fault is one in which two crustal blocks move horizontally past one another, similar to two opposing lanes of traffic. The overall motion along the Europan fault seems to have followed a continuous narrow crack along the entire length of the feature, with a path resembling steps on a staircase, crossing zones which have been pulled apart. The images show that about 50 kilometers (30 miles) of displacement have taken place along the fault. Opposite sides of the fault can be reconstructed like a puzzle, matching the shape of the sides as well as older individual cracks and ridges that had been broken by its movements. Bends in the Europan fault have allowed the surface to be pulled apart. This pulling-apart along the fault's bends created openings through which warmer, softer ice from below Europa's brittle ice shell surface, or frozen water from a possible subsurface ocean, could reach the surface. This upwelling of material formed large areas of new ice within the boundaries of the original fault. A similar pulling apart phenomenon can be observed in the geological trough surrounding California's Salton Sea, and in Death Valley and the Dead Sea. In those cases, the pulled apart regions can include upwelled materials, but may

  18. Inappropriate prescribing in the elderly.

    LENUS (Irish Health Repository)

    Gallagher, P

    2012-02-03

    BACKGROUND AND OBJECTIVE: Drug therapy is necessary to treat acute illness, maintain current health and prevent further decline. However, optimizing drug therapy for older patients is challenging and sometimes, drug therapy can do more harm than good. Drug utilization review tools can highlight instances of potentially inappropriate prescribing to those involved in elderly pharmacotherapy, i.e. doctors, nurses and pharmacists. We aim to provide a review of the literature on potentially inappropriate prescribing in the elderly and also to review the explicit criteria that have been designed to detect potentially inappropriate prescribing in the elderly. METHODS: We performed an electronic search of the PUBMED database for articles published between 1991 and 2006 and a manual search through major journals for articles referenced in those located through PUBMED. Search terms were elderly, inappropriate prescribing, prescriptions, prevalence, Beers criteria, health outcomes and Europe. RESULTS AND DISCUSSION: Prescription of potentially inappropriate medications to older people is highly prevalent in the United States and Europe, ranging from 12% in community-dwelling elderly to 40% in nursing home residents. Inappropriate prescribing is associated with adverse drug events. Limited data exists on health outcomes from use of inappropriate medications. There are no prospective randomized controlled studies that test the tangible clinical benefit to patients of using drug utilization review tools. Existing drug utilization review tools have been designed on the basis of North American and Canadian drug formularies and may not be appropriate for use in European countries because of the differences in national drug formularies and prescribing attitudes. CONCLUSION: Given the high prevalence of inappropriate prescribing despite the widespread use of drug-utilization review tools, prospective randomized controlled trials are necessary to identify useful interventions. Drug

  19. ESR dating of the fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2005-01-01

    We carried out ESR dating of fault rocks collected near the nuclear reactor. The Upcheon fault zone is exposed close to the Ulzin nuclear reactor. The space-time pattern of fault activity on the Upcheon fault deduced from ESR dating of fault gouge can be summarised as follows: this fault zone was reactivated between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 2 Ma, 1.5 Ma and 1 Ma ago. After those movements, the Upcheon fault was reactivated between Cretaceous sandstone and the fault breccia zone about 800 ka ago. This fault zone was reactivated again between fault breccia derived from Cretaceous sandstone and Tertiary volcanic sedimentary rocks about 650 ka and after 125 ka ago. These data suggest that the long-term (200-500 k.y.) cyclic fault activity of the Upcheon fault zone continued into the Pleistocene. In the Ulzin area, ESR dates from the NW and EW trend faults range from 800 ka to 600 ka; NE and EW trend faults were reactivated between about 200 ka and 300 ka ago. On the other hand, ESR dates of the NS trend fault are about 400 ka and 50 ka. Results of this research suggest that fault activity near the Ulzin nuclear reactor continued into the Pleistocene. One ESR date near the Youngkwang nuclear reactor is 200 ka.

  20. Deformation around basin scale normal faults

    International Nuclear Information System (INIS)

    Spahic, D.

    2010-01-01

    Faults in the Earth's crust occur over a large range of scales, from the microscale through mesoscopic to large basin-scale faults. Frequently, the deformation associated with faulting is not limited to the fault plane alone, but combines with continuous near-field deformation in the wall rock, a phenomenon that is generally called fault drag. The correct interpretation and recognition of fault drag is fundamental for the reconstruction of the fault history and the determination of fault kinematics, as well as for prediction in areas of limited exposure or beyond comprehensive seismic resolution. Based on fault analyses derived from 3D visualization of natural examples of fault drag, the importance of fault geometry for the deformation of marker horizons around faults is investigated. The complex 3D structural models presented here are based on a combination of geophysical datasets and geological fieldwork. For an outcrop-scale example of fault drag in the hanging wall of a normal fault, located at St. Margarethen, Burgenland, Austria, data from Ground Penetrating Radar (GPR) measurements, detailed mapping and terrestrial laser scanning were used to construct a high-resolution structural model of the fault plane, the deformed marker horizons and associated secondary faults. In order to obtain geometrical information about the largely unexposed master fault surface, a standard listric balancing dip domain technique was employed. The results indicate that a listric shape can be excluded for this normal fault, as the constructed fault has a geologically meaningless shape cutting upsection into the sedimentary strata. This kinematic modeling result is additionally supported by the observation of deformed horizons in the footwall of the structure. Alternatively, a planar fault model with reverse drag of markers in the hanging wall and footwall is proposed. A second part of this thesis investigates a large scale normal fault

  1. Shipborne Magnetic Survey of San Pablo Bay and Implications on the Hayward-Rodgers Creek Fault Junction

    Science.gov (United States)

    Ponce, D. A.; Athens, N. D.; Denton, K.

    2012-12-01

    A shipborne magnetic survey of San Pablo Bay reveals a steep magnetic gradient as well as several prominent magnetic anomalies along the offshore extension of the Hayward Fault. The Hayward Fault enters San Pablo Bay at Pinole Point and potentially extends beneath San Pablo Bay for 15 km. About 1,000 line-km of shipborne magnetometer data were collected in San Pablo Bay along approximately north-east and north-west trending traverses. Shiptrack lines were spaced 200 m apart in a N55°E direction and tie-lines were spaced 500 and 1,000 m apart in a N145°E direction. Magnetometer and Global Positioning System (GPS) data were collected simultaneously at one-second intervals using a Geometrics G858 cesium vapor magnetometer with the sensor attached to a nonmagnetic pole extended about 2 m over the bow. Diurnal variations of the Earth's magnetic field were recorded at a ground magnetic base station and shipborne data were corrected for diurnal variations, the International Geomagnetic Reference Field, cultural noise, heading errors, and leveling errors. The heading correction applied to the shipborne magnetic data accounts for a systematic shift in the magnetic readings due to the magnetic field produced by the boat and the orientation of the boat. The heading correction was determined by traversing several shiptrack lines in various azimuths in opposite directions. Magnetic measurements off the main survey lines (e.g., turns) were removed from the survey. After applying the heading correction, crossing values, or the differences in values where two survey lines intersect, were compared and the survey was leveled. Shipborne magnetic data reveal a prominent magnetic anomaly immediately offshore of Point Pinole that probably reflects ultramafic rocks (e.g. serpentinite), similar to those exposed in the northern part of the onshore Hayward Fault. Further to the northwest, shipborne magnetic data enhance two prominent aeromagnetic anomalies along the Hayward Fault in the

  2. How do normal faults grow?

    OpenAIRE

    Blækkan, Ingvild; Bell, Rebecca; Rotevatn, Atle; Jackson, Christopher; Tvedt, Anette

    2018-01-01

    Faults grow via a sympathetic increase in their displacement and length (isolated fault model), or by rapid length establishment and subsequent displacement accrual (constant-length fault model). To test the significance and applicability of these two models, we use time-series displacement (D) and length (L) data extracted for faults from nature and experiments. We document a range of fault behaviours, from sympathetic D-L fault growth (isolated growth) to sub-vertical D-L growth trajectorie...

  3. New construction of quantum error-avoiding codes via group representation of quantum stabilizer codes

    Energy Technology Data Exchange (ETDEWEB)

    Xiao, Hailin [Wenzhou University, College of Physics and Electronic Information Engineering, Wenzhou (China); Southeast University, National Mobile Communications Research Laboratory, Nanjing (China); Guilin University of Electronic Technology, Ministry of Education, Key Laboratory of Cognitive Radio and Information Processing, Guilin (China); Zhang, Zhongshan [University of Science and Technology Beijing, Beijing Engineering and Technology Research Center for Convergence Networks and Ubiquitous Services, Beijing (China); Chronopoulos, Anthony Theodore [University of Texas at San Antonio, Department of Computer Science, San Antonio, TX (United States)

    2017-10-15

    In quantum computing, nice error bases, as a generalization of the Pauli basis, were introduced by Knill. These bases are known to be projective representations of finite groups. In this paper, we propose a group representation approach to the study of quantum stabilizer codes. We utilize this approach to define decoherence-free subspaces (DFSs). Unlike previous studies of DFSs, this type of DFS does not involve any spatial symmetry assumptions on the system-environment interaction. Thus, it can be used to construct quantum error-avoiding codes (QEACs) that are automatically fault tolerant. We also propose a new simple construction of QEACs and subsequently develop several classes of QEACs. Finally, we present numerical simulation results, plotting the logical error rate against the physical error rate, to show the fidelity performance of these QEACs. Our study demonstrates that DFS-based QEACs are capable of providing a generalized and unified framework for error-avoiding methods. (orig.)

  4. Fault diagnosis of power transformer based on fault-tree analysis (FTA)

    Science.gov (United States)

    Wang, Yongliang; Li, Xiaoqiang; Ma, Jianwei; Li, SuoYu

    2017-05-01

    Power transformers are important equipment in power plants and substations and a key hub of the power transmission and distribution link; their performance directly affects the quality, reliability and stability of the power system. This paper first groups power transformer faults into five categories according to fault type, and then divides fault development into three stages along the time dimension. Routine DGA analysis and infrared diagnostic criteria are used to establish the running state of the power transformer. Finally, according to the needs of power transformer fault diagnosis, a dendritic fault tree for the power transformer is constructed by stepwise refinement from the general to the specific.

  5. Large earthquakes and creeping faults

    Science.gov (United States)

    Harris, Ruth A.

    2017-01-01

    Faults are ubiquitous throughout the Earth's crust. The majority are silent for decades to centuries, until they suddenly rupture and produce earthquakes. With a focus on shallow continental active-tectonic regions, this paper reviews a subset of faults that have a different behavior. These unusual faults slowly creep for long periods of time and produce many small earthquakes. The presence of fault creep and the related microseismicity helps illuminate faults that might not otherwise be located in fine detail, but there is also the question of how creeping faults contribute to seismic hazard. It appears that well-recorded creeping fault earthquakes of up to magnitude 6.6 that have occurred in shallow continental regions produce similar fault-surface rupture areas and similar peak ground shaking as their locked fault counterparts of the same earthquake magnitude. The behavior of much larger earthquakes on shallow creeping continental faults is less well known, because there is a dearth of comprehensive observations. Computational simulations provide an opportunity to fill the gaps in our understanding, particularly of the dynamic processes that occur during large earthquake rupture and arrest.

  6. A novel KFCM based fault diagnosis method for unknown faults in satellite reaction wheels.

    Science.gov (United States)

    Hu, Di; Sarosh, Ali; Dong, Yun-Feng

    2012-03-01

    Reaction wheels are among the most critical components of the satellite attitude control system; correct diagnosis of their faults is therefore essential for efficient operation of these spacecraft. Known faults in any of the subsystems are often diagnosed by supervised learning algorithms; however, this approach fails when a new or unknown fault occurs. In such cases an unsupervised learning algorithm becomes essential for obtaining the correct diagnosis. Kernel Fuzzy C-Means (KFCM) is one such unsupervised algorithm, although it has its own limitations; in this paper a novel method is proposed for conditioning the KFCM method (C-KFCM) so that it can be used effectively for diagnosis of both known and unknown faults, as in satellite reaction wheels. The C-KFCM approach involves determining exact class centers from the data of known faults, so that a discrete number of fault classes is fixed at the start. Similarity parameters are then derived for each fault data point, and depending on the similarity threshold each data point is assigned a class label. High-similarity points fall into one of the 'known-fault' classes, while low-similarity points are labeled as 'unknown-faults'. Simulation results show that, compared to a supervised algorithm such as a neural network, the C-KFCM method can effectively cluster historical fault data (as in reaction wheels) and diagnose faults with an accuracy of more than 91%. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
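
    The thresholding idea reduces to a few lines. The sketch below is an illustrative simplification, not the authors' C-KFCM; the class names, centre coordinates, and threshold are hypothetical:

    import numpy as np

    def gaussian_kernel(x, y, sigma=1.0):
        return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

    def classify(point, centers, threshold=0.5):
        # Kernel similarity to each known fault-class centre
        sims = {label: gaussian_kernel(point, c) for label, c in centers.items()}
        label, best = max(sims.items(), key=lambda kv: kv[1])
        # Low similarity to every known class -> label as an unknown fault
        return label if best >= threshold else "unknown-fault"

    # Class centres from historical (known) fault data -- hypothetical values
    centers = {"bus-fault": np.array([1.0, 0.2]),
               "motor-current-fault": np.array([0.1, 1.1])}

    print(classify(np.array([0.9, 0.3]), centers))   # -> bus-fault
    print(classify(np.array([3.0, 3.0]), centers))   # -> unknown-fault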

  7. Disjointness of Stabilizer Codes and Limitations on Fault-Tolerant Logical Gates

    Science.gov (United States)

    Jochym-O'Connor, Tomas; Kubica, Aleksander; Yoder, Theodore J.

    2018-04-01

    Stabilizer codes are among the most successful quantum error-correcting codes, yet they have important limitations on their ability to fault tolerantly compute. Here, we introduce a new quantity, the disjointness of the stabilizer code, which, roughly speaking, is the number of mostly nonoverlapping representations of any given nontrivial logical Pauli operator. The notion of disjointness proves useful in limiting transversal gates on any error-detecting stabilizer code to a finite level of the Clifford hierarchy. For code families, we can similarly restrict logical operators implemented by constant-depth circuits. For instance, we show that it is impossible, with a constant-depth but possibly geometrically nonlocal circuit, to implement a logical non-Clifford gate on the standard two-dimensional surface code.

  8. A quantum byte with 10{sup -4} crosstalk for fault-tolerant quantum computing

    Energy Technology Data Exchange (ETDEWEB)

    Piltz, Christian; Sriarunothai, Theeraphot; Varon, Andres; Wunderlich, Christof [Department Physik, Universitaet Siegen, 57068 Siegen (Germany)

    2014-07-01

    A prerequisite for fault-tolerant and thus scalable operation of a quantum computer is the use of quantum error correction protocols. Such protocols come with a maximum tolerable gate error, and there is consensus that an error of order 10{sup -4} is an important threshold. This threshold was already breached for single-qubit gates with trapped ions using microwave radiation. However, crosstalk - the error that is induced in other qubits within a quantum register when one qubit (or a subset of qubits) is coherently manipulated - still prevents the realization of a scalable quantum computer: the application of a quantum gate, even if the gate error itself is low, induces errors in other qubits within the quantum register. We present an experimental study using quantum registers consisting of microwave-driven trapped {sup 171}Yb{sup +} ions in a static magnetic gradient. We demonstrate a quantum register of three qubits with a next-neighbour crosstalk of 6(1) · 10{sup -5} that for the first time breaches the error correction threshold. Furthermore, we present a quantum register of eight qubits - a quantum byte - with a next-neighbour crosstalk error better than 2.9(4) · 10{sup -4}. Importantly, our results are obtained with thermally excited ions far above the motional ground state.

  9. Identifying Conventionally Sub-Seismic Faults in Polygonal Fault Systems

    Science.gov (United States)

    Fry, C.; Dix, J.

    2017-12-01

    Polygonal Fault Systems (PFS) are prevalent in hydrocarbon basins globally and represent potential fluid pathways. However, the characterization of these pathways is subject to the limitations of conventional 3D seismic imaging, which is only capable of resolving features on a decametre scale horizontally and a metre scale vertically. While outcrop and core examples can identify smaller features, they are limited by the extent of the exposures. The disparity between these scales allows smaller faults to be lost in a resolution gap, meaning potential pathways may be left unseen. Here the focus is upon PFS within the London Clay, a common bedrock that is tunnelled into and bears construction foundations for much of London. It is a continuation of the Ieper Clay, where PFS were first identified, and it approaches the seafloor within the Outer Thames Estuary. This allows for direct analysis of PFS surface expressions, via high-resolution 1 m bathymetric imaging in combination with high-resolution seismic imaging. Using these datasets, surface expressions of over 1500 faults within the London Clay have been identified, with the smallest fault measuring 12 m and the largest 612 m in length. The displacements over these faults, established from both bathymetric and seismic imaging, range from 30 cm to a couple of metres, scales that would typically be sub-seismic for conventional basin seismic imaging. The orientations and dimensions of the faults within this network have been directly compared to 3D seismic data of the Ieper Clay from the offshore Dutch sector, where it lies approximately 1 km below the seafloor. These have typical PFS attributes, with lengths of hundreds of metres to kilometres and throws of tens of metres, a magnitude larger than those identified in the Outer Thames Estuary. The similar orientations and polygonal patterns within both locations indicate that the smaller faults exist within typical PFS structure but are

  10. Influence of fault steps on rupture termination of strike-slip earthquake faults

    Science.gov (United States)

    Li, Zhengfang; Zhou, Bengang

    2018-03-01

    A statistical analysis was completed on the rupture data of 29 historical strike-slip earthquakes across the world. The purpose of this study is to examine the effects of fault steps on the rupture termination of these events. The results show good correlations between the type and length of steps with the seismic rupture and a poor correlation between the step number and seismic rupture. For different magnitude intervals, the smallest widths of the fault steps (Lt) that can terminate the rupture propagation are variable: Lt = 3 km for Ms 6.5-6.9, Lt = 4 km for Ms 7.0-7.5, Lt = 6 km for Ms 7.5-8.0, and Lt = 8 km for Ms 8.0-8.5. The dilational fault step is easier to rupture through than the compressional fault step. The smallest width of the fault step for rupture arrest can be used as an indicator to judge the scale of the rupture termination of seismic faults. This is helpful for research on fault segmentation, as well as estimating the magnitude of potential earthquakes, and is thus of significance for the assessment of seismic risks.

  11. Correcting errors in a quantum gate with pushed ions via optimal control

    International Nuclear Information System (INIS)

    Poulsen, Uffe V.; Sklarz, Shlomo; Tannor, David; Calarco, Tommaso

    2010-01-01

    We analyze in detail the so-called pushing gate for trapped ions, introducing a time-dependent harmonic approximation for the external motion. We show how to extract the average fidelity for the gate from the resulting semiclassical simulations. We characterize and quantify precisely all types of errors coming from the quantum dynamics and reveal that slight nonlinearities in the ion-pushing force can have a dramatic effect on the adiabaticity of gate operation. By means of quantum optimal control techniques, we show how to suppress each of the resulting gate errors in order to reach a high fidelity compatible with scalable fault-tolerant quantum computing.

  12. Observer-based distributed adaptive fault-tolerant containment control of multi-agent systems with general linear dynamics.

    Science.gov (United States)

    Ye, Dan; Chen, Mengmeng; Li, Kui

    2017-11-01

    In this paper, we consider the distributed containment control problem of multi-agent systems with actuator bias faults, based on an observer method. The objective is to drive the followers into the convex hull spanned by the dynamic leaders, whose input is unknown but bounded. By constructing an observer to estimate the states and bias faults, an effective distributed adaptive fault-tolerant controller is developed. Different from the traditional method, an auxiliary controller gain is designed to deal with the unknown inputs and bias faults together. Moreover, the coupling gain can be adjusted online through the adaptive mechanism without using global information. Furthermore, the proposed control protocol guarantees that all signals of the closed-loop systems are bounded and that all the followers converge to the convex hull, formed by the dynamic leaders, with bounded residual errors. Finally, a decoupled linearized longitudinal motion model of the F-18 aircraft is used to demonstrate the effectiveness. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  13. Fault isolation through no-overhead link level CRC

    Science.gov (United States)

    Chen, Dong; Coteus, Paul W.; Gara, Alan G.

    2007-04-24

    A fault isolation technique for checking the accuracy of data packets transmitted between nodes of a parallel processor. An independent CRC is kept of all data sent from one processor to another, and of all data received from one processor by another. At the end of each checkpoint, the CRCs are compared. If they do not match, there was an error. The CRCs may be cleared and restarted at each checkpoint. In the preferred embodiment, the basic functionality is to calculate a CRC of all packet data that has been successfully transmitted across a given link. This CRC is done on both ends of the link, thereby allowing an independent check on all data believed to have been correctly transmitted. Preferably, all links have this CRC coverage, and the CRC used in this link-level check is different from that used in the packet transfer protocol. This independent check, if successfully passed, virtually eliminates the possibility that any data errors were missed during the previous transfer period.
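
    The checkpoint scheme is easy to emulate in software. A minimal sketch (illustrative; the invention describes a no-overhead hardware CRC) with zlib's CRC-32 standing in for the link-level polynomial:

    import zlib

    class LinkEnd:
        def __init__(self):
            self.crc = 0
        def account(self, payload: bytes):
            # Running CRC over all packet data transferred so far on this link
            self.crc = zlib.crc32(payload, self.crc)
        def checkpoint(self):
            c, self.crc = self.crc, 0          # compare-and-clear at each checkpoint
            return c

    sender, receiver = LinkEnd(), LinkEnd()
    for pkt in [b"packet-1", b"packet-2", b"packet-3"]:
        sender.account(pkt)
        receiver.account(pkt)                  # flip a bit here to see a mismatch

    assert sender.checkpoint() == receiver.checkpoint(), "undetected link error!"
    print("checkpoint OK: no data errors in the previous transfer period")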

  14. Fault diagnosis of sensor networked structures with multiple faults using a virtual beam based approach

    Science.gov (United States)

    Wang, H.; Jing, X. J.

    2017-07-01

    This paper presents a virtual beam based approach suitable for conducting diagnosis of multiple faults in complex structures with limited prior knowledge of the faults involved. The "virtual beam", a recently-proposed concept for fault detection in complex structures, is applied, which consists of a chain of sensors representing a vibration energy transmission path embedded in the complex structure. Statistical tests and adaptive threshold are particularly adopted for fault detection due to limited prior knowledge of normal operational conditions and fault conditions. To isolate the multiple faults within a specific structure or substructure of a more complex one, a 'biased running' strategy is developed and embedded within the bacterial-based optimization method to construct effective virtual beams and thus to improve the accuracy of localization. The proposed method is easy and efficient to implement for multiple fault localization with limited prior knowledge of normal conditions and faults. With extensive experimental results, it is validated that the proposed method can localize both single fault and multiple faults more effectively than the classical trust index subtract on negative add on positive (TI-SNAP) method.

  15. On the evaluation of the sensitivity of SRAM-Based FPGA to errors due to natural radiation environment

    International Nuclear Information System (INIS)

    Bocquillon, Alexandre

    2009-01-01

    This work aims at designing a test methodology to analyze the effect of natural radiation on SRAM-based FPGAs. The study of likely errors due to single or multiple events occurring in the configuration memory is based on fault-injection experiments performed with laser devices. It relies on a description of the scientific background, of the complex architecture of SRAM-based FPGAs, and of the usual testing apparatus. Fault-injection experiments with laser were conducted on several classes of components in order to perform static tests of the configuration memory and identify the links with the application; they show the organization and sensitivity of SRAM configuration cells. Criticality criteria for configuration bits were specified following dynamic tests in a proton accelerator, with regard to their impact on the application. From this classification, a predictive tool for critical error rate estimation was developed. (author) [fr]

  16. Study of fault diagnosis software design for complex system based on fault tree

    International Nuclear Information System (INIS)

    Yuan Run; Li Yazhou; Wang Jianye; Hu Liqin; Wang Jiaqun; Wu Yican

    2012-01-01

    Complex systems always have high-level reliability and safety requirements, and so does their diagnosis. Since a great deal of fault tree models are acquired during the design and operation phases, a fault diagnosis method that combines fault tree analysis with knowledge-based technology is proposed. A prototype of the fault diagnosis software has been realized and applied to a mobile LIDAR system. (authors)

  17. Illite authigenesis during faulting and fluid flow - a microstructural study of fault rocks

    Science.gov (United States)

    Scheiber, Thomas; Viola, Giulio; van der Lelij, Roelant; Margreth, Annina

    2017-04-01

    Authigenic illite can form synkinematically during slip events along brittle faults. In addition, it can crystallize as a result of fluid flow and associated mineral alteration processes in hydrothermal environments. K-Ar dating of illite-bearing fault rocks has recently become a common tool to constrain the timing of fault activity. However, to fully interpret the derived age spectra in terms of deformation ages, a careful investigation of the fault deformation history and architecture at the outcrop scale, ideally followed by a detailed mineralogical analysis of the illite-forming processes at the micro-scale, is indispensable. Here we integrate this methodological approach by presenting microstructural observations from the host rock immediately adjacent to dated fault gouges from two sites located in the Rolvsnes granodiorite (Bømlo, western Norway). This granodiorite experienced multiple episodes of brittle faulting and fluid-induced alteration, starting in the Mid Ordovician (Scheiber et al., 2016). Fault gouges are predominantly associated with normal faults accommodating mainly E-W extension. K-Ar dating of illites separated from representative fault gouges constrains deformation and alteration due to fluid ingress from the Permian to the Cretaceous, with a cluster of ages for the finest fractions in the Middle Jurassic. At site one, high-resolution thin-section structural mapping reveals a complex deformation history characterized by several coexisting types of calcite veins and seven different generations of cataclasite, two of which contain a significant amount of authigenic and undoubtedly deformation-related illite. At site two, fluid ingress along and adjoining the fault core induced pervasive alteration of the host granodiorite. Quartz is crosscut by calcite veinlets, whereas plagioclase, K-feldspar and biotite are almost completely replaced by the main alteration products kaolin, quartz and illite. Illite-bearing micro-domains were physically separated by

  18. Frecuencia de errores de los pacientes con su medicación Frequency of medication errors by patients

    Directory of Open Access Journals (Sweden)

    José Joaquín Mira

    2012-02-01

    Full Text Available OBJECTIVE: To analyze the frequency of medication errors committed and reported by patients. METHODS: Descriptive study based on a telephone survey of a random sample of adult patients from the primary care level of the Spanish public health care system. A total of 1,247 patients responded (75% response rate); 63% were women and 29% were older than 70 years. RESULTS: While 37 patients (3%; 95% CI: 2-4) experienced complications associated with medication in the course of treatment, 241 (19.4%; 95% CI: 17-21) reported having made some mistake with their medication. A shorter consultation time (P < 0.01) and a worse assessment of the information provided by the physician (P < 0.01) were associated with the fact that during pharmacy dispensing the patient was told that the prescribed treatment was not appropriate. CONCLUSIONS: In addition to the known risks of an adverse event due to a health intervention resulting from a system or practitioner error, there are risks associated with patient errors in the self-administration of medication. Patients who were unsatisfied with the information provided by the physician reported a greater number of errors.

  19. Factors Influencing Patterns of Antibiotic Prescribing in Primary Health Care Centers in the Savodjbolaq District During 2012-13: A Cross-Sectional Study

    Directory of Open Access Journals (Sweden)

    Gh. Karimi

    2015-08-01

    Full Text Available Background and Objective: Inappropriate prescribing of antibiotics is one of the main reasons for antibiotic resistance in the world, placing increasing pressure and cost on health systems and on the household economy. The present study aimed to determine the pattern of antibiotic prescribing and its related factors in health centers. Materials and Methods: In a cross-sectional design, 1,068 random prescriptions of General Physicians (GPs) working in Savodjbolaq Health Centers were studied. Variables included age and gender of patients and physicians, frequency of antibiotic prescribing, rate of combination therapy, methods of prescribing, type of patient's insurance booklet, and season. Statistical analysis was performed with SPSS version 18 software. Results: More than half of the prescriptions (56.8%) included at least one antibiotic. One in every four prescriptions involved some form of antibiotic combination therapy. According to the scientific criteria, 57.1% of antibiotics were prescribed inappropriately; among these criteria, the highest error rate, 67.72%, related to doses per day. The frequency of antibiotic prescribing differed significantly by age, gender, type of patient's insurance booklet, physician experience, and season (p<0.05). Conclusions: Combination therapy and unscientific prescribing of antibiotics for youths are a concern for public health and the household economy. Review of protocols and methods of supervision, changes in the purchasing of medical services, design and implementation of operational and targeted educational interventions, and training of physicians with emphasis on the logical aspects of antibiotic prescription and prescribing skills are recommended.

  20. How fault evolution changes strain partitioning and fault slip rates in Southern California: Results from geodynamic modeling

    Science.gov (United States)

    Ye, Jiyang; Liu, Mian

    2017-08-01

    In Southern California, the Pacific-North America relative plate motion is accommodated by the complex southern San Andreas Fault system, which includes many young faults. These young faults and their impact on strain partitioning and fault slip rates are important for understanding the evolution of this plate boundary zone and assessing earthquake hazard in Southern California. Using a three-dimensional viscoelastoplastic finite element model, we have investigated how this plate boundary fault system has evolved to accommodate the relative plate motion in Southern California. Our results show that when the plate boundary faults are not optimally configured to accommodate the relative plate motion, strain is localized in places where new faults would initiate to improve the mechanical efficiency of the fault system. In particular, the Eastern California Shear Zone, the San Jacinto Fault, the Elsinore Fault, and the offshore dextral faults all developed in places of highly localized strain. These younger faults compensate for the reduced fault slip on the San Andreas Fault proper caused by the Big Bend, a major restraining bend. The evolution of the fault system changes the apportionment of fault slip rates over time, which may explain some of the slip-rate discrepancy between geological and geodetic measurements in Southern California. For the present fault configuration, our model predicts localized strain in the western Transverse Ranges and along the dextral faults across the Mojave Desert, where numerous damaging earthquakes occurred in recent years.

  1. Combinatorial Optimization Algorithms for Dynamic Multiple Fault Diagnosis in Automotive and Aerospace Applications

    Science.gov (United States)

    Kodali, Anuradha

    facility, respectively. The set-covering matrix encapsulates the relationship among the rows (tests or demand points) and columns (faults or locations) of the system at each time. By relaxing the coupling constraints using Lagrange multipliers, the DSC problem can be decoupled into independent subproblems, one for each column. Each subproblem is solved using the Viterbi decoding algorithm, and a primal feasible solution is constructed by modifying the Viterbi solutions via a heuristic. The proposed Viterbi-Lagrangian relaxation algorithm (VLRA) provides a measure of suboptimality via an approximate duality gap. As a major practical extension of the above problem, we also consider the problem of diagnosing faults with delayed test outcomes, termed delay-dynamic set-covering (DDSC), and experiment with real-world problems that exhibit masking faults. Also, we present simulation results on OR-library datasets (set-covering formulations are predominantly validated on these matrices in the literature), posed as facility location problems. Finally, we implement these algorithms to solve problems in aerospace and automotive applications. Firstly, we address the diagnostic ambiguity problem in aerospace and automotive applications by developing a dynamic fusion framework that includes dynamic multiple fault diagnosis algorithms. This improves the correct fault isolation rate, while minimizing the false alarm rates, by considering multiple faults instead of the traditional data-driven techniques based on single fault (class)-single epoch (static) assumption. The dynamic fusion problem is formulated as a maximum a posteriori decision problem of inferring the fault sequence based on uncertain outcomes of multiple binary classifiers over time. The fusion process involves three steps: the first step transforms the multi-class problem into dichotomies using error correcting output codes (ECOC), thereby solving the concomitant binary classification problems; the second step fuses the
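
    To make the decomposition concrete: each relaxed subproblem is a shortest-path decode of one fault's absent/present sequence over time. A generic two-state Viterbi sketch (not from the dissertation; the per-epoch costs, which in the algorithm would come from Lagrange-adjusted test outcomes, are made up here):

    def viterbi_2state(state_cost, switch_cost=0.5):
        """Min-cost 0/1 state sequence; state_cost[t][s] = cost of state s at epoch t."""
        T = len(state_cost)
        cost = list(state_cost[0])             # best cost of paths ending in state 0/1
        back = [[0, 0] for _ in range(T)]
        for t in range(1, T):
            new = [0.0, 0.0]
            for s in (0, 1):
                cands = [cost[p] + (switch_cost if p != s else 0.0) for p in (0, 1)]
                back[t][s] = cands.index(min(cands))
                new[s] = min(cands) + state_cost[t][s]
            cost = new
        s = cost.index(min(cost))              # backtrack the optimal sequence
        seq = [s]
        for t in range(T - 1, 0, -1):
            s = back[t][s]
            seq.append(s)
        return seq[::-1]

    # Hypothetical Lagrange-adjusted costs (fault-absent, fault-present) per epoch
    costs = [(0.1, 1.0), (0.2, 0.9), (1.5, 0.3), (1.4, 0.2), (0.2, 1.1)]
    print(viterbi_2state(costs))               # -> [0, 0, 1, 1, 0]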

  2. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2003-02-01

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene
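
    The age equation described here can be written explicitly. The numbers below are purely illustrative (chosen to match the order of magnitude of the Oyong dates, not taken from the study):

    \[ t = \frac{D_E}{\dot D}, \qquad \text{e.g.}\quad t = \frac{925\,\mathrm{Gy}}{2.5\,\mathrm{Gy\,ka^{-1}}} = 370\,\mathrm{ka}, \]

    where \(D_E\) is the equivalent dose needed to produce the observed ESR signal and \(\dot D\) is the dose rate from the surrounding rocks.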

  3. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2003-02-15

    Past movement on faults can be dated by measurement of the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs. grain size shows a plateau for grains below critical size; these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected near the Gori nuclear reactor. Most of the ESR signals of fault rocks collected from the basement are saturated. This indicates that the last movement of the faults had occurred before the Quaternary period. However, ESR dates from the Oyong fault zone range from 370 to 310 ka. Results of this research suggest that long-term cyclic fault activity of the Oyong fault zone continued into the Pleistocene.

  4. Fault-tolerant architecture: Evaluation methodology

    International Nuclear Information System (INIS)

    Battle, R.E.; Kisner, R.A.

    1992-08-01

    The design and reliability of four fault-tolerant architectures that may be used in nuclear power plant control systems were evaluated. Two architectures are variations of triple-modular-redundant (TMR) systems, and two are variations of dual redundant systems. The evaluation includes a review of methods of implementing fault-tolerant control, the importance of automatic recovery from failures, methods of self-testing diagnostics, block diagrams of typical fault-tolerant controllers, review of fault-tolerant controllers operating in nuclear power plants, and fault tree reliability analyses of fault-tolerant systems
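
    The TMR variants evaluated here can be compared against a single module with the standard 2-of-3 voting formula, R_TMR = 3R^2 - 2R^3. A minimal sketch with a hypothetical failure rate (not the report's numbers):

    import math

    def r_single(lam, t):
        return math.exp(-lam * t)              # exponential failure law

    def r_tmr(r):
        return 3 * r**2 - 2 * r**3             # at least 2 of 3 modules working

    lam = 1e-4                                 # failures per hour (hypothetical)
    for hours in (100, 1000, 10000):
        r = r_single(lam, hours)
        print(f"t={hours:>6} h  single={r:.4f}  TMR={r_tmr(r):.4f}")
    # TMR beats a single module only while R > 0.5 and is worse below it --
    # one reason automatic recovery from failures matters for long mission times.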

  5. [Monitoring medication errors in an internal medicine service].

    Science.gov (United States)

    Smith, Ann-Loren M; Ruiz, Inés A; Jirón, Marcela A

    2014-01-01

    Patients admitted to internal medicine services receive multiple drugs and thus are at risk of medication errors. To determine the frequency of medication errors (ME) among patients admitted to an internal medicine service of a high-complexity hospital, a prospective observational study was conducted in 225 patients. Each stage of the drug utilization system (prescription, transcription, dispensing, preparation and administration) was directly observed by trained pharmacists not related to hospital staff during three months. ME were described and categorized according to the National Coordinating Council for Medication Error Reporting and Prevention. In each stage of medication use, the frequency of ME and their characteristics were determined. A total of 454 drugs were prescribed to the studied patients. In 138 (30.4%) indications at least one ME occurred, involving 67 (29.8%) patients. Twenty-four percent of detected ME occurred during administration, mainly due to wrong time schedules. Anticoagulants were the therapeutic group with the highest occurrence of ME. At least one ME occurred in approximately one third of the patients studied, especially during the administration stage. These errors could affect medication safety and prevent achievement of therapeutic goals. Strategies to improve the quality and safe use of medications can be implemented using this information.

  6. ROCK FRACTURES NEAR FAULTS: SPECIFIC FEATURES OF STRUCTURAL‐PARAGENETIC ANALYSIS

    Directory of Open Access Journals (Sweden)

    Yu. P. Burzunova

    2017-01-01

    Full Text Available The new approach to structural-paragenetic analysis of near-fault fractures [Seminsky, 2014, 2015] and specific features of its application are discussed. This approach was tested in studies of fracturing in West Pribaikalie and Central Mongolia. We give some recommendations concerning collection, selection and initial processing of the data on fractures and faults. The analysis technique is briefly described, and its distinctive details are specified. Under the new approach, we compare systems of natural fractures with standard joint sets. By analysing mass measurements of the orientations of joint sets in a fault zone, it becomes possible to reveal the characteristics of this fault zone, such as its structure, morphogenetic type, etc. The comparative analysis is based on the identification of the main fracture paragenesis near the faults, represented by a triplet of mutually perpendicular joint sets. The technique uses a qualitative approach to establish the rank hierarchy of fractures and stress fields on the basis of genetic subordination. We collect and analyse data on tectonic fractures identified from a number of indicators, the main of which are the geometric structure of the fracture network (systematic or chaotic) and the shear type of fractures. The new technique can be applied to analyse other genetic types of fractures (primary, hypergenic), provided that tectonic stresses were significantly involved in fracturing, as evidenced by the corresponding indicators. Methods for conducting geological and structural observations are uniform for all sites and points, and increasing the number of observation points provides for a more effective use of the new technique. In our paper, we give specific parameters for constructing circular fracture diagrams. All the maximums in a diagram are involved in the analysis for comparison with the standard patterns. Errors caused by random coincidence are minimized.

  7. Antimalarial prescribing patterns in state hospitals and selected ...

    African Journals Online (AJOL)

    slowdown of progression to resistance could be achieved by improving prescribing practice, drug quality, and patient compliance. Objective: To determine the antimalarial prescribing pattern and to assess rational prescribing of chloroquine by prescribers in government hospitals and parastatals in Lagos State. Methods: ...

  8. HOT Faults", Fault Organization, and the Occurrence of the Largest Earthquakes

    Science.gov (United States)

    Carlson, J. M.; Hillers, G.; Archuleta, R. J.

    2006-12-01

    We apply the concept of "Highly Optimized Tolerance" (HOT) for the investigation of spatio-temporal seismicity evolution, in particular mechanisms associated with largest earthquakes. HOT provides a framework for investigating both qualitative and quantitative features of complex feedback systems that are far from equilibrium and punctuated by rare, catastrophic events. In HOT, robustness trade-offs lead to complexity and power laws in systems that are coupled to evolving environments. HOT was originally inspired by biology and engineering, where systems are internally very highly structured, through biological evolution or deliberate design, and perform in an optimum manner despite fluctuations in their surroundings. Though faults and fault systems are not designed in ways comparable to biological and engineered structures, feedback processes are responsible in a conceptually comparable way for the development, evolution and maintenance of younger fault structures and primary slip surfaces of mature faults, respectively. Hence, in geophysical applications the "optimization" approach is perhaps more aptly replaced by "organization", reflecting the distinction between HOT and random, disorganized configurations, and highlighting the importance of structured interdependencies that evolve via feedback among and between different spatial and temporal scales. Expressed in the terminology of the HOT concept, mature faults represent a configuration optimally organized for the release of strain energy; whereas immature, more heterogeneous fault networks represent intermittent, suboptimal systems that are regularized towards structural simplicity and the ability to generate large earthquakes more easily. We discuss fault structure and associated seismic response pattern within the HOT concept, and outline fundamental differences between this novel interpretation to more orthodox viewpoints like the criticality concept. The discussion is flanked by numerical simulations of a

  9. Introduction to prescribed fires in Southern ecosystems

    Science.gov (United States)

    Thomas A. Waldrop; Scott L. Goodrick

    2012-01-01

    This publication is a guide for resource managers on planning and executing prescribed burns in Southern forests and grasslands. It includes explanations of reasons for prescribed burning, environmental effects, weather, and techniques as well as general information on prescribed burning.

  10. Fault Analysis in Solar Photovoltaic Arrays

    Science.gov (United States)

    Zhao, Ye

    Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to increase reliability, efficiency and safety in PV systems. Conventional fault protection methods usually add fuses or circuit breakers in series with PV components. But these protection devices are only able to clear faults and isolate faulty circuits if they carry a large fault current. However, this research shows that faults in PV arrays may not be cleared by fuses under some fault scenarios, due to the current-limiting nature and non-linear output characteristics of PV arrays. First, this thesis introduces new simulation and analytic models that are suitable for fault analysis in PV arrays. Based on the simulation environment, this thesis studies a variety of typical faults in PV arrays, such as ground faults, line-line faults, and mismatch faults. The effect of a maximum power point tracker on fault current is discussed and shown to, at times, prevent the fault current protection devices to trip. A small-scale experimental PV benchmark system has been developed in Northeastern University to further validate the simulation conclusions. Additionally, this thesis examines two types of unique faults found in a PV array that have not been studied in the literature. One is a fault that occurs under low irradiance condition. The other is a fault evolution in a PV array during night-to-day transition. Our simulation and experimental results show that overcurrent protection devices are unable to clear the fault under "low irradiance" and "night-to-day transition". However, the overcurrent protection devices may work properly when the same PV fault occurs in daylight. As a result, a fault under "low irradiance" and "night-to-day transition" might be hidden in the PV array and become a potential hazard for system efficiency and reliability.
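
    The current-limiting point lends itself to a back-of-the-envelope check. The sketch below is illustrative, not the thesis code: a single-diode module model estimates the current backfed into a shorted string under full sun versus heavy overcast, against a hypothetical series-fuse rating (all parameters are made up):

    import math
    from scipy.optimize import brentq

    def module_current(v, g, isc_stc=8.0, i0=1e-9, n=1.3, ns=60, rs=0.3):
        """Solve the implicit single-diode equation for module current at voltage v."""
        iph = isc_stc * g / 1000.0             # photocurrent scales with irradiance
        vt = n * ns * 0.02585                  # thermal voltage of ns series cells
        f = lambda i: iph - i0 * (math.exp((v + i * rs) / vt) - 1.0) - i
        return brentq(f, -2 * isc_stc, 2 * isc_stc)

    n_parallel, fuse_rating = 4, 12.0          # hypothetical array and fuse (A)
    for g in (1000, 200):                      # full sun vs heavy overcast, W/m2
        i_fault = (n_parallel - 1) * module_current(0.0, g)   # backfeed into shorted string
        verdict = "blows" if i_fault > fuse_rating else "does NOT blow"
        print(f"G={g:>4} W/m2: backfed fault current ~ {i_fault:.1f} A -> fuse {verdict}")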

  11. Aftershocks illuminate the 2011 Mineral, Virginia, earthquake causative fault zone and nearby active faults

    Science.gov (United States)

    Horton, J. Wright; Shah, Anjana K.; McNamara, Daniel E.; Snyder, Stephen L.; Carter, Aina M

    2015-01-01

    Deployment of temporary seismic stations after the 2011 Mineral, Virginia (USA), earthquake produced a well-recorded aftershock sequence. The majority of aftershocks are in a tabular cluster that delineates the previously unknown Quail fault zone. Quail fault zone aftershocks range from ~3 to 8 km in depth and are in a 1-km-thick zone striking ~036° and dipping ~50°SE, consistent with a 028°, 50°SE main-shock nodal plane having mostly reverse slip. This cluster extends ~10 km along strike. The Quail fault zone projects to the surface in gneiss of the Ordovician Chopawamsic Formation just southeast of the Ordovician–Silurian Ellisville Granodiorite pluton tail. The following three clusters of shallow (<3 km) aftershocks illuminate other faults. (1) An elongate cluster of early aftershocks, ~10 km east of the Quail fault zone, extends 8 km from Fredericks Hall, strikes ~035°–039°, and appears to be roughly vertical. The Fredericks Hall fault may be a strand or splay of the older Lakeside fault zone, which to the south spans a width of several kilometers. (2) A cluster of later aftershocks ~3 km northeast of Cuckoo delineates a fault near the eastern contact of the Ordovician Quantico Formation. (3) An elongate cluster of late aftershocks ~1 km northwest of the Quail fault zone aftershock cluster delineates the northwest fault (described herein), which is temporally distinct, dips more steeply, and has a more northeastward strike. Some aftershock-illuminated faults coincide with preexisting units or structures evident from radiometric anomalies, suggesting tectonic inheritance or reactivation.

  12. Paleoseismicity of two historically quiescent faults in Australia: Implications for fault behavior in stable continental regions

    Science.gov (United States)

    Crone, A.J.; De Martini, P. M.; Machette, M.M.; Okumura, K.; Prescott, J.R.

    2003-01-01

    Paleoseismic studies of two historically aseismic Quaternary faults in Australia confirm that cratonic faults in stable continental regions (SCR) typically have a long-term behavior characterized by episodes of activity separated by quiescent intervals of at least 10,000 and commonly 100,000 years or more. Studies of the approximately 30-km-long Roopena fault in South Australia and the approximately 30-km-long Hyden fault in Western Australia document multiple Quaternary surface-faulting events that are unevenly spaced in time. The episodic clustering of events on cratonic SCR faults may be related to temporal fluctuations of fault-zone fluid pore pressures in a volume of strained crust. The long-term slip rate on cratonic SCR faults is extremely low, so the geomorphic expression of many cratonic SCR faults is subtle, and scarps may be difficult to detect because they are poorly preserved. Both the Roopena and Hyden faults are in areas of limited or no significant seismicity; these and other faults that we have studied indicate that many potentially hazardous SCR faults cannot be recognized solely on the basis of instrumental data or historical earthquakes. Although cratonic SCR faults may appear to be nonhazardous because they have been historically aseismic, those that are favorably oriented for movement in the current stress field can and have produced unexpected damaging earthquakes. Paleoseismic studies of modern and prehistoric SCR faulting events provide the basis for understanding of the long-term behavior of these faults and ultimately contribute to better seismic-hazard assessments.

  13. 69-74 A Retrospective Analysis of Prescribing Prac

    African Journals Online (AJOL)

    user

    A Retrospective Analysis of Prescribing Practice Based on WHO Prescribing Indicators at Four. Selected Hospitals of West ... Key words: World Health Organization, prescribing indicators, rational drug use. INTRODUCTION. Indicators of ... factors, the risk of irrational prescribing could raise several folds. Irrational use of ...

  14. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    Science.gov (United States)

    Cassidy, Nicola; Duggan, Edel; Williams, David J P; Tracey, Joseph A

    2011-07-01

    Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for adults (n = 866) and the major medication

  15. The epidemiology and type of medication errors reported to the National Poisons Information Centre of Ireland.

    LENUS (Irish Health Repository)

    Cassidy, Nicola

    2012-02-01

    INTRODUCTION: Medication errors are widely reported for hospitalised patients, but limited data are available for medication errors that occur in community-based and clinical settings. Epidemiological data from poisons information centres enable characterisation of trends in medication errors occurring across the healthcare spectrum. AIM: The objective of this study was to characterise the epidemiology and type of medication errors reported to the National Poisons Information Centre (NPIC) of Ireland. METHODS: A 3-year prospective study on medication errors reported to the NPIC was conducted from 1 January 2007 to 31 December 2009 inclusive. Data on patient demographics, enquiry source, location, pharmaceutical agent(s), type of medication error, and treatment advice were collated from standardised call report forms. Medication errors were categorised as (i) prescribing error (i.e. physician error), (ii) dispensing error (i.e. pharmacy error), and (iii) administration error involving the wrong medication, the wrong dose, wrong route, or the wrong time. RESULTS: Medication errors were reported for 2348 individuals, representing 9.56% of total enquiries to the NPIC over 3 years. In total, 1220 children and adolescents under 18 years of age and 1128 adults (≥ 18 years old) experienced a medication error. The majority of enquiries were received from healthcare professionals, but members of the public accounted for 31.3% (n = 736) of enquiries. Most medication errors occurred in a domestic setting (n = 2135), but a small number occurred in healthcare facilities: nursing homes (n = 110, 4.68%), hospitals (n = 53, 2.26%), and general practitioner surgeries (n = 32, 1.36%). In children, medication errors with non-prescription pharmaceuticals predominated (n = 722) and anti-pyretics and non-opioid analgesics, anti-bacterials, and cough and cold preparations were the main pharmaceutical classes involved. Medication errors with prescription medication predominated for

  16. Misbehaving Faults: The Expanding Role of Geodetic Imaging in Unraveling Unexpected Fault Slip Behavior

    Science.gov (United States)

    Barnhart, W. D.; Briggs, R.

    2015-12-01

    Geodetic imaging techniques enable researchers to "see" details of fault rupture that cannot be captured by complementary tools such as seismology and field studies, thus providing increasingly detailed information about surface strain, slip kinematics, and how an earthquake may be transcribed into the geological record. For example, the recent Haiti, Sierra El Mayor, and Nepal earthquakes illustrate the fundamental role of geodetic observations in recording blind ruptures, where purely geological and seismological studies provided incomplete views of rupture kinematics. Traditional earthquake hazard analyses typically rely on sparse paleoseismic observations and incomplete mapping, simple assumptions of slip kinematics from Andersonian faulting, and earthquake analogs to characterize the probabilities of forthcoming ruptures and the severity of ground accelerations. Spatially dense geodetic observations in turn help to identify where these prevailing assumptions regarding fault behavior break down and highlight new and unexpected kinematic slip behavior. Here, we focus on three key contributions of space geodetic observations to the analysis of co-seismic deformation: identifying near-surface co-seismic slip where no easily recognized fault rupture exists; discerning non-Andersonian faulting styles; and quantifying distributed, off-fault deformation. The 2013 Balochistan strike-slip earthquake in Pakistan illuminates how space geodesy precisely images non-Andersonian behavior and off-fault deformation. Through analysis of high-resolution optical imagery and DEMs, evidence emerges that a single fault may slip as both a strike-slip and a dip-slip fault across multiple seismic cycles. These observations likewise enable us to quantify on-fault deformation, which accounts for ~72% of the displacements in this earthquake. Nonetheless, the spatial distribution of on- and off-fault deformation in this event is highly spatially variable - a complicating factor for comparisons

  17. Prescribing Antibiotics

    DEFF Research Database (Denmark)

    Pedersen, Inge Kryger; Jepsen, Kim Sune

    2018-01-01

    The medical professions will lose an indispensable tool in clinical practice if even simple infections cannot be cured because antibiotics have lost effectiveness. This article presents results from an exploratory enquiry into "good doctoring" in the case of antibiotic prescribing at a time when the knowledge base in the healthcare field is shifting. Drawing on in-depth interviews about diagnosing and prescribing, the article demonstrates how the problem of antimicrobial resistance is understood and engaged with by Danish general practitioners. When general practitioners speak of managing "non-medical issues," they refer to routines, clinical expertise, experiences with their patients, and decision-making based more on contextual circumstances than molecular conditions - and on the fact that such conditions can be hard to assess. This article's contribution to knowledge about how new and global health

  18. Fault strength in Marmara region inferred from the geometry of the principal stress axes and fault orientations: A case study for the Prince's Islands fault segment

    Science.gov (United States)

    Pinar, Ali; Coskun, Zeynep; Mert, Aydin; Kalafat, Dogan

    2015-04-01

    The general consensus based on historical earthquake data points out that the last major moment release on the Prince's Islands fault was in 1766, which in turn signals an increased seismic risk for the Istanbul metropolitan area, considering that most of the 20 mm/yr GPS-derived slip rate for the region is accommodated by that fault segment. The orientation of the Prince's Islands fault segment overlaps with the NW-SE direction of the maximum principal stress axis derived from the focal mechanism solutions of the large and moderate-sized earthquakes that occurred in the Marmara region. As such, the NW-SE trending fault segment translates the motion between the two E-W trending branches of the North Anatolian fault zone: one extending from the Gulf of Izmit towards the Çınarcık basin, and the other extending between offshore Bakırköy and Silivri. The basic relation between the orientation of the maximum and minimum principal stress axes, the shear and normal stresses, and the orientation of a fault provides a clue to the strength of a fault, i.e., its frictional coefficient. Here, the angle between the fault normal and the maximum compressive stress axis is a key parameter: a maximum compressive stress oriented fault-normal or fault-parallel might be a necessary and sufficient condition for a creeping event. That relation also implies that when the trend of the sigma-1 axis is close to the strike of the fault, the shear stress acting on the fault plane approaches zero. On the other hand, the ratio between the shear and normal stresses acting on a fault plane is the frictional coefficient of the fault. Accordingly, the geometry between the Prince's Islands fault segment and the maximum principal stress axis matches a weak fault model. In this presentation we analyze seismological data acquired in the Marmara region and interpret the results in conjunction with the above-mentioned weak fault model.
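
    The "basic relation" invoked here is the standard resolution of stress onto a plane. For a fault whose normal makes an angle \(\theta\) with \(\sigma_1\) (a textbook identity, not the authors' derivation):

    \[ \sigma_n = \tfrac{1}{2}(\sigma_1+\sigma_3) + \tfrac{1}{2}(\sigma_1-\sigma_3)\cos 2\theta, \qquad \tau = \tfrac{1}{2}(\sigma_1-\sigma_3)\sin 2\theta, \]

    so \(\tau \to 0\) as \(\sigma_1\) becomes fault-normal (\(\theta = 0\)) or fault-parallel (\(\theta = 90^\circ\)), and slip then requires a very small ratio \(\mu = \tau/\sigma_n\): the weak-fault argument sketched above.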

  19. Distribution network fault section identification and fault location using artificial neural network

    DEFF Research Database (Denmark)

    Dashtdar, Masoud; Dashti, Rahman; Shaker, Hamid Reza

    2018-01-01

    In this paper, a method for fault location in a power distribution network is presented. The proposed method uses an artificial neural network. In order to train the neural network, a series of specific characteristics are extracted from the fault signals recorded in the relay. These characteristics include components of the sequences as well as the three-phase signals; statistics are used to extract the hidden features inside them, which are presented separately to train the neural network. Also, since the obtained inputs for training the neural network strongly depend on the fault angle, fault resistance, and fault location, the training data should be selected such that these differences are properly represented, so that the neural network does not face any issues in identification. Therefore, selecting the signal-processing function, data spectrum and, subsequently, statistical parameters
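
    A hedged sketch of the overall pipeline (not the paper's network or features): synthetic stand-ins for the statistical features extracted from relay records train a small MLP to output fault distance. All relations and noise levels below are invented for illustration:

    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 2000
    fault_km = rng.uniform(0, 20, n)           # true fault distance from the relay
    fault_res = rng.uniform(0, 50, n)          # fault resistance (ohm), a nuisance variable
    # Toy stand-ins for statistical features of sequence components:
    f1 = 1.0 / (1.0 + 0.1 * fault_km + 0.01 * fault_res) + rng.normal(0, 0.01, n)
    f2 = np.log1p(fault_km) + 0.02 * fault_res + rng.normal(0, 0.05, n)
    X = StandardScaler().fit_transform(np.column_stack([f1, f2]))

    model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
    model.fit(X[:1500], fault_km[:1500])       # train on the first 1500 records
    err = np.abs(model.predict(X[1500:]) - fault_km[1500:])
    print(f"mean |location error| on held-out records: {err.mean():.2f} km")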

  20. Loading of the San Andreas fault by flood-induced rupture of faults beneath the Salton Sea

    Science.gov (United States)

    Brothers, Daniel; Kilb, Debi; Luttrell, Karen; Driscoll, Neal W.; Kent, Graham

    2011-01-01

    The southern San Andreas fault has not experienced a large earthquake for approximately 300 years, yet the previous five earthquakes occurred at ~180-year intervals. Large strike-slip faults are often segmented by lateral stepover zones. Movement on smaller faults within a stepover zone could perturb the main fault segments and potentially trigger a large earthquake. The southern San Andreas fault terminates in an extensional stepover zone beneath the Salton Sea—a lake that has experienced periodic flooding and desiccation since the late Holocene. Here we reconstruct the magnitude and timing of fault activity beneath the Salton Sea over several earthquake cycles. We observe coincident timing between flooding events, stepover fault displacement and ruptures on the San Andreas fault. Using Coulomb stress models, we show that the combined effect of lake loading, stepover fault movement and increased pore pressure could increase stress on the southern San Andreas fault to levels sufficient to induce failure. We conclude that rupture of the stepover faults, caused by periodic flooding of the palaeo-Salton Sea and by tectonic forcing, had the potential to trigger earthquake rupture on the southern San Andreas fault. Extensional stepover zones are highly susceptible to rapid stress loading and thus the Salton Sea may be a nucleation point for large ruptures on the southern San Andreas fault.

  1. Fault Detection for Industrial Processes

    Directory of Open Access Journals (Sweden)

    Yingwei Zhang

    2012-01-01

    Full Text Available A new fault-relevant KPCA algorithm is proposed, and a fault detection approach is developed based on it. The proposed method further decomposes both the KPCA principal space and the residual space into two subspaces. Compared with traditional statistical techniques, the fault subspace is separated based on fault-relevant influence. This method can find fault-relevant principal directions and principal components in both the systematic and the residual subspaces for process monitoring. The proposed monitoring approach is applied to the Tennessee Eastman process and a penicillin fermentation process. The simulation results show the effectiveness of the proposed method.
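
    A generic KPCA monitoring loop (without the paper's fault-relevant decomposition) can be sketched as follows; the data, kernel width, and 99% control limit are illustrative:

    import numpy as np
    from sklearn.decomposition import KernelPCA

    rng = np.random.default_rng(1)
    normal = rng.normal(0, 1, (500, 4))        # training data from normal operation
    faulty = normal[:50] + np.array([4, 0, 0, 0])   # simulated sensor-bias fault

    kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.1,
                     fit_inverse_transform=True).fit(normal)

    def spe(x):
        # Squared prediction error: residual after projecting onto the KPCA model
        recon = kpca.inverse_transform(kpca.transform(x))
        return np.sum((x - recon) ** 2, axis=1)

    threshold = np.percentile(spe(normal), 99)  # 99th-percentile control limit
    alarms = spe(faulty) > threshold
    print(f"fault detection rate: {alarms.mean():.0%}")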

  2. Numerical modelling of the mechanical and fluid flow properties of fault zones - Implications for fault seal analysis

    NARCIS (Netherlands)

    Heege, J.H. ter; Wassing, B.B.T.; Giger, S.B.; Clennell, M.B.

    2009-01-01

    Existing fault seal algorithms are based on fault zone composition and fault slip (e.g., shale gouge ratio), or on fault orientations within the contemporary stress field (e.g., slip tendency). In this study, we aim to develop improved fault seal algorithms that account for differences in fault zone
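
    The two algorithm families named here can be stated in a few lines using their standard textbook definitions; all inputs below are hypothetical:

    import numpy as np

    def shale_gouge_ratio(vshale, dz, throw):
        """SGR = sum(Vsh_i * dz_i) / throw, over the beds swept past a point on the fault."""
        return np.sum(np.asarray(vshale) * np.asarray(dz)) / throw

    def slip_tendency(tau, sigma_n):
        """Ts = shear stress / effective normal stress resolved on the fault plane."""
        return tau / sigma_n

    # Hypothetical 4-bed sequence (shale fractions, thicknesses in m) and 50 m throw
    print(f"SGR = {shale_gouge_ratio([0.8, 0.2, 0.6, 0.1], [20, 10, 15, 5], 50):.0%}")
    print(f"Ts  = {slip_tendency(12.0, 40.0):.2f}")   # MPa / MPa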

  3. Active faults, paleoseismology, and historical fault rupture in northern Wairarapa, North Island, New Zealand

    International Nuclear Information System (INIS)

    Schermer, E.R.; Van Dissen, R.; Berryman, K.R.; Kelsey, H.M.; Cashman, S.M.

    2004-01-01

    Active faulting in the upper plate of the Hikurangi subduction zone, North Island, New Zealand, represents a significant seismic hazard that is not yet well understood. In northern Wairarapa, the geometry and kinematics of active faults, and the Quaternary and historical surface-rupture record, have not previously been studied in detail. We present the results of mapping and paleoseismicity studies on faults in the northern Wairarapa region to document the characteristics of active faults and the timing of earthquakes. We focus on evidence for surface rupture in the 1855 Wairarapa (Mw 8.2) and 1934 Pahiatua (Mw 7.4) earthquakes, two of New Zealand's largest historical earthquakes. The Dreyers Rock, Alfredton, Saunders Road, Waitawhiti, and Waipukaka faults form a northeast-trending, east-stepping array of faults. Detailed mapping of offset geomorphic features shows the rupture lengths vary from c. 7 to 20 km and single-event displacements range from 3 to 7 m, suggesting the faults are capable of generating M > 7 earthquakes. Trenching results show that two earthquakes have occurred on the Alfredton Fault since c. 2900 cal. BP. The most recent event probably occurred during the 1855 Wairarapa earthquake as slip propagated northward from the Wairarapa Fault and across a 6 km wide step. Waipukaka Fault trenches show that at least three surface-rupturing earthquakes have occurred since 8290-7880 cal. BP. Analysis of stratigraphic and historical evidence suggests the most recent rupture occurred during the 1934 Pahiatua earthquake. Estimates of slip rates provided by these data suggest that a larger component of strike slip than previously suspected is occurring within the upper plate and that the faults accommodate a significant proportion of the dextral component of oblique subduction. Assessment of seismic hazard is difficult because the known fault scarp lengths appear too short to have accommodated the estimated single-event displacements. Faults in the region are

  4. Fault prediction for nonlinear stochastic system with incipient faults based on particle filter and nonlinear regression.

    Science.gov (United States)

    Ding, Bo; Fang, Huajing

    2017-05-01

    This paper is concerned with fault prediction for nonlinear stochastic systems with incipient faults. Based on the particle filter and a reasonable assumption about the incipient faults, a modified fault estimation algorithm is proposed in which the system state is estimated simultaneously. According to the modified fault estimate, an intuitive fault detection strategy is introduced. Once an incipient fault is detected, its parameters are identified by a nonlinear regression method. Then, based on the estimated parameters, the future fault signal can be predicted. Finally, the effectiveness of the proposed method is verified by simulations of a three-tank system.
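
    A minimal Python sketch of the overall recipe is given below: augment the state with a slowly growing fault, estimate it with a bootstrap particle filter, then fit and extrapolate the fault trend. The scalar plant, the ramp fault, the noise levels, and the 3-sigma alarm rule are all illustrative assumptions, and a linear ramp fit stands in for the paper's nonlinear regression.

        import numpy as np

        rng = np.random.default_rng(1)

        # Simulate a scalar plant with an incipient (ramp) fault starting at k0
        T, k0, slope = 150, 60, 0.02
        f_true = np.maximum(0.0, slope * (np.arange(T) - k0))
        x, y = 0.0, np.empty(T)
        for k in range(T):
            x = 0.9 * x + f_true[k] + rng.normal(0.0, 0.05)
            y[k] = x + rng.normal(0.0, 0.1)

        # Bootstrap particle filter with the fault as an augmented random-walk state
        N = 500
        px = rng.normal(0.0, 0.1, N)      # state particles
        pf = np.zeros(N)                  # fault particles
        f_hat = np.empty(T)
        for k in range(T):
            px = 0.9 * px + pf + rng.normal(0.0, 0.05, N)
            pf = pf + rng.normal(0.0, 0.01, N)
            w = np.exp(-0.5 * ((y[k] - px) / 0.1) ** 2)
            w /= w.sum()
            idx = rng.choice(N, N, p=w)   # multinomial resampling
            px, pf = px[idx], pf[idx]
            f_hat[k] = pf.mean()

        # Detect the fault, fit a ramp by least squares, and extrapolate it
        det = int(np.argmax(f_hat > 3.0 * f_hat[:40].std()))   # first alarm instant
        A = np.vstack([np.arange(det, T), np.ones(T - det)]).T
        a, b = np.linalg.lstsq(A, f_hat[det:], rcond=None)[0]
        print("alarm at k =", det, "; predicted fault at k = 200:", a * 200 + b)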

  5. High-Intensity Radiated Field Fault-Injection Experiment for a Fault-Tolerant Distributed Communication System

    Science.gov (United States)

    Yates, Amy M.; Torres-Pomales, Wilfredo; Malekpour, Mahyar R.; Gonzalez, Oscar R.; Gray, W. Steven

    2010-01-01

    Safety-critical distributed flight control systems require robustness in the presence of faults. In general, these systems consist of a number of input/output (I/O) and computation nodes interacting through a fault-tolerant data communication system. The communication system transfers sensor data and control commands and can handle most faults under typical operating conditions. However, the performance of the closed-loop system can be adversely affected as a result of operating in harsh environments. In particular, High-Intensity Radiated Field (HIRF) environments have the potential to cause random fault manifestations in individual avionic components and to generate simultaneous system-wide communication faults that overwhelm existing fault management mechanisms. This paper presents the design of an experiment conducted at the NASA Langley Research Center's HIRF Laboratory to statistically characterize the faults that a HIRF environment can trigger on a single node of a distributed flight control system.

  6. Development of Fault Models for Hybrid Fault Detection and Diagnostics Algorithm: October 1, 2014 -- May 5, 2015

    Energy Technology Data Exchange (ETDEWEB)

    Cheung, Howard [Purdue Univ., West Lafayette, IN (United States); Braun, James E. [Purdue Univ., West Lafayette, IN (United States)

    2015-12-31

    This report describes models of building faults created for OpenStudio to support the ongoing development of fault detection and diagnostic (FDD) algorithms at the National Renewable Energy Laboratory. Building faults are operating abnormalities that degrade building performance, such as using more energy than in normal operation or failing to maintain building temperatures at the thermostat set points. Models of building faults in OpenStudio can be used to estimate fault impacts on building performance and to develop and evaluate FDD algorithms. The aim of the project is to develop fault models of typical heating, ventilating and air conditioning (HVAC) equipment in the United States, and the fault models in this report are grouped as control faults, sensor faults, packaged and split air conditioner faults, water-cooled chiller faults, and other uncategorized faults. The control fault models simulate impacts of inappropriate thermostat control schemes such as an incorrect thermostat set point in unoccupied hours and manual changes of thermostat set point due to extreme outside temperature. Sensor fault models focus on the modeling of sensor biases including economizer relative humidity sensor bias, supply air temperature sensor bias, and water circuit temperature sensor bias. Packaged and split air conditioner fault models simulate refrigerant undercharging, condenser fouling, condenser fan motor efficiency degradation, non-condensable entrainment in refrigerant, and liquid line restriction. Other fault models that are uncategorized include duct fouling, excessive infiltration into the building, and blower and pump motor degradation.
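
    The OpenStudio measures themselves are not reproduced here, but the structure of such fault models is simple to illustrate. The Python sketch below injects a constant sensor bias and an incorrect unoccupied-hours set point into clean signals; the names, magnitudes, and schedules are hypothetical.

        import numpy as np

        def apply_sensor_bias(signal, bias, start):
            """Constant-offset sensor fault injected from index `start` onwards."""
            faulty = np.asarray(signal, dtype=float).copy()
            faulty[start:] += bias
            return faulty

        def apply_setpoint_fault(setpoints, unoccupied, wrong_setpoint):
            """Control fault: an incorrect thermostat set point in unoccupied hours."""
            faulty = np.asarray(setpoints, dtype=float).copy()
            faulty[np.asarray(unoccupied, dtype=bool)] = wrong_setpoint
            return faulty

        # A +2 K supply-air temperature sensor bias appearing at hour 12 of one day
        clean = 13.0 + np.random.default_rng(2).normal(0.0, 0.2, 24)
        print(apply_sensor_bias(clean, bias=2.0, start=12))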

  7. Absolute age determination of quaternary faults

    International Nuclear Information System (INIS)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik

    2000-03-01

    To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil, and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that movement on the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault in the Yeongdeog area is around 70 Ma. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence; carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results.
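
    For readers unfamiliar with the method, a K-Ar age follows from the radiogenic 40Ar*/40K ratio and the standard decay constants of Steiger and Jaeger (1977); the ratio in the sketch below is a hypothetical value chosen to reproduce an age of about 30 Ma, the onset inferred for the Ulsan fault zone.

        import math

        LAMBDA_TOTAL = 5.543e-10   # total 40K decay constant, 1/yr (Steiger & Jaeger 1977)
        LAMBDA_EC = 0.581e-10      # electron-capture branch to 40Ar, 1/yr

        def k_ar_age_yr(ar40_star_over_k40):
            """t = (1/lambda) * ln(1 + (lambda/lambda_EC) * 40Ar*/40K)"""
            return math.log(1.0 + (LAMBDA_TOTAL / LAMBDA_EC) * ar40_star_over_k40) / LAMBDA_TOTAL

        # A hypothetical ratio of 1.76e-3 yields an age of roughly 30 Ma
        print(k_ar_age_yr(1.76e-3) / 1e6, "Ma")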

  8. Absolute age determination of quaternary faults

    Energy Technology Data Exchange (ETDEWEB)

    Cheong, Chang Sik; Lee, Seok Hoon; Choi, Man Sik [Korea Basic Science Institute, Seoul (Korea, Republic of)] (and others)

    2000-03-15

    To constrain the age of neotectonic fault movement, Rb-Sr, K-Ar, U-series disequilibrium, C-14 and Be-10 methods were applied to fault gouges, fracture infillings and sediments from the Malbang, Ipsil, and Wonwonsa faults in the Ulsan fault zone, the Yangsan fault in the Yeongdeog area, and the southeastern coastal area. Rb-Sr and K-Ar data imply that movement on the Ulsan fault zone initiated at around 30 Ma, and a preliminary dating result for the Yangsan fault in the Yeongdeog area is around 70 Ma. K-Ar and U-series disequilibrium dating results for fracture infillings in the Ipsil fault are consistent with reported ESR ages. Radiocarbon ages of Quaternary sediments from the Jeongjari area are discordant with the stratigraphic sequence; carbon isotope data indicate a difference in sedimentary environment for those samples. Be-10 dating results for the Suryum fault area are consistent with reported OSL results.

  9. ESR dating of fault rocks

    International Nuclear Information System (INIS)

    Lee, Hee Kwon

    2002-03-01

    Past movement on faults can be dated by measuring the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene.
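
    The age equation and the grain-size plateau described above are easy to state in code; the numbers in the Python sketch below are hypothetical but chosen to sit inside the 240-870 ka range reported for the Yangsan fault system.

        def esr_age_ka(equivalent_dose_gy, dose_rate_gy_per_ka):
            """ESR age = equivalent dose / dose rate, as described in the abstract."""
            return equivalent_dose_gy / dose_rate_gy_per_ka

        def plateau_age_ka(grain_sizes_um, ages_ka, critical_size_um):
            """Average the ages of grains at or below the critical size (the plateau)."""
            plateau = [a for g, a in zip(grain_sizes_um, ages_ka) if g <= critical_size_um]
            return sum(plateau) / len(plateau)

        # Hypothetical: D_E = 1200 Gy at 2.0 Gy/ka gives 600 ka
        print(esr_age_ka(1200.0, 2.0))
        print(plateau_age_ka([25, 45, 75, 150, 250], [602, 598, 610, 750, 900], 100))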

  10. ESR dating of fault rocks

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hee Kwon [Kangwon National Univ., Chuncheon (Korea, Republic of)

    2002-03-15

    Past movement on faults can be dated by measuring the intensity of ESR signals in quartz. These signals are reset by local lattice deformation and local frictional heating on grain contacts at the time of fault movement. The ESR signals then grow back as a result of bombardment by ionizing radiation from surrounding rocks. The age is obtained from the ratio of the equivalent dose, needed to produce the observed signal, to the dose rate. Fine grains are more completely reset during faulting, and a plot of age vs grain size shows a plateau for grains below a critical size: these grains are presumed to have been completely zeroed by the last fault activity. We carried out ESR dating of fault rocks collected from the Yangsan fault system. ESR dates from this fault system range from 870 to 240 ka. Results of this research suggest that long-term cyclic fault activity continued into the Pleistocene.

  11. Spatiotemporal patterns of fault slip rates across the Central Sierra Nevada frontal fault zone

    Science.gov (United States)

    Rood, Dylan H.; Burbank, Douglas W.; Finkel, Robert C.

    2011-01-01

    Patterns in fault slip rates through time and space are examined across the transition from the Sierra Nevada to the Eastern California Shear Zone-Walker Lane belt. At each of four sites along the eastern Sierra Nevada frontal fault zone between 38 and 39° N latitude, geomorphic markers, such as glacial moraines and outwash terraces, are displaced by a suite of range-front normal faults. Using geomorphic mapping, surveying, and 10Be surface exposure dating, mean fault slip rates are defined, and by utilizing markers of different ages (generally ~ 20 ka and ~ 150 ka), rates through time and interactions among multiple faults are examined over 10^4-10^5 year timescales. At each site for which data are available for the last ~ 150 ky, mean slip rates across the Sierra Nevada frontal fault zone have probably not varied by more than a factor of two over time spans equal to half of the total time interval (~ 20 ky and ~ 150 ky timescales): 0.3 ± 0.1 mm year^-1 (mode and 95% CI) at both Buckeye Creek in the Bridgeport basin and Sonora Junction; and 0.4 +0.3/-0.1 mm year^-1 along the West Fork of the Carson River at Woodfords. Data permit rates that are relatively constant over the time scales examined. In contrast, slip rates are highly variable in space over the last ~ 20 ky. Slip rates decrease by a factor of 3-5 northward over a distance of ~ 20 km between the northern Mono Basin (1.3 +0.6/-0.3 mm year^-1 at the Lundy Canyon site) and the Bridgeport Basin (0.3 ± 0.1 mm year^-1). The 3-fold decrease in the slip rate on the Sierra Nevada frontal fault zone northward from Mono Basin is indicative of a change in the character of faulting north of the Mina Deflection as extension is transferred eastward onto normal faults between the Sierra Nevada and Walker Lane belt. A compilation of regional deformation rates reveals that the spatial pattern of extension rates changes along strike of the Eastern California Shear Zone-Walker Lane belt. South of the Mina Deflection
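
    A slip rate of this kind is simply marker offset divided by marker age, with the uncertainties propagated; a minimal Monte Carlo sketch is shown below. The offset and age values are hypothetical, and Gaussian errors are an assumption (the paper reports modes with asymmetric 95% intervals).

        import numpy as np

        def slip_rate_mm_per_yr(offset_m, offset_sd, age_ka, age_sd, n=100_000, seed=3):
            """Monte Carlo slip rate from a dated, offset geomorphic marker.
            Offset in metres over age in ka comes out directly in mm/yr."""
            rng = np.random.default_rng(seed)
            rate = rng.normal(offset_m, offset_sd, n) / rng.normal(age_ka, age_sd, n)
            lo, med, hi = np.percentile(rate, [2.5, 50.0, 97.5])
            return med, (lo, hi)

        # Hypothetical marker: a ~20 ka moraine crest offset 6 +/- 1 m
        print(slip_rate_mm_per_yr(6.0, 1.0, 20.0, 2.0))   # median ~0.3 mm/yr, 95% bounds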

  12. Geophysical Imaging of Fault Structures Over the Qadimah Fault, Saudi Arabia

    KAUST Repository

    AlTawash, Feras

    2011-06-01

    The purpose of this study is to use geophysical imaging methods to identify the conjectured location of the ‘Qadimah fault’ near the ‘King Abdullah Economic City’, Saudi Arabia. Towards this goal, 2-D resistivity and seismic surveys were conducted at two different locations, site 1 and site 2, along the proposed trace of the ‘Qadimah fault’. Three processing techniques were used to validate the fault: (i) 2-D travel time tomography, (ii) resistivity imaging, and (iii) reflection trim stacking. The refraction traveltime tomograms at site 1 and site 2 both show low-velocity zones (LVZs) next to the conjectured fault trace. These LVZs are interpreted as colluvial wedges that are often observed on the downthrown side of normal faults. The resistivity tomograms are consistent with this interpretation in that there is a significant change in resistivity values along the conjectured fault trace. Processing the reflection data did not clearly reveal the existence of a fault, partly due to the sub-optimal design of the reflection experiment. Overall, the results of this study strongly, but not definitively, suggest the existence of the Qadimah fault in the ‘King Abdullah Economic City’ region of Saudi Arabia.

  13. The distribution of deformation in parallel fault-related folds with migrating axial surfaces: comparison between fault-propagation and fault-bend folding

    Science.gov (United States)

    Salvini, Francesco; Storti, Fabrizio

    2001-01-01

    In fault-related folds that form by axial surface migration, rocks undergo deformation as they pass through axial surfaces. The distribution and intensity of deformation in these structures are governed by the history of axial surface migration. Upon fold initiation, unique dip panels develop, each with a characteristic deformation intensity, depending on their history. During fold growth, rocks that pass through axial surfaces are transported between dip panels and accumulate additional deformation. By tracking the pattern of axial surface migration in model folds, we predict the distribution of relative deformation intensity in simple-step, parallel fault-bend and fault-propagation anticlines. In both cases the deformation is partitioned into unique domains we call deformation panels. For a given rheology of the folded multilayer, deformation intensity will be homogeneously distributed in each deformation panel. Fold limbs are always deformed. The flat crests of fault-propagation anticlines are always undeformed. Two asymmetric deformation panels develop in fault-propagation folds with ramp angles exceeding 29°. For lower ramp angles, an additional, more intensely deformed panel develops at the transition between the crest and the forelimb. Deformation in the flat crests of fault-bend anticlines occurs when fault displacement exceeds the length of the footwall ramp, but is never found immediately hinterlandward of the crest-to-forelimb transition. In environments dominated by brittle deformation, our models may serve as a first-order approximation of the distribution of fractures in fault-related folds.

  14. Fault isolatability conditions for linear systems

    DEFF Research Database (Denmark)

    Stoustrup, Jakob; Niemann, Henrik

    2006-01-01

    In this paper, we shall show that an unlimited number of additive single faults can be isolated under mild conditions if a general isolation scheme is applied. Multiple faults are also covered. The approach is algebraic and is based on a set representation of faults, where all faults within a set...... the faults have occurred. The last step is a fault isolation (FI) of the faults occurring in a specific fault set, i.e. equivalent to the standard FI step. A simple example demonstrates how to turn the algebraic necessary and sufficient conditions into explicit algorithms for designing filter banks, which...
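
    The paper's algebraic set-based conditions are not reproduced in the abstract, so the Python sketch below illustrates only the standard idea that such schemes generalize: for each candidate additive sensor fault, check whether the remaining measurements are mutually consistent. The measurement matrix and fault magnitude are arbitrary illustrative choices.

        import numpy as np

        def isolate_faulty_sensor(C, y):
            """For each candidate sensor j, estimate the state from all the *other*
            measurements (least squares) and score the residual of that subset.
            The candidate whose exclusion makes the rest consistent is declared faulty."""
            m = C.shape[0]
            res = np.empty(m)
            for j in range(m):
                keep = [i for i in range(m) if i != j]
                Ck, yk = C[keep], y[keep]
                xhat = np.linalg.lstsq(Ck, yk, rcond=None)[0]
                res[j] = np.linalg.norm(yk - Ck @ xhat)
            return int(np.argmin(res)), res

        # Four sensors observing a 2-dimensional state; sensor 2 carries an additive fault
        rng = np.random.default_rng(4)
        C = rng.normal(size=(4, 2))
        y = C @ np.array([1.0, -2.0]) + 0.01 * rng.normal(size=4)
        y[2] += 5.0                               # additive single fault
        print(isolate_faulty_sensor(C, y))        # -> (2, ...)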

  15. Nonlinear Robust Observer-Based Fault Detection for Networked Suspension Control System of Maglev Train

    Directory of Open Access Journals (Sweden)

    Yun Li

    2013-01-01

    A fault detection approach based on a nonlinear robust observer is designed for the networked suspension control system of a maglev train with randomly induced time delay. First, considering random bounded time delay and external disturbance, a nonlinear model of the networked suspension control system is established. Then, a nonlinear robust observer is designed using the measured suspension gap as its input, and the estimation error is proved to be bounded with arbitrary precision by choosing an appropriate parameter. When sensor faults happen, the residual between the real states and the observer outputs indicates which kind of sensor failure has occurred. Finally, simulation results using the actual parameters of the CMS-04 maglev train indicate that the proposed method is effective.
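
    A minimal discrete-time sketch of observer-based residual detection is shown below. The second-order surrogate model, observer gain, threshold, and injected bias are illustrative assumptions and not the CMS-04 suspension model; the time-delay and robustness machinery of the paper is omitted.

        import numpy as np

        # Second-order discrete surrogate: x+ = A x + B u, y = C x (gap sensor)
        A = np.array([[1.0, 0.01], [-0.5, 0.98]])
        B = np.array([[0.0], [0.01]])
        C = np.array([[1.0, 0.0]])
        L = np.array([[0.6], [2.0]])              # observer gain (stabilising here)

        def detect_sensor_fault(y_seq, u_seq, threshold=0.05):
            """Flag steps where the output residual |y - C x_hat| exceeds a bound."""
            xh = np.zeros((2, 1))
            alarms = []
            for y, u in zip(y_seq, u_seq):
                r = y - (C @ xh).item()           # residual
                alarms.append(abs(r) > threshold)
                xh = A @ xh + B * u + L * r       # observer update
            return alarms

        rng = np.random.default_rng(6)
        x, ys, us = np.zeros((2, 1)), [], []
        for k in range(100):
            u = 0.1 * np.sin(0.05 * k)
            x = A @ x + B * u + rng.normal(0.0, 0.001, (2, 1))
            ys.append((C @ x).item() + (0.2 if k >= 50 else 0.0))  # bias fault at k=50
            us.append(u)
        print(sum(detect_sensor_fault(ys, us)[50:]), "alarms after fault injection")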

  16. The Heidelberg POLYP - a flexible and fault-tolerant poly-processor

    International Nuclear Information System (INIS)

    Maenner, R.; Deluigi, B.

    1981-01-01

    The Heidelberg poly-processor system POLYP is described. It is intended for reprocessing of experimental data in nuclear physics, for second-stage trigger processing in high-energy physics, and generally for other applications requiring high computing power. The POLYP system consists of any number of I/O processors, processor modules (possibly of different types), global memory segments, and a host processor. All modules (up to several hundred) are connected by a multiple common-data-bus system; all processors are additionally connected by a multiple sync-bus system for processor/task scheduling. All hardware and software are designed to be decentralized and free of bottlenecks. Most hardware faults, such as single-bit errors in memory or multi-bit errors during transfers, are corrected automatically. Defective modules, buses, etc. can be removed with only a graceful degradation of system throughput. (orig.)
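
    The abstract does not say which code POLYP used for its automatic correction of single-bit memory errors, but the classic mechanism is a Hamming code; a Hamming(7,4) sketch is given below purely as an illustration of how a single flipped bit is located by the syndrome and repaired.

        def hamming74_encode(d):
            """Encode 4 data bits into a 7-bit Hamming(7,4) codeword."""
            d1, d2, d3, d4 = d
            p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
            return [p1, p2, d1, p3, d2, d3, d4]          # bit positions 1..7

        def hamming74_correct(c):
            """Locate a single flipped bit via the syndrome and repair it."""
            c = list(c)
            s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
            s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
            s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
            syndrome = s1 + 2 * s2 + 4 * s3              # 0 means "no error"
            if syndrome:
                c[syndrome - 1] ^= 1
            return [c[2], c[4], c[5], c[6]]              # recovered data bits

        word = hamming74_encode([1, 0, 1, 1])
        word[4] ^= 1                                     # inject a single-bit memory fault
        print(hamming74_correct(word))                   # -> [1, 0, 1, 1]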

  17. Fault diagnosis of an intelligent hydraulic pump based on a nonlinear unknown input observer

    Directory of Open Access Journals (Sweden)

    Zhonghai MA

    2018-02-01

    Hydraulic piston pumps are commonly used in aircraft. To improve aircraft survivability and energy efficiency, intelligent variable-pressure pump systems are used ever more widely in aircraft hydraulic systems. Efficient fault diagnosis plays an important role in improving the reliability and performance of hydraulic systems. In this paper, a fault diagnosis method for an intelligent hydraulic pump system (IHPS) based on a nonlinear unknown input observer (NUIO) is proposed. Unlike a full-order Luenberger-type unknown input observer, the NUIO takes the nonlinear factors of the IHPS into account. First, a new type of intelligent pump is presented and a mathematical model is established to describe the IHPS. Taking into account the real-time requirements of the IHPS and the special structure of the pump, the mechanism of the intelligent pump and its failure modes are analyzed and two typical failure modes are identified. Furthermore, a NUIO for the IHPS is designed based on the output pressure and swashplate angle signals. With the residual signals produced by the NUIO, intelligent pump failures occurring in real time can be detected online. Lastly, analysis and simulation confirm that this diagnostic method can accurately diagnose and isolate those typical failure modes of the nonlinear IHPS. The method proposed in this paper is of great significance for improving the reliability of the IHPS. Keywords: Fault diagnosis, Hydraulic piston pump, Model-based, Nonlinear unknown input observer (NUIO), Residual error
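
    The NUIO design itself is not given in the abstract; the fragment below merely illustrates the last step of such a scheme, mapping thresholded residuals from the output pressure and swashplate angle channels to fault hypotheses. The signature table and thresholds are hypothetical.

        # Fault signature logic for two failure modes, assuming one residual driven
        # by the output pressure and one by the swashplate angle (hypothetical).
        SIGNATURES = {
            (True, False): "pressure channel fault",
            (False, True): "swashplate angle channel fault",
            (True, True): "pump mechanism fault (both residuals excited)",
            (False, False): "no fault",
        }

        def diagnose(r_pressure, r_angle, thr_p=0.5, thr_a=0.02):
            """Threshold the two NUIO residuals and look up the fault signature."""
            return SIGNATURES[(abs(r_pressure) > thr_p, abs(r_angle) > thr_a)]

        print(diagnose(1.2, 0.001))   # -> pressure channel fault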

  18. Quantile arithmetic methodology for uncertainty propagation in fault trees

    International Nuclear Information System (INIS)

    Abdelhai, M.; Ragheb, M.

    1986-01-01

    A methodology based on quantile arithmetic, the probabilistic analog of interval analysis, is proposed for the computation of uncertainty propagation in fault tree analysis. The basic events' continuous probability density functions (pdfs) are represented by equivalent discrete distributions obtained by dividing them into a number of quantiles N. Quantile arithmetic is then used to perform the binary arithmetical operations corresponding to the logical gates in the Boolean expression of the top event of a given fault tree. The computational advantage of the present methodology compared with the widely used Monte Carlo method was demonstrated for the summation of M normal variables through the efficiency ratio, defined as the product of the labor and error ratios. The efficiency ratio values obtained by the suggested methodology for M = 2 were 2279 for N = 5, 445 for N = 25, and 66 for N = 45 when compared with the results for 19,200 Monte Carlo samples at the 40th percentile point. Another advantage of the approach is that the exact analytical value of the median is always obtained for the top event.
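
    The following Python sketch illustrates the idea of quantile arithmetic on a toy fault tree: each basic-event pdf is reduced to N equiprobable quantile points, gate operations are evaluated over all pairwise combinations, and the result is re-quantised. The gate formulas assume independent events, and the lognormal inputs and tree structure are illustrative; the paper's exact discretisation scheme may differ.

        import numpy as np

        def to_quantiles(samples, n=25):
            """Represent a distribution by n equiprobable quantile points."""
            qs = (np.arange(n) + 0.5) / n
            return np.quantile(samples, qs)

        def combine(qa, qb, op):
            """Combine two quantile vectors through a gate and re-quantise."""
            grid = op(qa[:, None], qb[None, :]).ravel()    # all n*n combinations
            return to_quantiles(grid, len(qa))

        AND = lambda p, q: p * q                 # intersection of independent events
        OR = lambda p, q: p + q - p * q          # union of independent events

        # Top event = (E1 AND E2) OR E3, with lognormal basic-event probabilities
        rng = np.random.default_rng(5)
        e1 = to_quantiles(rng.lognormal(-6, 0.5, 10_000))
        e2 = to_quantiles(rng.lognormal(-5, 0.7, 10_000))
        e3 = to_quantiles(rng.lognormal(-7, 0.4, 10_000))
        top = combine(combine(e1, e2, AND), e3, OR)
        print("median of top event probability:", np.median(top))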

  19. System tuning and measurement error detection testing

    International Nuclear Information System (INIS)

    Krejci, Petr; Machek, Jindrich

    2008-09-01

    The project includes the use of the PEANO (Process Evaluation and Analysis by Neural Operators) system to verify the monitoring of the status of dependent measurements, with a view to early detection of measurement faults and estimation of selected signal levels. At the present stage, the system's capability to detect measurement errors was assessed, and the quality of the estimates was evaluated for various system configurations and empirical model structures; rules were sought for system training at chosen process-data recording parameters and operating modes. The aim was to find a suitable system configuration and to document the quality of the tuned system on artificial failures.

  20. Fault Current Distribution and Pole Earth Potential Rise (EPR) Under Substation Fault

    Science.gov (United States)

    Nnassereddine, M.; Rizk, J.; Hellany, A.; Nagrial, M.

    2013-09-01

    New high-voltage (HV) substations are fed by transmission lines. The position of these lines necessitates earthing design to ensure safety compliance of the system. Conductive structures such as steel or concrete poles are widely used in HV transmission mains. The earth potential rise (EPR) generated by a fault at the substation could result in an unsafe condition. This article discusses EPR under substation fault conditions. Pole EPR under substation fault is assessed with and without consideration of mutual impedance. Split-factor determination with and without the mutual impedance of the line is also discussed. Furthermore, a simplified formula to compute the pole grid current under substation fault is included, along with the introduction of the n factor, which determines the number of poles that require earthing assessment under substation fault. A case study is presented.
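
    The article's simplified pole-grid-current formula and the n factor are not reproduced in the abstract, but the underlying EPR relation is standard: only the split-factor fraction of the earth-fault current enters the local grid, and the EPR is that current times the grid impedance. The numbers in the Python sketch below are hypothetical.

        def earth_potential_rise(fault_current_a, split_factor, grid_impedance_ohm):
            """EPR = I_g * Z_g, where the grid current I_g is the split factor times
            the total earth-fault current (the rest returns via earth wires/sheaths)."""
            i_grid = split_factor * fault_current_a
            return i_grid * grid_impedance_ohm

        # Hypothetical pole: 0.7 of a 5 kA substation fault enters the local earth grid
        print(earth_potential_rise(5_000, 0.7, 1.2), "V")   # 4200 V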