WorldWideScience

Sample records for human error assessment

  1. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps and incidents are attributed to human error. As part of Quality within space exploration ground processing operations, the underlying contributors to and causes of human error must be identified and classified in order to manage human error. This presentation provides a framework and methodology that uses the Human Error Assessment and Reduction Technique (HEART) and the Human Factors Analysis and Classification System (HFACS) as analysis tools to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. The methodology was applied retrospectively to six (6) NASA ground processing operations scenarios and thirty (30) years of launch-vehicle-related mishap data. This modifiable framework can be adopted and followed by other space and similarly complex operations.
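
    For context, the core HEART calculation multiplies a generic task's nominal error probability by a weighted factor for each applicable error-producing condition (EPC). The sketch below illustrates only that arithmetic; the nominal HEP, EPC multipliers, and assessed proportions of affect are hypothetical placeholders, not values from the NASA study.

        # Illustrative HEART-style calculation (hypothetical values, not NASA data).
        # Assessed HEP = nominal HEP * product over EPCs of ((multiplier - 1) * proportion + 1)
        def heart_hep(nominal_hep, epcs):
            """epcs: list of (epc_multiplier, assessed_proportion_of_affect) pairs."""
            hep = nominal_hep
            for multiplier, proportion in epcs:
                hep *= (multiplier - 1.0) * proportion + 1.0
            return min(hep, 1.0)  # a probability cannot exceed 1

        nominal = 0.09                    # hypothetical nominal HEP for a generic task type
        epcs = [(11.0, 0.4),              # e.g. shortage of time, 40% assessed effect
                (3.0, 0.2)]               # e.g. poor feedback, 20% assessed effect
        print(f"Assessed HEP: {heart_hep(nominal, epcs):.3f}")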

  2. Errors in Seismic Hazard Assessment are Creating Huge Human Losses

    Science.gov (United States)

    Bela, J.

    2015-12-01

    The current practice of representing earthquake hazards to the public based upon their perceived likelihood or probability of occurrence is now shown by the global record of actual earthquakes to be not only erroneous and unreliable, but also too deadly. Earthquake occurrence is sporadic, and assumptions of earthquake frequency and return period are therefore not only misleading but categorically false. More than 700,000 people lost their lives between 2000 and 2011, as 11 of the world's deadliest earthquakes occurred in locations where probability-based seismic hazard assessments had predicted only low seismic hazard. Unless seismic hazard assessment and the setting of minimum earthquake design safety standards for buildings and bridges are based on a more realistic deterministic recognition of "what can happen" rather than on what mathematical models suggest is "most likely to happen", such huge human losses can only be expected to continue. The actual earthquake events that did occur were at or near the maximum-potential-size events that either had already occurred in the past or were geologically known to be possible. Haiti's M7 earthquake of 2010 (with more than 222,000 fatalities) meant the dead could not even be buried with dignity. Japan's catastrophic 2011 Tohoku earthquake, an M9 megathrust event, unleashed a tsunami that not only obliterated coastal communities along the northern Japanese coast but also claimed more than 20,000 lives. This tsunami flooded nuclear reactors at Fukushima, causing 4 explosions and 3 reactors to melt down. But while this history of huge human losses due to erroneous and misleading seismic hazard estimates, despite its wrenching pain, cannot be unlived, if faced with courage and a more realistic deterministic estimate of "what is possible", it need not be lived again. An objective testing of the results of global probability-based seismic hazard maps against real occurrences has never been done by the

  3. Formal safety assessment and application of the navigation simulators for preventing human error in ship operations

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The International Maritime Organization (IMO) has encouraged its member countries to introduce Formal Safety Assessment (FSA) for ship operations since the end of the last century. FSA can be used through certain formal assessing steps to generate effective recommendations and cautions to control marine risks and improve the safety of ships. On the basis of the brief introduction of FSA, this paper describes the ideas of applying FSA to the prevention of human error in ship operations. It especially discusses the investigation and analysis of the information and data using navigation simulators and puts forward some suggestions for the introduction and development of the FSA research work for safer ship operations.

  4. A Preliminary Study on the Measures to Assess the Organizational Safety: The Cultural Impact on Human Error Potential

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]

    2011-10-15

    The Fukushima I nuclear accident, which followed the Tohoku earthquake and tsunami on 11 March 2011, occurred twelve years after the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, are associated with characteristic organizational problems; they caused severe social and economic disruption and had significant environmental and health impacts. Cultural problems associated with human error arise for various reasons, and different actions are needed to prevent different errors. Unfortunately, research on organizations and human error has produced widely varying results that call for different approaches. In other words, more practical solutions must be drawn from this diverse research for nuclear safety, leading to a systematic approach to the organizational deficiencies that cause human error. This paper reviews Hofstede's criteria, the IAEA safety culture framework, the safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of the HANARO safety culture to verify the measures used to assess organizational safety.

  5. Assessing Measurement Error in Medicare Coverage

    Data.gov (United States)

    U.S. Department of Health & Human Services — Assessing Measurement Error in Medicare Coverage From the National Health Interview Survey Using linked administrative data, to validate Medicare coverage estimates...

  7. A human error taxonomy for analysing healthcare incident reports: assessing reporting culture and its effects on safety performance

    DEFF Research Database (Denmark)

    Itoh, Kenji; Omata, N.; Andersen, Henning Boje

    2009-01-01

    The present paper reports on a human error taxonomy system developed for healthcare risk management and on its application to evaluating safety performance and reporting culture. The taxonomy comprises dimensions for classifying errors, for performance-shaping factors, and for the maturity...

  8. Errors in Human Performance

    Science.gov (United States)

    1980-08-15

  9. Managing human error in aviation.

    Science.gov (United States)

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address the team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through teamwork. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits, CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  10. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage the human errors likely to occur in such operations. Methods: This study was performed at a demining site in the war zones of western Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using HTA, and the related possible errors were assessed using ATHEANA. Results: The demining task was composed of four main operations: primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action taken, errors in the neutralization operation, and environmental explosions. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in demining operations can be attributed to factors such as poor weather and operating conditions (e.g., outdoor work), inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in demining operations, these factors should be managed properly.

  11. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    Human errors are divided into two groups. The first group contains human errors which affect the reliability directly. The second group contains human errors which will not directly affect the reliability of the structure. The methodology used to estimate so-called reliability distributions on ba...

  12. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors' research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, for the core design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error a challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing players to interact only with errors associated with the intended aesthetics of the game.

  13. Human error: A significant information security issue

    Energy Technology Data Exchange (ETDEWEB)

    Banks, W.W.

    1994-12-31

    One of the major threats to information security, human error, is often ignored or dismissed with statements such as "There is not much we can do about it." This type of thinking runs counter to reality, because studies have shown that, of all systems threats, human error has the highest probability of occurring and that, with professional assistance, human errors can be prevented or significantly reduced. Security analysts often overlook human error as a major threat; however, other professionals, such as human factors engineers, are trained to deal with these probabilistic occurrences and to mitigate them. In a recent study, 55% of the respondents surveyed considered human error the most important security threat. Documentation exists to show that human error was a major cause of the consequences suffered at Three Mile Island, Chernobyl, and Bhopal, and in the Exxon Valdez tanker spill. Ironically, the causes of human error can usually be quickly and easily eliminated.

  14. The cost of human error intervention

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C.T.; Banks, W.W.; Jones, E.D.

    1994-03-01

    DOE has directed that cost-benefit analyses be conducted as part of the review process for all new DOE orders. This new policy will have the effect of ensuring that DOE analysts can justify the implementation costs of the orders they develop. We would like to argue that a cost-benefit analysis is merely one phase of a complete risk management program -- one that would more than likely start with a probabilistic risk assessment. The safety community defines risk as the probability of failure times the severity of the consequence. An engineering definition of failure can be considered in terms of physical performance, as in mean time between failures, or in terms of human performance, as in the probability of human error. The severity of the consequence of a failure can be measured along any one of a number of dimensions -- economic, political, or social. Clearly, an analysis along one dimension cannot be directly compared to another, but a set of cost-benefit analyses, based on a series of cost dimensions, can be extremely useful to managers who must prioritize their resources. Over the last two years, DOE has been developing a series of human factors orders directed at lowering the probability of human error -- or at least changing the distribution of those errors. The following discussion presents a series of cost-benefit analyses using historical events in the nuclear industry. However, we would first like to discuss some of the analytic cautions that must be considered when dealing with human error.
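
    The abstract's working definition (risk = probability of failure times severity of consequence) can be turned into a minimal expected-loss comparison of the kind a cost-benefit analysis starts from. All figures below are invented for illustration and have no connection to the DOE orders or nuclear events discussed.

        # Minimal cost-benefit sketch using risk = probability * consequence (hypothetical numbers).
        def annual_risk(p_error_per_demand, demands_per_year, cost_per_consequence):
            return p_error_per_demand * demands_per_year * cost_per_consequence

        baseline = annual_risk(1e-3, 500, 2_000_000)   # before a human-factors intervention
        after    = annual_risk(2e-4, 500, 2_000_000)   # assumed lower HEP after intervention
        intervention_cost_per_year = 150_000

        benefit = baseline - after
        print(f"Expected annual benefit: ${benefit:,.0f}")
        print(f"Benefit/cost ratio: {benefit / intervention_cost_per_year:.1f}")

    The same arithmetic can be repeated along a political or social cost dimension; as the abstract notes, the resulting analyses are not directly comparable, but together they help managers prioritize resources.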

  15. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  16. Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR): Data manual. Part 2: Human error probability (HEP) data; Volume 5, Revision 4

    Energy Technology Data Exchange (ETDEWEB)

    Reece, W.J.; Gilbert, B.G.; Richards, R.E. [EG and G Idaho, Inc., Idaho Falls, ID (United States)

    1994-09-01

    This data manual contains a hard copy of the information in the Nuclear Computerized Library for Assessing Reactor Reliability (NUCLARR) Version 3.5 database, which is sponsored by the US Nuclear Regulatory Commission. NUCLARR was designed as a tool for risk analysis. Many of the nuclear reactors in the US and several outside the US are represented in the NUCLARR database. NUCLARR includes both human error probability estimates for workers at the plants and hardware failure data for nuclear reactor equipment. Aggregations of these data yield valuable reliability estimates for probabilistic risk assessments and human reliability analyses. The data manual is organized to permit manual searches of the information if the computerized version is not available. Originally, the manual was published in three parts. In this revision the introductory material located in the original Part 1 has been incorporated into the text of Parts 2 and 3. The user can now find introductory material either in the original Part 1, or in Parts 2 and 3 as revised. Part 2 contains the human error probability data, and Part 3, the hardware component reliability data.

  17. A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  18. Understanding Human Error Based on Automated Analyses

    Data.gov (United States)

    National Aeronautics and Space Administration — This is a report on a continuing study of automated analyses of experiential textual reports to gain insight into the causal factors of human errors in aviation...

  19. Wind power error estimation in resource assessments.

    Science.gov (United States)

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
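
    A minimal sketch of the propagation the abstract describes: a wind-speed measurement error is pushed through a turbine power curve to see how it distorts the estimated power. The cubic curve, cut-in and rated speeds, and sample wind speeds below are assumptions; the study itself used 28 manufacturer power curves fitted with Lagrange's method and weighted the result by the wind-speed distribution.

        import numpy as np

        # Hypothetical power curve: zero below cut-in, cubic up to rated speed, flat above.
        def power_kw(v):
            v = np.asarray(v, dtype=float)
            return np.where(v < 3.0, 0.0,
                   np.where(v < 12.0, 2000.0 * (v / 12.0) ** 3, 2000.0))

        speeds = np.array([5.0, 8.0, 11.0, 14.0])   # measured wind speeds (m/s)
        rel_speed_error = 0.10                       # assumed 10% measurement error

        nominal = power_kw(speeds)
        perturbed = power_kw(speeds * (1.0 + rel_speed_error))
        print(np.round(100 * (perturbed - nominal) / nominal, 1))  # % error in estimated power

    In the cubic region the speed error is amplified (roughly 33%), while near and above rated power it shrinks toward zero; averaging over realistic curves and the wind-speed probability density is what yields the roughly 5% aggregate figure the authors report.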

  1. Wind power error estimation in resource assessments.

    Directory of Open Access Journals (Sweden)

    Osvaldo Rodríguez

    Full Text Available Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, the probability density function, and wind turbine power curves. This method uses the actual wind speed data, without prior statistical treatment, and 28 wind turbine power curves fitted by Lagrange's method to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.

  2. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE).

    Science.gov (United States)

    Haney, L N

    2000-09-01

    FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) is a framework and methodology for the systematic analysis, characterization, and prediction of human error. It was developed in a NASA Advanced Concepts Project by Idaho National Engineering and Environmental Laboratory, NASA Ames Research Center, Boeing, and America West Airlines, with input from United Airlines and Idaho State University. It was hypothesized that development of a comprehensive taxonomy of error-type and contributing-influences, in a framework and methodology addressing issues important for error analysis, would result in a useful tool for human error analysis. The development method included capturing expertise of human factors and domain experts in the framework, and ensuring that the approach addressed issues important for future human error analysis. This development resulted in creation of a FRANCIE taxonomy for airline maintenance, and a FRANCIE framework and approach that addresses important issues: proactive and reactive, comprehensive error-type and contributing-influences taxonomy, meaningful error reduction strategies, multilevel analyses, multiple user types, compatible with existing methods, applied in design phase or throughout system life cycle, capture of lessons learned, and ease of application. FRANCIE was designed to apply to any domain, given taxonomy refinement. This is demonstrated by its application for an aviation operations scenario for a new precision landing aid. Representative error-types and contributing-influences, two example analyses, and a case study are presented. In conclusion, FRANCIE is useful for analysis of human error, and the taxonomy is a starting point for development of taxonomies allowing application to other domains, such as spacecraft maintenance, operations, medicine, process control, and other transportation industries.

  3. Application of human error analysis to aviation and space operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to the design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  4. Deaths during general anesthesia: technology-related, due to human error, or unavoidable? An ECRI technology assessment.

    Science.gov (United States)

    1985-01-01

    More than 2,000 healthy Americans die each year during general anesthesia, and at least half of these deaths may be preventable. Anesthetists and equipment manufacturers have made considerable progress in improving anesthesia safety. However, much more needs to be done, especially in "human-factors" areas such as improved training, consistent use of preanesthesia checklists, and anesthetists' willingness to enhance their vigilance by using appropriate monitoring equipment. While defective equipment and supplies are the direct cause of relatively few deaths, inexpensive oxygen analyzers and disconnect alarms could, if available in more ORs, warn anesthetists in time to convert many deaths to near misses. Some anesthetists are using other monitoring technologies that are more costly, but can detect a wider range of problems. The anesthesia community could expand its anesthesia-safety leadership and guidance, by improving technology-related training and by developing practice standards for anesthetists and safety standards for equipment. The Joint Commission on Accreditation of Hospitals could impose specific safety requirements on hospitals; malpractice insurance carriers could require anesthetists and hospitals to use monitors and alarms during all procedures; and the Food and Drug Administration could actively stimulate and oversee these efforts and perhaps provide seed money for some of them. The necessary equipment costs would likely be offset by long-term savings in malpractice premiums, as anesthesia incidents are the most costly of all types of malpractice claims. Concerted efforts such as these could greatly reduce the number of avoidable anesthesia-related deaths.

  5. Human error in daily intensive nursing care

    Directory of Open Access Journals (Sweden)

    Sabrina da Costa Machado Duarte

    2015-12-01

    Full Text Available Objectives: to identify errors in daily intensive nursing care and analyze them according to the theory of human error. Method: a quantitative, descriptive and exploratory study, undertaken at the Intensive Care Center of a hospital in the Brazilian Sentinel Hospital Network. The participants were 36 professionals from the nursing team. The data were collected through semi-structured interviews, observation and lexical analysis in the ALCESTE(R) software. Results: human error in nursing care can be related to the systemic approach, through active failures and latent conditions. The active failures are represented by errors in medication administration and failure to raise the bedside rails. The latent conditions can be related to communication difficulties in the multiprofessional team, the lack of standards and institutional routines, and the absence of material resources. Conclusion: the errors identified interfere with nursing care and the clients' recovery and can cause harm. Nevertheless, they are treated as common events inherent in daily practice. The need to acknowledge these events is emphasized, stimulating a safety culture at the institution.

  6. Human error mitigation initiative (HEMI) : summary report.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.; Brannon, Nathan Gregory

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and difficult to characterize as thorough. The proposed alternative method begins by leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations; it indicated systemic issues associated with operating procedures, which undergo notably less development rigor than other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  7. Applications of human error analysis to aviation and space operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-07-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) we have been working to apply methods of human error analysis to the design of complex systems. We have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. We are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. These applications lead to different requirements when compared with HRAs performed as part of a PSA. For example, because the analysis will begin early during the design stage, the methods must be usable when only partial design information is available. In addition, the ability to perform numerous "what if" analyses to identify and compare multiple design alternatives is essential. Finally, since the goals of such human error analyses focus on proactive design changes rather than the estimation of failure probabilities for PRA, there is more emphasis on qualitative evaluations of error relationships and causal factors than on quantitative estimates of error frequency. The primary vehicle we have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. The first NASA-sponsored project had the goal of evaluating human errors caused by advanced cockpit automation. Our next aviation project focused on the development of methods and tools to apply human error analysis to the design of commercial aircraft. This project was performed by a consortium comprising INEEL, NASA, and Boeing Commercial Airplane Group. The focus of the project was aircraft design and procedures that could lead to human errors during

  8. Comparison of risk sensitivity to human errors in the Oconee and LaSalle PRAs

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.; Higgins, J.

    1991-01-01

    This paper describes the comparative analyses of plant risk sensitivity to human errors in the Oconee and LaSalle Probabilistic Risk Assessments (PRAs). These analyses were performed to determine the reasons for the observed differences in the sensitivity of core melt frequency (CMF) to changes in human error probabilities (HEPs). Plant-specific design features, PRA methods, and the level of detail and assumptions in the human error modeling were evaluated to assess their influence on risk estimates and sensitivities.

  9. Design of Work Facilities to Reduce Human Error (Perancangan Fasilitas Kerja untuk Mereduksi Human Error)

    Directory of Open Access Journals (Sweden)

    Harmein Nasution

    2012-01-01

    Full Text Available Work equipment and environments that are not designed ergonomically can cause physical exhaustion in workers. As a result of that physical exhaustion, many defects can occur in the production lines due to human error, and musculoskeletal complaints can also arise. To overcome these effects, we applied methods for analyzing the workers' posture based on the SNQ (Standard Nordic Questionnaire), PLIBEL, QEC (Quick Exposure Check) and biomechanics. Moreover, we applied those methods to the ergonomic design of rolling machines and egrek grips, so that the defects on those production lines can be minimized.

  10. Normal accidents: human error and medical equipment design.

    Science.gov (United States)

    Dain, Steven

    2002-01-01

    High-risk systems, which are typical of our technologically complex era, include not just nuclear power plants but also hospitals, anesthesia systems, and the practice of medicine and perfusion. In high-risk systems, no matter how effective safety devices are, some types of accidents are inevitable because the system's complexity leads to multiple and unexpected interactions. It is important for healthcare providers to apply a risk assessment and management process to decisions involving new equipment and procedures or staffing matters in order to minimize the residual risks of latent errors, which are amenable to correction because of the large window of opportunity for their detection. This article provides an introduction to basic risk management and error theory principles and examines ways in which they can be applied to reduce and mitigate the inevitable human errors that accompany high-risk systems. The article also discusses "human factors engineering" (HFE), the process used to design equipment/human interfaces in order to mitigate design errors. The HFE process involves interaction between designers and end users to produce a series of continuous refinements that are incorporated into the final product. The article also examines common design problems encountered in the operating room that may predispose operators to commit errors resulting in harm to the patient. While recognizing that errors and accidents are unavoidable, organizations that function within a high-risk system must adopt a "safety culture" that anticipates problems and acts aggressively, through an anonymous, "blameless" reporting mechanism, to resolve them. We must continuously examine and improve the design of equipment and procedures, personnel, supplies and materials, and the environment in which we work to reduce error and minimize its effects. Healthcare providers must take a leading role in the day-to-day management of the "Perioperative System" and be a role model in

  11. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was the estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability. Other identified needs included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  12. How social is error observation? The neural mechanisms underlying the observation of human and machine errors.

    Science.gov (United States)

    Desmet, Charlotte; Deschrijver, Eliane; Brass, Marcel

    2014-04-01

    Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other's mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this aim, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not only restricted to the observation of human errors but also occurs when observing errors of a machine. In addition, our data indicate that the MPFC reflects a domain general mechanism of monitoring violations of expectancies.

  13. Human reliability, error, and human factors in power generation

    CERN Document Server

    Dhillon, B S

    2014-01-01

    Human reliability, error, and human factors in the area of power generation have been receiving increasing attention in recent years. Each year billions of dollars are spent in the area of power generation to design, construct/manufacture, operate, and maintain various types of power systems around the globe, and such systems often fail due to human error. This book compiles various recent results and data into one volume, and eliminates the need to consult many diverse sources to obtain vital information.  It enables potential readers to delve deeper into a specific area, providing the source of most of the material presented in references at the end of each chapter. Examples along with solutions are also provided at appropriate places, and there are numerous problems for testing the reader’s comprehension.  Chapters cover a broad range of topics, including general methods for performing human reliability and error analysis in power plants, specific human reliability analysis methods for nuclear power pl...

  14. Research Workshop on Expert Judgment, Human Error, and Intelligent Systems

    OpenAIRE

    Silverman, Barry G.

    1993-01-01

    This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, but judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which is interested in designs that eliminate human error and bias tendencies. As a result, almost no one at the workshop had met...

  15. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control-theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling to integrate the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight handling qualities simulation.

  16. Information systems and human error in the lab.

    Science.gov (United States)

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that the introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem solving, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires applying this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  17. On the way to assess errors of commission

    Energy Technology Data Exchange (ETDEWEB)

    Straeter, Oliver; Dang, Vinh; Kaufer, Barry; Daniels, Ardela

    2004-02-01

    In January 2002, the OECD-NEA (Organisation for Economic Co-operation and Development, Nuclear Energy Agency) Working Group on Risk (WGRISK) held a workshop on human reliability data needs and potential solutions. The workshop was initiated to exchange views on how to proceed in the area of assessing errors of commission, i.e. those interventions by operators that are not required from the system point of view and that aggravate the scenario evolution. A common view in research on errors of commission is that the respective HRA methods require a more profound database than the classical HRA methods. This paper summarizes the discussion of the workshop. It discusses the various data sources and their use in HRA, the problems that make it difficult to obtain appropriate data for HRA, and possible approaches to overcome this bottleneck in HRA.

  18. Compensating for Type-I Errors in Video Quality Assessment

    DEFF Research Database (Denmark)

    Brunnström, Kjell; Tavakoli, Samira; Søgaard, Jacob

    2015-01-01

    This paper analyzes the impact on compensating for Type-I errors in video quality assessment. A Type-I error is to incorrectly conclude that there is an effect. The risk increases with the number of comparisons that are performed in statistical tests. Type-I errors are an issue often neglected...
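
    As a concrete illustration of why the Type-I risk grows with the number of comparisons, and of one standard way of compensating, here is a small sketch using the Holm-Bonferroni adjustment; it is a generic example, not the specific procedure evaluated in the paper, and the p-values are made up.

        # Family-wise Type-I risk grows with the number of comparisons:
        # P(at least one false positive) = 1 - (1 - alpha)^m for m independent tests.
        alpha, m = 0.05, 20
        print(f"Uncorrected family-wise risk over {m} tests: {1 - (1 - alpha) ** m:.2f}")

        # Holm-Bonferroni compensation over a set of hypothetical p-values.
        p_values = [0.001, 0.008, 0.020, 0.035, 0.210]
        for rank, p in enumerate(sorted(p_values), start=1):
            threshold = alpha / (len(p_values) - rank + 1)
            if p > threshold:
                print(f"stop: p={p:.3f} > {threshold:.4f}; remaining hypotheses not rejected")
                break
            print(f"reject: p={p:.3f} <= {threshold:.4f}")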

  19. Introduction to precision machine design and error assessment

    CERN Document Server

    Mekid, Samir

    2008-01-01

    While ultra-precision machines are now achieving sub-nanometer accuracy, unique challenges continue to arise due to their tight specifications. Written to meet the growing needs of mechanical engineers and other professionals to understand these specialized design process issues, Introduction to Precision Machine Design and Error Assessment places a particular focus on the errors associated with precision design, machine diagnostics, error modeling, and error compensation. Error Assessment and Control: the book begins with a brief overview of precision engineering and applications before introdu

  20. The Detection of Human Spreadsheet Errors by Humans versus Inspection (Auditing) Software

    CERN Document Server

    Aurigemma, Salvatore

    2010-01-01

    Previous spreadsheet inspection experiments have had human subjects look for seeded errors in spreadsheets. In this study, subjects attempted to find errors in human-developed spreadsheets to avoid the potential artifacts created by error seeding. Human subject success rates were compared to the success rates for error flagging by spreadsheet static analysis tools (SSATs) applied to the same spreadsheets. The human error detection results were comparable to those of studies using error seeding. However, Excel Error Check and Spreadsheet Professional were almost useless for correctly flagging natural (human) errors in this study.

  1. Promoting safety improvements via potential human error audits

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, G.C. (International Mining Consultants (United Kingdom). Ergonomics and Safety Management)

    1994-08-01

    It has become increasingly recognised that human error plays a major role in mining accident causation. Moreover, it is also recognised that this aspect of accident causation has had relatively little systematic attention in the past. Recent studies within British Coal have succeeded in developing a Potential Human Error Audit as a means of targeting accident prevention initiatives. 7 refs., 2 tabs.

  2. Performance Assessment of Hydrological Models Considering Acceptable Forecast Error Threshold

    Directory of Open Access Journals (Sweden)

    Qianjin Dong

    2015-11-01

    Full Text Available It is essential to consider an acceptable threshold in the assessment of a hydrological model, both because research on this topic is scarce in the hydrology community and because errors do not necessarily cause risk. Two forecast errors, the rainfall forecast error and the peak flood forecast error, are studied based on reliability theory. The first-order second-moment (FOSM) and bound methods are used to identify the reliability. Through the case study of the Dahuofang (DHF) Reservoir, it is shown that the correlation between these two errors has a great influence on the reliability index of the hydrological model. In particular, the reliability index of the DHF hydrological model decreases with increasing correlation. Based on reliability theory, the proposed performance evaluation framework, which incorporates the acceptable forecast error threshold and the correlation among multiple errors, can be used to evaluate the performance of a hydrological model and to quantify the uncertainties of its output.
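
    A minimal FOSM-style sketch of the effect reported above: when the two forecast errors are combined linearly, the variance of the performance function grows with their correlation, so the reliability index falls. The performance function, threshold, means, and standard deviations below are assumed for illustration and are not DHF Reservoir values.

        import math

        # Assumed linear performance function: g = acceptable threshold - (error1 + error2).
        threshold = 10.0              # acceptable forecast error threshold (assumed units)
        mu1, sd1 = 2.0, 2.5           # rainfall forecast error: mean, std (assumed)
        mu2, sd2 = 3.0, 3.0           # peak flood forecast error: mean, std (assumed)

        for rho in (0.0, 0.5, 0.9):   # correlation between the two errors
            mean_g = threshold - (mu1 + mu2)
            var_g = sd1 ** 2 + sd2 ** 2 + 2.0 * rho * sd1 * sd2
            beta = mean_g / math.sqrt(var_g)   # FOSM reliability index
            print(f"rho={rho:.1f}  beta={beta:.2f}")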

  3. Impact Propagation of Human Errors on Software Requirements Volatility

    Directory of Open Access Journals (Sweden)

    Zahra Askarinejadamiri

    2017-02-01

    Full Text Available Requirements volatility (RV) is one of the key sources of risk in software development and maintenance projects because of the frequent changes made to the software. Human faults and errors are major factors contributing to requirements change in software development projects. As such, predicting requirements volatility is a challenge for risk management in the software area. Previous studies only focused on certain aspects of human error in this area. This study specifically identifies and analyses the impact of human errors on requirements gathering and requirements volatility. It proposes a model based on responses to a survey questionnaire administered to 215 participants with experience in software requirements gathering. Exploratory factor analysis (EFA) and structural equation modelling (SEM) were used to analyse the correlation between human errors and requirements volatility. The results of the analysis confirm the correlation between human errors and RV. The results show that human actions have a higher impact on RV than human perceptions. The study provides insights for software management to understand the socio-technical aspects of requirements volatility in order to control risk management. Human actions and perceptions are root causes contributing to the human errors that lead to RV.

  4. Assessment of relative error sources in IR DIAL measurement accuracy

    Science.gov (United States)

    Menyuk, N.; Killinger, D. K.

    1983-01-01

    An assessment is made of the role the various error sources play in limiting the accuracy of infrared differential absorption lidar measurements used for the remote sensing of atmospheric species. An overview is presented of the relative contribution of each error source including the inadequate knowledge of the absorption coefficient, differential spectral reflectance, and background interference as well as measurement errors arising from signal fluctuations.

  5. Structured methods for identifying and correcting potential human errors in space operations.

    Science.gov (United States)

    Nelson, W R; Haney, L N; Ostrom, L T; Richards, R E

    1998-01-01

    Human performance plays a significant role in the development and operation of any complex system, and human errors are significant contributors to degraded performance, incidents, and accidents for technologies as diverse as medical systems, commercial aircraft, offshore oil platforms, nuclear power plants, and space systems. To date, serious accidents attributed to human error have fortunately been rare in space operations. However, as flight rates go up and the duration of space missions increases, the accident rate could increase unless proactive action is taken to identify and correct potential human errors in space operations. The Idaho National Engineering and Environmental Laboratory (INEEL) has developed and applied structured methods of human error analysis to identify potential human errors, assess their effects on system performance, and develop strategies to prevent the errors or mitigate their consequences. These methods are being applied in NASA-sponsored programs to the domain of commercial aviation, focusing on airplane maintenance and air traffic management. The application of human error analysis to space operations could help minimize the risks associated with human error in the design and operation of future space systems.

  6. Gravity field determination and error assessment techniques

    Science.gov (United States)

    Yuan, D. N.; Shum, C. K.; Tapley, B. D.

    1989-01-01

    Linear estimation theory, along with a new technique to compute relative data weights, was applied to the determination of the Earth's geopotential field and other geophysical model parameters using a combination of satellite ground-based tracking data, satellite altimetry data, and the surface gravimetry data. The relative data weights for the inhomogeneous data sets are estimated simultaneously with the gravity field and other geophysical and orbit parameters in a least squares approach to produce the University of Texas gravity field models. New techniques to perform calibration of the formal covariance matrix for the geopotential solution were developed to obtain a reliable gravity field error estimate. Different techniques, which include orbit residual analysis, surface gravity anomaly residual analysis, subset gravity solution comparisons and consider covariance analysis, were applied to investigate the reliability of the calibration.

  7. The Public Understanding of Error in Educational Assessment

    Science.gov (United States)

    Gardner, John

    2013-01-01

    Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes; events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…

  9. Image Signature Based Mean Square Error for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    CUI Ziguan; GAN Zongliang; TANG Guijin; LIU Feng; ZHU Xiuchang

    2015-01-01

    Motivated by the importance of the human visual system (HVS) in image processing, we propose a novel Image signature based mean square error (ISMSE) metric for full-reference image quality assessment (IQA). An efficient image-signature-based describer is used to predict the visual saliency map of the reference image. The saliency map is incorporated into the luminance difference between the reference and distorted images to obtain the image quality score. The effect of luminance differences on visual quality in regions with larger saliency values, which usually correspond to foreground objects, is highlighted. Experimental results on the LIVE database release 2 show that by integrating the effects of image-signature-based saliency on luminance difference, the proposed ISMSE metric outperforms several state-of-the-art HVS-based IQA metrics but with lower complexity.
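
    A rough sketch of the kind of computation described: an image-signature saliency map weights the squared luminance differences before averaging. The saliency step follows the published image-signature idea (reconstruction from the sign of the DCT); the smoothing width, normalization, and pooling below are guesses rather than the paper's exact choices.

        import numpy as np
        from scipy.fft import dctn, idctn
        from scipy.ndimage import gaussian_filter

        def image_signature_saliency(img):
            # Image signature: keep only the sign of the DCT, reconstruct, square, smooth.
            signature = np.sign(dctn(img, norm='ortho'))
            recon = idctn(signature, norm='ortho')
            sal = gaussian_filter(recon ** 2, sigma=3)
            return sal / (sal.max() + 1e-12)      # normalize to [0, 1]

        def ismse(ref, dist):
            # Saliency-weighted mean squared luminance difference (illustrative pooling).
            w = image_signature_saliency(ref)
            return np.sum(w * (ref - dist) ** 2) / np.sum(w)

        rng = np.random.default_rng(0)
        ref = rng.random((64, 64))                # stand-in grayscale reference image
        dist = ref + 0.05 * rng.standard_normal((64, 64))
        print(f"ISMSE-style score: {ismse(ref, dist):.5f}")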

  10. Selecting Human Error Types for Cognitive Modelling and Simulation

    NARCIS (Netherlands)

    Mioch, T.; Osterloh, J.P.; Javaux, D.

    2010-01-01

    This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the re

  11. An error assessment of the kriging based approximation model using a mean square error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)

    2006-08-15

    A kriging model is a sort of approximation model used as a deterministic surrogate for a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximated model. It is generally known that the Mean Square Error (MSE) obtained from a kriging model cannot provide statistically exact error bounds, in contrast to a response surface method, so cross validation is mainly used. But cross validation also has many uncertainties. Moreover, cross validation cannot be used when a maximum error is required in a given region. To solve this problem, we first propose a modified mean square error which can consider relative errors. Using the modified mean square error, we develop a strategy of adding a new sample at the location where the MSE is maximal when the MSE is used to assess the kriging model. Finally, we offer guidelines for the use of the MSE obtained from the kriging model. Four test problems show that the proposed strategy is a proper method for assessing the accuracy of the kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for exact function approximation.
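
    A small sketch of the sampling strategy the abstract describes, using a Gaussian-process (kriging-type) surrogate: the predictive MSE is scaled by the predicted response to give a relative error measure, and the next sample is added where that measure is largest. scikit-learn's GaussianProcessRegressor stands in for the authors' kriging model, the test function is a placeholder, and the relative scaling is one plausible reading of their "modified" MSE.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        f = lambda x: np.sin(3.0 * x) + 0.5 * x            # placeholder "expensive" function

        X = np.linspace(0.0, 3.0, 5).reshape(-1, 1)         # initial sample locations
        y = f(X).ravel()
        candidates = np.linspace(0.0, 3.0, 200).reshape(-1, 1)

        for _ in range(5):                                   # add 5 samples adaptively
            gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
            gp.fit(X, y)
            mean, std = gp.predict(candidates, return_std=True)
            rel_mse = std ** 2 / (np.abs(mean) + 1e-6)       # MSE scaled to a relative measure
            x_new = candidates[np.argmax(rel_mse)].reshape(1, -1)
            X = np.vstack([X, x_new])
            y = np.append(y, f(x_new).ravel())

        print(f"{len(X)} samples after refinement; last point added at x={X[-1, 0]:.3f}")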

  12. Application Errors of Personnel Quality Assessment in Enterprise Human Resources Management

    Institute of Scientific and Technical Information of China (English)

    黄永鹏

    2015-01-01

    This paper analyzes the application errors of personnel quality assessment in human resources management in China's enterprises, for example: assessment tools that do not fit the circumstances of Chinese enterprises, a lack of assessment professionals, non-standardized personnel quality assessment systems, and unreasonably designed assessment materials. Based on these facts, the paper puts forward countermeasures for these problems in order to optimize personnel quality assessment in the new era.

  13. Teachers’ explanations of learners’ errors in standardised mathematics assessments

    Directory of Open Access Journals (Sweden)

    Yael Shalem

    2014-05-01

    Full Text Available With the increased use of standardised mathematics assessments at the classroom level, teachers are encouraged, and sometimes required, to use data from these assessments to inform their practice. As a consequence, teacher educators and researchers are starting to focus on the development of analytical tools that will help them determine how teachers interpret learners’ work, in particular learners’ errors in the context of standardised and other assessments. To detect variation and associations between and within the different aspects of teacher knowledge related to mathematical error analysis, we developed an instrument with six criteria based on aspects of teachers’ knowledge related to explaining and diagnosing learners’ errors. In this study we provide evidence of the usability of the criteria by coding 572 explanations given by groups of mathematics educators (teachers and district officials in a professional development context. The findings consist of observable trends and associations between the different criteria that describe the nature of teachers’ explanations of learners’ errors.

  14. Positioning errors and quality assessment in panoramic radiography

    Energy Technology Data Exchange (ETDEWEB)

    Dhillon, Manu; Lakhanpal, Manisha; Krishnamoorthy, Bhuvana [Dept. of Oral Medicine and Radiology, ITS Centre for Dental Studies and Research, Ghaziabad (India); Raju, Srinivasa M [Dept. of Oral Medicine and Radiology, Teerthanker Mahavir Dental College, Moradabad (India); Verma, Sankalp; Mohan, Raviprakash S [Dept. of Oral Medicine and Radiology, Kothiwal Dental College and Research Centre, Moradabad (India); Tomar, Divya [Dept. of Pedodontics and Preventive Dentistry, IDST Dental College and Research Centre, Modinagar (India)

    2012-09-15

    This study was performed to determine the relative frequency of positioning errors, to identify those errors directly responsible for diagnostically inadequate images, and to assess the quality of panoramic radiographs in a sample of records collected from a dental college. This study consisted of 1,782 panoramic radiographs obtained from the Department of Oral and Maxillofacial Radiology. The positioning errors of the radiographs were assessed and categorized into nine groups: the chin tipped high, chin tipped low, a slumped position, the patient positioned forward, the patient positioned backward, failure to position the tongue against the palate, patient movement during exposure, the head tilted, and the head turned to one side. The quality of the radiographs was further judged as being 'excellent', 'diagnostically acceptable', or 'unacceptable'. Out of 1,782 radiographs, 196 (11%) were error-free and 1,586 (89%) contained positioning errors. The most common error observed was the failure to position the tongue against the palate (55.7%) and the least commonly experienced error was patient movement during exposure (1.6%). Only 11% of the radiographs were excellent, 64.1% were diagnostically acceptable, and 24.9% were unacceptable. The positioning errors found on panoramic radiographs were relatively common in our study. The quality of panoramic radiographs could be improved by careful attention to patient positioning.

  15. #2 - An Empirical Assessment of Exposure Measurement Error ...

    Science.gov (United States)

    Background: • Differing degrees of exposure error across pollutants • Previous focus on quantifying and accounting for exposure error in single-pollutant models • Examine exposure errors for multiple pollutants and provide insights on the potential for bias and attenuation of effect estimates in single- and bi-pollutant epidemiological models. The National Exposure Research Laboratory (NERL) Human Exposure and Atmospheric Sciences Division (HEASD) conducts research in support of the EPA mission to protect human health and the environment. The HEASD research program supports Goal 1 (Clean Air) and Goal 4 (Healthy People) of the EPA strategic plan. More specifically, our division conducts research to characterize the movement of pollutants from the source to contact with humans. Our multidisciplinary research program produces Methods, Measurements, and Models to identify relationships between and characterize processes that link source emissions, environmental concentrations, human exposures, and target-tissue dose. The impact of these tools is improved regulatory programs and policies for EPA.

  16. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    Science.gov (United States)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communications, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  17. Quality assessment of speckle patterns for DIC by consideration of both systematic errors and random errors

    Science.gov (United States)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren

    2016-11-01

    The performance of digital image correlation (DIC) is significantly influenced by the quality of speckle patterns. Thus, it is crucial to present a valid and practical method to assess the quality of speckle patterns. However, existing assessment methods either lack a solid theoretical foundation or fail to consider the errors due to interpolation. In this work, it is proposed to assess the quality of speckle patterns by estimating the root mean square error (RMSE) of DIC, which is the square root of the sum of the squares of the systematic error and the random error. Two performance evaluation parameters, the maximum and the quadratic mean of the RMSE, are proposed to characterize the total error. An efficient algorithm is developed to estimate these parameters, and the correctness of this algorithm is verified by numerical experiments on both one-dimensional signals and actual speckle images. The influences of the correlation criterion, shape function order, and sub-pixel registration algorithm are briefly discussed. Compared to existing methods, the method presented in this paper is more valid because it considers both measurement accuracy and precision.
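    A minimal sketch of the two evaluation parameters named above, assuming that per-subset estimates of the systematic error and of the standard deviation of the random error have already been obtained (estimating those two components, which is the substance of the paper, is not shown):

```python
# Total error per subset: RMSE = sqrt(systematic^2 + random^2); the pattern is
# then characterized by the maximum and the quadratic mean of these RMSE values.
import numpy as np

def speckle_quality(systematic_error, random_std):
    rmse = np.sqrt(np.asarray(systematic_error) ** 2 + np.asarray(random_std) ** 2)
    return {
        "max_rmse": float(rmse.max()),                              # worst-case total error
        "quadratic_mean_rmse": float(np.sqrt(np.mean(rmse ** 2))),  # overall total error
    }
```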

  18. The commission errors search and assessment (CESA) method

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V. N

    2007-05-15

    Errors of Commission (EOCs) refer to the performance of inappropriate actions that aggravate a situation. In Probabilistic Safety Assessment (PSA) terms, they are human failure events that result from the performance of an action. This report presents the Commission Errors Search and Assessment (CESA) method and describes the method in the form of user guidance. The purpose of the method is to identify risk-significant situations with a potential for EOCs in a predictive analysis. The main idea underlying the CESA method is to catalog the key actions that are required in the procedural response to plant events and to identify specific scenarios in which these candidate actions could erroneously appear to be required. The catalog of required actions provides a basis for a systematic search of context-action combinations. To focus the search towards risk-significant scenarios, the actions that are examined in the CESA search are prioritized according to the importance of the systems and functions that are affected by these actions. The existing PSA provides this importance information; the Risk Achievement Worth or Risk Increase Factor values indicate the systems/functions for which an EOC contribution would be more significant. In addition, the contexts, i.e. PSA scenarios, for which the EOC opportunities are reviewed are also prioritized according to their importance (top sequences or cut sets). The search through these context-action combinations results in a set of EOC situations to be examined in detail. CESA has been applied in a plant-specific pilot study, which showed the method to be feasible and effective in identifying plausible EOC opportunities. This experience, as well as the experience with other EOC analyses, showed that the quantification of EOCs remains an issue. The quantification difficulties and the outlook for their resolution conclude the report. (author)
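    A toy sketch of the prioritised search idea (not the CESA tool itself): catalogued candidate actions are ranked by the Risk Achievement Worth of the system or function they affect, PSA scenarios by their importance, and the highest-ranked context-action combinations are reviewed first. All names and numbers below are hypothetical.

```python
# Prioritise context-action pairs for EOC review (illustrative only).
from itertools import product

actions = {          # hypothetical catalogued actions -> RAW of the affected function
    "isolate_letdown": 4.2,
    "trip_rcp": 2.1,
    "throttle_hpsi": 6.8,
}
scenarios = {        # hypothetical PSA scenarios -> contribution to core damage frequency
    "small_loca_seq12": 3.0e-6,
    "sgtr_seq05": 1.2e-6,
}

pairs = sorted(product(scenarios, actions),
               key=lambda sa: scenarios[sa[0]] * actions[sa[1]],
               reverse=True)
for scenario, action in pairs:
    print(f"review EOC opportunity: '{action}' erroneously appearing required in '{scenario}'")
```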

  19. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one "preferred" GIA model has been used, without any consideration of the possible errors involved. Lacking a rigorous assessment of systematic errors in GIA modeling, the reliability of the results is uncertain. GIA sensitivity and uncertainties associated with the viscosity models have been explored…, such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland…

  20. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  1. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, M.; Weegels, M.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication, dial

  2. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Seong, Poong Hyun; Kang, Hyun Gook [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Na, Man Gyun [Chosun Univ., Gwangju (Korea, Republic of); Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of); Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Jung, Yoensub [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)

    2013-04-15

    This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We have to recognize the possibility of human error, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve the situation through advanced information and communication technologies on the basis of lessons learned from our experience. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Based on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we face these days are described. We expect the ideas of the advanced MMIS proposed in this paper to lead the future direction of related research and ultimately supplement the safety of NPPs.

  3. ADVANCED MMIS TOWARD SUBSTANTIAL REDUCTION IN HUMAN ERRORS IN NPPS

    Directory of Open Access Journals (Sweden)

    POONG HYUN SEONG

    2013-04-01

    Full Text Available This paper aims to give an overview of methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also severely lower public acceptance of nuclear power. We have to recognize the possibility of human error, since humans are not perfect, particularly under stressful conditions. However, we have the opportunity to improve the situation through advanced information and communication technologies on the basis of lessons learned from our experience. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Based on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we face these days are described. We expect the ideas of the advanced MMIS proposed in this paper to lead the future direction of related research and ultimately supplement the safety of NPPs.

  4. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Inseok; Jung, Wondea [KAERI, Daejeon (Korea, Republic of); Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-05-15

    Human reliability analysis (HRA) methods such as the Technique for Human Error Rate Prediction (THERP), the Korean Human Reliability Analysis (K-HRA), the Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), the Cognitive Reliability and Error Analysis Method (CREAM), and the Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H) have been used in relation to NPP maintenance and operation. Most of these methods were developed considering the conventional type of Main Control Room (MCR). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been changed considerably by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features of advanced MCRs, soft controls are an important feature because operation actions in NPP advanced MCRs are performed through soft controls. Consequently, those conventional methods may not sufficiently consider the features of human errors in soft control execution. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and reviewing widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors which an HRA method for advanced MCRs should encompass are derived based on the literature review and the soft control task analysis. Based on the derived factors, an execution HRA framework for advanced MCRs is developed, mainly focusing on the features of soft controls. Moreover, since most current HRA databases deal with operation in the conventional type of MCR and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation.

  5. Human Errors - A Taxonomy for Describing Human Malfunction in Industrial Installations

    DEFF Research Database (Denmark)

    Rasmussen, J.

    1982-01-01

    This paper describes the definition and the characteristics of human errors. Different types of human behavior are classified, and their relation to different error mechanisms is analyzed. The effect of conditioning factors related to affective, motivating aspects of the work situation as well as physiological factors is also taken into consideration. The taxonomy for event analysis, including human malfunction, is presented. Possibilities for the prediction of human error are discussed. The need for careful studies in actual work situations is expressed. Such studies could provide a better understanding of the complexity of human error situations as well as the data needed to characterize these situations.

  6. A Human Reliability Analysis of Post- Accident Human Errors in the Low Power and Shutdown PSA of KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Daeil; Kim, J. H.; Jang, S. C

    2007-03-15

    Korea Atomic Energy Research Institute, using the ANS low power and shutdown (LPSD) probabilistic risk assessment (PRA) Standard, evaluated the LPSD PSA model of the KSNP, Yonggwang Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of post-accident human errors in the LPSD PSA model for the KSNP showed that 10 of the 19 supporting requirements for such errors in the ANS PRA Standard needed improvement. Thus, we newly carried out an HRA for post-accident human errors in the LPSD PSA model for the KSNP. Compared with the previous analysis, the new HRA of post-accident human errors includes the following improvements: interviews with operators and site visits for the interpretation of procedures, the modeling of operator actions, and the quantification of human errors; application of a limiting value to the combined post-accident human errors; and documentation of all inputs and bases for the detailed quantifications and the dependency analysis using quantification sheets. The assessment of the new HRA results for post-accident human errors using the ANS LPSD PRA Standard shows that more than 80% of the supporting requirements for post-accident human errors were graded as Category II. The number of re-estimated human errors using the LPSD Korea Standard HRA method is 385. Among them, the number of individual post-accident human errors is 253, and the number of dependent post-accident human errors is 135. The quantification results of the LPSD PSA model for the KSNP with the new HEPs show that the core damage frequency (CDF) is increased by 5.1% compared with the previous baseline CDF. It is expected that these results will be greatly helpful in improving the PSA quality of domestic nuclear power plants because they have sufficient PSA quality to meet Category II of the supporting requirements for the post

  7. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

    Human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of a Probabilistic Safety Assessment (PSA). Several methods for analyzing human error, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are used, and new methods for human reliability analysis (HRA) are under development at this time. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed with the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probabilities, and it can be applied to any kind of operator action, including the severe accident management strategy.
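    A minimal sketch of the time-based estimate behind this kind of dynamic HRA: the human error probability is taken as the probability that the required time exceeds the available time, with both sampled by Latin Hypercube Sampling. The lognormal distributions and their parameters below are placeholders, not values from the paper (which derived the distributions with the MAAP code).

```python
# Estimate HEP = P(required time > available time) by LHS Monte Carlo sampling.
import numpy as np
from scipy.stats import lognorm, qmc

n = 100_000
u = qmc.LatinHypercube(d=2, seed=0).random(n)          # uniform LHS in [0, 1)^2
t_required  = lognorm(s=0.4, scale=25.0).ppf(u[:, 0])  # minutes to carry out venting (assumed)
t_available = lognorm(s=0.3, scale=40.0).ppf(u[:, 1])  # minutes before venting is too late (assumed)

hep = np.mean(t_required > t_available)
print(f"estimated HEP for the FCVS action: {hep:.3e}")
```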

  8. Error Assessment in Modeling with Fractal Brownian Motions

    CERN Document Server

    Qiao, Bingqiang

    2013-01-01

    To model a given time series $F(t)$ with fractal Brownian motions (fBms), it is necessary to have appropriate error assessment for related quantities. Usually the fractal dimension $D$ is derived from the Hurst exponent $H$ via the relation $D=2-H$, and the Hurst exponent can be evaluated by analyzing the dependence of the rescaled range $\langle|F(t+\tau)-F(t)|\rangle$ on the time span $\tau$. For fBms, the error of the rescaled range not only depends on data sampling but also varies with $H$ due to the presence of long term memory. This error for a given time series then cannot be assessed without knowing the fractal dimension. We carry out extensive numerical simulations to explore the error of the rescaled range of fBms…
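    A minimal sketch of the estimator referred to above: for an fBm-like series, the mean absolute increment scales as tau**H, so H follows from a log-log fit and the fractal dimension from D = 2 - H. Quantifying the sampling error of this estimate, which is the subject of the paper, is not attempted here; the helper name and the choice of time spans are illustrative assumptions.

```python
# Estimate the Hurst exponent H from the scaling <|F(t+tau)-F(t)|> ~ tau**H,
# then the fractal dimension D = 2 - H.
import numpy as np

def hurst_and_dimension(F, taus=None):
    F = np.asarray(F, dtype=float)
    if taus is None:
        taus = np.unique(np.logspace(0, np.log10(len(F) // 4), 20).astype(int))
    r = [np.mean(np.abs(F[tau:] - F[:-tau])) for tau in taus]
    H = np.polyfit(np.log(taus), np.log(r), 1)[0]
    return H, 2.0 - H

# Example on ordinary Brownian motion (H should be close to 0.5, D close to 1.5)
rng = np.random.default_rng(0)
H, D = hurst_and_dimension(np.cumsum(rng.standard_normal(10_000)))
```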

  9. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling; F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    textabstractBackground: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence th

  10. Human error in strabismus surgery: quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  11. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    …GIA modeling. GIA errors are also important in the far field of previously glaciated areas and in the time evolution of global indicators. In this regard we also account for other possible error sources which can impact global indicators like the sea level history related to GIA. The thermal… During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one… in the literature. However, at least two major sources of errors remain. The first is associated with the ice models, the spatial distribution of ice and the history of melting (this is especially the case for Antarctica), the second with the numerical implementation of model features relevant to sea level modeling…

  12. The Relationship between Human Operators' Psycho-physiological Condition and Human Errors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Arryum; Jang, Inseok; Kang, Hyungook; Seong, Poonghyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2013-05-15

    The safe operation of nuclear power plants (NPPs) is substantially dependent on the performance of the human operators who operate the systems. In this environment, human errors caused by inappropriate operator performance are considered critical since they may lead to serious problems in safety-critical plants. In order to provide meaningful insights to prevent human errors and enhance human performance, operators' physiological conditions such as stress and workload have been investigated. Physiological measurements are considered reliable tools for assessing stress and workload. T. Q. Tran et al. and J. B. Brooking et al. pointed out that operators' workload can be assessed using eye tracking, galvanic skin response, electroencephalograms (EEGs), heart rate, respiration and other measurements. The purpose of this study is to investigate the effect of the human operators' tension level and knowledge level on the number of human errors. For this study, experiments were conducted in a mock-up of an NPP main control room (MCR). It utilized the compact nuclear simulator (CNS), which is modeled on the three-loop pressurized water reactor, 993 MWe, Kori units 3 and 4 in Korea, and the subjects were asked to follow the tasks described in the emergency operating procedures (EOPs). During the simulation, three kinds of physiological measurements were utilized: electrocardiogram (ECG), EEG and nose temperature. Also, subjects were divided into three groups based on their knowledge of plant operation. The results show that subjects who are tense make fewer errors. In addition, subjects with a higher knowledge level tend to be tense and make fewer errors. For the ECG data, subjects who make fewer human errors tend to fall in the higher-tension region of high SNS activity and low PSNS activity. The results for the EEG data are also similar to the ECG results. The beta power ratio of subjects who make fewer errors was higher. Since beta

  13. Development of the Barriers to Error Disclosure Assessment Tool.

    Science.gov (United States)

    Welsh, Darlene; Zephyr, Dominique; Pfeifle, Andrea L; Carr, Douglas E; Fink, Joseph L; Jones, Mandy

    2017-06-30

    An interprofessional group of health colleges' faculty created and piloted the Barriers to Error Disclosure Assessment tool as an instrument to measure barriers to medical error disclosure among health care providers. A review of the literature guided the creation of items describing influences on the decision to disclose a medical error. Local and national experts in error disclosure used a modified Delphi process to gain consensus on the items included in the pilot. After receiving university institutional review board approval, researchers distributed the tool to a convenience sample of physicians (n = 19), pharmacists (n = 20), and nurses (n = 20) from an academic medical center. Means and SDs were used to describe the sample. Intraclass correlation coefficients were used to examine test-retest correspondence between the continuous items on the scale. Factor analysis with varimax rotation was used to determine factor loadings and examine internal consistency reliability. Cronbach α coefficients were calculated during initial and subsequent administrations to assess test-retest reliability. After omitting 2 items with intraclass correlation coefficient of less than 0.40, intraclass correlation coefficients ranged from 0.43 to 0.70, indicating fair to good test-retest correspondence between the continuous items on the final draft. Factor analysis revealed the following factors during the initial administration: confidence and knowledge barriers, institutional barriers, psychological barriers, and financial concern barriers to medical error disclosure. α Coefficients of 0.85 to 0.93 at time 1 and 0.82 to 0.95 at time 2 supported test-retest reliability. The final version of the 31-item tool can be used to measure perceptions about abilities for disclosing, impressions regarding institutional policies and climate, and specific barriers that inhibit disclosure by health care providers. Preliminary evidence supports the tool's validity and reliability for measuring
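    A minimal sketch of one of the statistics reported above, Cronbach's alpha for a respondents-by-items score matrix; the matrix dimensions mirror the study (59 providers, 31 items), but the scores below are made up purely to show the calculation.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()      # sum of per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)       # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

rng = np.random.default_rng(1)
demo = rng.integers(1, 6, size=(59, 31))   # hypothetical Likert-type responses
print(f"alpha = {cronbach_alpha(demo):.2f}")
```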

  14. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  15. Assessment of human exposures

    Energy Technology Data Exchange (ETDEWEB)

    Lebret, E. [RIVM-National Inst. of Public Health and Environmental Protection (Netherlands)

    1995-12-31

    This article describes some of the features of the assessment of human exposure to environmental pollutants in epidemiological studies. Since exposure assessment in air pollution epidemiology studies typically involves professionals from various backgrounds, interpretation of concepts like 'exposure' may vary. A brief description is therefore given by way of introduction.

  16. A Human Reliability Analysis of Pre-Accident Human Errors in the Low Power and Shutdown PSA of the KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Daeil; Jang, Seungchul

    2007-03-15

    Korea Atomic Energy Research Institute, using the ANS Low Power/Shutdown (LPSD) PRA Standard, evaluated the LPSD PSA model of the KSNP, Younggwang (YGN) Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of pre-accident human errors in the LPSD PSA model of the KSNP showed that 13 of the 15 supporting requirements for such errors in the ANS PRA Standard needed improvement. Thus, we newly carried out an HRA for pre-accident human errors in the LPSD PSA model for the KSNP to improve its quality. We considered potential pre-accident human errors for all manual valves and control/instrumentation equipment of the systems modeled in the KSNP LPSD PSA model, except for the reactor protection system and the engineered safety features actuation system. We reviewed 160 manual valves and 56 items of control/instrumentation equipment. The number of newly identified pre-accident human errors is 101. Among them, 56 are related to testing/maintenance tasks, 45 to calibration tasks, and 10 only to shutdown operation. It was shown that the contribution of the pre-accident human errors related only to shutdown operation to the core damage frequency of the LPSD PSA model for the KSNP was negligible. The self-assessment of the new HRA results for pre-accident human errors using the ANS LPSD PRA Standard shows that more than 80% of the supporting requirements for post-accident human errors were graded as Category II or III. It is expected that the HRA results for the pre-accident human errors presented in this study will be greatly helpful in improving the PSA quality of domestic nuclear power plants because they have sufficient PSA quality to meet Category II of the supporting requirements for the post-accident human errors in the ANS LPSD PRA Standard.

  18. Target registration and target positioning errors in computer-assisted neurosurgery: proposal for a standardized reporting of error assessment.

    Science.gov (United States)

    Widmann, Gerlig; Stoffner, Rudolf; Sieb, Michael; Bale, Reto

    2009-12-01

    Assessment of errors is essential in development, testing and clinical application of computer-assisted neurosurgery. Our aim was to provide a comprehensive overview of the different methods to assess target registration error (TRE) and target positioning error (TPE) and to develop a proposal for a standardized reporting of error assessment. A PubMed search for phantom, cadaver, or clinical studies on TRE and TPE was performed. Reporting standards have been defined according to (a) study design and evaluation methods and (b) specifications of the navigation technology. The proposed standardized reporting includes (a) study design (controlled, non-controlled), study type (non-anthropomorphic phantom, anthropomorphic phantom, cadaver, patient), target design, error type and subtypes, space of TPE measurement, statistics, and (b) image modality, scan parameters, tracking technology, registration procedure and targeting technique. Adoption of the proposed standardized reporting may help in the understanding and comparability of different accuracy reports. Copyright (c) 2009 John Wiley & Sons, Ltd.

  19. Human factors and error prevention in emergency medicine.

    Science.gov (United States)

    Bleetman, Anthony; Sanusi, Seliat; Dale, Trevor; Brace, Samantha

    2012-05-01

    Emergency departments are one of the highest risk areas in health care. Emergency physicians have to assemble and manage unrehearsed multidisciplinary teams with little notice and manage critically ill patients. With greater emphasis on management and leadership skills, there is an increasing awareness of the importance of human factors in making changes to improve patient safety. Non-clinical skills are required to achieve this in an information-poor environment and to minimise the risk of errors. Training in these non-clinical skills is a mandatory component in other high-risk industries, such as aviation, and needs to be part of an emergency physician's skill set. Therefore, there remains an educational gap that we need to fill before an emergency physician is equipped to function as a team leader and manager. This review will examine the lessons from aviation and how these are applicable to emergency medicine. Solutions for averting errors are discussed, as is the need for formal human factors training in emergency medicine.

  20. Avoiding and identifying errors in health technology assessment models: qualitative study and methodological review.

    Science.gov (United States)

    Chilcott, J; Tappenden, P; Rawdin, A; Johnson, M; Kaltenthaler, E; Paisley, S; Papaioannou, D; Shippam, A

    2010-05-01

    Health policy decisions must be relevant, evidence-based and transparent. Decision-analytic modelling supports this process but its role is reliant on its credibility. Errors in mathematical decision models or simulation exercises are unavoidable but little attention has been paid to processes in model development. Numerous error avoidance/identification strategies could be adopted but it is difficult to evaluate the merits of strategies for improving the credibility of models without first developing an understanding of error types and causes. The study aims to describe the current comprehension of errors in the HTA modelling community and generate a taxonomy of model errors. Four primary objectives are to: (1) describe the current understanding of errors in HTA modelling; (2) understand current processes applied by the technology assessment community for avoiding errors in development, debugging and critically appraising models for errors; (3) use HTA modellers' perceptions of model errors with the wider non-HTA literature to develop a taxonomy of model errors; and (4) explore potential methods and procedures to reduce the occurrence of errors in models. It also describes the model development process as perceived by practitioners working within the HTA community. A methodological review was undertaken using an iterative search methodology. Exploratory searches informed the scope of interviews; later searches focused on issues arising from the interviews. Searches were undertaken in February 2008 and January 2009. In-depth qualitative interviews were performed with 12 HTA modellers from academic and commercial modelling sectors. All qualitative data were analysed using the Framework approach. Descriptive and explanatory accounts were used to interrogate the data within and across themes and subthemes: organisation, roles and communication; the model development process; definition of error; types of model error; strategies for avoiding errors; strategies for

  1. Measurement error in CT assessment of appendix diameter

    Energy Technology Data Exchange (ETDEWEB)

    Trout, Andrew T.; Towbin, Alexander J. [Cincinnati Children' s Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Zhang, Bin [Cincinnati Children' s Hospital Medical Center, Department of Biostatistics and Epidemiology, Cincinnati, OH (United States)

    2016-12-15

    Appendiceal diameter continues to be cited as an important criterion for diagnosis of appendicitis by computed tomography (CT). To assess sources of error and variability in appendiceal diameter measurements by CT. In this institutional review board-approved review of imaging and medical records, we reviewed CTs performed in children <18 years of age between Jan. 1 and Dec. 31, 2010. Appendiceal diameter was measured in the axial and coronal planes by two reviewers (R1, R2). One year later, 10% of cases were remeasured. For patients who had multiple CTs, serial measurements were made to assess within patient variability. Measurement differences between planes, within and between reviewers, within patients and between CT and pathological measurements were assessed using correlation coefficients and paired t-tests. Six hundred thirty-one CTs performed in 519 patients (mean age: 10.9 ± 4.9 years, 50.8% female) were reviewed. Axial and coronal measurements were strongly correlated (r = 0.92-0.94, P < 0.0001) with coronal plane measurements significantly larger (P < 0.0001). Measurements were strongly correlated between reviewers (r = 0.89-0.9, P < 0.0001) but differed significantly in both planes (axial: +0.2 mm, P=0.003; coronal: +0.1 mm, P=0.007). Repeat measurements were significantly different for one reviewer only in the axial plane (0.3 mm difference, P<0.05). Within patients imaged multiple times, measured appendix diameters differed significantly in the axial plane for both reviewers (R1: 0.5 mm, P = 0.031; R2: 0.7 mm, P = 0.022). Multiple potential sources of measurement error raise concern about the use of rigid diameter cutoffs for the diagnosis of acute appendicitis by CT. (orig.)
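    A minimal sketch (with simulated, not study, data) of the kind of agreement statistics used above: Pearson correlation and a paired t-test between axial and coronal measurements of the same appendices.

```python
# Compare two measurement planes for the same patients: correlation + paired t-test.
import numpy as np
from scipy.stats import pearsonr, ttest_rel

rng = np.random.default_rng(2)
axial = rng.normal(7.0, 1.5, size=100)             # mm, hypothetical axial measurements
coronal = axial + rng.normal(0.2, 0.5, size=100)   # systematically slightly larger, hypothetical

r, _ = pearsonr(axial, coronal)
t, p = ttest_rel(coronal, axial)
print(f"r = {r:.2f}, mean difference = {np.mean(coronal - axial):.2f} mm, p = {p:.4f}")
```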

  2. Development of an improved HRA method: A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.H.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [John Wreathall & Co., Dublin, OH (United States)] [and others

    1996-03-01

    Probabilistic risk assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has limitations in the analysis of human actions that have long been recognized as a constraint when using PRA. In fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors that are associated with inappropriate interventions by operators with operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification.

  3. Predicting errors from reconfiguration patterns in human brain networks.

    Science.gov (United States)

    Ekman, Matthias; Derrfuss, Jan; Tittgemeyer, Marc; Fiebach, Christian J

    2012-10-09

    Task preparation is a complex cognitive process that implements anticipatory adjustments to facilitate future task performance. Little is known about quantitative network parameters governing this process in humans. Using functional magnetic resonance imaging (fMRI) and functional connectivity measurements, we show that the large-scale topology of the brain network involved in task preparation shows a pattern of dynamic reconfigurations that guides optimal behavior. This network could be decomposed into two distinct topological structures, an error-resilient core acting as a major hub that integrates most of the network's communication and a predominantly sensory periphery showing more flexible network adaptations. During task preparation, core-periphery interactions were dynamically adjusted. Task-relevant visual areas showed a higher topological proximity to the network core and an enhancement in their local centrality and interconnectivity. Failure to reconfigure the network topology was predictive for errors, indicating that anticipatory network reconfigurations are crucial for successful task performance. On the basis of a unique network decoding approach, we also develop a general framework for the identification of characteristic patterns in complex networks, which is applicable to other fields in neuroscience that relate dynamic network properties to behavior.

  4. Robot perception errors and human resolution strategies in situated human-robot dialogue

    OpenAIRE

    Schutte, Niels; Kelleher, John; MacNamee, Brian

    2017-01-01

    We performed an experiment in which human participants interacted through a natural language dialogue interface with a simulated robot to fulfil a series of object manipulation tasks. We introduced errors into the robot’s perception, and observed the resulting problems in the dialogues and their resolutions. We then introduced different methods for the user to request information about the robot’s understanding of the environment. We quantify the impact of perception errors on the dialogues, ...

  5. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    Science.gov (United States)

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology.
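    A minimal crisp-TOPSIS sketch of the ranking step; the paper uses fuzzy TOPSIS with linguistic ratings, and the fuzzy arithmetic, the factor scores, and the criterion weights below are all simplifying assumptions for illustration.

```python
# Crisp TOPSIS: rank alternatives by closeness to the ideal solution.
import numpy as np

def topsis(matrix, weights, benefit):
    X = np.asarray(matrix, dtype=float)
    X = X / np.linalg.norm(X, axis=0)            # vector-normalise each criterion
    V = X * np.asarray(weights, dtype=float)
    ideal      = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti_ideal = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti_ideal, axis=1)
    return d_minus / (d_plus + d_minus)          # closeness coefficient, higher = better

factors = ["adverse physiological states", "physical/mental limitations",
           "coordination, communication, and planning"]
scores = [[7, 6, 8, 5],       # rows: factors, columns: four criteria (hypothetical)
          [6, 7, 7, 6],
          [8, 8, 6, 7]]
cc = topsis(scores, weights=[0.3, 0.3, 0.2, 0.2], benefit=[True, True, True, True])
for f, c in sorted(zip(factors, cc), key=lambda t: -t[1]):
    print(f"{c:.3f}  {f}")
```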

  6. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    Energy Technology Data Exchange (ETDEWEB)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-08-15

    The operating environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces based on computer technologies. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are particularly important because operation actions in NPP advanced MCRs are performed through soft controls. Using soft controls such as a mouse and touch screens, operators can select a specific screen, then choose the controller, and finally manipulate the given devices. Due to the differences between soft controls and hardwired conventional controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered in the HRA for advanced MCRs. In other words, new human error modes should be considered for interface management tasks such as navigation and icon (device) selection on monitors, and a new HRA framework that takes these newly generated human error modes into account should be developed. In this paper, a conceptual framework of an HRA method for the evaluation of soft control execution human error in advanced MCRs is suggested by analyzing soft control tasks.

  7. Spelling in adolescents with dyslexia: errors and modes of assessment.

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors made by 100 students with dyslexia and 100 matched control students in a word and sentence dictation task. All participants were in the first year of their bachelor's studies and had Dutch as their mother tongue. Three main error categories were distinguished: phonological, orthographic, and grammatical errors (on the basis of morphology and language-specific spelling rules). The results indicated that higher-education students with dyslexia made on average twice as many spelling errors as the controls, with effect sizes of d ≥ 2. When the errors were classified as phonological, orthographic, or grammatical, we found a slight dominance of phonological errors in students with dyslexia. Sentence dictation did not provide more information than word dictation in the correct classification of students with and without dyslexia.

  8. Human Error Classification for the Permit to Work System by SHERPA in a Petrochemical Industry

    Directory of Open Access Journals (Sweden)

    Arash Ghasemi

    2015-12-01

    Full Text Available Background & objective: Occupational accidents may occur in any type of activity. Daily activities such as repair and maintenance are among the work phases with the highest risk. Despite the issuance of work permit or work license systems for controlling the risks of non-routine activities, the high rate of accidents during such activities indicates the inadequacy of these systems. A major portion of this shortcoming is attributed to human error, so it is necessary to identify and control the probable human errors made while permits are issued. Methods: In the present study, the probable errors for four categories of work permits were identified using the SHERPA method. An expert team then analyzed 25,500 permits issued during a period of approximately one year, and the most frequent human errors and their types were determined. Results: The "Excavation" and "Entry to confined space" permits had the most errors; approximately 28.5 percent of all errors were related to excavation permits. The implementation error was the most frequent error type across the whole taxonomy, and for every category of permits about 40% of all errors were attributed to implementation errors. Conclusion: The results may indicate weak points in the practical training associated with the licensing system. Human error identification methods can be used to predict and decrease human errors.

  9. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    , such as time-evolving shorelines and paleo-coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  10. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    2012-01-01

    , such as time-evolving shorelines and paleo coastlines. In this study we quantify these uncertainties and their propagation in GIA response using a Monte Carlo approach to obtain spatio-temporal patterns of GIA errors. A direct application is the error estimates in ice mass balance in Antarctica and Greenland...

  11. ASSESSING THE DYNAMIC ERRORS OF COORDINATE MEASURING MACHINES

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    The main factors affecting the dynamic errors of coordinate measuring machines are analyzed. It is pointed out that there are two main contributors to the dynamic errors: one is the rotation of the elements around the joints connected with air bearings, and the other is the bending of the elements caused by the dynamic inertial forces. A method for obtaining the displacement errors at the probe position from the dynamic rotational errors is presented. The dynamic rotational errors are measured with inductive position sensors and a laser interferometer. The theoretical and experimental results both show that during fast probing, due to the dynamic inertial forces, there are not only large rotations of the elements around the joints connected with air bearings but also large bending of the weak elements themselves.
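    As a rough illustration of why joint rotations matter (a small-angle approximation, not the paper's full method), a dynamic rotational error of a few microradians multiplied by the offset between the joint and the probe tip already gives a displacement error of micrometre order; the numbers below are hypothetical.

```python
# Small-angle estimate: probe displacement ~ rotation angle (rad) x arm length.
rotation_urad = 5.0   # measured dynamic rotational error, microradians (hypothetical)
arm_mm = 400.0        # offset from the joint axis to the probe tip, millimetres (hypothetical)
error_um = rotation_urad * 1e-6 * arm_mm * 1e3
print(f"probe displacement error ≈ {error_um:.1f} µm")  # ≈ 2.0 µm
```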

  12. Examining rating quality in writing assessment: rater agreement, error, and accuracy.

    Science.gov (United States)

    Wind, Stefanie A; Engelhard, George

    2012-01-01

    The use of performance assessments in which human raters evaluate student achievement has become increasingly prevalent in high-stakes assessment systems such as those associated with recent policy initiatives (e.g., Race to the Top). In this study, indices of rating quality are compared between two measurement perspectives. Within the context of a large-scale writing assessment, this study focuses on the alignment between indices of rater agreement, error, and accuracy based on traditional and Rasch measurement theory perspectives. Major empirical findings suggest that Rasch-based indices of model-data fit for ratings provide information about raters that is comparable to direct measures of accuracy. The use of easily obtained approximations of direct accuracy measures holds significant implications for monitoring rating quality in large-scale rater-mediated performance assessments.

  13. Hard Data on Soft Errors: A Large-Scale Assessment of Real-World Error Rates in GPGPU

    CERN Document Server

    Haque, Imran S

    2009-01-01

    Graphics processing units (GPUs) are gaining widespread use in computational chemistry and other scientific simulation contexts because of their huge performance advantages relative to conventional CPUs. However, the reliability of GPUs in error-intolerant applications is largely unproven. In particular, a lack of error checking and correcting (ECC) capability in the memory subsystems of graphics cards has been cited as a hindrance to the acceptance of GPUs as high-performance coprocessors, but the impact of this design has not been previously quantified. In this article we present MemtestG80, our software for assessing memory error rates on NVIDIA G80 and GT200-architecture-based graphics cards. Furthermore, we present the results of a large-scale assessment of GPU error rate, conducted by running MemtestG80 on over 20,000 hosts on the Folding@home distributed computing network. Our control experiments on consumer-grade and dedicated-GPGPU hardware in a controlled environment found no errors. However, our su...

  14. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Barriere, M.T.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., Reston, VA (United States); Bley, D.C. [PLG, Inc., Newport Beach, CA (United States); Ramey-Smith, A. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-08-01

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAs. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February, 1992, at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development also are discussed.

  15. Content Coverage of Single-Word Tests Used to Assess Common Phonological Error Patterns

    Science.gov (United States)

    Kirk, Cecilia; Vigeland, Laura

    2015-01-01

    Purpose: This review evaluated whether 9 single-word tests of phonological error patterns provide adequate content coverage to accurately identify error patterns that are active in a child's speech. Method: Tests in the current study were considered to display sufficient opportunities to assess common phonological error patterns if they…

  17. Design of a Human Reliability Assessment model for structural engineering

    NARCIS (Netherlands)

    De Haan, J.; Terwel, K.C.; Al-Jibouri, S.H.S.

    2013-01-01

    It is generally accepted that humans are the “weakest link” in structural design and construction processes. Despite this, few models are available to quantify human error within engineering processes. This paper demonstrates the use of a quantitative Human Reliability Assessment model within struct

  18. Assessing Errors Inherent in OCT-Derived Macular Thickness Maps

    Directory of Open Access Journals (Sweden)

    Daniel Odell

    2011-01-01

    Full Text Available SD-OCT has become an essential tool for evaluating macular pathology; however, several aspects of data collection and analysis affect the accuracy of retinal thickness measurements. Here we evaluated sampling density, scan centering, and axial length compensation as factors affecting the accuracy of macular thickness maps. Forty-three patients with various retinal pathologies and 113 normal subjects were imaged using Cirrus HD-OCT. Reduced B-scan density was associated with increased interpolation error in ETDRS macular thickness plots. Correcting for individual differences in axial length revealed modest errors in retinal thickness maps, while more pronounced errors were observed when the ETDRS plot was not positioned at the center of the fovea (which can occur as a result of errant fixation). Cumulative error can exceed hundreds of microns, even under “ideal observer” conditions. This preventable error is particularly relevant when attempting to compare macular thickness maps to normative databases or measuring the area or volume of retinal features.

  19. Assessment of Measurement Error when Using the Laser Spectrum Analyzers

    Directory of Open Access Journals (Sweden)

    A. A. Titov

    2015-01-01

    Full Text Available The article assesses the measurement errors arising when laser spectrum analyzers are used. It presents analysis results showing that it is possible to carry out a spectral analysis of both the amplitudes and the phases of the frequency components of signals, and to analyze the changing phase of the frequency components of radio signals, using interferential measurement methods. Interferometers with the Mach-Zehnder arrangement are found to be the most widely used for measuring signal phase. Compared with the other methods considered, the combined method is shown to allow increased resolution, since spatial integration is performed over one coordinate while time integration is performed over the other, which is achieved by arranging the modulators orthogonally to each other. The article also identifies a drawback of this method: it is complicated and slow because of the integrator, which prevents measurement of the spectral components of a radio pulse whose width is less than the temporal aperture. An improved version of the spectrum analyzer is proposed, in which the phase is determined through signal processing, and the resolution achievable with such a spectrum analyzer is presented. The article also reviews possible options for building devices that measure the phase components of a spectrum, depending on the method used to measure phase. The analysis shows that the time-pulse method is the most promising for phase measurement. However, known digital phase-meter circuits using this method cannot be applied directly in spectrum analyzers, as they are designed to measure the phase of only one signal frequency. A number of circuits were therefore developed to measure the amplitude and phase of the frequency components of a radio signal. It is shown that a promising option for creating a spectrum analyzer is a device in which the phase is determined through the signal

  20. An Approach to Human Error Hazard Detection of Unexpected Situations in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sangjun; Oh, Yeonju; Shin, Youmin; Lee, Yong-Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The Fukushima accident is a typical complex event involving extreme situations induced by the succeeding earthquake, tsunami, explosion, and human errors. From a human engineering point of view, its causes have also been judged to include deficiencies in the response manuals, education and training, team capability, and the operators' discharge of their duties. In particular, the guidelines of currently operating NPPs do not include sufficient countermeasures against human errors in extreme situations. Therefore, this paper describes a trial to detect the hazards of human error in extreme situations and to define countermeasures that allow individuals, teams, organizations, and working entities encountering such situations in NPPs to respond properly to those hazards. We propose an approach to analyzing and extracting human error hazards in order to suggest additional countermeasures against human errors in unexpected situations. These might be utilized to develop contingency guidelines, especially for reducing human error accidents in NPPs. The trial application in this study is currently limited, however, since it is not easy to find accident cases documented in enough detail to enumerate the proposed steps. We will therefore try to analyze as many cases as possible and consider other environmental factors and human error conditions.

  1. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of any human action that should have been performed, but does not occur. Errors of commission are those related to human actions that should not be performed, but which in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors) or are entered into an operational task without being part of the normal sequence of this task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the behaviour of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike the first generation models. This paper discusses how first generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation, in the human information processing, particularly with the use of THERP error quantification tables. (author)

  2. Assessing Visibility of Individual Transmission Errors in Networked Video

    DEFF Research Database (Denmark)

    Korhonen, Jari; Mantel, Claire

    2016-01-01

    could benefit from information about subjective visibility of individual packet losses; for example, computational resources could be directed more efficiently to unequal error protection and concealment by focusing in the visually most disturbing artifacts. In this paper, we present a novel subjective...

  3. Spelling in Adolescents with Dyslexia: Errors and Modes of Assessment

    Science.gov (United States)

    Tops, Wim; Callens, Maaike; Bijn, Evi; Brysbaert, Marc

    2014-01-01

    In this study we focused on the spelling of high-functioning students with dyslexia. We made a detailed classification of the errors in a word and sentence dictation task made by 100 students with dyslexia and 100 matched control students. All participants were in the first year of their bachelor's studies and had Dutch as mother tongue. Three…

  6. Assessment of salivary flow rate: biologic variation and measure error.

    NARCIS (Netherlands)

    Jongerius, P.H.; Limbeek, J. van; Rotteveel, J.J.

    2004-01-01

    OBJECTIVE: To investigate the applicability of the swab method in the measurement of salivary flow rate in multiple-handicap drooling children. To quantify the measurement error of the procedure and the biologic variation in the population. STUDY DESIGN: Cohort study. METHODS: In a repeated measurem

  7. Error-related EEG patterns during tactile human-machine interaction

    NARCIS (Netherlands)

    Lehne, M.; Ihme, K.; Brouwer, A.M.; Erp, J.B.F. van; Zander, T.O.

    2009-01-01

    Recently, the use of brain-computer interfaces (BCIs) has been extended from active control to passive detection of cognitive user states. These passive BCI systems can be especially useful for automatic error detection in human-machine systems by recording EEG potentials related to human error proc

  8. Error assessment of digital elevation models obtained by interpolation

    Directory of Open Access Journals (Sweden)

    Jean François Mas

    2009-10-01

    Full Text Available Few studies have focused on evaluating the errors inherent in digital elevation models (DEMs). For this reason, the errors of the DEMs obtained by different interpolation methods (ARC/INFO, IDRISI, ILWIS and NEW-MIEL) and with different resolutions were evaluated, with the aim of obtaining a more precise representation of the relief. This evaluation of interpolation methods is crucial, considering that DEMs are the most effective way of representing the terrain surface for terrain analysis and that they are widely used in the environmental sciences. The results obtained show that the resolution, the interpolation method and the inputs (contour lines alone, or together with drainage data and spot heights) have an important influence on the magnitude of the errors generated in the DEM. In this study, which was carried out using contour lines at 50 m intervals in a mountainous area, the most suitable resolution was 30 m. The DEM with the smallest error (mean square error, EMC, of 7.3 m) was obtained with ARC/INFO. However, free programs such as NEWMIEL or ILWIS allowed results to be obtained with an EMC of 10 m.
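
    The error figure reported above is typically computed by comparing interpolated elevations against independent check points. A minimal sketch of a root-mean-square version of that comparison is given below; the check-point values are invented for illustration and do not come from the study.

```python
import numpy as np

def dem_rmse(interpolated_z, reference_z):
    # Root mean square error of interpolated elevations against surveyed check points.
    diff = np.asarray(interpolated_z, dtype=float) - np.asarray(reference_z, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical check points (metres): interpolated vs. surveyed elevations.
interp = [812.0, 845.5, 901.2, 778.9]
survey = [815.0, 840.0, 905.0, 781.0]
print(f"RMSE = {dem_rmse(interp, survey):.1f} m")
```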

  9. Agreeableness and Conscientiousness as Predictors of University Students' Self/Peer-Assessment Rating Error

    Science.gov (United States)

    Birjandi, Parviz; Siyyari, Masood

    2016-01-01

    This paper presents the results of an investigation into the role of two personality traits (i.e. Agreeableness and Conscientiousness from the Big Five personality traits) in predicting rating error in the self-assessment and peer-assessment of composition writing. The average self/peer-rating errors of 136 Iranian English major undergraduates…

  10. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of Human Errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.
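
    A minimal sketch of how an execution-time guard might screen an operation against the 4 Wrongs is shown below. The `ChangeRequest` fields, function name, and example values are assumptions for illustration only; the HEP Framework's actual content integration and automation components are not represented.

```python
from dataclasses import dataclass

@dataclass
class ChangeRequest:
    request_id: str
    window_start: int   # approved change window, epoch seconds
    window_end: int
    config_item: str    # configuration item the change is approved for
    command: str        # command approved for execution

def four_wrongs_check(cr, approved_ids, now, target_ci, command):
    # Screen an operation against the "4 Wrongs" before execution.
    findings = []
    if cr.request_id not in approved_ids:
        findings.append("wrong request: change ticket not approved")
    if not (cr.window_start <= now <= cr.window_end):
        findings.append("wrong time: outside the approved change window")
    if target_ci != cr.config_item:
        findings.append("wrong configuration item")
    if command != cr.command:
        findings.append("wrong command")
    return findings

cr = ChangeRequest("CHG-1", 0, 3600, "db-server-01", "systemctl restart postgres")
print(four_wrongs_check(cr, {"CHG-1"}, now=1800,
                        target_ci="db-server-02",
                        command="systemctl restart postgres"))
# -> ['wrong configuration item']
```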

  11. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, D.W.; Forester, J.A. [Sandia National Labs., Albuquerque, NM (United States)]; Bley, D.C. [Buttonwood Consulting, Inc. (United States)] [and others]

    1998-03-01

    A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing-contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

  12. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in ACRs. The operations using soft controls require operators to perform new tasks which were not necessary in conventional controls, such as navigating computerized displays to monitor plant information and control devices. These kinds of computerized displays and soft controls may make operations more convenient but they might cause new types of human error. In this study the human error mechanism during the soft controls is studied and modeled to be used for analysis and enhancement of human performance (or reduction of human errors) during NPP operation. The developed model would contribute to many applications for improving human performance (or reducing human errors), HMI designs, and operators' training programs in ACRs. The developed model of the human error mechanism for the soft control is based on the assumptions that a human operator has a certain amount of capacity in cognitive resources and, if the resources required by operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur (especially in 'slip'); good HMI (Human-machine Interface) design decreases the required resources; operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.

  13. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    and for termination of the search for `causes'. In addition, the concept of human error is analysed and its intimate relation with human adaptation and learning is discussed. It is concluded that identification of errors as a separate class of behaviour is becoming increasingly difficult in modern work environments......Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on part of operators...

  14. Automation of Commanding at NASA: Reducing Human Error in Space Flight

    Science.gov (United States)

    Dorn, Sarah J.

    2010-01-01

    Automation has been implemented in many different industries to improve efficiency and reduce human error. Reducing or eliminating the human interaction in tasks has been proven to increase productivity in manufacturing and lessen the risk of mistakes by humans in the airline industry. Human space flight requires the flight controllers to monitor multiple systems and react quickly when failures occur so NASA is interested in implementing techniques that can assist in these tasks. Using automation to control some of these responsibilities could reduce the number of errors the flight controllers encounter due to standard human error characteristics. This paper will investigate the possibility of reducing human error in the critical area of manned space flight at NASA.

  15. Derivation of main drivers affecting the possibility of human errors during low power and shutdown operation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun; Kim, Jae Whan [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In order to estimate the possibility of human error and identify its nature, human reliability analysis (HRA) methods have been implemented. For this, various HRA methods have been developed so far: techniques for human error rate prediction (THERP), cause based decision tree (CBDT), the cognitive reliability and error analysis method (CREAM) and so on. Most HRA methods have been developed with a focus on full power operation of NPPs, even though human performance may affect the safety of the system more during low power and shutdown (LPSD) operation than it does when the system is in full power operation. In this regard, it is necessary to conduct research on developing an HRA method to be used in LPSD operation. As the first step of the study, the main drivers which affect the possibility of human error have been developed. Drivers, which are commonly called performance shaping factors (PSFs), are aspects of the human's individual characteristics, environment, organization, or task that specifically decrement or improve human performance, thus respectively increasing or decreasing the likelihood of human errors.

  16. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  17. Measurement, Sampling, and Equating Errors in Large-Scale Assessments

    Science.gov (United States)

    Wu, Margaret

    2010-01-01

    In large-scale assessments, such as state-wide testing programs, national sample-based assessments, and international comparative studies, there are many steps involved in the measurement and reporting of student achievement. There are always sources of inaccuracies in each of the steps. It is of interest to identify the source and magnitude of…

  18. Analysis of measured data of human body based on error correcting frequency

    Science.gov (United States)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data are the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the formulation and implementation of online clothing stores. In this paper, several groups of measured data are obtained, and the data errors are analyzed by examining error frequency and applying the analysis of variance method from mathematical statistics. The paper also addresses the accuracy of the measured data and the difficulty of measuring particular parts of the human body, further studies of the causes of data errors, and a summary of the key points for minimizing errors as far as possible. By analyzing the measured data on the basis of error frequency, the paper provides reference material for promoting the development of the garment industry.

  19. Measurement errors in dietary assessment using duplicate portions as reference method

    NARCIS (Netherlands)

    Trijsburg, L.E.

    2016-01-01

    Background: As Food Frequency Questionnaires (FFQs) are subject to measurement error, associations between self-reported intake by FFQ and outcome measures should b

  20. Use of a Diagnostic Errors Framework to Classify Mistakes in an Assessment of a Bilingual Child

    Science.gov (United States)

    Lee, Brason

    2014-01-01

    This study applies a diagnostic errors framework to identify and classify mistakes that were made in a psychoeducational assessment of a bilingual student who was misidentified as a person with autism. Findings of diagnostic errors were categorized under four domains--faulty knowledge, faulty data gathering, faulty data processing, and faulty…

  1. Prediction of human errors by maladaptive changes in event-related brain networks

    NARCIS (Netherlands)

    Eichele, T.; Debener, S.; Calhoun, V.D.; Specht, K.; Engel, A.K.; Hugdahl, K.; Cramon, D.Y. von; Ullsperger, M.

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we

  2. Human errors evaluation for muster in emergency situations applying human error probability index (HEPI), in the oil company warehouse in Hamadan City

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Full Text Available Introduction: An emergency situation is one of the factors influencing human error. The aim of this research was to evaluate human error in an emergency situation of fire and explosion at the oil company warehouse in Hamadan city by applying the human error probability index (HEPI). Material and Method: First, the scenario of an emergency fire and explosion at the oil company warehouse was designed, and then a maneuver against it was performed. The scaled muster questionnaire for the maneuver was completed in the next stage. Collected data were analyzed to calculate the probability of success for the 18 actions required in an emergency situation, from the starting point of the muster until the final action of reaching a temporary safe shelter. Result: The results showed that the highest probability of error occurrence was related to making the workplace safe (evaluation phase), with 32.4%, and the lowest probability of error occurrence was in detecting the alarm (awareness phase), with 1.8%. The highest severity of error was in the evaluation phase and the lowest severity of error was in the awareness and recovery phases. The maximum risk level was related to evaluating the exit routes, selecting one route and choosing another exit route, and the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of reaction in the exit phases of an emergency situation, the following actions are recommended, based on the findings of this study: periodic evaluation of the exit phases and modifying them if necessary, conducting more maneuvers, and analyzing the results along with sufficient feedback to the employees.
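
    To illustrate how per-action error probabilities in a muster sequence can be combined, the sketch below multiplies the success probabilities of individual actions and flags the action with the highest error probability. Only the two probabilities quoted in the abstract are used; the remaining 16 actions, and HEPI's actual severity and risk-level scoring, are omitted.

```python
from math import prod

# Error probabilities quoted in the abstract: the highest (making the workplace safe,
# 32.4%) and the lowest (detecting the alarm, 1.8%); the other 16 muster actions are
# omitted here for brevity.
actions = {
    "detect alarm": 0.018,
    "make workplace safe": 0.324,
}

p_all_succeed = prod(1.0 - p for p in actions.values())
worst = max(actions, key=actions.get)
print(f"P(shown actions all succeed) = {p_all_succeed:.3f}")
print(f"highest error probability: {worst} ({actions[worst]:.1%})")
```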

  3. Detection of error related neuronal responses recorded by electrocorticography in humans during continuous movements.

    Directory of Open Access Journals (Sweden)

    Tomislav Milekovic

    Full Text Available BACKGROUND: Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. METHODOLOGY/PRINCIPAL FINDINGS: Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300-400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of detection information for outcome error and 74% of detection information for execution error available from all ECoG electrodes could be retained. CONCLUSIONS/SIGNIFICANCE: The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation.
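
    The idea of detecting error events "with a temporal tolerance" can be made concrete by matching detected event times to true event times within a window and scoring precision and recall. The sketch below is a generic greedy matcher with invented event times; it is not the decoding method used in the study.

```python
def match_detections(true_times, detected_times, tolerance=0.4):
    # Greedily match detected events to true events within a temporal tolerance
    # (seconds) and report precision and recall.
    unmatched = sorted(true_times)
    hits = 0
    for t_det in sorted(detected_times):
        for t_true in unmatched:
            if abs(t_det - t_true) <= tolerance:
                unmatched.remove(t_true)
                hits += 1
                break
    precision = hits / len(detected_times) if detected_times else 0.0
    recall = hits / len(true_times) if true_times else 0.0
    return precision, recall

# Hypothetical event times (seconds) during a continuous task.
print(match_detections([10.0, 25.5, 40.2], [10.3, 25.8, 33.0]))
```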

  4. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    Science.gov (United States)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  5. Assessing Geologic Image Interpretation Errors Occurring in Extraterrestrial Robotic Exploration

    Science.gov (United States)

    Wagner, J.; Anderson, R. C.; Thomas, G.; Cabrol, N.; Grin, E.; Glasgow, J.

    2003-12-01

    Robotic exploration of the Martian surface requires numerous interpretations of imaged data, where incorrect results can have drastic consequences. The imaging process transforms and reduces the amount of information available. Three experiments measured the differences in interpretation between imaged sediments and physical sediments. Three characteristics were analyzed: grain length, grain shape, and grain distribution. The results found the difference between the grain length measured on an image and the true length to be +/- 2.333 pixels (p < 0.0001); the difference is similar to the amount of blurring introduced by the camera. Both grain roundness and grain sphericity were classified on a scale from 1 to 6 in the shape experiment. The roundness classification differed by 0.114 categories (p = 0.0082) with the imaged grains being rounder. The sphericity classification differed by 0.151 categories (p = 0.0010) with the imaged grains being less spherical. In the distribution experiment, the subjects determined the percentage of the total image area covered by grains in six specified size ranges. The average error for each size range was 11.112 % of the total area (p < 0.0001). In all three experiments, the measurements taken using the imaged specimens significantly differed from the measurements taken using the physical specimens. The magnitudes of the differences were small and may not be scientifically significant.

  6. Implications of Monte Carlo Statistical Errors in Criticality Safety Assessments

    Energy Technology Data Exchange (ETDEWEB)

    Pevey, Ronald E.

    2005-09-15

    Most criticality safety calculations are performed using Monte Carlo techniques because of Monte Carlo's ability to handle complex three-dimensional geometries. For Monte Carlo calculations, the more histories sampled, the lower the standard deviation of the resulting estimates. The common intuition is, therefore, that the more histories, the better; as a result, analysts tend to run Monte Carlo analyses as long as possible (or at least to a minimum acceptable uncertainty). For Monte Carlo criticality safety analyses, however, the optimization situation is complicated by the fact that procedures usually require that an extra margin of safety be added because of the statistical uncertainty of the Monte Carlo calculations. This additional safety margin affects the impact of the choice of the calculational standard deviation, both on production and on safety. This paper shows that, under the assumptions of normally distributed benchmarking calculational errors and exact compliance with the upper subcritical limit (USL), the standard deviation that optimizes production is zero, but there is a non-zero value of the calculational standard deviation that minimizes the risk of inadvertently labeling a supercritical configuration as subcritical. Furthermore, this value is shown to be a simple function of the typical benchmarking step outcomes--the bias, the standard deviation of the bias, the upper subcritical limit, and the number of standard deviations added to calculated k-effectives before comparison to the USL.
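
    The acceptance test discussed above amounts to padding the Monte Carlo k-effective estimate by a chosen number of statistical standard deviations and comparing the result against the upper subcritical limit. The sketch below shows that comparison; the USL, k-effective, and sigma values are illustrative only, not taken from any benchmark.

```python
def is_acceptably_subcritical(k_calc, sigma, usl, n_sigma=2.0):
    # Pad the Monte Carlo k-effective by n_sigma statistical standard deviations
    # and compare against the upper subcritical limit (USL).
    return k_calc + n_sigma * sigma <= usl

# Illustrative numbers only (not from any benchmark), with a USL of 0.94.
print(is_acceptably_subcritical(k_calc=0.930, sigma=0.002, usl=0.94))  # True
print(is_acceptably_subcritical(k_calc=0.938, sigma=0.002, usl=0.94))  # False
```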

  7. Human oocytes. Error-prone chromosome-mediated spindle assembly favors chromosome segregation defects in human oocytes.

    Science.gov (United States)

    Holubcová, Zuzana; Blayney, Martyn; Elder, Kay; Schuh, Melina

    2015-06-05

    Aneuploidy in human eggs is the leading cause of pregnancy loss and several genetic disorders such as Down syndrome. Most aneuploidy results from chromosome segregation errors during the meiotic divisions of an oocyte, the egg's progenitor cell. The basis for particularly error-prone chromosome segregation in human oocytes is not known. We analyzed meiosis in more than 100 live human oocytes and identified an error-prone chromosome-mediated spindle assembly mechanism as a major contributor to chromosome segregation defects. Human oocytes assembled a meiotic spindle independently of either centrosomes or other microtubule organizing centers. Instead, spindle assembly was mediated by chromosomes and the small guanosine triphosphatase Ran in a process requiring ~16 hours. This unusually long spindle assembly period was marked by intrinsic spindle instability and abnormal kinetochore-microtubule attachments, which favor chromosome segregation errors and provide a possible explanation for high rates of aneuploidy in human eggs.

  8. Assessment of errors and uncertainty patterns in GIA modeling

    DEFF Research Database (Denmark)

    Barletta, Valentina Roberta; Spada, G.

    During the last decade many efforts have been devoted to the assessment of global sea level rise and to the determination of the mass balance of continental ice sheets. In this context, the important role of glacial-isostatic adjustment (GIA) has been clearly recognized. Yet, in many cases only one...... expansion of the oceans and other sources of water during the deglaciation different from the one coming from ice-sheets account for further sea level rise since LGM, and the water stored in atmosphere, groundwater and lakes accounts for negative contribution to it. In this way we aim to assess which GIA...

  9. Resilience to evolving drinking water contamination risks: a human error prevention perspective

    OpenAIRE

    Tang, Yanhong; Wu, Shaomin; Miao, Xin; Pollard, Simon J.T.; Hrudey, Steve E.

    2013-01-01

    Human error contributes to one of the major causes of the prevalence of drinking water contamination incidents. It has, however, attracted insufficient attention in the cleaner production management community. This paper analyzes human error appearing in each stage of the gestation of 40 drinking water incidents and their causes, proposes resilience-based mechanisms and tools within three groups: consumers, drinking water companies, and policy regulators. The mechanism analysis involves conce...

  10. Errors and misunderstandings among novice programmers: Assessing the student not the program

    OpenAIRE

    Johansen, Mathias Johan

    2015-01-01

    Novice programmers make a lot of programming errors as they strive to become experts. This is a known fact to teaching faculty in introductory programming courses. The errors play a major role in both formative and summative assessment of the students. The Computer Science research of today trends towards focusing on automatic assessment of programs, becoming more remote from the student who wrote the program. In an attempt to create a better understanding of the novice programmers and the e...

  11. Coping with human errors through system design: Implications for ecological interface design

    DEFF Research Database (Denmark)

    Rasmussen, Jens; Vicente, Kim J.

    1989-01-01

    Research during recent years has revealed that human errors are not stochastic events which can be removed through improved training programs or optimal interface design. Rather, errors tend to reflect either systematic interference between various models, rules, and schemata, or the effects...... of the adaptive mechanisms involved in learning. In terms of design implications, these findings suggest that reliable human-system interaction will be achieved by designing interfaces which tend to minimize the potential for control interference and support recovery from errors. In other words, the focus should...... be on control of the effects of errors rather than on the elimination of errors per se. In this paper, we propose a theoretical framework for interface design that attempts to satisfy these objectives. The goal of our framework, called ecological interface design, is to develop a meaningful representation...

  12. Are seismic hazard assessment errors and earthquake surprises unavoidable?

    Science.gov (United States)

    Kossobokov, Vladimir

    2013-04-01

    Why earthquake occurrences bring us so many surprises? The answer seems evident if we review the relationships that are commonly used to assess seismic hazard. The time-span of physically reliable Seismic History is yet a small portion of a rupture recurrence cycle at an earthquake-prone site, which makes premature any kind of reliable probabilistic statements about narrowly localized seismic hazard. Moreover, seismic evidences accumulated to-date demonstrate clearly that most of the empirical relations commonly accepted in the early history of instrumental seismology can be proved erroneous when testing statistical significance is applied. Seismic events, including mega-earthquakes, cluster displaying behaviors that are far from independent or periodic. Their distribution in space is possibly fractal, definitely, far from uniform even in a single segment of a fault zone. Such a situation contradicts generally accepted assumptions used for analytically tractable or computer simulations and complicates design of reliable methodologies for realistic earthquake hazard assessment, as well as search and definition of precursory behaviors to be used for forecast/prediction purposes. As a result, the conclusions drawn from such simulations and analyses can MISLEAD TO SCIENTIFICALLY GROUNDLESS APPLICATION, which is unwise and extremely dangerous in assessing expected societal risks and losses. For example, a systematic comparison of the GSHAP peak ground acceleration estimates with those related to actual strong earthquakes, unfortunately, discloses gross inadequacy of this "probabilistic" product, which appears UNACCEPTABLE FOR ANY KIND OF RESPONSIBLE SEISMIC RISK EVALUATION AND KNOWLEDGEABLE DISASTER PREVENTION. The self-evident shortcomings and failures of GSHAP appeals to all earthquake scientists and engineers for an urgent revision of the global seismic hazard maps from the first principles including background methodologies involved, such that there becomes: (a) a

  13. Assessing Numerical Error in Structural Dynamics Using Energy Balance

    Directory of Open Access Journals (Sweden)

    Rabindranath Andujar

    2013-01-01

    Full Text Available This work applies the variational principles of Lagrange and Hamilton to the assessment of numerical methods of linear structural analysis. Different numerical methods are used to simulate the behaviour of three structural configurations and benchmarked in their computation of the Lagrangian action integral over time. According to the principle of energy conservation, the difference at each time step between the kinetic and the strain energies must equal the work done by the external forces. By computing this difference, the degree of accuracy of each combination of numerical methods can be assessed. Moreover, it is often difficult to perceive numerical instabilities due to the inherent complexities of the modelled structures. By means of the proposed procedure, these complexities can be globally controlled and visualized in a straightforward way. The paper presents the variational principles to be considered for the collection and computation of the energy-related parameters (kinetic, strain, dissipative, and external work). It then introduces a systematic framework within which the numerical methods can be compared in a qualitative as well as in a quantitative manner. Finally, a series of numerical experiments is conducted using three simple 2D models subjected to the effect of four different dynamic loadings.
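
    The energy-balance check described above can be expressed as a per-time-step residual between the stored energies (kinetic, strain, dissipative) and the external work. The sketch below computes that residual for hypothetical energy histories; the array names and values are assumptions, and no particular time integrator is implied.

```python
import numpy as np

def energy_balance_residual(kinetic, strain, external_work, dissipated=None):
    # Per-time-step residual |(T + U + D) - W_ext|; it should stay near zero
    # if the time integration is accurate and stable.
    T = np.asarray(kinetic, dtype=float)
    U = np.asarray(strain, dtype=float)
    W = np.asarray(external_work, dtype=float)
    D = np.zeros_like(T) if dissipated is None else np.asarray(dissipated, dtype=float)
    return np.abs(T + U + D - W)

# Hypothetical energy histories (J) over three time steps of a solver.
print(energy_balance_residual(kinetic=[0.0, 1.9, 3.8],
                              strain=[0.0, 2.0, 4.1],
                              external_work=[0.0, 4.0, 8.0]))
```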

  14. An experimental approach to validating a theory of human error in complex systems

    Science.gov (United States)

    Morris, N. M.; Rouse, W. B.

    1985-01-01

    The problem of 'human error' is pervasive in engineering systems in which the human is involved. In contrast to the common engineering approach of dealing with error probabilistically, the present research seeks to alleviate problems associated with error by gaining a greater understanding of causes and contributing factors from a human information processing perspective. The general approach involves identifying conditions which are hypothesized to contribute to errors, and experimentally creating the conditions in order to verify the hypotheses. The conceptual framework which serves as the basis for this research is discussed briefly, followed by a description of upcoming research. Finally, the potential relevance of this research to design, training, and aiding issues is discussed.

  15. Integrated Framework for Understanding Relationship Between Human Error and Aviation Safety

    Institute of Scientific and Technical Information of China (English)

    徐锡东

    2009-01-01

    Introducing a framework for understanding the relationship between human error and aviation safety from multiple perspectives and using multiple models. The first part of the framework is the perspective of individual operator using the information processing model. The second part is the group perspective with the Crew Resource Management (CRM) model. The third and final is the organization perspective using Reason's Swiss cheese model. Each of the perspectives and models has been in existence for a long time, but the integrated framework presented allows a systematic understanding of the complex relationship between human error and aviation safety, along with the numerous factors that cause or influence error. The framework also allows the identification of mitigation measures to systematically reduce human error and improve aviation safety.

  16. Assessment of medication errors and adherence to WHO prescription writing guidelines in a tertiary care hospital

    Directory of Open Access Journals (Sweden)

    Dilnasheen Sheikh

    2017-06-01

    Full Text Available The objective of the study is to assess the medication errors and adherence to WHO prescription writing guidelines in a tertiary care hospital. A prospective observational study was carried out for a period of 8 months from June 2015 to February 2016 at a tertiary care hospital. At the inpatient department, regular chart review of patient case records was carried out to assess the medication errors. The observed medication errors were assessed for level of harm by using the NCCMERP index. The outpatient prescriptions were screened for adherence to WHO prescription writing guidelines. Out of 200 patients, 40 patients developed medication errors. Most of the medication errors were observed in the age group above 61 years (40%). Majority of the medication errors were observed with the drug classes of antibiotics 9 (22.5%) and bronchodilators 9 (22.5%). Most of the errors were under the NCCMERP index category C. Out of 545 outpatient prescriptions, 51 (9.37%) prescriptions did not have the prescriber's name and all of the prescriptions lacked the prescriber's personal contact number. Eighteen prescriptions did not have the patient's name and 426 (78.2%) prescriptions did not have the patient's age. The prevalence of medication errors in this study was relatively low (20%) without any fatal outcome. Omission error was the most frequently observed medication error, 31 (77.5%). In the present study, the patient's age was missing in 78.2% of the prescriptions, none of the prescriptions had the patient's address, and the drug names were not mentioned by their generic names.

  17. Human errors: their psychophysical bases and the Proprioceptive Diagnosis of Temperament and Character (DP-TC) as a tool for measuring.

    Directory of Open Access Journals (Sweden)

    Tous Ral J.M.

    2014-07-01

    Full Text Available Human error is commonly differentiated into three different types: errors in perception, errors in decision and errors in sensation. This analysis is based on classical psychophysics (Fechner, 1860) and describes the errors of detection and perception. Decision-making errors are evaluated in terms of the theory of signal detection (McNicholson, 1974), and errors of sensation or sensitivity are evaluated in terms of proprioceptive information (van Beers, 2001). Each of these stages developed its own method of evaluation, which has influenced the development of ergonomics in the case of errors in perception, and the verbal assessment of personality (stress, impulsiveness, burnout, etc.) in the case of decision-making errors. Here we present the method we have developed, the Proprioceptive Diagnosis of Temperament and Character (DP-TC) test, for the specific assessment of errors of perception or expressivity, which are based on fine motor precision performance. Each of the described error types is interdependent with the others in such a manner that observable stress in behaviour may be caused by: inadequate performance of a task due to the person's perception (i.e. from right to left for a right-handed person); performing a task that requires attentive decision-making too hastily; undertaking a task that does not correspond to the prevailing disposition of the person.
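
    For the decision-making errors evaluated with signal detection theory, a standard sensitivity index is d' = z(hit rate) - z(false alarm rate). The sketch below computes it with the standard normal inverse CDF; the example rates are invented, and the DP-TC test itself is not modeled here.

```python
from statistics import NormalDist

def d_prime(hit_rate, false_alarm_rate):
    # Signal detection theory sensitivity: d' = z(hit rate) - z(false alarm rate).
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical rates from a detection task.
print(f"d' = {d_prime(0.85, 0.20):.2f}")
```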

  18. Behind Human Error: Cognitive Systems, Computers and Hindsight

    Science.gov (United States)

    1994-12-01

    squeeze became on the powers of the operator.... And as Norbert Wiener noted some years later (1964, p. 63): The gadget-minded people often have the...for one exception see Woods and Elias, 1988). This failure to develop representations that reveal change and highlight events in the monitored...Woods, D. D., and Elias, G. (1988). Significance messages: An integral display concept. In Proceedings of the 32nd Annual Meeting of the Human

  19. IASI temperature and water vapor retrievals – error assessment and validation

    Directory of Open Access Journals (Sweden)

    N. Pougatchev

    2009-03-01

    Full Text Available The METOP-A satellite Infrared Atmospheric Sounding Interferometer (IASI) Level 2 products comprise retrievals of vertical profiles of temperature and water vapor. The error covariance matrices and biases of the most recent version (4.3.1) of the L2 data were assessed, and the assessment was validated using radiosonde data for reference. The radiosonde data set includes dedicated and synoptic time launches at the Lindenberg station in Germany. For optimal validation, the linear statistical Validation Assessment Model (VAM) was used. The VAM uses radiosonde profiles as input and provides an optimal estimate of the nominal IASI retrieval by utilizing IASI averaging kernels and statistical characteristics of the ensembles of the reference radiosondes. For temperature retrievals above the 900 mb level and water vapor retrievals above the 700 mb level, expected and assessed errors are in good agreement. Below those levels, a noticeable excess in assessed error is observed, possibly due to inaccurate surface parameters and undetected clouds/haze.

  20. Student Self-Assessment and Faculty Assessment of Performance in an Interprofessional Error Disclosure Simulation Training Program.

    Science.gov (United States)

    Poirier, Therese I; Pailden, Junvie; Jhala, Ray; Ronald, Katie; Wilhelm, Miranda; Fan, Jingyang

    2017-04-01

    Objectives. To conduct a prospective evaluation for effectiveness of an error disclosure assessment tool and video recordings to enhance student learning and metacognitive skills while assessing the IPEC competencies. Design. The instruments for assessing performance (planning, communication, process, and team dynamics) in interprofessional error disclosure were developed. Student self-assessment of performance before and after viewing the recordings of their encounters were obtained. Faculty used a similar instrument to conduct real-time assessments. An instrument to assess achievement of the Interprofessional Education Collaborative (IPEC) core competencies was developed. Qualitative data was reviewed to determine student and faculty perceptions of the simulation. Assessment. The interprofessional simulation training involved a total of 233 students (50 dental, 109 nursing and 74 pharmacy). Use of video recordings made a significant difference in student self-assessment for communication and process categories of error disclosure. No differences in student self-assessments were noted among the different professions. There were differences among the family member affects for planning and communication for both pre-video and post-video data. There were significant differences between student self-assessment and faculty assessment for all paired comparisons, except communication in student post-video self-assessment. Students' perceptions of achievement of the IPEC core competencies were positive. Conclusion. The use of assessment instruments and video recordings may have enhanced students' metacognitive skills for assessing performance in interprofessional error disclosure. The simulation training was effective in enhancing perceptions on achievement of IPEC core competencies. This enhanced assessment process appeared to enhance learning about the skills needed for interprofessional error disclosure.

  1. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on part of operators, designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis...

  2. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
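
    As a rough illustration of this calculation style, the sketch below multiplies a nominal error rate by a set of performance shaping factor (PSF) multipliers and caps the result at 1.0. The nominal rates and multiplier values shown are placeholders for illustration, not the calibrated SPAR-H tables.

        # Minimal sketch of a SPAR-H-style HEP calculation (illustrative values only).
        NOMINAL_HEP = {"diagnosis": 1e-2, "action": 1e-3}   # placeholder nominal rates

        def spar_h_style_hep(task_type: str, psf_multipliers: list[float]) -> float:
            """Nominal HEP scaled by the product of PSF multipliers, capped at 1.0."""
            hep = NOMINAL_HEP[task_type]
            for m in psf_multipliers:
                hep *= m
            return min(hep, 1.0)

        # Example: an action task under high stress (x2), poor ergonomics (x10),
        # and nominal conditions otherwise (x1).
        print(spar_h_style_hep("action", [2.0, 10.0, 1.0]))   # -> 0.02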

  3. Computational analysis of splicing errors and mutations in human transcripts

    Directory of Open Access Journals (Sweden)

    Gelfand Mikhail S

    2008-01-01

    Full Text Available Abstract Background Most retained introns found in human cDNAs generated by high-throughput sequencing projects seem to result from underspliced transcripts, and thus they capture intermediate steps of pre-mRNA splicing. On the other hand, mutations in splice sites cause exon skipping of the respective exon or activation of pre-existing cryptic sites. Both types of events reflect properties of the splicing mechanism. Results The retained introns were significantly shorter than constitutive ones, and skipped exons are shorter than exons with cryptic sites. Both donor and acceptor splice sites of retained introns were weaker than splice sites of constitutive introns. The authentic acceptor sites affected by mutations were significantly weaker in exons with activated cryptic sites than in skipped exons. The distance from a mutated splice site to the nearest equivalent site is significantly shorter in cases of activated cryptic sites compared to exon skipping events. The prevalence of retained introns within genes monotonically increased in the 5'-to-3' direction (more retained introns close to the 3'-end), consistent with the model of co-transcriptional splicing. The density of exonic splicing enhancers was higher, and the density of exonic splicing silencers lower in retained introns compared to constitutive ones and in exons with cryptic sites compared to skipped exons. Conclusion Thus the analysis of retained introns in human cDNA, exons skipped due to mutations in splice sites and exons with cryptic sites produced results consistent with the intron definition mechanism of splicing of short introns, co-transcriptional splicing, dependence of splicing efficiency on the splice site strength and the density of candidate exonic splicing enhancers and silencers. These results are consistent with other, recently published analyses.

  4. Human and organizational errors in loading and discharge operations at marine terminals: Reduction of tanker oil and chemical spills. Organizing to minimize human and organizational errors

    Energy Technology Data Exchange (ETDEWEB)

    Mannarelli, T.; Roberts, K.; Bea, R.

    1995-11-01

    This report summarizes organizational and managerial findings, and proposes corresponding recommendations, based on a program of research conducted at two major locations: Chevron USA Products Company Refinery in Richmond, California and Arco Marine Incorporated shipping operations in Long Beach, California. The Organizational Behavior and Industrial Relations group from the Business School approached the project with the same objective (of reducing the risk of accidents resulting from human and/or organizational errors), but used a different means of achieving those ends. On the Business side, the aim of the project is to identify organizational and managerial practices, problems, and potential problems, analyze them, and then make recommendations that offer potential solutions to those circumstances which pose a human and/or organizational error (HOE) risk.

  5. Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis

    Science.gov (United States)

    Ghrist, Richard W.; Plakalovic, Dragan

    2012-01-01

    An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than is the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher-risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
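
    A bare-bones Monte Carlo estimate of Pc can be sketched as follows: draw position samples for each object at TCA from its covariance and count how often the miss distance falls below the combined hard-body radius. The means, covariances, and radius below are invented, and a real analysis would sample in equinoctial elements and propagate through the encounter rather than sampling Cartesian positions directly.

        import numpy as np

        rng = np.random.default_rng(0)

        # Illustrative relative geometry at TCA (km); not real conjunction data.
        mean_a = np.array([0.0, 0.0, 0.0])
        mean_b = np.array([0.10, 0.05, 0.0])            # ~112 m nominal miss distance
        cov_a = np.diag([0.05, 0.02, 0.01]) ** 2        # position covariances (km^2)
        cov_b = np.diag([0.04, 0.03, 0.01]) ** 2
        hard_body_radius_km = 0.02                      # combined hard-body radius

        n = 1_000_000
        pos_a = rng.multivariate_normal(mean_a, cov_a, size=n)
        pos_b = rng.multivariate_normal(mean_b, cov_b, size=n)
        miss = np.linalg.norm(pos_a - pos_b, axis=1)

        pc = np.mean(miss < hard_body_radius_km)        # Monte Carlo estimate of Pc
        print(f"Pc ~ {pc:.2e}")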

  6. Maneuver Performance Assessment of the Cassini Spacecraft Through Execution-Error Modeling and Analysis

    Science.gov (United States)

    Wagner, Sean

    2014-01-01

    The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.

  8. Extracting and Converting Quantitative Data into Human Error Probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Tuan Q. Tran; Ronald L. Boring; Jeffrey C. Joe; Candice D. Griffith

    2007-08-01

    This paper discusses a proposed method using a combination of advanced statistical approaches (e.g., meta-analysis, regression, structural equation modeling) that will not only convert different empirical results into a common metric for scaling individual PSF effects, but will also examine the complex interrelationships among PSFs. Furthermore, the paper discusses how the derived statistical estimates (i.e., effect sizes) can be mapped onto an HRA method (e.g., SPAR-H) to generate HEPs that can then be used in probabilistic risk assessment (PRA). The paper concludes with a discussion of the benefits of using academic literature in assisting HRA analysts in generating sound HEPs and HRA developers in validating current HRA models and formulating new HRA models.

  9. Assessment of the relative error in sessile drop method automation task

    OpenAIRE

    Levitskaya T.О.

    2015-01-01

    Assessment of the relative error in the sessile drop method automation. Further development of the sessile drop method is directly related to the development of new techniques and specially developed algorithms enabling automatic computer calculation of surface properties. The sessile drop method mathematical apparatus improvement, drop circuit equation transformation to a form suitable for working, the drop surface calculation method automation, analysis of relative errors in the calculation...

  10. Psychological assessment of torture survivors: essential steps, avoidable errors, and helpful resources.

    Science.gov (United States)

    Pope, Kenneth S

    2012-01-01

    This article provides ideas, information, and resources that may be helpful in conducting psychological evaluations of people who have been tortured. The first section discusses essential steps, including achieving competence; clarifying the purpose; selecting methods appropriate to the individual, the purpose, and the situation; addressing issues of culture and language; maintaining awareness of ways in which the presence of third parties and recording can affect the assessment; attending carefully to similarities, echoes, and triggers; and actively searching for ways to transcend our own limited experiences and misleading expectations. The second section discusses avoiding five common errors that undermine these evaluations: mismatched validity; confirmation bias; confusing retrospective and prospective accuracy (switching conditional probabilities); ignoring the effects of low base rates; and misinterpreting dual high base rates. The third section identifies resources on the web (e.g., major centers, legal services, online courses, information about asylum and refuge, networks of torture survivors, human rights organizations providing information and services, guides to assessment) that people working with torture survivors, refugees, and asylum-seekers may find helpful.
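
    The base-rate errors mentioned here are easy to make concrete with a worked Bayes calculation: even a fairly accurate indicator produces mostly false positives when the condition it flags is rare. The numbers below are invented purely for illustration.

        # Illustrative only: how a low base rate inflates false positives.
        sensitivity = 0.90    # P(positive finding | condition present)
        specificity = 0.90    # P(negative finding | condition absent)
        base_rate = 0.02      # P(condition present) in the assessed population

        p_true_pos = sensitivity * base_rate
        p_false_pos = (1 - specificity) * (1 - base_rate)

        # Positive predictive value: P(condition present | positive finding)
        ppv = p_true_pos / (p_true_pos + p_false_pos)
        print(f"PPV = {ppv:.2f}")   # ~0.16: most positives are false despite 90% accuracy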

  11. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    Science.gov (United States)

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

    This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) to their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  12. Condom-use errors and problems: a neglected aspect of studies assessing condom effectiveness.

    Science.gov (United States)

    Crosby, Richard; Sanders, Stephanie; Yarber, William L; Graham, Cynthia A

    2003-05-01

    To assess and compare condom-use errors and problems among condom-using university males and females. A convenience sample of 260 undergraduates was utilized. Males (n=118) and females (n=142) reported using condoms in the past 3 months for at least one episode of sex (penis in the mouth, vagina, or rectum) with a partner of the other sex. A questionnaire assessed 15 errors and problems associated with condom use that could be observed or experienced by females as well as males. About 44% reported lack of condom availability. Errors that could contribute to failure included using sharp instruments to open condom packages (11%), storing condoms in wallets (19%), and not using a new condom when switching from one form of sex to another (83%). Thirty-eight percent reported that condoms were applied after sex had begun, and nearly 14% indicated they removed condoms before sex was concluded. Problems included loss of erection during condom application (15%) or during sex (10%). About 28% reported that condoms had either slipped off or broken. Nearly 19% perceived, at least once, that their condom problems necessitated the use of a new condom. Few differences were observed in errors and problems between males and females. Findings suggest that condom-use errors and problems may be quite common and that assessment of errors and problems do not necessarily need to be gender specific. Findings also suggest that correcting "user failure" may represent an important challenge in the practice of preventive medicine.

  13. Safety coaches in radiology: decreasing human error and minimizing patient harm

    Energy Technology Data Exchange (ETDEWEB)

    Dickerson, Julie M.; Adams, Janet M. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Koch, Bernadette L.; Donnelly, Lane F. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Cincinnati Children's Hospital Medical Center, Department of Pediatrics, Cincinnati, OH (United States); Goodfriend, Martha A. [Cincinnati Children's Hospital Medical Center, Department of Quality Improvement, Cincinnati, OH (United States)]

    2010-09-15

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program. (orig.)

  14. Assessment of the relative error in the automation task by sessile drop method

    Directory of Open Access Journals (Sweden)

    T. О. Levitskaya

    2015-11-01

    Full Text Available Assessment of the relative error in the sessile drop method automation. Further development of the sessile drop method is directly related to the development of new techniques and specially developed algorithms enabling automatic computer calculation of surface properties. The sessile drop method mathematical apparatus improvement, drop circuit equation transformation to a form suitable for working, the drop surface calculation method automation, and analysis of relative errors in the calculation of surface tension are relevant and important in experimental determinations. The surface tension measurement relative error, as well as the error caused by the drop ellipsoidness in the plan, were determined in the task of sessile drop automation. It should be noted that if the drop maximum diameter (l) is big, or if the ratio of l to the drop height above the equatorial diameter (h) is big, the relative error in the measurement of surface tension by the sessile drop method does not depend much on the equatorial diameter of the drop and the ellipsoidness of the drop. In this case, the accuracy of determination of the surface tension varies from 1.0 to 0.5%. At lower values the drop ellipsoidness begins to affect the relative error of surface tension (from 1.2 to 0.8%), but in this case the drop ellipsoidness is less. Therefore, in subsequent experiments, we used larger drops. On the basis of the assessment of the relative error in determining the liquid surface tension by the sessile drop method caused by drop ellipsoidness in the plan, tables showing the measurement accuracy of the drop parameters (h and l) necessary to reach the overall relative error have been made up. Previously, the surface tension used to be calculated with a relative error in the range of 2-3%.

  15. The analysis of human error as causes in the maintenance of machines: a case study in mining companies

    Directory of Open Access Journals (Sweden)

    Kovacevic, Srdja

    2016-12-01

    Full Text Available This paper describes the two-step method used to analyse the factors and aspects influencing human error during the maintenance of mining machines. The first step is the cause-effect analysis, supported by brainstorming, where five factors and 21 aspects are identified. During the second step, the group fuzzy analytic hierarchy process is used to rank the identified factors and aspects. A case study is done on mining companies in Serbia. The key aspects are ranked according to an analysis that included experts who assess risks in mining companies (a maintenance engineer, a technologist, an ergonomist, a psychologist, and an organisational scientist. Failure to follow technical maintenance instructions, poor organisation of the training process, inadequate diagnostic equipment, and a lack of understanding of the work process are identified as the most important causes of human error.

  16. A wavelet-based approach to assessing timing errors in hydrologic predictions

    Science.gov (United States)

    Liu, Yuqiong; Brown, James; Demargne, Julie; Seo, Dong-Jun

    2011-02-01

    Summary Streamflow predictions typically contain errors in both the timing and the magnitude of peak flows. These two types of error often originate from different sources (e.g. rainfall-runoff modeling vs. routing) and hence may have different implications and ramifications for both model diagnosis and decision support. Thus, where possible and relevant, they should be distinguished and separated in model evaluation and forecast verification applications. Distinct information on timing errors in hydrologic prediction could lead to more targeted model improvements in a diagnostic evaluation context, as well as better-informed decisions in many practical applications, such as flood prediction, water supply forecasting, river regulation, navigation, and engineering design. However, information on timing errors in hydrologic predictions is rarely evaluated or provided. In this paper, we discuss the importance of assessing and quantifying timing error in hydrologic predictions and present a new approach, which is based on the cross wavelet transform (XWT) technique. The XWT technique transforms the time series of predictions and corresponding observations into a two-dimensional time-scale space and provides information on scale- and time-dependent timing differences between the two time series. The results for synthetic timing errors (both constant and time-varying) indicate that the XWT-based approach can estimate timing errors in streamflow predictions with reasonable reliability. The approach is then employed to analyze the timing errors in real streamflow simulations for a number of headwater basins in the US state of Texas. The resulting timing error estimates were consistent with the physiographic and climatic characteristics of these basins. A simple post-factum timing adjustment based on these estimates led to considerably improved agreement between streamflow observations and simulations, further illustrating the potential for using the XWT-based approach for

  17. ATHEANA: "a technique for human error analysis" entering the implementation phase

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.; O'Hara, J.; Luckas, W. [Brookhaven National Lab., Upton, NY (United States)] [and others]

    1997-02-01

    Probabilistic Risk Assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has limitations in the analysis of human actions in PRAs that have long been recognized as a constraint when using PRA. In fact, better integration of HRA into the PRA process has long been a NRC issue. Of particular concern has been the omission of errors of commission - those errors that are associated with inappropriate interventions by operators with operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification. The purpose of the Brookhaven National Laboratory (BNL) project, entitled 'Improved HRA Method Based on Operating Experience', is to develop a new method for HRA which is supported by the analysis of risk-significant operating experience. This approach will allow a more realistic assessment and representation of the human contribution to plant risk, and thereby increase the utility of PRA. The project's completed, ongoing, and future efforts fall into four phases: (1) Assessment phase (FY 92/93); (2) Analysis and Characterization phase (FY 93/94); (3) Development phase (FY 95/96); and (4) Implementation phase (FY 96/97 ongoing).

  18. Human Reliability Method Analysis Based on Human Error Correcting Ability (基于人差错纠正能力的人因可靠性模型研究)

    Institute of Scientific and Technical Information of China (English)

    陈炉云; 张裕芳

    2011-01-01

    Based on the time-sequence character and the error-correcting ability of human operator behaviors in man-machine systems, and combined with analysis of the key performance shaping factors, the human reliability of the vessel chamber is investigated. Using the time sequence parameter and the error correcting parameter in human error analysis, an operator behavior shaping model for the man-machine system and a human error event tree are proposed. Through analysis of the error correcting ability, a quantitative model and allowance theory for human reliability analysis are discussed. Finally, taking the monitoring task at the operation desk of a vessel chamber as an example, a human reliability analysis was conducted to quantitatively assess the mission reliability of the operator.

  19. Human error in medical practice: an unavoidable presence (El error en la práctica médica: una presencia ineludible)

    OpenAIRE

    Gladis Adriana Vélez Álvarez

    2006-01-01

    Making mistakes is a human characteristic and a mechanism to learn, but at the same time it may become a threat to human beings in some scenarios. Aviation and Medicine are good examples of this. Some data are presented about the frequency of error in Medicine, its ubiquity and the circumstances that favor it. A reflection is done about how the error is being managed and why it is not more often discussed. It is proposed that the first step in learning from an error is to accept it as an unavoidable presence.

  20. Leak in the breathing circuit: CO2 absorber and human error.

    Science.gov (United States)

    Umesh, Goneppanavar; Jasvinder, Kaur; Sagarnil, Roy

    2010-04-01

    A couple of reports in literature have mentioned CO2 absorbers to be the cause for breathing circuit leak during anesthesia. Defective canister, failure to close the absorber chamber and overfilling of the chamber with sodalime were the problems in these reports. Among these, the last two are reports of human error resulting in problems. We report a case where despite taking precautions in this regard, we experienced a significant leak in the system due to a problem with the CO2 absorber, secondary to human error.

  1. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    Science.gov (United States)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  2. Proficiency in identifying, managing and communicating medical errors: feasibility and validity study assessing two core competencies.

    Science.gov (United States)

    Abu Dabrh, Abd Moain; Murad, Mohammad Hassan; Newcomb, Richard D; Buchta, William G; Steffen, Mark W; Wang, Zhen; Lovett, Amanda K; Steinkraus, Lawrence W

    2016-09-02

    Communication skills and professionalism are two competencies in graduate medical education that are challenging to evaluate. We aimed to develop, test and validate a de novo instrument to evaluate these two competencies. Using an Objective Standardized Clinical Examination (OSCE) based on a medication error scenario, we developed an assessment instrument that focuses on distinctive domains [context of discussion, communication and detection of error, management of error, empathy, use of electronic medical record (EMR) and electronic medical information resources (EMIR), and global rating]. The aim was to test feasibility, acceptability, and reliability of the method. Faculty and standardized patients (SPs) evaluated 56 trainees using the instrument. The inter-rater reliability of agreement between faculty was substantial (Fleiss k = 0.71) and the intraclass correlation coefficient was excellent (ICC = 0.80). The measured agreement between faculty and SP evaluations of residents was lower (Fleiss k = 0.36). The instrument showed good conformity (ICC = 0.74). The majority of the trainees (75 %) had satisfactory or higher performance in all six assessed domains and 86 % found the OSCE to be realistic. Sixty percent reported not receiving feedback on EMR use and asked for subsequent training. An OSCE-based instrument using a medical error scenario can be used to assess competency in professionalism, communication, using EMRs and managing medical errors.

  3. Methodological Approach for Performing Human Reliability and Error Analysis in Railway Transportation System

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2011-10-01

    Full Text Available Today, billions of dollars are being spent annually worldwide to develop, manufacture, and operate transportation systems such as trains, ships, aircraft, and motor vehicles. Around 70 to 90 percent of transportation crashes are, directly or indirectly, the result of human error. In fact, with the development of technology, system reliability has increased dramatically during the past decades, while human reliability has remained unchanged over the same period. Accordingly, human error is now considered the most significant source of accidents or incidents in safety-critical systems. The aim of the paper is the proposal of a methodological approach to improve transportation system reliability, and in particular railway transportation system reliability. The methodology presented is based on Failure Modes, Effects and Criticality Analysis (FMECA) and Human Reliability Analysis (HRA).

  4. A human error analysis methodology, AGAPE-ET, for emergency tasks in nuclear power plants and its application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejeon (Korea)]

    2002-03-01

    This report presents a procedurised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. The AGAPE-ET method is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified considering the characteristics of the performance of each cognitive function and the influencing mechanism of PIFs on the cognitive function. Error analysis items have then been determined from the identified error causes or error-likely situations to help cue or guide the analysts in the overall human error analysis. A human error analysis procedure based on the error analysis items is organised. The basic scheme for the quantification of HEP consists of multiplying the BHEP assigned by the error analysis item by the weight from the influencing factors decision tree (IFDT) constituted for each cognitive function. The method can be characterised by the structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need only carry out the analysis for the relevant cognitive functions. The report also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results. 42 refs., 7 figs., 36 tabs. (Author)

  5. Examiner Errors on the Reynolds Intellectual Assessment Scales Committed by Graduate Student Examiners

    Science.gov (United States)

    Loe, Scott A.

    2014-01-01

    Protocols from 108 administrations of the Reynolds Intellectual Assessment Scales were evaluated to determine the frequency of examiner errors and their impact on the accuracy of three test composite scores, the Composite Ability Index (CIX), Verbal Ability Index (VIX), and Nonverbal Ability Index (NIX). Students committed at least one…

  6. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    Science.gov (United States)

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward
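
    A minimal simulation of the account described here is a delta-rule learner whose reward prediction error shrinks across trials as the predicted value converges on the obtained reward. The learning rate, reward value, and trial count below are arbitrary illustrative choices.

        # Delta-rule (Rescorla-Wagner style) value learning: the prediction error
        # delta = r - V shrinks with learning, mirroring a diminishing feedback signal.
        alpha = 0.2      # learning rate (illustrative)
        reward = 1.0     # reward delivered on each trial
        value = 0.0      # initial predicted value of the chosen option

        for trial in range(1, 11):
            delta = reward - value        # reward prediction error
            value += alpha * delta        # update the prediction
            print(f"trial {trial:2d}: prediction error = {delta:.3f}, value = {value:.3f}")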

  7. The Role of Human Error in Design, Construction, and Reliability of Marine Structures.

    Science.gov (United States)

    1994-10-01

    ...The entire process is iterative (the design spiral) [Taggart, 1980]. The preliminary design... quantitative analyses... ...of the MSIP project [Bea, 1993] indicated that there were four general approaches that should be considered in developing human error tolerant...

  8. Support of protective work of human error in a nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizawa, Yuriko [Tokyo Electric Power Co., Inc. (Japan)

    1999-12-01

    The nuclear power plant human factors group of the Tokyo Electric Power Co., Ltd. supports various human error prevention work conducted at the nuclear power plant. Its main research themes are studies on human factors in the operation of a nuclear power plant, studies on recovery, and common basic studies on human factors. In addition, on the basis of the information obtained, assistance to the human error prevention work conducted at the nuclear power plant, as well as development for its practical use, was promoted. In particular, for sharing information on dangers, various kinds of assistance were promoted, such as a proposal of a practical example analysis method to understand danger-related information effectively and faithfully rather than superficially, construction of a database to conveniently share such information, and a survey of non-accident operations to obtain hints for effective promotion of the prevention work. This report introduces the assistance and investigation for effective sharing of danger-related information for various human error prevention actions conducted mainly in nuclear power plants. (G.K.)

  9. In-plant reliability data base for nuclear plant components: a feasibility study on human error information

    Energy Technology Data Exchange (ETDEWEB)

    Borkowski, R.J.; Fragola, J.R.; Schurman, D.L.; Johnson, J.W.

    1984-03-01

    This report documents the procedure and final results of a feasibility study which examined the usefulness of nuclear plant maintenance work requests in the IPRDS as tools for understanding human error and its influence on component failure and repair. Developed in this study were (1) a set of criteria for judging the quality of a plant maintenance record set for studying human error; (2) a scheme for identifying human errors in the maintenance records; and (3) two taxonomies (engineering-based and psychology-based) for categorizing and coding human error-related events.

  10. Assessing the effect of estimation error on risk-adjusted CUSUM chart performance.

    Science.gov (United States)

    2015-12-01

    Mark A. Jones, Stefan H. Steiner. Assessing the effect of estimation error on risk-adjusted CUSUM chart performance. Int J Qual Health Care (2012) 24(2): 176–181 doi: 10.1093/intqhc/mzr082. The authors would like to correct an error identified in the above paper. Table 5 included incorrect information. The correct table has been reprinted below. Furthermore, in the discussion on p. 180 of this paper, one of the incorrect numbers in Table 5 was quoted. This section is reproduced below with the correct numbers. In the case of homogeneous patients where adverse event risk was assumed to be constant at 6.6% the estimated level of estimation error: SD (ARL0) = 85.9 was less than the equivalent risk-adjusted scenario where SD (ARL0) = 89.2 but only by around 4%.

  11. Human Error Probabilites (HEPs) for generic tasks and Performance Shaping Factors (PSFs) selected for railway operations

    DEFF Research Database (Denmark)

    Thommesen, Jacob; Andersen, Henning Boje

    ...collaboration with Banedanmark. The estimates provided are based on the HRA literature and primarily the HEART method, which has recently been adapted for railway tasks by the British Rail Safety and Standards Board (RSSB). The method presented in this report differs from the RSSB tool by supporting an analysis at task level, which can be performed with fewer resources than a more detailed analysis of specific errors for each task. The generic tasks are presented with estimated Human Error Probabilities (HEPs) based on and extrapolated from the HRA literature, and estimates are compared with samples of measures... ...on estimates derived from industries other than rail and the general warning that a task-based analysis is less precise than an error-based one. The authors recommend that estimates be adjusted to actual measures of task failures when feasible.
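
    For context, HEART combines a generic task type's nominal unreliability with error-producing condition (EPC) multipliers weighted by the assessor's judged proportion of affect. The sketch below implements that standard combination rule; the nominal value and EPC figures are placeholders, not values from the railway adaptation discussed in the report.

        # HEART-style calculation: each EPC contributes a factor of
        # ((max_multiplier - 1) * assessed_proportion_of_affect + 1).
        def heart_hep(nominal_hep: float, epcs: list[tuple[float, float]]) -> float:
            hep = nominal_hep
            for max_multiplier, proportion_of_affect in epcs:
                hep *= (max_multiplier - 1.0) * proportion_of_affect + 1.0
            return min(hep, 1.0)

        # Illustrative: a routine task (nominal HEP 0.02) affected by time shortage
        # (max x11, judged 40% relevant) and distraction (max x5, judged 20% relevant).
        print(heart_hep(0.02, [(11.0, 0.4), (5.0, 0.2)]))   # 0.02 * 5.0 * 1.8 = 0.18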

  12. Improved assessment of multiple sclerosis lesion segmentation agreement via detection and outline error estimates

    Directory of Open Access Journals (Sweden)

    Wack David S

    2012-07-01

    Full Text Available Abstract Background Presented is the method "Detection and Outline Error Estimates" (DOEE) for assessing rater agreement in the delineation of multiple sclerosis (MS) lesions. The DOEE method divides operator or rater assessment into two parts: 1) Detection Error (DE) -- rater agreement in detecting the same regions to mark, and 2) Outline Error (OE) -- agreement of the raters in outlining of the same lesion. Methods DE, OE and Similarity Index (SI) values were calculated for two raters tested on a set of 17 fluid-attenuated inversion-recovery (FLAIR) images of patients with MS. DE, OE, and SI values were tested for dependence with mean total area (MTA) of the raters' Regions of Interest (ROIs). Results When correlated with MTA, neither DE (ρ = .056, p = .83) nor the ratio of OE to MTA (ρ = .23, p = .37), referred to as Outline Error Rate (OER), exhibited significant correlation. In contrast, SI is found to be strongly correlated with MTA (ρ = .75, p ... Conclusions The DE and OER indices are proposed as a better method than SI for comparing rater agreement of ROIs, which also provide specific information for raters to improve their agreement.
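
    To make the contrast concrete, the sketch below computes the Similarity Index (Dice coefficient) for two binary lesion masks together with a crude outline-error term for a lesion both raters detected (the area marked by only one of them). This is a simplified illustration of the quantities being compared, not the authors' full DOEE procedure, and the toy masks are invented.

        import numpy as np

        # Toy binary lesion masks from two raters (True = voxel marked as lesion).
        rater_a = np.zeros((8, 8), dtype=bool)
        rater_b = np.zeros((8, 8), dtype=bool)
        rater_a[2:5, 2:5] = True          # rater A outlines a 3x3 lesion
        rater_b[2:5, 3:6] = True          # rater B outlines the same lesion, shifted

        intersection = np.logical_and(rater_a, rater_b).sum()
        similarity_index = 2.0 * intersection / (rater_a.sum() + rater_b.sum())

        # Crude outline-error term for this jointly detected lesion:
        # the area marked by exactly one of the two raters.
        outline_error = np.logical_xor(rater_a, rater_b).sum()
        mean_total_area = (rater_a.sum() + rater_b.sum()) / 2.0

        print(f"SI  = {similarity_index:.2f}")
        print(f"OE  = {outline_error} voxels, OER = {outline_error / mean_total_area:.2f}")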

  13. Multisite Parent-Centered Risk Assessment to Reduce Pediatric Oral Chemotherapy Errors

    Science.gov (United States)

    Walsh, Kathleen E.; Mazor, Kathleen M.; Roblin, Douglas; Biggins, Colleen; Wagner, Joann L.; Houlahan, Kathleen; Li, Justin W.; Keuker, Christopher; Wasilewski-Masker, Karen; Donovan, Jennifer; Kanaan, Abir; Weingart, Saul N.

    2013-01-01

    Purpose: Observational studies describe high rates of errors in home oral chemotherapy use in children. In hospitals, proactive risk assessment methods help front-line health care workers develop error prevention strategies. Our objective was to engage parents of children with cancer in a multisite study using proactive risk assessment methods to identify how errors occur at home and propose risk reduction strategies. Methods: We recruited parents from three outpatient pediatric oncology clinics in the northeast and southeast United States to participate in failure mode and effects analyses (FMEA). An FMEA is a systematic, team-based proactive risk assessment approach to understanding the ways a process can fail and to developing prevention strategies. Steps included diagramming the process, brainstorming and prioritizing failure modes (places where things go wrong), and proposing risk reduction strategies. We focused on home oral chemotherapy administration after a change in dose because prior studies identified this area as high risk. Results: Parent teams consisted of four parents at two of the sites and 10 at the third. Parents developed a 13-step process map, with two to 19 failure modes per step. The highest priority failure modes included miscommunication when receiving instructions from the clinician (caused by conflicting instructions or parent lapses) and unsafe chemotherapy handling at home. Recommended risk reduction strategies included novel uses of technology to improve parent access to information, clinicians, and other parents while at home. Conclusion: Parents of pediatric oncology patients readily participated in a proactive risk assessment method, identifying processes that pose a risk for medication errors involving home oral chemotherapy. PMID:23633976
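
    In many FMEA variants the team prioritizes failure modes with a risk priority number (RPN), the product of severity, occurrence, and detectability scores. The sketch below shows that common scoring step with invented failure modes and scores; the cited study's exact prioritization scheme may differ.

        # Common FMEA prioritization: RPN = severity x occurrence x detectability,
        # each scored 1-10 by the team. Failure modes and scores are invented.
        failure_modes = [
            ("Conflicting dose instructions at the clinic visit", 9, 6, 5),
            ("Unsafe chemotherapy handling at home",              8, 4, 6),
            ("Missed dose after a schedule change",               7, 5, 4),
        ]

        ranked = sorted(
            ((name, sev * occ * det) for name, sev, occ, det in failure_modes),
            key=lambda item: item[1],
            reverse=True,
        )

        for name, rpn in ranked:
            print(f"RPN {rpn:3d}  {name}")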

  14. Human error in medical practice: an unavoidable presence (El error en la práctica médica: una presencia ineludible)

    Directory of Open Access Journals (Sweden)

    Gladis Adriana Vélez Álvarez

    2006-01-01

    Full Text Available Making mistakes is a human characteristic and a mechanism to learn, but at the same time it may become a threat to human beings in some scenarios. Aviation and Medicine are good examples of this. Some data are presented about the frequency of error in Medicine, its ubiquity and the circumstances that favor it. A reflection is done about how the error is being managed and why it is not more often discussed. It is proposed that the first step in learning from an error is to accept it as an unavoidable presence.

  15. Multivariate error assessment of response time histories method for dynamic systems

    Institute of Scientific and Technical Information of China (English)

    Zhen-fei ZHAN; Jie HU; Yan FU; Ren-Jye YANG; Ying-hong PENG; Jin QI

    2012-01-01

    In this paper, an integrated validation method and process are developed for multivariate dynamic systems. The principal component analysis approach is used to address multivariate correlation and dimensionality reduction, the dynamic time warping and correlation coefficient are used for error assessment, and the subject matter experts' (SMEs') opinions and principal component analysis coefficients are incorporated to provide the overall rating of the dynamic system. The proposed method and process are successfully demonstrated through a vehicle dynamic system problem.
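
    Of the ingredients listed, dynamic time warping (DTW) is the one most easily misread: it aligns two response histories that differ in timing before their discrepancy is scored. Below is a minimal textbook DTW distance between two short signals, with the correlation coefficient computed alongside; the signals are invented, and this is not the paper's full PCA-based rating pipeline.

        import numpy as np

        def dtw_distance(x: np.ndarray, y: np.ndarray) -> float:
            """Classic dynamic-programming DTW with absolute-difference local cost."""
            n, m = len(x), len(y)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = abs(x[i - 1] - y[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
            return float(cost[n, m])

        t = np.linspace(0.0, 1.0, 50)
        simulation = np.sin(2 * np.pi * 2 * t)             # simulated response history
        test_data = np.sin(2 * np.pi * 2 * (t - 0.05))     # measurement, slightly delayed

        print("DTW distance:", round(dtw_distance(simulation, test_data), 3))
        print("correlation :", round(float(np.corrcoef(simulation, test_data)[0, 1]), 3))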

  16. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

    A dual-CCD simulating human eyes and neck (DSHEN) vision system is put forward. Its structure and principle are introduced. The DSHEN vision system can perform some movements simulating human eyes and neck by means of four rotating joints, and realize precise object recognizing and distance measuring in all orientations. The mathematic model of the DSHEN vision system is built, and its movement equation is solved. The coordinate error and measure precision affected by the movement parameters are analyzed by means of intersection measuring method. So a theoretic foundation for further research on automatic object recognizing and precise target tracking is provided.
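
    The distance-measuring principle behind such a dual-camera arrangement is usually presented as stereo triangulation: for an ideal rectified pair, range equals focal length times baseline divided by the disparity between the two image projections. A minimal version is sketched below with made-up camera parameters; it is not the paper's full model, which also accounts for the rotating joints.

        # Stereo triangulation for an ideal rectified camera pair (illustrative numbers).
        focal_length_px = 1200.0    # focal length expressed in pixels (assumed)
        baseline_m = 0.065          # distance between the two optical centers, m (assumed)

        def depth_from_disparity(x_left_px: float, x_right_px: float) -> float:
            disparity = x_left_px - x_right_px
            if disparity <= 0:
                raise ValueError("point must project with positive disparity")
            return focal_length_px * baseline_m / disparity

        # A feature seen at x = 652 px in the left image and x = 613 px in the right image.
        print(f"estimated distance: {depth_from_disparity(652.0, 613.0):.2f} m")   # ~2.0 m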

  17. Does the A-not-B error in adult pet dogs indicate sensitivity to human communication?

    Science.gov (United States)

    Kis, Anna; Topál, József; Gácsi, Márta; Range, Friederike; Huber, Ludwig; Miklósi, Adám; Virányi, Zsófia

    2012-07-01

    Recent dog-infant comparisons have indicated that the experimenter's communicative signals in object hide-and-search tasks increase the probability of perseverative (A-not-B) errors in both species (Topál et al. 2009). These behaviourally similar results, however, might reflect different mechanisms in dogs and in children. Similar errors may occur if the motor response of retrieving the object during the A trials cannot be inhibited in the B trials or if the experimenter's movements and signals toward the A hiding place in the B trials ('sham-baiting') distract the dogs' attention. In order to test these hypotheses, we tested dogs similarly to Topál et al. (2009) but eliminated the motor search in the A trials and 'sham-baiting' in the B trials. We found that neither an inability to inhibit previously rewarded motor response nor insufficiencies in their working memory and/or attention skills can explain dogs' erroneous choices. Further, we replicated the finding that dogs have a strong tendency to commit the A-not-B error after ostensive-communicative hiding and demonstrated the crucial effect of socio-communicative cues as the A-not-B error diminishes when location B is ostensively enhanced. These findings further support the hypothesis that the dogs' A-not-B error may reflect a special sensitivity to human communicative cues. Such object-hiding and search tasks provide a typical case for how susceptibility to human social signals could (mis)lead domestic dogs.

  18. Shortcut in DIC error assessment induced by image interpolation used for subpixel shifting

    Science.gov (United States)

    Bornert, Michel; Doumalin, Pascal; Dupré, Jean-Christophe; Poilane, Christophe; Robert, Laurent; Toussaint, Evelyne; Wattrisse, Bertrand

    2017-04-01

    In order to characterize errors of Digital Image Correlation (DIC) algorithms, sets of virtual images are often generated from a reference image by in-plane sub-pixel translations. This leads to the determination of the well-known S-shaped bias error curves and their corresponding random error curves. As images are usually shifted by using interpolation schemes similar to those used in DIC algorithms, the question of a possible bias in the quantification of measurement uncertainties of DIC softwares is raised and constitutes the central question addressed in this paper. In this collaborative work, synthetic numerically shifted images are built using two methods: one based on interpolations of the reference image and the other based on the transformation of an analytic texture function. Images are analyzed using an in-house subset-based DIC software and results are compared and discussed. The effect of image noise is also highlighted. The main result is that the a priori choices made to numerically shift the reference image modify DIC results and may lead to wrong conclusions in terms of DIC error assessment.

  19. Analytical Assessment of Simultaneous Parallel Approach Feasibility from Total System Error

    Science.gov (United States)

    Madden, Michael M.

    2014-01-01

    In a simultaneous paired approach to closely-spaced parallel runways, a pair of aircraft flies in close proximity on parallel approach paths. The aircraft pair must maintain a longitudinal separation within a range that avoids wake encounters and, if one of the aircraft blunders, avoids collision. Wake avoidance defines the rear gate of the longitudinal separation. The lead aircraft generates a wake vortex that, with the aid of crosswinds, can travel laterally onto the path of the trail aircraft. As runway separation decreases, the wake has less distance to traverse to reach the path of the trail aircraft. The total system error of each aircraft further reduces this distance. The total system error is often modeled as a probability distribution function. Therefore, Monte-Carlo simulations are a favored tool for assessing a "safe" rear-gate. However, safety for paired approaches typically requires that a catastrophic wake encounter be a rare one-in-a-billion event during normal operation. Using a Monte-Carlo simulation to assert this event rarity with confidence requires a massive number of runs. Such large runs do not lend themselves to rapid turn-around during the early stages of investigation when the goal is to eliminate the infeasible regions of the solution space and to perform trades among the independent variables in the operational concept. One can employ statistical analysis using simplified models more efficiently to narrow the solution space and identify promising trades for more in-depth investigation using Monte-Carlo simulations. These simple, analytical models not only have to address the uncertainty of the total system error but also the uncertainty in navigation sources used to alert an abort of the procedure. This paper presents a method for integrating total system error, procedure abort rates, avionics failures, and surveillance errors into a statistical analysis that identifies the likely feasible runway separations for simultaneous paired
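
    One simple analytical ingredient of such an assessment is the probability that the combined lateral total system error of the two aircraft closes the gap left between the runway spacing and the crosswind-driven wake transport. The sketch below computes only that one-sided Gaussian tail with invented numbers; it is not the paper's integrated model, which also folds in blunders, abort rates, avionics failures, and surveillance errors.

        import math

        # Illustrative-only numbers: lateral runway spacing and per-aircraft lateral
        # total system error (TSE) modeled as independent zero-mean Gaussians.
        runway_spacing_m = 230.0
        tse_sigma_lead_m = 25.0
        tse_sigma_trail_m = 25.0
        wake_drift_m = 60.0          # assumed lateral wake transport due to crosswind

        # Relative lateral error of the pair is Gaussian with the combined sigma.
        sigma_rel = math.hypot(tse_sigma_lead_m, tse_sigma_trail_m)

        # Probability that the remaining lateral gap is closed by the combined error.
        gap = runway_spacing_m - wake_drift_m
        p_overlap = 0.5 * math.erfc(gap / (sigma_rel * math.sqrt(2.0)))
        print(f"P(lateral overlap) ~ {p_overlap:.1e}")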

  20. The examination of commercial printing defects to assess common origin, batch variation, and error rate.

    Science.gov (United States)

    LaPorte, Gerald M; Stephens, Joseph C; Beuchel, Amanda K

    2010-01-01

    The examination of printing defects, or imperfections, found on printed or copied documents has been recognized as a generally accepted approach for linking questioned documents to a common source. This research paper will highlight the results from two mutually exclusive studies. The first involved the examination and characterization of printing defects found in a controlled production run of 500,000 envelopes bearing text and images. It was concluded that printing defects are random occurrences and that morphological differences can be used to identify variations within the same production batch. The second part incorporated a blind study to assess the error rate of associating randomly selected envelopes from different retail locations to a known source. The examination was based on the comparison of printing defects in the security patterns found in some envelopes. The results demonstrated that it is possible to associate envelopes to a common origin with a 0% error rate.

  1. Joint Estimation of Contamination, Error and Demography for Nuclear DNA from Ancient Humans.

    Directory of Open Access Journals (Sweden)

    Fernando Racimo

    2016-04-01

    Full Text Available When sequencing an ancient DNA sample from a hominin fossil, DNA from present-day humans involved in excavation and extraction will be sequenced along with the endogenous material. This type of contamination is problematic for downstream analyses as it will introduce a bias towards the population of the contaminating individual(s). Quantifying the extent of contamination is a crucial step as it allows researchers to account for possible biases that may arise in downstream genetic analyses. Here, we present an MCMC algorithm to co-estimate the contamination rate, sequencing error rate and demographic parameters-including drift times and admixture rates-for an ancient nuclear genome obtained from human remains, when the putative contaminating DNA comes from present-day humans. We assume we have a large panel representing the putative contaminant population (e.g. European, East Asian or African). The method is implemented in a C++ program called 'Demographic Inference with Contamination and Error' (DICE). We applied it to simulations and genome data from ancient Neanderthals and modern humans. With reasonable levels of genome sequence coverage (>3X), we find we can recover accurate estimates of all these parameters, even when the contamination rate is as high as 50%.

  2. Faces in places: humans and machines make similar face detection errors.

    Directory of Open Access Journals (Sweden)

    Bernard Marius 't Hart

    Full Text Available The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock configuration on Mars or a toast's roast patterns. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most wide-spread algorithm for such applications (the "Viola-Jones" algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections ("real faces"), false positives ("illusory faces") and correctly rejected locations ("non faces"). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increases overall performance, but does not change the pattern of results. When replacing the eye movement by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm, when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features as those encoded in the early visual system.

  3. Factors controlling volume errors through 2D gully erosion assessment: guidelines for optimal survey design

    Science.gov (United States)

    Castillo, Carlos; Pérez, Rafael

    2017-04-01

    The assessment of gully erosion volumes is essential for the quantification of soil losses derived from this relevant degradation process. Traditionally, 2D and 3D approaches has been applied for this purpose (Casalí et al., 2006). Although innovative 3D approaches have recently been proposed for gully volume quantification, a renewed interest can be found in literature regarding the useful information that cross-section analysis still provides in gully erosion research. Moreover, the application of methods based on 2D approaches can be the most cost-effective approach in many situations such as preliminary studies with low accuracy requirements or surveys under time or budget constraints. The main aim of this work is to examine the key factors controlling volume error variability in 2D gully assessment by means of a stochastic experiment involving a Monte Carlo analysis over synthetic gully profiles in order to 1) contribute to a better understanding of the drivers and magnitude of gully erosion 2D-surveys uncertainty and 2) provide guidelines for optimal survey designs. Owing to the stochastic properties of error generation in 2D volume assessment, a statistical approach was followed to generate a large and significant set of gully reach configurations to evaluate quantitatively the influence of the main factors controlling the uncertainty of the volume assessment. For this purpose, a simulation algorithm in Matlab® code was written, involving the following stages: - Generation of synthetic gully area profiles with different degrees of complexity (characterized by the cross-section variability) - Simulation of field measurements characterised by a survey intensity and the precision of the measurement method - Quantification of the volume error uncertainty as a function of the key factors In this communication we will present the relationships between volume error and the studied factors and propose guidelines for 2D field surveys based on the minimal survey

  4. Measurement error of self-reported physical activity levels in New York City: assessment and correction.

    Science.gov (United States)

    Lim, Sungwoo; Wyker, Brett; Bartley, Katherine; Eisenhower, Donna

    2015-05-01

    Because it is difficult to objectively measure population-level physical activity levels, self-reported measures have been used as a surveillance tool. However, little is known about their validity in populations living in dense urban areas. We aimed to assess the validity of self-reported physical activity data against accelerometer-based measurements among adults living in New York City and to apply a practical tool to adjust for measurement error in complex sample data using a regression calibration method. We used 2 components of data: 1) dual-frame random digit dialing telephone survey data from 3,806 adults in 2010-2011 and 2) accelerometer data from a subsample of 679 survey participants. Self-reported physical activity levels were measured using a version of the Global Physical Activity Questionnaire, whereas data on weekly moderate-equivalent minutes of activity were collected using accelerometers. Two self-reported health measures (obesity and diabetes) were included as outcomes. Participants with higher accelerometer values were more likely to underreport the actual levels. (Accelerometer values were considered to be the reference values.) After correcting for measurement errors, we found that associations between outcomes and physical activity levels were substantially deattenuated. Despite difficulties in accurately monitoring physical activity levels in dense urban areas using self-reported data, our findings show the importance of performing a well-designed validation study because it allows for understanding and correcting measurement errors.
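    A toy sketch of the regression-calibration idea described above is shown below: an accelerometer-measured subsample is used to model "true" activity from self-reports, and the calibrated values replace the self-reports in the outcome model. All data are simulated and the variable names and effect sizes are illustrative only, not the NYC survey data or the authors' exact model.

```python
# Toy regression-calibration sketch with simulated data (not the survey data).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, n_val = 3000, 600

true_pa = rng.gamma(shape=2.0, scale=150.0, size=n)         # "true" weekly active minutes
self_report = true_pa * rng.lognormal(0.3, 0.5, size=n)     # over-reported and noisy
logit_p = -1.0 - 0.004 * true_pa                             # outcome depends on the truth
obese = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

# Validation subsample: accelerometer values stand in for the truth.
val = rng.choice(n, n_val, replace=False)
cal = sm.OLS(true_pa[val], sm.add_constant(self_report[val])).fit()

# Replace each self-report with its calibrated expectation E[true | self-report].
pa_calibrated = cal.predict(sm.add_constant(self_report))

naive = sm.Logit(obese, sm.add_constant(self_report)).fit(disp=0)
corrected = sm.Logit(obese, sm.add_constant(pa_calibrated)).fit(disp=0)
print("naive slope:     ", naive.params[1])       # attenuated toward zero
print("calibrated slope:", corrected.params[1])   # closer to the data-generating value
```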

  5. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.

    Science.gov (United States)

    Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements, that differ in frequency and amplitude of the movement. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative errors upper bounds are found respectively in the range [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°]. Alongside dynamic, static accuracy is thoroughly investigated, also with an emphasis on convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for the human motion related application.
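    The complementary class of sensor-fusion algorithms mentioned above can be illustrated with a one-axis sketch in which gyroscope integration is blended with an accelerometer-derived tilt estimate. The signals, sample rate, and blend coefficient are assumptions for illustration, not the filter or robotic-arm trajectories evaluated in the study.

```python
# One-axis complementary-filter sketch for IMU tilt estimation.
# Signals, sample rate and blend coefficient are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
fs = 100.0                        # sample rate in Hz (assumed)
dt = 1.0 / fs
t = np.arange(0.0, 10.0, dt)

true_angle = 20.0 * np.sin(2 * np.pi * 0.5 * t)                        # tilt in degrees
gyro = np.gradient(true_angle, dt) + rng.normal(0, 0.5, t.size) + 0.3  # deg/s, noisy + biased
accel_angle = true_angle + rng.normal(0, 2.0, t.size)                  # noisy tilt from gravity

alpha = 0.98                      # weight on the integrated gyroscope path
est = np.zeros_like(t)
for k in range(1, t.size):
    gyro_path = est[k - 1] + gyro[k] * dt                      # fast but drifts
    est[k] = alpha * gyro_path + (1 - alpha) * accel_angle[k]  # accelerometer anchors the drift

rms_err = np.sqrt(np.mean((est - true_angle) ** 2))
print(f"RMS orientation error: {rms_err:.2f} deg")
```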

  6. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion

    Science.gov (United States)

    Ricci, Luca; Taffoni, Fabrizio

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements, that differ in frequency and amplitude of the movement. We found that dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it is dependent on the amplitude and frequency of the movement and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Absolute and relative errors upper bounds are found respectively in the range [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°]. Alongside dynamic, static accuracy is thoroughly investigated, also with an emphasis on convergence behavior of the different algorithms. Reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for the human motion related application. PMID:27612100

  7. Determining The Factors Causing Human Error Deficiencies At A Public Utility Company

    Directory of Open Access Journals (Sweden)

    F. W. Badenhorst

    2004-11-01

    Full Text Available According to Neff (1977, as cited by Bergh (1995, the westernised culture considers work important for industrial mental health. Most individuals experience work positively, which creates a positive attitude. Should this positive attitude be inhibited, workers could lose concentration and become bored, potentially resulting in some form of human error. The aim of this research was to determine the factors responsible for human error events, which lead to power supply failures at Eskom power stations. Proposals were made for the reduction of these contributing factors towards improving plant performance. The target population was 700 panel operators in Eskom’s Power Generation Group. The results showed that factors leading to human error can be reduced or even eliminated. Opsomming Neff (1977 soos aangehaal deur Bergh (1995, skryf dat in die westerse kultuur werk belangrik vir bedryfsgeestesgesondheid is. Die meeste persone ervaar werk as positief, wat ’n positiewe gesindheid kweek. Indien hierdie positiewe gesindheid geïnhibeer word, kan dit lei tot ’n gebrek aan konsentrasie by die werkers. Werkers kan verveeld raak en dit kan weer lei tot menslike foute. Die doel van hierdie navorsing is om die faktore vas te stel wat tot menslike foute lei, en wat bydra tot onderbrekings in kragvoorsiening by Eskom kragstasies. Voorstelle is gemaak vir die vermindering van hierdie bydraende faktore ten einde die kragaanleg se prestasie te verbeter. Die teiken-populasie was 700 paneel-operateurs in die Kragopwekkingsgroep by Eskom. Die resultate dui daarop dat die faktore wat aanleiding gee tot menslike foute wel verminder, of geëlimineer kan word.

  8. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    Science.gov (United States)

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H.

    2008-05-01

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the use of resolution
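    The order of magnitude of the attenuation losses quoted above can be checked with the narrow-beam attenuation law, transmitted fraction = exp(-mu * depth). The coefficients and the 2.5 cm depth in the sketch below are rough textbook-style assumptions for water; broad-beam conditions, scatter and reconstruction effects mean the numbers will not exactly match the paper's figures.

```python
# Back-of-envelope sketch of photon attenuation in a water cylinder.
# Linear attenuation coefficients and depth are approximate, illustrative values.
import math

mu_water = {"I-125 (~30 keV)": 0.38, "Tc-99m (140 keV)": 0.15}  # cm^-1, approximate
depth_cm = 2.5   # roughly the radius of a rat-sized cylinder (assumed)

for label, mu in mu_water.items():
    transmitted = math.exp(-mu * depth_cm)       # narrow-beam exponential attenuation
    print(f"{label}: {transmitted:.0%} transmitted, "
          f"{1 - transmitted:.0%} lost to attenuation at {depth_cm} cm depth")
```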

  9. Assessment of the sources of error affecting the quantitative accuracy of SPECT imaging in small animals

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Andrew B.; Franc, Benjamin L.; Gullberg, Grant T.; Hasegawa, Bruce H. [Joint Graduate Group in Bioengineering, University of California, San Francisco and University of California, Berkeley; Department of Radiology, University of California]

    2008-02-15

    Small animal SPECT imaging systems have multiple potential applications in biomedical research. Whereas SPECT data are commonly interpreted qualitatively in a clinical setting, the ability to accurately quantify measurements will increase the utility of the SPECT data for laboratory measurements involving small animals. In this work, we assess the effect of photon attenuation, scatter and partial volume errors on the quantitative accuracy of small animal SPECT measurements, first with Monte Carlo simulation and then confirmed with experimental measurements. The simulations modeled the imaging geometry of a commercially available small animal SPECT system. We simulated the imaging of a radioactive source within a cylinder of water, and reconstructed the projection data using iterative reconstruction algorithms. The size of the source and the size of the surrounding cylinder were varied to evaluate the effects of photon attenuation and scatter on quantitative accuracy. We found that photon attenuation can reduce the measured concentration of radioactivity in a volume of interest in the center of a rat-sized cylinder of water by up to 50% when imaging with iodine-125, and up to 25% when imaging with technetium-99m. When imaging with iodine-125, the scatter-to-primary ratio can reach up to approximately 30%, and can cause overestimation of the radioactivity concentration when reconstructing data with attenuation correction. We varied the size of the source to evaluate partial volume errors, which we found to be a strong function of the size of the volume of interest and the spatial resolution. These errors can result in large (>50%) changes in the measured amount of radioactivity. The simulation results were compared with and found to agree with experimental measurements. The inclusion of attenuation correction in the reconstruction algorithm improved quantitative accuracy. We also found that an improvement of the spatial resolution through the

  10. Report: Human biochemical genetics: an insight into inborn errors of metabolism

    Institute of Scientific and Technical Information of China (English)

    YU Chunli; SCOTT C. Ronald

    2006-01-01

    Inborn errors of metabolism (IEM) include a broad spectrum of defects of various gene products that affect intermediary metabolism in the body. Studying the molecular and biochemical mechanisms of these inherited disorders, systematically summarizing the disease phenotypes and natural history, and providing diagnostic rationale, methodology and treatment strategies comprise the context of human biochemical genetics. This session focused on: (1) manifestations of representative metabolic disorders; (2) the emergent technology and application of newborn screening for metabolic disorders using tandem mass spectrometry; (3) principles of managing IEM; (4) the concept of carrier testing aimed at prevention. Early detection of patients with IEM allows early intervention and more options for treatment.

  11. Revised Human Health Risk Assessment on Chlorpyrifos

    Science.gov (United States)

    We have revised our human health risk assessment and drinking water exposure assessment for chlorpyrifos that supported our October 2015 proposal to revoke all food residue tolerances for chlorpyrifos. Learn about the revised analysis.

  12. Performance measure of image and video quality assessment algorithms: subjective root-mean-square error

    Science.gov (United States)

    Nuutinen, Mikko; Virtanen, Toni; Häkkinen, Jukka

    2016-03-01

    Evaluating algorithms used to assess image and video quality requires performance measures. Traditional performance measures (e.g., Pearson's linear correlation coefficient, Spearman's rank-order correlation coefficient, and root mean square error) compare quality predictions of algorithms to subjective mean opinion scores (mean opinion score/differential mean opinion score). We propose a subjective root-mean-square error (SRMSE) performance measure for evaluating the accuracy of algorithms used to assess image and video quality. The SRMSE performance measure takes into account dispersion between observers. The other important property of the SRMSE performance measure is its measurement scale, which is calibrated to units of the number of average observers. The results of the SRMSE performance measure indicate the extent to which the algorithm can replace the subjective experiment (as the number of observers). Furthermore, we have presented the concept of target values, which define the performance level of the ideal algorithm. We have calculated the target values for all sample sets of the CID2013, CVD2014, and LIVE multiply distorted image quality databases. The target values and MATLAB implementation of the SRMSE performance measure are available on the project page of this study.
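    The traditional measures named above are unambiguous and can be sketched directly; the dispersion-aware quantity at the end of the sketch only illustrates the idea of scaling prediction error by observer dispersion and is not the paper's exact SRMSE formulation. The simulated ratings and algorithm scores are placeholders.

```python
# Sketch of traditional quality-assessment performance measures, plus a
# dispersion-aware variant in the spirit of SRMSE. Simulated data only;
# the "srmse_like" normalization is a simplification, not the paper's formula.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_images, n_observers = 50, 20

true_quality = rng.uniform(1, 5, n_images)                       # latent quality
ratings = true_quality[:, None] + rng.normal(0, 0.7, (n_images, n_observers))
mos = ratings.mean(axis=1)                                       # mean opinion scores
obs_sd = ratings.std(axis=1, ddof=1)                             # observer dispersion

predictions = true_quality + rng.normal(0, 0.4, n_images)        # an algorithm's scores

plcc = stats.pearsonr(predictions, mos)[0]
srocc = stats.spearmanr(predictions, mos)[0]
rmse = np.sqrt(np.mean((predictions - mos) ** 2))
srmse_like = rmse / np.mean(obs_sd)     # error expressed relative to observer dispersion

print(f"PLCC {plcc:.3f}  SROCC {srocc:.3f}  RMSE {rmse:.3f}  "
      f"dispersion-scaled RMSE {srmse_like:.3f}")
```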

  13. Linguistic Discrimination in Writing Assessment: How Raters React to African American "Errors," ESL Errors, and Standard English Errors on a State-Mandated Writing Exam

    Science.gov (United States)

    Johnson, David; VanBrackle, Lewis

    2012-01-01

    Raters of Georgia's (USA) state-mandated college-level writing exam, which is intended to ensure a minimal university-level writing competency, are trained to grade holistically when assessing these exams. A guiding principle in holistic grading is to not focus exclusively on any one aspect of writing but rather to give equal weight to style,…

  14. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    Science.gov (United States)

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administration in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audio taped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  15. How to Cope with the Rare Human Error Events Involved with organizational Factors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Luo, Meiling; Lee, Yong Hee [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    The current human error guidelines (e.g. US DOD handbooks, US NRC guidelines) are representative tools for preventing human error. These tools, however, are limited in that they do not cover all operating situations and circumstances, such as design-basis events; in other words, they address only foreseeable, standardized operating situations and circumstances. In this study, our research team proposed an evidence-based approach, such as the UK's safety case, to cope with rare human error events such as the TMI, Chernobyl and Fukushima accidents, which are representative events involving rare human errors. We defined 'rare human errors' as events with the following three characteristics: extremely low frequency; an extremely complicated structure; and extremely serious damage to human life and property. A safety case is a structured argument, supported by evidence, intended to justify that a system is acceptably safe. UK defence standard 00-56 issue 4 states that such an evidence-based approach can be contrasted with a prescriptive approach to safety certification, which requires safety to be justified using a prescribed process. Safety management and safety regulatory activities based on a safety case are effective for controlling organizational factors in terms of integrated safety management. In particular, for safety issues relevant to public acceptance, a safety case is useful for providing practical evidence to the public in a reasonable way. The European Union, including the UK, has developed the concept of an engineered safety management system that uses the safety case to deal with public acceptance. In the Korean nuclear industry, the Korea Atomic Energy Research Institute has performed initial basic research to adapt the safety case in the field of radioactive waste according to IAEA SSG-23 (KAERI/TR-4497, 4531). Apart from radioactive waste, there has been no attempt to adopt the safety case yet. Most incidents and accidents involving humans during NPP operation have a tendency

  16. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations. [PWR; BWR

    Energy Technology Data Exchange (ETDEWEB)

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.

  17. VR-based training and assessment in ultrasound-guided regional anesthesia: from error analysis to system design.

    LENUS (Irish Health Repository)

    2011-01-01

    If VR-based medical training and assessment is to improve patient care and safety (i.e. a genuine health gain), it has to be based on clinically relevant measurement of performance. Metrics on errors are particularly useful for capturing and correcting undesired behaviors before they occur in the operating room. However, translating clinically relevant metrics and errors into meaningful system design is a challenging process. This paper discusses how an existing task and error analysis was translated into the system design of a VR-based training and assessment environment for Ultrasound Guided Regional Anesthesia (UGRA).

  18. Assessing the impact of differential genotyping errors on rare variant tests of association.

    Science.gov (United States)

    Mayer-Jochimsen, Morgan; Fast, Shannon; Tintle, Nathan L

    2013-01-01

    Genotyping errors are well-known to impact the power and type I error rate in single marker tests of association. Genotyping errors that happen according to the same process in cases and controls are known as non-differential genotyping errors, whereas genotyping errors that occur with different processes in the cases and controls are known as differential genotype errors. For single marker tests, non-differential genotyping errors reduce power, while differential genotyping errors increase the type I error rate. However, little is known about the behavior of the new generation of rare variant tests of association in the presence of genotyping errors. In this manuscript we use a comprehensive simulation study to explore the effects of numerous factors on the type I error rate of rare variant tests of association in the presence of differential genotyping error. We find that increased sample size, decreased minor allele frequency, and an increased number of single nucleotide variants (SNVs) included in the test all increase the type I error rate in the presence of differential genotyping errors. We also find that the greater the relative difference in case-control genotyping error rates the larger the type I error rate. Lastly, as is the case for single marker tests, genotyping errors classifying the common homozygote as the heterozygote inflate the type I error rate significantly more than errors classifying the heterozygote as the common homozygote. In general, our findings are in line with results from single marker tests. To ensure that type I error inflation does not occur when analyzing next-generation sequencing data careful consideration of study design (e.g. use of randomization), caution in meta-analysis and using publicly available controls, and the use of standard quality control metrics is critical.
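    A small simulation in the spirit of the study is sketched below: genotypes at rare variants are generated under the null (no true association), cases are mis-genotyped at a higher rate than controls, and a simple burden-style test is evaluated to show the inflated type I error. The error model, allele frequency, sample sizes and the choice of a t-test on allele counts are all illustrative assumptions, not the study's actual test battery.

```python
# Sketch: type I error inflation of a burden-style rare variant test under
# differential genotyping error. All parameters are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_cases = n_controls = 1000
n_snvs, maf = 20, 0.01

def genotypes(n, err_het_rate):
    """Genotypes (0/1/2) with some common homozygotes miscalled as heterozygotes."""
    g = rng.binomial(2, maf, size=(n, n_snvs))
    flip = (g == 0) & (rng.random((n, n_snvs)) < err_het_rate)
    return np.where(flip, 1, g)

def burden_p(cases, controls):
    """Compare per-person minor allele counts across the region (simple burden test)."""
    return stats.ttest_ind(cases.sum(axis=1), controls.sum(axis=1)).pvalue

n_sig, n_rep, alpha = 0, 500, 0.05
for _ in range(n_rep):
    # Null model: no true association, but cases have a higher miscall rate.
    cases = genotypes(n_cases, err_het_rate=0.005)
    controls = genotypes(n_controls, err_het_rate=0.001)
    if burden_p(cases, controls) < alpha:
        n_sig += 1

print(f"empirical type I error: {n_sig / n_rep:.3f} (nominal {alpha})")
```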

  19. Assessment factors for human health risk assessment: A discussion paper

    NARCIS (Netherlands)

    Vermeire, T.; Stevenson, H.; Pieters, M.N.; Rennen, M.; Slob, W.; Hakkert, B.C.

    1999-01-01

    The general goal of this discussion paper is to contribute toward the further harmonization of human health risk assessment. It first discusses the development of a formal, harmonized set of assessment factors. The status quo with regard to assessment factors is reviewed, that is, the type of factor

  20. Ultrasound Images of the Tongue: A Tutorial for Assessment and Remediation of Speech Sound Errors.

    Science.gov (United States)

    Preston, Jonathan L; McAllister Byun, Tara; Boyce, Suzanne E; Hamilton, Sarah; Tiede, Mark; Phillips, Emily; Rivera-Campos, Ahmed; Whalen, Douglas H

    2017-01-03

    Diagnostic ultrasound imaging has been a common tool in medical practice for several decades. It provides a safe and effective method for imaging structures internal to the body. There has been a recent increase in the use of ultrasound technology to visualize the shape and movements of the tongue during speech, both in typical speakers and in clinical populations. Ultrasound imaging of speech has greatly expanded our understanding of how sounds articulated with the tongue (lingual sounds) are produced. Such information can be particularly valuable for speech-language pathologists. Among other advantages, ultrasound images can be used during speech therapy to provide (1) illustrative models of typical (i.e. "correct") tongue configurations for speech sounds, and (2) a source of insight into the articulatory nature of deviant productions. The images can also be used as an additional source of feedback for clinical populations learning to distinguish their better productions from their incorrect productions, en route to establishing more effective articulatory habits. Ultrasound feedback is increasingly used by scientists and clinicians as both the expertise of the users increases and as the expense of the equipment declines. In this tutorial, procedures are presented for collecting ultrasound images of the tongue in a clinical context. We illustrate these procedures in an extended example featuring one common error sound, American English /r/. Images of correct and distorted /r/ are used to demonstrate (1) how to interpret ultrasound images, (2) how to assess tongue shape during production of speech sounds, (3) how to categorize tongue shape errors, and (4) how to provide visual feedback to elicit a more appropriate and functional tongue shape. We present a sample protocol for using real-time ultrasound images of the tongue for visual feedback to remediate speech sound errors. Additionally, example data are shown to illustrate outcomes with the procedure.

  1. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L. [Pacific Science and Engineering Group, San Diego, CA (United States)] [and others]

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.

  2. Quantitative assessment of errors in monitoring landcover changes by comparison of maps

    Directory of Open Access Journals (Sweden)

    Jean Francois Mas

    2012-02-01

    Full Text Available Many studies aimed at assessing land-cover changes are based upon the comparison of maps elaborated at different dates. This comparison allows the calculation of change rates as well as the generation of more detailed data, such as the transition matrix and the change map. In this study, we evaluated the errors incurred when comparing maps elaborated at different scales, obtained through independent digitalisation processes, elaborated using different classification schemes, or elaborated with inputs from different dates. Errors derived from the difference in scale or from the map-digitalisation processes led to false changes of a similar or greater magnitude than that of true changes. The comparison of maps based on different classification schemes invalidated the results of the comparison. By contrast, the different approaches used to tackle the issue of maps with multiple dates produced similar results. The paper discusses some methods aimed at reducing these problems and evaluating the reliability of multi-temporal databases.

  3. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    Science.gov (United States)

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation.

  4. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    Energy Technology Data Exchange (ETDEWEB)

    Katrinia M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
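    A minimal conjugate Beta-Binomial sketch conveys the general idea: a prior human error probability (HEP) from an existing HRA method is combined with hypothetical simulator failure counts to give a posterior HEP. The prior strength and the counts below are placeholders, and the paper's actual methodology is considerably more elaborate than this sketch.

```python
# Minimal Beta-Binomial sketch of updating an HEP with simulator observations.
# Prior strength and counts are placeholders, not SPAR-H values or Halden data.
from scipy import stats

hep_prior_mean = 1e-2      # HEP assigned by an existing HRA method (assumed)
prior_strength = 50        # pseudo-observations expressing confidence in that prior

a0 = hep_prior_mean * prior_strength
b0 = (1 - hep_prior_mean) * prior_strength

failures, trials = 3, 120  # hypothetical simulator crews failing the task

a_post, b_post = a0 + failures, b0 + (trials - failures)
posterior = stats.beta(a_post, b_post)

print(f"prior mean HEP      : {a0 / (a0 + b0):.4f}")
print(f"posterior mean HEP  : {posterior.mean():.4f}")
print(f"posterior 90% range : {posterior.ppf(0.05):.4f} - {posterior.ppf(0.95):.4f}")
```

    The conjugate form makes the trade-off explicit: the stronger the prior (larger pseudo-observation count), the more simulator evidence is needed to move the HEP.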

  5. The clinical utility of HIV outpatient pharmacist prescreening to reduce medication error and assess adherence.

    Science.gov (United States)

    Seden, K; Bradley, M; Miller, A R O; Beadsworth, M B J; Khoo, S H

    2013-03-01

    Antiretroviral therapy (ART) is complex and has high propensity for medication error and drug-drug interactions (DDIs). We evaluated the clinical utility of pharmacist prescreening for DDIs, adherence to ART and medicines reconciliation prior to HIV outpatient appointments. A pharmacist took detailed medication histories and ART adherence assessments, then screened medication for DDIs. A template detailing current medication, potential DDIs and adherence was filed in the clinical notes and physicians were asked for structured feedback. Potential DDIs were observed in 58% of 200 patients, with 22 (9%) potential DDIs occurring with medication that was not previously recorded in the patients' notes. Of 103 physician responses, 61.2% reported that the pharmacist consultation told them something they did not know, and pharmacist consultations led to a change in management in 13.6% of cases. Pharmacist consultations were more likely to add benefit in patients taking two or more concomitant medications in addition to ART (P = 0.0012).

  6. Temporal and Developmental-Stage Variation in the Occurrence of Mitotic Errors in Tripronuclear Human Preimplantation Embryos

    NARCIS (Netherlands)

    Mantikou, Eleni; van Echten-Arends, Jannie; Sikkema-Raddatz, Birgit; van der Veen, Fulco; Repping, Sjoerd; Mastenbroek, Sebastiaan

    2013-01-01

    Mitotic errors during early development of human preimplantation embryos are common, rendering a large proportion of embryos chromosomally mosaic. It is also known that the percentage of diploid cells in human diploid-aneuploid mosaic embryos is higher at the blastocyst than at the cleavage stage. I

  7. AN IV CATHETER FRAGMENTS DURING MDCT SCANNING OF HUMAN ERROR: EXPERIMENTAL AND REPRODUCIBLE MICROSCOPIC MAGNIFICATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Kweon, Dae Cheol [Dept. of Radiologic Science, Shin Heung College, Uijeongbu (Korea, Republic of); Lee, Jong Woong [Dept. of of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Choi, Ji Won [Dept. of Radiological Science, Jeonju University, Jeonju (Korea, Republic of); Yang, Sung Hwan [Dept. of of Prosthetics and Orthotics, Korean National College of Rehabilitation and Welfare, Pyeongtaek (Korea, Republic of); Dong, Kyung Rae [Dept. of Radiological Technology, Gwangju Health College University, Gwangju (Korea, Republic of); Chung, Won Kwan [Dept. of of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2011-12-15

    The use of intravenous catheters is occasionally complicated by intravascular fragments and swelling of the catheter. We present a patient in whom an intravenous catheter fragment was retrieved from the dorsal metacarpal vein following its incidental detection on CT examination. The case demonstrates the utility of microscopy and multi-detector CT in localizing small or subtle intravenous catheter fragments resulting from human error. We describe a case of IV catheter fragments in the metacarpal vein in which reproducible imaging and microscopy data allowed complete localization of the missing fragment and guided surgery with respect to the optimal incision site for fragment removal. Such reproducible studies may help to determine the best course of action and treatment for the patient who presents with such a case.

  8. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    Science.gov (United States)

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

    Performance improvement in both humans and artificial systems relies strongly on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.

  9. A method for sensitivity analysis to assess the effects of measurement error in multiple exposure variables using external validation data

    Directory of Open Access Journals (Sweden)

    George O. Agogo

    2016-10-01

    Full Text Available Abstract Background Measurement error in self-reported dietary intakes is known to bias the association between dietary intake and a health outcome of interest such as risk of a disease. The association can be distorted further by mismeasured confounders, leading to invalid results and conclusions. It is, however, difficult to adjust for the bias in the association when there is no internal validation data. Methods We proposed a method to adjust for the bias in the diet-disease association (hereafter, association), due to measurement error in dietary intake and a mismeasured confounder, when there is no internal validation data. The method combines prior information on the validity of the self-report instrument with the observed data to adjust for the bias in the association. We compared the proposed method with the method that ignores the confounder effect, and with the method that ignores measurement errors completely. We assessed the sensitivity of the estimates to various magnitudes of measurement error, error correlations and uncertainty in the literature-reported validation data. We applied the methods to fruits and vegetables (FV) intakes, cigarette smoking (confounder) and all-cause mortality data from the European Prospective Investigation into Cancer and Nutrition study. Results Using the proposed method resulted in about four times increase in the strength of association between FV intake and mortality. For weakly correlated errors, measurement error in the confounder minimally affected the hazard ratio estimate for FV intake. The effect was more pronounced for strong error correlations. Conclusions The proposed method permits sensitivity analysis on measurement error structures and accounts for uncertainties in the reported validity coefficients. The method is useful in assessing the direction and quantifying the magnitude of bias in the association due to measurement errors in the confounders.

  10. A Large-Area, Spatially Continuous Assessment of Land Cover Map Error and Its Impact on Downstream Analyses.

    Science.gov (United States)

    Estes, Lyndon; Chen, Peng; Debats, Stephanie; Evans, Tom; Ferreira, Stefanus; Kuemmerle, Tobias; Ragazzo, Gabrielle; Sheffield, Justin; Wolf, Adam; Wood, Eric; Caylor, Kelly

    2017-09-16

    Land cover maps increasingly underlie research into socioeconomic and environmental patterns and processes, including global change. It is known that map errors impact our understanding of these phenomena, but quantifying these impacts is difficult because many areas lack adequate reference data. We used a highly accurate, high-resolution map of South African cropland to assess 1) the magnitude of error in several current generation land cover maps, and 2) how these errors propagate in downstream studies. We first quantified pixel-wise errors in the cropland classes of four widely used land cover maps at resolutions ranging from 1 to 100 km, then calculated errors in several representative "downstream" (map-based) analyses, including assessments of vegetative carbon stocks, evapotranspiration, crop production, and household food security. We also evaluated maps' spatial accuracy based on how precisely they could be used to locate specific landscape features. We found that cropland maps can have substantial biases and poor accuracy at all resolutions (e.g. at 1 km resolution, up to ∼45% underestimates of cropland (bias) and nearly 50% mean absolute error (MAE, describing accuracy); at 100 km, up to 15% underestimates and nearly 20% MAE). National-scale maps derived from higher resolution imagery were most accurate, followed by multi-map fusion products. Constraining mapped values to match survey statistics may be effective at minimizing bias (provided the statistics are accurate). Errors in downstream analyses could be substantially amplified or muted, depending on the values ascribed to cropland-adjacent covers (e.g. with forest as adjacent cover, carbon map error was 200-500% greater than in input cropland maps, but ∼40% less for sparse cover types). The average locational error was 6 km (600%). These findings provide deeper insight into the causes and potential consequences of land cover map error, and suggest several recommendations for land cover map users
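    The bias and mean-absolute-error statistics quoted above can be illustrated with a short sketch comparing a coarse cropland-fraction map against a reference map on the same grid. The arrays here are random stand-ins for real rasters, and the systematic under-estimation is built in purely for demonstration.

```python
# Sketch: pixel-wise bias and mean absolute error (MAE) of a cropland-fraction
# map against a reference map on the same grid. Arrays are random stand-ins.
import numpy as np

rng = np.random.default_rng(4)

reference = rng.uniform(0, 1, size=(100, 100))          # "true" cropland fraction per cell
tested = np.clip(reference * 0.7 + rng.normal(0, 0.1, reference.shape), 0, 1)

diff = tested - reference
bias = diff.mean()              # systematic under- or over-estimation of cropland
mae = np.abs(diff).mean()       # average per-cell accuracy

print(f"bias: {bias:+.3f} (fraction of cell area)")
print(f"MAE : {mae:.3f}")
```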

  11. Human factors engineering in healthcare systems: the problem of human error and accident management.

    Science.gov (United States)

    Cacciabue, P C; Vella, G

    2010-04-01

    This paper discusses some crucial issues associated with the exploitation of data and information about health care for the improvement of patient safety. In particular, the issues of human factors and safety management are analysed in relation to exploitation of reports about non-conformity events and field observations. A methodology for integrating field observation and theoretical approaches for safety studies is described. Two sample cases are discussed in detail: the first one makes reference to the use of data collected in the aviation domain and shows how these can be utilised to define hazard and risk; the second one concerns a typical ethnographic study in a large hospital structure for the identification of most relevant areas of intervention. The results show that, if national authorities find a way to harmonise and formalize critical aspects, such as the severity of standard events, it is possible to estimate risk and define auditing needs, well before the occurrence of serious incidents, and to indicate practical ways forward for improving safety standards.

  12. Reliability of a Simple Physical Therapist Screening Tool to Assess Errors during Resistance Exercises for Musculoskeletal Pain

    DEFF Research Database (Denmark)

    Andersen, Kenneth Jay; Sundstrup, E.; Andersen, L. L.

    2014-01-01

    elastic tubing. At 2-week follow-up, the participants were invited for a test-retest assessment on errors in technical execution. The assessment was based on ordinal deviation of joint position from neutral of the shoulder, elbow, and wrist in a single plane by visual observation. Moderate intratester...

  13. Head repositioning errors in normal student volunteers: a possible tool to assess the neck's neuromuscular system

    Directory of Open Access Journals (Sweden)

    Gudavalli M Ram

    2006-03-01

    proprioceptive accuracy in the necks of humans. This finding may be used to elucidate the mechanism behind repositioning errors seen in people with neck pain and could guide development of a clinical test for involvement of paraspinal muscles in cervical pain and dysfunction.

  14. Assessing Human Health Risk from Pesticides

    Science.gov (United States)

    EPA protects human health and the environment by evaluating the risk associated with pesticides before allowing them to be used in the United States. Learn about the tools and processes used in risk assessment for pesticides.

  15. Assessing prebaccalaureate human physiology courses.

    Science.gov (United States)

    McCleary, V L

    1998-12-01

    Two surveys were conducted between 1994 and 1996. The purpose of the initial survey was to obtain demographic information about prebaccalaureate human physiology courses. Of the 117 responding physiology departments, 50% offered human physiology at the prebaccalaureate level to 14,185 students during the 1994-1995 academic year. The mean was 245 students per year (+/- 30 SE). Class size was limited by 44% of the respondents. Prebaccalaureate human physiology was offered as a separate course from anatomy by 93% of the departments. Sixty-one percent scheduled the course once a year. The purpose of the second survey was to determine how physiology departments evaluated prebaccalaureate physiology courses and faculty. All responding departments utilized student feedback; 38% of the departments included physiology chair review, 38% peer review, and 9% allied health faculty review. Twenty-eight percent of allied health programs evaluated the course. Results indicated that, whereas a significant number of undergraduate students are enrolled in prebaccalaureate physiology courses annually, those courses appear to lack formal, consistent formative evaluation.

  16. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, which are among the various causes of adverse effects on human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, workload, etc. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burn out' (extreme fatigue) is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements to improve task efficiency and prevent human error, and 'Managing Fatigue' in 10 CFR 26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. In Korea, a systematic evaluation approach is presented in the Final Safety Analysis Report (FSAR) chapter 18, Human Factors, in the licensing process; however, it focuses almost entirely on interface design such as the HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting an NPP to the UAE, the development of a fatigue management technique is important and urgent in order to present a technical standard and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, previous research is reviewed to identify fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and

  17. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  18. Assessment and Categorization of TLE Orbit Errors for the US SSN Catalogue

    Science.gov (United States)

    Flohrer, T.; Krag, H.; Klinkrad, H.

    orbits determined from precise radar tracking with external high-precision orbits obtained from laser-tracking and Doppler ranging, and by comparing propagated states to these high-precision orbits. For a current catalogue we assess the TLE orbit errors in along-track, cross-track, and out-of-plane coordinates (i.e. as a function of eccentricity, inclination and perigee height). This analysis provides a more realistic look-up table for the collision risk assessment with CRASS. Insights into the applicability of the TLE theory to certain classes of orbits will be helpful in particular for the selection of data product formats for the European Space Situational Awareness system that is under study. Finally, the presented approach may be the basis for comparisons of snapshots of the TLE catalogue of past epochs.

  19. Errors in using two dimensional methods for ergonomic assessment of motion in three dimensional space

    Energy Technology Data Exchange (ETDEWEB)

    Hollerbach, K.; Van Vorhis, R.L. [Lawrence Livermore National Lab., CA (United States); Hollister, A. [Louisiana State Univ., Shreveport, LA (United States)

    1996-03-01

    Wrist posture and rapid wrist movements are risk factors for work related musculoskeletal disorders. Measurement studies frequently involve optoelectronic methods in which markers are placed on the subject's hand and wrist and the trajectories of the markers are tracked in three dimensional space. A goal of wrist posture measurements is to quantitatively establish wrist posture orientation. Accuracy and fidelity of the measurement data with respect to kinematic mechanisms are essential in wrist motion studies. Fidelity with the physical kinematic mechanism can be limited by the choice of kinematic modeling techniques and the representation of motion. Frequently, ergonomic studies involving wrist kinematics make use of two dimensional measurement and analysis techniques. Two dimensional measurement of human joint motion involves the analysis of three dimensional displacements in an observer-selected measurement plane. Accurate marker placement and alignment of the joint motion plane with the observer plane are difficult. In nature, joint axes can exist at any orientation and location relative to an arbitrarily chosen global reference frame. An arbitrary axis is any axis that is not coincident with a reference coordinate. We calculate the errors that result from measuring joint motion about an arbitrary axis using two dimensional methods.
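    The size of the error incurred by observing a rotation about an arbitrary axis in a single measurement plane can be sketched numerically: rotate a marker about an axis tilted out of the observer plane, then compare the apparent in-plane angle with the true rotation angle. The axis tilts, angle and marker position below are arbitrary illustrative values, not the authors' experimental configuration.

```python
# Sketch: apparent (projected) joint angle vs. true rotation angle when the
# rotation axis is tilted out of the 2D measurement plane. Values are illustrative.
import numpy as np

def rotation_matrix(axis, angle_rad):
    """Rodrigues' formula for rotation about a unit axis."""
    axis = np.asarray(axis, float) / np.linalg.norm(axis)
    k = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle_rad) * k + (1 - np.cos(angle_rad)) * (k @ k)

true_angle = np.radians(45.0)
marker = np.array([1.0, 0.0, 0.0])            # marker on the moving segment

for tilt_deg in (0, 10, 20, 30):
    # Rotation axis tilted away from the observer's viewing axis (z) by tilt_deg.
    axis = [np.sin(np.radians(tilt_deg)), 0.0, np.cos(np.radians(tilt_deg))]
    moved = rotation_matrix(axis, true_angle) @ marker
    # A 2D analysis sees only the x-y projection of the marker trajectory.
    apparent = np.degrees(np.arctan2(moved[1], moved[0]))
    print(f"axis tilt {tilt_deg:2d} deg -> apparent angle {apparent:5.1f} deg "
          f"(true {np.degrees(true_angle):.1f} deg)")
```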

  20. Assessment of the energetics of human labor

    Energy Technology Data Exchange (ETDEWEB)

    Giampietro, M. (Istituto Nazionale della Nutrizione, Rome (Italy)); Pimentel, D. (Cornell Univ., Ithaca, NY (USA))

    1990-10-01

    The energetic analysis of farming systems implies an assessment of the energetics of human labor. The energy cost of 1 h of human labor is generally estimated according to its physiological requirement (the hierarchical level at which the assessment is made is at the individual level). A different way of describing the interaction between human society and the ecosystem is presented (assessment referred to the society level). The shift from the individual level to the societal level provides a new perspective when assessing the energetic efficiency of farming. For example, the power level of the system becomes a new and important parameter to consider. Numerical examples illustrate the proposed approach. 4 figs., 12 refs., 1 tab.

  1. NASA Human System Risk Assessment Process

    Science.gov (United States)

    Francisco, D.; Romero, E.

    2016-01-01

    NASA utilizes an evidence based system to perform risk assessments for the human system for spaceflight missions. The center of this process is the multi-disciplinary Human System Risk Board (HSRB). The HSRB is chartered from the Chief Health and Medical Officer (OCHMO) at NASA Headquarters. The HSRB reviews all human system risks via an established comprehensive risk and configuration management plan based on a project management approach. The HSRB facilitates the integration of human research (terrestrial and spaceflight), medical operations, occupational surveillance, systems engineering and many other disciplines in a comprehensive review of human system risks. The HSRB considers all factors that influence human risk. These factors include pre-mission considerations such as screening criteria, training, age, sex, and physiological condition. In mission factors such as available countermeasures, mission duration and location and post mission factors such as time to return to baseline (reconditioning), post mission health screening, and available treatments. All of the factors influence the total risk assessment for each human risk. The HSRB performed a comprehensive review of all potential inflight medical conditions and events and over the course of several reviews consolidated the number of human system risks to 30, where the greatest emphasis is placed for investing program dollars for risk mitigation. The HSRB considers all available evidence from human research and, medical operations and occupational surveillance in assessing the risks for appropriate mitigation and future work. All applicable DRMs (low earth orbit for 6 and 12 months, deep space for 30 days and 1 year, a lunar mission for 1 year, and a planetary mission for 3 years) are considered as human system risks are modified by the hazards associated with space flight such as microgravity, exposure to radiation, distance from the earth, isolation and a closed environment. Each risk has a summary

  2. Assessing Completeness and Spatial Error of Features in Volunteered Geographic Information

    Directory of Open Access Journals (Sweden)

    Anthony Stefanidis

    2013-06-01

    Full Text Available The assessment of the quality and accuracy of Volunteered Geographic Information (VGI) contributions, and by extension the ultimate utility of VGI data, has fostered much debate within the geographic community. The limited research to date has been focused on VGI data of linear features and has shown that the error in the data is heterogeneously distributed. Some have argued that data produced by numerous contributors will produce a more accurate product than an individual and some research on crowd-sourced initiatives has shown that to be true, although research on VGI is more infrequent. This paper proposes a method for quantifying the completeness and accuracy of a select subset of infrastructure-associated point datasets of volunteered geographic data within a major metropolitan area using a national geospatial dataset as the reference benchmark with two datasets from volunteers used as test datasets. The results of this study illustrate the benefits of including quality control in the collection process for volunteered data.
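    A point-based completeness and positional-accuracy check like the one described can be sketched as follows: each reference feature is matched to its nearest volunteered point within a tolerance, giving a completeness percentage and a mean offset. The coordinates are synthetic planar stand-ins and the tolerance is an assumed value; a real assessment would use projected coordinates from the actual benchmark and test datasets.

```python
# Sketch: completeness and positional error of a volunteered point dataset
# against a reference benchmark. Coordinates are synthetic planar stand-ins.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(5)

reference = rng.uniform(0, 10_000, size=(500, 2))        # benchmark point features (m)

# Volunteers capture ~80% of the features, each with positional noise.
captured = rng.random(reference.shape[0]) < 0.8
volunteered = reference[captured] + rng.normal(0, 15.0, size=(captured.sum(), 2))

# Match every reference feature to its nearest volunteered point.
dist, _ = cKDTree(volunteered).query(reference, k=1)

tolerance = 50.0                                         # matching threshold in metres (assumed)
matched = dist < tolerance
print(f"completeness: {matched.mean():.1%} of reference features found")
print(f"mean positional error of matched features: {dist[matched].mean():.1f} m")
```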

  3. Using the Sampling Margin of Error to Assess the Interpretative Validity of Student Evaluations of Teaching

    Science.gov (United States)

    James, David E.; Schraw, Gregory; Kuch, Fred

    2015-01-01

    We present an equation, derived from standard statistical theory, that can be used to estimate sampling margin of error for student evaluations of teaching (SETs). We use the equation to examine the effect of sample size, response rates and sample variability on the estimated sampling margin of error, and present results in four tables that allow…
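    The abstract's own equation is not reproduced here, but a standard finite-population margin of error for a class-mean rating can be sketched as below. The z value, rating standard deviation, class size and respondent count are placeholder values, and the authors' exact formula may differ from this textbook form.

```python
# Sketch: sampling margin of error for a class-mean SET rating with a
# finite-population correction. All numbers are illustrative placeholders.
import math

N = 120          # enrolled students (population)
n = 45           # students who responded
s = 0.9          # standard deviation of ratings (e.g., on a 1-5 scale)
z = 1.96         # ~95% confidence

fpc = math.sqrt((N - n) / (N - 1))          # finite-population correction
margin = z * (s / math.sqrt(n)) * fpc

print(f"margin of error: +/-{margin:.2f} rating points "
      f"({n}/{N} responses, response rate {n / N:.0%})")
```

    Varying n while holding N and s fixed shows directly how low response rates widen the margin and weaken the interpretative validity of a single class mean.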

  4. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images

    Directory of Open Access Journals (Sweden)

    Henry Joutsijoki

    2016-01-01

    Full Text Available The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good) which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variants classifiers. We use Scaled Invariant Feature Transformation (SIFT) based features in classification. The best accuracy (62.4%) is obtained with ternary complete ECOC coding design and k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice in large-scale. ECOC methods examined are promising techniques for solving this challenging problem.

  5. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images.

    Science.gov (United States)

    Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti

    2016-01-01

    The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good) which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variants classifiers. We use Scaled Invariant Feature Transformation (SIFT) based features in classification. The best accuracy (62.4%) is obtained with ternary complete ECOC coding design and k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice in large-scale. ECOC methods examined are promising techniques for solving this challenging problem.
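
    For readers unfamiliar with ECOC, the sketch below shows the general pattern with scikit-learn; note that OutputCodeClassifier uses a random code matrix rather than the ternary complete design reported above, and synthetic features stand in for the SIFT descriptors used by the authors.

```python
# Minimal ECOC sketch with a k-NN base learner (illustrative only).
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical feature matrix: 300 colony images x 128-dim descriptors,
# labels 0 = bad, 1 = semigood, 2 = good.
X = rng.normal(size=(300, 128))
y = rng.integers(0, 3, size=300)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ecoc = OutputCodeClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5, weights="distance"),
    code_size=2.0,        # code-word length relative to the number of classes
    random_state=0,
)
ecoc.fit(X_train, y_train)
print("accuracy:", ecoc.score(X_test, y_test))
```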

  6. Assessment of error propagation in ultraspectral sounder data via JPEG2000 compression and turbo coding

    Science.gov (United States)

    Olsen, Donald P.; Wang, Charles C.; Sklar, Dean; Huang, Bormin; Ahuja, Alok

    2005-08-01

    Research has been undertaken to examine the robustness of JPEG2000 when corrupted by transmission bit errors in a satellite data stream. Contemporary and future ultraspectral sounders such as Atmospheric Infrared Sounder (AIRS), Cross-track Infrared Sounder (CrIS), Infrared Atmospheric Sounding Interferometer (IASI), Geosynchronous Imaging Fourier Transform Spectrometer (GIFTS), and Hyperspectral Environmental Suite (HES) generate a large volume of three-dimensional data. Hence, compression of ultraspectral sounder data will facilitate data transmission and archiving. There is a need for lossless or near-lossless compression of ultraspectral sounder data to avoid potential retrieval degradation of geophysical parameters due to lossy compression. This paper investigates the simulated error propagation in AIRS ultraspectral sounder data with advanced source and channel coding in a satellite data stream. The source coding is done via JPEG2000, the latest International Organization for Standardization (ISO)/International Telecommunication Union (ITU) standard for image compression. After JPEG2000 compression the AIRS ultraspectral sounder data is then error correction encoded using a rate 0.954 turbo product code (TPC) for channel error control. Experimental results of error patterns on both channel and source decoding are presented. The error propagation effects are curbed via the block-based protection mechanism in the JPEG2000 codec as well as memory characteristics of the forward error correction (FEC) scheme to contain decoding errors within received blocks. A single nonheader bit error in a source code block tends to contaminate the bits until the end of the source code block before the inverse discrete wavelet transform (IDWT), and those erroneous bits propagate even further after the IDWT. Furthermore, a single header bit error may result in the corruption of almost the entire decompressed granule. JPEG2000 appears vulnerable to bit errors in a noisy channel of

  7. Assessment of the prediction error in a large-scale application of a dynamic soil acidification model

    NARCIS (Netherlands)

    Kros, J.; Mol-Dijkstra, J.P.; Pebesma, E.J.

    2002-01-01

    The prediction error of a relatively simple soil acidification model (SMART2) was assessed before and after calibration, focussing on the aluminium and nitrate concentrations on a block scale. Although SMART2 is especially developed for application on a national to European scale, it still runs at a

  8. Quality specifications for glucose meters: assessment by simulation modeling of errors in insulin dose.

    Science.gov (United States)

    Boyd, J C; Bruns, D E

    2001-02-01

    Proposed quality specifications for glucose meters allow results to be in error by 5-10% or more of the "true" concentration. Because meters are used as aids in the adjustment of insulin doses, we aimed to characterize the quantitative effect of meter error on the ability to identify the insulin dose appropriate for the true glucose concentration. Using Monte Carlo simulation, we generated random "true" glucose values within defined intervals. These values were converted to "measured" glucose values using mathematical models of glucose meters having defined imprecision (CV) and bias. For each combination of bias and imprecision, 10,000-20,000 true and measured glucose concentrations were matched with the corresponding insulin doses specified by selected insulin-dosing regimens. Discrepancies in prescribed doses were counted and their frequencies plotted in relation to bias and imprecision. For meters with a total analytical error of 5%, dosage errors occurred in approximately 8-23% of insulin doses. At 10% total error, 16-45% of doses were in error. Large errors of insulin dose (two-step or greater) occurred >5% of the time when the CV and/or bias exceeded 10-15%. Total dosage error rates were affected only slightly by choices of sliding scale among insulin dosage rules or by the range of blood glucose. To provide the intended insulin dosage 95% of the time required that both the bias and the CV of the glucose meter be <1% or <2%, depending on mean glucose concentrations and the rules for insulin dosing. Glucose meters that meet current quality specifications allow a large fraction of administered insulin doses to differ from the intended doses. The effects of such dosage errors on blood glucose and on patient outcomes require study.
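
    A rough sketch of the simulation idea described above; the sliding scale, glucose range, and thresholds below are invented for illustration and are not the dosing regimens used in the paper.

```python
# Monte Carlo sketch: perturb "true" glucose values with a meter model having
# a given bias and CV, map both to a hypothetical sliding-scale insulin rule,
# and count dosing discrepancies.
import numpy as np

def sliding_scale_dose(glucose_mg_dl):
    # Hypothetical sliding scale: one extra dose step per 50 mg/dL above 150.
    return int((glucose_mg_dl - 150) // 50) + 1 if glucose_mg_dl > 150 else 0

def dose_error_rate(bias_pct, cv_pct, n=20_000, seed=1):
    rng = np.random.default_rng(seed)
    true_glucose = rng.uniform(100, 400, size=n)            # "true" concentrations
    measured = true_glucose * (1 + bias_pct / 100) \
               * (1 + rng.normal(0, cv_pct / 100, size=n))  # meter bias + imprecision
    mismatches = sum(
        sliding_scale_dose(t) != sliding_scale_dose(m)
        for t, m in zip(true_glucose, measured)
    )
    return mismatches / n

for bias, cv in [(0, 5), (5, 5), (0, 10)]:
    print(f"bias {bias}%, CV {cv}%: {dose_error_rate(bias, cv):.1%} doses differ")
```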

  9. A web-based team-oriented medical error communication assessment tool: development, preliminary reliability, validity, and user ratings.

    Science.gov (United States)

    Kim, Sara; Brock, Doug; Prouty, Carolyn D; Odegard, Peggy Soule; Shannon, Sarah E; Robins, Lynne; Boggs, Jim G; Clark, Fiona J; Gallagher, Thomas

    2011-01-01

    Multiple-choice exams are not well suited for assessing communication skills. Standardized patient assessments are costly and patient and peer assessments are often biased. Web-based assessment using video content offers the possibility of reliable, valid, and cost-efficient means for measuring complex communication skills, including interprofessional communication. We report development of the Web-based Team-Oriented Medical Error Communication Assessment Tool, which uses videotaped cases for assessing skills in error disclosure and team communication. Steps in development included (a) defining communication behaviors, (b) creating scenarios, (c) developing scripts, (d) filming video with professional actors, and (e) writing assessment questions targeting team communication during planning and error disclosure. Using valid data from 78 participants in the intervention group, coefficient alpha estimates of internal consistency were calculated based on the Likert-scale questions and ranged from α=.79 to α=.89 for each set of 7 Likert-type discussion/planning items and from α=.70 to α=.86 for each set of 8 Likert-type disclosure items. The preliminary test-retest Pearson correlation based on the scores of the intervention group was r=.59 for discussion/planning and r=.25 for error disclosure sections, respectively. Content validity was established through reliance on empirically driven published principles of effective disclosure as well as integration of expert views across all aspects of the development process. In addition, data from 122 medicine and surgical physicians and nurses showed high ratings for video quality (4.3 of 5.0), acting (4.3), and case content (4.5). Web assessment of communication skills appears promising. Physicians and nurses across specialties respond favorably to the tool.

  10. Assessment of the rate and etiology of pharmacological errors by nurses of two major teaching hospitals in Shiraz

    Directory of Open Access Journals (Sweden)

    Fatemeh Vizeshfar

    2015-06-01

    Full Text Available Medication errors have serious consequences for patients, their families and caregivers. Reduction of these faults by caregivers such as nurses can increase the safety of patients. The goal of the study was to assess the rate and etiology of medication error in pediatric and medical wards. This cross-sectional analytic study was conducted on 101 registered nurses who had the duty of drug administration in medical pediatric and adult wards. Data were collected by a questionnaire including demographic information, self-reported faults, etiology of medication error and researcher observations. The results showed that nurses' fault rates were 51.6% in pediatric wards and 47.4% in adult wards. The most common fault in adult wards was administering drugs later or earlier than scheduled (48.6%), while administration of drugs without prescription and administering wrong drugs were the most common medication errors in pediatric wards (49.2% each). According to the researchers' observations, the medication error rate of 57.9% in adult wards was rated low and the rate of 69.4% in pediatric wards was rated moderate. The most frequent medication error in both adult and pediatric wards was that nurses did not explain the reason for and type of drug they were going to administer to patients. An independent t-test showed a significant difference between self-reported faults and observations in pediatric wards (p=0.000) and in adult wards (p=0.000). Several studies have shown medication errors all over the world, especially in pediatric wards. However, by designing a suitable reporting system and using a multidisciplinary approach, the occurrence of medication errors and their negative consequences can be reduced.

  11. Methodical errors of measurement of the human body tissues electrical parameters

    OpenAIRE

    Antoniuk, O.; Pokhodylo, Y.

    2015-01-01

    Sources of methodical measurement errors of the immittance parameters of biological tissues are described. Modelling of measurement errors of the RC parameters of biological tissue equivalent circuits over the frequency range is analyzed. Recommendations on the choice of test signal frequency for measuring these elements are provided.

  12. Assessment of antibody library diversity through next generation sequencing and technical error compensation

    Science.gov (United States)

    Lisi, Simonetta; Chirichella, Michele; Arisi, Ivan; Goracci, Martina; Cremisi, Federico; Cattaneo, Antonino

    2017-01-01

    Antibody libraries are important resources to derive antibodies to be used for a wide range of applications, from structural and functional studies to intracellular protein interference studies to developing new diagnostics and therapeutics. Whatever the goal, the key parameter for an antibody library is its complexity (also known as diversity), i.e. the number of distinct elements in the collection, which directly reflects the probability of finding in the library an antibody against a given antigen, of sufficiently high affinity. Quantitative evaluation of antibody library complexity and quality has been for a long time inadequately addressed, due to the high similarity and length of the sequences of the library. Complexity was usually inferred by the transformation efficiency and tested either by fingerprinting and/or sequencing of a few hundred random library elements. Inferring complexity from such a small sampling is, however, very rudimentary and gives limited information about the real diversity, because complexity does not scale linearly with sample size. Next-generation sequencing (NGS) has opened new ways to tackle antibody library complexity and quality assessment. However, much remains to be done to fully exploit the potential of NGS for the quantitative analysis of antibody repertoires and to overcome current limitations. To obtain a more reliable antibody library complexity estimate, here we present a new, PCR-free, NGS approach to sequence antibody libraries on the Illumina platform, coupled to a new bioinformatic analysis and software (Diversity Estimator of Antibody Library, DEAL) that allows the complexity to be reliably estimated, taking the sequencing error into consideration. PMID:28505201

  13. Dependence assessment in human reliability analysis based on D numbers and AHP

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Xinyi; Deng, Xinyang [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); Deng, Yong, E-mail: ydeng@swu.edu.cn [School of Computer and Information Science, Southwest University, Chongqing 400715 (China); Institute of Fundamental and Frontier Sciences, University of Electronic Science and Technology of China, Chengdu, Sichuan 610054 (China); Mahadevan, Sankaran [School of Engineering, Vanderbilt University, Nashville, TN 37235 (United States)

    2017-03-15

    Highlights: • D numbers and AHP are combined to implement dependence assessment in HRA. • A new tool, called D numbers, is used to deal with the uncertainty in HRA. • The proposed method can well address the fuzziness and subjectivity in linguistic assessment. • The proposed method is well applicable in dependence assessment, which inherently has a linguistic assessment process. - Abstract: Since human errors can cause heavy losses, especially in nuclear engineering, human reliability analysis (HRA) has attracted more and more attention. Dependence assessment plays a vital role in HRA, measuring the degree of dependence between human errors. Much research has been done, but there is still room for improvement. In this paper, a dependence assessment model based on D numbers and the analytic hierarchy process (AHP) is proposed. First, the factors used to measure the dependence level of two human operations are identified, and, in terms of the suggested dependence levels, anchor points for each factor are determined and quantified. Second, D numbers and AHP are adopted in the model: experts evaluate the dependence level of the human operations for each factor, the evaluation results are represented as D numbers and fused by the D number combination rule to obtain the dependence probability of the human operations for each factor, and the weights of the factors are determined by AHP. Third, based on the dependence probability for each factor and its corresponding weight, the dependence probability of the two human operations and its confidence can be obtained. The proposed method addresses the fuzziness and subjectivity in linguistic assessment well and is well suited to assessing the dependence degree of human errors in HRA, which inherently involves a linguistic assessment process.
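
    A rough sketch of the AHP step in such a model is shown below; the pairwise judgments and per-factor dependence probabilities are invented, and the D-number fusion is only approximated here by a weighted combination.

```python
# AHP factor weights from the principal eigenvector of a pairwise-comparison
# matrix, then a dependence probability as a weighted combination of
# hypothetical per-factor expert assessments (illustrative values only).
import numpy as np

# Hypothetical pairwise comparisons for three factors, e.g. time gap between
# tasks, similarity of tasks, and stress level (Saaty 1-9 scale).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
eigvals, eigvecs = np.linalg.eig(A)
w = np.abs(np.real(eigvecs[:, np.argmax(np.real(eigvals))]))
weights = w / w.sum()                      # AHP factor weights

# Hypothetical per-factor dependence probabilities from expert assessment.
factor_dependence = np.array([0.7, 0.4, 0.5])
overall_dependence = float(weights @ factor_dependence)
print("factor weights:", np.round(weights, 3))
print("overall dependence probability:", round(overall_dependence, 3))
```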

  14. Management and Evaluation System on Human Error, Licence Requirements, and Job-aptitude in Rail and the Other Industries

    Energy Technology Data Exchange (ETDEWEB)

    Koo, In Soo; Suh, S. M.; Park, G. O. (and others)

    2006-07-15

    The rail system is closely tied to public life: when an accident happens, members of the public using the system may be injured or even killed. The accident that recently took place in the Taegu subway system, caused by inappropriate human task performance, demonstrated how tragic the consequences can be. Many studies have shown that most accidents occur because tasks are performed in an inappropriate way. It is generally recognised that a rail system without a human element will not exist for quite a long time, so the human element will remain a major factor in the next tragic accident. This state-of-the-art report studied management and evaluation systems related to human error, licence requirements, and job aptitude in rail and other industries, with the purpose of improving the task performance of personnel, a key element of the system, and ultimately enhancing rail safety. Human error management, licence requirements, and job-aptitude evaluation systems for people engaged in rail-related agencies do much to develop and preserve their abilities, but due to various internal and external factors they may be limited in how quickly they reflect overall trends in society, technology, and values. Removing and controlling the factors behind human error, informed by the case studies in this report, will play a major role in the safety of the rail system. The analytical results of the case studies will be used in the project 'Development of Management Criteria on Human Error and Evaluation Criteria on Job-aptitude of Rail Safe-operation Personnel', which has been carried out as a part of the 'Integrated R and D Program for Railway Safety'.

  15. Assessment of Aliasing Errors in Low-Degree Coefficients Inferred from GPS Data

    Directory of Open Access Journals (Sweden)

    Na Wei

    2016-05-01

    Full Text Available With sparse and uneven site distribution, Global Positioning System (GPS) data is just barely able to infer low-degree coefficients in the surface mass field. The unresolved higher-degree coefficients turn out to introduce aliasing errors into the estimates of low-degree coefficients. To reduce the aliasing errors, the optimal truncation degree should be employed. Using surface displacements simulated from loading models, we theoretically prove that the optimal truncation degree should be degree 6–7 for a GPS inversion and degree 20 for combining GPS and Ocean Bottom Pressure (OBP) with no additional regularization. The optimal truncation degree should be decreased to degree 4–5 for real GPS data. Additionally, we prove that a Scaled Sensitivity Matrix (SSM) approach can be used to quantify the aliasing errors due to any one or any combination of unresolved higher degrees, which is beneficial for identifying the major error source from among all the unresolved higher degrees. Results show that the unresolved higher degrees lower than degree 20 are the major error source for global inversion. We also theoretically prove that the SSM approach can be used to mitigate the aliasing errors in a GPS inversion, if the neglected higher degrees are well known from other sources.

  16. How does prostate biopsy guidance error impact pathologic cancer risk assessment?

    Science.gov (United States)

    Martin, Peter R.; Gaed, Mena; Gómez, José A.; Moussa, Madeleine; Gibson, Eli; Cool, Derek W.; Chin, Joseph L.; Pautler, Stephen; Fenster, Aaron; Ward, Aaron D.

    2016-03-01

    Magnetic resonance imaging (MRI)-targeted, 3D transrectal ultrasound (TRUS)-guided "fusion" prostate biopsy aims to reduce the 21-47% false negative rate of clinical 2D TRUS-guided sextant biopsy, but still has a substantial false negative rate. This could be improved via biopsy needle target optimization, accounting for uncertainties due to guidance system errors, image registration errors, and irregular tumor shapes. As an initial step toward the broader goal of optimized prostate biopsy targeting, in this study we elucidated the impact of biopsy needle delivery error on the probability of obtaining a tumor sample, and on the core involvement. These are both important parameters to patient risk stratification and the decision for active surveillance vs. definitive therapy. We addressed these questions for cancer of all grades, and separately for high grade (>= Gleason 4+3) cancer. We used expert-contoured gold-standard prostatectomy histology to simulate targeted biopsies using an isotropic Gaussian needle delivery error from 1 to 6 mm, and investigated the amount of cancer obtained in each biopsy core as determined by histology. Needle delivery error resulted in variability in core involvement that could influence treatment decisions; the presence or absence of cancer in 1/3 or more of each needle core can be attributed to a needle delivery error of 4 mm. However, our data showed that by making multiple biopsy attempts at selected tumor foci, we may increase the probability of correctly characterizing the extent and grade of the cancer.
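
    The Monte Carlo idea can be illustrated with a toy 2D version, where a circular tumour focus stands in for the irregular histology-derived contours used in the study; all values below are invented.

```python
# Toy simulation: a planned target at the centre of a circular tumour focus is
# perturbed by isotropic Gaussian needle delivery error, and hits are counted.
import numpy as np

def hit_probability(tumour_radius_mm, sigma_mm, n_trials=100_000, seed=7):
    rng = np.random.default_rng(seed)
    # Isotropic Gaussian needle placement error around the planned target.
    offsets = rng.normal(0.0, sigma_mm, size=(n_trials, 2))
    distances = np.linalg.norm(offsets, axis=1)
    return float(np.mean(distances <= tumour_radius_mm))

for sigma in (1, 2, 4, 6):  # mm, spanning the simulated range described above
    print(f"sigma = {sigma} mm -> P(core contains tumour) = "
          f"{hit_probability(tumour_radius_mm=5.0, sigma_mm=sigma):.2f}")
```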

  17. Physical security and cyber security issues and human error prevention for 3D printed objects: detecting the use of an incorrect printing material

    Science.gov (United States)

    Straub, Jeremy

    2017-06-01

    A wide variety of characteristics of 3D printed objects have been linked to impaired structural integrity and use-efficacy. The printing material can also have a significant impact on the quality, utility and safety characteristics of a 3D printed object. Material issues can be created by vendor issues, physical security issues and human error. This paper presents and evaluates a system that can be used to detect incorrect material use in a 3D printer, using visible light imaging. Specifically, it assesses the ability to ascertain the difference between materials of different color and different types of material with similar coloration.

  18. Medication Errors

    Science.gov (United States)


  19. Pharmacists' assessment of dispensing errors: risk factors, practice sites, professional functions, and satisfaction.

    Science.gov (United States)

    Bond, C A; Raehl, C L

    2001-05-01

    Certain demographic, practice, staffing, and pharmacist satisfaction variables may contribute to dispensing errors. A survey was randomly mailed to 7298 (50%) Texas pharmacists, of which 2862 were returned (39% response rate). Responders were 2437 pharmacists who indicated that they were in practice. Of these, 535 (23%) reported no risk to patients for dispensing errors and 793 (34%) reported at least one patient/week was at risk for such an error. There was a positive relationship between the number of prescription orders filled/hour and the estimated risk of dispensing errors (rs = 0.285, p < 0.001). Pharmacists practicing in some settings (risk score = 1.85 +/- 1.32), traditional chain store pharmacies (1.66 +/- 1.18), and hospital pharmacies (1.61 +/- 1.09) reported a higher risk than other groups. Pharmacists practicing in independent community pharmacies (0.75 +/- 0.84), home health care (0.83 +/- 0.99), grocery chain store pharmacies (1.30 +/- 0.96), and mass merchandise chain store pharmacies (1.30 +/- 1.08) reported a lower risk (H=260, df=8, p<0.001). Nine job satisfaction variables were strongly associated with the risk of dispensing errors (rs between -0.3 and -0.422, p<0.001), as were prescription volume, practice site, staffing, training, pharmacist functions, and professional organization membership. The results of this survey should help pharmacists and management develop specific plans for reducing the risks of dispensing errors. These data should be useful for more in-depth study of such errors.

  20. Doppler imaging of chemical spots on magnetic Ap/Bp stars. Numerical tests and assessment of systematic errors

    CERN Document Server

    Kochukhov, O

    2016-01-01

    Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. In this paper we investigate the accuracy of chemical abundance DI of Ap/Bp stars and assess the impact of several different systematic errors on the reconstructed spot maps. We simulate spectroscopic observational data for different spot distributions in the presence of a moderately strong dipolar magnetic field. We then reconstruct chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction the average reconstruction errors do not exceed ~0.10 dex. The errors increase to about 0.1...

  1. Application of objective clinical human reliability analysis (OCHRA) in assessment of technical performance in laparoscopic rectal cancer surgery.

    Science.gov (United States)

    Foster, J D; Miskovic, D; Allison, A S; Conti, J A; Ockrim, J; Cooper, E J; Hanna, G B; Francis, N K

    2016-06-01

    Laparoscopic rectal resection is technically challenging, with outcomes dependent upon technical performance. No robust objective assessment tool exists for laparoscopic rectal resection surgery. This study aimed to investigate the application of the objective clinical human reliability analysis (OCHRA) technique for assessing technical performance of laparoscopic rectal surgery and to explore the validity and reliability of this technique. Laparoscopic rectal cancer resection operations were described in the format of a hierarchical task analysis. Potential technical errors were defined. The OCHRA technique was used to identify technical errors enacted in videos of twenty consecutive laparoscopic rectal cancer resection operations from a single site. The procedural task, spatial location, and circumstances of all identified errors were logged. Clinical validity was assessed through correlation with clinical outcomes; reliability was assessed by test-retest. A total of 335 execution errors were identified, with a median of 15 per operation. More errors were observed during pelvic tasks than during abdominal tasks, supporting the use of OCHRA for assessing the technical performance of laparoscopic rectal surgery.

  2. An empirical assessment of exposure measurement error and effect attenuation in bipollutant epidemiologic models.

    Science.gov (United States)

    Dionisio, Kathie L; Baxter, Lisa K; Chang, Howard H

    2014-11-01

    Using multipollutant models to understand combined health effects of exposure to multiple pollutants is becoming more common. However, complex relationships between pollutants and differing degrees of exposure error across pollutants can make health effect estimates from multipollutant models difficult to interpret. We aimed to quantify relationships between multiple pollutants and their associated exposure errors across metrics of exposure and to use empirical values to evaluate potential attenuation of coefficients in epidemiologic models. We used three daily exposure metrics (central-site measurements, air quality model estimates, and population exposure model estimates) for 193 ZIP codes in the Atlanta, Georgia, metropolitan area from 1999 through 2002 for PM2.5 and its components (EC and SO4), as well as O3, CO, and NOx, to construct three types of exposure error: δspatial (comparing air quality model estimates to central-site measurements), δpopulation (comparing population exposure model estimates to air quality model estimates), and δtotal (comparing population exposure model estimates to central-site measurements). We compared exposure metrics and exposure errors within and across pollutants and derived attenuation factors (ratio of observed to true coefficient for pollutant of interest) for single- and bipollutant model coefficients. Pollutant concentrations and their exposure errors were moderately to highly correlated (typically, > 0.5), especially for CO, NOx, and EC (i.e., "local" pollutants); correlations differed across exposure metrics and types of exposure error. Spatial variability was evident, with variance of exposure error for local pollutants ranging from 0.25 to 0.83 for δspatial and δtotal. The attenuation of model coefficients in single- and bipollutant epidemiologic models relative to the true value differed across types of exposure error, pollutants, and space. Under a classical exposure-error framework, attenuation may be
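
    The attenuation concept can be illustrated with a small synthetic simulation under a classical measurement-error model; the exposure, error, and coefficient values below are invented and are not the study's data.

```python
# Hedged illustration: a health outcome depends on the true exposure, the
# analyst only observes an error-prone exposure, and the attenuation factor is
# the ratio of the estimated to the true coefficient.
import numpy as np

rng = np.random.default_rng(42)
n, true_beta = 5_000, 0.8

true_exposure = rng.normal(10, 2, size=n)
exposure_error = rng.normal(0, 2, size=n)             # classical, independent error
observed_exposure = true_exposure + exposure_error
outcome = true_beta * true_exposure + rng.normal(0, 1, size=n)

# Ordinary least squares slope of outcome on the error-prone exposure.
slope = np.polyfit(observed_exposure, outcome, 1)[0]
attenuation_factor = slope / true_beta
print(f"estimated beta = {slope:.2f}, attenuation factor = {attenuation_factor:.2f}")
# Expected attenuation ~ var(true) / (var(true) + var(error)) = 4 / 8 = 0.5
```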

  3. Assessment of measurement errors and dynamic calibration methods for three different tipping bucket rain gauges

    Science.gov (United States)

    Shedekar, Vinayak S.; King, Kevin W.; Fausey, Norman R.; Soboyejo, Alfred B. O.; Harmel, R. Daren; Brown, Larry C.

    2016-09-01

    Three different models of tipping bucket rain gauges (TBRs), viz. HS-TB3 (Hydrological Services Pty Ltd.), ISCO-674 (Isco, Inc.) and TR-525 (Texas Electronics, Inc.), were calibrated in the lab to quantify measurement errors across a range of rainfall intensities (5 mm·h−1 to 250 mm·h−1) and three different volumetric settings. Instantaneous and cumulative values of simulated rainfall were recorded at 1, 2, 5, 10 and 20-min intervals. All three TBR models showed a significant deviation (α = 0.05) in measurements from actual rainfall depths, with increasing underestimation errors at greater rainfall intensities. Simple linear regression equations were developed for each TBR to correct the TBR readings based on measured intensities (R2 > 0.98). Additionally, two dynamic calibration techniques, viz. a quadratic model (R2 > 0.7) and a T vs. 1/Q model (R2 > 0.98), were tested and found to be useful in situations when the volumetric settings of TBRs are unknown. The correction models were successfully applied to correct field-collected rainfall data from the respective TBR models. The calibration parameters of the correction models were found to be highly sensitive to changes in the volumetric calibration of TBRs. Overall, the HS-TB3 model (with a better protected tipping bucket mechanism, and consistent measurement errors across a range of rainfall intensities) was found to be the most reliable and consistent for rainfall measurements, followed by the ISCO-674 (with susceptibility to clogging and relatively smaller measurement errors across a range of rainfall intensities) and the TR-525 (with high susceptibility to clogging and frequent changes in volumetric calibration, and highly intensity-dependent measurement errors). The study demonstrated that corrections based on dynamic and volumetric calibration can only help minimize, but not completely eliminate, the measurement errors. The findings from this study will be useful for correcting field data from TBRs; and may have major
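
    A minimal sketch of the simple-linear-regression correction step, with invented calibration pairs standing in for the lab data:

```python
# Fit a lab calibration relating TBR-recorded intensity to the reference
# intensity, then apply the correction to field readings (illustrative data).
import numpy as np

# (TBR-recorded, reference) rainfall intensities in mm/h from a lab calibration.
recorded = np.array([5, 25, 50, 100, 150, 200, 250], dtype=float)
reference = np.array([5.1, 26, 53, 108, 165, 224, 283], dtype=float)

slope, intercept = np.polyfit(recorded, reference, 1)
print(f"correction: reference ~= {slope:.3f} * recorded + {intercept:.2f}")

def correct_tbr(field_reading_mm_per_h):
    """Apply the lab-derived linear correction to a field measurement."""
    return slope * field_reading_mm_per_h + intercept

print(correct_tbr(np.array([10.0, 120.0])))
```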

  4. How realistically does outdoor UV-B supplementation with lamps reflect ozone depletion: an assessment of enhancement errors.

    Science.gov (United States)

    Kotilainen, Titta; Lindfors, Anders; Tegelberg, Riitta; Aphalo, Pedro J

    2011-01-01

    Limitations in the realism of currently available lamps mean that enhancement errors in outdoor experiments simulating UV-B radiation effects of stratospheric ozone depletion can be large. Here, we assess the magnitude of such errors at two Finnish locations, during May and June, under three cloud conditions. First we simulated solar radiation spectra for normal, compared with 10% and 20% ozone depletion, and convoluted the daily integrated solar spectra with eight biological spectral weighting functions (BSWFs) of relevance to effects of UV on plants. We also convoluted a measured spectrum from cellulose-acetate filtered UV-B lamps with the same eight BSWFs. From these intermediate results we calculated the enhancement errors. Differences between locations and months were small, cloudiness had only a minor effect. This assessment was based on the assumption that no extra enhancement compensating for shading of UV radiation by lamp frames is performed. Under this assumption errors between spectra are due to differences in the UV-B effectiveness rather than differences in the UV-A effectiveness. Hence, conclusions about plant growth from past UV-supplementation experiments should be valid. However, interpretation of the response of individual physiological processes is less secure, so results from some field experiments with lamps might need reassessment. © 2010 The Authors. Photochemistry and Photobiology © 2010 The American Society of Photobiology.

  5. A development of the Human Factors Assessment Guide for the Study of Erroneous Human Behaviors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Yeon Ju; Lee, Yong Hee; Jang, Tong Il; Kim, Sa Kil [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-08-15

    The aim of this paper is to describe a human factors assessment guide for the study of the erroneous behavior characteristics of operators in nuclear power plants (NPPs). Although advanced main control rooms have been introduced, human factors issues remain, such as uneasy emotions, fatigue and stress, varying mental workload in a digital environment, and various new types of unsafe responses to digital interfaces intended to support better decisions. These issues may not be resolved through current human reliability assessment, which evaluates the total probability of a human error occurring throughout the completion of a specific task. This paper provides an assessment guide for these human factors issues, a set of experimental methodologies, and an assessment case of measurement and analysis, especially from a neurophysiological approach, which is among the most objective psycho-physiological research techniques for a qualitative analysis of human performance with respect to safety. This work can serve as a trial for the experimental assessment of erroneous behaviors and their influencing factors, as an index for their recognition, and as a method for applying human factors engineering verification and validation (V&V), which is required as a mandatory element of the human factors engineering program plan for NPP design.

  6. Pain assessment in human fetus and infants.

    Science.gov (United States)

    Bellieni, Carlo Valerio

    2012-09-01

    In humans, painful stimuli can reach the brain at 20-22 weeks of gestation. Therefore, several researchers have devoted their efforts to studying fetal analgesia during prenatal surgery and during painful procedures in premature babies. The aim of this paper is to gather from the scientific literature the available data on the signals that the human fetus and newborns produce that can be interpreted as signals of pain. Several such signs can be identified, and we describe them in the text. In infants, these signs can be combined to create specific and sensitive pain assessment tools, called pain scales, used to rate the level of pain.

  7. STRATEGIC HUMAN RESOURCE MANAGEMENT ASSESSMENT AT PRICEWATERHOUSECOOPERS

    Directory of Open Access Journals (Sweden)

    Amelia Boncea

    2010-12-01

    Full Text Available The world we are living in today has increasingly become aware of the importance of the human factor in all types of organizations. The present paper is intended to assess the performance of the human resource department at PricewaterhouseCoopers and to provide adequate recommendations for activity improvement. After a statement of the current HR strategy and an in-depth analysis of the external and internal environment, the paper continues with some proposals upon a more efficient HR function and the corresponding action plan to achieve this objective. In addition, the paper presents a section on how employees respond to change inside the company.

  8. A Psychometric Review of Norm-Referenced Tests Used to Assess Phonological Error Patterns

    Science.gov (United States)

    Kirk, Celia; Vigeland, Laura

    2014-01-01

    Purpose: The authors provide a review of the psychometric properties of 6 norm-referenced tests designed to measure children's phonological error patterns. Three aspects of the tests' psychometric adequacy were evaluated: the normative sample, reliability, and validity. Method: The specific criteria used for determining the psychometric…

  9. Influence of wood density in tree-ring based annual productivity assessments and its errors in Norway spruce

    Science.gov (United States)

    Bouriaud, O.; Teodosiu, M.; Kirdyanov, A. V.; Wirth, C.

    2015-04-01

    Estimations of tree annual biomass increments are used by a variety of studies related to forest productivity or carbon fluxes. Biomass increment estimations can be easily obtained from diameter surveys or historical diameter reconstructions based on tree-ring records. However, the biomass models rely on the assumption of a constant wood density. Converting volume increment into biomass also requires assumptions on the wood density. Wood density has been largely reported to vary both in time and between trees. In Norway spruce, wood density is known to increase with decreasing ring width. This could lead to underestimating the biomass or carbon deposition in bad years. The variation of wood density between trees has never been discussed but could also contribute to deviations. A modelling approach could attenuate these effects but will also generate errors. Here, a model of wood density variations in Norway spruce and an allometric model of volume growth were developed. We accounted for variations in wood density both between years and between trees, based on specific measurements. We compared the effects of neglecting each variation source on the estimations of annual biomass increment. We also assessed the errors of the biomass increment predictions at tree level, and of the annual productivity at plot level. Our results showed a partial compensation of the decrease in ring width in bad years by the increase in wood density. The underestimation of the biomass increment in those years reached 15%. The errors related to the use of an allometric model of volume growth were modest, around ±15%. The errors related to variations in wood density were much larger, the biggest component being the inter-tree variability. The errors in plot-level annual biomass productivity reached up to 40%, with a full account of all the error sources.

  10. Influence of wood density in tree-ring-based annual productivity assessments and its errors in Norway spruce

    Science.gov (United States)

    Bouriaud, O.; Teodosiu, M.; Kirdyanov, A. V.; Wirth, C.

    2015-10-01

    Estimations of tree annual biomass increments are used by a variety of studies related to forest productivity or carbon fluxes. Biomass increment estimations can be easily obtained from diameter surveys or historical diameter reconstructions based on tree rings' records. However, the biomass models rely on the assumption that wood density is constant. Converting volume increment into biomass also requires assumptions about the wood density. Wood density has been largely reported to vary both in time and between trees. In Norway spruce, wood density is known to increase with decreasing ring width. This could lead to underestimating the biomass or carbon deposition in bad years. The variations between trees of wood density have never been discussed but could also contribute to deviations. A modelling approach could attenuate these effects but will also generate errors. Here a model of wood density variations in Norway spruce, and an allometric model of volume growth were developed. We accounted for variations in wood density both between years and between trees, based on specific measurements. We compared the effects of neglecting each variation source on the estimations of annual biomass increment. We also assessed the errors of the biomass increment predictions at tree level, and of the annual productivity at plot level. Our results showed a partial compensation of the decrease in ring width in bad years by the increase in wood density. The underestimation of the biomass increment in those years reached 15 %. The errors related to the use of an allometric model of volume growth were modest, around ±15 %. The errors related to variations in wood density were much larger, the biggest component being the inter-tree variability. The errors in plot-level annual biomass productivity reached up to 40 %, with a full account of all the error sources.

  11. Influence of wood density in tree-ring based annual productivity assessments and its errors in Norway spruce

    Directory of Open Access Journals (Sweden)

    O. Bouriaud

    2015-04-01

    Full Text Available Estimations of tree annual biomass increments are used by a variety of studies related to forest productivity or carbon fluxes. Biomass increment estimations can be easily obtained from diameter surveys or historical diameter reconstructions based on tree-ring records. However, the biomass models rely on the assumption of a constant wood density. Converting volume increment into biomass also requires assumptions on the wood density. Wood density has been largely reported to vary both in time and between trees. In Norway spruce, wood density is known to increase with decreasing ring width. This could lead to underestimating the biomass or carbon deposition in bad years. The variation of wood density between trees has never been discussed but could also contribute to deviations. A modelling approach could attenuate these effects but will also generate errors. Here, a model of wood density variations in Norway spruce and an allometric model of volume growth were developed. We accounted for variations in wood density both between years and between trees, based on specific measurements. We compared the effects of neglecting each variation source on the estimations of annual biomass increment. We also assessed the errors of the biomass increment predictions at tree level, and of the annual productivity at plot level. Our results showed a partial compensation of the decrease in ring width in bad years by the increase in wood density. The underestimation of the biomass increment in those years reached 15%. The errors related to the use of an allometric model of volume growth were modest, around ±15%. The errors related to variations in wood density were much larger, the biggest component being the inter-tree variability. The errors in plot-level annual biomass productivity reached up to 40%, with a full account of all the error sources.

  12. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  13. Moving Forward in Human Cancer Risk Assessment

    OpenAIRE

    Paules, Richard S.; Aubrecht, Jiri; Corvi, Raffaella; Garthoff, Bernward; Kleinjans, Jos C.

    2010-01-01

    Background The current safety paradigm for assessing carcinogenic properties of drugs, cosmetics, industrial chemicals, and environmental exposures relies mainly on in vitro genotoxicity testing followed by 2-year rodent bioassays. This testing battery is extremely sensitive but has low specificity. Furthermore, rodent bioassays are associated with high costs, high animal burden, and limited predictive value for human risks. Objectives We provide a response to a growing appeal for a paradigm ...

  14. Assessing the impact of misclassification error on an epidemiological association between two helminthic infections.

    Directory of Open Access Journals (Sweden)

    Mushfiqur R Tarafder

    2011-03-01

    Full Text Available Polyparasitism can lead to severe disability in endemic populations. Yet, the association between soil-transmitted helminth (STH) and the cumulative incidence of Schistosoma japonicum infection has not been described. The aim of this work was to quantify the effect of misclassification error, which occurs when less than 100% accurate tests are used, in STH and S. japonicum infection status on the estimation of this association. Longitudinal data from 2276 participants in 50 villages in Samar province, Philippines treated at baseline for S. japonicum infection and followed for one year, served as the basis for this analysis. Participants provided 1-3 stool samples at baseline and 12 months later (2004-2005) to detect infections with STH and S. japonicum using the Kato-Katz technique. Variation from day-to-day in the excretion of eggs in feces introduces individual variations in the sensitivity and specificity of the Kato-Katz to detect infection. Bayesian logit models were used to take this variation into account and to investigate the impact of misclassification error on the association between these infections. Uniform priors for sensitivity and specificity of the diagnostic test to detect the three STH and S. japonicum were used. All results were adjusted for age, sex, occupation, and village-level clustering. Without correction for misclassification error, the odds ratios (ORs) between hookworm, Ascaris lumbricoides, and Trichuris trichiura, and S. japonicum infections were 1.28 (95% Bayesian credible interval [BCI]: 0.93, 1.76), 0.91 (95% BCI: 0.66, 1.26), and 1.11 (95% BCI: 0.80, 1.55), respectively, and 2.13 (95% BCI: 1.16, 4.08), 0.74 (95% BCI: 0.43, 1.25), and 1.32 (95% BCI: 0.80, 2.27), respectively, after correction for misclassification error for both exposure and outcome. The misclassification bias increased with decreasing test accuracy. Hookworm infection was found to be associated with increased 12-month cumulative incidence of S. japonicum

  15. Doppler imaging of chemical spots on magnetic Ap/Bp stars. Numerical tests and assessment of systematic errors

    Science.gov (United States)

    Kochukhov, O.

    2017-01-01

    Context. Doppler imaging (DI) is a powerful spectroscopic inversion technique that enables conversion of a line profile time series into a two-dimensional map of the stellar surface inhomogeneities. DI has been repeatedly applied to reconstruct chemical spot topologies of magnetic Ap/Bp stars with the goal of understanding variability of these objects and gaining an insight into the physical processes responsible for spot formation. Aims: In this paper we investigate the accuracy of chemical abundance DI and assess the impact of several different systematic errors on the reconstructed spot maps. Methods: We have simulated spectroscopic observational data for two different Fe spot distributions with a surface abundance contrast of 1.5 dex in the presence of a moderately strong dipolar magnetic field. We then reconstructed chemical maps using different sets of spectral lines and making different assumptions about line formation in the inversion calculations. Results: Our numerical experiments demonstrate that a modern DI code successfully recovers the input chemical spot distributions comprised of multiple circular spots at different latitudes or an element overabundance belt at the magnetic equator. For the optimal reconstruction based on half a dozen spectral intervals, the average reconstruction errors do not exceed 0.10 dex. The errors increase to about 0.15 dex when abundance distributions are recovered from a few and/or blended spectral lines. Ignoring a 2.5 kG dipolar magnetic field in chemical abundance DI leads to an average relative error of 0.2 dex and maximum errors of 0.3 dex. Similar errors are encountered if a DI inversion is carried out neglecting a non-uniform continuum brightness distribution and variation of the local atmospheric structure. None of the considered systematic effects lead to major spurious features in the recovered abundance maps. Conclusions: This series of numerical DI simulations proves that inversions based on one or two spectral

  16. New PAT tools for assessing content uniformity, sampling error, and degree of crystallinity in pharmaceutical tablets

    DEFF Research Database (Denmark)

    Warnecke, Solveig

    overlooked sampling uncertainty that exists in all analytical measurements. The sampling error was studied using an example involving near infrared transmission (NIT) spectroscopy to study the content uniformity of five batches of escitalopram tablets, produced at different active pharmaceutical ingredients...... and the two APIs, it was possible to establish calibrations using partial least squares regression (PLS) on unfolded fluorescence landscapes with relative errors of 9.1 % for FLU and 4.1 % for MEL, respectively. Both fluorescence spectroscopy and terahertz time domain spectroscopy are new tools...... compression forces, and measured with the spectrometer in different orientations. The study showed that a minimum of 18 tablets from each level of API concentrations (90 spectra in total) were required to establish a robust NIT calibration. Further, it was shown that the spectra from tablets with the scored...

  17. Qualitative assessment of visuospatial errors in mercury-exposed Amazonian children

    DEFF Research Database (Denmark)

    Chevrier, Cécile; Sullivan, Kimberly; White, Roberta F.;

    2009-01-01

    In order to better define the effects of methylmercury (MeHg) exposure on neurodevelopment, qualitative error types observed in the responses of exposed children to the Stanford-Binet Copying Test were categorized and quantified using raw data from two studies of 395 Amazonian children aged 7...... with adjustment for confounders. In the combined data set, mercury exposure was negatively associated with scores on the drawing task: a score reduction of 1.2 (s.e., 0.3) points was observed in the children with a hair-mercury concentration above 10 μg/g compared to those with a hair level below 1 μg/g; this effect appeared to be stronger in the younger children. Risk of committing one or more errors of rotation, simplification or perseveration in the drawings increased with hair-mercury concentration in both cultural settings, providing convergent evidence of specific types of MeHg-related neurocognitive...

  18. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
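
    A sketch of the modelling idea, with the pygam library standing in for the authors' GAMs and synthetic temperature and length data replacing the measured individuals; the response and predictor values below are invented.

```python
# Illustrative GAM: smooth terms for temperature and larval length predicting
# a hypothetical "percent of development completed" response.
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(3)
n = 500
temperature = rng.uniform(15, 30, size=n)          # rearing temperature, deg C
length_mm = rng.uniform(2, 18, size=n)             # larval length
development_pct = (
    3 * length_mm + 0.5 * temperature + rng.normal(0, 5, size=n)
).clip(0, 100)

X = np.column_stack([temperature, length_mm])
gam = LinearGAM(s(0) + s(1)).fit(X, development_pct)

new_obs = np.array([[22.0, 12.0]])                 # one new larva
print("predicted % development:", gam.predict(new_obs))
print("95% prediction interval:", gam.prediction_intervals(new_obs, width=0.95))
```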

  19. Error Field Assessment from Driven Rotation of Stable External Kinks at EXTRAP-T2R Reversed Field Pinch

    CERN Document Server

    Volpe, F A; Brunsell, P R; Drake, J R; Olofsson, K E J

    2013-01-01

    A new non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the EXTRAP-T2R reversed field pinch. Stable and marginally stable external kink modes of toroidal mode number n=10 and n=8, respectively, were generated, and their rotation sustained, by means of rotating magnetic perturbations of the same n. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the kink modes were observed to rotate non-uniformly and be modulated in amplitude. This behavior was used to precisely infer the amplitude and approximately estimate the toroidal phase of the EF. A subsequent scan permitted to optimize the toroidal phase. The technique was tested against deliberately applied as well as intrinsic error fields of n=8 and 10. Corrections equal and opposite to the estimated error fields were applied. The efficacy of the error compensation was indicated by the increased discharge duration and more uniform mo...

  20. TO ASSESS THE IMPACT OF THE EDUCATIONAL INTERVENTIONS FOR UNCORRECTED REFRACTIVE ERROR AMONG SCHOOL CHILDREN IN MEERUT.

    Directory of Open Access Journals (Sweden)

    A Davey

    2013-08-01

    Full Text Available Background: Eyes are the best God gift to our body as vision is important in development as it allows interaction with the environment. Appropriate correction prevents the development of childhood amblyopia and enables better performance at school. Later in life carrier of the youth is very much dependent on the visual acuity. Therefore study aims to find the prevalence of the uncorrected refractive error among school children in the age group of 13-16 years and factors contributing to the refractive error. Methods: It is institutional based crossed sectional study in English medium private school children in the age group 13-16 years. For one week they were screened for visual acuity from a Standard Snellen Chart. On pre-informed date educational intervention was conducted; they were followed up after one week of intervention for final assessment. Results: Prevalence of the uncorrected refractive error was 14.8% Distance for watching TV less than 3 m and computer less than 1 m were highly significant. Prolonged duration of TV watching for more than 4 hours in a day and indulgence in computers for more than one year were also significant. In follow up after education intervention, all the children with uncorrected refractive error except 2 had paid visit to ophthalmologist. Conclusion: Community based screening through school is most appropriate strategy to detect early any visual impairment, but school based approach must include teachers orientation also for prevention of eye disease.

  1. On the Assessment of Monte Carlo Error in Simulation-Based Statistical Analyses.

    Science.gov (United States)

    Koehler, Elizabeth; Brown, Elizabeth; Haneuse, Sebastien J-P A

    2009-05-01

    Statistical experiments, more commonly referred to as Monte Carlo or simulation studies, are used to study the behavior of statistical methods and measures under controlled situations. Whereas recent computing and methodological advances have permitted increased efficiency in the simulation process, known as variance reduction, such experiments remain limited by their finite nature and hence are subject to uncertainty; when a simulation is run more than once, different results are obtained. However, virtually no emphasis has been placed on reporting the uncertainty, referred to here as Monte Carlo error, associated with simulation results in the published literature, or on justifying the number of replications used. These deserve broader consideration. Here we present a series of simple and practical methods for estimating Monte Carlo error as well as determining the number of replications required to achieve a desired level of accuracy. The issues and methods are demonstrated with two simple examples, one evaluating operating characteristics of the maximum likelihood estimator for the parameters in logistic regression and the other in the context of using the bootstrap to obtain 95% confidence intervals. The results suggest that in many settings, Monte Carlo error may be more substantial than traditionally thought.
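
    As a rough illustration of the kind of calculation the authors describe, the sketch below estimates the Monte Carlo standard error of a simulated quantity and the number of replications needed to reach a target precision. The simulate_once function and the target value are hypothetical stand-ins.

```python
# Hedged sketch: Monte Carlo standard error and required replications.
# "simulate_once" is a hypothetical stand-in for one run of a simulation study.
import numpy as np

rng = np.random.default_rng(42)

def simulate_once():
    # e.g. the sample mean of 50 standard-normal draws (illustration only)
    return rng.normal(0.0, 1.0, size=50).mean()

R = 1000
estimates = np.array([simulate_once() for _ in range(R)])

mc_se = estimates.std(ddof=1) / np.sqrt(R)        # Monte Carlo standard error
print(f"estimate = {estimates.mean():.4f} +/- {1.96 * mc_se:.4f} (95% MC interval)")

# Replications needed so the Monte Carlo SE falls below a chosen target.
target_se = 0.001
print("replications needed:", int(np.ceil((estimates.std(ddof=1) / target_se) ** 2)))
```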

  2. Assessment of Human Exposure to ENMs.

    Science.gov (United States)

    Jiménez, Araceli Sánchez; van Tongeren, Martie

    2017-01-01

    Human exposure assessment of engineered nanomaterials (ENMs) is hampered, among other factors, by the difficulty of differentiating ENMs from other nanomaterials (incidental to processes or naturally occurring) and by the lack of a single metric that can be used for health risk assessment. It is important that exposure assessment is carried out throughout the entire life-cycle, as releases can occur at different stages of the product life-cycle, from synthesis and manufacture of the nano-enabled product (occupational exposure), through professional and consumer use of the nano-enabled product (consumer exposure), to the end of life. Occupational exposure surveys should follow a tiered approach, increasing in complexity in terms of the instruments used and the sampling strategy applied at higher tiers, in order to tailor the exposure assessment to the specific materials and workplace exposure scenarios and to reduce uncertainty in the assessment of exposure. Assessment of consumer exposure and of releases from end-of-life processes currently relies on release testing of nano-enabled products in laboratory settings.

  3. Error Detection-Based Model to Assess Educational Outcomes in Crisis Resource Management Training: A Pilot Study.

    Science.gov (United States)

    Bouhabel, Sarah; Kay-Rivest, Emily; Nhan, Carol; Bank, Ilana; Nugus, Peter; Fisher, Rachel; Nguyen, Lily Hp

    2017-06-01

    Otolaryngology-head and neck surgery (OTL-HNS) residents face a variety of difficult, high-stress situations, which may occur early in their training. Since these events occur infrequently, simulation-based learning has become an important part of residents' training and is already well established in fields such as anesthesia and emergency medicine. In the domain of OTL-HNS, it is gradually gaining in popularity. Crisis Resource Management (CRM), a program adapted from the aviation industry, aims to improve outcomes of crisis situations by attempting to mitigate human errors. Some examples of CRM principles include cultivating situational awareness; promoting proper use of available resources; and improving rapid decision making, particularly in high-acuity, low-frequency clinical situations. Our pilot project sought to integrate CRM principles into an airway simulation course for OTL-HNS residents, but most important, it evaluated whether learning objectives were met, through use of a novel error identification model.

  4. Distinguishing science from pseudoscience in school psychology: science and scientific thinking as safeguards against human error.

    Science.gov (United States)

    Lilienfeld, Scott O; Ammirati, Rachel; David, Michal

    2012-02-01

    Like many domains of professional psychology, school psychology continues to struggle with the problem of distinguishing scientific from pseudoscientific and otherwise questionable clinical practices. We review evidence for the scientist-practitioner gap in school psychology and provide a user-friendly primer on science and scientific thinking for school psychologists. Specifically, we (a) outline basic principles of scientific thinking, (b) delineate widespread cognitive errors that can contribute to belief in pseudoscientific practices within school psychology and allied professions, (c) provide a list of 10 key warning signs of pseudoscience, illustrated by contemporary examples from school psychology and allied disciplines, and (d) offer 10 user-friendly prescriptions designed to encourage scientific thinking among school psychology practitioners and researchers. We argue that scientific thinking, although fallible, is ultimately school psychologists' best safeguard against a host of errors in thinking.

  5. Assessment of the potential impact of a reminder system on the reduction of diagnostic errors: a quasi-experimental study

    Directory of Open Access Journals (Sweden)

    Taylor Paul M

    2006-04-01

    Full Text Available Abstract Background: Computerized decision support systems (DSS) have mainly focused on improving clinicians' diagnostic accuracy in unusual and challenging cases. However, since diagnostic omission errors may predominantly result from incomplete workup in routine clinical practice, the provision of appropriate patient- and context-specific reminders may have a greater impact on patient safety. In this experimental study, a mix of easy and difficult simulated cases was used to assess the impact of a novel diagnostic reminder system (ISABEL) on the quality of clinical decisions made by various grades of clinicians during acute assessment. Methods: Subjects of different grades (consultants, registrars, senior house officers and medical students) assessed a balanced set of 24 simulated cases on a trial website. Subjects recorded their clinical decisions for the cases (differential diagnosis, test-ordering and treatment) before and after system consultation. A panel of two pediatric consultants independently provided gold standard responses for each case, against which the subjects' quality of decisions was measured. The primary outcome measure was the change in the count of diagnostic errors of omission (DEO). A more sensitive assessment of the system's impact was achieved using specific quality scores; the additional consultation time resulting from DSS use was also calculated. Results: 76 subjects (18 consultants, 24 registrars, 19 senior house officers and 15 students) completed a total of 751 case episodes. The mean count of DEO fell from 5.5 to 5.0 across all subjects (repeated measures ANOVA). Conclusion: The provision of patient- and context-specific reminders has the potential to reduce diagnostic omissions across all subject grades for a range of cases. This study suggests a promising role for the use of future reminder-based DSS in the reduction of diagnostic error.

  6. Is human muscle spindle afference dependent on perceived size of error in visual tracking?

    Science.gov (United States)

    Kakuda, N; Wessberg, J; Vallbo, A B

    1997-04-01

    Impulses of 16 muscle spindle afferents from finger extensor muscles were recorded from the radial nerve along with electromyographic (EMG) activity and kinematics of joint movement. Twelve units were classified as Ia and 4 as II spindle afferents. Subjects were requested to perform precision movements at a single metacarpophalangeal joint in an indirect visual tracking task. Similar movements were executed under two different conditions, i.e. with high and low error gain. The purpose was to explore whether different precision demands were associated with different spindle firing rates. With high error gain, a small but significantly higher impulse rate was found in pooled data from Ia afferents during lengthening movements but not during shortening movements, nor with II afferents. EMG was also significantly higher with high error gain in recordings with Ia afferents. When the effect of EMG was factored out, using partial correlation analysis, the significant difference in Ia firing rate vanished. The findings suggest that fusimotor drive as well as skeletomotor activity were both marginally higher when the precision demand was higher, whereas no indication of independent fusimotor adjustments was found. These results are discussed with respect to data from behaving animals and the role of fusimotor independence in various limb muscles proposed.

  7. El error en la práctica médica: una presencia ineludible Human error in medical practice: an unavoidable presence

    OpenAIRE

    Gladis Adriana Vélez Álvarez

    2006-01-01

    Erring is a human characteristic and a learning mechanism, yet it becomes a threat to humans themselves in some settings, such as aviation and medicine. Some data are presented on the frequency of error in medicine, its ubiquity and the circumstances that favor it, together with a reflection on how error has been confronted and why it is not discussed openly. It is proposed that the first step toward learning from error is to accept it as a presence...

  8. Assessing the Effects of Climate Variability on Orange Yield in Florida to Reduce Production Forecast Errors

    Science.gov (United States)

    Concha Larrauri, P.

    2015-12-01

    Orange production in Florida has experienced a decline over the past decade. Hurricanes in 2004 and 2005 greatly affected production, almost to the same degree as the strong freezes that occurred in the 1980s. The spread of citrus greening disease after the hurricanes has also contributed to the reduction in Florida orange production. The occurrence of hurricanes and diseases cannot easily be predicted, but the additional effects of climate on orange yield can be studied and incorporated into existing production forecasts that are based on physical surveys, such as the October Citrus forecast issued every year by the USDA. Specific climate variables occurring before and after the October forecast is issued can affect flowering, orange drop rates, growth, and maturation, and can contribute to the forecast error. Here we present a methodology to incorporate local climate variables to predict the error of the USDA's orange production forecast, and we study the local effects of climate on yield in different counties in Florida. This information can give farmers insight into what to expect during the orange production cycle and can help supply chain managers better plan their strategies.
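
    A minimal sketch of the general idea, regressing the forecast error on local climate predictors, is shown below. It assumes pandas and scikit-learn; the file name, predictor names and the error unit (boxes) are hypothetical, not the authors' actual variables.

```python
# Hedged sketch: regress the October forecast error on local climate variables.
# Assumes pandas and scikit-learn; the file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("florida_orange_forecast.csv")
X = df[["pre_forecast_rain_mm", "post_forecast_min_temp_c", "dry_spell_days"]]
y = df["forecast_error_boxes"]          # October forecast minus final production

model = LinearRegression().fit(X, y)
print(dict(zip(X.columns, model.coef_.round(2))))        # climate sensitivities
print("predicted error, latest season:", model.predict(X.tail(1))[0].round(0))
```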

  9. Analysis of Task Types and Error Types of the Human Actions Involved in the Human-related Unplanned Reactor Trip Events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Park, Jin Kyun; Jung, Won Dea

    2008-02-15

    This report provides the task types and error types involved in the unplanned reactor trip events that have occurred during 1986 - 2006. The events that were caused by the secondary system of the nuclear power plants amount to 67 %, and the remaining 33 % was by the primary system. The contribution of the activities of the plant personnel was identified as the following order: corrective maintenance (25.7 %), planned maintenance (22.8 %), planned operation (19.8 %), periodic preventive maintenance (14.9 %), response to a transient (9.9 %), and design/manufacturing/installation (9.9%). According to the analysis of error modes, the error modes such as control failure (22.2 %), wrong object (18.5 %), omission (14.8 %), wrong action (11.1 %), and inadequate (8.3 %) take up about 75 % of all the unplanned trip events. The analysis of the cognitive functions involved showed that the planning function makes the highest contribution to the human actions leading to unplanned reactor trips, and it is followed by the observation function (23.4%), the execution function (17.8 %), and the interpretation function (10.3 %). The results of this report are to be used as important bases for development of the error reduction measures or development of the error mode prediction system for the test and maintenance tasks in nuclear power plants.

  10. Agreement between Computerized and Human Assessment of Performance on the Ruff Figural Fluency Test

    Science.gov (United States)

    Elderson, Martin F.; Pham, Sander; van Eersel, Marlise E. A.; Wolffenbuttel, Bruce H. R.; Kok, Johan; Gansevoort, Ron T.; Tucha, Oliver; van der Klauw, Melanie M.; Slaets, Joris P. J.

    2016-01-01

    The Ruff Figural Fluency Test (RFFT) is a sensitive test for nonverbal fluency suitable for all age groups. However, assessment of performance on the RFFT is time-consuming and may be affected by interrater differences. Therefore, we developed computer software specifically designed to analyze performance on the RFFT by automated pattern recognition. The aim of this study was to compare assessment by the new software with conventional assessment by human raters. The software was developed using data from the Lifelines Cohort Study and validated in an independent cohort of the Prevention of Renal and Vascular End Stage Disease (PREVEND) study. The total study population included 1,761 persons: 54% men; mean age (SD), 58 (10) years. All RFFT protocols were assessed by the new software and two independent human raters (criterion standard). The mean number of unique designs (SD) was 81 (29) and the median number of perseverative errors (interquartile range) was 9 (4 to 16). The intraclass correlation coefficient (ICC) between the computerized and human assessment was 0.994 (95%CI, 0.988 to 0.996; p<0.001) and 0.991 (95%CI, 0.990 to 0.991; p<0.001) for the number of unique designs and perseverative errors, respectively. The mean difference (SD) between the computerized and human assessment was -1.42 (2.78) and +0.02 (1.94) points for the number of unique designs and perseverative errors, respectively. This was comparable to the agreement between two independent human assessments: ICC, 0.995 (0.994 to 0.995; p<0.001) and 0.985 (0.982 to 0.988; p<0.001), and mean difference (SD), -0.44 (2.98) and +0.56 (2.36) points for the number of unique designs and perseverative errors, respectively. We conclude that the agreement between the computerized and human assessment was very high and comparable to the agreement between two independent human assessments. Therefore, the software is an accurate tool for the assessment of performance on the RFFT. PMID:27661083

  11. Analysis of Human Errors in Industrial Incidents and Accidents for Improvement of Work Safety

    DEFF Research Database (Denmark)

    Leplat, J.; Rasmussen, Jens

    1984-01-01

    recommendations, the method proposed identifies very explicit countermeasures. Improvements require a change in human decisions during equipment design, work planning, or the execution itself. The use of a model of human behavior drawing a distinction between automated skill-based behavior, rule-based 'know-how' and knowledge-based analysis is proposed for identification of the human decisions which are most sensitive to improvements...

  12. Assessment of error in aerosol optical depth measured by AERONET due to aerosol forward scattering

    Science.gov (United States)

    Sinyuk, Alexander; Holben, Brent N.; Smirnov, Alexander; Eck, Thomas F.; Slutsker, Ilya; Schafer, Joel S.; Giles, David M.; Sorokin, Mikhail

    2012-12-01

    We present an analysis of the effect of aerosol forward scattering on the accuracy of aerosol optical depth (AOD) measured by CIMEL Sun photometers. The effect is quantified in terms of AOD and solar zenith angle using radiative transfer modeling. The analysis is based on aerosol size distributions derived from multi-year climatologies of AERONET aerosol retrievals. The study shows that the modeled error is lower than AOD calibration uncertainty (0.01) for the vast majority of AERONET level 2 observations, ∼99.53%. Only ∼0.47% of the AERONET database corresponding mostly to dust aerosol with high AOD and low solar elevations has larger biases. We also show that observations with extreme reductions in direct solar irradiance do not contribute to level 2 AOD due to low Sun photometer digital counts below a quality control cutoff threshold.

  13. [The potentials for errors in the hygienic assessment of the general vibrations in tractors].

    Science.gov (United States)

    Ivanovich, E; Goranova, L; Enev, S

    1991-01-01

    The data on the parameters of the general vibrations in tractors are comparatively scanty and contradictory. The present work analyzes the most frequent omissions and errors in the measurement and evaluation of the general vibrations, as well as the factors that can affect their intensity: design and technological features, technical condition, degree of machine wear, seat construction, damping qualities and adjustment, travel speed, terrain, and the type of agricultural activity performed. The need to take these factors into account when measuring the general vibrations and interpreting the data hygienically, and to record daily, weekly and overall exposure precisely in order to define the total vibration load, is underlined.

  14. Ideal point error for model assessment in data-driven river flow forecasting

    Directory of Open Access Journals (Sweden)

    C. W. Dawson

    2012-08-01

    Full Text Available When analysing the performance of hydrological models in river forecasting, researchers use a number of diverse statistics. Although some statistics appear to be used more regularly in such analyses than others, there is a distinct lack of consistency in evaluation, making studies undertaken by different authors or performed at different locations difficult to compare in a meaningful manner. Moreover, even within individual reported case studies, substantial contradictions are found to occur between one measure of performance and another. In this paper we examine the ideal point error (IPE metric – a recently introduced measure of model performance that integrates a number of recognised metrics in a logical way. Having a single, integrated measure of performance is appealing as it should permit more straightforward model inter-comparisons. However, this is reliant on a transferrable standardisation of the individual metrics that are combined to form the IPE. This paper examines one potential option for standardisation: the use of naive model benchmarking.
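
    The sketch below illustrates the underlying idea, standardising several error metrics against a naive persistence benchmark and collapsing them into one distance-from-the-ideal-point score. It is a conceptual illustration only, not the published IPE formula; the flow values are hypothetical.

```python
# Hedged sketch: benchmark-standardised metrics combined into one score.
# Conceptual illustration only (not the published IPE formula); values hypothetical.
import numpy as np

def metrics(obs, sim):
    err = sim - obs
    return np.array([np.sqrt(np.mean(err**2)),    # RMSE
                     np.mean(np.abs(err)),        # MAE
                     abs(np.mean(err))])          # |bias|

obs = np.array([3.1, 2.8, 4.0, 5.2, 4.8, 3.9])    # observed flows
sim = np.array([3.0, 3.0, 4.3, 5.0, 4.5, 4.1])    # model forecasts
naive = np.roll(obs, 1)                           # persistence benchmark
naive[0] = obs[0]

scaled = metrics(obs, sim) / metrics(obs, naive)  # 1.0 = no better than naive
print("distance from ideal point (origin):",
      round(float(np.sqrt(np.mean(scaled**2))), 3))
```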

  15. Error field assessment from driven rotation of stable external kinks at EXTRAP-T2R reversed field pinch

    Science.gov (United States)

    Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.

    2013-04-01

    A new non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the EXTRAP-T2R reversed field pinch. Stable and marginally stable external kink modes of toroidal mode number n = 10 and n = 8, respectively, were generated, and their rotation sustained, by means of rotating magnetic perturbations of the same n. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the kink modes were observed to rotate non-uniformly and be modulated in amplitude. This behaviour was used to precisely infer the amplitude and approximately estimate the toroidal phase of the EF. A subsequent scan permitted optimization of the toroidal phase. The technique was tested against deliberately applied as well as intrinsic EFs of n = 8 and 10. Corrections equal and opposite to the estimated error fields were applied. The efficacy of the error compensation was indicated by the increased discharge duration and more uniform mode rotation in response to a uniformly rotating perturbation. The results are in good agreement with theory, and the extension to lower n, to tearing modes and to tokamaks, including ITER, is discussed.

  16. ASSESSMENT OF HUMAN EXPOSURE TO TOLUENE DIISOCYANATE

    Directory of Open Access Journals (Sweden)

    OLIVIA ANCA RUSU

    2011-03-01

    Full Text Available Assessment of human exposure to toluene diisocyanate. Toluene diisocyanate (TDI), an aromatic compound, may be dangerous to human health. Diisocyanates have wide industrial use in the fabrication of flexible and rigid foams, fibers, elastomers, and coatings such as paints and varnishes. Isocyanates are known skin and respiratory sensitizers, and proper engineering controls should be in place to prevent exposure to isocyanate liquid and vapor; exposure to TDI vapors is well documented to increase asthma risk. The study focused on the exposure of workers and nearby populations to toluene diisocyanate at a polyurethane foam factory located in Baia Mare, Romania. Workplace air measurements were performed in different departments of the plant, using either fixed-point sampling or personal monitoring. Sampling at four different locations in the town of Baia Mare was carried out during and after the foaming process. TDI sampling was performed on silica cartridges followed by GC-MS analysis. The TDI concentration at the workplace was lower than 0.035 mg/m³, the permissible exposure limit, while in the city TDI concentrations were below 0.20 μg/m³. A health assessment of a group of 49 workers was based on a questionnaire interview, determination of TDI antibodies and lung function tests. Data collected to date do not show any negative effects of TDI on the employees' health. Since the plant had only recently begun operating, continuous workplace and ambient air TDI monitoring, along with workers' health surveillance, is deemed necessary.

  17. Minimising human error in malaria rapid diagnosis: clarity of written instructions and health worker performance.

    Science.gov (United States)

    Rennie, Waverly; Phetsouvanh, Rattanaxay; Lupisan, Socorro; Vanisaveth, Viengsay; Hongvanthong, Bouasy; Phompida, Samlane; Alday, Portia; Fulache, Mila; Lumagui, Richard; Jorgensen, Pernille; Bell, David; Harvey, Steven

    2007-01-01

    The usefulness of rapid diagnostic tests (RDT) in malaria case management depends on the accuracy of the diagnoses they provide. Despite their apparent simplicity, previous studies indicate that RDT accuracy is highly user-dependent. As malaria RDTs will frequently be used in remote areas with little supervision or support, minimising mistakes is crucial. This paper describes the development of new instructions (job aids) to improve health worker performance, based on observations of common errors made by remote health workers and villagers in preparing and interpreting RDTs, in the Philippines and Laos. Initial preparation using the instructions provided by the manufacturer was poor, but improved significantly with the job aids (e.g. correct use both of the dipstick and cassette increased in the Philippines by 17%). However, mistakes in preparation remained commonplace, especially for dipstick RDTs, as did mistakes in interpretation of results. A short orientation on correct use and interpretation further improved accuracy, from 70% to 80%. The results indicate that apparently simple diagnostic tests can be poorly performed and interpreted, but provision of clear, simple instructions can reduce these errors. Preparation of appropriate instructions and training as well as monitoring of user behaviour are an essential part of rapid test implementation.

  18. Human Reliability in Probabilistic Safety Assessments; Fiabilidad Humana en los Analisis Probabilisticos de Seguridad

    Energy Technology Data Exchange (ETDEWEB)

    Nunez Mendez, J.

    1989-07-01

    A growing interest in environmental issues is nowadays evident in our country. It implies assessing the risk involved in industrial processes and installations in order to determine whether they are within acceptable limits. In these safety assessments, among which the PSA (Probabilistic Safety Assessment) stands out, the role played by the human being in the system is one of the most relevant subjects, a relevance that has been demonstrated by past accidents. However, in Spain there are no manuals specifically dedicated to assessing the human contribution to risk in the frame of PSAs. This report aims to improve this situation by providing: a) a theoretical background to help the reader understand the nature of human error, b) a guide to carrying out a Human Reliability Analysis and c) a selected overview of the techniques and methodologies currently applied in this area. (Author) 20 refs.

  19. Research on the Mechanism of Human Error in Ship Building%舰船建造中人因失误机理的研究

    Institute of Scientific and Technical Information of China (English)

    石小岗; 周宏; 莫一峰

    2014-01-01

    The complexity of the man-machine-environment system in ship building makes the probability of human error events during construction high. Preventing and reducing human error and improving human reliability have therefore become key to ensuring safe shipbuilding. This paper studies the characteristics of human error, classifies the human errors that occur during ship construction according to human cognitive behavior, summarizes the factors that influence human error in shipbuilding, and puts forward effective measures to prevent such errors.

  20. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.
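
    A common quantitative use of PSFs in HRA is to multiply a nominal human error probability by factor-specific multipliers (the SPAR-H method works in this spirit). The sketch below shows that calculation; the multiplier values are hypothetical and are not taken from this paper.

```python
# Hedged sketch: nominal HEP adjusted by PSF multipliers (SPAR-H-like in spirit).
# Multiplier values are hypothetical, not taken from this paper.
nominal_hep = 1.0e-3

psf_multipliers = {
    "available_time": 1.0,    # directly measurable (e.g. time to complete the task)
    "stress": 2.0,            # elevated
    "complexity": 2.0,        # moderately complex
    "fitness_for_duty": 5.0,  # indirect PSF, inferred here from fatigue measures
}

composite = 1.0
for m in psf_multipliers.values():
    composite *= m

hep = min(nominal_hep * composite, 1.0)           # cap at a valid probability
print(f"adjusted HEP = {hep:.1e} (composite multiplier = {composite:g})")
```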

  1. An overview of the evolution of human reliability analysis in the context of probabilistic risk assessment.

    Energy Technology Data Exchange (ETDEWEB)

    Bley, Dennis C. (Buttonwood Consulting Inc., Oakton, VA); Lois, Erasmia (U.S. Nuclear Regulatory Commission, Washington, DC); Kolaczkowski, Alan M. (Science Applications International Corporation, Eugene, OR); Forester, John Alan; Wreathall, John (John Wreathall and Co., Dublin, OH); Cooper, Susan E. (U.S. Nuclear Regulatory Commission, Washington, DC)

    2009-01-01

    Since the Reactor Safety Study in the early 1970's, human reliability analysis (HRA) has been evolving towards a better ability to account for the factors and conditions that can lead humans to take unsafe actions and thereby provide better estimates of the likelihood of human error for probabilistic risk assessments (PRAs). The purpose of this paper is to provide an overview of recent reviews of operational events and advances in the behavioral sciences that have impacted the evolution of HRA methods and contributed to improvements. The paper discusses the importance of human errors in complex human-technical systems, examines why humans contribute to accidents and unsafe conditions, and discusses how lessons learned over the years have changed the perspective and approach for modeling human behavior in PRAs of complicated domains such as nuclear power plants. It is argued that it has become increasingly more important to understand and model the more cognitive aspects of human performance and to address the broader range of factors that have been shown to influence human performance in complex domains. The paper concludes by addressing the current ability of HRA to adequately predict human failure events and their likelihood.

  2. Biochemical analysis of six genetic variants of error-prone human DNA polymerase ι involved in translesion DNA synthesis.

    Science.gov (United States)

    Kim, Jinsook; Song, Insil; Jo, Ara; Shin, Joo-Ho; Cho, Hana; Eoff, Robert L; Guengerich, F Peter; Choi, Jeong-Yun

    2014-10-20

    DNA polymerase (pol) ι is the most error-prone among the Y-family polymerases that participate in translesion synthesis (TLS). Pol ι can bypass various DNA lesions, e.g., N(2)-ethyl(Et)G, O(6)-methyl(Me)G, 8-oxo-7,8-dihydroguanine (8-oxoG), and an abasic site, though frequently with low fidelity. We assessed the biochemical effects of six reported genetic variations of human pol ι on its TLS properties, using the recombinant pol ι (residues 1-445) proteins and DNA templates containing a G, N(2)-EtG, O(6)-MeG, 8-oxoG, or abasic site. The Δ1-25 variant, which is the N-terminal truncation of 25 residues resulting from an initiation codon variant (c.3G > A) and also is the formerly misassigned wild-type, exhibited considerably higher polymerase activity than wild-type with Mg(2+) (but not with Mn(2+)), coinciding with its steady-state kinetic data showing a ∼10-fold increase in kcat/Km for nucleotide incorporation opposite templates (only with Mg(2+)). The R96G variant, which lacks a R96 residue known to interact with the incoming nucleotide, lost much of its polymerase activity, consistent with the kinetic data displaying 5- to 72-fold decreases in kcat/Km for nucleotide incorporation opposite templates either with Mg(2+) or Mn(2+), except for that opposite N(2)-EtG with Mn(2+) (showing a 9-fold increase for dCTP incorporation). The Δ1-25 variant bound DNA 20- to 29-fold more tightly than wild-type (with Mg(2+)), but the R96G variant bound DNA 2-fold less tightly than wild-type. The DNA-binding affinity of wild-type, but not of the Δ1-25 variant, was ∼7-fold stronger with 0.15 mM Mn(2+) than with Mg(2+). The results indicate that the R96G variation severely impairs most of the Mg(2+)- and Mn(2+)-dependent TLS abilities of pol ι, whereas the Δ1-25 variation selectively and substantially enhances the Mg(2+)-dependent TLS capability of pol ι, emphasizing the potential translational importance of these pol ι genetic variations, e.g., individual differences

  3. Low aerial imagery - an assessment of georeferencing errors and the potential for use in environmental inventory

    Science.gov (United States)

    Smaczyński, Maciej; Medyńska-Gulij, Beata

    2017-06-01

    Unmanned aerial vehicles are increasingly being used in close range photogrammetry. Real-time observation of the Earth's surface and the photogrammetric images obtained are used as material for surveying and environmental inventory. This study was conducted on a small area (approximately 1 ha), for which the classical method of topographic mapping is not accurate enough, while the geodetic method of topographic surveying is an overly precise measurement technique for the purpose of inventorying natural environment components. The author proposes using unmanned aerial vehicle technology and tying the obtained images to a control point network established with the aid of GNSS technology. Georeferencing the acquired images and using them to create a photogrammetric model of the studied area enabled calculations that yielded a total root mean square error below 9 cm. Comparing the real lengths of the vectors connecting the control points with the lengths calculated from the photogrammetric model made it possible to confirm the calculated RMSE and to demonstrate the usefulness of UAV technology for observing terrain components for the purpose of environmental inventory. Such environmental components include, among others, elements of road infrastructure and green areas, but also changes in the location of moving pedestrians and vehicles, as well as other changes in the natural environment that are not registered on classical base maps or topographic maps.
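
    The accuracy check described, comparing surveyed control-point vector lengths with lengths measured on the photogrammetric model, reduces to a simple RMSE calculation, sketched below with hypothetical numbers.

```python
# Hedged sketch: RMSE between GNSS-surveyed and model-derived vector lengths.
# The lengths are hypothetical numbers, not the study's measurements.
import numpy as np

true_m  = np.array([12.410, 18.052, 25.330, 31.874, 40.115])   # surveyed lengths
model_m = np.array([12.355, 18.121, 25.280, 31.930, 40.050])   # photogrammetric model

rmse_cm = 100 * np.sqrt(np.mean((model_m - true_m) ** 2))
print(f"RMSE = {rmse_cm:.1f} cm")                 # the study reports < 9 cm overall
```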

  4. The role of usability in the evaluation of accidents: human error or design flaw?

    Science.gov (United States)

    Correia, Walter; Soares, Marcelo; Barros, Marina; Campos, Fábio

    2012-01-01

    This article aims to highlight the role of consumer product companies in the occurrence and extent of accidents involving these types of products, and to show how such undesired events influence consumers' and users' purchasing decisions. Drawing on references, interviews and case studies, the article demonstrates how poorly designed products and design errors can influence the usage behavior of users, leading to accidents, and can also damage a company's image. By making these issues explicit, the article seeks to raise awareness of reliable usability and of the safe use of consumer products among users and consumers in general, and to help safeguard their rights under a legal system of consumer protection, such as the CDC (Code of Consumer Protection).

  5. Electrophysiological correlates of reward prediction error recorded in the human prefrontal cortex

    Science.gov (United States)

    Oya, Hiroyuki; Adolphs, Ralph; Kawasaki, Hiroto; Bechara, Antoine; Damasio, Antonio; Howard, Matthew A.

    2005-01-01

    Lesion and functional imaging studies have shown that the ventromedial prefrontal cortex is critically involved in the avoidance of risky choices. However, detailed descriptions of the mechanisms that underlie the establishment of such behaviors remain elusive, due in part to the spatial and temporal limitations of available research techniques. We investigated this issue by recording directly from prefrontal depth electrodes in a rare neurosurgical patient while he performed the Iowa Gambling Task, and we concurrently measured behavioral, autonomic, and electrophysiological responses. We found a robust alpha-band component of event-related potentials that reflected the mismatch between expected outcomes and actual outcomes in the task, correlating closely with the reward-related error obtained from a reinforcement learning model of the patient's choice behavior. The finding implicates this brain region in the acquisition of choice bias by means of a continuous updating of expectations about reward and punishment. PMID:15928095
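
    The reward prediction error referred to here is the mismatch term of a simple reinforcement learning model. The sketch below shows a Rescorla-Wagner-style update for a four-deck gambling task; the deck payoffs, learning rate and random choice rule are illustrative assumptions, not the authors' fitted model.

```python
# Hedged sketch: Rescorla-Wagner-style reward prediction error on a 4-deck task.
# Deck payoffs, learning rate and the random choice rule are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
deck_means = [-25.0, -25.0, 25.0, 25.0]          # hypothetical expected payoffs
value = np.zeros(4)                              # learned expectation per deck
alpha = 0.1                                      # learning rate

for _ in range(200):
    choice = int(rng.integers(4))                # illustration: random choices
    reward = rng.normal(deck_means[choice], 10.0)
    rpe = reward - value[choice]                 # reward prediction error (mismatch)
    value[choice] += alpha * rpe                 # update expectation toward outcome

print("learned deck values:", np.round(value, 1))
```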

  6. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  7. Human factors assessment mechanical compression tools

    Energy Technology Data Exchange (ETDEWEB)

    Murphy, C. [BC Research Inc., Vancouver, BC (Canada)

    1999-09-01

    The design and use of mechanical compression tools in electrical distribution functions were examined from the point of view of effects of design and use of tools on human operators. Various alternative tools such as manual compression tools, battery operated tools, wedge pressure tools, hydraulic tools, and insulating piercing connectors were also examined for purposes of comparison. Results of the comparative assessment were summarized and a tool satisfaction ratings table was produced for Burndy MD6, Huskie-Robo (REC 258) and Ampact (small) tools, rating level of effort, fatigue experienced, tool mass, force required to crimp, ease of use, comfort while using the tool, maneuverability, and overall satisfaction. Both the battery operated tool as well as the wedge pressure tool have been found to have ergonomic advantages over the mechanical compression tool.

  8. Effects of Error Correction during Assessment Probes on the Acquisition of Sight Words for Students with Moderate Intellectual Disabilities

    Science.gov (United States)

    Waugh, Rebecca E.; Alberto, Paul A.; Fredrick, Laura D.

    2011-01-01

    Simultaneous prompting is an errorless learning strategy designed to reduce the number of errors students make; however, research has shown a disparity in the number of errors students make during instructional versus probe trials. This study directly examined the effects of error correction versus no error correction during probe trials on the…

  9. The Error Is the Clue: Breakdown In Human-Machine Interaction

    Science.gov (United States)

    2006-01-01

    prolonged vowel on line 35 above, utterance 16 in Figure 1. After two unsuccessful attempts to book a train the user tries one more time. At that point she has... "is because it is sought in fusion" writes Levinas in his essay "The Other in Proust" [10]. Levinas meant fusion of humans, of views, of perspectives... styles' or to get their hats and leave. Thus on one hand, we don't need to work for fusion between humans and machines by frenetically trying to

  10. Assessing the performance of data assimilation algorithms which employ linear error feedback.

    Science.gov (United States)

    Mallia-Parfitt, Noeleene; Bröcker, Jochen

    2016-10-01

    Data assimilation means to find an (approximate) trajectory of a dynamical model that (approximately) matches a given set of observations. A direct evaluation of the trajectory against the available observations is likely to yield a too optimistic view of performance, since the observations were already used to find the solution. A possible remedy is presented which simply consists of estimating that optimism, thereby giving a more realistic picture of the "out of sample" performance. Our approach is inspired by methods from statistical learning employed for model selection and assessment purposes in statistics. Applying similar ideas to data assimilation algorithms yields an operationally viable means of assessment. The approach can be used to improve the performance of models or the data assimilation itself. This is illustrated by optimising the feedback gain for data assimilation employing linear feedback.

  11. Human Mars Missions: Cost Driven Architecture Assessments

    Science.gov (United States)

    Donahue, Benjamin

    1998-01-01

    This report investigates various methods of reducing the cost of space transportation systems for human Mars missions. The reference mission for this task is a mission currently under study at NASA, called the Mars Design Reference Mission, characterized by in-situ propellant production at Mars. This study mainly consists of comparative evaluations against the reference mission with a view to selecting strategies that would reduce the cost of the Mars program as a whole. One of the objectives is to understand the implications of certain Mars architectures, mission modes, vehicle configurations, and potentials for vehicle reusability. The evaluations start with the 2011-2014 conjunction missions, which were characterized by an abort-to-the-surface mission abort philosophy. Variations within this mission architecture, as well as other architectures not predicated on an abort-to-surface philosophy, were evaluated. Specific emphasis has been placed on identifying and assessing overall mission risk. Impacts that Mars mission vehicles might place upon the Space Station, if it were to be used as an assembly or operations base, are also discussed. Because of the short duration of this study, only a few propulsion elements were addressed (nuclear thermal, cryogenic oxygen-hydrogen, cryogenic oxygen-methane, and aerocapture). Primary ground rules and assumptions were taken from NASA material used in Marshall Space Flight Center's own assessment done in 1997.

  12. Cardiovascular pressure measurement in safety assessment studies: technology requirements and potential errors.

    Science.gov (United States)

    Sarazan, R Dustan

    2014-01-01

    these factors are understood, a pressure sensing and measurement system can be selected that is optimized for the experimental model being studied, thus eliminating errors or inaccurate results. Copyright © 2014. Published by Elsevier Inc.

  13. Condition-based Human Reliability Assessment for digitalized control room

    Energy Technology Data Exchange (ETDEWEB)

    Kang, H. G.; Jang, S. C.; Eom, H. S.; Ha, J. J

    2005-04-01

    In safety-critical systems, failure to generate an actuation signal is caused by the concurrent failure of the automated systems and of the operator action. These two sources of safety signals are correlated in a complicated way: failures of sensors or automated systems deprive the human operator of necessary information and create error-forcing contexts, such as the loss of the corresponding alarms and indications. In conventional analyses, Human Error Probabilities (HEPs) are estimated under the assumption of a 'normal condition of indications and alarms'. To construct a more realistic signal-generation failure model, these more complicated conditions have to be considered. In this study, we performed two kinds of investigation to address this issue. We performed analytic calculations to estimate the effect of sensor failures on system unavailability and plant risk. For single-parameter safety signals, the analysis reveals that quantification of the HEP should focus on the 'no alarm from the automatic system and corresponding indications unavailable' situation. This study also proposes a Condition-Based Human Reliability Assessment (CBHRA) method to address these complicated conditions in a practical way. We apply the CBHRA method to the manual actuation of safety features such as reactor trip and auxiliary feedwater actuation in Korean Standard Nuclear Power Plants. With the conventional single-HEP method it is very hard to consider multiple HE conditions. The merit of CBHRA is clearly shown in the application to AFAS generation, where no dominating HE condition exists. In this case, even if the HE conditions are carefully investigated, the single-HEP method cannot accommodate the multiple conditions in a fault tree. On the other hand, the application result for the reactor trip in SLOCA shows that if there is a
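
    The core of the condition-based idea can be written as a weighted sum: the overall HEP is the sum over information conditions of the probability of each condition times the HEP given that condition. The sketch below shows this with hypothetical probabilities.

```python
# Hedged sketch: condition-weighted HEP. Probabilities are hypothetical placeholders.
conditions = {
    # condition: (probability of being in this condition, HEP given the condition)
    "alarm and indications available":   (0.990, 1.0e-3),
    "alarm lost, indications available": (0.008, 1.0e-2),
    "no alarm, indications unavailable": (0.002, 5.0e-1),
}

total_hep = sum(p * hep for p, hep in conditions.values())
print(f"condition-weighted HEP = {total_hep:.2e}")
```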

  14. The future of human rights impact assessments of trade agreements

    NARCIS (Netherlands)

    Walker, S.M.

    2009-01-01

    The Future of Human Rights Impact Assessments of Trade Agreements develops a methodology for human rights impact assessments of trade agreements and considers whether there is any value in using the methodology on a sustained basis to ensure that the human dimensions of international trade are taken

  15. Common errors in textbook descriptions of muscle fiber size in nontrained humans.

    Science.gov (United States)

    Chalmers, Gordon R; Row, Brandi S

    2011-09-01

    Exercise science and human anatomy and physiology textbooks commonly report that type IIB muscle fibers have the largest cross-sectional area of the three fiber types. These descriptions of muscle fiber sizes do not match with the research literature examining muscle fibers in young adult nontrained humans. For men, most commonly type IIA fibers were significantly larger than other fiber types (six out of 10 cases across six different muscles). For women, either type I, or both I and IIA muscle fibers were usually significantly the largest (five out of six cases across four different muscles). In none of these reports were type IIB fibers significantly larger than both other fiber types. In 27 studies that did not include statistical comparisons of mean fiber sizes across fiber types, in no cases were type IIB or fast glycolytic fibers larger than both type I and IIA, or slow oxidative and fast oxidative glycolytic fibers. The likely reason for mistakes in textbook descriptions of human muscle fiber sizes is that animal data were presented without being labeled as such, and without any warning that there are interspecies differences in muscle fiber properties. Correct knowledge of muscle fiber sizes may facilitate interpreting training and aging adaptations.

  16. Avoiding a Systematic Error in Assessing Fat Graft Survival in the Breast with Repeated Magnetic Resonance Imaging

    DEFF Research Database (Denmark)

    Glovinski, Peter Viktor; Herly, Mikkel; Müller, Felix C

    2016-01-01

    Several techniques for measuring breast volume (BV) are based on examining the breast on magnetic resonance imaging. However, when techniques designed to measure total BV are used to quantify BV changes, for example, after fat grafting, a systematic error is introduced because BV changes lead to contour alterations of the breast. The volume of the altered breast includes not only the injected volume but also tissue previously surrounding the breast. Therefore, the quantitative difference in BV before and after augmentation will differ from the injected volume. Here, we present a new technique for assessing BV changes to determine fat graft retention, which may be useful for evaluating and comparing available surgical techniques for breast augmentation and reconstruction using fat grafting.

  17. Identification of chromosomal errors in human preimplantation embryos with oligonucleotide DNA microarray.

    Directory of Open Access Journals (Sweden)

    Lifeng Liang

    Full Text Available A previous study comparing the performance of different platforms for DNA microarray found that the oligonucleotide (oligo microarray platform containing 385K isothermal probes had the best performance when evaluating dosage sensitivity, precision, specificity, sensitivity and copy number variations border definition. Although oligo microarray platform has been used in some research fields and clinics, it has not been used for aneuploidy screening in human embryos. The present study was designed to use this new microarray platform for preimplantation genetic screening in the human. A total of 383 blastocysts from 72 infertility patients with either advanced maternal age or with previous miscarriage were analyzed after biopsy and microarray. Euploid blastocysts were transferred to patients and clinical pregnancy and implantation rates were measured. Chromosomes in some aneuploid blastocysts were further analyzed by fluorescence in-situ hybridization (FISH to evaluate accuracy of the results. We found that most (58.1% of the blastocysts had chromosomal abnormalities that included single or multiple gains and/or losses of chromosome(s, partial chromosome deletions and/or duplications in both euploid and aneuploid embryos. Transfer of normal euploid blastocysts in 34 cycles resulted in 58.8% clinical pregnancy and 54.4% implantation rates. Examination of abnormal blastocysts by FISH showed that all embryos had matching results comparing microarray and FISH analysis. The present study indicates that oligo microarray conducted with a higher resolution and a greater number of probes is able to detect not only aneuploidy, but also minor chromosomal abnormalities, such as partial chromosome deletion and/or duplication in human embryos. Preimplantation genetic screening of the aneuploidy by DNA microarray is an advanced technology used to select embryos for transfer and improved embryo implantation can be obtained after transfer of the screened normal

  18. 船舶事故中人因失误机理的研究%Study on the Human Error Mechanism in Ship Accident

    Institute of Scientific and Technical Information of China (English)

    彭陈; 张圆圆

    2015-01-01

    Due to the complexity of the man-machine-environment system involved, the probability of human error in ship accidents is high, and reducing human error has therefore become important for the prevention of ship accidents. This essay analyzes the causes of human error, constructs a human error model and a mathematical model of human reliability in ship accidents, and offers an outlook on future research directions for human error in ship accidents.

  19. Structural basis of error-prone replication and stalling at a thymine base by human DNA polymerase ι

    Energy Technology Data Exchange (ETDEWEB)

    Kirouac, Kevin N.; Ling, Hong; (UWO)

    2009-06-30

    Human DNA polymerase iota (pol ι) is a unique member of Y-family polymerases, which preferentially misincorporates nucleotides opposite thymines (T) and halts replication at T bases. The structural basis of the high error rates remains elusive. We present three crystal structures of pol ι complexed with DNA containing a thymine base, paired with correct or incorrect incoming nucleotides. A narrowed active site supports a pyrimidine to pyrimidine mismatch and excludes Watson-Crick base pairing by pol ι. The template thymine remains in an anti conformation irrespective of incoming nucleotides. Incoming ddATP adopts a syn conformation with reduced base stacking, whereas incorrect dGTP and dTTP maintain anti conformations with normal base stacking. Further stabilization of dGTP by H-bonding with Gln59 of the finger domain explains the preferential T to G mismatch. A template 'U-turn' is stabilized by pol ι and the methyl group of the thymine template, revealing the structural basis of T stalling. Our structural and domain-swapping experiments indicate that the finger domain is responsible for pol ι's high error rates on pyrimidines and determines the incorporation specificity.

  20. Assessment of error rates in acoustic monitoring with the R package monitoR

    Science.gov (United States)

    Katz, Jonathan; Hafner, Sasha D.; Donovan, Therese

    2016-01-01

    Detecting population-scale reactions to climate change and land-use change may require monitoring many sites for many years, a process that is suited for an automated system. We developed and tested monitoR, an R package for long-term, multi-taxa acoustic monitoring programs. We tested monitoR with two northeastern songbird species: black-throated green warbler (Setophaga virens) and ovenbird (Seiurus aurocapilla). We compared detection results from monitoR in 52 10-minute surveys recorded at 10 sites in Vermont and New York, USA to a subset of songs identified by a human that were of a single song type and had visually identifiable spectrograms (e.g. a signal:noise ratio of at least 10 dB: 166 out of 439 total songs for black-throated green warbler, 502 out of 990 total songs for ovenbird). monitoR’s automated detection process uses a ‘score cutoff’, which is the minimum match needed for an unknown event to be considered a detection and results in a true positive, true negative, false positive or false negative detection. At the chosen score cut-offs, monitoR correctly identified presence for black-throated green warbler and ovenbird in 64% and 72% of the 52 surveys using binary point matching, respectively, and 73% and 72% of the 52 surveys using spectrogram cross-correlation, respectively. Of individual songs, 72% of black-throated green warbler songs and 62% of ovenbird songs were identified by binary point matching. Spectrogram cross-correlation identified 83% of black-throated green warbler songs and 66% of ovenbird songs. False positive rates were  for song event detection.
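
    The score-cutoff logic can be illustrated as below: candidate events whose template-matching score reaches the cutoff are counted as detections and compared against human-verified songs. monitoR itself is an R package; this is a plain Python illustration with hypothetical scores, not monitoR's API.

```python
# Hedged sketch of the score-cutoff logic in plain Python (monitoR itself is an
# R package; this is not its API). Scores and truth labels are hypothetical.
score_cutoff = 0.35

# (peak template-matching score, verified-by-human) for candidate events
candidates = [(0.62, True), (0.41, True), (0.33, True), (0.48, False), (0.21, False)]

tp = sum(s >= score_cutoff and truth for s, truth in candidates)
fp = sum(s >= score_cutoff and not truth for s, truth in candidates)
fn = sum(s < score_cutoff and truth for s, truth in candidates)

print(f"TP={tp}  FP={fp}  FN={fn}  song detection rate={tp / (tp + fn):.0%}")
```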

  1. Assessing stand water use in four coastal wetland forests using sapflow techniques: annual estimates, errors and associated uncertainties

    Science.gov (United States)

    Krauss, Ken W.; Duberstein, Jamie A.; Conner, William H.

    2015-01-01

    Forests comprise approximately 37% of the terrestrial land surface and influence global water cycling. However, very little attention has been directed towards understanding environmental impacts on stand water use (S) or in identifying rates of S from specific forested wetlands. Here, we use sapflow techniques to address two separate but linked objectives: (1) determine S in four, hydrologically distinctive South Carolina (USA) wetland forests from 2009–2010 and (2) describe potential error, uncertainty and stand-level variation associated with these assessments. Sapflow measurements were made from a number of tree species for approximately 2–8 months over 2 years to initiate the model, which was applied to canopy trees (DBH > 10–20 cm). We determined that S in three healthy forested wetlands varied from 1.97–3.97 mm day−1 or 355–687 mm year−1 when scaled. In contrast, saltwater intrusion impacted individual tree physiology and size class distributions on a fourth site, which decreased S to 0.61–1.13 mm day−1 or 110–196 mm year−1. The primary sources of error in estimations using sapflow probes would relate to calibration of probes and standardization relative to no flow periods and accounting for accurate sapflow attenuation with radial depth into the sapwood by species and site. Such inherent variation in water use among wetland forest stands makes small differences in S (<200 mm year−1) difficult to detect statistically through modelling, even though small differences may be important to local water cycling. These data also represent some of the first assessments of S from temperate, coastal forested wetlands along the Atlantic coast of the USA.
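
    Scaling sap flow to stand water use amounts to summing per-tree water use over the canopy trees and dividing by the plot ground area (1 kg of water per m² equals 1 mm of depth). The sketch below shows that conversion with hypothetical values; it is not the authors' exact scaling procedure.

```python
# Hedged sketch: scale per-tree sap flow to stand water use (S) in mm per day.
# Generic conversion with hypothetical values, not the authors' exact procedure.
tree_sap_flow_kg_day = [45.0, 62.5, 38.2, 71.9, 55.4]   # canopy trees in the plot
plot_area_m2 = 400.0                                    # 20 m x 20 m plot

# 1 kg of water spread over 1 m^2 is a 1 mm depth, so kg/m^2/day == mm/day.
S_mm_day = sum(tree_sap_flow_kg_day) / plot_area_m2
print(f"S = {S_mm_day:.2f} mm/day")
```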

  2. De novo centriole formation in human cells is error-prone and does not require SAS-6 self-assembly.

    Science.gov (United States)

    Wang, Won-Jing; Acehan, Devrim; Kao, Chien-Han; Jane, Wann-Neng; Uryu, Kunihiro; Tsou, Meng-Fu Bryan

    2015-11-26

    Vertebrate centrioles normally propagate through duplication, but in the absence of preexisting centrioles, de novo synthesis can occur. Consistently, centriole formation is thought to strictly rely on self-assembly, involving self-oligomerization of the centriolar protein SAS-6. Here, through reconstitution of de novo synthesis in human cells, we surprisingly found that normal looking centrioles capable of duplication and ciliation can arise in the absence of SAS-6 self-oligomerization. Moreover, whereas canonically duplicated centrioles always form correctly, de novo centrioles are prone to structural errors, even in the presence of SAS-6 self-oligomerization. These results indicate that centriole biogenesis does not strictly depend on SAS-6 self-assembly, and may require preexisting centrioles to ensure structural accuracy, fundamentally deviating from the current paradigm.

  3. A dynamic human health risk assessment system.

    Science.gov (United States)

    Prasad, Umesh; Singh, Gurmit; Pant, A B

    2012-05-01

    An online human health risk assessment system (OHHRAS) has been designed and developed as a prototype database-driven system and made available to the population of India through a website, www.healthriskindia.in. OHHRAS provides three utilities: health survey, health status, and bio-calculators. The first utility, health survey, operates on a database that is built up dynamically and returns the desired output to the user on the basis of the input criteria entered into the system; the second utility, health status, produces output on the basis of a dynamic questionnaire and the selected answers, generating health status reports from multiple matches set up on the advice of medical experts; and the third utility, bio-calculators, is very useful for scientists and researchers as an online statistical analysis tool that gives greater accuracy and saves the user's time. The whole system and database-driven website have been designed and developed using software such as PHP, MySQL, Dreamweaver and C++, and are publicly available through the database-driven website (www.healthriskindia.in), which is very useful for researchers, academia, students, and the general public across all sectors.

  4. How to assess microvascular structure in humans.

    Science.gov (United States)

    Rizzoni, Damiano; Aalkjaer, Christian; De Ciuceis, Carolina; Porteri, Enzo; Rossini, Claudia; Rosei, Claudia Agabiti; Sarkar, Annamaria; Rosei, Enrico Agabiti

    2011-12-01

    Structural alterations of subcutaneous small resistance arteries, as indicated by an increased media to lumen ratio, are frequently present in hypertensive and/or diabetic patients. However, the evaluation of microvascular structure is not an easy task. Among the methods that may be applied to humans, plethysmographic evaluation of small arteries and wire or pressure micromyography were extensively used in the last decades. Media to lumen ratio of small arteries evaluated by micromyography was demonstrated to possess a strong prognostic significance; however, its extensive evaluation is limited by the invasiveness of the assessment, since a biopsy of subcutaneous fat is needed. Non-invasive approaches were then proposed, including capillaroscopy, which provides information about microvascular rarefaction. Recently, the interest of investigators has focused on the retinal microvascular bed. In particular, a non-invasive measurement of wall thickness to internal lumen ratio of retinal arterioles using scanning laser Doppler flowmetry has been recently introduced. Preliminary data suggest a fairly good agreement between this approach and micromyographic measurements, generally considered the gold standard approach. Therefore, the evaluation of microvascular structure is progressively moving from bench to bedside, and it could represent, in the immediate future, an evaluation to be performed in all hypertensive patients, in order to obtain a better stratification of cardiovascular risk. © 2011 Adis Data Information BV. All rights reserved.

  5. TECHNOLOGY VS NATURE: HUMAN ERROR IN DEALING WITH NATURE IN CRICHTON'S JURASSIC PARK

    Directory of Open Access Journals (Sweden)

    Sarah Prasasti

    2000-01-01

    Witnessing the euphoria of the era of biotechnology in the late twentieth century, Crichton exposes the theme of biotechnology in his works. In Jurassic Park, he voices his concern about the impact of using biotechnology to preserve nature and its living creatures. He further describes how the purpose of preserving nature and its creatures has turned out to be destructive. This article discusses Crichton's main character, Hammond, who attempts to control nature by genetically recreating extinct fossil animals, an attempt that ignores his human limitations. Although he is confident that he is equipped with the technology, he forgets to get along with nature. His way of using technology to accomplish his purpose proves not to be in harmony with nature. As a consequence, nature fights back, and he is conquered.

  6. ASSESSMENT OF HUMAN RESOURCES FOR REGIONAL INNOVATION ACTIVITY

    Directory of Open Access Journals (Sweden)

    R. R. Lukyanova

    2010-03-01

    The paper deals with the issues of human resource development for innovation activity. Concepts of labor and human resources are surveyed. An integral index for assessing human resources for regional innovation activity has been developed, and the Russian regions have been assessed on its basis. Development tendencies of modern human resources for innovation activity in Russia are revealed.

  7. The R package "sperrorest" : Parallelized spatial error estimation and variable importance assessment for geospatial machine learning

    Science.gov (United States)

    Schratz, Patrick; Herrmann, Tobias; Brenning, Alexander

    2017-04-01

    Computational and statistical prediction methods such as the support vector machine have gained popularity in remote-sensing applications in recent years and are often compared to more traditional approaches like maximum-likelihood classification. However, the accuracy assessment of such predictive models in a spatial context needs to account for the presence of spatial autocorrelation in geospatial data by using spatial cross-validation and bootstrap strategies instead of their now more widely used non-spatial equivalents. The R package sperrorest by A. Brenning [IEEE International Geoscience and Remote Sensing Symposium, 1, 374 (2012)] provides a generic interface for performing (spatial) cross-validation of any statistical or machine-learning technique available in R. Since spatial statistical models as well as flexible machine-learning algorithms can be computationally expensive, parallel computing strategies are required to perform cross-validation efficiently. The most recent major release of sperrorest therefore comes with two new features (aside from improved documentation). The first is the parallelized version of sperrorest(), parsperrorest(). This function features two parallel modes to greatly speed up cross-validation runs. Both parallel modes are platform independent and provide progress information. par.mode = 1 relies on the pbapply package and, depending on the platform, calls parallel::mclapply() or parallel::parApply() in the background; forking is used on Unix systems, while Windows systems use a cluster approach for parallel execution. par.mode = 2 uses the foreach package to perform parallelization, which uses a different way of cluster parallelization than the parallel package does. In summary, the robustness of parsperrorest() is increased with the implementation of two independent parallel modes. A new way of partitioning the data in sperrorest is provided by partition.factor.cv(). This function gives the user the
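    The central idea behind the spatial cross-validation that sperrorest implements, holding out spatially contiguous blocks rather than randomly scattered observations so that spatial autocorrelation cannot inflate accuracy estimates, can be sketched independently of the package. The Python sketch below is a generic illustration of that partitioning idea only; it is not the sperrorest or parsperrorest() API, and the grid-based partitioning, error metric, and toy data are assumptions.

```python
# Generic sketch of spatial block cross-validation (not the sperrorest API):
# observations are grouped into k x k spatial blocks by their coordinates and
# each block is held out in turn, so test points stay spatially separated
# from the training data and autocorrelation cannot leak across the split.
import numpy as np

def spatial_block_cv_error(x, y, coords, fit, predict, k=4):
    """Mean held-out MSE over k*k coordinate blocks."""
    cuts = np.linspace(0, 1, k + 1)[1:-1]
    xb = np.digitize(coords[:, 0], np.quantile(coords[:, 0], cuts))
    yb = np.digitize(coords[:, 1], np.quantile(coords[:, 1], cuts))
    blocks = xb * k + yb

    errors = []
    for b in np.unique(blocks):
        test = blocks == b
        model = fit(x[~test], y[~test])          # train without the held-out block
        pred = predict(model, x[test])           # predict on the held-out block
        errors.append(np.mean((pred - y[test]) ** 2))
    return float(np.mean(errors))

# Example with a trivial "predict the training mean" model
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))
x = rng.normal(size=(500, 3))
y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500)
err = spatial_block_cv_error(
    x, y, coords,
    fit=lambda xt, yt: yt.mean(),
    predict=lambda model, xs: np.full(len(xs), model),
)
print(f"spatial-block CV MSE: {err:.2f}")
```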

  8. Inborn errors of the Krebs cycle: a group of unusual mitochondrial diseases in human.

    Science.gov (United States)

    Rustin, P; Bourgeron, T; Parfait, B; Chretien, D; Munnich, A; Rötig, A

    1997-08-22

    Krebs cycle disorders constitute a group of rare human diseases that present an amazing complexity considering our current knowledge of Krebs cycle function and biogenesis. Acting as a turntable of cell metabolism, the cycle is ubiquitously distributed in the organism, and its enzyme components are encoded by supposedly typical housekeeping genes. However, the investigation of patients presenting specific defects of Krebs cycle enzymes, resulting from deleterious mutations of the corresponding genes, forces a reconsideration of this simple view by revealing organ-specific impairments, mostly affecting the neuromuscular system. This often spares organs whose metabolism also depends strongly on mitochondrial energy metabolism, such as the heart, kidney, or liver. Additionally, in some patients, a complex pattern of tissue-specific enzyme defects was also observed. The lack of additional functional copies of Krebs cycle genes suggests that this complex expression pattern should be ascribed to tissue-specific regulation of transcriptional and/or translational activities, together with variable cell adaptability to Krebs cycle functional defects.

  9. Medication prescribing errors and associated factors at the pediatric wards of Dessie Referral Hospital, Northeast Ethiopia

    OpenAIRE

    Zeleke, Abebe; Chanie, Tesfahun; Woldie, Mirkuzie

    2014-01-01

    Background: Medication error is a common and preventable cause of medical error and occurs as a result of either human error or a system flaw. The consequences of such errors are more harmful and frequent among pediatric patients. Objective: To assess medication prescribing errors and associated factors in the pediatric wards of Dessie Referral Hospital, Northeast Ethiopia. Methods: A cross-sectional study was carried out in the pediatric wards of Dessie Referral Hospital from February 17 to Marc...

  10. Implementation of pharmacists' interventions and assessment of medication errors in an intensive care unit of a Chinese tertiary hospital.

    Science.gov (United States)

    Jiang, Sai-Ping; Chen, Jian; Zhang, Xing-Guo; Lu, Xiao-Yang; Zhao, Qing-Wei

    2014-01-01

    Pharmacist interventions and medication errors potentially differ between the People's Republic of China and other countries. This study aimed to report interventions administered by clinical pharmacists and analyze medication errors in an intensive care unit (ICU) in a tertiary hospital in People's Republic of China. A prospective, noncomparative, 6-month observational study was conducted in a general ICU of a tertiary hospital in the People's Republic of China. Clinical pharmacists performed interventions to prevent or resolve medication errors during daily rounds and documented all of these interventions and medication errors. Such interventions and medication errors were categorized and then analyzed. During the 6-month observation period, a total of 489 pharmacist interventions were reported. Approximately 407 (83.2%) pharmacist interventions were accepted by ICU physicians. The incidence rate of medication errors was 124.7 per 1,000 patient-days. Improper drug frequency or dosing (n=152, 37.3%), drug omission (n=83, 20.4%), and potential or actual occurrence of adverse drug reaction (n=54, 13.3%) were the three most commonly committed medication errors. Approximately 339 (83.4%) medication errors did not pose any risks to the patients. Antimicrobials (n=171, 35.0%) were the most frequent type of medication associated with errors. Medication errors during prescription frequently occurred in an ICU of a tertiary hospital in the People's Republic of China. Pharmacist interventions were also efficient in preventing medication errors.

  11. Disease model: a simplified approach for analysis and management of human error: a quality improvement study.

    Science.gov (United States)

    Ahmad-Sabry, Mohammad H I

    2015-04-01

    During 6 weeks, we had 4 incidents of echocardiography machine malfunction. Three occurred in the operating room, where intravenous (IV) fluid spilled over the keyboard of the machine and burned out the keyboard's electrical connections, and 1 in the cardiology department, where coffee was spilled on the machine. The malfunctions had an economic impact on the hospital (about $20,000), in addition to the nonavailability of the ultrasound (US) machine for the cardiac patient after the incident until the end of the case, and for subsequent cases until the machine was repaired. We analyzed the incidents using a simplified approach. The first incident happened when changing an empty IV fluid bag for a full one led to spillage of some fluid onto the keyboard. The second was due to the use of a needle to depressurize a medication bottle for a continuous IV drip, and the third was due to disconnection of the IV set from the bottle during transfer of the patient from the operating room to the intensive care unit. The fundamental problem is of course that fluid is harmful to the US machine. In addition, the machines sit between the patient bed and the anesthesia machine, which means that IV poles are on each side of the patient bed, leaving the machine vulnerable to fluid spillage. We considered a machine modification, to create a protective cover, but this was hindered by the complexity of the US machine keyboard, technical and financial challenges, and the time it would take to achieve. Second, we considered creating a protocol: placing the machine in a position with no IV poles nearby, and transferring the machine out of the room whenever moving the patient would expose it to IV fluid. Third, we worked on changing human behavior; to do this, we announced the protocol in our anesthesia conference to make it known to everyone. We taught residents, fellows, and staff about the new

  12. EVALUACIÓN DE COMPETENCIAS EN NIÑOS: UN ERROR DE APRECIACIÓN Y PERSPECTIVA (COMPETENCE ASSESSMENT IN CHILDREN: AN ERROR OF JUDGEMENT AND PERSPECTIVE)

    Directory of Open Access Journals (Sweden)

    Climént Bonilla Juan

    2010-12-01

    The movement of competence-based education, in vogue with the current globalization processes, in addition to being promoted in the labor and vocational areas where it had its origin, has become part of elementary education policies and initiatives. The main purpose of this article is to provide evidence of this trend's inconsistencies in basic education (pre-school, primary, and lower secondary) through the analysis of the different aspects involved. The article is the product of theoretical research, based on the review of an extensive referential framework on issues directly and indirectly related to people's competencies, oriented to the stated purpose. Three central issues of discussion and analysis arose from the research process: (a) individual competencies from the educational and labor perspectives; (b) the logical support for the understanding, development, and application of (standardized and non-standardized) competencies; and (c) the other (or "hidden") side of individual competencies. Evidence, arguments, and conclusions support the thesis that there are serious omissions and errors in the policies and practices of competence assessment in children.

  13. Progress in human exposure assessment for biocidal products

    NARCIS (Netherlands)

    Hemmen, J.J. van

    2004-01-01

    An important shortcoming in our present knowledge required for risk assessment of biocidal products is the assessment of human exposure. This knowledge gap has been filled in a preliminary fashion with the TNsG on human exposure to biocidal products (available from the ECB website). Explicit User gu

  14. Human exposure assessment: Approaches for chemicals (REACH) and biocides (BPD)

    NARCIS (Netherlands)

    Hemmen, J.J. van; Gerritsen-Ebben, R.

    2008-01-01

    The approaches that are indicated in the various guidance documents for the assessment of human exposure for chemicals and biocides are summarised. This reflects the TNsG (Technical notes for Guidance) version 2: human exposure assessment for biocidal products (1) under the BPD (Biocidal Products Di

  15. [Analysis, identification and correction of some errors of model refseqs appeared in NCBI Human Gene Database by in silico cloning and experimental verification of novel human genes].

    Science.gov (United States)

    Zhang, De-Li; Ji, Liang; Li, Yan-Da

    2004-05-01

    Through homology BLAST of our cloned genes against the non-redundant (nr) database, we found that human genome coding regions annotated by computers in the public domain contain many different kinds of errors, including insertions, deletions, or mutations of one base pair or of a sequence segment at the cDNA level, or various combinations of these errors. We used three main approaches to validate and identify errors in the model genes appearing in the NCBI GENOME ANNOTATION PROJECT REFSEQs: (1) evaluating the degree of support from human EST clustering and BLAST against the draft human genome; (2) preparing chromosomal mappings of our verified genes and analyzing the genomic organization of the genes, where all exon/intron boundaries should be consistent with the GT/AG rule and consensus sequences surrounding the splice boundaries should be found; and (3) experimental verification by RT-PCR of the in silico cloned genes, followed by cDNA sequencing. We then used three further lines of evidence as reference: (1) web searching or in silico cloning of the genes of different species, especially mouse and rat homologous genes, to judge the existence of the gene by ontology; (2) using the released genes in the public domain as a standard, which should be highly homologous to our verified genes, especially the released human genes appearing in the NCBI GENOME ANNOTATION PROJECT REFSEQs, we tried to clone, for each released gene, a highly homologous complete gene according to the strategy developed in this paper; if we could not obtain it, our verified gene may be correct and the released gene in the public domain may be wrong; and (3) to find more evidence, we verified our cloned genes by RT-PCR or hybridization techniques. Here we list some errors we found in the NCBI GENOME ANNOTATION PROJECT REFSEQs: (1) a base inserted into the ORF by mistake, causing a frame shift of the coded amino acids; in detail, a base in the ORF of a gene is a redundant insertion, which causes a reading frame

  16. Human Rights within Education: Assessing the Justifications

    Science.gov (United States)

    McCowan, Tristan

    2012-01-01

    While respect for human rights has long been endorsed as a goal of education, only recently has significant attention been paid to the need to incorporate rights within educational processes. Current support for human rights within education, however, has a variety of motivations. This paper provides a theoretical exploration of these diverse…

  18. Cognition Analysis of Human Errors in ATC Based on HERA-JANUS Model

    Institute of Scientific and Technical Information of China (English)

    吴聪; 解佳妮; 杜红兵; 袁乐平

    2012-01-01

    Classification and analysis of human errors form a basis for the study of human factors in ATM systems. Drawing on professional knowledge of ATM and cognitive psychology theory, the principle and flowchart of the HERA-JANUS model, developed jointly by the European Aviation Safety Agency and the Federal Aviation Administration, are analyzed in detail in order to study controllers' errors more systematically. An unsafe ATC incident in China was investigated using the model, and three human errors made by a controller were identified. These errors were analyzed in three respects: human error type, human error cognition, and influencing factors. Twenty-one causal factors of human error were ultimately obtained for the unsafe occurrence. The results show that the model can analyze controllers' errors comprehensively and at a deep level, and that its classification scheme facilitates statistics on ATC human errors.

  19. Implementation of pharmacists’ interventions and assessment of medication errors in an intensive care unit of a Chinese tertiary hospital

    Directory of Open Access Journals (Sweden)

    Jiang SP

    2014-10-01

    Background: Pharmacist interventions and medication errors potentially differ between the People's Republic of China and other countries. This study aimed to report interventions administered by clinical pharmacists and analyze medication errors in an intensive care unit (ICU) in a tertiary hospital in the People's Republic of China. Method: A prospective, noncomparative, 6-month observational study was conducted in a general ICU of a tertiary hospital in the People's Republic of China. Clinical pharmacists performed interventions to prevent or resolve medication errors during daily rounds and documented all of these interventions and medication errors. Such interventions and medication errors were categorized and then analyzed. Results: During the 6-month observation period, a total of 489 pharmacist interventions were reported. Approximately 407 (83.2%) pharmacist interventions were accepted by ICU physicians. The incidence rate of medication errors was 124.7 per 1,000 patient-days. Improper drug frequency or dosing (n=152, 37.3%), drug omission (n=83, 20.4%), and potential or actual occurrence of adverse drug reaction (n=54, 13.3%) were the three most commonly committed medication errors. Approximately 339 (83.4%) medication errors did not pose any risks to the patients. Antimicrobials (n=171, 35.0%) were the most frequent type of medication associated with errors. Conclusion: Medication errors during prescription frequently occurred in an ICU of a tertiary hospital in the People's Republic of China. Pharmacist interventions were also efficient in preventing medication errors. Keywords: pharmacist, medication error, prevalence rate, type, severity, intensive care

  20. Quantification of human error probability of CREAM in cognitive control mode

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 宫二玲; 谢红卫

    2011-01-01

    Human error has become a main factor reducing the reliability and safety of human-machine systems, and therefore deserves special attention. For this reason, the quantification of human error probability, a key ingredient of human reliability analysis (HRA), is the research topic of this paper. The basic Cognitive Reliability and Error Analysis Method (CREAM), a widely accepted HRA method, is first introduced together with the basic theory it involves, followed by the detailed steps for quantifying human error probability. Considering that the cognitive behavior mode provided by CREAM should be continuous, two methods are put forward for HRA practitioners to determine probabilistic control modes, based on a Bayesian network and on fuzzy logic, respectively. A method is then needed to quantify the human error probability under such probabilistic control modes. For this purpose, the lognormal function is taken as the probability density function of the human error probability within each cognitive behavior mode, and the probability density function under a probabilistic cognitive behavior mode is taken as a linear combination of the functions of the individual modes. The human error probability in the probabilistic mode is quantified through theoretical inference, and a Monte Carlo algorithm is applied to improve computational efficiency. Finally, the validity of the method is demonstrated by a sample study that illustrates the full procedure.
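    The mixture construction described in the abstract can be illustrated with a short Monte Carlo sketch: a lognormal density for the human error probability (HEP) within each CREAM control mode, combined linearly with weights standing in for the mode probabilities from the Bayesian-network or fuzzy-logic step. The interval bounds below are the commonly cited CREAM control-mode intervals, and the weights are hypothetical, not the paper's calibrated values.

```python
# Monte Carlo sketch of HEP quantification under a probabilistic control mode:
# the HEP within each CREAM control mode is given a lognormal density, and the
# overall density is a weighted mixture over modes. The interval bounds below
# are the commonly cited CREAM control-mode intervals; the mode weights are
# hypothetical stand-ins for the output of the Bayesian-net or fuzzy-logic step.
import numpy as np

rng = np.random.default_rng(0)

# (lower bound, upper bound) of the HEP interval for each control mode
modes = {
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3,   1e-1),
    "opportunistic": (1e-2,   0.5),
    "scrambled":     (1e-1,   1.0),
}
# Hypothetical mode probabilities (must sum to 1)
weights = {"strategic": 0.10, "tactical": 0.60, "opportunistic": 0.25, "scrambled": 0.05}

def lognormal_params(lo, hi, z=1.6449):
    """Lognormal (mu, sigma) such that [lo, hi] is the central 90% interval."""
    mu = (np.log(lo) + np.log(hi)) / 2.0
    sigma = (np.log(hi) - np.log(lo)) / (2.0 * z)
    return mu, sigma

n = 100_000
labels = rng.choice(list(modes), size=n, p=[weights[m] for m in modes])
samples = np.empty(n)
for mode, (lo, hi) in modes.items():
    mu, sigma = lognormal_params(lo, hi)
    idx = labels == mode
    samples[idx] = rng.lognormal(mu, sigma, idx.sum())
samples = np.minimum(samples, 1.0)  # a probability cannot exceed 1

print(f"mean HEP = {samples.mean():.3e}, median HEP = {np.median(samples):.3e}")
```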

  1. A Model-based Framework for Risk Assessment in Human-Computer Controlled Systems

    Science.gov (United States)

    Hatanaka, Iwao

    2000-01-01

    The rapid growth of computer technology and innovation has played a significant role in the rise of computer automation of human tasks in modern production systems across all industries. Although the rationale for automation has been to eliminate "human error" or to relieve humans from manual repetitive tasks, various computer-related hazards and accidents have emerged as a direct result of increased system complexity attributed to computer automation. The risk assessment techniques utilized for electromechanical systems are not suitable for today's software-intensive systems or complex human-computer controlled systems. This thesis will propose a new systemic model-based framework for analyzing risk in safety-critical systems where both computers and humans are controlling safety-critical functions. A new systems accident model will be developed based upon modern systems theory and human cognitive processes to better characterize system accidents, the role of human operators, and the influence of software in its direct control of significant system functions. Better risk assessments will then be achievable through the application of this new framework to complex human-computer controlled systems.

  2. Modeling and Quantification of Team Performance in Human Reliability Analysis for Probabilistic Risk Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Jeffrey C. Joe; Ronald L. Boring

    2014-06-01

    Probabilistic Risk Assessment (PRA) and Human Reliability Assessment (HRA) are important technical contributors to the United States (U.S.) Nuclear Regulatory Commission’s (NRC) risk-informed and performance based approach to regulating U.S. commercial nuclear activities. Furthermore, all currently operating commercial NPPs in the U.S. are required by federal regulation to be staffed with crews of operators. Yet, aspects of team performance are underspecified in most HRA methods that are widely used in the nuclear industry. There are a variety of "emergent" team cognition and teamwork errors (e.g., communication errors) that are 1) distinct from individual human errors, and 2) important to understand from a PRA perspective. The lack of robust models or quantification of team performance is an issue that affects the accuracy and validity of HRA methods and models, leading to significant uncertainty in estimating HEPs. This paper describes research that has the objective to model and quantify team dynamics and teamwork within NPP control room crews for risk informed applications, thereby improving the technical basis of HRA, which improves the risk-informed approach the NRC uses to regulate the U.S. commercial nuclear industry.

  3. ERRORS AND CORRECTION

    Institute of Scientific and Technical Information of China (English)

    1998-01-01

    To err is human. Since the 1960s, most second language teachers or language theorists have regarded errors as natural and inevitable in the language learning process. Instead of regarding them as terrible and disappointing, teachers have come to realize their value. This paper will consider these values, analyze some errors and propose some effective correction techniques.

  4. Re-Assessing Poverty Dynamics and State Protections in Britain and the US: The Role of Measurement Error

    Science.gov (United States)

    Worts, Diana; Sacker, Amanda; McDonough, Peggy

    2010-01-01

    This paper addresses a key methodological challenge in the modeling of individual poverty dynamics--the influence of measurement error. Taking the US and Britain as case studies and building on recent research that uses latent Markov models to reduce bias, we examine how measurement error can affect a range of important poverty estimates. Our data…

  6. Science, practice, and human errors in controlling Clostridium botulinum in heat-preserved food in hermetic containers.

    Science.gov (United States)

    Pflug, Irving J

    2010-05-01

    The incidence of botulism in canned food in the last century is reviewed along with the background science; a few conclusions are reached based on analysis of published data. There are two primary aspects to botulism control: the design of an adequate process and the delivery of the adequate process to containers of food. The probability that the designed process will not be adequate to control Clostridium botulinum is very small, probably less than 1.0 × 10⁻⁶, based on containers of food, whereas the failure of the operator of the processing equipment to deliver the specified process to containers of food may be of the order of 1 in 40 to 1 in 100, based on processing units (retort loads). In the commercial food canning industry, failure to deliver the process will probably be of the order of 1.0 × 10⁻⁴ to 1.0 × 10⁻⁶ when U.S. Food and Drug Administration (FDA) regulations are followed. Botulism incidents have occurred in food canning plants that have not followed the FDA regulations. It is possible but very rare to have botulism result from postprocessing contamination. It may thus be concluded that botulism incidents in canned food are primarily the result of human failure in the delivery of the designed or specified process to containers of food that, in turn, result in the survival, outgrowth, and toxin production of C. botulinum spores. Therefore, efforts in C. botulinum control should be concentrated on reducing human errors in the delivery of the specified process to containers of food.

  7. Who cares about consent requirements for sourcing human embryonic stem cells? Are errors in the past really errors of the past?

    Science.gov (United States)

    Krahn, Timothy M; Wallwork, Thomas E

    2011-01-01

    Through an Access to Information Act request, we have obtained the consent forms used by the providers of every human embryonic stem cell (hESC) line approved for use by the Canadian Institutes of Health Research (CIHR), and examined them to verify whether or not they meet the consent requirements established by Canadian law and regulations. Our findings show that at least seven out of ten consent forms studied did not satisfy these minimum requirements. We then outline various options for responding to this situation in terms of: (i) remedial measures for dealing with executive problems with regulatory oversight procedures; and (ii) remedial measures for dealing with the impugned lines.

  8. To Err Is Robot: How Humans Assess and Act toward an Erroneous Social Robot

    Directory of Open Access Journals (Sweden)

    Nicole Mirnig

    2017-05-01

    We conducted a user study for which we purposefully programmed faulty behavior into a robot's routine. It was our aim to explore whether participants rate the faulty robot differently from an error-free robot and which reactions people show in interaction with a faulty robot. The study was based on our previous research on robot errors where we detected typical error situations and the resulting social signals of our participants during social human–robot interaction. In contrast to our previous work, where we studied video material in which robot errors occurred unintentionally, in the herein reported user study, we purposefully elicited robot errors to further explore the human interaction partners' social signals following a robot error. Our participants interacted with a human-like NAO, and the robot either performed faulty or free from error. First, the robot asked the participants a set of predefined questions and then it asked them to complete a couple of LEGO building tasks. After the interaction, we asked the participants to rate the robot's anthropomorphism, likability, and perceived intelligence. We also interviewed the participants on their opinion about the interaction. Additionally, we video-coded the social signals the participants showed during their interaction with the robot as well as the answers they provided the robot with. Our results show that participants liked the faulty robot significantly better than the robot that interacted flawlessly. We did not find significant differences in people's ratings of the robot's anthropomorphism and perceived intelligence. The qualitative data confirmed the questionnaire results in showing that although the participants recognized the robot's mistakes, they did not necessarily reject the erroneous robot. The annotations of the video data further showed that gaze shifts (e.g., from an object to the robot or vice versa) and laughter are typical reactions to unexpected robot behavior

  9. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  10. Framework for Human Health Risk Assessment to Inform Decision Making

    Science.gov (United States)

    The purpose of this document is to describe a Framework for conducting human health risk assessments that are responsive to the needs of decision‐making processes in the U.S. Environmental Protection Agency (EPA).

  11. Graduate medical education in humanism and professionalism: a needs assessment survey of pediatric gastroenterology fellows.

    Science.gov (United States)

    Garvey, Katharine C; Kesselheim, Jennifer C; Herrick, Daniel B; Woolf, Alan D; Leichtner, Alan M

    2014-01-01

    The deterioration of humanism and professionalism during graduate medical training is an acknowledged concern, and programs are required to provide professionalism education for pediatric fellows. We conducted a needs assessment survey in a national sample of 138 first- and second-year gastroenterology fellows (82% response rate). Most believed that present humanism and professionalism education met their needs, but this education was largely informal (eg, role modeling). Areas for formal education desired by >70% included competing demands of clinical practice versus research, difficult doctor-patient relationships, depression/burnout, angry parents, medical errors, work-life balance, and the patient illness experience. These results may guide curricula to formalize humanism and professionalism education in pediatric gastroenterology fellowships.

  12. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    Energy Technology Data Exchange (ETDEWEB)

    Li, T. S. [et al.

    2016-05-27

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example. We define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes, when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the systematic chromatic errors caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane, can be up to 2% in some bandpasses. We compare the calculated systematic chromatic errors with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput. The residual after correction is less than 0.3%. We also find that the errors for non-stellar objects are redshift-dependent and can be larger than those for stars at certain redshifts.
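    The chromatic effect discussed above can be made concrete with synthetic photometry: an instrumental magnitude is an integral of the source spectrum weighted by the system response, so sources of different color shift by different amounts when the response changes shape. The sketch below uses toy spectra and a toy throughput perturbation (assumptions for illustration, not DES curves or the survey's calibration procedure).

```python
# Toy illustration of a source-color-dependent (chromatic) systematic error:
# when the throughput S(lambda) changes shape, blue and red sources shift by
# different amounts in instrumental magnitude, so a gray zero-point correction
# cannot remove the residual. Spectra and curves are arbitrary assumptions.
import numpy as np

wl = np.linspace(4000.0, 5500.0, 500)  # wavelength grid in Angstrom (roughly g band)

def inst_mag(flux_lambda, throughput):
    """Instrumental magnitude for a photon-counting detector (arbitrary zero point)."""
    photon_rate = np.trapz(flux_lambda * throughput * wl, wl)
    return -2.5 * np.log10(photon_rate)

# Toy spectra: a blue source (more flux at short wavelengths) and a red source
blue = 1.0 - 0.5 * (wl - wl.mean()) / (wl.max() - wl.min())
red = 1.0 + 0.5 * (wl - wl.mean()) / (wl.max() - wl.min())

# Nominal flat throughput vs. a tilted one (e.g. extra wavelength-dependent extinction)
nominal = np.ones_like(wl)
tilted = np.linspace(1.05, 0.95, wl.size)

for name, spec in (("blue", blue), ("red", red)):
    dm = inst_mag(spec, tilted) - inst_mag(spec, nominal)
    print(f"{name} source: magnitude shift = {dm * 1000:+.1f} mmag")
# The two shifts differ; that difference is the chromatic error left over after
# a color-independent (gray) zero-point calibration.
```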

  13. Sources of error in the estimation of mosquito infection rates used to assess risk of arbovirus transmission.

    Science.gov (United States)

    Bustamante, Dulce M; Lord, Cynthia C

    2010-06-01

    Infection rate is an estimate of the prevalence of arbovirus infection in a mosquito population. It is assumed that when infection rate increases, the risk of arbovirus transmission to humans and animals also increases. We examined some of the factors that can invalidate this assumption. First, we used a model to illustrate how the proportion of mosquitoes capable of virus transmission, or infectious, is not a constant fraction of the number of infected mosquitoes. Thus, infection rate is not always a straightforward indicator of risk. Second, we used a model that simulated the process of mosquito sampling, pooling, and virus testing and found that mosquito infection rates commonly underestimate the prevalence of arbovirus infection in a mosquito population. Infection rate should always be used in conjunction with other surveillance indicators (mosquito population size, age structure, weather) and historical baseline data when assessing the risk of arbovirus transmission.
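    One concrete way such estimates can mislead comes from pooled testing, where the infection rate is back-calculated from the fraction of positive pools. The sketch below simulates sampling, pooling, and testing with a simple binomial model and compares the minimum infection rate (MIR) with the usual maximum-likelihood estimator for equal pool sizes; the model and parameter values are illustrative assumptions, not the simulation used in the study.

```python
# Sketch of how pooling affects infection-rate estimates (assumed illustration).
# Mosquitoes are sampled with true prevalence p and tested in pools of size m;
# a pool tests positive if it contains at least one infected mosquito.
import numpy as np

rng = np.random.default_rng(1)

p_true, pool_size, n_pools, n_reps = 0.02, 50, 40, 2000
mir_estimates, mle_estimates = [], []

for _ in range(n_reps):
    infected = rng.random((n_pools, pool_size)) < p_true
    positive_pools = infected.any(axis=1).sum()

    # Minimum infection rate: positive pools per mosquito tested
    mir_estimates.append(positive_pools / (n_pools * pool_size))
    # Maximum-likelihood estimate of prevalence for equal pool sizes
    mle_estimates.append(1.0 - (1.0 - positive_pools / n_pools) ** (1.0 / pool_size))

print(f"true prevalence   : {p_true:.4f}")
print(f"mean MIR estimate : {np.mean(mir_estimates):.4f}")  # underestimates p
print(f"mean MLE estimate : {np.mean(mle_estimates):.4f}")
```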

  14. A Model of Image Quality Assessment Based on Wavelet Second Coefficient Error

    Institute of Scientific and Technical Information of China (English)

    郑江云; 江巨浪

    2012-01-01

    The goal of quality assessment research is to design algorithms that evaluate image quality objectively in a way that is consistent with subjective human evaluation. Because the human visual system (HVS) has different sensitivities in different frequency bands, the low-frequency coefficient error and the high-frequency coefficient error are calculated by different methods, and the product of the two errors is adopted as the objective quality score. The new method is validated against subjective quality scores on the LIVE database, which contains 982 images. Experimental results show that the new method evaluates images consistently across distortion types and strengths, and that its linear correlation with the difference mean opinion scores (DMOS) on the LIVE database is better than that of the PSNR, LMSE, and SSIM algorithms.
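    The structure of the model, a low-frequency error and a high-frequency error obtained from a wavelet decomposition and multiplied into one score, can be sketched with a one-level Haar transform. The specific error measures below (mean squared differences of the approximation and detail subbands) are assumptions for illustration, not the paper's exact formulation.

```python
# Illustrative sketch of a wavelet-domain quality measure: a low-frequency
# (approximation) error and a high-frequency (detail) error are computed from a
# one-level Haar decomposition and multiplied into one score. The error
# definitions are assumptions, not the paper's exact formulas.
import numpy as np

def haar_level1(img):
    """One-level 2-D Haar transform; returns (approximation, detail) subbands."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    h = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4.0
    v = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4.0
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4.0
    return a, (h, v, d)

def wavelet_quality_score(reference, distorted):
    ref_a, ref_det = haar_level1(reference)
    dis_a, dis_det = haar_level1(distorted)
    low_err = np.mean((ref_a - dis_a) ** 2)
    high_err = np.mean([np.mean((r - d) ** 2) for r, d in zip(ref_det, dis_det)])
    return low_err * high_err  # larger value = more visible distortion

# Example with synthetic data
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
dist = ref + 0.05 * rng.standard_normal((64, 64))
print(wavelet_quality_score(ref, dist))
```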

  15. Unavoidable Human Errors of Tumor Size Measurement during Specimen Attachment after Endoscopic Resection: A Clinical Prospective Study

    Science.gov (United States)

    Mori, Hirohito; Kobara, Hideki; Tsushimi, Takaaki; Nishiyama, Noriko; Fujihara, Shintaro; Masaki, Tsutomu

    2015-01-01

    Objective: Objective evaluation of resected specimen and tumor size is critical because the tumor diameter after endoscopic submucosal dissection affects therapeutic strategies. In this study, we investigated whether the true tumor diameter of gastrointestinal cancer specimens measured by flexible endoscopy is subjective, by testing whether the specimen is correctly attached to the specimen board after endoscopic submucosal dissection and whether the measured size differs depending on the endoscopist who attached the specimen. Methods: Seventy-two patients diagnosed with early gastric cancer who satisfied the endoscopic submucosal dissection expanded-indication guideline were enrolled. Three endoscopists were randomly selected before every endoscopic submucosal dissection. Each endoscopist separately attached the same resected specimen, measured the maximum resection diameter and tumor size, and removed the lesion from the attachment board. Results: The resected specimen diameters recorded by the 3 endoscopists were 44.5±13.9 mm (95% confidence interval (CI): 23–67), 37.4±12.0 mm (95% CI: 18–60), and 41.1±13.3 mm (95% CI: 20–63); the differences among the 3 groups were significant (Kruskal-Wallis H-test, H = 6.397, P = 0.040). The recorded tumor sizes were 38.3±13.1 mm (95% CI: 16–67), 31.1±11.2 mm (95% CI: 12.5–53.3), and 34.8±12.8 mm (95% CI: 11.5–62.3); these differences were also significant (H = 6.917, P = 0.031). Conclusions: Human errors in the size of attached resected specimens are unavoidable, but they cannot be ignored because they affect the patient's additional treatment and/or surgical intervention. We must develop a more precise methodology to obtain accurate tumor size. Trial Registration: University hospital Medical Information Network UMIN No. 000012915 PMID:25856397

  16. Ambulatory assessment of human body kinematics and kinetics

    NARCIS (Netherlands)

    Schepers, H. Martin

    2009-01-01

    Traditional human movement analysis systems consist of an optical position measurement system with one or more 6D force plates mounted in a laboratory. Although clinically accepted as 'the golden standard' for the assessment of human movement, the restriction to a laboratory environment with its

  17. Perspectives for integrating human and environmental exposure assessments.

    Science.gov (United States)

    Ciffroy, P; Péry, A R R; Roth, N

    2016-10-15

    Integrated Risk Assessment (IRA) has been defined by the EU FP7 HEROIC Coordination action as "the mutual exploitation of Environmental Risk Assessment for Human Health Risk Assessment and vice versa in order to coherently and more efficiently characterize an overall risk to humans and the environment for better informing the risk analysis process" (Wilks et al., 2015). Since exposure assessment and hazard characterization are the pillars of risk assessment, integrating Environmental Exposure assessment (EEA) and Human Exposure assessment (HEA) is a major component of an IRA framework. EEA and HEA typically pursue different targets, protection goals and timeframe. However, human and wildlife species also share the same environment and they similarly inhale air and ingest water and food through often similar overlapping pathways of exposure. Fate models used in EEA and HEA to predict the chemicals distribution among physical and biological media are essentially based on common properties of chemicals, and internal concentration estimations are largely based on inter-species (i.e. biota-to-human) extrapolations. Also, both EEA and HEA are challenged by increasing scientific complexity and resources constraints. Altogether, these points create the need for a better exploitation of all currently existing data, experimental approaches and modeling tools and it is assumed that a more integrated approach of both EEA and HEA may be part of the solution. Based on the outcome of an Expert Workshop on Extrapolations in Integrated Exposure Assessment organized by the HEROIC project in January 2014, this paper identifies perspectives and recommendations to better harmonize and extrapolate exposure assessment data, models and methods between Human Health and Environmental Risk Assessments to support the further development and promotion of the concept of IRA. Ultimately, these recommendations may feed into guidance showing when and how to apply IRA in the regulatory decision

  18. Method for correlation analysis between scenario and human error

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 宫二玲; 谢红卫

    2011-01-01

    A new method is proposed to analyze the correlation between scenario and human error. The scenario is decomposed into six aspects: operator, machine, task, organization, environment, and assistant devices. Based on this decomposition, a taxonomy of performance shaping factors is constructed, which includes thirty-eight items and provides a reference template for the investigation of human error causes. Based on the skill-based, rule-based and knowledge-based (SRK) model, the slip/lapse/mistake framework is introduced to classify human errors, which are categorized as skill-based slip and lapse, rule-based slip and mistake, and knowledge-based mistake. Grey relational analysis is introduced to analyze the correlation between performance shaping factors and human error types, in which the correlations of "consequent-antecedent" and "antecedent-consequent" are both analyzed. With this method, the performance shaping factors related to a specified human error type, and the human error types caused by a specified performance shaping factor, can both be sorted according to their correlation degrees. A case study shows that the proposed method is applicable to analyzing the correlation between scenario and human error, and can provide important implications for human error prediction and reduction.
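    Grey relational analysis, the technique named above, scores how closely each comparison series tracks a reference series and ranks the comparisons by that score. The sketch below implements the standard grey relational coefficient and degree with the usual distinguishing coefficient rho = 0.5; the example matrix is hypothetical, not data from the paper.

```python
# Sketch of grey relational analysis used to rank "antecedent -> consequent"
# associations. The reference and comparison series below are hypothetical
# counts (e.g. how often each performance shaping factor co-occurs with one
# human error type across incidents), not data from the paper.
import numpy as np

def grey_relational_degree(reference, comparisons, rho=0.5):
    """Grey relational degree of each comparison series w.r.t. the reference."""
    ref = np.asarray(reference, dtype=float)
    comp = np.asarray(comparisons, dtype=float)
    # Normalize each column to [0, 1] to remove scale effects
    all_series = np.vstack([ref, comp])
    lo, hi = all_series.min(axis=0), all_series.max(axis=0)
    norm = (all_series - lo) / np.where(hi > lo, hi - lo, 1.0)
    ref_n, comp_n = norm[0], norm[1:]

    delta = np.abs(comp_n - ref_n)                 # absolute differences
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)                      # grey relational degree per series

# Hypothetical: reference = occurrence profile of one error type across 5 incidents;
# comparisons = occurrence profiles of three performance shaping factors.
reference = [5, 3, 4, 2, 6]
factors = [[4, 3, 5, 2, 6],
           [1, 6, 2, 5, 1],
           [5, 2, 4, 3, 5]]
print(grey_relational_degree(reference, factors))  # higher = stronger association
```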

  19. Human Health Effects, Task Force Assessment, Preliminary Report.

    Science.gov (United States)

    Aronow, Wilbert S.; And Others

    Presented in this preliminary report is one of seven assessments conducted by a special task force of Project Clean Air, the Human Health Effects Task Force. The reports summarize assessments of the state of knowledge on various air pollution problems, particularly in California, and make tentative recommendations as to what the University of…

  20. Safety assessment of probiotics for human use

    Science.gov (United States)

    Akkermans, Louis MA; Haller, Dirk; Hammerman, Cathy; Heimbach, James; Hörmannsperger, Gabriele; Huys, Geert; Levy, Dan D; Lutgendorff, Femke; Mack, David; Phothirath, Phoukham; Solano-Aguilar, Gloria; Vaughan, Elaine

    2010-01-01

    The safety of probiotics is tied to their intended use, which includes consideration of potential vulnerability of the consumer or patient, dose and duration of consumption, and both the manner and frequency of administration. Unique to probiotics is that they are alive when administered, and unlike other food or drug ingredients, possess the potential for infectivity or in situ toxin production. Since numerous types of microbes are used as probiotics, safety is also intricately tied to the nature of the specific microbe being used. The presence of transferable antibiotic resistance genes, which comprises a theoretical risk of transfer to a less innocuous member of the gut microbial community, must also be considered. Genetic stability of the probiotic over time, deleterious metabolic activities, and the potential for pathogenicity or toxicogenicity must be assessed depending on the characteristics of the genus and species of the microbe being used. Immunological effects must be considered, especially in certain vulnerable populations, including infants with undeveloped immune function. A few reports about negative probiotic effects have surfaced, the significance of which would be better understood with more complete understanding of the mechanisms of probiotic interaction with the host and colonizing microbes. Use of readily available and low cost genomic sequencing technologies to assure the absence of genes of concern is advisable for candidate probiotic strains. The field of probiotic safety is characterized by the scarcity of studies specifically designed to assess safety contrasted with the long history of safe use of many of these microbes in foods. PMID:21327023

  1. Assessment of the knowledge and attitudes of intern doctors to medication prescribing errors in a Nigeria tertiary hospital.

    Science.gov (United States)

    Ajemigbitse, Adetutu A; Omole, Moses Kayode; Ezike, Nnamdi Chika; Erhun, Wilson O

    2013-12-01

    Junior doctors are reported to make most of the prescribing errors in the hospital setting. The aim of the following study is to determine the knowledge intern doctors have about prescribing errors and the circumstances that contribute to making them. A structured questionnaire was distributed to intern doctors in National Hospital Abuja, Nigeria. Respondents gave information about their experience with prescribing medicines, the extent to which they agreed with the definition of a clinically meaningful prescribing error, and events that constituted such an error. Their experience with prescribing certain categories of medicines was also sought. Data were analyzed with Statistical Package for the Social Sciences (SPSS) software version 17 (SPSS Inc., Chicago, Ill, USA), and chi-squared analysis was used to contrast differences in proportions. Interns were least confident prescribing antibiotics (12, 25.5%), opioid analgesics (12, 25.5%), cytotoxics (10, 21.3%), and antipsychotics (9, 19.1%) unsupervised. Respondents seemed to have a low awareness of making prescribing errors. Principles of rational prescribing and events that constitute prescribing errors should be taught in the practice setting.

  2. Assessing human health risk in the USDA forest service

    Energy Technology Data Exchange (ETDEWEB)

    Hamel, D.R. [Department of Agriculture-Forest Service, Washington, DC (United States)

    1990-12-31

    This paper identifies the kinds of risk assessments being done by or for the US Department of Agriculture (USDA) Forest Service. Summaries of data sources currently in use and the pesticide risk assessments completed by the agency or its contractors are discussed. An overview is provided of the agency's standard operating procedures for the conduct of toxicological, ecological, environmental fate, and human health risk assessments.

  3. EMG versus torque control of human-machine systems: equalizing control signal variability does not equalize error or uncertainty.

    Science.gov (United States)

    Johnson, Reva E; Koerding, Konrad P; Hargrove, Levi J; Sensinger, Jonathon W

    2016-08-25

    In this paper we asked the question: if we artificially raise the variability of torque control signals to match that of EMG, do subjects make similar errors and have similar uncertainty about their movements? We answered this question using two experiments in which subjects used three different control signals: torque, torque+noise, and EMG. First, we measured error on a simple target-hitting task in which subjects received visual feedback only at the end of their movements. We found that even when the signal-to-noise ratio was equal across EMG and torque+noise control signals, EMG resulted in larger errors. Second, we quantified uncertainty by measuring the just-noticeable difference of a visual perturbation. We found that for equal errors, EMG resulted in higher movement uncertainty than both torque and torque+noise. The differences suggest that performance and confidence are influenced by more than just the noisiness of the control signal, and suggest that other factors, such as the user's ability to incorporate feedback and develop accurate internal models, also have significant impacts on the performance and confidence of a person's actions. We theorize that users have difficulty distinguishing between random and systematic errors for EMG control, and future work should examine in more detail the types of errors made with EMG control.

  4. Previous estimates of mitochondrial DNA mutation level variance did not account for sampling error: comparing the mtDNA genetic bottleneck in mice and humans.

    Science.gov (United States)

    Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C

    2010-04-09

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
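    The error bar discussed in the abstract has a simple closed form if the underlying measurements are roughly normal: SE(s²) = s²·√(2/(n−1)). The sketch below uses that normal-theory approximation (an assumption; the paper's own derivation may differ in detail) to show why variance estimates from fewer than about 20 samples are unreliable.

```python
# Normal-theory error bar for a sample variance: SE(s^2) = s^2 * sqrt(2 / (n - 1)).
# This is a standard approximation for roughly normal data; the paper's own
# derivation may differ in detail, but the qualitative message is the same.
import numpy as np

def variance_with_error_bar(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s2 = x.var(ddof=1)                    # unbiased sample variance
    se = s2 * np.sqrt(2.0 / (n - 1))      # approximate standard error of the variance
    return s2, se

rng = np.random.default_rng(0)
for n in (10, 20, 50, 200):
    s2, se = variance_with_error_bar(rng.normal(0.0, 1.0, n))
    print(f"n={n:4d}: s^2 = {s2:.3f} +/- {se:.3f}  (relative error {se / s2:.0%})")
```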

  5. Assessing data assimilation and model boundary error strategies for high resolution ocean model downscaling in the Northern North Sea

    Science.gov (United States)

    Sandvig Mariegaard, Jesper; Huiban, Méven Robin; Tornfeldt Sørensen, Jacob; Andersson, Henrik

    2017-04-01

    The success of the downscaling is to a large degree determined by the ability to realistically describe and dynamically model the errors on the open boundaries. Three different sizes of downscaling model domains in the Northern North Sea have been examined and two different strategies for modelling the uncertainties on the open Flather boundaries are investigated. The combined downscaling and local data assimilation skill is assessed and the impact on recommended domain size is compared to pure downscaling.

  6. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-Area Sky Surveys

    CERN Document Server

    Li, T S; Marshall, J L; Tucker, D; Kessler, R; Annis, J; Bernstein, G M; Boada, S; Burke, D L; Finley, D A; James, D J; Kent, S; Lin, H; Marriner, J; Mondrik, N; Nagasawa, D; Rykoff, E S; Scolnic, D; Walker, A R; Wester, W; Abbott, T M C; Allam, S; Benoit-Lévy, A; Bertin, E; Brooks, D; Capozzi, D; Rosell, A Carnero; Kind, M Carrasco; Carretero, J; Crocce, M; Cunha, C E; D'Andrea, C B; da Costa, L N; Desai, S; Diehl, H T; Doel, P; Flaugher, B; Fosalba, P; Frieman, J; Gaztanaga, E; Goldstein, D A; Gruen, D; Gruendl, R A; Gutierrez, G; Honscheid, K; Kuehn, K; Kuropatkin, N; Maia, M A G; Melchior, P; Miller, C J; Miquel, R; Mohr, J J; Neilsen, E; Nichol, R C; Nord, B; Ogando, R; Plazas, A A; Romer, A K; Roodman, A; Sako, M; Sanchez, E; Scarpine, V; Schubnell, M; Sevilla-Noarbe, I; Smith, R C; Soares-Santos, M; Sobreira, F; Suchyta, E; Tarle, G; Thomas, D; Vikram, V

    2016-01-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is stable in time and uniform over the sky to 1% precision or better. Past surveys have achieved photometric precision of 1-2% by calibrating the survey's stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors using photometry from the Dark Energy Survey (DES) as an example...

  7. Reporting of Human Genome Epidemiology (HuGE) association studies: An empirical assessment

    Directory of Open Access Journals (Sweden)

    Gwinn Marta

    2008-05-01

    Full Text Available Abstract Background Several thousand human genome epidemiology association studies are published every year investigating the relationship between common genetic variants and diverse phenotypes. Transparent reporting of study methods and results allows readers to better assess the validity of study findings. Here, we document reporting practices of human genome epidemiology studies. Methods Articles were randomly selected from a continuously updated database of human genome epidemiology association studies to be representative of genetic epidemiology literature. The main analysis evaluated 315 articles published in 2001–2003. For a comparative update, we evaluated 28 more recent articles published in 2006, focusing on issues that were poorly reported in 2001–2003. Results During both time periods, most studies comprised relatively small study populations and examined one or more genetic variants within a single gene. Articles were inconsistent in reporting the data needed to assess selection bias and the methods used to minimize misclassification (of the genotype, outcome, and environmental exposure) or to identify population stratification. Statistical power, the use of unrelated study participants, and the use of replicate samples were reported more often in articles published during 2006 when compared with the earlier sample. Conclusion We conclude that many items needed to assess error and bias in human genome epidemiology association studies are not consistently reported. Although some improvements were seen over time, reporting guidelines and online supplemental material may help enhance the transparency of this literature.

  8. Considering the role of time budgets on copy-error rates in material culture traditions: an experimental assessment.

    Science.gov (United States)

    Schillinger, Kerstin; Mesoudi, Alex; Lycett, Stephen J

    2014-01-01

    Ethnographic research highlights that there are constraints placed on the time available to produce cultural artefacts in differing circumstances. Given that copying error, or cultural 'mutation', can have important implications for the evolutionary processes involved in material culture change, it is essential to explore empirically how such 'time constraints' affect patterns of artefactual variation. Here, we report an experiment that systematically tests whether, and how, varying time constraints affect shape copying error rates. A total of 90 participants copied the shape of a 3D 'target handaxe form' using a standardized foam block and a plastic knife. Three distinct 'time conditions' were examined, whereupon participants had either 20, 15, or 10 minutes to complete the task. One aim of this study was to determine whether reducing production time produced a proportional increase in copy error rates across all conditions, or whether the concept of a task specific 'threshold' might be a more appropriate manner to model the effect of time budgets on copy-error rates. We found that mean levels of shape copying error increased when production time was reduced. However, there were no statistically significant differences between the 20 minute and 15 minute conditions. Significant differences were only obtained between conditions when production time was reduced to 10 minutes. Hence, our results more strongly support the hypothesis that the effects of time constraints on copying error are best modelled according to a 'threshold' effect, below which mutation rates increase more markedly. Our results also suggest that 'time budgets' available in the past will have generated varying patterns of shape variation, potentially affecting spatial and temporal trends seen in the archaeological record. Hence, 'time-budgeting' factors need to be given greater consideration in evolutionary models of material culture change.

  9. Assessment of Systematic Chromatic Errors that Impact Sub-1% Photometric Precision in Large-area Sky Surveys

    Science.gov (United States)

    Li, T. S.; DePoy, D. L.; Marshall, J. L.; Tucker, D.; Kessler, R.; Annis, J.; Bernstein, G. M.; Boada, S.; Burke, D. L.; Finley, D. A.; James, D. J.; Kent, S.; Lin, H.; Marriner, J.; Mondrik, N.; Nagasawa, D.; Rykoff, E. S.; Scolnic, D.; Walker, A. R.; Wester, W.; Abbott, T. M. C.; Allam, S.; Benoit-Lévy, A.; Bertin, E.; Brooks, D.; Capozzi, D.; Carnero Rosell, A.; Carrasco Kind, M.; Carretero, J.; Crocce, M.; Cunha, C. E.; D'Andrea, C. B.; da Costa, L. N.; Desai, S.; Diehl, H. T.; Doel, P.; Flaugher, B.; Fosalba, P.; Frieman, J.; Gaztanaga, E.; Goldstein, D. A.; Gruen, D.; Gruendl, R. A.; Gutierrez, G.; Honscheid, K.; Kuehn, K.; Kuropatkin, N.; Maia, M. A. G.; Melchior, P.; Miller, C. J.; Miquel, R.; Mohr, J. J.; Neilsen, E.; Nichol, R. C.; Nord, B.; Ogando, R.; Plazas, A. A.; Romer, A. K.; Roodman, A.; Sako, M.; Sanchez, E.; Scarpine, V.; Schubnell, M.; Sevilla-Noarbe, I.; Smith, R. C.; Soares-Santos, M.; Sobreira, F.; Suchyta, E.; Tarle, G.; Thomas, D.; Vikram, V.; DES Collaboration

    2016-06-01

    Meeting the science goals for many current and future ground-based optical large-area sky surveys requires that the calibrated broadband photometry is both stable in time and uniform over the sky to 1% precision or better. Past and current surveys have achieved photometric precision of 1%-2% by calibrating the survey’s stellar photometry with repeated measurements of a large number of stars observed in multiple epochs. The calibration techniques employed by these surveys only consider the relative frame-by-frame photometric zeropoint offset and the focal plane position-dependent illumination corrections, which are independent of the source color. However, variations in the wavelength dependence of the atmospheric transmission and the instrumental throughput induce source color-dependent systematic errors. These systematic errors must also be considered to achieve the most precise photometric measurements. In this paper, we examine such systematic chromatic errors (SCEs) using photometry from the Dark Energy Survey (DES) as an example. We first define a natural magnitude system for DES and calculate the systematic errors on stellar magnitudes when the atmospheric transmission and instrumental throughput deviate from the natural system. We conclude that the SCEs caused by the change of airmass in each exposure, the change of the precipitable water vapor and aerosol in the atmosphere over time, and the non-uniformity of instrumental throughput over the focal plane can be up to 2% in some bandpasses. We then compare the calculated SCEs with the observed DES data. For the test sample data, we correct these errors using measurements of the atmospheric transmission and instrumental throughput from auxiliary calibration systems. The residual after correction is less than 0.3%. Moreover, we calculate such SCEs for Type Ia supernovae and elliptical galaxies and find that the chromatic errors for non-stellar objects are redshift-dependent and can be larger than those for
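
    The chromatic error described above can be illustrated with simple synthetic photometry: integrate a source spectrum through the as-observed passband and through the survey's natural (reference) passband, and compare the resulting magnitude offset with that of a reference source. The sketch below is a generic illustration, not the DES calibration code; the function names and the photon-counting, AB-like magnitude convention are assumptions.

        import numpy as np

        def synth_mag(wavelength_nm, flux, throughput):
            # AB-like synthetic magnitude for a photon-counting detector (illustrative normalisation;
            # constant zero-point terms cancel when magnitude differences are taken below).
            num = np.trapz(flux * throughput * wavelength_nm, wavelength_nm)
            den = np.trapz(throughput * wavelength_nm, wavelength_nm)
            return -2.5 * np.log10(num / den)

        def chromatic_offset(wavelength_nm, source_flux, reference_flux, observed_passband, natural_passband):
            # Colour-dependent zero-point shift between the as-observed passband and the survey's
            # natural passband, relative to a reference source (e.g. a flat-spectrum calibrator).
            d_source = (synth_mag(wavelength_nm, source_flux, observed_passband)
                        - synth_mag(wavelength_nm, source_flux, natural_passband))
            d_reference = (synth_mag(wavelength_nm, reference_flux, observed_passband)
                           - synth_mag(wavelength_nm, reference_flux, natural_passband))
            return d_source - d_reference  # magnitudes; non-zero only when the source colour differs from the reference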

  10. Human monitoring of phthalates and risk assessment.

    Science.gov (United States)

    Koo, Hyun Jung; Lee, Byung Mu

    2005-08-27

    Some phthalates, such as di(2-ethylhexyl) phthalate (DEHP) and dibutyl phthalate (DBP), and their metabolites are suspected of producing teratogenic and endocrine-disrupting effects. In this study, urinary levels of phthalates (DEHP, DBP, diethyl phthalate (DEP) and butylbenzyl phthalate (BBP)) and of monoethylhexyl phthalate (MEHP, a major metabolite of DEHP) were measured by high performance liquid chromatography (HPLC) in human populations (women [hospital visitors], n = 150, and children, n = 150). The daily exposure level of DEHP in children was estimated to be 12.4 microg/kg body weight/d (males 9.9, females 17.8 microg/kg body weight/d), whereas in women it was estimated to be 41.7 microg/kg body weight/d, which exceeds the tolerable daily intake (TDI, 37 microg/kg body weight/d) established by the European Union (EU) Scientific Committee for Toxicity, Ecotoxicity and the Environment (SCTEE) on the basis of reproductive toxicity. Based on these data, hazard indices (HIs) were calculated to be 1.12 (41.7/37 TDI) for women and 0.33 (12.4/37 TDI) for children. These data suggest that Koreans (women and children) were exposed to significant levels of phthalates, which should be reduced to as low a level as technologically feasible to protect them from exposure to toxic phthalates.
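
    The hazard indices quoted above follow directly from dividing the estimated daily intake by the tolerable daily intake; a minimal sketch using the paper's figures (small differences from the published values are due to rounding):

        def hazard_index(daily_intake_ug_per_kg_d, tdi_ug_per_kg_d=37.0):
            # Hazard index = estimated daily intake / tolerable daily intake (EU SCTEE TDI for DEHP).
            return daily_intake_ug_per_kg_d / tdi_ug_per_kg_d

        print(hazard_index(41.7))  # women: ~1.13 (reported as 1.12)
        print(hazard_index(12.4))  # children: ~0.34 (reported as 0.33)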

  11. 76 FR 39399 - Chlorpyrifos Registration Review; Preliminary Human Health Risk Assessment; Notice of Availability

    Science.gov (United States)

    2011-07-06

    ... AGENCY Chlorpyrifos Registration Review; Preliminary Human Health Risk Assessment; Notice of Availability... availability of EPA's preliminary human health risk assessment for the registration review of chlorpyrifos and... comprehensive preliminary human health risk assessment for all chlorpyrifos uses. After reviewing comments...

  12. 76 FR 52945 - Chlorpyrifos Registration Review; Preliminary Human Health Risk Assessment; Extension of Comment...

    Science.gov (United States)

    2011-08-24

    ... AGENCY Chlorpyrifos Registration Review; Preliminary Human Health Risk Assessment; Extension of Comment... availability of the chlorpyrifos registration review; preliminary human health risk assessment. This document... for the chlorpyrifos reregistration review, preliminary human health risk assessment, established in...

  13. Effect of Transducer Orientation on Errors in Ultrasound Image-Based Measurements of Human Medial Gastrocnemius Muscle Fascicle Length and Pennation.

    Science.gov (United States)

    Bolsterlee, Bart; Gandevia, Simon C; Herbert, Robert D

    2016-01-01

    Ultrasound imaging is often used to measure muscle fascicle lengths and pennation angles in human muscles in vivo. Theoretically the most accurate measurements are made when the transducer is oriented so that the image plane aligns with muscle fascicles and, for measurements of pennation, when the image plane also intersects the aponeuroses perpendicularly. However this orientation is difficult to achieve and usually there is some degree of misalignment. Here, we used simulated ultrasound images based on three-dimensional models of the human medial gastrocnemius, derived from magnetic resonance and diffusion tensor images, to describe the relationship between transducer orientation and measurement errors. With the transducer oriented perpendicular to the surface of the leg, the error in measurement of fascicle lengths was about 0.4 mm per degree of misalignment of the ultrasound image with the muscle fascicles. If the transducer is then tipped by 20°, the error increases to 1.1 mm per degree of misalignment. For a given degree of misalignment of muscle fascicles with the image plane, the smallest absolute error in fascicle length measurements occurs when the transducer is held perpendicular to the surface of the leg. Misalignment of the transducer with the fascicles may cause fascicle length measurements to be underestimated or overestimated. Contrary to widely held beliefs, it is shown that pennation angles are always overestimated if the image is not perpendicular to the aponeurosis, even when the image is perfectly aligned with the fascicles. An analytical explanation is provided for this finding.

  14. Assessing global transitions in human development and colorectal cancer incidence.

    Science.gov (United States)

    Fidler, Miranda M; Bray, Freddie; Vaccarella, Salvatore; Soerjomataram, Isabelle

    2017-06-15

    Colorectal cancer incidence has paralleled increases in human development across most countries. Yet, marked decreases in incidence are now observed in countries that have attained very high human development. Thus, in this study, we explored the relationship between human development and colorectal cancer incidence, and in particular assessed whether national transitions to very high human development are linked to temporal patterns in colorectal cancer incidence. For these analyses, we utilized the Human Development Index (HDI) and annual incidence data from regional and national cancer registries. Truncated (30-74 years) age-standardized incidence rates were calculated. Yearly incidence rate ratios and HDI ratios, before and after transitioning to very high human development, were also estimated. Among the 29 countries investigated, colorectal cancer incidence was observed to decrease after reaching the very high human development threshold for 12 countries; decreases were also observed in a further five countries, but the age-standardized incidence rates remained higher than that observed at the threshold. Such declines or stabilizations are likely due to colorectal cancer screening in some populations, as well as varying levels of exposure to protective factors. In summary, it appears that there is a threshold at which human development predicts a stabilization or decline in colorectal cancer incidence, though this pattern was not observed for all countries assessed. Future cancer planning must consider the increasing colorectal cancer burden expected in countries transitioning towards higher levels of human development, as well as possible declines in incidence among countries reaching the highest development level. © 2017 UICC.

  15. Measurement errors in the assessment of exposure to solar ultraviolet radiation and its impact on risk estimates in epidemiological studies.

    Science.gov (United States)

    Dadvand, Payam; Basagaña, Xavier; Barrera-Gómez, Jose; Diffey, Brian; Nieuwenhuijsen, Mark

    2011-07-01

    To date, many studies addressing long-term effects of ultraviolet radiation (UVR) exposure on human health have relied on a range of surrogates such as the latitude of the city of residence, ambient UVR levels, or time spent outdoors to estimate personal UVR exposure. This study aimed to differentiate the contributions of personal behaviour and ambient UVR levels on facial UVR exposure and to evaluate the impact of using UVR exposure surrogates on detecting exposure-outcome associations. Data on time-activity, holiday behaviour, and ambient UVR levels were obtained for adult (aged 25-55 years old) indoor workers in six European cities: Athens (37°N), Grenoble (45°N), Milan (45°N), Prague (50°N), Oxford (52°N), and Helsinki (60°N). Annual UVR facial exposure levels were simulated for 10,000 subjects for each city, using a behavioural UVR exposure model. Within-city variations of facial UVR exposure were three times larger than the variation between cities, mainly because of time-activity patterns. In univariate models, ambient UVR levels, latitude and time spent outdoors, each accounted for less than one fourth of the variation in facial exposure levels. Use of these surrogates to assess long-term exposure to UVR resulted in requiring more than four times more participants to achieve similar statistical power to the study that applied simulated facial exposure. Our results emphasise the importance of integrating both personal behaviour and ambient UVR levels/latitude in exposure assessment methodologies.

  16. Proofreading for word errors.

    Science.gov (United States)

    Pilotti, Maura; Chodorow, Martin; Agpawa, Ian; Krajniak, Marta; Mahamane, Salif

    2012-04-01

    Proofreading (i.e., reading text for the purpose of detecting and correcting typographical errors) is viewed as a component of the activity of revising text and thus is a necessary (albeit not sufficient) procedural step for enhancing the quality of a written product. The purpose of the present research was to test competing accounts of word-error detection which predict factors that may influence reading and proofreading differently. Word errors, which change a word into another word (e.g., from --> form), were selected for examination because they are unlikely to be detected by automatic spell-checking functions. Consequently, their detection still rests mostly in the hands of the human proofreader. Findings highlighted the weaknesses of existing accounts of proofreading and identified factors, such as length and frequency of the error in the English language relative to frequency of the correct word, which might play a key role in detection of word errors.

  17. Logic and human reasoning: an assessment of the deduction paradigm.

    Science.gov (United States)

    Evans, Jonathan St B T

    2002-11-01

    The study of deductive reasoning has been a major paradigm in psychology for approximately the past 40 years. Research has shown that people make many logical errors on such tasks and are strongly influenced by problem content and context. It is argued that this paradigm was developed in a context of logicist thinking that is now outmoded. Few reasoning researchers still believe that logic is an appropriate normative system for most human reasoning, let alone a model for describing the process of human reasoning, and many use the paradigm principally to study pragmatic and probabilistic processes. It is suggested that the methods used for studying reasoning be reviewed, especially the instructional context, which necessarily defines pragmatic influences as biases.

  18. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  19. Analysis of human error in occupational accidents in the power plant industries using combining innovative FTA and meta-heuristic algorithms

    Directory of Open Access Journals (Sweden)

    M. Omidvari

    2015-09-01

    Full Text Available Introduction: Occupational accidents are among the main issues in industry, and identifying their root causes is necessary for their control. Several models have been proposed for determining accident root causes; FTA is one of the most widely used, as it can establish the root causes of accidents graphically. Non-linear functions are one of the main challenges in carrying out FTA, and meta-heuristic algorithms can be used to obtain exact values. Material and Method: The present research was carried out in the power plant industry during the construction phase. A model for the analysis of human error in work-related accidents was developed by combining neural network algorithms with the FTA analytical model, and this model was then used to determine the potential contribution of each cause. Result: Training, age, and non-compliance with safety principles in the workplace were the most important factors influencing human error in occupational accidents. Conclusion: Human errors can be greatly reduced by training, appropriate selection of workers for each type of occupation, and provision of appropriate safety conditions in the workplace.
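
    Fault tree analysis combines basic-event probabilities through AND/OR gates to obtain the top-event (human error) probability; the meta-heuristic algorithms mentioned above are used to handle the non-linear parts of that computation. The sketch below shows only the gate arithmetic, with hypothetical basic-event probabilities that are not taken from the study:

        def and_gate(probs):
            # All inputs must fail (independence assumed).
            p = 1.0
            for q in probs:
                p *= q
            return p

        def or_gate(probs):
            # Any one failing input causes the event (independence assumed).
            p = 1.0
            for q in probs:
                p *= (1.0 - q)
            return 1.0 - p

        # Hypothetical basic-event probabilities, not taken from the study.
        p_inadequate_training = 0.05
        p_age_related_factor = 0.02
        p_ignoring_safety_rules = 0.03

        p_human_error = or_gate([p_inadequate_training, p_age_related_factor, p_ignoring_safety_rules])
        print(f"top-event probability: {p_human_error:.4f}")  # 0.0969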

  20. APJE-SLIM Based Method for Marine Human Error Probability Estimation

    Institute of Scientific and Technical Information of China (English)

    席永涛; 陈伟炯; 夏少生; 张晓东

    2011-01-01

    Safety is an eternal theme in the shipping industry, and research shows that human error is the main cause of maritime accidents. To study marine human error, the performance shaping factors (PSFs) are discussed and the human error probability (HEP) is estimated under their influence. Based on a detailed investigation of human errors in collision avoidance behaviour, the most critical task in navigation, and of the associated PSFs, the reliability of mariners during collision avoidance is analysed using a combination of APJE and SLIM. Results show that PSFs such as fatigue and health status, knowledge, experience and training, task complexity, and safety management and organizational effectiveness have varying degrees of influence on the HEP, and that improving the level of these PSFs can greatly reduce it. Using APJE to determine the absolute human error probabilities at the endpoints solves the problem that reference-point probabilities are hard to obtain in the SLIM method, and yields marine HEPs under different levels of PSF influence.
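
    For readers unfamiliar with the two methods named above: SLIM turns weighted PSF ratings into a Success Likelihood Index (SLI) and maps it to an HEP through log10(HEP) = a*SLI + b, where the two calibration constants come from tasks with known HEPs; APJE (absolute probability judgement) supplies those endpoint HEPs when no reference data exist. A minimal sketch with hypothetical weights, ratings and anchor values, not the study's numbers:

        import math

        def sli(weights, ratings):
            # Success Likelihood Index: weighted sum of PSF ratings (weights sum to 1, ratings on a 0-1 scale).
            return sum(w * r for w, r in zip(weights, ratings))

        def calibrate(sli_low, hep_low, sli_high, hep_high):
            # Fit log10(HEP) = a*SLI + b from two anchor tasks whose HEPs are known
            # (in the paper these endpoint probabilities come from APJE expert judgement).
            a = (math.log10(hep_high) - math.log10(hep_low)) / (sli_high - sli_low)
            b = math.log10(hep_low) - a * sli_low
            return a, b

        def hep(sli_value, a, b):
            return 10 ** (a * sli_value + b)

        weights = [0.30, 0.25, 0.20, 0.15, 0.10]   # e.g. fatigue/health, knowledge/training, task complexity, ...
        ratings = [0.6, 0.7, 0.4, 0.8, 0.5]        # hypothetical ratings for one collision-avoidance task
        a, b = calibrate(sli_low=0.2, hep_low=1e-1, sli_high=0.9, hep_high=1e-4)
        print(hep(sli(weights, ratings), a, b))    # ~1.8e-3 for these illustrative inputs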

  1. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  2. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  3. Assessing human rights impacts in corporate development projects

    Energy Technology Data Exchange (ETDEWEB)

    Salcito, Kendyl, E-mail: kendyl.salcito@unibas.ch [Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, P.O. Box, CH-4002 Basel (Switzerland); University of Basel, P.O. Box, CH-4003 Basel (Switzerland); NomoGaia, 1900 Wazee Street, Suite 303, Denver, CO 80202 (United States); NewFields, LLC, Denver, CO 80202 (United States); Utzinger, Jürg, E-mail: juerg.utzinger@unibas.ch [Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, P.O. Box, CH-4002 Basel (Switzerland); University of Basel, P.O. Box, CH-4003 Basel (Switzerland); Weiss, Mitchell G., E-mail: Mitchell-g.Weiss@unibas.ch [Department of Epidemiology and Public Health, Swiss Tropical and Public Health Institute, P.O. Box, CH-4002 Basel (Switzerland); University of Basel, P.O. Box, CH-4003 Basel (Switzerland); Münch, Anna K., E-mail: annak.muench@gmail.com [Emerging Pathogens Institute, University of Florida, Gainesville, FL 32610 (United States); Singer, Burton H., E-mail: bhsinger@epi.ufl.edu [Emerging Pathogens Institute, University of Florida, Gainesville, FL 32610 (United States); Krieger, Gary R., E-mail: gkrieger@newfields.com [NewFields, LLC, Denver, CO 80202 (United States); Wielga, Mark, E-mail: wielga@nomogaia.org [NomoGaia, 1900 Wazee Street, Suite 303, Denver, CO 80202 (United States); NewFields, LLC, Denver, CO 80202 (United States)

    2013-09-15

    Human rights impact assessment (HRIA) is a process for systematically identifying, predicting and responding to the potential impact on human rights of a business operation, capital project, government policy or trade agreement. Traditionally, it has been conducted as a desktop exercise to predict the effects of trade agreements and government policies on individuals and communities. In line with a growing call for multinational corporations to ensure they do not violate human rights in their activities, HRIA is increasingly incorporated into the standard suite of corporate development project impact assessments. In this context, the policy world's non-structured, desk-based approaches to HRIA are insufficient. Although a number of corporations have commissioned and conducted HRIA, no broadly accepted and validated assessment tool is currently available. The lack of standardisation has complicated efforts to evaluate the effectiveness of HRIA as a risk mitigation tool, and has caused confusion in the corporate world regarding company duties. Hence, clarification is needed. The objectives of this paper are (i) to describe an HRIA methodology, (ii) to provide a rationale for its components and design, and (iii) to illustrate implementation of HRIA using the methodology in two selected corporate development projects—a uranium mine in Malawi and a tree farm in Tanzania. We found that as a prognostic tool, HRIA could examine potential positive and negative human rights impacts and provide effective recommendations for mitigation. However, longer-term monitoring revealed that recommendations were unevenly implemented, dependent on market conditions and personnel movements. This instability in the approach to human rights suggests a need for on-going monitoring and surveillance. -- Highlights: • We developed a novel methodology for corporate human rights impact assessment. • We piloted the methodology on two corporate projects—a mine and a plantation.

  4. THE VALIDITY OF HUMAN AND COMPUTERIZED WRITING ASSESSMENT

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring

    2005-09-01

    This paper summarizes an experiment designed to assess the validity of essay grading between holistic and analytic human graders and a computerized grader based on latent semantic analysis. The validity of the grade was gauged by the extent to which the student’s knowledge of the topic correlated with the grader’s expert knowledge. To assess knowledge, Pathfinder networks were generated by the student essay writers, the holistic and analytic graders, and the computerized grader. It was found that the computer generated grades more closely matched the definition of valid grading than did human generated grades.

  5. Impact of food and fluid intake on technical and biological measurement error in body composition assessment methods in athletes.

    Science.gov (United States)

    Kerr, Ava; Slater, Gary J; Byrne, Nuala

    2017-02-01

    Two, three and four compartment (2C, 3C and 4C) models of body composition are popular methods to measure fat mass (FM) and fat-free mass (FFM) in athletes. However, the impact of food and fluid intake on measurement error has not been established. The purpose of this study was to evaluate standardised (overnight fasted, rested and hydrated) v. non-standardised (afternoon and non-fasted) presentation on technical and biological error on surface anthropometry (SA), 2C, 3C and 4C models. In thirty-two athletic males, measures of SA, dual-energy X-ray absorptiometry (DXA), bioelectrical impedance spectroscopy (BIS) and air displacement plethysmography (BOD POD) were taken to establish 2C, 3C and 4C models. Tests were conducted after an overnight fast (duplicate), about 7 h later after ad libitum food and fluid intake, and repeated 24 h later before and after ingestion of a specified meal. Magnitudes of changes in the mean and typical errors of measurement were determined. Mean change scores for non-standardised presentation and post meal tests for FM were substantially large in BIS, SA, 3C and 4C models. For FFM, mean change scores for non-standardised conditions produced large changes for BIS, 3C and 4C models, small for DXA, trivial for BOD POD and SA. Models that included a total body water (TBW) value from BIS (3C and 4C) were more sensitive to TBW changes in non-standardised conditions than 2C models. Biological error is minimised in all models with standardised presentation but DXA and BOD POD are acceptable if acute food and fluid intake remains below 500 g.
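
    The "typical error of measurement" referred to above is commonly computed from duplicate trials as the standard deviation of the difference scores divided by the square root of two; the sketch below shows that standard calculation, which may differ in detail from the statistics used in the paper:

        import math

        def change_and_typical_error(trial1, trial2):
            # Typical error of measurement from duplicate trials: SD(differences) / sqrt(2).
            diffs = [b - a for a, b in zip(trial1, trial2)]
            n = len(diffs)
            mean_diff = sum(diffs) / n
            sd_diff = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))
            return mean_diff, sd_diff / math.sqrt(2)  # change in the mean, typical error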

  6. Assessment of the Ionospheric and Tropospheric Effects in Location Errors of Data Collection Platforms in Equatorial Region during High and Low Solar Activity Periods

    Directory of Open Access Journals (Sweden)

    Áurea Aparecida da Silva

    2012-01-01

    Full Text Available The geographical locations of data collection platforms (DCP) in the Brazilian Environmental Data Collection System are obtained by processing Doppler shift measurements between satellites and DCP. When the signals travel from a DCP to a satellite crossing the terrestrial atmosphere, they are affected by the atmospheric layers, which delay the signal propagation and cause errors in the computation of the final location coordinates. The signal propagation delay due to atmospheric effects consists, essentially, of ionospheric and tropospheric effects. This work provides an assessment of ionospheric effects using the IRI and IONEX models and of tropospheric delay compensation using climatic data provided by the National Climatic Data Center. Two selected DCPs were used in this study in conjunction with the SCD-2 satellite during high and low solar activity periods. Results show that the ionospheric effects on transmission delays are significant (hundreds of meters in the equatorial region) and should be considered to reduce DCP location errors, mainly in high solar activity periods, whereas the tropospheric effects produce zenith errors of about three meters. Therefore it is shown that the platform location errors can be reduced when the ionospheric and tropospheric effects are properly considered.

  7. Researching Human Experience: video intervention/prevention assessment (VIA)

    Directory of Open Access Journals (Sweden)

    Jennifer Patashnick

    2005-05-01

    Full Text Available Human experience is a critical subject for research. By discussing Video Intervention/Prevention Assessment (VIA), a patient-centered health research method in which patients teach their clinicians about living with a chronic condition through the creation of visual illness narratives, this paper examines the value of qualitative inquiry and why human experience is rarely investigated directly. An analysis of sample VIA data is presented to demonstrate how, by utilizing grounded theory and qualitative analysis, one can derive rich and unique information from human experience.

  8. Experience and lessons from health impact assessment for human rights impact assessment.

    Science.gov (United States)

    Salcito, Kendyl; Utzinger, Jürg; Krieger, Gary R; Wielga, Mark; Singer, Burton H; Winkler, Mirko S; Weiss, Mitchell G

    2015-09-16

    As globalisation has opened remote parts of the world to foreign investment, global leaders at the United Nations and beyond have called on multinational companies to foresee and mitigate negative impacts on the communities surrounding their overseas operations. This movement towards corporate impact assessment began with a push for environmental and social inquiries. It has been followed by demands for more detailed assessments, including health and human rights. In the policy world the two have been joined as a right-to-health impact assessment. In the corporate world, the right-to-health approach fulfils neither managers' need to comprehensively understand impacts of a project, nor rightsholders' need to know that the full suite of their human rights will be safe from violation. Despite the limitations of a right-to-health tool for companies, integration of health into human rights provides numerous potential benefits to companies and the communities they affect. Here, a detailed health analysis through the human rights lens is carried out, drawing on a case study from the United Republic of Tanzania. This paper examines the positive and negative health and human rights impacts of a corporate operation in a low-income setting, as viewed through the human rights lens, considering observations on the added value of the approach. It explores the relationship between health impact assessment (HIA) and human rights impact assessment (HRIA). First, it considers the ways in which HIA, as a study directly concerned with human welfare, is a more appropriate guide than environmental or social impact assessment for evaluating human rights impacts. Second, it considers the contributions HRIA can make to HIA, by viewing determinants of health not as direct versus indirect, but as interrelated.

  9. Human behavioral assessments in current research of Parkinson's disease.

    Science.gov (United States)

    Asakawa, Tetsuya; Fang, Huan; Sugiyama, Kenji; Nozaki, Takao; Kobayashi, Susumu; Hong, Zhen; Suzuki, Katsuaki; Mori, Norio; Yang, Yilin; Hua, Fei; Ding, Guanghong; Wen, Guoqiang; Namba, Hiroki; Xia, Ying

    2016-09-01

    Parkinson's disease (PD) is traditionally classified as a movement disorder because patients mainly complain about motor symptoms. Recently, non-motor symptoms of PD have been recognized by clinicians and scientists as early signs of PD, and they are detrimental factors in the quality of life in advanced PD patients. It is crucial to comprehensively understand the essence of behavioral assessments, from the simplest measurement of certain symptoms to complex neuropsychological tasks. We have recently reviewed behavioral assessments in PD research with animal models (Asakawa et al., 2016). As a companion volume, this article will systematically review the behavioral assessments of motor and non-motor PD symptoms of human patients in current research. The major aims of this article are: (1) promoting a comparative understanding of various behavioral assessments in terms of the principle and measuring indexes; (2) addressing the major strengths and weaknesses of these behavioral assessments for a better selection of tasks/tests in order to avoid biased conclusions due to inappropriate assessments; and (3) presenting new concepts regarding the development of wearable devices and mobile internet in future assessments. In conclusion we emphasize the importance of improving the assessments for non-motor symptoms because of their complex and unique mechanisms in human PD brains.

  10. Quantification and Assessment of Interfraction Setup Errors Based on Cone Beam CT and Determination of Safety Margins for Radiotherapy.

    Directory of Open Access Journals (Sweden)

    Macarena Cubillos Mesías

    Full Text Available To quantify interfraction patient setup errors for radiotherapy based on cone-beam computed tomography and suggest safety margins accordingly. Positioning vectors of pre-treatment cone-beam computed tomography for different treatment sites were collected (n = 9504). For each patient group the total average and standard deviation were calculated and the overall mean, systematic and random errors as well as safety margins were determined. The systematic (and random) errors in the superior-inferior, left-right and anterior-posterior directions were: for prostate, 2.5 (3.0), 2.6 (3.9) and 2.9 (3.9) mm; for prostate bed, 1.7 (2.0), 2.2 (3.6) and 2.6 (3.1) mm; for cervix, 2.8 (3.4), 2.3 (4.6) and 3.2 (3.9) mm; for rectum, 1.6 (3.1), 2.1 (2.9) and 2.5 (3.8) mm; for anal, 1.7 (3.7), 2.1 (5.1) and 2.5 (4.8) mm; for head and neck, 1.9 (2.3), 1.4 (2.0) and 1.7 (2.2) mm; for brain, 1.0 (1.5), 1.1 (1.4) and 1.0 (1.1) mm; and for mediastinum, 3.3 (4.6), 2.6 (3.7) and 3.5 (4.0) mm. The CTV-to-PTV margins had the smallest values for brain (3.6, 3.7 and 3.3 mm) and the largest for mediastinum (11.5, 9.1 and 11.6 mm). For pelvic treatments the means (and standard deviations) were 7.3 (1.6), 8.5 (0.8) and 9.6 (0.8) mm. Systematic and random setup errors were smaller than 5 mm. The largest errors were found for organs with higher motion probability. The suggested safety margins were comparable to published values in previous, but often smaller, studies.
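
    The reported CTV-to-PTV margins are consistent with the widely used van Herk recipe, M = 2.5*Sigma + 0.7*sigma, applied to the systematic (Sigma) and random (sigma) errors listed above; the sketch below assumes that recipe, which the abstract itself does not name:

        def ctv_to_ptv_margin(systematic_mm, random_mm):
            # van Herk margin recipe: M = 2.5 * Sigma + 0.7 * sigma.
            return 2.5 * systematic_mm + 0.7 * random_mm

        print(ctv_to_ptv_margin(1.0, 1.5))  # brain, superior-inferior: 3.55 mm (reported 3.6 mm)
        print(ctv_to_ptv_margin(3.3, 4.6))  # mediastinum, superior-inferior: 11.47 mm (reported 11.5 mm)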

  11. Human scenarios for the screening assessment. Columbia River Comprehensive Impact Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Harper, B.L.; Lane, N.K.; Strenge, D.L.; Spivey, R.B.

    1996-03-01

    Because of past nuclear production operations along the Columbia River, there is intense public and tribal interest in assessing any residual Hanford Site-related contamination along the river from the Hanford Reach to the Pacific Ocean. The Columbia River Comprehensive Impact Assessment (CRCIA) was proposed to address these concerns. The assessment of the Columbia River is being conducted in phases. The initial phase is a screening assessment of risk, which addresses current environmental conditions for a range of potential uses. One component of the screening assessment estimates the risk from contaminants in the Columbia River to humans. Because humans affected by the Columbia River are involved in a wide range of activities, various scenarios have been developed on which to base the risk assessments. The scenarios illustrate the range of activities possible for members of the public coming into contact with the Columbia River so that the impact of contaminants in the river on human health can be assessed. Each scenario illustrates particular activity patterns by a specific group. Risk will be assessed at the screening level for each scenario. This report defines the scenarios and the exposure factors that will be the basis for estimating the potential range of risk to human health from Hanford-derived radioactive as well as non-radioactive contaminants associated with the Columbia River.

  12. Assessing the predictive performance of risk-based water quality criteria using decision error estimates from receiver operating characteristics (ROC) analysis.

    Science.gov (United States)

    McLaughlin, Douglas B

    2012-10-01

    Field data relating aquatic ecosystem responses with water quality constituents that are potential ecosystem stressors are being used increasingly in the United States in the derivation of water quality criteria to protect aquatic life. In light of this trend, there is a need for transparent quantitative methods to assess the performance of models that predict ecological conditions using a stressor-response relationship, a response variable threshold, and a stressor variable criterion. Analysis of receiver operating characteristics (ROC analysis) has a considerable history of successful use in medical diagnostic, industrial, and other fields for similarly structured decision problems, but its use for informing water quality management decisions involving risk-based environmental criteria is less common. In this article, ROC analysis is used to evaluate predictions of ecological response variable status for 3 water quality stressor-response data sets. Information on error rates is emphasized due in part to their common use in environmental studies to describe uncertainty. One data set is comprised of simulated data, and 2 involve field measurements described previously in the literature. These data sets are also analyzed using linear regression and conditional probability analysis for comparison. Results indicate that of the methods studied, ROC analysis provides the most comprehensive characterization of prediction error rates including false positive, false negative, positive predictive, and negative predictive errors. This information may be used along with other data analysis procedures to set quality objectives for and assess the predictive performance of risk-based criteria to support water quality management decisions.
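
    For a fixed stressor criterion and response-variable threshold, the error rates emphasised above (false positive, false negative, positive predictive and negative predictive) reduce to contingency-table arithmetic over the prediction outcomes; the full ROC analysis repeats this while sweeping the criterion. A minimal sketch with hypothetical counts:

        def decision_error_rates(tp, fp, tn, fn):
            # Error rates for predicted vs. observed ecological condition at one criterion value.
            return {
                "false_positive_rate": fp / (fp + tn),
                "false_negative_rate": fn / (fn + tp),
                "positive_predictive_value": tp / (tp + fp),
                "negative_predictive_value": tn / (tn + fn),
            }

        # Hypothetical counts: 40 sites correctly flagged as impaired, 10 falsely flagged,
        # 45 correctly passed, 5 impaired sites missed.
        print(decision_error_rates(tp=40, fp=10, tn=45, fn=5))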

  13. Effects of Digital Human-Machine Interface Characteristics on Human Error in Nuclear Power Plants

    Institute of Scientific and Technical Information of China (English)

    李鹏程; 张力; 戴立操; 黄卫刚

    2011-01-01

    To identify the effects of digital human-machine interface characteristics on human error in nuclear power plants, the new characteristics of the digital human-machine interface are identified by comparison with traditional analog control systems in terms of information display, user interface interaction and management, control systems, alarm systems and the procedure system. The negative effects of these characteristics on human error, such as increased cognitive load and workload, mode confusion and loss of situation awareness, are identified through field research and operator interviews. For these adverse effects, corresponding prevention and control measures are provided to support the prevention and minimization of human errors and the optimization of human-machine interface design.

  14. Considerations for the integration of human and wildlife radiological assessments

    Energy Technology Data Exchange (ETDEWEB)

    Copplestone, D [Environment Agency, PO Box 12, Richard Fairclough House, Knutsford Road, Warrington WA4 1HG (United Kingdom); Brown, J E [Norwegian Radiation Protection Authority, Grini Naeringspark 13, 1361 Oesteraas (Norway); Beresford, N A, E-mail: david.copplestone@environment-agency.gov.u [Centre for Ecology and Hydrology, CEH-Lancaster, Lancaster Environment Centre, Library Avenue, Bailrigg, Lancaster LA1 4AP (United Kingdom)

    2010-06-15

    A number of tools and approaches have been developed recently to allow assessments of the environmental impact of radiation on wildlife to be undertaken. The International Commission on Radiological Protection (ICRP) has stated an intention to provide a more inclusive protection framework for humans and the environment. Using scenarios, which are loosely based on real or predicted discharge data, we investigate how radiological assessments of humans and wildlife can be integrated with special consideration given to the recent outputs of the ICRP. We highlight how assumptions about the location of the exposed population of humans and wildlife, and the selection of appropriate benchmarks for determining potential risks can influence the outcome of the assessments. A number of issues associated with the transfer component and numeric benchmarks were identified, which need to be addressed in order to fully integrate the assessment approaches. A particular issue was the lack of comparable benchmark values for humans and wildlife. In part this may be addressed via the ICRP's recommended derived consideration reference levels for their 12 Reference Animals and Plants.

  15. Assessing exposure to phthalates - the human biomonitoring approach.

    Science.gov (United States)

    Wittassek, Matthias; Koch, Holger Martin; Angerer, Jürgen; Brüning, Thomas

    2011-01-01

    Some phthalates are developmental and reproductive toxicants in animals. Exposure to phthalates is considered to be potentially harmful to human health as well. Based on a comprehensive literature research, we present an overview of the sources of human phthalate exposure and results of exposure assessments with special focus on human biomonitoring data. Among the general population, there is widespread exposure to a number of phthalates. Foodstuff is the major source of phthalate exposure, particularly for the long-chain phthalates such as di(2-ethylhexyl) phthalate. For short-chain phthalates such as di-n-butyl-phthalate, additional pathways are of relevance. In general, children are exposed to higher phthalate doses than adults. Especially, high exposures can occur through some medications or medical devices. By comparing exposure data with existing limit values, one can also assess the risks associated with exposure to phthalates. Within the general population, some individuals exceed tolerable daily intake values for one or more phthalates. In high exposure groups, (intensive medical care, medications) tolerable daily intake transgressions can be substantial. Recent findings from animal studies suggest that a cumulative risk assessment for phthalates is warranted, and a cumulative exposure assessment to phthalates via human biomonitoring is a major step into this direction.

  16. Considerations for the integration of human and wildlife radiological assessments.

    Science.gov (United States)

    Copplestone, D; Brown, J E; Beresford, N A

    2010-06-01

    A number of tools and approaches have been developed recently to allow assessments of the environmental impact of radiation on wildlife to be undertaken. The International Commission on Radiological Protection (ICRP) has stated an intention to provide a more inclusive protection framework for humans and the environment. Using scenarios, which are loosely based on real or predicted discharge data, we investigate how radiological assessments of humans and wildlife can be integrated with special consideration given to the recent outputs of the ICRP. We highlight how assumptions about the location of the exposed population of humans and wildlife, and the selection of appropriate benchmarks for determining potential risks can influence the outcome of the assessments. A number of issues associated with the transfer component and numeric benchmarks were identified, which need to be addressed in order to fully integrate the assessment approaches. A particular issue was the lack of comparable benchmark values for humans and wildlife. In part this may be addressed via the ICRP's recommended derived consideration reference levels for their 12 Reference Animals and Plants.

  17. Human health risk assessment for silver catfish Schilbe intermedius ...

    African Journals Online (AJOL)

    2014-09-03


  18. Biocides Steering Group on human exposure assessment: A preliminary report

    NARCIS (Netherlands)

    Hemmen, J.J. van

    1999-01-01

    In a project granted by DG XI of the European Commission, it is attempted to collate experimental and theoretical data on human (workers and consumers) exposure assessment to biocidal products, and to outline the methodology for sampling and measurement. On the basis of the available evidence, appro

  19. Effective use of pre-job briefing as tool for the prevention of human error; Effektive Nutzung der Arbeitsvorbesprechung als Werkzeug zur Vermeidung von Fehlhandlungen

    Energy Technology Data Exchange (ETDEWEB)

    Schlump, Ansgar [KLE GmbH, Lingen (Germany). Kernkraftwerk Emsland

    2015-06-15

    There is a fundamental demand to minimise the risks for workers and facilities while executing maintenance work. To ensure that facilities are secure and reliable, any deviation from normal operating behaviour has to be avoided. Accurate planning is the basis for minimising mistakes and making work more secure, and all workers involved should understand how the work is to be done and what is expected of them in order to avoid human errors. Especially in nuclear power plants, human performance tools (HPT) have proved to be an effective instrument for minimising human errors. These human performance tools consist of numerous different tools that complement each other (e.g. pre-job briefing), and the safety culture of a plant is also characterised by them. Choosing the right HP tool is often a difficult task for the work planner: on the one hand, he wants to avoid mistakes during the execution of work, but on the other hand he does not want to burden the workers with unnecessary requirements. The proposed concept uses a simple risk analysis that takes into account the complexity of the task, past experience and the consequences of failure. One main result of this risk analysis is a recommendation on the level of detail of the pre-job briefing, to reduce the risks for the staff involved to a minimum.
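
    A toy version of the simple risk analysis described above, mapping ratings for task complexity, relevant operating experience and consequences of failure to a recommended level of pre-job briefing detail; the scoring scheme and thresholds are illustrative assumptions, not the plant's actual procedure:

        def briefing_level(task_complexity, operating_experience, failure_consequence):
            # Each factor rated 1 (low) to 3 (high); more relevant experience lowers the risk score.
            score = task_complexity + (4 - operating_experience) + failure_consequence
            if score >= 8:
                return "detailed pre-job briefing with walk-through and assigned error traps"
            if score >= 5:
                return "standard pre-job briefing covering critical steps and stop criteria"
            return "short briefing confirming scope and expectations"

        print(briefing_level(task_complexity=3, operating_experience=1, failure_consequence=3))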

  20. Assessment of measurement error due to sampling perspective in the space-based Doppler lidar wind profiler

    Science.gov (United States)

    Houston, S. H.; Emmitt, G. D.

    1986-01-01

    A Multipair Algorithm (MPA) has been developed to minimize the contribution of the sampling error in the simulated Doppler lidar wind profiler measurements (due to angular and spatial separation between shots in a shot pair) to the total measurement uncertainty. Idealized wind fields are used as input to the profiling model, and radial wind estimates are passed through the MPA to yield a wind measurement for 300 x 300 sq km areas. The derived divergence fields illustrate the gradient patterns that are particular to the Doppler lidar sampling strategy and perspective.

  1. Humans vs Hardware: The Unique World of NASA Human System Risk Assessment

    Science.gov (United States)

    Anton, W.; Havenhill, M.; Overton, Eric

    2016-01-01

    Understanding spaceflight risks to crew health and performance is a crucial aspect of preparing for exploration missions in the future. The research activities of the Human Research Program (HRP) provide substantial evidence to support most risk reduction work. The Human System Risk Board (HSRB), acting on behalf of the Office of Chief Health and Medical Officer (OCHMO), assesses these risks and assigns likelihood and consequence ratings to track progress. Unfortunately, many traditional approaches in risk assessment such as those used in the engineering aspects of spaceflight are difficult to apply to human system risks. This presentation discusses the unique aspects of risk assessment from the human system risk perspective and how these limitations are accommodated and addressed in order to ensure that reasonable inputs are provided to support the OCHMO's overall risk posture for manned exploration missions.

  2. A systems perspective of managing error recovery and tactical re-planning of operating teams in safety critical domains.

    Science.gov (United States)

    Kontogiannis, Tom

    2011-04-01

    Research in human error has provided useful tools for designing procedures, training, and intelligent interfaces that trap errors at an early stage. However, this "error prevention" policy may not be entirely successful because human errors will inevitably occur. This requires that the error management process (e.g., detection, diagnosis and correction) must also be supported. Research has focused almost exclusively on error detection; little is known about error recovery, especially in the context of safety critical systems. The aim of this paper is to develop a research framework that integrates error recovery strategies employed by experienced practitioners in handling their own errors. A control theoretic model of human performance was used to integrate error recovery strategies assembled from reviews of the literature, analyses of near misses from aviation and command & control domains, and observations of abnormal situations training at air traffic control facilities. The method of system dynamics has been used to analyze and compare error recovery strategies in terms of patterns of interaction, system affordances, and types of recovery plans. System dynamics offer a promising basis for studying the nature of error recovery management in the context of team interactions and system characteristics. The proposed taxonomy of error recovery strategies can help human factors and safety experts to develop resilient system designs and training solutions for managing human errors in unforeseen situations; it may also help incident investigators to explore why people's actions and assessments were not corrected at the time. Copyright © 2011 Elsevier Ltd. All rights reserved.

  3. Sources of errors and uncertainties in the assessment of forest soil carbon stocks at different scales—review and recommendations

    NARCIS (Netherlands)

    Vanguelova, E.I.; Bonifacio, E.; Vos, De B.; Hoosbeek, M.R.; Berger, T.W.; Vesterdal, L.; Armolaitis, K.; Celi, L.; Dinca, L.; Kjønaas, O.J.; Pavlenda, P.; Pumpanen, J.; Püttsepp,; Reidy, B.; Simončič, P.; Tobin, B.; Zhiyanski, M.

    2016-01-01

    Spatially explicit knowledge of recent and past soil organic carbon (SOC) stocks in forests will improve our understanding of the effect of human- and non-human-induced changes on forest C fluxes. For SOC accounting, a minimum detectable difference must be defined in order to adequately determine te

  4. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized differently: universal vs. individual; human-related vs. system-related; perceptive vs. cognitive errors; 1. descriptive, 2. interpretative, 3. decision-related. Perceptive errors: 1. false positive, 2. false negative (nonidentification, erroneous identification). Cognitive errors: knowledge-based, psychological.

  5. Refractive Errors

    Science.gov (United States)

    ... does the eye focus light? In order to see clearly, light rays from an object must focus onto the ... The refractive errors are: myopia, hyperopia and astigmatism [See figures 2 and 3]. What is hyperopia (farsightedness)? Hyperopia occurs when light rays focus behind the retina (because the eye ...

  6. Human errors in medical practice and their prevention

    Institute of Scientific and Technical Information of China (English)

    周大春; 陈肖敏; 赵彩莲; 蔡秀军

    2009-01-01

    Human errors are planning or execution errors that run counter to intention, and in medical practice they are a major cause of adverse events. Common examples include wrong-site surgery, medication errors, wrong treatment plans, miswritten or misread orders, and inadvertent equipment operation. Errors of this category can be prevented by learning from experience and achievements worldwide. Preventive measures address both the human and the system perspective: reinforced education and training, process optimization, and hardware redesign. These measures can be supplemented by multiple safety barriers in high-risk technical operations to increase the chance of breaking the accident chain; examples are pre-operative surgical site marking, multi-department cooperation in patient identification, bar-coded medication delivery, read-back during verbal communication, and adherence to standardized clinical pathways. Continuous quality improvement can be achieved when both management and staff see medical errors in a rational light and frontline staff are willing to report their errors.

  7. Comparative Human Health Impact Assessment of Engineered Nanomaterials in the Framework of Life Cycle Assessment.

    Science.gov (United States)

    Fransman, Wouter; Buist, Harrie; Kuijpers, Eelco; Walser, Tobias; Meyer, David; Zondervan-van den Beuken, Esther; Westerhout, Joost; Klein Entink, Rinke H; Brouwer, Derk H

    2017-07-01

    For safe innovation, knowledge on potential human health impacts is essential. Ideally, these impacts are considered within a larger life-cycle-based context to support sustainable development of new applications and products. A methodological framework that accounts for human health impacts caused by inhalation of engineered nanomaterials (ENMs) in an indoor air environment has been previously developed. The objectives of this study are as follows: (i) evaluate the feasibility of applying the characterization factor (CF) framework to nanoparticle (NP) exposure in the workplace based on currently available data; and (ii) supplement any resulting knowledge gaps with methods and data from the life cycle approach and human risk assessment (LICARA) project to develop a modified case-specific version of the framework that will enable near-term inclusion of NP human health impacts in life cycle assessment (LCA) using a case study involving nanoscale titanium dioxide (nanoTiO2). The intent is to enhance typical LCA with elements of regulatory risk assessment, including its more detailed measure of uncertainty. The proof-of-principle demonstration of the framework highlighted the lack of available data for both the workplace emissions and human health effects of ENMs that is needed to calculate generalizable characterization factors using common human health impact assessment practices in LCA. The alternative approach of using intake fractions derived from workplace air concentration measurements and effect factors based on best-available toxicity data supported the current case-by-case approach for assessing the human health life cycle impacts of ENMs. Ultimately, the proposed framework and calculations demonstrate the potential utility of integrating elements of risk assessment with LCA for ENMs once the data are available. © 2016 Society for Risk Analysis.

  8. Toward a cognitive taxonomy of medical errors.

    Science.gov (United States)

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  9. Dependence Assessment in Human Reliability Analysis Using Evidence Theory and AHP.

    Science.gov (United States)

    Su, Xiaoyan; Mahadevan, Sankaran; Xu, Peida; Deng, Yong

    2015-07-01

    Dependence assessment among human errors in human reliability analysis (HRA) is an important issue. Many of the dependence assessment methods in HRA rely heavily on expert opinion and are thus subjective and may sometimes cause inconsistency. In this article, we propose a computational model based on the Dempster-Shafer evidence theory (DSET) and the analytic hierarchy process (AHP) method to handle dependence in HRA. First, dependence influencing factors among human tasks are identified and the weights of the factors are determined by experts using the AHP method. Second, judgment on each factor is given by the analyst referring to anchors and linguistic labels. Third, the judgments are represented as basic belief assignments (BBAs) and are integrated into a fused BBA by weighted average combination in DSET. Finally, the conditional human error probability (CHEP) is calculated based on the fused BBA. The proposed model can deal with ambiguity and the degree of confidence in the judgments, and is able to reduce the subjectivity and improve the consistency in the evaluation process.
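
    As a rough illustration of the fusion step described in this record, the sketch below combines several basic belief assignments (BBAs) by weighted averaging over a small frame of discernment; the factor names, AHP weights and mass values are hypothetical, and the snippet is not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): weighted-average combination of
# basic belief assignments (BBAs) over a simple frame of discernment
# {"dep", "indep", "dep_or_indep"}, as might be used when combining judgments
# on dependence-influencing factors. Factor weights and masses are hypothetical.

def fuse_bbas(bbas, weights):
    """Weighted-average combination of BBAs (each a dict over focal elements)."""
    total = sum(weights)
    fused = {}
    for bba, w in zip(bbas, weights):
        for focal, mass in bba.items():
            fused[focal] = fused.get(focal, 0.0) + (w / total) * mass
    return fused

# Hypothetical judgments on three dependence-influencing factors
# (e.g. closeness in time, similarity of cues, same crew), expressed as BBAs.
bbas = [
    {"dep": 0.6, "indep": 0.2, "dep_or_indep": 0.2},
    {"dep": 0.3, "indep": 0.5, "dep_or_indep": 0.2},
    {"dep": 0.7, "indep": 0.1, "dep_or_indep": 0.2},
]
weights = [0.5, 0.2, 0.3]  # AHP-derived factor weights (hypothetical)

print(fuse_bbas(bbas, weights))
# approximately {'dep': 0.57, 'indep': 0.23, 'dep_or_indep': 0.20}
```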

  10. Hand-held dynamometry in patients with haematological malignancies: Measurement error in the clinical assessment of knee extension strength

    Directory of Open Access Journals (Sweden)

    Uebelhart Daniel

    2009-03-01

    Full Text Available Abstract Background Hand-held dynamometry is a portable and inexpensive method to quantify muscle strength. To determine if muscle strength has changed, an examiner must know what part of the difference between a patient's pre-treatment and post-treatment measurements is attributable to real change, and what part is due to measurement error. This study aimed to determine the relative and absolute reliability of intra- and inter-observer strength measurements with a hand-held dynamometer (HHD). Methods Two observers performed maximum voluntary peak torque measurements (MVPT) for isometric knee extension in 24 patients with haematological malignancies. For each patient, the measurements were carried out on the same day. The main outcome measures were the intraclass correlation coefficient (ICC) with 95% CI, the standard error of measurement (SEM), the smallest detectable difference (SDD), the relative values as % of the grand mean of the SEM and SDD, and the limits of agreement for the intra- and inter-observer '3 repetition average' and the 'highest value of 3 MVPT' knee extension strength measures. Results The intra-observer ICCs were 0.94 for the average of 3 MVPT (95% CI: 0.86–0.97) and 0.86 for the highest value of 3 MVPT (95% CI: 0.71–0.94). The ICCs for the inter-observer measurements were 0.89 for the average of 3 MVPT (95% CI: 0.75–0.95) and 0.77 for the highest value of 3 MVPT (95% CI: 0.54–0.90). The SEMs for the intra-observer measurements were 6.22 Nm (3.98% of the grand mean, GM) and 9.83 Nm (5.88% of GM). For the inter-observer measurements, the SEMs were 9.65 Nm (6.65% of GM) and 11.41 Nm (6.73% of GM). The SDDs for the generated parameters varied from 17.23 Nm (11.04% of GM) to 27.26 Nm (17.09% of GM) for intra-observer measurements, and 26.76 Nm (16.77% of GM) to 31.62 Nm (18.66% of GM) for inter-observer measurements, with similar results for the limits of agreement. Conclusion The results indicate that there is acceptable relative reliability
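
    The reliability indices reported above are related by standard formulas (SEM = SD·sqrt(1 − ICC); SDD = 1.96·sqrt(2)·SEM). The sketch below illustrates those relations with hypothetical numbers; it is not the study's computation.

```python
import math

# Illustrative relations between ICC, SEM and SDD (hypothetical numbers,
# not the study data):
#   SEM = SD * sqrt(1 - ICC)
#   SDD = 1.96 * sqrt(2) * SEM   (smallest detectable difference at the 95% level)

def sem(sd, icc):
    return sd * math.sqrt(1.0 - icc)

def sdd(sem_value):
    return 1.96 * math.sqrt(2.0) * sem_value

sd_between_subjects = 25.0  # Nm, hypothetical between-subject standard deviation
icc = 0.94                  # hypothetical intraclass correlation coefficient

s = sem(sd_between_subjects, icc)
print(f"SEM = {s:.2f} Nm, SDD = {sdd(s):.2f} Nm")
```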

  11. Vulnerability assessment of atmospheric environment driven by human impacts.

    Science.gov (United States)

    Zhang, Yang; Shen, Jing; Ding, Feng; Li, Yu; He, Li

    2016-11-15

    Atmospheric environment quality worsening is a substantial threat to public health worldwide, and in many places air pollution driven by intensified human activity is increasing dramatically. However, no studies have investigated the integration of vulnerability assessment and the atmospheric environment driven by human impacts. The objective of this study was to identify and prioritize undesirable environmental changes as an early warning system for environment managers and decision makers in terms of human, atmospheric environment, and socio-economic elements. We developed a vulnerability assessment method for the atmospheric environment associated with human impact; the method integrates the spatial context of a Geographic Information System (GIS) tool, a multi-criteria decision analysis (MCDA) method, and ordered weighted averaging (OWA) operators under the Exposure-Sensitivity-Adaptive Capacity (ESA) framework, so that decision makers can obtain vulnerability assessment results that reflect different attitudes toward vulnerability. We applied the developed method in the Beijing-Tianjin-Hebei (BTH) region, China, and found it to be reliable and consistent with the China Environmental Status Bulletin. Results indicate that the vulnerability of the atmospheric environment in the BTH region is not optimistic, and environment managers should do more to address air pollution. The most appropriate strategic decisions and development programs for a city or region can thus be selected with the assistance of the vulnerability results. Copyright © 2016 Elsevier B.V. All rights reserved.
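
    The following sketch illustrates, under stated assumptions, how an ordered weighted averaging (OWA) operator can encode different decision-maker attitudes when aggregating exposure, sensitivity and adaptive-capacity scores; the scores and order weights are hypothetical and the snippet is not taken from the study.

```python
# Illustrative ordered weighted averaging (OWA) aggregation: criterion scores
# are sorted in descending order and combined with position weights that encode
# the decision maker's attitude. Scores and weights below are hypothetical.

def owa(scores, order_weights):
    assert abs(sum(order_weights) - 1.0) < 1e-9
    ranked = sorted(scores, reverse=True)
    return sum(w * s for w, s in zip(order_weights, ranked))

# normalised exposure / sensitivity / adaptive-capacity indicator scores for one cell
cell_scores = [0.82, 0.40, 0.65]

pessimistic = [0.6, 0.3, 0.1]  # emphasises the highest (worst) scores
optimistic = [0.1, 0.3, 0.6]   # emphasises the lowest (best) scores

print(f"pessimistic vulnerability: {owa(cell_scores, pessimistic):.3f}")  # 0.727
print(f"optimistic vulnerability:  {owa(cell_scores, optimistic):.3f}")   # 0.517
```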

  12. The human component of sustainability: a study for assessing "human performances" of energy efficient construction blocks.

    Science.gov (United States)

    Attaianese, Erminia; Duca, Gabriella

    2012-01-01

    This paper presents applied research aimed at understanding the relevance and applicability of human-related criteria in the sustainability assessment of construction materials. From a theoretical perspective, consideration of human factors is strongly encouraged by building sustainability assessment methods, but in practice current models for building sustainability assessment neglect ergonomic issues, especially those concerning the construction phase. The study starts from the observation that new construction techniques for highly energy efficient external walls are characterized by elements that are generally heavier and bigger than traditional materials. In this case, high sustainability performance in terms of energy saving can be reached only at the cost of high, and therefore less sustainable, human effort during setting-up operations. The paper illustrates a practical approach for encompassing human factors in the sustainability assessment of four block types for energy efficient external walls. The research steps, from block selection to bricklaying task analysis, formulation of human factors indicators and metrics, data gathering and final assessment, are presented. Finally, open issues and possible generalizations from the case study are discussed.

  13. Assessing sustainable biophysical human-nature connectedness at regional scales

    Science.gov (United States)

    Dorninger, Christian; Abson, David J.; Fischer, Joern; von Wehrden, Henrik

    2017-05-01

    Humans are biophysically connected to the biosphere through the flows of materials and energy appropriated from ecosystems. While this connection is fundamental for human well-being, many modern societies have—for better or worse—disconnected themselves from the natural productivity of their immediate regional environment. In this paper, we conceptualize the biophysical human-nature connectedness of land use systems at regional scales. We distinguish two mechanisms by which primordial connectedness of people to regional ecosystems has been circumvented via the use of external inputs. First, ‘biospheric disconnection’ refers to people drawing on non-renewable minerals from outside the biosphere (e.g. fossils, metals and other minerals). Second, ‘spatial disconnection’ arises from the imports and exports of biomass products and imported mineral resources used to extract and process ecological goods. Both mechanisms allow for greater regional resource use than would be possible otherwise, but both pose challenges for sustainability, for example, through waste generation, depletion of non-renewable resources and environmental burden shifting to distant regions. In contrast, biophysically reconnected land use systems may provide renewed opportunities for inhabitants to develop an awareness of their impacts and fundamental reliance on ecosystems. To better understand the causes, consequences, and possible remedies related to biophysical disconnectedness, new quantitative methods to assess the extent of regional biophysical human-nature connectedness are needed. To this end, we propose a new methodological framework that can be applied to assess biophysical human-nature connectedness in any region of the world.

  14. Human health risk assessment of heavy metals in urban stormwater.

    Science.gov (United States)

    Ma, Yukun; Egodawatta, Prasanna; McGree, James; Liu, An; Goonetilleke, Ashantha

    2016-07-01

    Toxic chemical pollutants such as heavy metals (HMs) are commonly present in urban stormwater. These pollutants can pose a significant risk to human health and hence a significant barrier to urban stormwater reuse. The primary aim of this study was to develop an approach for quantitatively assessing the risk to human health due to the presence of HMs in stormwater. This approach will lead to informed decision making in relation to risk management of urban stormwater reuse, enabling efficient implementation of appropriate treatment strategies. In this study, risks to human health from heavy metals were assessed as a hazard index (HI) and quantified as a function of traffic and land use related parameters, traffic and land use being the primary factors influencing heavy metal loads in the urban environment. The risks posed by heavy metals associated with total solids and with fine solids were both assessed. While an individual heavy metal may not pose a significant risk, the presence of multiple heavy metals could be detrimental to human health. These findings suggest that stormwater guidelines should consider the combined risk from multiple heavy metals rather than the threshold concentration of an individual species. Furthermore, it was found that the risk to human health from heavy metals in stormwater is significantly influenced by traffic volume, and that the risk associated with stormwater from industrial areas is generally higher than that from commercial and residential areas.
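
    As an illustration of the hazard index idea used in this record, the sketch below sums hazard quotients for several metals in the generic EPA style (HQ = chronic daily intake / reference dose); all concentrations, exposure parameters and reference doses are hypothetical placeholders rather than the study's values.

```python
# Illustrative hazard index (HI) calculation for multiple heavy metals, in the
# generic US EPA style of summing hazard quotients (HQ = exposure dose / RfD).
# Concentrations, intake assumptions and reference doses are hypothetical.

def chronic_daily_intake(c_mg_per_l, ir_l_per_day=2.0, ef_days=350, ed_years=30,
                         bw_kg=70.0, at_days=30 * 365):
    """Chronic daily intake (mg/kg/day) for a simple ingestion pathway."""
    return (c_mg_per_l * ir_l_per_day * ef_days * ed_years) / (bw_kg * at_days)

metals = {  # hypothetical concentrations (mg/L) and oral reference doses (mg/kg/day)
    "Cd": {"conc": 0.0005, "rfd": 0.0005},
    "Pb": {"conc": 0.0100, "rfd": 0.0035},
    "Zn": {"conc": 0.2000, "rfd": 0.3000},
}

hi = 0.0
for name, m in metals.items():
    hq = chronic_daily_intake(m["conc"]) / m["rfd"]
    hi += hq
    print(f"{name}: HQ = {hq:.3f}")

print(f"Hazard index (sum of HQs) = {hi:.3f}")  # HI > 1 would suggest potential concern
```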

  15. Landscape-based assessment of human disturbance for michigan lakes.

    Science.gov (United States)

    Wang, Lizhu; Wehrly, Kevin; Breck, James E; Kraft, Lidia Szabo

    2010-09-01

    Assessment of lake impairment status and identification of the type and source of threats are essential for protection of intact, enhancement of modified, and restoration of impaired lakes. For regions in which large numbers of lakes occur, such assessment has usually been done for only a small fraction of lakes due to resource and time limitations. This study describes a process for assessing lake impairment status and identifying which human disturbances have the greatest impact on each lake for all lakes that are 2 ha or larger in the state of Michigan, using readily available, georeferenced natural and human disturbance databases. In-lake indicators of impairment are available for only a small subset of lakes in Michigan. Using statistical relationships between the in-lake indicators and landscape natural and human-induced measures from the subset lakes, we assessed the likely impairment condition of lakes for which in-lake indicator data were unavailable. Approximately 92% of lakes in Michigan were identified as being least to marginally impacted and about 8% were moderately to heavily impacted by landscape human disturbances. Among lakes that were heavily impacted, more inline lakes (92%) were impacted by human disturbances than disconnected (6%) or headwater lakes (2%). More small lakes were impacted than medium to large lakes. For inline lakes, 90% of the heavily impacted lakes were less than 40 ha, 10% were between 40 and 405 ha, and 1% were greater than 405 ha. For disconnected and headwater lakes, all of the heavily impacted lakes were less than 40 ha. Among the anthropogenic disturbances that contributed the most to lake disturbance index scores, nutrient yields and farm animal density affected the highest number of lakes, agricultural land use affected a moderate number of lakes, and point-source pollution and road measures affected the fewest lakes. Our process for assessing lake condition

  16. Development of concepts for human labour accounting in Emergy Assessment and other Environmental Sustainability Assessment methods

    DEFF Research Database (Denmark)

    Kamp, Andreas; Morandi, Fabiana; Østergård, Hanne

    2016-01-01

    Human labour is central to the functioning of any human-influenced process. Nevertheless, Environmental Sustainability Assessments (ESAs) do not systematically include human labour as an input. Systematic omission of labour inputs in ESAs may constitute an unfortunate, significant bias in favour of labour intensive processes, and a systematic underestimation of environmental impacts has implications for decision-making. A brief review of the evaluation of human labour in ESAs reveals that only Emergy Assessment (EmA) accounts for labour as standard. Focussing on EmA, we find, however ... calculation approach is demonstrated using examples from the literature (USA, with allocation based on educational level; Ghana, with allocation based on income level; the World, with no allocation). We elaborate on how labour may be considered as endogenous or exogenous to the studied system, and how inputs

  17. Towards a systematic assessment of errors in diffusion Monte Carlo calculations of semiconductors: Case study of zinc selenide and zinc oxide

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Jaehyung [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Wagner, Lucas K. [Department of Physics, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); Ertekin, Elif, E-mail: ertekin@illinois.edu [Department of Mechanical Science and Engineering, 1206 W Green Street, University of Illinois at Urbana-Champaign, Urbana, Illinois 61801 (United States); International Institute for Carbon Neutral Energy Research - WPI-I2CNER, Kyushu University, 744 Moto-oka, Nishi-ku, Fukuoka 819-0395 (Japan)

    2015-12-14

    The fixed node diffusion Monte Carlo (DMC) method has attracted interest in recent years as a way to calculate properties of solid materials with high accuracy. However, the framework for the calculation of properties such as total energies, atomization energies, and excited state energies is not yet fully established. Several outstanding questions remain as to the effect of pseudopotentials, the magnitude of the fixed node error, and the size of supercell finite size effects. Here, we consider in detail the semiconductors ZnSe and ZnO and carry out systematic studies to assess the magnitude of the energy differences arising from controlled and uncontrolled approximations in DMC. The former include time step errors and supercell finite size effects for ground and optically excited states, and the latter include pseudopotentials, the pseudopotential localization approximation, and the fixed node approximation. We find that for these compounds, the errors can be controlled to good precision using modern computational resources and that quantum Monte Carlo calculations using Dirac-Fock pseudopotentials can offer good estimates of both cohesive energy and the gap of these systems. We do however observe differences in calculated optical gaps that arise when different pseudopotentials are used.

  18. A new assessment method of pHEMT models by comparing relative errors of drain current and its derivatives up to the third order

    Science.gov (United States)

    Dobeš, Josef; Grábner, Martin; Puričer, Pavel; Vejražka, František; Míchal, Jan; Popp, Jakub

    2017-05-01

    Nowadays, relatively precise pHEMT models are available for computer-aided design, and they are frequently compared to each other. However, such comparisons are mostly based on absolute errors of the drain-current equations and their derivatives. In this paper, a novel method is suggested based on relative root-mean-square errors of both the drain current and its derivatives up to the third order. Moreover, the relative errors are subsequently relativized to the best model in each category to further clarify the obtained accuracies of both the drain current and its derivatives. Furthermore, one of our older and two newly suggested models are also included in the comparison with the traditionally precise Ahmed, TOM-2 and Materka ones. The assessment is performed using measured characteristics of a pHEMT operating up to 110 GHz. Finally, the usability of the proposed models including the higher-order derivatives is illustrated by s-parameter analysis and measurement at several operating points, as well as computation and measurement of IP3 points of a low-noise amplifier of a multi-constellation satellite navigation receiver with an ATF-54143 pHEMT.
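
    A minimal sketch of the figure of merit described above, relative root-mean-square error of the drain current and its numerical derivatives, is given below; the "measured" and modelled characteristics are synthetic placeholders, not the paper's pHEMT data.

```python
import numpy as np

# Illustrative relative RMS error for a drain-current model and its derivatives.
# The "measured" and modelled characteristics are synthetic placeholders,
# not the paper's pHEMT data.

def relative_rmse(measured, modelled):
    """Relative RMS error, normalised by the RMS value of the measured quantity."""
    return np.sqrt(np.mean((modelled - measured) ** 2)) / np.sqrt(np.mean(measured ** 2))

vgs = np.linspace(-1.0, 0.5, 151)                       # gate-voltage sweep (V)
id_meas = 0.08 * np.log1p(np.exp(8.0 * (vgs + 0.40)))   # synthetic "measured" Id (A)
id_model = 0.08 * np.log1p(np.exp(7.6 * (vgs + 0.39)))  # synthetic model Id (A)

meas_d, model_d = id_meas, id_model
for order in range(4):  # Id and its first three derivatives with respect to Vgs
    print(f"derivative order {order}: relative RMSE = {relative_rmse(meas_d, model_d):.2%}")
    meas_d = np.gradient(meas_d, vgs)
    model_d = np.gradient(model_d, vgs)
```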

  19. Image cytometer method for automated assessment of human spermatozoa concentration

    DEFF Research Database (Denmark)

    Egeberg, D L; Kjaerulff, S; Hansen, C

    2013-01-01

    In the basic clinical work-up of infertile couples, a semen analysis is mandatory and the sperm concentration is one of the most essential variables to be determined. Sperm concentration is usually assessed by manual counting using a haemocytometer and is hence labour intensive and may be subjected to investigator bias. Here we show that image cytometry can be used to accurately measure the sperm concentration of human semen samples with great ease and reproducibility. The impact of several factors (pipetting, mixing, round cell content, sperm concentration), which can influence the read-out as well ... and easy measurement of human sperm concentration.

  20. Human error probability quantification method based on Bayesian information fusion

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 谢红卫; 宫二玲

    2011-01-01

    The quantification of human error probability is studied. Firstly, the data resources that can be used in the quantification of human error probability are introduced, including generic data, expert data, simulation data, and field data, and their characteristics are analyzed. Secondly, the basic idea of Bayesian information fusion is analyzed, with emphasis on its two key problems: the construction of prior distributions and the determination of fusion weights. Finally, a method is developed that quantifies the human error probability based on Bayesian information fusion. The first three kinds of data are regarded as prior information and fused to form the prior distribution. The Bayesian method is then used to combine this prior with the field data, yielding the posterior distribution from which the human error probability is quantified. An example analysis demonstrates the process of the method and proves its validity.
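
    The sketch below illustrates one simple way to realize the described idea, fusing several prior information sources about a human error probability and updating with field data; it uses Beta-distribution summaries and a conjugate update, which is an assumption for illustration rather than the paper's exact formulation, and all numbers are hypothetical.

```python
# Illustrative fusion of prior information sources about a human error
# probability (HEP), followed by a Bayesian update with field data. Each source
# is summarised as a Beta(a, b) distribution and the fused prior is a weighted
# average of the parameters; this is an assumption for illustration, not the
# paper's exact formulation, and all numbers are hypothetical.

def fuse_beta_priors(params, weights):
    """Weighted average of Beta(a, b) parameters from several sources."""
    total = sum(weights)
    a = sum(w * p[0] for p, w in zip(params, weights)) / total
    b = sum(w * p[1] for p, w in zip(params, weights)) / total
    return a, b

# generic data, expert judgement, simulation data (hypothetical Beta summaries)
sources = [(1.0, 99.0), (2.0, 198.0), (1.5, 120.0)]
weights = [0.5, 0.3, 0.2]

a0, b0 = fuse_beta_priors(sources, weights)

# field (plant-specific) data: k observed errors in n demands (hypothetical)
k, n = 1, 250
a_post, b_post = a0 + k, b0 + (n - k)

hep_point = a_post / (a_post + b_post)  # posterior mean as a point estimate
print(f"fused prior Beta({a0:.2f}, {b0:.2f}); posterior mean HEP = {hep_point:.4f}")
```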

  1. Impact Analysis of Human Error on Protection System Reliability

    Institute of Scientific and Technical Information of China (English)

    张晶晶; 丁明; 李生虎

    2012-01-01

    In view of the single main protection and the main-plus-backup protection system, a detailed reliability model of the protection system considering the impact of human error is developed for the first time, based on a condition-based maintenance environment. Corresponding reliability indices are defined, and the impact of human error on protection system reliability is analyzed through an example. The analysis results show that human error has a great impact on the reliability of both the single main protection and the main-plus-backup protection system, so human error must be reduced as far as possible during normal operation and maintenance, and both human reliability and protection system reliability must be improved. In a multiple protection system, not only the reliability of the main protection but also that of the backup protection should be increased, with the prevention of maloperation taken as a guiding principle.

  2. Toward a cognitive taxonomy of medical errors.

    OpenAIRE

    Zhang, Jiajie; Patel, Vimla L.; Johnson, Todd R.; Shortliffe, Edward H.

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of e...

  3. Method to control depth error when ablating human dentin with numerically controlled picosecond laser: a preliminary study.

    Science.gov (United States)

    Sun, Yuchun; Yuan, Fusong; Lv, Peijun; Wang, Dangxiao; Wang, Lei; Wang, Yong

    2015-07-01

    A three-axis numerically controlled picosecond laser was used to ablate dentin to investigate the quantitative relationships among the number of additive pulse layers in two-dimensional scans starting from the focal plane, step size along the normal of the focal plane (focal plane normal), and ablation depth error. A method to control the ablation depth error, suitable to control stepping along the focal plane normal, was preliminarily established. Twenty-four freshly removed mandibular first molars were cut transversely along the long axis of the crown and prepared as 48 tooth sample slices with approximately flat surfaces. Forty-two slices were used in the first section. The picosecond laser was 1,064 nm in wavelength, 3 W in power, and 10 kHz in repetition frequency. For a varying number (n = 5-70) of focal plane additive pulse layers (14 groups, three repetitions each), two-dimensional scanning and ablation were performed on the dentin regions of the tooth sample slices, which were fixed on the focal plane. The ablation depth, d, was measured, and the quantitative function between n and d was established. Six slices were used in the second section. The function was used to calculate and set the timing of stepwise increments, and the single-step size along the focal plane normal was d micrometer after ablation of n layers (n = 5-50; 10 groups, six repetitions each). Each sample underwent three-dimensional scanning and ablation to produce 2 × 2-mm square cavities. The difference, e, between the measured cavity depth and theoretical value was calculated, along with the difference, e 1, between the measured average ablation depth of a single-step along the focal plane normal and theoretical value. Values of n and d corresponding to the minimum values of e and e 1, respectively, were obtained. In two-dimensional ablation, d was largest (720.61 μm) when n = 65 and smallest when n = 5 (45.00 μm). Linear regression yielded the quantitative

  4. Multi-frequency bioimpedance in human muscle assessment

    DEFF Research Database (Denmark)

    Bartels, Else Marie; Sørensen, Emma Rudbæk; Harrison, Adrian Paul

    2015-01-01

    Bioimpedance analysis (BIA) is a well-known and tested method for body mass and muscular health assessment. Multi-frequency BIA (mfBIA) equipment now makes it possible to assess a particular muscle as a whole, as well as looking at a muscle at the fiber level. The aim of this study was to test...... healthy human control subjects and three selected cases were examined to demonstrate the extent to which this method may be used clinically, and in relation to training in sport. The electrode setup is shown to affect the mfBIA parameters recorded. Our recommendation is the use of noble metal electrodes...

  5. Humanized mouse model for assessing the human immune response to xenogeneic and allogeneic decellularized biomaterials.

    Science.gov (United States)

    Wang, Raymond M; Johnson, Todd D; He, Jingjin; Rong, Zhili; Wong, Michelle; Nigam, Vishal; Behfar, Atta; Xu, Yang; Christman, Karen L

    2017-06-01

    Current assessment of biomaterial biocompatibility is typically implemented in wild type rodent models. Unfortunately, different characteristics of the immune systems in rodents versus humans limit the capability of these models to mimic the human immune response to naturally derived biomaterials. Here we investigated the utility of humanized mice as an improved model for testing naturally derived biomaterials. Two injectable hydrogels derived from decellularized porcine or human cadaveric myocardium were compared. Three days and one week after subcutaneous injection, the hydrogels were analyzed for early and mid-phase immune responses, respectively. Immune cells in the humanized mouse model, particularly T-helper cells, responded distinctly between the xenogeneic and allogeneic biomaterials. The allogeneic extracellular matrix derived hydrogels elicited significantly reduced total, human specific, and CD4(+) T-helper cell infiltration in humanized mice compared to xenogeneic extracellular matrix hydrogels, which was not recapitulated in wild type mice. T-helper cells, in response to the allogeneic hydrogel material, were also less polarized towards a pro-remodeling Th2 phenotype compared to xenogeneic extracellular matrix hydrogels in humanized mice. In both models, both biomaterials induced the infiltration of macrophages polarized towards a M2 phenotype and T-helper cells polarized towards a Th2 phenotype. In conclusion, these studies showed the importance of testing naturally derived biomaterials in immune competent animals and the potential of utilizing this humanized mouse model for further studying human immune cell responses to biomaterials in an in vivo environment.

  6. Quantification of Human Movement for Assessment in Automated Exercise Coaching

    CERN Document Server

    Hagler, Stuart; Bajczy, Ruzena; Pavel, Misha

    2016-01-01

    Quantification of human movement is a challenge in many areas, ranging from physical therapy to robotics. We quantify human movement for the purpose of providing automated exercise coaching in the home. We developed a model-based assessment and inference process that combines biomechanical constraints with movement assessment based on the Microsoft Kinect camera. To illustrate the approach, we quantify the performance of a simple squatting exercise using two model-based metrics that are related to strength and endurance, and provide an estimate of the strength and energy-expenditure of each exercise session. We look at data for 5 subjects, and show that for some subjects the metrics indicate a trend consistent with improved exercise performance.

  7. Human errors and work performance in a nuclear power plant control room: associations with work-related factors and behavioral coping

    Energy Technology Data Exchange (ETDEWEB)

    Kecklund, Lena Jacobsson; Svenson, Ola

    1997-04-01

    The present study investigated the relationships between the operator's appraisal of his own work situation and the quality of his own work performance as well as self-reported errors in a nuclear power plant control room. In all, 98 control room operators from two nuclear power units filled out a questionnaire and several diaries during two operational conditions, annual outage and normal operation. As expected, the operators reported higher work demands in annual outage as compared to normal operation. In response to the increased demands, the operators reported that they used coping strategies such as increased effort, decreased aspiration level for work performance quality and increased use of delegation of tasks to others. This way of coping does not reflect less positive motivation for the work during the outage period. Instead, the operators maintain the same positive motivation for their work, and succeed in being more alert during morning and night shifts. However, the operators feel less satisfied with their work result. The operators also perceive the risk of making minor errors as increasing during outage. The decreased level of satisfaction with the work result during outage is a fact despite the lowering of the aspiration level for work performance quality during outage. In order to decrease the relative frequencies of minor errors, special attention should be given to reducing work demands, such as time pressure and memory demands. In order to decrease misinterpretation errors, special attention should be given to organizational factors such as planning and shift turnovers, in addition to training. In summary, the outage period seems to be a significantly more vulnerable window in the management of a nuclear power plant than the normal power production state. Thus, an increased focus on the outage period and human factors issues, addressing the synergetic effects of work demands, organizational factors and coping resources, is an important area for improvement

  8. The assessment of virtual reality for human anatomy instruction

    Science.gov (United States)

    Benn, Karen P.

    1994-01-01

    This research project seeks to meet the objective of science training by developing, assessing, and validating virtual reality as a human anatomy training medium. In ideal situations, anatomic models, computer-based instruction, and cadaver dissection are utilized to augment the traditional methods of instruction. At many institutions, lack of financial resources limits anatomy instruction to textbooks and lectures. However, human anatomy is three dimensional, unlike the one dimensional depiction found in textbooks and the two dimensional depiction found on the computer. Virtual reality is a breakthrough technology that allows one to step through the computer screen into a three dimensional world. This technology offers many opportunities to enhance science education. Therefore, a virtual testing environment of the abdominopelvic region of a human cadaver was created to study the placement of body parts within the nine anatomical divisions of the abdominopelvic region and the four abdominal quadrants.

  9. Error Assessment of Solar Irradiance Forecasts and AC Power from Energy Conversion Model in Grid-Connected Photovoltaic Systems

    Directory of Open Access Journals (Sweden)

    Gianfranco Chicco

    2015-12-01

    Full Text Available Availability of effective estimation of the power profiles of photovoltaic systems is essential for studying how to increase the share of intermittent renewable sources in the electricity mix of many countries. For this purpose, weather forecasts, together with historical data of the meteorological quantities, provide fundamental information. The weak point of the forecasts lies in variable sky conditions, when the clouds successively cover and uncover the solar disc. This causes remarkable positive and negative variations in the irradiance pattern measured at the photovoltaic (PV) site location. This paper starts from 1- to 3-day-ahead solar irradiance forecasts available during one year, with a few points for each day. These forecasts are interpolated to obtain more irradiance estimations per day. The estimated irradiance data are used to classify the sky conditions into clear, variable or cloudy. The results are compared with the outcomes of the same classification carried out with the irradiance measured in meteorological stations at two real PV sites. The occurrence of irradiance spikes in "broken cloud" conditions is identified and discussed. From the measured irradiance, the Alternating Current (AC) power injected into the grid at two PV sites is estimated by using a PV energy conversion model. The AC power errors resulting from the PV model with respect to on-site AC power measurements are shown and discussed.
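
    As a small illustration of comparing model-estimated and measured AC power, the sketch below computes normalized MAE and RMSE; the power values and the normalization by rated power are hypothetical assumptions, and the paper's own error definitions may differ.

```python
import numpy as np

# Illustrative error metrics between AC power estimated from irradiance via a
# PV conversion model and on-site measurements. The arrays and the choice of
# normalising by rated power are hypothetical; the paper's definitions may differ.

def nmae(estimated, measured, norm):
    return np.mean(np.abs(estimated - measured)) / norm

def nrmse(estimated, measured, norm):
    return np.sqrt(np.mean((estimated - measured) ** 2)) / norm

rated_power_kw = 20.0  # hypothetical plant rating used for normalisation
measured = np.array([0.0, 3.1, 8.4, 14.2, 16.8, 12.5, 5.9, 0.0])   # kW, hypothetical
estimated = np.array([0.0, 2.8, 9.1, 13.5, 17.4, 11.8, 6.3, 0.0])  # kW, hypothetical

print(f"NMAE  = {nmae(estimated, measured, rated_power_kw):.2%}")
print(f"NRMSE = {nrmse(estimated, measured, rated_power_kw):.2%}")
```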

  10. Low aerial imagery – an assessment of georeferencing errors and the potential for use in environmental inventory

    Directory of Open Access Journals (Sweden)

    Smaczyński Maciej

    2017-06-01

    Full Text Available Unmanned aerial vehicles are increasingly being used in close range photogrammetry. Real-time observation of the Earth's surface and the photogrammetric images obtained are used as material for surveying and environmental inventory. The following study was conducted on a small area (approximately 1 ha). In such cases, the classical method of topographic mapping is not accurate enough, while the geodetic method of topographic surveying is an overly precise measurement technique for the purpose of inventorying natural environment components. The author of the following study has proposed using unmanned aerial vehicle technology and tying the obtained images to a control point network established with the aid of GNSS technology. Georeferencing the acquired images and using them to create a photogrammetric model of the studied area enabled the researcher to perform calculations which yielded a total root mean square error below 9 cm. Comparing the real lengths of the vectors connecting the control points with their lengths calculated on the basis of the photogrammetric model made it possible to confirm the calculated RMSE and prove the usefulness of the UAV technology in observing terrain components for the purpose of environmental inventory. Such environmental components include, among others, elements of road infrastructure and green areas, but also changes in the location of moving pedestrians and vehicles, as well as other changes in the natural environment that are not registered on classical base maps or topographic maps.
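
    The check described above, comparing surveyed vector lengths between control points with lengths taken from the photogrammetric model, can be summarized by an RMSE as in the hedged sketch below; the distances are hypothetical placeholders.

```python
import math

# Illustrative RMSE check between surveyed ("real") vector lengths connecting
# control points and the same lengths measured on the photogrammetric model.
# The distances below are hypothetical placeholders.

def rmse(pairs):
    return math.sqrt(sum((a - b) ** 2 for a, b in pairs) / len(pairs))

check_vectors = [  # (surveyed length, model-derived length) in metres
    (25.314, 25.245),
    (31.902, 31.988),
    (18.447, 18.391),
    (42.110, 42.173),
]

print(f"RMSE of vector lengths = {rmse(check_vectors) * 100:.1f} cm")  # about 7 cm here
```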

  11. Human error probability quantification using fuzzy methodology in nuclear plants; Aplicacao da metodologia fuzzy na quantificacao da probabilidade de erro humano em instalacoes nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, Claudio Souza do

    2010-07-01

    This work obtains Human Error Probability (HEP) estimates for operator actions in response to postulated emergency situations at the IEA-R1 research reactor at IPEN. A Performance Shaping Factors (PSF) evaluation was also carried out in order to classify the factors according to their level of influence on the operators' actions and to determine the actual state of these PSFs in the plant. Both the HEP estimation and the PSF evaluation were based on specialist evaluation using interviews and questionnaires, with the specialist group composed of selected IEA-R1 operators. The specialists' knowledge was represented through linguistic variables, and group evaluation values were obtained using Fuzzy Logic and Fuzzy Set Theory. The HEP values obtained show good agreement with data published in the literature, corroborating the proposed methodology as a good alternative for Human Reliability Analysis (HRA). (author)
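
    A minimal sketch of a fuzzy-set style HEP estimate is shown below, assuming triangular membership functions on a log10(HEP) scale and centroid defuzzification; the linguistic labels, supports and judgements are hypothetical and do not reproduce the thesis' actual method.

```python
# Illustrative fuzzy-set style HEP estimate: each linguistic label maps to a
# triangular fuzzy number on a log10(HEP) scale, the experts' numbers are
# averaged, and the result is defuzzified by the centroid of the triangle.
# Labels, supports and judgements are hypothetical.

LABELS = {  # (low, mode, high) on a log10(HEP) scale
    "very unlikely": (-5.0, -4.0, -3.0),
    "unlikely": (-4.0, -3.0, -2.0),
    "likely": (-3.0, -2.0, -1.0),
}

def average_tfn(tfns):
    n = len(tfns)
    return tuple(sum(t[i] for t in tfns) / n for i in range(3))

def centroid(tfn):
    return sum(tfn) / 3.0  # centroid of a triangular fuzzy number

expert_judgements = ["unlikely", "unlikely", "very unlikely", "likely"]
aggregated = average_tfn([LABELS[j] for j in expert_judgements])
hep = 10 ** centroid(aggregated)
print(f"aggregated TFN on log10 scale: {aggregated}, point HEP = {hep:.2e}")
```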

  12. Advancing human health risk assessment: integrating recent advisory committee recommendations.

    Science.gov (United States)

    Dourson, Michael; Becker, Richard A; Haber, Lynne T; Pottenger, Lynn H; Bredfeldt, Tiffany; Fenner-Crisp, Penelope A

    2013-07-01

    Over the last dozen years, many national and international expert groups have considered specific improvements to risk assessment. Many of their stated recommendations are mutually supportive, but others appear conflicting, at least in an initial assessment. This review identifies areas of consensus and difference and recommends a practical, biology-centric course forward, which includes: (1) incorporating a clear problem formulation at the outset of the assessment with a level of complexity that is appropriate for informing the relevant risk management decision; (2) using toxicokinetics and toxicodynamic information to develop Chemical Specific Adjustment Factors (CSAF); (3) using mode of action (MOA) information and an understanding of the relevant biology as the key, central organizing principle for the risk assessment; (4) integrating MOA information into dose-response assessments using existing guidelines for non-cancer and cancer assessments; (5) using a tiered, iterative approach developed by the World Health Organization/International Programme on Chemical Safety (WHO/IPCS) as a scientifically robust, fit-for-purpose approach for risk assessment of combined exposures (chemical mixtures); and (6) applying all of this knowledge to enable interpretation of human biomonitoring data in a risk context. While scientifically based defaults will remain important and useful when data on CSAF or MOA to refine an assessment are absent or insufficient, assessments should always strive to use these data. The use of available 21st century knowledge of biological processes, clinical findings, chemical interactions, and dose-response at the molecular, cellular, organ and organism levels will minimize the need for extrapolation and reliance on default approaches.

  13. Three-dimensional surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, Bugao; Yu, Wurong; Yao, Ming; Pepper, M. Reese; Freeland-Graves, Jeanne H.

    2009-10-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable, and economical tool for assessment of this condition. Three-dimensional (3-D) body surface imaging has emerged as an exciting technology for the estimation of body composition. We present a new 3-D body imaging system, which is designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology is used to satisfy the requirement for a simple hardware setup and fast image acquisition. The portability of the system is created via a two-stand configuration, and the accuracy of body volume measurements is improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3-D body imaging. Body measurement functions dedicated to body composition assessment also are developed. The overall performance of the system is evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  14. A 3D surface imaging system for assessing human obesity

    Science.gov (United States)

    Xu, B.; Yu, W.; Yao, M.; Yao, X.; Li, Q.; Pepper, M. R.; Freeland-Graves, J. H.

    2009-08-01

    The increasing prevalence of obesity suggests a need to develop a convenient, reliable and economical tool for assessment of this condition. Three-dimensional (3D) body surface imaging has emerged as an exciting technology for estimation of body composition. This paper presents a new 3D body imaging system, which was designed for enhanced portability, affordability, and functionality. In this system, stereo vision technology was used to satisfy the requirements for a simple hardware setup and fast image acquisitions. The portability of the system was created via a two-stand configuration, and the accuracy of body volume measurements was improved by customizing stereo matching and surface reconstruction algorithms that target specific problems in 3D body imaging. Body measurement functions dedicated to body composition assessment also were developed. The overall performance of the system was evaluated in human subjects by comparison to other conventional anthropometric methods, as well as air displacement plethysmography, for body fat assessment.

  15. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    In this review article, medication errors are discussed: their definition, the extent of the problem, the types of medication errors, their common causes and consequences, and approaches to monitoring, preventing and managing them are explained clearly, with tables that are easy to understand.

  17. Building a World-Class Safety Culture: The National Ignition Facility and the Control of Human and Organizational Error

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C T; Stalnaker, G

    2002-12-06

    Accidents in complex systems send us signals. They may be harbingers of a catastrophe. Some even argue that a "normal" consequence of operations in a complex organization may not only be the goods it produces, but also accidents and, inevitably, catastrophes. We would like to tell you the story of a large, complex organization, whose history questions the argument "that accidents just happen." Starting from a less than enviable safety record, the National Ignition Facility (NIF) has accumulated over 2.5 million safe hours. The story of NIF is still unfolding. The facility is still being constructed and commissioned. But the steps NIF has taken in achieving its safety record provide a principled blueprint that may be of value to others. Describing that principled blueprint is the purpose of this paper. The first part of this paper is a case study of NIF and its effort to achieve a world-class safety record. This case study will include a description of (1) NIF's complex systems, (2) NIF's early safety history, (3) factors that may have initiated its safety culture change, and (4) the evolution of its safety blueprint. In the last part of the paper, we will compare NIF's safety culture to what safety industry experts, psychologists, and sociologists say about how to shape a culture and control organizational error.

  18. Assessing Spatial and Attribute Errors of Input Data in Large National Datasets for use in Population Distribution Models

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, Lauren A [ORNL; Urban, Marie L [ORNL; Myers, Aaron T [ORNL; Bhaduri, Budhendra L [ORNL; Bright, Eddie A [ORNL; Coleman, Phil R [ORNL

    2007-01-01

    Geospatial technologies and digital data have developed and disseminated rapidly in conjunction with increasing computing performance and internet availability. The ability to store and transmit large datasets has encouraged the development of national datasets in geospatial format. National datasets are used by numerous agencies for analysis and modeling purposes because these datasets are standardized and are considered to be of acceptable accuracy. At Oak Ridge National Laboratory, a national population model incorporating multiple ancillary variables was developed, and one of the required inputs is a school database. This paper examines inaccuracies present within two national school datasets, TeleAtlas North America (TANA) and National Center of Education Statistics (NCES). Schools are an important component of the population model because they serve as locations containing dense clusters of vulnerable populations. It is therefore essential to validate the quality of the school input data, which was made possible by increasing national coverage of high resolution imagery. Schools were also chosen because a 'real-world' representation of K-12 schools for the Philadelphia School District was produced, thereby enabling 'ground-truthing' of the national datasets. Analyses found the national datasets not standardized and incomplete, containing 76 to 90% of existing schools. Enrollment values also showed poor temporal accuracy, with 89% failing to match the 2003 data. Spatial rectification was required for 87% of the NCES points, of which 58% of the errors were attributed to the geocoding process. Lastly, it was found that combining the two national datasets produced a more useful and accurate dataset.

  19. The Value of Mainstreaming Human Rights into Health Impact Assessment

    Science.gov (United States)

    MacNaughton, Gillian; Forman, Lisa

    2014-01-01

    Health impact assessment (HIA) is increasingly being used to predict the health and social impacts of domestic and global laws, policies and programs. In a comprehensive review of HIA practice in 2012, the authors indicated that, given the diverse range of HIA practice, there is an immediate need to reconsider the governing values and standards for HIA implementation [1]. This article responds to this call for governing values and standards for HIA. It proposes that international human rights standards be integrated into HIA to provide a universal value system backed up by international and domestic laws and mechanisms of accountability. The idea of mainstreaming human rights into HIA is illustrated with the example of impact assessments that have been carried out to predict the potential effects of intellectual property rights in international trade agreements on the availability and affordability of medicines. The article concludes by recommending international human rights standards as a legal and ethical framework for HIA that will enhance the universal values of nondiscrimination, participation, transparency and accountability and bring legitimacy and coherence to HIA practice as well. PMID:25264683

  20. Addressing Human Variability in Next-Generation Human Health Risk Assessments of Environmental Chemicals

    Science.gov (United States)

    Bois, Frederic Y.; Chiu, Weihsueh A.; Hattis, Dale; Rusyn, Ivan; Guyton, Kathryn Z.

    2012-01-01

    Background: Characterizing variability in the extent and nature of responses to environmental exposures is a critical aspect of human health risk assessment. Objective: Our goal was to explore how next-generation human health risk assessments may better characterize variability in the context of the conceptual framework for the source-to-outcome continuum. Methods: This review was informed by a National Research Council workshop titled “Biological Factors that Underlie Individual Susceptibility to Environmental Stressors and Their Implications for Decision-Making.” We considered current experimental and in silico approaches, and emerging data streams (such as genetically defined human cells lines, genetically diverse rodent models, human omic profiling, and genome-wide association studies) that are providing new types of information and models relevant for assessing interindividual variability for application to human health risk assessments of environmental chemicals. Discussion: One challenge for characterizing variability is the wide range of sources of inherent biological variability (e.g., genetic and epigenetic variants) among individuals. A second challenge is that each particular pair of health outcomes and chemical exposures involves combinations of these sources, which may be further compounded by extrinsic factors (e.g., diet, psychosocial stressors, other exogenous chemical exposures). A third challenge is that different decision contexts present distinct needs regarding the identification—and extent of characterization—of interindividual variability in the human population. Conclusions: Despite these inherent challenges, opportunities exist to incorporate evidence from emerging data streams for addressing interindividual variability in a range of decision-making contexts. PMID:23086705

  1. Simulation testing the robustness of stock assessment models to error: some results from the ICES strategic initiative on stock assessment methods

    DEFF Research Database (Denmark)

    Deroba, J. J.; Butterworth, D. S.; Methot, R. D.

    2015-01-01

    The World Conference on Stock Assessment Methods (July 2013) included a workshop on testing assessment methods through simulations. The exercise was made up of two steps applied to datasets from 14 representative fish stocks from around the world. Step 1 involved applying stock assessments......-testing and cross-testing of models are a useful diagnostic approach, and suggested that estimates in the most recent years of time-series were the least robust. Results from the simulation exercise provide a basis for guidance on future large-scale simulation experiments and demonstrate the need for strategic...... investments in the evaluation and development of stock assessment methods...

  2. A 21st century roadmap for human health risk assessment.

    Science.gov (United States)

    Pastoor, Timothy P; Bachman, Ammie N; Bell, David R; Cohen, Samuel M; Dellarco, Michael; Dewhurst, Ian C; Doe, John E; Doerrer, Nancy G; Embry, Michelle R; Hines, Ronald N; Moretto, Angelo; Phillips, Richard D; Rowlands, J Craig; Tanir, Jennifer Y; Wolf, Douglas C; Boobis, Alan R

    2014-08-01

    The Health and Environmental Sciences Institute (HESI)-coordinated Risk Assessment in the 21st Century (RISK21) project was initiated to develop a scientific, transparent, and efficient approach to the evolving world of human health risk assessment, and involved over 120 participants from 12 countries, 15 government institutions, 20 universities, 2 non-governmental organizations, and 12 corporations. This paper provides a brief overview of the tiered RISK21 framework called the roadmap and risk visualization matrix, and articulates the core principles derived by RISK21 participants that guided its development. Subsequent papers describe the roadmap and matrix in greater detail. RISK21 principles include focusing on problem formulation, utilizing existing information, starting with exposure assessment (rather than toxicity), and using a tiered process for data development. Bringing estimates of exposure and toxicity together on a two-dimensional matrix provides a clear rendition of human safety and risk. The value of the roadmap is its capacity to chronicle the stepwise acquisition of scientific information and display it in a clear and concise fashion. Furthermore, the tiered approach and transparent display of information will contribute to greater efficiencies by calling for data only as needed (enough precision to make a decision), thus conserving animals and other resources.

  3. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that produce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  4. Error analysis and assessment of unsteady forces acting on a flapping wing micro air vehicle: free flight versus wind-tunnel experimental methods.

    Science.gov (United States)

    Caetano, J V; Percin, M; van Oudheusden, B W; Remes, B; de Wagter, C; de Croon, G C H E; de Visser, C C

    2015-08-20

    An accurate knowledge of the unsteady aerodynamic forces acting on a bio-inspired, flapping-wing micro air vehicle (FWMAV) is crucial in the design development and optimization cycle. Two different types of experimental approaches are often used: determination of forces from position data obtained from external optical tracking during free flight, or direct measurements of forces by attaching the FWMAV to a force transducer in a wind-tunnel. This study compares the quality of the forces obtained from both methods as applied to a 17.4 gram FWMAV capable of controlled flight. A comprehensive analysis of various error sources is performed. The effects of different factors, e.g., measurement errors, error propagation, numerical differentiation, filtering frequency selection, and structural eigenmode interference, are assessed. For the forces obtained from free flight experiments it is shown that a data acquisition frequency below 200 Hz and an accuracy in the position measurements lower than ± 0.2 mm may considerably hinder determination of the unsteady forces. In general, the force component parallel to the fuselage determined by the two methods compares well for identical flight conditions; however, a significant difference was observed for the forces along the stroke plane of the wings. This was found to originate from the restrictions applied by the clamp to the dynamic oscillations observed in free flight and from the structural resonance of the clamped FWMAV structure, which generates loads that cannot be distinguished from the external forces. Furthermore, the clamping position was found to have a pronounced influence on the eigenmodes of the structure, and this effect should be taken into account for accurate force measurements.
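
    The force reconstruction from free-flight tracking described above boils down to low-pass filtering the position record, double-differentiating it, and applying Newton's second law. Below is a minimal illustrative sketch of that chain, not the authors' code; the sampling rate and cutoff frequency are assumed example values, and the 17.4 g mass is taken from the abstract. It also makes concrete why acquisition frequency and position accuracy matter: noise in the positions is amplified twice by differentiation.

        # Hedged sketch: net aerodynamic force on a free-flying FWMAV from tracked
        # center-of-mass positions via zero-phase low-pass filtering and double
        # numerical differentiation. Parameter values are illustrative assumptions.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def forces_from_positions(positions, fs=200.0, cutoff=20.0, mass=0.0174):
            """positions: (N, 3) array of tracked positions [m] sampled at fs [Hz]."""
            b, a = butter(4, cutoff / (fs / 2.0), btype="low")
            filtered = filtfilt(b, a, positions, axis=0)       # suppress tracking noise
            vel = np.gradient(filtered, 1.0 / fs, axis=0)      # velocity
            acc = np.gradient(vel, 1.0 / fs, axis=0)           # acceleration
            gravity = np.array([0.0, 0.0, -9.81])              # z up, in m/s^2
            return mass * (acc - gravity)                      # aerodynamic force [N]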

  5. Principles for ethical research involving humans: ethical professional practice in impact assessment Part I

    National Research Council Canada - National Science Library

    Vanclay, Frank; Baines, James T; Taylor, C. Nicholas

    2013-01-01

    ... methods textbooks, this paper identifies current principles for ethical research involving humans and discusses their implications for impact assessment practice generally and social impact assessment specifically...

  6. Error monitoring in musicians

    Directory of Open Access Journals (Sweden)

    Clemens eMaidhof

    2013-07-01

    Full Text Available To err is human, and hence even professional musicians make errors occasionally during their performances. This paper summarizes recent work investigating error monitoring in musicians, i.e. the processes and their neural correlates associated with the monitoring of ongoing actions and the detection of deviations from intended sounds. EEG studies reported an early component of the event-related potential (ERP) occurring before the onsets of pitch errors. This component, which can be altered in musicians with focal dystonia, likely reflects processes of error detection and/or error compensation, i.e. attempts to cancel the undesired sensory consequence (a wrong tone) that the musician is about to perceive. Thus, auditory feedback seems not to be a prerequisite for error detection, consistent with previous behavioral results. In contrast, when auditory feedback is externally manipulated and thus unexpected, motor performance can be severely distorted, although not all feedback alterations result in performance impairments. Recent studies investigating the neural correlates of feedback processing showed that unexpected feedback elicits an ERP component after note onsets, which shows larger amplitudes during music performance than during mere perception of the same musical sequences. Hence, these results stress the role of motor actions in the processing of auditory information. Furthermore, recent methodological advances like the combination of 3D motion capture techniques with EEG will be discussed. Such combinations of different measures can potentially help to disentangle the roles of different feedback types, such as proprioceptive and auditory feedback, and in general to arrive at a better understanding of the complex interactions between the motor and auditory domains during error monitoring. Finally, outstanding questions and future directions in this context will be discussed.

  7. Recognition and discrimination of tissue-marking dye color by surgical pathologists: recommendations to avoid errors in margin assessment.

    Science.gov (United States)

    Williams, Andrew S; Hache, Kelly Dakin

    2014-09-01

    A variety of tissue-marking dye (TMD) colors can be used to indicate surgical pathology specimen margins; however, the ability of pathologists to differentiate between specific microscopic margin colors has not been assessed systematically. This study aimed to evaluate pathologists' accuracy in identifying TMD color and determine the least ambiguous combinations of colors for use in surgical pathology. Seven colors of TMD were obtained from three manufacturers and applied to excess formalin-fixed uterine tissue. Study blocks contained multiple tissue pieces, each marked with a different color from the same manufacturer. Slides were assessed by eight participants for color and color distinctness of each piece of tissue. Black, green, red, and blue TMDs were accurately identified by most participants, but participants had difficulty identifying violet, orange, and yellow TMDs. Black, green, and blue TMDs were most commonly rated as "confidently discernable." Pathologists have difficulty identifying and distinguishing certain colors of TMDs. The combined use of certain colors of TMDs (yellow/orange/red, blue/violet, and red/violet) within the same specimen should be avoided to decrease the risk of inaccurately reporting specimen margins. Copyright© by the American Society for Clinical Pathology.

  8. A limited assessment of the ASEP human reliability analysis procedure using simulator examination results

    Energy Technology Data Exchange (ETDEWEB)

    Gore, B.R.; Dukelow, J.S. Jr.; Mitts, T.M.; Nicholson, W.L. [Pacific Northwest Lab., Richland, WA (United States)

    1995-10-01

    This report presents a limited assessment of the conservatism of the Accident Sequence Evaluation Program (ASEP) human reliability analysis (HRA) procedure described in NUREG/CR-4772. In particular, the ASEP post-accident, post-diagnosis, nominal HRA procedure is assessed within the context of an individual's performance of critical tasks on the simulator portion of requalification examinations administered to nuclear power plant operators. An assessment of the degree to which operator performance during simulator examinations is an accurate reflection of operator performance during actual accident conditions was outside the scope of work for this project; therefore, no direct inference can be made from this report about such performance. The data for this study are derived from simulator examination reports from the NRC requalification examination cycle. A total of 4071 critical tasks were identified, of which 45 had been failed. The ASEP procedure was used to estimate human error probability (HEP) values for critical tasks, and the HEP results were compared with the failure rates observed in the examinations. The ASEP procedure was applied by PNL operator license examiners who supplemented the limited information in the examination reports with expert judgment based upon their extensive simulator examination experience. ASEP analyses were performed for a sample of 162 critical tasks selected randomly from the 4071, and the results were used to characterize the entire population. ASEP analyses were also performed for all of the 45 failed critical tasks. Two tests were performed to assess the bias of the ASEP HEPs compared with the data from the requalification examinations. The first compared the average of the ASEP HEP values with the fraction of the population that actually failed, and it found a statistically significant factor-of-two bias on average.
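
    The kind of bias check described in the report can be illustrated with a short, entirely hypothetical sketch: contrast the mean ASEP-estimated HEP over a sample of critical tasks with the failure fraction actually observed, and test their consistency with a binomial test. Only the task counts (4071 tasks, 45 failures, a sample of 162) come from the abstract; the HEP values below are randomly generated placeholders, not the report's data.

        # Hypothetical sketch of an ASEP-HEP vs. observed-failure-rate comparison.
        import numpy as np
        from scipy import stats

        n_tasks, n_failed = 4071, 45                    # critical tasks and observed failures
        observed_rate = n_failed / n_tasks

        # Placeholder stand-ins for the 162 task-specific ASEP HEP estimates.
        asep_heps = np.random.default_rng(0).lognormal(mean=-4.0, sigma=1.0, size=162)
        mean_hep = asep_heps.mean()

        bias_factor = mean_hep / observed_rate          # >1 means ASEP is conservative
        # Are the observed failures consistent with the mean ASEP HEP?
        p_value = stats.binomtest(n_failed, n_tasks, mean_hep).pvalue
        print(f"observed={observed_rate:.4f}  ASEP mean={mean_hep:.4f}  "
              f"bias factor={bias_factor:.1f}  p={p_value:.3g}")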

  9. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts both in the pharmacy and in the wards. Results: Uncorrected unidosis carts show 0.9% medication errors (264) versus 0.6% (154) in unidosis carts previously revised. In carts not revised, 70.83% of the errors are mainly caused when setting up the unidosis carts. The rest are due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient did not take the medication (14.36%) or was discharged without medication (12.77%), the medication was not provided by nurses (14.09%), was withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: These findings point to the need to revise unidosis carts and to implement a computerized prescription system to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are checked before being sent to hospitalization units, the error rate diminishes to 0.3%.

  10. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est”, a well known and widespread Latin proverb, states that to err is human and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they make them, improve and move on. The significance of studying errors is described by Corder as: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the importance and the aim of this paper is to analyze errors in the process of second language acquisition and the way we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  11. Indicators for human toxicity in Life Cycle Impact Assessment

    DEFF Research Database (Denmark)

    Krewitt, Wolfram; Pennington, David W.; Olsen, Stig Irving

    2002-01-01

    The main objectives of this task group under SETAC-Europe’s Second Working Group on Life Cycle Impact Assessment (LCIA-WIA2) were to identify and discuss the suitability of toxicological impact measures for human health for use in characterization in LCIA. The current state of the art of defining...... such as No Observed Effect Levels (NOEL). NOELs, and similar data, are determined in laboratory studies using rodents and are then extrapolated to more relevant human measures. Many examples also exist of measures and methods beyond potency-based indicators that attempt to account for differences in expected severity......, as well as potency. Quantitative severity-based indicators yield measures in terms of Years of Life Lost (YOLL), Disability Adjusted Life Years (DALY), Quality Adjusted Life Years (QALY) and other similar measures. DALYs and QALYs are examples of approaches that attempt to account for both years of life...

  12. Assessing neuromuscular mechanisms in human-exoskeleton interaction.

    Science.gov (United States)

    Sylla, N; Bonnet, V; Venture, G; Armande, N; Fraisse, P

    2014-01-01

    In this study, we propose to evaluate a 7 DOF exoskeleton in terms of motion control. Using criteria from the human motor control literature, inverse optimization was performed to assess an industrial screwing movement. The results of our study show that the hybrid composition of the free arm movement was accurately determined. In contrast, when wearing the exoskeleton, which produces an arbitrarily determined torque compensation, the motion differs from the naturally adopted one. This study is part of the evaluation and comprehension of the complex neuromuscular mechanisms resulting from wearing an exoskeleton several hours per day for industrial task assistance.

  13. Using skin to assess iron accumulation in human metabolic disorders

    Energy Technology Data Exchange (ETDEWEB)

    Guinote, I. [Laboratorio de Feixes de Ioes, Instituto Tecnologico e Nuclear, E.N. 10, 2685-953 Sacavem (Portugal); Fleming, R. [Imunohaemotherapy Department, Hospital de St. Maria, Lisbon (Portugal); Silva, R. [Dermatology Department, Hospital de St. Maria, Lisbon (Portugal); Filipe, P. [Dermatology Department, Hospital de St. Maria, Lisbon (Portugal); Silva, J.N. [Dermatology Department, Hospital de St. Maria, Lisbon (Portugal); Verissimo, A. [Laboratorio de Feixes de Ioes, Instituto Tecnologico e Nuclear, E.N. 10, 2685-953 Sacavem (Portugal); Napoleao, P. [Laboratorio de Feixes de Ioes, Instituto Tecnologico e Nuclear, E.N. 10, 2685-953 Sacavem (Portugal); Centro de Fisica Nuclear, Universidade de Lisbon (Portugal); Alves, L.C. [Laboratorio de Feixes de Ioes, Instituto Tecnologico e Nuclear, E.N. 10, 2685-953 Sacavem (Portugal); Centro de Fisica Nuclear, Universidade de Lisbon (Portugal); Pinheiro, T. [Laboratorio de Feixes de Ioes, Instituto Tecnologico e Nuclear, E.N. 10, 2685-953 Sacavem (Portugal) and Centro de Fisica Nuclear, Universidade de Lisbon (Portugal)]. E-mail: murmur@itn.pt

    2006-08-15

    The distribution of Fe in skin was assessed to monitor body Fe status in human hereditary hemochromatosis. The paper reports on data from nine patients with hemochromatosis that were studied along the therapeutic programme. Systemic evaluation of Fe metabolism was carried out by measuring with PIXE technique the Fe concentration in plasma and blood cells, and by determining with biochemical methods the indicators of Fe transport in serum (ferritin and transferrin). The Fe distribution and concentration in skin was assessed by nuclear microscopy and Fe deposits in liver estimated through nuclear magnetic resonance. Elevated Fe concentrations in skin were related to increased plasma Fe (p < 0.004), serum ferritin content (p < 0.01) and Fe deposits in liver (p < 0.004). The relationship of Fe deposits in organs and metabolism markers may help to better understand Fe pools mobilisation and to establish the quality of skin as a marker for the disease progression and therapy efficacy.

  14. Using skin to assess iron accumulation in human metabolic disorders

    Science.gov (United States)

    Guinote, I.; Fleming, R.; Silva, R.; Filipe, P.; Silva, J. N.; Veríssimo, A.; Napoleão, P.; Alves, L. C.; Pinheiro, T.

    2006-08-01

    The distribution of Fe in skin was assessed to monitor body Fe status in human hereditary hemochromatosis. The paper reports on data from nine patients with hemochromatosis that were studied along the therapeutic programme. Systemic evaluation of Fe metabolism was carried out by measuring with PIXE technique the Fe concentration in plasma and blood cells, and by determining with biochemical methods the indicators of Fe transport in serum (ferritin and transferrin). The Fe distribution and concentration in skin was assessed by nuclear microscopy and Fe deposits in liver estimated through nuclear magnetic resonance. Elevated Fe concentrations in skin were related to increased plasma Fe (p < 0.004), serum ferritin content (p < 0.01) and Fe deposits in liver (p < 0.004). The relationship of Fe deposits in organs and metabolism markers may help to better understand Fe pools mobilisation and to establish the quality of skin as a marker for the disease progression and therapy efficacy.

  15. Assessing the impact of climate variability and human activity to streamflow variation

    Directory of Open Access Journals (Sweden)

    J. Chang

    2015-06-01

    Full Text Available Water resources in river systems have been changing under the impacts of both climate variability and human activities. Assessing the respective impacts on decadal streamflow variation is important for water resources management. By using an elasticity-based method and calibrated TOPMODEL and VIC hydrologic models, we have quantitatively isolated the relative contributions that human activity and climate variability made to decadal streamflow changes in the Jinhe basin, located in the northwest of China. This is an important watershed of Shaanxi Province that supplies drinking water for a population of over 6 million. The results from the three methods show that both human activity and climatic differences can have major effects on catchment streamflow, and the estimates of climate variability impacts from the hydrological models are similar to those from the elasticity-based method. Compared with the baseline period of 1960–1970, streamflow greatly decreased during 2001–2010. The impacts of human activity and climate variability in 2001–2010 accounted for about 83.5% and 16.5% of the total reduction, respectively, when averaged over the three methods. The maximum contribution of human activity appeared in 1981–1990, due to the effects of soil and water conservation measures and irrigation water withdrawal, and was 95%, 112.5% and 92.4% according to TOPMODEL, the VIC model and the elasticity-based method, respectively. The maximum value of the aridity index (E0/P), 1.91, appeared in 1991–2000. Compared with the 1960–1970 baseline period, climate variability made its greatest contribution to the streamflow reduction in 1991–2000: 47.4%, 43.9% and 29.9% according to TOPMODEL, the VIC model and the elasticity-based method, respectively. We emphasize the various sources of errors and uncertainties that may occur in the hydrological models (parameter and structural uncertainty) and the elasticity-based method (model parameters) in climate change impact studies.
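
    The elasticity-based attribution used above can be summarized by a first-order relation: the climate-induced change in streamflow is estimated from the relative changes in precipitation (P) and potential evapotranspiration (E0) weighted by their elasticities, and the remainder of the observed change is assigned to human activity. The sketch below uses made-up values and assumed elasticities, not the paper's data.

        # Illustrative elasticity-based attribution of streamflow change.
        def attribute_streamflow_change(Q_base, Q_period, P_base, P_period,
                                        E0_base, E0_period, eps_P, eps_E0):
            dQ_total = Q_period - Q_base
            # First-order elasticity relation: dQ_clim/Q = eps_P*dP/P + eps_E0*dE0/E0
            dQ_clim = Q_base * (eps_P * (P_period - P_base) / P_base
                                + eps_E0 * (E0_period - E0_base) / E0_base)
            dQ_human = dQ_total - dQ_clim
            return {"climate_%": 100 * dQ_clim / dQ_total,
                    "human_%": 100 * dQ_human / dQ_total}

        # Example with hypothetical decadal means (mm/yr) and elasticities.
        print(attribute_streamflow_change(Q_base=120, Q_period=80,
                                          P_base=550, P_period=520,
                                          E0_base=950, E0_period=1000,
                                          eps_P=2.0, eps_E0=-1.0))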

  16. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  17. Human Capital Questionnaire: Assessment of European nurses' perceptions as indicators of human capital quality.

    Science.gov (United States)

    Yepes-Baldó, Montserrat; Romeo, Marina; Berger, Rita

    2013-06-01

    Healthcare accreditation models generally include indicators related to healthcare employees' perceptions (e.g. satisfaction, career development, and health safety). During the accreditation process, organizations are asked to demonstrate the methods with which assessments are made. However, none of the models provide standardized systems for the assessment of employees. In this study, we analyzed the psychometric properties of an instrument for the assessment of nurses' perceptions as indicators of human capital quality in healthcare organizations. The Human Capital Questionnaire was applied to a sample of 902 nurses in four European countries (Spain, Portugal, Poland, and the UK). Exploratory factor analysis identified six factors: satisfaction with leadership, identification and commitment, satisfaction with participation, staff well-being, career development opportunities, and motivation. The results showed the validity and reliability of the questionnaire, which, when applied to healthcare organizations, provides a better understanding of nurses' perceptions and is a parsimonious instrument for assessment and organizational accreditation. From a practical point of view, improving the quality of human capital, by analyzing nurses' and other healthcare employees' perceptions, is related to workforce empowerment.

  18. Mechanism of Error-Free Bypass of the Environmental Carcinogen N-(2'-Deoxyguanosin-8-yl)-3-aminobenzanthrone Adduct by Human DNA Polymerase η.

    Science.gov (United States)

    Patra, Amritraj; Politica, Dustin A; Chatterjee, Arindom; Tokarsky, E John; Suo, Zucai; Basu, Ashis K; Stone, Michael P; Egli, Martin

    2016-11-03

    The environmental pollutant 3-nitrobenzanthrone produces bulky aminobenzanthrone (ABA) DNA adducts with both guanine and adenine nucleobases. A major product occurs at the C8 position of guanine (C8-dG-ABA). These adducts present a strong block to replicative polymerases but, remarkably, can be bypassed in a largely error-free manner by the human Y-family polymerase η (hPol η). Here, we report the crystal structure of a ternary Pol⋅DNA⋅dCTP complex between a C8-dG-ABA-containing template:primer duplex and hPol η. The complex was captured at the insertion stage and provides crucial insight into the mechanism of error-free bypass of this bulky lesion. Specifically, bypass involves accommodation of the ABA moiety inside a hydrophobic cleft to the side of the enzyme active site and formation of an intra-nucleotide hydrogen bond between the phosphate and ABA amino moiety, allowing the adducted guanine to form a standard Watson-Crick pair with the incoming dCTP. © 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. Artillery Barrel Straightness Definition and Error Assessment Method

    Institute of Scientific and Technical Information of China (English)

    程杰

    2016-01-01

    Firstly, the causes of artillery barrel bending are analysed. The definition of artillery barrel straightness is then proposed. Finally, the error assessment methods for artillery barrel straightness are elaborated in detail, including the two-endpoint connecting line method, the least-squares method and the minimum containment zone method.
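
    As a concrete illustration of two of the evaluation methods named above, the sketch below computes the straightness error as the width of the deviation band about either the two-endpoint connecting line or the least-squares reference line; the measured deviations are hypothetical values, not barrel data.

        # Straightness error from measured lateral deviations along the bore axis.
        import numpy as np

        z = np.linspace(0.0, 5.0, 11)                      # axial positions [m]
        y = np.array([0.00, 0.02, 0.05, 0.09, 0.12, 0.16,  # lateral deviations [mm]
                      0.15, 0.13, 0.10, 0.06, 0.03])

        def straightness_two_endpoint(z, y):
            # Reference line through the first and last measured points.
            line = y[0] + (y[-1] - y[0]) * (z - z[0]) / (z[-1] - z[0])
            resid = y - line
            return resid.max() - resid.min()

        def straightness_least_squares(z, y):
            # Least-squares reference line fitted to all points.
            slope, intercept = np.polyfit(z, y, 1)
            resid = y - (slope * z + intercept)
            return resid.max() - resid.min()

        print(straightness_two_endpoint(z, y), straightness_least_squares(z, y))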

  20. 40 CFR 158.2083 - Experimental use permit biochemical pesticides human health assessment data requirements table.

    Science.gov (United States)

    2010-07-01

    ... pesticides human health assessment data requirements table. 158.2083 Section 158.2083 Protection of... determine the human health assessment data requirements for a particular biochemical pesticide product. (2.... Table—EUP Biochemical Pesticides Human Health Assessment Data Requirements Guideline Number Data...

  1. New approach for assessing human perfluoroalkyl exposure via hair.

    Science.gov (United States)

    Alves, Andreia; Jacobs, Griet; Vanermen, Guido; Covaci, Adrian; Voorspoels, Stefan

    2015-11-01

    In recent years hair has been increasingly used as an alternative matrix in human biomonitoring (HBM) of environmental pollutants. Sampling advantages and time integration of exposure assessment seem the most attractive features of the hair matrix. In the current study, a novel miniaturized method was developed and validated for measuring 15 perfluoroalkyl substances (PFAS), including perfluoro n-butanoic acid (PFBA), perfluoro n-pentanoic acid (PFPeA), perfluoro n-hexanoic acid (PFHxA), perfluoro n-heptanoic acid (PFHpA), perfluoro n-octanoic acid (PFOA), perfluoro n-nonanoic acid (PFNA), perfluoro tetradecanoic acid (PFTeDA), perfluorobutane sulfonic acid (PFBS), perfluoro pentane sulfonic acid (PFPeS), perfluorohexane sulfonic acid (PFHxS), perfluoroheptane sulfonic acid (PFHpS), perfluorooctane sulfonic acid (PFOS), perfluorononane sulfonic acid (PFNS), perfluorodecane sulfonic acid (PFDS) and perfluorododecane sulfonic acid (PFDoS) in human hair by liquid chromatography tandem mass spectrometry (LC-MS/MS). After extraction using ethyl acetate, dispersive ENVI-Carb was used for clean-up. Good intra- and inter-day precision for low (LQ 5 ng/g hair) and high spike (HQ 15 ng/g) levels was achieved (in general RSD hair and 3-13 pg/g hair, respectively. The method limit of quantification (LOQm) ranged between 6 and 301 pg/g hair. The PFAS levels were measured in 30 human hair samples, indicating that the levels are low (14-1534 pg/g hair). Some PFAS were not present in any hair sample (e.g. PFHpA, PFTeDA, PFNA, PFPeS, PFHpS, PFOS and PFNS), while other PFAS were frequently detected (PFBA, PFPeA, PFHxA, PFOA, PFBS, PFHxS, PFOS, PFDS and PFDoS) in human hair. Although levels in general were low, there is evidence of higher human exposure to some analytes, such as PFBA, PFPeA, PFHxA, PFOA, PFBS, PFHxS, and PFDoS. The current study shows that hair is a suitable alternative non-invasive matrix for exposure assessment of PFAS.

  2. Automatic shape recognition of human limbs to avoid errors due to skin marker shifting in motion analysis

    Science.gov (United States)

    Hatze, Herbert; Baca, Arnold

    1991-12-01

    A new method in human motion analysis is presented for overcoming the problem of the shifting of skin-mounted position markers relative to the skeleton. The present version of the method is based on two-dimensional video processing and involves the recording of subjects wearing special clothing. The clothing is designed in such a way as to permit the unambiguous spatial shape recognition of each of the 17 body segments by means of an edge detection algorithm. The latter and the algorithms for the computation of segment translation and rotation constitute improved versions of previously used algorithms, especially with respect to the execution times of the respective computer program on ordinary PCs. From the recognized shapes, the translation and rotation of each segment relative to its initial configuration is computed by using positional information from the previous frames. For the first frame to be analyzed, a starting algorithm has to be applied. Finally, the configurational coordinates of the body model are calculated from the respective spatial linear and angular positions.
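
    The per-segment translation and rotation computation described above is, at its core, a rigid-body fit between corresponding shape points in successive frames. The sketch below shows a standard least-squares (Kabsch/Procrustes-style) planar fit that such a step could use; it is a generic formulation, not the authors' algorithm.

        # Planar rigid transform (rotation + translation) between corresponding points.
        import numpy as np

        def rigid_transform_2d(pts_prev, pts_curr):
            """pts_*: (N, 2) corresponding points from the segment outline in two frames."""
            c_prev, c_curr = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
            H = (pts_prev - c_prev).T @ (pts_curr - c_curr)   # cross-covariance matrix
            U, _, Vt = np.linalg.svd(H)
            R = Vt.T @ U.T
            if np.linalg.det(R) < 0:                          # guard against reflection
                Vt[-1, :] *= -1
                R = Vt.T @ U.T
            t = c_curr - R @ c_prev
            angle = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # segment rotation [deg]
            return angle, t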

  3. Assessing the impact of climate variability and human activities on streamflow variation

    Science.gov (United States)

    Chang, Jianxia; Zhang, Hongxue; Wang, Yimin; Zhu, Yuelu

    2016-04-01

    Water resources in river systems have been changing under the impact of both climate variability and human activities. Assessing the respective impact on decadal streamflow variation is important for water resource management. By using an elasticity-based method and calibrated TOPMODEL and VIC hydrological models, we quantitatively isolated the relative contributions that human activities and climate variability made to decadal streamflow changes in the Jinghe basin, located in the northwest of China. This is an important watershed of the Shaanxi province that supplies drinking water for a population of over 6 million people. The results showed that the maximum value of the moisture index (E0/P) was 1.91 and appeared in 1991-2000, and that the rate of streamflow decrease was higher after 1990 than during 1960-1990. The average annual streamflow from 1990 to 2010 was reduced by 26.96 % compared with the multiyear average value (from 1960 to 2010). The estimates of the impacts of climate variability and human activities on streamflow decreases from the hydrological models were similar to those from the elasticity-based method. The maximum contribution of human activities, averaged over the three methods, was 99 % and appeared in 1981-1990, due to the effects of soil and water conservation measures and irrigation water withdrawal. Climate variability made the greatest contribution to streamflow reduction in 1991-2000, the value of which was 40.4 %. We emphasize the various sources of errors and uncertainties that may occur in the hydrological models (parameter and structural uncertainty) and the elasticity-based method (model parameters) in climate change impact studies.

  4. Development of Human Posture Simulation Method for Assessing Posture Angles and Spinal Loads

    Science.gov (United States)

    Lu, Ming-Lun; Waters, Thomas; Werren, Dwight

    2015-01-01

    Video-based posture analysis employing a biomechanical model is gaining a growing popularity for ergonomic assessments. A human posture simulation method of estimating multiple body postural angles and spinal loads from a video record was developed to expedite ergonomic assessments. The method was evaluated by a repeated measures study design with three trunk flexion levels, two lift asymmetry levels, three viewing angles and three trial repetitions as experimental factors. The study comprised two phases evaluating the accuracy of simulating self and other people’s lifting posture via a proxy of a computer-generated humanoid. The mean values of the accuracy of simulating self and humanoid postures were 12° and 15°, respectively. The repeatability of the method for the same lifting condition was excellent (~2°). The least simulation error was associated with side viewing angle. The estimated back compressive force and moment, calculated by a three dimensional biomechanical model, exhibited a range of 5% underestimation. The posture simulation method enables researchers to simultaneously quantify body posture angles and spinal loading variables with accuracy and precision comparable to on-screen posture matching methods. PMID:26361435

  5. Human Error Taxonomy Theory and Its Application in the Medical Field

    Institute of Scientific and Technical Information of China (English)

    冯庆敏; 刘胜林; 张强; 严毅; 程鹏

    2012-01-01

    Objective: To introduce the definition and taxonomic methods of human error and their applications in the medical field. Methods: Four taxonomies of human error are summarized, including the Norman taxonomy, the Rasmussen taxonomy, the Reason model and the Eindhoven taxonomy. The characteristics and scope of application of each taxonomic method are presented, and the applications of these methods to the classification and analysis of medical errors, such as in transfusion systems, medication, surgery and anesthesia, are discussed. Results: Human error taxonomic methods can be used to analyze human errors in the medical field, among which systematic taxonomic methods are more suitable for medical environments. Conclusion: Human error taxonomies can assist in analysing the underlying factors of medical adverse events or accidents, provide guidance for reducing medical errors or carrying out remedial actions, and improve patient safety.

  6. Assessment on tracking error performance of Cascade P/PI, NPID and N-Cascade controller for precise positioning of xy table ballscrew drive system

    Science.gov (United States)

    Abdullah, L.; Jamaludin, Z.; Rafan, N. A.; Jamaludin, J.; Chiew, T. H.

    2013-12-01

    At present, positioning plants in machine tools demand a high degree of accuracy and robustness in order to compensate various disturbance forces. The objective of this paper is to assess the tracking performance of Cascade P/PI, Nonlinear PID (NPID) and Nonlinear cascade (N-Cascade) controllers in the presence of disturbance forces in the form of cutting forces. Cutting force characteristics at different cutting parameters, such as spindle speed, are analysed using the Fast Fourier Transform. The tracking performance of the Nonlinear cascade controller in the presence of these cutting forces is compared with the NPID controller and the Cascade P/PI controller. The robustness of these controllers in compensating different cutting characteristics is compared based on the reduction in the amplitudes of cutting force harmonics using the Fast Fourier Transform. It is found that the N-cascade controller performs better than both the NPID and Cascade P/PI controllers. The average percentage error reduction between the N-cascade controller and the Cascade P/PI controller is about 65%, whereas that between the cascade controller and the NPID controller is about 82%, at a spindle speed of 3000 rpm. The finalized design of the N-cascade controller could be utilized further for machining applications such as milling. The implementation of the N-cascade controller in machine tool applications will increase the quality of the end product and the productivity in industry by saving machining time. It is suggested that the range of the spindle speed could be made wider to accommodate the needs of high speed machining.
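
    The FFT-based comparison of error harmonics can be illustrated with a short sketch: compute the amplitude spectrum of the tracking-error signal under two controllers and report the reduction of the dominant cutting-force harmonic. The signals, sampling rate and cutter geometry below are synthetic assumptions, not measured data.

        # Synthetic comparison of tracking-error harmonics under two controllers.
        import numpy as np

        fs = 2000.0                                   # sampling rate [Hz], assumed
        t = np.arange(0.0, 1.0, 1.0 / fs)
        f_c = 3000.0 / 60.0 * 2                       # 3000 rpm, 2-flute cutter -> 100 Hz
        rng = np.random.default_rng(1)
        err_cascade = 8e-3 * np.sin(2 * np.pi * f_c * t) + 1e-3 * rng.standard_normal(t.size)
        err_ncascade = 3e-3 * np.sin(2 * np.pi * f_c * t) + 1e-3 * rng.standard_normal(t.size)

        def harmonic_amplitude(x, f0):
            spec = np.abs(np.fft.rfft(x)) * 2.0 / x.size
            freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
            return spec[np.argmin(np.abs(freqs - f0))]

        a_cas = harmonic_amplitude(err_cascade, f_c)
        a_ncas = harmonic_amplitude(err_ncascade, f_c)
        print(f"reduction at {f_c:.0f} Hz: {100 * (a_cas - a_ncas) / a_cas:.1f} %")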

  7. Human factors assessments of innovative technologies: Robotics sector

    Energy Technology Data Exchange (ETDEWEB)

    Moran, J.B. [Operating Engineers National Hazmat Program, Beaver, WV (United States)

    1997-12-01

    The U.S. Department of Energy (DOE) has funded major environmental technology developments over the past several years. One area that received significant attention is robotics, which has resulted in the development of a wide range of unique robotic systems tailored to the many tasks unique to the DOE complex. These systems are often used in highly hazardous environments, which reduces or eliminates worker exposures. The DOE, concurrent with the technology development initiative, also established and funded a 5-yr cooperative agreement intended to interface with the technology development community, with specific attention to the occupational safety and health aspects associated with individual technologies through human factors and hazard assessments. This program is now in its third year.

  8. An empirical assessment of generational differences in basic human values.

    Science.gov (United States)

    Lyons, Sean T; Duxbury, Linda; Higgins, Christopher

    2007-10-01

    This study assessed generational differences in human values as measured by the Schwartz Value Survey. It was proposed that the two most recent generations, Millennials and Generation Xers, would value Self-enhancement and Openness to Change more than the two older generations, Baby Boomers and Matures, while the two older generations would value Self-transcendence and Conservation more. The hypotheses were tested with a combined sample of Canadian knowledge workers and undergraduate business students (N = 1,194). Two hypotheses were largely supported, although an unexpectedly large difference was observed between Millennials and Generation Xers with respect to Openness to Change and Self-enhancement. The findings suggest that generation is a useful variable in examining differences in social values.

  9. Indicators for human toxicity in Life Cycle Impact Assessment

    DEFF Research Database (Denmark)

    Krewitt, Wolfram; Pennington, David W.; Olsen, Stig Irving

    2002-01-01

    The main objectives of this task group under SETAC-Europe’s Second Working Group on Life Cycle Impact Assessment (LCIA-WIA2) were to identify and discuss the suitability of toxicological impact measures for human health for use in characterization in LCIA. The current state of the art of defining......, as well as potency. Quantitative severity-based indicators yield measures in terms of Years of Life Lost (YOLL), Disability Adjusted Life Years (DALY), Quality Adjusted Life Years (QALY) and other similar measures. DALYs and QALYs are examples of approaches that attempt to account for both years of life...... lost (mortality) and years of impaired life (morbidity). Qualitative severity approaches tend to arrange potency-based indicators in categories, avoiding the need to quantitatively express differences in severity. Based on the proposed criteria and current state of the knowledge, toxicological potency...

  10. Significance of rat mammary tumors for human risk assessment.

    Science.gov (United States)

    Russo, Jose

    2015-02-01

    We have previously indicated that the ideal animal tumor model should mimic the human disease. This means that the investigator should be able to ascertain the influence of host factors on the initiation of tumorigenesis, mimic the susceptibility of tumor response based on age and reproductive history, and determine the response of the tumors induced to chemotherapy. The utilization of experimental models of mammary carcinogenesis in risk assessment requires that the influence of ovarian, pituitary, and placental hormones, among others, as well as overall reproductive events are taken into consideration, since they are important modifiers of the susceptibility of the organ to neoplastic development. Several species, such as rodents, dogs, cats, and monkeys, have been evaluated for these purposes; however, none of them fulfills all the criteria specified previously. Rodents, however, are the most widely used models; therefore, this work will concentrate on discussing the rat rodent model of mammary carcinogenesis. © 2014 by The Author(s).

  11. Human Papilloma Viruses and Breast Cancer – Assessment of Causality

    Science.gov (United States)

    Lawson, James Sutherland; Glenn, Wendy K.; Whitaker, Noel James

    2016-01-01

    High risk human papilloma viruses (HPVs) may have a causal role in some breast cancers. Case–control studies, conducted in many different countries, consistently indicate that HPVs are more frequently present in breast cancers as compared to benign breast and normal breast controls (odds ratio 4.02). The assessment of causality of HPVs in breast cancer is difficult because (i) the HPV viral load is extremely low, (ii) HPV infections are common but HPV associated breast cancers are uncommon, and (iii) HPV infections may precede the development of breast and other cancers by years or even decades. Further, HPV oncogenesis can be indirect. Despite these difficulties, the emergence of new evidence has made the assessment of HPV causality, in breast cancer, a practical proposition. With one exception, the evidence meets all the conventional criteria for a causal role of HPVs in breast cancer. The exception is “specificity.” HPVs are ubiquitous, which is the exact opposite of specificity. An additional reservation is that the prevalence of breast cancer is not increased in immunocompromised patients as is the case with respect to HPV-associated cervical cancer. This indicates that HPVs may have an indirect causal influence in breast cancer. Based on the overall evidence, high-risk HPVs may have a causal role in some breast cancers. PMID:27747193

  12. Development of a human reliability analysis procedure for a low power/shutdown probabilistic safety assessment in pressurized light water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Kang, D. I.; Sung, T. Y.; Park, J. H.; Kim, T. W.; Han, S. H.; Kim, K. Y.; Yang, J. E.; Jung, W. D.; Lee, Y. H.; Hwang, M. J.

    1997-09-01

    A human reliability analysis (HRA) procedure is developed for a low power/shutdown probabilistic safety assessment (PSA) in pressurized light water reactors. The HRA procedure is based on two major current methods: THERP (Technique for Human Error Rate Prediction) and SHARP (Systematic Human Action Reliability Procedure). It then focuses on the specific situation of low power and shutdown operation of pressurized light water reactors. Major characteristics of the HRA procedure are as follows: 1) the use of the developed worksheet increases the plausibility and credibility of the quantification process for human actions and makes it easy to trace; 2) the explicit use of decision trees can partly eliminate possible subjectiveness in the human reliability analyst's judgement used for HRA. The HRA procedure developed is expected to allow human reliability analysts to perform a systematic and consistent HRA. (author). 26 refs., 13 tabs., 8 figs.
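
    Purely as an illustration of the kind of quantification such a worksheet supports (and not the actual THERP/ASEP tables or the procedure's decision trees), a nominal human error probability can be adjusted by performance shaping factors and credited with a recovery opportunity:

        # Illustrative HEP adjustment; the numbers and factor names are placeholders.
        def quantify_hep(nominal_hep, psf_multipliers, recovery_failure_prob=1.0):
            hep = nominal_hep
            for factor in psf_multipliers:        # e.g. stress, time pressure, interface
                hep *= factor
            hep = min(hep, 1.0)                   # a probability cannot exceed 1
            return hep * recovery_failure_prob    # credit an independent recovery action

        # Example: nominal 1e-3, doubled for time pressure, x5 for poor procedures,
        # with a 0.1 probability that the checker also fails to catch the error.
        print(quantify_hep(1e-3, [2.0, 5.0], recovery_failure_prob=0.1))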

  13. Performance assessment of human resource by integration of HSE and ergonomics and EFQM management system.

    Science.gov (United States)

    Sadegh Amalnick, Mohsen; Zarrin, Mansour

    2017-03-13

    Purpose The purpose of this paper is to present an integrated framework for performance evaluation and analysis of human resources (HR) with respect to the factors of the health, safety, environment and ergonomics (HSEE) management system, and also the criteria of the European Foundation for Quality Management (EFQM), one of the well-known business excellence models. Design/methodology/approach In this study, an intelligent algorithm based on an adaptive neuro-fuzzy inference system (ANFIS) along with fuzzy data envelopment analysis (FDEA) is developed and employed to assess the performance of the company. Furthermore, the impact of the factors on the company's performance, as well as their strengths and weaknesses, is identified by conducting a sensitivity analysis on the results. Similarly, a design of experiments is performed to prioritize the factors in order of importance. Findings The results show that the EFQM model has a far greater impact upon the company's performance than the HSEE management system. According to the obtained results, it can be argued that the integration of HSEE and EFQM leads to performance improvement in the company. Practical implications In the current study, the required data for executing the proposed framework are collected via valid questionnaires filled in by the staff of an aviation industry company located in Tehran, Iran. Originality/value Managing HR performance results in improved usability, maintainability and reliability and, finally, in a significant reduction in the commercial aviation accident rate. Also, studying the factors affecting HR performance helps authorities participate in developing systems that help operators better manage human error. This paper for the first time presents an intelligent framework based on ANFIS, FDEA and statistical tests for HR performance assessment and analysis, with the ability to handle the uncertainty and vagueness existing in real-world environments.

  14. Computer Aided Design in Digital Human Modeling for Human Computer Interaction in Ergonomic Assessment: A Review

    Directory of Open Access Journals (Sweden)

    Suman Mukhopadhyay , Sanjib Kumar Das and Tania Chakraborty

    2012-12-01

    Full Text Available Research in Human-Computer Interaction (HCI) has been enormously successful in the area of computer-aided ergonomics or human-centric design. A perfect fit for people has always been a target for product design. Designers traditionally used anthropometric dimensions for 3D product design, which created a lot of fitting problems when dealing with the complexities of human body shapes. Computer-aided design (CAD), also known as computer-aided design and drafting (CADD), is the computer technology used for design processing and design documentation. CAD is now used extensively in many applications such as the automotive, shipbuilding and aerospace industries, architectural and industrial design, prosthetics, computer animation for special effects in movies, advertising and technical manuals. As a technology, digital human modeling (DHM) has rapidly emerged as a technology that creates, manipulates and controls human representations and human-machine system scenes on computers for interactive ergonomic design problem solving. DHM promises to profoundly change how products or systems are designed, how ergonomics analysis is performed, how disorders and impairments are assessed and how therapies and surgeries are conducted. The imperative and emerging need for DHM appears to be consistent with the fact that the past decade has witnessed significant growth in both the software systems offering DHM capabilities and the corporations adopting the technology. The authors dwell at length on how research in DHM has finally brought about enhanced HCI, in the context of computer-aided ergonomics or human-centric design, and discuss future trends in this context.

  15. Relationships of Measurement Error and Prediction Error in Observed-Score Regression

    Science.gov (United States)

    Moses, Tim

    2012-01-01

    The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
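
    A standard classical-test-theory sketch makes the decomposition concrete (this is a generic textbook relation, assuming observed scores X = T_X + E_X and Y = T_Y + E_Y with independent errors and reliabilities rel_X = Var(T_X)/Var(X), rel_Y = Var(T_Y)/Var(Y); it is not necessarily the paper's exact formulation):

        \sigma^2_{\mathrm{pred}} = \sigma^2_Y \bigl(1 - \rho^2_{XY}\bigr),
        \qquad
        \rho_{XY} = \rho_{T_X T_Y}\sqrt{\mathrm{rel}_X\,\mathrm{rel}_Y},

        \sigma^2_{\mathrm{pred}}
          = \underbrace{\sigma^2_{T_Y}\bigl(1 - \mathrm{rel}_X\,\rho^2_{T_X T_Y}\bigr)}_{\text{true-score part}}
          + \underbrace{\sigma^2_{E_Y}}_{\text{criterion error}},

    so measurement error in the predictor attenuates the explainable part through rel_X, while error in the criterion adds directly to the prediction error variance.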

  16. Nuclear power plant personnel errors in decision-making as an object of probabilistic risk assessment. Methodological extensions on the basis of a differentiated analysis of safety-relevant goals

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.

    1993-09-01

    Integration of human error (man-machine system analysis (MMSA)) is an essential part of probabilistic risk assessment (PRA). A method is presented for the systematic, comprehensive inclusion in PRA of decision-based errors due to conflicts or similarities. For the error identification procedure, new questioning techniques are developed. These errors are identified by looking at retroactions caused by subordinate goals as components of the overall safety-relevant goal. New quantification methods for estimating situation-specific probabilities are developed. The factors conflict and similarity are operationalized in a way that allows their quantification based on information usually available in PRA. The quantification procedure uses extrapolations and interpolations based on a sparse set of data related to decision-based errors. Moreover, for passive errors in decision-making a completely new approach is presented in which errors are quantified via a delay in initiating the required action rather than via error probabilities. The practicability of this dynamic approach is demonstrated by a probabilistic analysis of the actions required during the total loss of feedwater event at the Davis-Besse plant in 1985. The extensions of the classical PRA method developed in this work are applied to an MMSA of the decay heat removal (DHR) of the HTR-500. Errors in decision-making, as potential roots of extraneous acts, are taken into account in a comprehensive and systematic manner. Five additional errors are identified. However, the probabilistic quantification results in a non-significant increase of the DHR failure probability. (orig.)

  17. Non-invasive assessment of bone quantity and quality in human trabeculae using scanning ultrasound imaging

    Science.gov (United States)

    Xia, Yi

    Fractures and the associated bone fragility induced by osteoporosis and osteopenia are a widespread health threat to today's society. Early detection of the fracture risk associated with bone quantity and quality is important for both the prevention and treatment of osteoporosis and its complications. Quantitative ultrasound (QUS) is an engineering technology for monitoring the bone quantity and quality of humans on Earth and of astronauts subjected to long-duration microgravity. Factors currently limiting the acceptance of QUS technology involve precision, accuracy, reliance on a single index, and standardization. The objective of this study was to improve the accuracy and precision of an image-based QUS technique for non-invasive evaluation of trabecular bone quantity and quality by developing new techniques and understanding ultrasound/tissue interaction. Several new techniques were developed in this dissertation study, including the automatic identification of an irregular region of interest (iROI) in bone, surface topology mapping (STM), and mean scattering spacing (MSS) estimation for evaluating trabecular bone structure. In vitro results have shown that (1) the inter- and intra-observer errors in QUS measurement were reduced two- to five-fold by iROI compared to previous results; (2) the accuracy of the QUS parameter ultrasound velocity (UV) through bone was improved 16% by STM; and (3) the averaged trabecular spacing can be estimated by the MSS technique (r2 = 0.72, p < 0.01). The measurement errors of BUA and UV introduced by the soft tissue and cortical shells in vivo can be quantified by the developed foot model and a simplified cortical-trabecular-cortical sandwich model, which were verified by the experimental results. The mechanisms of the errors induced by the cortical and soft tissues were revealed by the model. With the developed new techniques and understanding of sound-tissue interaction, an in vivo clinical trial and bed rest study were performed to evaluate the performance of QUS in

  18. Error Field Assessment from Driven Mode Rotation: Results from Extrap-T2R Reversed-Field-Pinch and Perspectives for ITER

    Science.gov (United States)

    Volpe, F. A.; Frassinetti, L.; Brunsell, P. R.; Drake, J. R.; Olofsson, K. E. J.

    2012-10-01

    A new ITER-relevant non-disruptive error field (EF) assessment technique not restricted to low density and thus low beta was demonstrated at the Extrap-T2R reversed field pinch. Resistive Wall Modes (RWMs) were generated and their rotation sustained by rotating magnetic perturbations. In particular, stable modes of toroidal mode number n=8 and 10 and unstable modes of n=1 were used in this experiment. Due to finite EFs, and in spite of the applied perturbations rotating uniformly and having constant amplitude, the RWMs were observed to rotate non-uniformly and be modulated in amplitude (in the case of unstable modes, the observed oscillation was superimposed to the mode growth). This behavior was used to infer the amplitude and toroidal phase of n=1, 8 and 10 EFs. The method was first tested against known, deliberately applied EFs, and then against actual intrinsic EFs. Applying equal and opposite corrections resulted in longer discharges and more uniform mode rotation, indicating good EF compensation. The results agree with a simple theoretical model. Extensions to tearing modes, to the non-uniform plasma response to rotating perturbations, and to tokamaks, including ITER, will be discussed.

  19. Analytic concepts for assessing risk as applied to human space flight

    Energy Technology Data Exchange (ETDEWEB)

    Garrick, B.J.

    1997-04-30

    Quantitative risk assessment (QRA) principles provide an effective framework for quantifying individual elements of risk, including the risk to astronauts and spacecraft of the radiation environment of space flight. The concept of QRA is based on a structured set of scenarios that could lead to different damage states initiated by either hardware failure, human error, or external events. In the context of a spacecraft risk assessment, radiation may be considered as an external event and analyzed in the same basic way as any other contributor to risk. It is possible to turn up the microscope on any particular contributor to risk and ask more detailed questions than might be necessary to simply assess safety. The methods of QRA allow for as much fine structure in the analysis as is desired. For the purpose of developing a basis for comprehensive risk management and considering the tendency to "fear anything nuclear," radiation risk is a prime candidate for examination beyond that necessary to answer the basic question of risk. Thus, rather than considering only the customary damage states of fatalities or loss of a spacecraft, it is suggested that the full range of damage be analyzed to quantify radiation risk. Radiation dose levels in the form of a risk curve accomplish such a result. If the risk curve is the complementary cumulative distribution function, then it answers the extended question of what is the likelihood of receiving a specific dose of radiation or greater. Such results can be converted to specific health effects as desired. Knowing the full range of the radiation risk of a space mission and the contributors to that risk provides the information necessary to take risk management actions [operational, design, scheduling of missions around solar particle events (SPE), etc.] that clearly control radiation exposure.
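
    The "risk curve" referred to above is the complementary cumulative distribution function over dose: for each dose level, the frequency of receiving that dose or greater. A minimal sketch with an entirely hypothetical scenario set shows how such a curve is assembled from scenario frequencies and consequences.

        # Build a dose exceedance curve (CCDF) from hypothetical scenario data.
        import numpy as np

        # Each row: (annual frequency [1/yr], crew dose [mSv]) -- placeholder values.
        scenarios = np.array([
            [1e-1,    5.0],   # routine background excursion
            [1e-2,   50.0],   # moderate solar particle event, partial shielding
            [1e-3,  250.0],   # large SPE during extravehicular activity
            [1e-5, 1000.0],   # worst-case SPE with shielding failure
        ])

        doses = np.sort(np.unique(scenarios[:, 1]))
        # For each dose level, sum the frequencies of all scenarios at or above it.
        ccdf = np.array([scenarios[scenarios[:, 1] >= d, 0].sum() for d in doses])
        for d, f in zip(doses, ccdf):
            print(f"frequency of dose >= {d:7.1f} mSv : {f:.2e} per year")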

  20. Teleoperator hand controllers: A contextual human factors assessment

    Energy Technology Data Exchange (ETDEWEB)

    Draper, J.V.

    1994-05-01

    This document provides a human factors assessment of controllers for use with remotely controlled manipulators deployed to remove hazardous waste from underground storage tanks. The analysis concentrates on controller technique (i.e., the broad class of hand controller) and not on details of controller ergonomics. Examples of controller techniques include, for example, direct rate control, resolved unilateral position control, and direct bilateral position control. Using an existing concept, the Tank Waste Retrieval Manipulator System, as a reference, two basic types of manipulators may be identified for this application. A long reach, gross-positioning manipulator (LRM) may be used to position a smaller manipulator or an end-effector within a work site. For a Long Reach Manipulator, which will have an enormous motion range and be capable of high end-effector velocity, it will be safest and most efficient to use a resolved rate control system. A smaller, dexterous manipulator may be used to perform handling work within a relatively small work site, (i.e., to complete tasks requiring near-human dexterity). For a Dexterous Manipulator, which will have a smaller motion range than the LRM and be required to perform more difficult tasks, a resolved bilateral position control system will be safest and most efficient. However, during some waste recovery tasks it may be important to support the users by restricting movements to a single plane or axis. This can be done with a resolved bilateral position control system by (1) using the master controller force output to restrict controller inputs or (2) switching the controller to a multiaxis rate control mode and using the force output to provide a spring return to center functionality.

  1. Somatic microindels in human cancer: the insertions are highly error-prone and derive from nearby but not adjacent sense and antisense templates.

    Science.gov (United States)

    Scaringe, William A; Li, Kai; Gu, Dongqing; Gonzalez, Kelly D; Chen, Zhenbin; Hill, Kathleen A; Sommer, Steve S

    2008-09-15

    Somatic microindels (microdeletions with microinsertions) have been studied in normal mouse tissues using the Big Blue lacI transgenic mutation detection system. Here we analyze microindels in human cancers using an endogenous and transcribed gene, the TP53 gene. Microindel frequency, the enhancement of 1-2 microindels and other features are generally similar to that observed in the non-transcribed lacI gene in normal mouse tissues. The current larger sample of somatic microindels reveals recurroids: mutations in which deletions are identical and the co-localized insertion is similar. The data reveal that the inserted sequences derive from nearby but not adjacent sequences in contrast to the slippage that characterizes the great majority of pure microinsertions. The microindel inserted sequences derive from a template on the sense or antisense strand with similar frequency. The estimated error rate of the insertion process of 13% per bp is by far the largest reported in vivo, with the possible exception of somatic hypermutation in the immunoglobulin gene. The data constrain possible mechanisms of microindels and raise the question of whether microindels are 'scars' from the bypass of large DNA adducts by a translesional polymerase, e.g. the 'Tarzan model' presented herein.

  2. Study on Human Error Prevention Theories Based on MSHES (Multiplex State of Human Errors System)

    Institute of Scientific and Technical Information of China (English)

    李卫民; 陶志

    2007-01-01

    This paper reviews human error prevention theories and methods developed at home and abroad, built mainly on first-generation (static) and second-generation (dynamic) human reliability analysis. To address the current bottleneck that non-structural and non-deterministic parameters and data related to human physiology, cognition and psychology cannot be quantified, a structural model of the Multiplex State of Human Errors System (MSHES) is established, based on the business processes of the man-machine-environment system. Rough-set data mining is then applied to the experience-based rules of senior professionals and to information from human-factor accident and incident analyses, in order to mine the associations between root causes in the human-factor hierarchy and errors in the human-error hierarchy, and a rule-based expert system structural model for human error prevention is constructed. Human risk assessment and human error prevention theory are also explored.

  3. Reliability Analysis of a Man-Machine System with Human Error

    Institute of Scientific and Technical Information of China (English)

    常立波; 张玉峰

    2012-01-01

    This paper presents a repairable model of a man-machine system with human error. Using operator semigroup theory, the existence, uniqueness and exponential stability of the solution of the new model system are proved. In particular, when the failure rate λ0 approaches infinity, the instantaneous availability of the system approaches the instantaneous availability of the weak-solution system; that is, the new model system approximates the original model system with mild solution.

  4. Identifying types and causes of errors in mortality data in a clinical registry using multiple information systems.

    Science.gov (United States)

    Koetsier, Antonie; Peek, Niels; de Keizer, Nicolette

    2012-01-01

    Errors may occur in the registration of in-hospital mortality, making it less reliable as a quality indicator. We assessed the types of errors made in in-hospital mortality registration in the clinical quality registry National Intensive Care Evaluation (NICE) by comparing its mortality data to data from a national insurance claims database. Subsequently, we performed site visits at eleven Intensive Care Units (ICUs) to investigate the number, types and causes of errors made in in-hospital mortality registration. A total of 255 errors were found in the NICE registry. Two different types of software malfunction accounted for almost 80% of the errors. The remaining 20% were five types of manual transcription errors and human failures to record outcome data. Clinical registries should be aware of the possible existence of errors in recorded outcome data and understand their causes. In order to prevent errors, we recommend thoroughly verifying the software used in the registration process.

  5. Performance Assessment of Human and Cattle Associated Quantitative Real-time PCR Assays - slides

    Science.gov (United States)

    The presentation overview is (1) Single laboratory performance assessment of human- and cattle-associated PCR assays and (2) A Field Study: Evaluation of two human fecal waste management practices in an Ohio watershed.

  6. Developing Cost-Effective Field Assessments of Carbon Stocks in Human-Modified Tropical Forests.

    Directory of Open Access Journals (Sweden)

    Erika Berenguer

    Full Text Available Across the tropics, there is a growing financial investment in activities that aim to reduce emissions from deforestation and forest degradation, such as REDD+. However, most tropical countries lack on-the-ground capacity to conduct reliable and replicable assessments of forest carbon stocks, undermining their ability to secure long-term carbon finance for forest conservation programs. Clear guidance on how to reduce the monetary and time costs of field assessments of forest carbon can help tropical countries to overcome this capacity gap. Here we provide such guidance for cost-effective one-off field assessments of forest carbon stocks. We sampled a total of eight components from four different carbon pools (i.e. aboveground, dead wood, litter and soil) in 224 study plots distributed across two regions of the eastern Amazon. For each component we estimated survey costs, contribution to total forest carbon stocks and sensitivity to disturbance. Sampling costs varied thirty-one-fold between the most expensive component, soil, and the least, leaf litter. Large live stems (≥10 cm DBH), which represented only 15% of the overall sampling costs, was by far the most important component to be assessed, as it stores the largest amount of carbon and is highly sensitive to disturbance. If large stems are not taxonomically identified, costs can be reduced by a further 51%, while incurring an error in aboveground carbon estimates of only 5% in primary forests, but 31% in secondary forests. For rapid assessments, necessary to help prioritize locations for carbon-conservation activities, sampling of stems ≥20 cm DBH without taxonomic identification can predict with confidence (R2 = 0.85) whether an area is relatively carbon-rich or carbon-poor, an approach that is 74% cheaper than sampling and identifying all the stems ≥10 cm DBH. We use these results to evaluate the reliability of forest carbon stock estimates provided by the IPCC and FAO when applied to human

  7. Potency Evaluation of Recombinant Human Erythropoietin in Brazil: Assessment of Reproducibility Using a Practical Approach

    Directory of Open Access Journals (Sweden)

    Michele Cardoso do Nascimento

    2015-08-01

    Full Text Available In this study, we compared the results of potency determination of recombinant human erythropoietin (rhEPO) obtained between 2010 and 2012 by the National Institute of Quality Control in Health (INCQS/Fiocruz), i.e., the National Control Laboratory (NCL), and by a manufacturer of rhEPO. In total, 47 different batches of commercially prepared rhEPO (alpha isoform) were analyzed. All results, including those of the control and warning limits, remained within the limits recommended by the European Pharmacopoeia (Ph. Eur.). All relative error (RE) values were less than ± 30%, whereas most were approximately ± 20%. Applying the Bland-Altman plot, only two of 47 values remained outside the limits of agreement (LA). In addition, the agreement of potency determination between INCQS and the manufacturer, expressed as the coefficient of variation of reproducibility (% CVR), was considered satisfactory. Taken together, our results demonstrate that (i) the potency assay of rhEPO performed at INCQS is standardized and controlled, (ii) the comparison of our results with those of the manufacturer revealed adequate inter-laboratory variation, and (iii) the critical appraisal proposed here appears to be a feasible tool to assess the reproducibility of biological activity, providing additional information regarding monitoring and production consistency to manufacturers and NCLs.
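
    The two agreement measures named above can be sketched as follows; the paired potency values are invented, and the exact Ph. Eur. potency assay procedure is not reproduced.

```python
# Illustrative sketch (not the INCQS procedure): relative error between two
# laboratories and Bland-Altman limits of agreement for paired potency results.
import numpy as np

def relative_error(reference, test):
    """Relative error (%) of each test result against the reference result."""
    reference, test = np.asarray(reference), np.asarray(test)
    return 100.0 * (test - reference) / reference

def bland_altman_limits(a, b):
    """Mean difference (bias) and 95% limits of agreement between paired results."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired potency results (arbitrary units) for a few batches.
ncl = np.array([103.0, 98.5, 110.2, 95.4, 101.7])
manufacturer = np.array([100.0, 101.3, 104.8, 99.0, 97.6])
print(relative_error(manufacturer, ncl))       # should stay within +/- 30%
print(bland_altman_limits(ncl, manufacturer))  # (bias, lower LA, upper LA)
```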

  8. [Survey in hospitals. Nursing errors, error culture and error management].

    Science.gov (United States)

    Habermann, Monika; Cramer, Henning

    2010-09-01

    Knowledge on errors is important to design safe nursing practice and its framework. This article presents results of a survey on this topic, including data of a representative sample of 724 nurses from 30 German hospitals. Participants predominantly remembered medication errors. Structural and organizational factors were rated as most important causes of errors. Reporting rates were considered low; this was explained by organizational barriers. Nurses in large part expressed having suffered from mental problems after error events. Nurses' perception focussing on medication errors seems to be influenced by current discussions which are mainly medication-related. This priority should be revised. Hospitals' risk management should concentrate on organizational deficits and positive error cultures. Decision makers are requested to tackle structural problems such as staff shortage.

  9. Analysis on critical factors of human error accidents in coal mine based on gray system theory

    Institute of Scientific and Technical Information of China (English)

    兰建义; 乔美英; 周英

    2015-01-01

    By analyzing the factors that cause human error accidents in coal mines, the critical influencing factors were summarized. Applying gray system correlation theory to the statistical data of mine accidents over the past ten years from the State Administration of Coal Mine Safety, the types of influence on human error accidents in coal mines were analyzed. Taking the number of accidents and the accident death toll as reference indices, the gray correlation degrees of ten factors mainly related to human error accidents in coal mines, such as behavioral error, personal violation, and organization and management error, were calculated and analyzed. The gray correlation order of these factors was derived, the critical influencing factors of human error accidents in coal mines were determined, and a quantitative analysis of the relationship between the critical influencing factors and human error accidents was obtained. Using gray correlation theory to analyze the influencing factors of human error in coal mines explains the weighting relationship between human error and each critical factor, and provides a strong reference for preventing and controlling human error accidents, together with a better understanding of their main causal mechanisms.
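
    A minimal sketch of Deng's gray relational analysis, the general technique the abstract applies; the factor labels and yearly counts below are invented placeholders, not the study's data.

```python
# Sketch of Deng's gray relational analysis: rank candidate factor sequences by
# their gray relational grade against a reference (accident) sequence.
import numpy as np

def gray_relational_grades(reference, factors, rho=0.5):
    """Gray relational grade of each factor sequence against the reference sequence."""
    ref = np.asarray(reference, dtype=float)
    X = np.asarray(factors, dtype=float)
    # Mean-value normalization of each sequence.
    ref_n = ref / ref.mean()
    X_n = X / X.mean(axis=1, keepdims=True)
    delta = np.abs(X_n - ref_n)                      # absolute differences per period
    d_min, d_max = delta.min(), delta.max()
    coeff = (d_min + rho * d_max) / (delta + rho * d_max)
    return coeff.mean(axis=1)                        # average coefficient = relational grade

# Reference: yearly accident counts; factors: yearly counts attributed to each cause.
accidents = [120, 95, 88, 70, 64]
causes = [[60, 50, 47, 40, 33],   # e.g. behavioral error (hypothetical)
          [30, 22, 20, 15, 16],   # e.g. personal violation (hypothetical)
          [18, 12, 11, 8, 7]]     # e.g. organization/management error (hypothetical)
grades = gray_relational_grades(accidents, causes)
print(np.argsort(grades)[::-1])   # gray relational order, most to least related
```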

  10. Characterization of Evidence for Human System Risk Assessment

    Science.gov (United States)

    Steinberg, S. L.; Van Baalen, M.; Rossi, M.; Riccio, G.; Romero, E.; Francisco, D.

    2016-01-01

    Understanding the kinds of evidence available and using the best evidence to answer a question is critical to evidenced-based decision-making, and it requires synthesis of evidence from a variety of sources. Categorization of human system risks in spaceflight, in particular, focuses on how well the integration and interpretation of all available evidence informs the risk statement that describes the relationship between spaceflight hazards and an outcome of interest. A mature understanding and categorization of these risks requires: 1) sufficient characterization of risk, 2) sufficient knowledge to determine an acceptable level of risk (i.e., a standard), 3) development of mitigations to meet the acceptable level of risk, and 4) identification of factors affecting generalizability of the evidence to different design reference missions. In the medical research community, evidence is often ranked by increasing confidence in findings gleaned from observational and experimental research (e.g., "levels of evidence"). However, an approach based solely on aspects of experimental design is problematic in assessing human system risks for spaceflight. For spaceflight, the unique challenges and opportunities include: (1) The independent variables in most evidence are the hazards of spaceflight, such as space radiation or low gravity, which cannot be entirely duplicated in terrestrial (Earth-based) analogs, (2) Evidence is drawn from multiple sources including medical and mission operations, Lifetime Surveillance of Astronaut Health (LSAH), spaceflight research (LSDA), and relevant environmental & terrestrial databases, (3) Risk metrics based primarily on LSAH data are typically derived from available prevalence or incidence data, which may limit rigorous interpretation, (4) The timeframe for obtaining adequate spaceflight sample size (n) is very long, given the small population, (5) Randomized controlled trials are unattainable in spaceflight, (6) Collection of personal and

  11. Empirical Study on Influencing Factors of Human Errors in the Process of Elevator Inspection

    Institute of Scientific and Technical Information of China (English)

    胡晓; 黄端; 石岿然; 蒋凤

    2014-01-01

    Based on a sample of 248 middle and senior managers and front-line technical staff in foreign-owned and state-owned elevator firms, this paper empirically tests the key factors affecting human errors. The results show that personnel ability is negatively associated with human errors; organizational communication and organizational culture also have a direct and significant negative impact. In addition, individual age, work experience and marital status are related to human errors. The research provides a basis for the elevator industry to improve organizational management and reduce human errors.

  12. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  14. Measurement error in a single regressor

    NARCIS (Netherlands)

    Meijer, H.J.; Wansbeek, T.J.

    2000-01-01

    For the setting of multiple regression with measurement error in a single regressor, we present some very simple formulas to assess the result that one may expect when correcting for measurement error. It is shown where the corrected estimated regression coefficients and the error variance may lie,
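
    The abstract's "very simple formulas" are not reproduced here; as background only, the following LaTeX block states the textbook attenuation result for a single mismeasured regressor, which is the standard quantity such corrections work with.

```latex
% Classical attenuation under measurement error in a single regressor:
% the observed regressor x = \xi + \varepsilon shrinks the OLS slope toward zero.
\[
  \mathrm{plim}\ \hat{\beta}_{\mathrm{OLS}} \;=\; \lambda \beta ,
  \qquad
  \lambda \;=\; \frac{\sigma_{\xi}^{2}}{\sigma_{\xi}^{2} + \sigma_{\varepsilon}^{2}} ,
\]
% so a corrected estimate rescales the naive slope by the reliability ratio:
\[
  \hat{\beta}_{\mathrm{corrected}} \;=\; \hat{\beta}_{\mathrm{OLS}} / \hat{\lambda} .
\]
```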

  15. Data quality and practical challenges of thyroid volume assessment by ultrasound under field conditions - observer errors may affect prevalence estimates of goitre

    Directory of Open Access Journals (Sweden)

    Torheim Liv E

    2010-12-01

    Full Text Available Abstract Background The ultrasonographic estimation of thyroid size has been advocated as being more precise than palpation to diagnose goitre. However, ultrasound also requires technical proficiency. This study was conducted among Saharawi refugees, where goitre is highly prevalent. The objectives were to assess the overall data quality of ultrasound measurements of thyroid volume (Tvol), including the intra- and inter-observer agreement, under field conditions, and to describe some of the practical challenges encountered. Methods In 2007 a cross-sectional study of 419 children (6-14 years old) and 405 women (15-45 years old) was performed on a population of Saharawi refugees with prevalent goitre, who reside in the Algerian desert. Tvol was measured by two trained fieldworkers using portable ultrasound equipment (examiner 1 measured 406 individuals, and examiner 2, 418 individuals). Intra- and inter-observer agreement was estimated in 12 children selected from the study population but not part of the main study. In the main study, an observer error was found in one examiner whose ultrasound images were corrected by linear regression after printing and remeasuring a sample of 272 images. Results The intra-observer agreement in Tvol was higher in examiner 1, with an intraclass correlation coefficient (ICC) of 0.97 (95% CI: 0.91, 0.99) compared to 0.86 (95% CI: 0.60, 0.96) in examiner 2. The ICC for inter-observer agreement in Tvol was 0.38 (95% CI: -0.20, 0.77). Linear regression coefficients indicated a significant scaling bias in the original measurements of the AP and ML diameter and a systematic underestimation of Tvol (a product of AP, ML, CC and a constant). The agreement between re-measured and original Tvol measured by ICC (95% CI) was 0.76 (0.71, 0.81). The agreement between re-measured and corrected Tvol measured by ICC (95% CI) was 0.97 (0.96, 0.97). Conclusions An important challenge when using ultrasound to assess thyroid volume under field
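
    A minimal sketch of the kind of linear-regression correction described above, with invented numbers standing in for the 272 re-measured images (the study's actual correction was applied to the AP and ML diameters rather than directly to Tvol):

```python
# Sketch: fit re-measured volumes against the original readings, then apply
# that fit to correct the remaining original measurements. All values invented.
import numpy as np

rng = np.random.default_rng(1)
remeasured = rng.uniform(2.0, 12.0, size=272)                    # re-measured Tvol (mL)
original = 0.8 * remeasured - 0.3 + rng.normal(0, 0.3, 272)      # biased original readings

slope, intercept = np.polyfit(original, remeasured, deg=1)       # remeasured ~ original
corrected = slope * original + intercept                         # corrected volumes

print(f"correction: Tvol_corrected = {slope:.2f} * Tvol_original + {intercept:.2f}")
```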

  16. Chronomics and "Glocal" (Combined Global and Local) Assessment of Human Life

    Science.gov (United States)

    Otsuka, K.; Cornélissen, G.; Norboo, T.; Takasugi, E.; Halberg, F.

    Most organisms, from cyanobacteria to mammals, are known to use circadian mechanisms to coordinate their activities with the natural 24-hour light/dark cycle and/or interacting socio-ecologic schedules. When the human clock gene was discovered in 1997, it was surprising to see that it was very similar in all earthly life. Recent findings suggest that organisms which evolved on Earth acquired many of the visible and invisible cycles of their habitat and/or of their cosmos. While circadian systems are well documented both time-macroscopically and time-microscopically, the temporal organization of physiological function is much more extensive. Long-term physiological quasi-ambulatory monitoring of blood pressure and heart rate, among other variables, such as those of the ECG and other tools of the neuroendocrinologic armamentarium, have already yielded information, among others, on circaseptan (about 7-day), transyears and cisyears (with periods slightly longer or shorter than one year, respectively), and circadecennian (about 10-year) cycles; the nervous system displays rhythms, chaos and trends, mapped as chronomes. Chronomes are time structures consisting of multifrequency rhythms covering frequencies over 18 orders of magnitude, elements of chaos, trends in chaotic and rhythmic endpoints, and other, as-yet unresolved variability. These resolvable time structures, chronomes, in us have counterparts around us, also consisting of rhythms, trends and chaos, as is increasingly being recognized. In 2000, we began a community-based study, relying on 7-day/24-hour monitoring of blood pressure as a public service. Our goal was the prevention of stroke and myocardial infarction and of the decline in cognitive function of the elderly in a community. Chronomic detection of elevated illness-risks aims at the prevention of diseases of individuals, such as myocardial infarctions and strokes, and, equally important, chronomics resolves illness of societies, such as crime and war

  17. Assessment of human resources management practices in Lebanese hospitals

    Directory of Open Access Journals (Sweden)

    Jamal Diana

    2009-11-01

    Full Text Available Abstract Background Sound human resources (HR) management practices are essential for retaining effective professionals in hospitals. Given the recruitment and retention reality of health workers in the twenty-first century, the role of HR managers in hospitals and those who combine the role of HR managers with other responsibilities should not be underestimated. The objective of this study is to assess the perception of HR managers about the challenges they face and the current strategies being adopted. The study also aims at assessing enabling factors including role, education, experience and HR training. Methods A cross-sectional survey design of HR managers (and those who combine their role as HR manager with other duties) in Lebanese hospitals was utilized. The survey included a combination of open- and close-ended questions. Questions included educational background, work experience, and demographics, in addition to questions about perceived challenges and key strategies being used. Quantitative data analysis included uni-variate analysis, whereas thematic analysis was used for open-ended questions. Results A total of 96 respondents from 61 hospitals responded. Respondents had varying levels of expertise in the realm of HR management. Thematic analysis revealed that challenges varied across respondents and participating hospitals. The most frequently reported challenge was poor employee retention (56.7%), lack of qualified personnel (35.1%), and lack of a system for performance evaluation (28.9%). Some of the strategies used to mitigate the above challenges included offering continuing education and training for employees (19.6%), improving salaries (14.4%), and developing retention strategies (10.3%). A mismatch between reported challenges and strategies was observed. Conclusion To enable hospitals to deliver good quality, safe healthcare, improving HR management is critical. There is a need for a cadre of competent HR managers who can fully

  18. Assessment of human resources management practices in Lebanese hospitals.

    Science.gov (United States)

    El-Jardali, Fadi; Tchaghchagian, Victoria; Jamal, Diana

    2010-01-01

    Sound human resources (HR) management practices are essential for retaining effective professionals in hospitals. Given the recruitment and retention reality of health workers in the twenty-first century, the role of HR managers in hospitals and those who combine the role of HR managers with other responsibilities should not be underestimated. The objective of this study is to assess the perception of HR managers about the challenges they face and the current strategies being adopted. The study also aims at assessing enabling factors including role, education, experience and HR training. A cross-sectional survey design of HR managers (and those who combine their role as HR manager with other duties) in Lebanese hospitals was utilized. The survey included a combination of open- and close-ended questions. Questions included educational background, work experience, and demographics, in addition to questions about perceived challenges and key strategies being used. Quantitative data analysis included uni-variate analysis, whereas thematic analysis was used for open-ended questions. A total of 96 respondents from 61 hospitals responded. Respondents had varying levels of expertise in the realm of HR management. Thematic analysis revealed that challenges varied across respondents and participating hospitals. The most frequently reported challenge was poor employee retention (56.7%), lack of qualified personnel (35.1%), and lack of a system for performance evaluation (28.9%). Some of the strategies used to mitigate the above challenges included offering continuing education and training for employees (19.6%), improving salaries (14.4%), and developing retention strategies (10.3%). A mismatch between reported challenges and strategies was observed. To enable hospitals to deliver good quality, safe healthcare, improving HR management is critical. There is a need for a cadre of competent HR managers who can fully assume these responsibilities and who can continuously improve

  19. Training-induced improvement of response selection and error detection in aging assessed by task switching: Effects of cognitive, physical and relaxation training

    Directory of Open Access Journals (Sweden)

    Patrick Darius Gajewski

    2012-05-01

    Full Text Available Cognitive control functions decline with increasing age. One of them is response selection, which forms the link between the goals and the motor system and is therefore crucial for performance outcomes in cognitive tasks. The present study examines if different types of group-based and trainer-guided training effectively enhance performance of older adults in a task switching task, and how this expected enhancement is reflected in electrophysiological brain activity, as measured in event-related potentials (ERPs). 141 healthy participants aged 65 years and older were randomly assigned to one of four groups: physical training (combined aerobic and strength training), cognitive training (paper-pencil and computer-aided), relaxation and wellness (social control group), and a no-contact control group that did not receive any intervention. Training sessions took place twice a week for 90 minutes for a period of 4 months. The results showed a greater improvement of performance for attendants of the cognitive training group compared to the other groups. This improvement was evident in a reduction of mixing costs in accuracy and intraindividual variability of speed, indexing improved maintenance of multiple task-sets in working memory and an enhanced coherence of neuronal processing. These findings were supported by event-related brain potentials (ERPs), which showed higher amplitudes in a number of potentials associated with response selection (N2), allocation of cognitive resources (P3b) and error detection (Ne). Taken together, our findings suggest neurocognitive plasticity of aging brains which can be stimulated by broad and multilayered cognitive training and assessed in detail by electrophysiological methods.

  20. The virtual approach to the assessment of skeletal injuries in human skeletal remains of forensic importance.

    Science.gov (United States)

    Urbanová, Petra; Ross, Ann H; Jurda, Mikoláš; Šplíchalová, Ivana

    2017-07-01

    While assessing skeletal injuries in human skeletal remains, forensic anthropologists are frequently presented with fractured, fragmented, or otherwise modified skeletal remains. The examination of evidence and the mechanisms of skeletal injuries often require that separate osseous elements be permanently or temporarily reassembled or reconstructed. If not dealt with properly, such reconstructions may impede accurate interpretation of the evidence. Nowadays, routine forensic examinations increasingly incorporate digital imaging technologies. As a result, a variety of PC-assisted imaging techniques, collectively referred to as the virtual approach, have been made available to treat fragmentary skeletal remains. The present study employs a 3D virtual approach to assess mechanisms of skeletal injuries, and provides an expert opinion of causative tools in three forensic cases involving human skeletal remains where integrity was compromised by multiple peri- or postmortem alterations resulting in fragmentation and/or incompleteness. Three fragmentary skulls and an incomplete set of foot bones with evidence of perimortem fractures (gunshot wounds) and sharp force trauma (saw marks) were digitized using a desktop laser scanner. The digitized skeletal elements were reassembled in the virtual workspace using functionalities incorporated in AMIRA(®) version 5.0 software, and simultaneously in real physical space by traditional reconstructive approaches. For this study, the original skeletal fragments were substituted by replicas built by 3D printing. Inter-method differences were quantified by mesh-based comparison after the physically reassembled elements had been re-digitized. Observed differences were further reinforced by visualizing local variations using colormaps and other advanced 3D visualization techniques. In addition, intra-operator and inter-operator error was computed. The results demonstrate that the importance of incorporating the virtual approach into the

  1. A Methodological Review of the Assessment of Humanism in Medical Students.

    Science.gov (United States)

    Buck, Era; Holden, Mark; Szauter, Karen

    2015-11-01

    Humanism is a complex construct that defies simplistic measurement. How educators measure humanism shapes understanding and implications for learners. This systematic review sought to address the following questions: How do medical educators assess humanism in medical students, and how does the measurement impact the understanding of humanism in undergraduate medical education (UME)? Using the IECARES (integrity, excellence, compassion, altruism, respect, empathy, and service) Gold Foundation framework, a search of English literature databases from 2000 to 2013 on assessment of humanism in medical students revealed more than 900 articles, of which 155 met criteria for analysis. Using descriptive statistics, articles and assessments were analyzed for construct measured, study design, assessment method, instrument type, perspective/source of assessment, student level, validity evidence, and national context. Of 202 assessments reported in 155 articles, 162 (80%) used surveys; 164 (81%) used student self-reports. One hundred nine articles (70%) included only one humanism construct. Empathy was the most prevalent construct present in 96 (62%); 49 (51%) of those used a single instrument. One hundred fifteen (74%) used exclusively quantitative data; only 48 (31%) used a longitudinal design. Construct underrepresentation was identified as a threat to validity in half of the assessments. Articles included 34 countries; 87 (56%) were from North America. Assessment of humanism in UME incorporates a limited scope of a complex construct, often relying on single quantitative measures from self-reported survey instruments. This highlights the need for multiple methods, perspectives, and longitudinal designs to strengthen the validity of humanism assessments.

  2. The human health programme under AMAP. AMAP Human Health Group. Arctic Monitoring and Assessment Program.

    Science.gov (United States)

    Hansen, J C

    1998-10-01

    The human health programme of the first phase of AMAP was planned at an international meeting held in Nuuk, Greenland, October 1992. As the most vulnerable period to adverse effects of contaminants is during fetal development, it was decided to concentrate on analyses of umbilical cord blood and maternal blood. The programme was designed as a core programme in which 150 sample pairs should be collected in each of the 8 arctic countries and analyzed for persistent organic pollutants (POPs) and heavy metals (mercury, lead and cadmium). As some essential elements such as copper, zinc and selenium interfere with heavy metal toxicity, these elements should also be analyzed. Additional analyses such as nickel and arsenic in urine, mercury in hair, and POPs in breast milk could be incorporated regionally according to specific local conditions. Radionuclides were not a major focus in the human programme as this issue was dealt with by AMAP's radiation group. Implementation of the programme was a problem in most of the countries due to lack of funding. However, an offer from Canada to analyze all contaminants in 50 samples from each country enabled the first comparative circumpolar study of human exposure to contaminants to be completed. The study confirmed that in general the most important source of exposure to both POPs and mercury is food of marine origin and that Greenlanders and Inuit from the Canadian Arctic, due to their traditional lifestyle, are among the most highly exposed populations in the Arctic. This is not a result of local pollution in Greenland and Canada, but is due to long range transport of persistent contaminants through the atmosphere and their biomagnification in the marine food chain. For these reasons the most important recommendation of the first AMAP assessment is that priority should be given to the expeditious completion of negotiations to establish protocols for the control of POPs and heavy metals under the Convention on Long Range

  3. Assessing sources of error in comparative analyses of primate behavior: Intraspecific variation in group size and the social brain hypothesis.

    Science.gov (United States)

    Sandel, Aaron A; Miller, Jordan A; Mitani, John C; Nunn, Charles L; Patterson, Samantha K; Garamszegi, László Zsolt

    2016-05-01

    Phylogenetic comparative methods have become standard for investigating evolutionary hypotheses, including in studies of human evolution. While these methods account for the non-independence of trait data due to phylogeny, they often fail to consider intraspecific variation, which may lead to biased or erroneous results. We assessed the degree to which intraspecific variation impacts the results of comparative analyses by investigating the "social brain" hypothesis, which has provided a framework for explaining complex cognition and large brains in humans. This hypothesis suggests that group life imposes a cognitive challenge, with species living in larger social groups having comparably larger neocortex ratios than those living in smaller groups. Primates, however, vary considerably in group size within species, a fact that has been ignored in previous analyses. When within-species variation in group size is high, the common practice of using a mean value to represent the species may be inappropriate. We conducted regression and resampling analyses to ascertain whether the relationship between neocortex ratio and group size across primate species persists after controlling for within-species variation in group size. We found that in a sample of 23 primates, 70% of the variation in group size was due to between-species variation. Controlling for within-species variation in group size did not affect the results of phylogenetic analyses, which continued to show a positive relationship between neocortex ratio and group size. Analyses restricted to non-monogamous primates revealed considerable intraspecific variation in group size, but the positive association between neocortex ratio and group size remained even after controlling for within-species variation in group size. Our findings suggest that the relationship between neocortex size and group size in primates is robust. In addition, our methods and associated computer code provide a way to assess and account for
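
    The following Python sketch illustrates the resampling logic referred to above, using invented data and ignoring the phylogenetic correction for brevity; it is not the authors' code or dataset.

```python
# Sketch: repeatedly draw one group-size value per species from its within-species
# distribution, refit the regression of neocortex ratio on log group size, and
# inspect the spread of the fitted slopes. All data below are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_species = 23
neocortex_ratio = rng.uniform(1.5, 3.5, size=n_species)
# One array of observed group sizes per species (unequal sample sizes allowed).
group_sizes = [rng.poisson(lam=10 + 20 * r, size=rng.integers(5, 30))
               for r in neocortex_ratio]

slopes = []
for _ in range(1000):
    sampled = np.array([rng.choice(g) for g in group_sizes])   # one draw per species
    slope, _ = np.polyfit(np.log(sampled + 1), neocortex_ratio, deg=1)
    slopes.append(slope)

print("slope 95% interval across resamples:", np.percentile(slopes, [2.5, 97.5]))
```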

  4. Effect of Human Error Factors on the Major Accidents of Tungsten Mining Heading Face

    Institute of Scientific and Technical Information of China (English)

    史德强; 靳波; 陆刚; 戚星; 曾旭; 陈振伟

    2016-01-01

    Human errors in mining and tunneling operations contribute to major mining accidents. To comprehensively evaluate human error at a tungsten mine heading face, this paper establishes a human error evaluation system by analyzing the relationships among man, machine and environment in the heading-face man-machine-environment system and identifying the influencing factors of human error. Applying the G1 method combined with an improved fuzzy algorithm to survey data, a human error evaluation model is built to identify the most important influencing factors. The case analysis shows that the factor with the highest impact in the model is the environment, followed by human factors and equipment factors. The evaluation model provides theoretical and practical support for preventing major accidents caused by human error at tungsten mine heading faces.
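
    As a sketch of the basic G1 (order-relation analysis) weighting step that such an evaluation model builds on, the following Python function computes indicator weights from an assumed importance ranking and expert ratio judgments; the factor names and ratio values are invented, and the paper's fuzzy refinement is not reproduced.

```python
# Sketch of G1 (order-relation analysis) weighting: given indicators sorted from
# most to least important and expert ratios r_k = w_k / w_{k+1}, recover weights.
def g1_weights(ratios):
    """ratios[i] is the expert judgment w_{i+1} / w_{i+2} for indicators already
    sorted by importance; returns weights w_1 >= ... >= w_n summing to 1."""
    # Closed-form value of the last weight, then back-substitution upwards.
    prod_terms, prod = [], 1.0
    for r in reversed(ratios):          # r_n, r_{n-1}, ..., r_2
        prod *= r
        prod_terms.append(prod)
    w_last = 1.0 / (1.0 + sum(prod_terms))
    weights = [w_last]
    for r in reversed(ratios):          # back out w_{n-1}, ..., w_1
        weights.append(weights[-1] * r)
    return list(reversed(weights))

# Hypothetical ranking: environment > human > equipment, with ratio judgments.
print(g1_weights([1.4, 1.2]))   # weights for [environment, human, equipment]
```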

  5. Research on key factors of human error proofing design for civil aircraft based on accidents/incidents

    Institute of Scientific and Technical Information of China (English)

    高扬; 王向章; 李晓旭

    2015-01-01

    Considering the influence of human error proofing design for civil aircraft on flight safety, 92 typical human-factor accident cases were selected from the world civil aviation safety accident/incident database. The element incident analysis method was applied for in-depth analysis, the important design factors that need to be considered in human error proofing design for civil aircraft were extracted, and a set of important design factors was established. Based on the man-machine-environment model of systems engineering, combined with relevant domestic and international aircraft design standards, an index system of important factors for human error proofing design was built. The fuzzy analytic hierarchy process (FAHP) was used to calculate the index weights, and 14 key human error proofing design factors affecting flight safety were determined. Finally, general requirements for human error proofing design of civil aircraft were proposed for these key factors, providing a reference for designs that better satisfy initial airworthiness requirements.

  6. Human health impact of Salmonella contamination in imported soybean products: A semiquantitative risk assessment

    DEFF Research Database (Denmark)

    Hald, Tine; Wingstrand, Anne; Brondsted, T.

    2006-01-01

    The objectives of our study were to estimate the number of reported cases of human salmonellosis in Denmark that can be attributed to the occurrence of Salmonella in soy-based animal feed and to assess whether certain serotypes can be considered of less importance to human health. The assessment ...

  7. Quantitative assessment of the accuracy for three interpolation techniques in kinematic analysis of human movement.

    Science.gov (United States)

    Howarth, Samuel J; Callaghan, Jack P

    2010-12-01

    Marker obstruction during human movement analyses requires interpolation to reconstruct missing kinematic data. This investigation quantifies errors associated with three interpolation techniques and varying interpolated durations. Right ulnar styloid kinematics from 13 participants performing manual wheelchair ramp ascent were reconstructed using linear, cubic spline and local coordinate system (LCS) interpolation from 11-90% of one propulsive cycle. Elbow angles (flexion/extension and pronation/supination) were calculated using real and reconstructed kinematics. Reconstructed kinematics produced maximum elbow flexion/extension errors of 37.1 (linear), 23.4 (spline) and 9.3 (LCS) degrees. Reconstruction errors are unavoidable [minimum errors of 6.7 mm (LCS); 0.29 mm (spline); 0.42 mm (linear)], emphasising that careful motion capture system setup must be performed to minimise data interpolation. For the observed movement, LCS-based interpolation (average error of 14.3 mm; correlation of 0.976 for elbow flexion/extension) was most suitable for reconstructing durations longer than 200 ms. Spline interpolation was superior for shorter durations.
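
    A minimal Python sketch of the comparison idea, reconstructing an artificially occluded span of a synthetic marker trajectory with linear and cubic-spline interpolation; the signal and gap length are invented, and the paper's LCS method and wheelchair data are not reproduced.

```python
# Sketch: remove a span of "marker" samples, fill it with linear and cubic-spline
# interpolation, and compare reconstruction errors against the known signal.
import numpy as np
from scipy.interpolate import CubicSpline

t = np.linspace(0.0, 2.0, 201)                   # 2 s of marker data at ~100 Hz
x = 0.05 * np.sin(2 * np.pi * 1.5 * t)           # synthetic marker trajectory (m)

gap = (t > 0.8) & (t < 1.2)                      # simulate ~400 ms of occlusion
t_known, x_known = t[~gap], x[~gap]

x_lin = np.interp(t[gap], t_known, x_known)      # linear reconstruction
x_spl = CubicSpline(t_known, x_known)(t[gap])    # cubic-spline reconstruction

print("max linear error (mm):", 1000 * np.max(np.abs(x_lin - x[gap])))
print("max spline error (mm):", 1000 * np.max(np.abs(x_spl - x[gap])))
```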

  8. Human health risk assessment related to contaminated land: st