WorldWideScience

Sample records for human error based

  1. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors’ research on incorporating Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when it comes to a game's core design, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error a challenge can be regarded as a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  2. Understanding Human Error Based on Automated Analyses

    Data.gov (United States)

    National Aeronautics and Space Administration — This is a report on a continuing study of automated analyses of experiential textual reports to gain insight into the causal factors of human errors in aviation...

  3. Analysis of measured data of human body based on error correcting frequency

    Science.gov (United States)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

    Anthropometry is the measurement of all parts of the human body surface, and the measured data form the basis for analysing and studying the human body, for establishing and modifying garment sizes, and for setting up and running online clothing stores. In this paper, several groups of measured data are collected, and the data errors are analysed by examining the error frequency and applying the analysis-of-variance method from mathematical statistics. The paper also determines the accuracy of the measured data and the difficulty of measuring each part of the human body, further investigates the causes of data errors, and summarizes the key points for minimizing errors. By analysing the measured data on the basis of error frequency, the paper provides reference points that can help promote the development of the garment industry.
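
    The error-frequency and analysis-of-variance steps described above can be sketched as follows; the body-part names, tolerance, and measurement values are invented for illustration and are not taken from the paper.

      # Hypothetical sketch: tally how often measurement errors exceed a tolerance
      # and test with one-way ANOVA whether mean errors differ between body parts.
      import numpy as np
      from scipy import stats

      tolerance_cm = 1.0                        # hypothetical allowed deviation
      errors = {                                # (measured - reference), in cm
          "chest": np.array([0.3, -0.5, 0.8, 1.2, -0.2]),
          "waist": np.array([0.1, 0.4, -0.3, 0.6, 0.2]),
          "hip":   np.array([1.1, -0.9, 1.4, 0.7, 1.3]),
      }

      for part, e in errors.items():
          freq = np.mean(np.abs(e) > tolerance_cm)   # error frequency per part
          print(f"{part}: error frequency = {freq:.2f}")

      f_stat, p_value = stats.f_oneway(*errors.values())
      print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.3f}")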

  4. In-plant reliability data base for nuclear plant components: a feasibility study on human error information

    Energy Technology Data Exchange (ETDEWEB)

    Borkowski, R.J.; Fragola, J.R.; Schurman, D.L.; Johnson, J.W.

    1984-03-01

    This report documents the procedure and final results of a feasibility study which examined the usefulness of nuclear plant maintenance work requests in the IPRDS as tools for understanding human error and its influence on component failure and repair. Developed in this study were (1) a set of criteria for judging the quality of a plant maintenance record set for studying human error; (2) a scheme for identifying human errors in the maintenance records; and (3) two taxonomies (engineering-based and psychology-based) for categorizing and coding human error-related events.

  5. Errors in Human Performance

    Science.gov (United States)

    1980-08-15

    [The abstract field of this record contains only fragmentary, OCR-garbled citations from the report's reference list, including Collins & Loftus, "A spreading activation theory of semantic processing"; a 1967 Brown University Press volume; LaBerge & Samuels, "Toward a theory of automatic information processing"; and Norman, D. A., "Errors in human performance" (Tech. Rep. 8004), University of California, San Diego, July 1980.]

  6. Managing human error in aviation.

    Science.gov (United States)

    Helmreich, R L

    1997-05-01

    Crew resource management (CRM) programs were developed to address team and leadership aspects of piloting modern airplanes. The goal is to reduce errors through teamwork. Human factors research and social, cognitive, and organizational psychology are used to develop programs tailored for individual airlines. Flight crews study accident case histories, group dynamics, and human error. Simulators provide pilots with the opportunity to solve complex flight problems. CRM in the simulator is called line-oriented flight training (LOFT). In automated cockpits, CRM promotes the idea of automation as a crew member. Cultural aspects of aviation include professional, business, and national culture. The aviation CRM model has been adapted for training surgeons and operating room staff in human factors.

  7. Human Reliability Analysis Method Based on Human Error Correcting Ability

    Institute of Scientific and Technical Information of China (English)

    陈炉云; 张裕芳

    2011-01-01

    Based on the time-sequence and error-correcting characteristics of human operator behaviour in man-machine systems, and combined with an analysis of the key performance shaping factors, the human reliability of the vessel chamber is investigated. Taking the time-sequence parameter and the error-correcting parameter of human errors as a basis, an operator behaviour shaping model for the man-machine system and a human error event tree model are established. Through the analysis of the operators' error-correcting ability, the quantitative model and the allowance theory in human reliability analysis are discussed. Finally, taking the monitoring task at the operation desk in the vessel chamber as an example, a human reliability analysis is conducted to quantitatively assess the mission reliability of the operator.

  8. Structural basis of error-prone replication and stalling at a thymine base by human DNA polymerase iota

    Energy Technology Data Exchange (ETDEWEB)

    Kirouac, Kevin N.; Ling, Hong; (UWO)

    2009-06-30

    Human DNA polymerase iota (pol iota) is a unique member of the Y-family polymerases, which preferentially misincorporates nucleotides opposite thymines (T) and halts replication at T bases. The structural basis of the high error rates remains elusive. We present three crystal structures of pol iota complexed with DNA containing a thymine base, paired with correct or incorrect incoming nucleotides. A narrowed active site supports a pyrimidine to pyrimidine mismatch and excludes Watson-Crick base pairing by pol iota. The template thymine remains in an anti conformation irrespective of incoming nucleotides. Incoming ddATP adopts a syn conformation with reduced base stacking, whereas incorrect dGTP and dTTP maintain anti conformations with normal base stacking. Further stabilization of dGTP by H-bonding with Gln59 of the finger domain explains the preferential T to G mismatch. A template 'U-turn' is stabilized by pol iota and the methyl group of the thymine template, revealing the structural basis of T stalling. Our structural and domain-swapping experiments indicate that the finger domain is responsible for pol iota's high error rates on pyrimidines and determines the incorporation specificity.

  9. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    Human errors are divided into two groups. The first group contains human errors which affect the reliability directly. The second group contains human errors which will not directly affect the reliability of the structure. The methodology used to estimate so-called reliability distributions on ba...

  10. Human error: A significant information security issue

    Energy Technology Data Exchange (ETDEWEB)

    Banks, W.W.

    1994-12-31

    One of the major threats to information security, human error, is often ignored or dismissed with statements such as "There is not much we can do about it." This type of thinking runs counter to reality, because studies have shown that, of all systems threats, human error has the highest probability of occurring and that, with professional assistance, human errors can be prevented or significantly reduced. Security analysts often overlook human error as a major threat; however, other professionals such as human factors engineers are trained to deal with these probabilistic occurrences and mitigate them. In a recent study, 55% of the respondents surveyed considered human error the most important security threat. Documentation exists to show that human error was a major cause of the consequences suffered at Three Mile Island, Chernobyl, Bhopal, and the Exxon tanker Valdez. Ironically, causes of human error can usually be quickly and easily eliminated.

  11. Human errors: their psychophysical bases and the Proprioceptive Diagnosis of Temperament and Character (DP-TC) as a measuring tool.

    Directory of Open Access Journals (Sweden)

    Tous Ral J.M.

    2014-07-01

    Full Text Available Human error is commonly differentiated into three types: errors in perception, errors in decision, and errors in sensation. This analysis is based on classical psychophysics (Fechner, 1860) and describes the errors of detection and perception. Decision-making errors are evaluated in terms of the theory of signal detection (McNicholson, 1974), and errors of sensation or sensitivity are evaluated in terms of proprioceptive information (van Beers, 2001). Each of these stages developed its own method of evaluation, which has influenced the development of ergonomics in the case of errors in perception and the verbal assessment of personality (stress, impulsiveness, burnout, etc.) in decision-making errors. Here we present the method we have developed, the Proprioceptive Diagnosis of Temperament and Character (DP-TC) test, for the specific assessment of errors of perception or expressivity, which is based on fine motor precision performance. Each of the described error types is interdependent with the others, in such a manner that observable stress in behaviour may be caused by: the inadequate performance of a task due to the perception of the person (i.e. from right to left for a right-handed person); performing a task that requires attentive decision-making too hastily; or undertaking a task that does not correspond to the prevailing disposition of the person.

  12. How social is error observation? The neural mechanisms underlying the observation of human and machine errors.

    Science.gov (United States)

    Desmet, Charlotte; Deschrijver, Eliane; Brass, Marcel

    2014-04-01

    Recently, it has been shown that the medial prefrontal cortex (MPFC) is involved in error execution as well as error observation. Based on this finding, it has been argued that recognizing each other's mistakes might rely on motor simulation. In the current functional magnetic resonance imaging (fMRI) study, we directly tested this hypothesis by investigating whether medial prefrontal activity in error observation is restricted to situations that enable simulation. To this end, we compared brain activity related to the observation of errors that can be simulated (human errors) with brain activity related to errors that cannot be simulated (machine errors). We show that medial prefrontal activity is not restricted to the observation of human errors but also occurs when observing the errors of a machine. In addition, our data indicate that the MPFC reflects a domain-general mechanism for monitoring violations of expectancies.

  13. The cost of human error intervention

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, C.T.; Banks, W.W.; Jones, E.D.

    1994-03-01

    DOE has directed that cost-benefit analyses be conducted as part of the review process for all new DOE orders. This new policy will have the effect of ensuring that DOE analysts can justify the implementation costs of the orders that they develop. We would like to argue that a cost-benefit analysis is merely one phase of a complete risk management program -- one that would more than likely start with a probabilistic risk assessment. The safety community defines risk as the probability of failure times the severity of consequence. An engineering definition of failure can be considered in terms of physical performance, as in mean-time-between-failure; or it can be thought of in terms of human performance, as in probability of human error. The severity of consequence of a failure can be measured along any one of a number of dimensions -- economic, political, or social. Clearly, an analysis along one dimension cannot be directly compared to another, but a set of cost-benefit analyses, based on a series of cost dimensions, can be extremely useful to managers who must prioritize their resources. Over the last two years, DOE has been developing a series of human factors orders directed at lowering the probability of human error -- or at least changing the distribution of those errors. The following discussion presents a series of cost-benefit analyses using historical events in the nuclear industry. However, we would first like to discuss some of the analytic cautions that must be considered when we deal with human error.
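
    The abstract's working definition, risk = probability of failure x severity of consequence evaluated along separate cost dimensions, can be illustrated with a minimal sketch; the probabilities, severities, and intervention cost below are invented for illustration only.

      # Illustrative sketch (invented numbers): risk = probability x severity,
      # evaluated along separate cost dimensions, plus a simple benefit/cost
      # ratio for an intervention that halves the human error probability.
      p_human_error = 1e-3                      # hypothetical probability per demand
      severity = {                              # hypothetical consequences per dimension
          "economic_usd": 5.0e6,
          "downtime_days": 30.0,
      }

      risk = {dim: p_human_error * sev for dim, sev in severity.items()}

      intervention_cost_usd = 1.0e3             # hypothetical cost of a human factors order
      benefit_usd = 0.5 * risk["economic_usd"]  # benefit of halving the error probability
      print("expected economic risk per demand:", risk["economic_usd"])
      print("benefit/cost ratio:", benefit_usd / intervention_cost_usd)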

  14. Human Error Mechanisms in Complex Work Environments

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1988-01-01

    will account for most of the action errors observed. In addition, error mechanisms appear to be intimately related to the development of high skill and know-how in a complex work context. This relationship between errors and human adaptation is discussed in detail for individuals and organisations...

  15. Design of Work Facilities to Reduce Human Error (Perancangan Fasilitas Kerja untuk Mereduksi Human Error)

    Directory of Open Access Journals (Sweden)

    Harmein Nasution

    2012-01-01

    Full Text Available Work equipment and environments that are not designed ergonomically can cause physical exhaustion in workers. As a result of that physical exhaustion, many defects can occur on the production lines due to human error, along with musculoskeletal complaints. To overcome these effects, we applied methods for analysing the workers' posture based on the SNQ (Standard Nordic Questionnaire), PLIBEL, QEC (Quick Exposure Check) and biomechanics. We then used those methods to design rolling machines and an egrek grip ergonomically, so that the defects on those production lines can be minimized.

  16. A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others]

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  17. Impact Propagation of Human Errors on Software Requirements Volatility

    Directory of Open Access Journals (Sweden)

    Zahra Askarinejadamiri

    2017-02-01

    Full Text Available Requirements volatility (RV) is one of the key sources of risk in software development and maintenance projects because of the frequent changes made to the software. Human faults and errors are major factors contributing to requirements change in software development projects. As such, predicting requirements volatility is a challenge for risk management in the software area. Previous studies focused only on certain aspects of human error in this area. This study specifically identifies and analyses the impact of human errors on requirements gathering and requirements volatility. It proposes a model based on responses to a survey questionnaire administered to 215 participants with experience in software requirements gathering. Exploratory factor analysis (EFA) and structural equation modelling (SEM) were used to analyse the correlation between human errors and requirements volatility. The results of the analysis confirm this correlation and show that human actions have a higher impact on RV than human perceptions. The study provides insights that help software management understand the socio-technical aspects of requirements volatility in order to control risk. Human actions and perceptions are, respectively, root causes contributing to the human errors that lead to RV.

  18. Effect of Transducer Orientation on Errors in Ultrasound Image-Based Measurements of Human Medial Gastrocnemius Muscle Fascicle Length and Pennation.

    Science.gov (United States)

    Bolsterlee, Bart; Gandevia, Simon C; Herbert, Robert D

    2016-01-01

    Ultrasound imaging is often used to measure muscle fascicle lengths and pennation angles in human muscles in vivo. Theoretically, the most accurate measurements are made when the transducer is oriented so that the image plane aligns with the muscle fascicles and, for measurements of pennation, when the image plane also intersects the aponeuroses perpendicularly. However, this orientation is difficult to achieve and usually there is some degree of misalignment. Here, we used simulated ultrasound images based on three-dimensional models of the human medial gastrocnemius, derived from magnetic resonance and diffusion tensor images, to describe the relationship between transducer orientation and measurement errors. With the transducer oriented perpendicular to the surface of the leg, the error in measurement of fascicle lengths was about 0.4 mm per degree of misalignment of the ultrasound image with the muscle fascicles. If the transducer is then tipped by 20°, the error increases to 1.1 mm per degree of misalignment. For a given degree of misalignment of muscle fascicles with the image plane, the smallest absolute error in fascicle length measurements occurs when the transducer is held perpendicular to the surface of the leg. Misalignment of the transducer with the fascicles may cause fascicle length measurements to be underestimated or overestimated. Contrary to widely held beliefs, it is shown that pennation angles are always overestimated if the image is not perpendicular to the aponeurosis, even when the image is perfectly aligned with the fascicles. An analytical explanation is provided for this finding.

  19. Human error in daily intensive nursing care

    Directory of Open Access Journals (Sweden)

    Sabrina da Costa Machado Duarte

    2015-12-01

    Full Text Available Objectives: to identify the errors in daily intensive nursing care and analyze them according to the theory of human error. Method: quantitative, descriptive and exploratory study, undertaken at the Intensive Care Center of a hospital in the Brazilian Sentinel Hospital Network. The participants were 36 professionals from the nursing team. The data were collected through semi-structured interviews, observation and lexical analysis in the ALCESTE software. Results: human error in nursing care can be related to the system approach, through active faults and latent conditions. The active faults are represented by errors in medication administration and by not raising the bedside rails. The latent conditions can be related to communication difficulties in the multiprofessional team, lack of standards and institutional routines, and the absence of material resources. Conclusion: the errors identified interfere with nursing care and the clients' recovery and can cause harm. Nevertheless, they are treated as common events inherent in daily practice. The need to acknowledge these events is emphasized, stimulating a safety culture at the institution.

  20. APJE-SLIM Based Method for Marine Human Error Probability Estimation

    Institute of Scientific and Technical Information of China (English)

    席永涛; 陈伟炯; 夏少生; 张晓东

    2011-01-01

    Safety is the eternal theme of the shipping industry. Research shows that human error is the main cause of maritime accidents. In order to study marine human errors, the performance shaping factors (PSF) are discussed and the human error probability (HEP) is estimated under the influence of those PSF. Based on a detailed investigation of human errors in collision avoidance behaviour, the most critical task in navigation, and of the associated PSF, the human reliability of mariners during collision avoidance is analysed using a combination of APJE and SLIM. The results show that PSF such as fatigue and health status, knowledge, experience and training, task complexity, and safety management and organizational effectiveness have varying influences on the HEP; if the level of these PSF is improved, the HEP can be greatly decreased. Using APJE to determine the absolute human error probabilities at the endpoints solves the problem that reference-point probabilities are hard to obtain in the SLIM method, and yields marine HEPs under different kinds and levels of PSF influence.
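
    As a rough illustration of the APJE-SLIM combination described above: a Success Likelihood Index (SLI) is formed as a weighted sum of PSF ratings, log10(HEP) is assumed to be linear in the SLI, and the two endpoint HEPs supplied by absolute probability judgement (APJE) calibrate that line. The weights, ratings, and anchor values below are invented and are not the paper's data.

      # Hedged sketch of a SLIM calculation calibrated by two APJE anchor points.
      import math

      psf_weights = {"fatigue": 0.30, "training": 0.25,
                     "task_complexity": 0.25, "safety_management": 0.20}
      ratings = {"fatigue": 0.4, "training": 0.7,
                 "task_complexity": 0.3, "safety_management": 0.6}   # 0 worst .. 1 best

      sli = sum(psf_weights[k] * ratings[k] for k in psf_weights)

      hep_worst, hep_best = 1e-1, 1e-4                   # APJE anchors at SLI = 0 and SLI = 1
      a = math.log10(hep_best) - math.log10(hep_worst)   # slope over SLI in [0, 1]
      b = math.log10(hep_worst)                          # intercept at SLI = 0
      hep = 10 ** (a * sli + b)
      print(f"SLI = {sli:.2f}, estimated HEP = {hep:.2e}")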

  1. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage the human errors likely to occur in such operations. Methods: This study was performed at a demining site in a war zone in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools for clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using HTA, and the related possible errors were assessed using ATHEANA. Results: The de-mining task was composed of four main operations: primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action taken, errors in the neutralizing operation, and environmental explosions. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in de-mining operations can be attributed to factors such as poor weather and operating conditions (such as outdoor work), inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in de-mining operations, the aforementioned factors should be managed properly.

  2. Human error mitigation initiative (HEMI): summary report.

    Energy Technology Data Exchange (ETDEWEB)

    Stevens, Susan M.; Ramos, M. Victoria; Wenner, Caren A.; Brannon, Nathan Gregory

    2004-11-01

    Despite continuing efforts to apply existing hazard analysis methods and comply with requirements, human errors persist across the nuclear weapons complex. Due to a number of factors, current retroactive and proactive methods to understand and minimize human error are highly subjective, inconsistent in numerous dimensions, and difficult to characterize as thorough. The proposed alternative method begins by leveraging historical data to understand what the systemic issues are and where resources need to be brought to bear proactively to minimize the risk of future occurrences. An illustrative analysis was performed using existing incident databases specific to Pantex weapons operations; it indicated systemic issues associated with operating procedures, which undergo notably less development rigor than other task elements such as tooling and process flow. Recommended future steps to improve the objectivity, consistency, and thoroughness of hazard analysis and mitigation were delineated.

  3. Human error probability quantification method based on Bayesian information fusion

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 谢红卫; 宫二玲

    2011-01-01

    The quantification of human error probability is investigated. First, the data sources that can be used in quantifying human error probability are introduced, including generic data, expert data, simulation data, and field data, and their characteristics are analysed. Second, the basic idea of Bayesian information fusion is described, emphasizing two key problems: the construction of prior distributions and the determination of the fusion weights. Finally, a new method is presented that quantifies human error probability based on Bayesian information fusion: the first three kinds of data are treated as prior information and fused to form the prior distribution, and the Bayesian method is then used to combine this with the field data to obtain the posterior distribution, from which the human error probability is quantified. A worked example demonstrates the process of the method and its validity.
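
    A minimal sketch of the fusion idea, using a conjugate Beta-Binomial setup as a stand-in for the paper's formulation: the generic, expert, and simulation sources are pooled into one prior with weights, and the field data update it to a posterior. The parameter-averaging shortcut, the weights, and all numbers are assumptions for illustration only.

      # Hedged sketch: fuse three prior sources into a Beta prior, then update
      # with field data (k errors in n opportunities) to estimate the HEP.
      priors = {                 # Beta(alpha, beta) beliefs about the HEP
          "generic":    (2.0, 200.0),
          "expert":     (1.0,  50.0),
          "simulation": (3.0, 400.0),
      }
      weights = {"generic": 0.4, "expert": 0.3, "simulation": 0.3}

      # fused prior via weighted averaging of the Beta parameters (a simplifying shortcut)
      alpha0 = sum(weights[s] * a for s, (a, b) in priors.items())
      beta0  = sum(weights[s] * b for s, (a, b) in priors.items())

      k, n = 2, 500              # field data: observed errors / opportunities
      alpha_post, beta_post = alpha0 + k, beta0 + (n - k)
      print(f"posterior mean HEP = {alpha_post / (alpha_post + beta_post):.4f}")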

  4. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  5. Advanced MMIS Toward Substantial Reduction in Human Errors in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Seong, Poong Hyun; Kang, Hyun Gook [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Na, Man Gyun [Chosun Univ., Gwangju (Korea, Republic of); Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of); Heo, Gyunyoung [Kyung Hee Univ., Yongin (Korea, Republic of); Jung, Yoensub [Korea Hydro and Nuclear Power Co., Ltd., Daejeon (Korea, Republic of)

    2013-04-15

    This paper aims to give an overview of the methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also greatly lower public acceptance of nuclear power. We have to recognize that human errors are always possible, since humans are not inherently perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Building on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately to enhance the safety of NPPs.

  6. ADVANCED MMIS TOWARD SUBSTANTIAL REDUCTION IN HUMAN ERRORS IN NPPS

    Directory of Open Access Journals (Sweden)

    POONG HYUN SEONG

    2013-04-01

    Full Text Available This paper aims to give an overview of the methods to inherently prevent human errors and to effectively mitigate the consequences of such errors by securing defense-in-depth during plant management through the advanced man-machine interface system (MMIS). It is needless to stress the significance of human error reduction during an accident in nuclear power plants (NPPs). Unexpected shutdowns caused by human errors not only threaten nuclear safety but also greatly lower public acceptance of nuclear power. We have to recognize that human errors are always possible, since humans are not inherently perfect, particularly under stressful conditions. However, we have the opportunity to improve this situation through advanced information and communication technologies on the basis of lessons learned from our experiences. As important lessons, the authors explain key issues associated with automation, the man-machine interface, operator support systems, and procedures. Building on this investigation, we outline the concept and technical factors needed to develop advanced automation, operation and maintenance support systems, and computer-based procedures using wired/wireless technology. It should be noted that the ultimate responsibility for nuclear safety obviously belongs to humans, not to machines. Therefore, safety culture, including education and training, which is a kind of organizational factor, should be emphasized as well. With regard to safety culture for human error reduction, several issues that we are facing these days are described. We expect the ideas of the advanced MMIS proposed in this paper to guide the future direction of related research and ultimately to enhance the safety of NPPs.

  7. Applying lessons learned to enhance human performance and reduce human error for ISS operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-09-01

    A major component of reliability, safety, and mission success for space missions is ensuring that the humans involved (flight crew, ground crew, mission control, etc.) perform their tasks and functions as required. This includes compliance with training and procedures during normal conditions, and successful compensation when malfunctions or unexpected conditions occur. A very significant issue that affects human performance in space flight is human error. Human errors can invalidate carefully designed equipment and procedures. If certain errors combine with equipment failures or design flaws, mission failure or loss of life can occur. The control of human error during operation of the International Space Station (ISS) will be critical to the overall success of the program. As experience from Mir operations has shown, human performance plays a vital role in the success or failure of long duration space missions. The Department of Energy's Idaho National Engineering and Environmental Laboratory (INEEL) has developed a systematic approach to enhance human performance and reduce human errors for ISS operations. This approach is based on the systematic identification and evaluation of lessons learned from past space missions such as Mir to enhance the design and operation of ISS. This paper describes previous INEEL research on human error sponsored by NASA and how it can be applied to enhance human reliability for ISS.

  8. Human reliability, error, and human factors in power generation

    CERN Document Server

    Dhillon, B S

    2014-01-01

    Human reliability, error, and human factors in the area of power generation have been receiving increasing attention in recent years. Each year billions of dollars are spent in the area of power generation to design, construct/manufacture, operate, and maintain various types of power systems around the globe, and such systems often fail due to human error. This book compiles various recent results and data into one volume, and eliminates the need to consult many diverse sources to obtain vital information.  It enables potential readers to delve deeper into a specific area, providing the source of most of the material presented in references at the end of each chapter. Examples along with solutions are also provided at appropriate places, and there are numerous problems for testing the reader’s comprehension.  Chapters cover a broad range of topics, including general methods for performing human reliability and error analysis in power plants, specific human reliability analysis methods for nuclear power pl...

  9. Cognition Analysis of Human Errors in ATC Based on HERA-JANUS Model%基于HERA-JANUS模型的空管人误认知分析

    Institute of Scientific and Technical Information of China (English)

    吴聪; 解佳妮; 杜红兵; 袁乐平

    2012-01-01

    Classification and analysis of human errors are the basis for studying human factors in the ATM system. Drawing on professional ATM knowledge and cognitive psychology theory, the principle and workflow of the HERA-JANUS model, developed by the European Aviation Safety Agency and the Federal Aviation Administration, are analysed in detail so that controllers' errors can be studied more systematically. An unsafe ATC incident in China was investigated with the model, and three human errors made by a controller in this case were identified. These errors were classified from three respects: human error type, human error cognition, and influencing factors. Twenty-one causal factors of human error were ultimately obtained for the unsafe occurrence. The results show that the model can analyse controllers' errors comprehensively and at a deep level, and that its classification scheme is convenient for compiling statistics on controllers' errors.

  10. Research Workshop on Expert Judgment, Human Error, and Intelligent Systems

    OpenAIRE

    Silverman, Barry G.

    1993-01-01

    This workshop brought together 20 computer scientists, psychologists, and human-computer interaction (HCI) researchers to exchange results and views on human error and judgment bias. Human error is typically studied when operators undertake actions, but judgment bias is an issue in thinking rather than acting. Both topics are generally ignored by the HCI community, which is interested in designs that eliminate human error and bias tendencies. As a result, almost no one at the workshop had met...

  11. Modeling human response errors in synthetic flight simulator domain

    Science.gov (United States)

    Ntuen, Celestine A.

    1992-01-01

    This paper presents a control theoretic approach to modeling human response errors (HRE) in the flight simulation domain. The human pilot is modeled as a supervisor of a highly automated system. The synthesis uses the theory of optimal control pilot modeling for integrating the pilot's observation error and the error due to the simulation model (experimental error). Methods for solving the HRE problem are suggested. The models will be verified experimentally in a flight handling quality simulation.

  12. Information systems and human error in the lab.

    Science.gov (United States)

    Bissell, Michael G

    2004-01-01

    Health system costs in clinical laboratories are incurred daily due to human error. Indeed, a major impetus for automating clinical laboratories has always been the opportunity it presents to simultaneously reduce cost and improve quality of operations by decreasing human error. But merely automating these processes is not enough. To the extent that introduction of these systems results in operators having less practice in dealing with unexpected events or becoming deskilled in problem-solving, however, new kinds of error will likely appear. Clinical laboratories could potentially benefit by integrating findings on human error from modern behavioral science into their operations. Fully understanding human error requires a deep understanding of human information processing and cognition. Predicting and preventing negative consequences requires application of this understanding to laboratory operations. Although the occurrence of a particular error at a particular instant cannot be absolutely prevented, human error rates can be reduced. The following principles are key: an understanding of the process of learning in relation to error; understanding the origin of errors, since this knowledge can be used to reduce their occurrence; optimal systems should be forgiving to the operator by absorbing errors, at least for a time; although much is known by industrial psychologists about how to write operating procedures and instructions in ways that reduce the probability of error, this expertise is hardly ever put to use in the laboratory; and a feedback mechanism must be designed into the system that enables the operator to recognize in real time that an error has occurred.

  13. The Detection of Human Spreadsheet Errors by Humans versus Inspection (Auditing) Software

    CERN Document Server

    Aurigemma, Salvatore

    2010-01-01

    Previous spreadsheet inspection experiments have had human subjects look for seeded errors in spreadsheets. In this study, subjects attempted to find errors in human-developed spreadsheets to avoid the potential artifacts created by error seeding. Human subject success rates were compared to the success rates for error flagging by spreadsheet static analysis tools (SSATs) applied to the same spreadsheets. The human error detection results were comparable to those of studies using error seeding. However, Excel Error Check and Spreadsheet Professional were almost useless for correctly flagging natural (human) errors in this study.

  14. Promoting safety improvements via potential human error audits

    Energy Technology Data Exchange (ETDEWEB)

    Simpson, G.C. (International Mining Consultants (United Kingdom). Ergonomics and Safety Management)

    1994-08-01

    It has become increasingly recognised that human error plays a major role in mining accident causation. Moreover, it is also recognised that this aspect of accident causation has received relatively little systematic attention in the past. Recent studies within British Coal have succeeded in developing a Potential Human Error Audit as a means of targeting accident prevention initiatives. 7 refs., 2 tabs.

  15. Error detection based on MB types

    Institute of Scientific and Technical Information of China (English)

    FANG Yong; JEONG JeChang; WU ChengKe

    2008-01-01

    This paper proposes a method of error detection based on macroblock (MB) types for video transmission. For decoded inter MBs, the absolute values of the received residues are accumulated. At the same time, the intra textural complexity of the current MB is estimated from that of the motion-compensated reference block. We compare the inter residue with the intra textural complexity: if the inter residue is larger than the intra textural complexity by a predefined threshold, the MB is considered to be erroneous and the errors are concealed. For decoded intra MBs, the connective smoothness of the current MB with neighboring MBs is tested to find erroneous MBs. Simulation results show that the new method can remove seriously corrupted MBs efficiently. Combined with error concealment, the new method improves the recovered quality at the decoder by about 0.5-1 dB.
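
    The inter-MB decision rule described above can be sketched as follows; the complexity proxy, threshold, and toy data are assumptions for illustration rather than the paper's exact definitions.

      # Hypothetical sketch: flag a decoded inter macroblock as erroneous when its
      # mean absolute residue exceeds the textural complexity of the motion
      # compensated reference block by a threshold.
      import numpy as np

      def textural_complexity(block: np.ndarray) -> float:
          # assumed proxy: mean absolute deviation of the reference block
          return float(np.mean(np.abs(block - block.mean())))

      def inter_mb_is_erroneous(residue: np.ndarray, ref_block: np.ndarray,
                                threshold: float = 8.0) -> bool:
          return float(np.mean(np.abs(residue))) > textural_complexity(ref_block) + threshold

      # toy data: a smooth (low-complexity) reference block, unusually large residues
      ref = np.tile(np.arange(16, dtype=float), (16, 1))
      residue = np.random.default_rng(0).normal(0.0, 30.0, size=(16, 16))
      print(inter_mb_is_erroneous(residue, ref))   # -> True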

  16. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system designed to improve the safety and effectiveness of the vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors as well as operator errors.

  17. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    Energy Technology Data Exchange (ETDEWEB)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-08-15

    The operating environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces that are based on computer technologies. MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operating actions in NPP Advanced MCRs are performed through them. Using soft controls such as mouse control and touch screens, operators can select a specific screen, then choose the controller, and finally manipulate the given devices. Because the interfaces of soft controls differ from those of hardwired conventional controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered in the HRA for Advanced MCRs. In other words, new human error modes should be considered for interface management tasks, such as navigation tasks and icon (device) selection tasks on monitors, and a new HRA framework that takes these newly generated human error modes into account is needed. In this paper, a conceptual framework for an HRA method for the evaluation of soft control execution human error in Advanced MCRs is suggested by analyzing soft control tasks.

  18. Selecting Human Error Types for Cognitive Modelling and Simulation

    NARCIS (Netherlands)

    Mioch, T.; Osterloh, J.P.; Javaux, D.

    2010-01-01

    This paper presents a method that has enabled us to make a selection of error types and error production mechanisms relevant to the HUMAN European project, and discusses the reasons underlying those choices. We claim that this method has the advantage that it is very exhaustive in determining the re

  19. Research on the technology for processing errors of photoelectric theodolite based on error design idea

    Science.gov (United States)

    Guo, Xiaosong; Pu, Pengcheng; Zhou, Zhaofa; Wang, Kunming

    2012-10-01

    The errors existing in a photoelectric theodolite were studied according to the error design idea, that is, the correction of theodolite errors is achieved by actively analysing the effect of the errors instead of passively processing the error-contaminated data. For the shafting error, the relationship between the different errors was analysed with an error model based on coordinate transformation, and a real-time error compensation method based on the normal-reversed measuring method and automatic levelness detection was proposed. For the eccentric error of the dial, the idea of eccentric residual error was presented and its influence on measuring precision was studied; a dynamic compensation model was then built so that the influence of the dial's eccentric error on measuring precision can be eliminated. For the centering deviation in the process of measuring angles, a compensation method based on the error model was proposed, in which the centering deviation is detected automatically using computer vision. The above methods, based on the error design idea, effectively reduce the influence of errors on the measurement results through software compensation and improve the degree of automation of the theodolite's azimuth angle measurement without degrading precision.

  20. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
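
    The detector-synthesis idea described above (a regression model trained to predict a stencil update, with large residuals flagged) can be sketched as follows; this is not the SORREL code, and the 3-point stencil, threshold, and injected fault are invented for illustration.

      # Hedged sketch: fit a linear model of a 3-point stencil update, then flag
      # grid points whose observed update deviates from the prediction.
      import numpy as np

      rng = np.random.default_rng(1)
      u = rng.random(200)                                              # toy 1-D grid
      u_new = 0.25 * np.roll(u, 1) + 0.5 * u + 0.25 * np.roll(u, -1)   # stencil update

      X = np.column_stack([np.roll(u, 1), u, np.roll(u, -1)])
      coef, *_ = np.linalg.lstsq(X, u_new, rcond=None)                 # learned detector

      faulty = u_new.copy()
      faulty[42] += 10.0                                               # injected soft error

      residual = np.abs(X @ coef - faulty)
      print("flagged indices:", np.where(residual > 1e-6)[0])          # -> [42]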

  1. Development of a Drosophila cell-based error correction assay

    Directory of Open Access Journals (Sweden)

    Jeffrey D. Salemi

    2013-07-01

    Full Text Available Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due to the activity of kinesin-14 (Ncd) when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and CIN.

  2. Development of a Drosophila cell-based error correction assay.

    Science.gov (United States)

    Salemi, Jeffrey D; McGilvray, Philip T; Maresca, Thomas J

    2013-01-01

    Accurate transmission of the genome through cell division requires microtubules from opposing spindle poles to interact with protein super-structures called kinetochores that assemble on each sister chromatid. Most kinetochores establish erroneous attachments that are destabilized through a process called error correction. Failure to correct improper kinetochore-microtubule (kt-MT) interactions before anaphase onset results in chromosomal instability (CIN), which has been implicated in tumorigenesis and tumor adaptation. Thus, it is important to characterize the molecular basis of error correction to better comprehend how CIN occurs and how it can be modulated. An error correction assay has been previously developed in cultured mammalian cells in which incorrect kt-MT attachments are created through the induction of monopolar spindle assembly via chemical inhibition of kinesin-5. Error correction is then monitored following inhibitor wash out. Implementing the error correction assay in Drosophila melanogaster S2 cells would be valuable because kt-MT attachments are easily visualized and the cells are highly amenable to RNAi and high-throughput screening. However, Drosophila kinesin-5 (Klp61F) is unaffected by available small molecule inhibitors. To overcome this limitation, we have rendered S2 cells susceptible to kinesin-5 inhibitors by functionally replacing Klp61F with human kinesin-5 (Eg5). Eg5 expression rescued the assembly of monopolar spindles typically caused by Klp61F depletion. Eg5-mediated bipoles collapsed into monopoles due, in part, to kinesin-14 (Ncd) activity when treated with the kinesin-5 inhibitor S-trityl-L-cysteine (STLC). Furthermore, bipolar spindles reassembled and error correction was observed after STLC wash out. Importantly, error correction in Eg5-expressing S2 cells was dependent on the well-established error correction kinase Aurora B. This system provides a powerful new cell-based platform for studying error correction and CIN.

  3. Modeling Human Error Mechanism for Soft Control in Advanced Control Rooms (ACRs)

    Energy Technology Data Exchange (ETDEWEB)

    Aljneibi, Hanan Salah Ali [Khalifa Univ., Abu Dhabi (United Arab Emirates); Ha, Jun Su; Kang, Seongkeun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-10-15

    To achieve the switch from conventional analog-based design to digital design in ACRs, a large number of manual operating controls and switches have to be replaced by a few common multi-function devices, collectively called the soft control system. The soft controls in APR-1400 ACRs are classified into safety-grade and non-safety-grade soft controls; each was designed using different and independent input devices in the ACRs. Operations using soft controls require operators to perform new tasks that were not necessary with conventional controls, such as navigating computerized displays to monitor plant information and control devices. These computerized displays and soft controls may make operations more convenient, but they might also cause new types of human error. In this study, the human error mechanism during the use of soft controls is studied and modeled so that it can be used for the analysis and enhancement of human performance (or the reduction of human errors) during NPP operation. The developed model could contribute to many applications for improving human performance, HMI designs, and operators' training programs in ACRs. The model of the human error mechanism for soft control is based on the following assumptions: a human operator has a certain capacity of cognitive resources, and if the resources required by the operating tasks are greater than the resources invested by the operator, human error (or poor human performance) is likely to occur, especially a 'slip'; good HMI (Human-Machine Interface) design decreases the required resources; the operator's skillfulness decreases the required resources; and high vigilance increases the invested resources.
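
    The stated assumptions (error becomes likely when required cognitive resources exceed invested resources; good HMI and skillfulness reduce the required resources; vigilance raises the invested resources) can be put into a toy quantitative form. The scales and coefficients below are invented for illustration and are not the study's model parameters.

      # Hypothetical quantification of the resource-balance assumptions above.
      def required_resources(task_demand, hmi_quality, skillfulness):
          # better HMI (0..1) and higher skill (0..1) reduce the demand
          return task_demand * (1.0 - 0.4 * hmi_quality) * (1.0 - 0.4 * skillfulness)

      def invested_resources(capacity, vigilance):
          # the operator invests a vigilance-dependent fraction of a fixed capacity
          return capacity * vigilance

      def slip_likely(task_demand, hmi_quality, skillfulness, capacity, vigilance):
          return (required_resources(task_demand, hmi_quality, skillfulness)
                  > invested_resources(capacity, vigilance))

      # a demanding soft-control task, mediocre HMI, low vigilance
      print(slip_likely(task_demand=8.0, hmi_quality=0.5, skillfulness=0.6,
                        capacity=10.0, vigilance=0.4))   # -> True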

  4. Avoiding Human Error in Mission Operations: Cassini Flight Experience

    Science.gov (United States)

    Burk, Thomas A.

    2012-01-01

    Operating spacecraft is a never-ending challenge and the risk of human error is ever-present. Many missions have been significantly affected by human error on the part of ground controllers. The Cassini mission at Saturn has not been immune to human error, but Cassini operations engineers use tools and follow processes that find and correct most human errors before they reach the spacecraft. What is needed are skilled engineers with good technical knowledge, good interpersonal communication, quality ground software, regular peer reviews, up-to-date procedures, as well as careful attention to detail and the discipline to test and verify all commands that will be sent to the spacecraft. Two areas of special concern are changes to flight software and response to in-flight anomalies. The Cassini team has a lot of practical experience in all these areas and they have found that well-trained engineers with good tools who follow clear procedures can catch most errors before they get into command sequences to be sent to the spacecraft. Finally, having a robust and fault-tolerant spacecraft that allows ground controllers excellent visibility of its condition is the most important way to ensure human error does not compromise the mission.

  5. Application of human error analysis to aviation and space operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  6. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, Mariet; Weegels, M.

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication,

  7. Error detection in spoken human-machine interaction

    NARCIS (Netherlands)

    Krahmer, E.; Swerts, M.; Theune, M.; Weegels, M.

    2001-01-01

    Given the state of the art of current language and speech technology, errors are unavoidable in present-day spoken dialogue systems. Therefore, one of the main concerns in dialogue design is how to decide whether or not the system has understood the user correctly. In human-human communication, dial

  8. ROBOT'S MOTION ERROR AND ONLINE COMPENSATION BASED ON FORCE SENSOR

    Institute of Scientific and Technical Information of China (English)

    GAN Fangjian; LIU Zhengshi; REN Chuansheng; ZHANG Ping

    2007-01-01

    A robot's dynamic motion error and its on-line compensation based on a multi-axis force sensor are dealt with. The causes of the error are revealed and the relations governing it are derived. A motion equation of the robot's end-effector including the error is established; an error matrix and an error compensation matrix of the motion equation are then defined. An on-line error compensation method is put forward to decrease the displacement error, which is on the order of millimeters, as shown by simulation results for a PUMA562 robot.

  9. Research on key factors of human error proofing design for civil aircraft based on accidents/incidents

    Institute of Scientific and Technical Information of China (English)

    高扬; 王向章; 李晓旭

    2015-01-01

    Aiming at the influence of human error proofing design for civil aircraft on flight safety, 92 typical accident cases caused by human factors were selected from the world civil aviation safety accidents/incidents database. The element incident analysis method was applied to conduct a deep analysis, the important design factors which need to be considered in the human error proofing design for civil aircraft were summarized, and the set of important design factors was established. Based on the man-machine-environment model in systems engineering, and combined with the relevant standards for aircraft design at home and abroad, an index system of important factors for human error proofing design for civil aircraft was built. The FAHP method was used to calculate the weights of the indexes, and 14 key factors of human error proofing design that influence flight safety were determined. Finally, general requirements of human error proofing design for civil aircraft were proposed against the key factors. This can provide a reference for the human error proofing design of civil aircraft to better meet the requirements of initial airworthiness.

  10. Error model identification of inertial navigation platform based on errors-in-variables model

    Institute of Scientific and Technical Information of China (English)

    Liu Ming; Liu Yu; Su Baoku

    2009-01-01

    Because the real input acceleration cannot be obtained during the error model identification of an inertial navigation platform, both the input and output data contain noise. In this case, the conventional regression model and the least squares (LS) method will result in bias. Based on the models of inertial navigation platform error and observation error, the errors-in-variables (EV) model and the total least squares (TLS) method are proposed to identify the error model of the inertial navigation platform. The estimation precision is improved and the result is better than that of the conventional regression model based LS method. The simulation results illustrate the effectiveness of the proposed method.
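
    As a worked illustration of the errors-in-variables idea, the Python sketch below compares ordinary least squares with total least squares computed from the SVD of the augmented data matrix when both the regressors and the observations are noisy. The linear model, noise levels, and coefficients are invented for illustration and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p = 500, 2
        true_coef = np.array([0.8, -1.5])

        # Noise-free "inputs" (e.g., accelerations) and outputs, then add noise to both.
        X_true = rng.normal(size=(n, p))
        y_true = X_true @ true_coef
        X = X_true + 0.2 * rng.normal(size=(n, p))   # input noise (errors in variables)
        y = y_true + 0.2 * rng.normal(size=n)        # output noise

        # Ordinary least squares: biased toward zero when X itself is noisy.
        beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)

        # Total least squares: SVD of the augmented matrix [X | y].
        _, _, Vt = np.linalg.svd(np.column_stack([X, y]), full_matrices=False)
        v = Vt[-1]                      # right singular vector of the smallest singular value
        beta_tls = -v[:p] / v[p]

        print("LS :", beta_ls)   # noticeably attenuated estimates
        print("TLS:", beta_tls)  # closer to the true coefficients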

  11. Applications of human error analysis to aviation and space operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-07-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) we have been working to apply methods of human error analysis to the design of complex systems. We have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. We are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. These applications lead to different requirements when compared with HRAs performed as part of a PSA. For example, because the analysis will begin early during the design stage, the methods must be usable when only partial design information is available. In addition, the ability to perform numerous "what if" analyses to identify and compare multiple design alternatives is essential. Finally, since the goals of such human error analyses focus on proactive design changes rather than the estimation of failure probabilities for PRA, there is more emphasis on qualitative evaluations of error relationships and causal factors than on quantitative estimates of error frequency. The primary vehicle we have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. The first NASA-sponsored project had the goal to evaluate human errors caused by advanced cockpit automation. Our next aviation project focused on the development of methods and tools to apply human error analysis to the design of commercial aircraft. This project was performed by a consortium comprised of INEEL, NASA, and Boeing Commercial Airplane Group. The focus of the project was aircraft design and procedures that could lead to human errors during

  12. Research of electrical safety management in hospital based on human error analysis

    Institute of Scientific and Technical Information of China (English)

    刘松海

    2013-01-01

    The hospital is an important electricity consumer that requires high safety and reliability of its power supply. However, due to force majeure, power supply system failures, management problems, and human errors, there is still a risk of various types of electrical emergencies. Human error or unsafe behavior leading to electrical emergencies has become the main cause of non-medical accidents in hospitals. The causes of human error and mistakes were analyzed from both the individual and organizational perspectives. The analysis shows that human error is caused not only by individual factors but also by the environment, systems, and management level. In order to improve the quality of the power supply and provide safe electrical support for hospital medical work, measures based on the human error analysis, such as improving the relevant rules and regulations and strengthening education and training, were put forward from the organizational and personnel aspects to prevent and reduce human error and mistakes.

  13. Derivation of main drivers affecting the possibility of human errors during low power and shutdown operation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Park, Jin Kyun; Kim, Jae Whan [KAERI, Daejeon (Korea, Republic of)

    2016-05-15

    In order to estimate the possibility of human error and identify its nature, human reliability analysis (HRA) methods have been implemented. For this, various HRA methods have been developed so far: the technique for human error rate prediction (THERP), the cause based decision tree (CBDT), the cognitive reliability and error analysis method (CREAM), and so on. Most HRA methods have been developed with a focus on full power operation of NPPs, even though human performance may affect the safety of the system more during low power and shutdown (LPSD) operation than it would when the system is in full power operation. In this regard, it is necessary to conduct research to develop an HRA method to be used in LPSD operation. As the first step of the study, main drivers which affect the possibility of human error have been developed. Drivers, which are commonly called performance shaping factors (PSFs), are aspects of the human's individual characteristics, environment, organization, or task that specifically degrade or improve human performance, thus respectively increasing or decreasing the likelihood of human errors.

  14. Research on Human Errors Evaluation Method of Flight Accidents Based on HFACS

    Institute of Scientific and Technical Information of China (English)

    魏水先; 孙有朝; 陈迎春

    2014-01-01

    Human error is the primary cause of flight accidents; analyzing the characteristics of human errors in flight accidents and taking preventive measures is vital for flight safety. The HFACS model is analyzed and decomposed into two parts: the flight accident error modes and the causes of the errors. Based on HFACS, and combining the expert subjective evaluation method and grey system theory, a comprehensive analysis model applicable to analyzing the human error causes in aviation flight accidents is developed. The expert subjective evaluation method is used to analyze the human error causes in flight control, the grey theory is used to quantify and rank the influencing factors of flight-control human errors, and the effectiveness of the proposed method is verified by examples.

  15. Resilience to evolving drinking water contamination risks: a human error prevention perspective

    OpenAIRE

    Tang, Yanhong; Wu, Shaomin; Miao, Xin; Pollard, Simon J.T.; Hrudey, Steve E.

    2013-01-01

    Human error contributes to one of the major causes of the prevalence of drinking water contamination incidents. It has, however, attracted insufficient attention in the cleaner production management community. This paper analyzes human error appearing in each stage of the gestation of 40 drinking water incidents and their causes, proposes resilience-based mechanisms and tools within three groups: consumers, drinking water companies, and policy regulators. The mechanism analysis involves conce...

  16. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishap incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and classified in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as an analysis tool to identify contributing factors, their impact on human error events, and to predict the Human Error Probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  17. Human Errors - A Taxonomy for Describing Human Malfunction in Industrial Installations

    DEFF Research Database (Denmark)

    Rasmussen, J.

    1982-01-01

    This paper describes the definition and the characteristics of human errors. Different types of human behavior are classified, and their relation to different error mechanisms are analyzed. The effect of conditioning factors related to affective, motivating aspects of the work situation as well as physiological factors are also taken into consideration. The taxonomy for event analysis, including human malfunction, is presented. Possibilities for the prediction of human error are discussed. The need for careful studies in actual work situations is expressed. Such studies could provide a better understanding of the complexity of human error situations as well as the data needed to characterize these situations.

  18. Study on Human Error Prevention Theories Based on MSHES

    Institute of Scientific and Technical Information of China (English)

    李卫民; 陶志

    2007-01-01

    Human error prevention theories and methods, at home and abroad, built mainly on first-generation (static) and second-generation (dynamic) human reliability analysis are reviewed. To address the current bottleneck that unstructured and uncertain parameters and data related to human physiology, cognition, and psychology cannot be quantified, a structural model of the Multiplex State of Human Errors System (MSHES) is established based on the business processes of the human-machine-environment system. Rough-set data mining is explored to extract, from the experience rules of senior professionals and from analyses of human-factor accidents or incidents, the association relations between root causes in the human-factor hierarchy and errors in the human-error hierarchy, and a rule-based expert system model for human error prevention is constructed. Human risk assessment and human error prevention theories are also investigated.

  19. SCHEME (Soft Control Human error Evaluation MEthod) for advanced MCR HRA

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Inseok; Jung, Wondea [KAERI, Daejeon (Korea, Republic of); Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2015-05-15

    Various HRA methods have been developed in relation to NPP maintenance and operation, such as the Technique for Human Error Rate Prediction (THERP), Korean Human Reliability Analysis (K-HRA), Human Error Assessment and Reduction Technique (HEART), A Technique for Human Event Analysis (ATHEANA), Cognitive Reliability and Error Analysis Method (CREAM), and Simplified Plant Analysis Risk Human Reliability Assessment (SPAR-H). Most of these methods were developed considering the conventional type of Main Control Rooms (MCRs). They are still used for HRA in advanced MCRs even though the operating environment of advanced MCRs in NPPs has been considerably changed by the adoption of new human-system interfaces such as computer-based soft controls. Among the many features in advanced MCRs, soft controls are an important feature because the operation action in NPP advanced MCRs is performed by soft controls. Consequently, those conventional methods may not sufficiently consider the features of soft control execution human errors. To this end, a new framework of an HRA method for evaluating soft control execution human error is suggested by performing a soft control task analysis and literature reviews regarding widely accepted human error taxonomies. In this study, the framework of an HRA method for evaluating soft control execution human error in advanced MCRs is developed. First, the factors which an HRA method in advanced MCRs should encompass are derived based on the literature review and the soft control task analysis. Based on the derived factors, the execution HRA framework in advanced MCRs is developed, mainly focusing on the features of soft control. Moreover, since most current HRA databases deal with operation in the conventional type of MCRs and are not explicitly designed to deal with digital HSIs, an HRA database is developed under lab-scale simulation.

  20. System modeling based measurement error analysis of digital sun sensors

    Institute of Scientific and Technical Information of China (English)

    WEI Minsong; XING Fei; WANG Geng; YOU Zheng

    2015-01-01

    Stringent attitude determination accuracy is required for the development of advanced space technologies and thus the accuracy improvement of digital sun sensors is necessary. In this paper, we present a proposal for measurement error analysis of a digital sun sensor. A system model including three different error sources was built and employed for system error analysis. Numerical simulations were also conducted to study the measurement error introduced by the different sources of error. Based on our model and study, the system errors from the different error sources are coupled and the system calibration should be elaborately designed to realize a digital sun sensor with extra-high accuracy.

  1. Normal accidents: human error and medical equipment design.

    Science.gov (United States)

    Dain, Steven

    2002-01-01

    High-risk systems, which are typical of our technologically complex era, include not just nuclear power plants but also hospitals, anesthesia systems, and the practice of medicine and perfusion. In high-risk systems, no matter how effective safety devices are, some types of accidents are inevitable because the system's complexity leads to multiple and unexpected interactions. It is important for healthcare providers to apply a risk assessment and management process to decisions involving new equipment and procedures or staffing matters in order to minimize the residual risks of latent errors, which are amenable to correction because of the large window of opportunity for their detection. This article provides an introduction to basic risk management and error theory principles and examines ways in which they can be applied to reduce and mitigate the inevitable human errors that accompany high-risk systems. The article also discusses "human factor engineering" (HFE), the process which is used to design equipment/human interfaces in order to mitigate design errors. The HFE process involves interaction between designers and end-users to produce a series of continuous refinements that are incorporated into the final product. The article also examines common design problems encountered in the operating room that may predispose operators to commit errors resulting in harm to the patient. While recognizing that errors and accidents are unavoidable, organizations that function within a high-risk system must adopt a "safety culture" that anticipates problems and acts aggressively through an anonymous, "blameless" reporting mechanism to resolve them. We must continuously examine and improve the design of equipment and procedures, personnel, supplies and materials, and the environment in which we work to reduce error and minimize its effects. Healthcare providers must take a leading role in the day-to-day management of the "Perioperative System" and be a role model in

  2. Error Analysis of Robotic Assembly System Based on Screw Theory

    Institute of Scientific and Technical Information of China (English)

    韩卫军; 费燕琼; 赵锡芳

    2003-01-01

    Assembly errors have a great influence on assembly quality in robotic assembly systems. Error analysis addresses the propagation and accumulation of various errors and their effect on assembly success. Using screw coordinates, assembly errors are represented as an "error twist", an extremely compact expression. According to the law of screw composition, the relative position and orientation errors of mating parts are computed and the necessary condition for assembly success is derived. A new, simple method for measuring assembly errors is also proposed based on the transformation law of a screw. Because of the compact representation of error, the model presented for error analysis can be applied to various part-mating types and is especially useful for error analysis of complex assemblies.

  3. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling; F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    Background: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence th

  4. Human error in strabismus surgery: quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  5. A circadian rhythm in skill-based errors in aviation maintenance.

    Science.gov (United States)

    Hobbs, Alan; Williamson, Ann; Van Dongen, Hans P A

    2010-07-01

    In workplaces where activity continues around the clock, human error has been observed to exhibit a circadian rhythm, with a characteristic peak in the early hours of the morning. Errors are commonly distinguished by the nature of the underlying cognitive failure, particularly the level of intentionality involved in the erroneous action. The Skill-Rule-Knowledge (SRK) framework of Rasmussen is used widely in the study of industrial errors and accidents. The SRK framework describes three fundamental types of error, according to whether behavior is under the control of practiced sensori-motor skill routines with minimal conscious awareness; is guided by implicit or explicit rules or expertise; or where the planning of actions requires the conscious application of domain knowledge. Up to now, examinations of circadian patterns of industrial errors have not distinguished between different types of error. Consequently, it is not clear whether all types of error exhibit the same circadian rhythm. A survey was distributed to aircraft maintenance personnel in Australia. Personnel were invited to anonymously report a safety incident and were prompted to describe, in detail, the human involvement (if any) that contributed to it. A total of 402 airline maintenance personnel reported an incident, providing 369 descriptions of human error in which the time of the incident was reported and sufficient detail was available to analyze the error. Errors were categorized using a modified version of the SRK framework, in which errors are categorized as skill-based, rule-based, or knowledge-based, or as procedure violations. An independent check confirmed that the SRK framework had been applied with sufficient consistency and reliability. Skill-based errors were the most common form of error, followed by procedure violations, rule-based errors, and knowledge-based errors. The frequency of errors was adjusted for the estimated proportion of workers present at work at each hour of the day

  6. A human error analysis methodology, AGAPE-ET, for emergency tasks in nuclear power plants and its application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    This report presents a procedurised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for both qualitative error analysis and quantification of human error probability (HEP) of emergency tasks in nuclear power plants. The AGAPE-ET is based on the simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified considering the characteristics of the performance of each cognitive function and the influencing mechanism of PIFs on the cognitive function. Then, error analysis items have been determined from the identified error causes or error-likely situations to help the analysts cue or guide the overall human error analysis. A human error analysis procedure based on the error analysis items is organised. The basic scheme for the quantification of HEP consists in the multiplication of the BHEP assigned by the error analysis item and the weight from the influencing factors decision tree (IFDT) constituted by cognitive function. The method can be characterised by the structured identification of the weak points of the task required to perform and the efficient analysis process in which the analysts have only to carry out the analysis with the necessary cognitive functions. The report also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results. 42 refs., 7 figs., 36 tabs. (Author)

  7. A Human Reliability Analysis of Post- Accident Human Errors in the Low Power and Shutdown PSA of KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Daeil; Kim, J. H.; Jang, S. C

    2007-03-15

    Korea Atomic Energy Research Institute, using the ANS low power and shutdown (LPSD) probabilistic risk assessment (PRA) Standard, evaluated the LPSD PSA model of the KSNP, Yonggwang Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of post-accident human errors in the LPSD PSA model for the KSNP showed that 10 of the 19 supporting requirements for such errors in the ANS PRA Standard were identified as items to be improved. Thus, we newly carried out an HRA for post-accident human errors in the LPSD PSA model for the KSNP. The following are the improvements in the HRA of post-accident human errors of the LPSD PSA model for the KSNP compared with the previous one: interviews with operators on the interpretation of the procedures, the modeling of operator actions, the quantification results of human errors, and a site visit; application of a limiting value to the combined post-accident human errors; and documentation of all the inputs and bases for the detailed quantifications and the dependency analysis using the quantification sheets. The assessment of the new HRA results for post-accident human errors using the ANS LPSD PRA Standard shows that over 80% of its supporting requirements for post-accident human errors were graded as Category II. The number of re-estimated human errors using the LPSD Korea Standard HRA method is 385; among them, the number of individual post-accident human errors is 253 and the number of dependent post-accident human errors is 135. The quantification results of the LPSD PSA model for the KSNP with the new HEPs show that the core damage frequency (CDF) is increased by 5.1% compared with the previous baseline CDF. It is expected that these study results will be greatly helpful for improving the PSA quality of domestic nuclear power plants because they have sufficient PSA quality to meet Category II of the Supporting Requirements for the post

  8. A statistical model for point-based target registration error with anisotropic fiducial localizer error.

    Science.gov (United States)

    Wiles, Andrew D; Likholyot, Alexander; Frantz, Donald D; Peters, Terry M

    2008-03-01

    Error models associated with point-based medical image registration problems were first introduced in the late 1990s. The concepts of fiducial localizer error, fiducial registration error, and target registration error are commonly used in the literature. The model for estimating the target registration error at a position r in a coordinate frame defined by a set of fiducial markers rigidly fixed relative to one another is ubiquitous in the medical imaging literature. The model has also been extended to simulate the target registration error at the point of interest in optically tracked tools. However, the model is limited to describing the error in situations where the fiducial localizer error is assumed to have an isotropic normal distribution in R3. In this work, the model is generalized to include a fiducial localizer error that has an anisotropic normal distribution. Similar to the previous models, the root mean square statistic rms tre is provided along with an extension that provides the covariance Sigma tre. The new model is verified using a Monte Carlo simulation and a set of statistical hypothesis tests. Finally, the differences between the two assumptions, isotropic and anisotropic, are discussed within the context of their use in 1) optical tool tracking simulation and 2) image registration.
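
    The anisotropic setting can be explored numerically in the same spirit as the paper's Monte Carlo verification. The Python sketch below repeatedly perturbs a hypothetical fiducial configuration with anisotropic Gaussian localizer error, performs a rigid point-based registration (Kabsch/SVD), and accumulates the squared target registration error at a chosen point; the geometry, covariance, and sample count are illustrative only.

        import numpy as np

        rng = np.random.default_rng(1)

        fiducials = np.array([[40.0, 0.0, 0.0],
                              [-40.0, 0.0, 0.0],
                              [0.0, 60.0, 0.0],
                              [0.0, 0.0, 30.0]])
        target = np.array([10.0, 20.0, 80.0])
        fle_cov = np.diag([0.1, 0.1, 0.4]) ** 2   # anisotropic FLE (worse along z)

        def rigid_register(src, dst):
            # Least-squares rigid transform (R, t) mapping src to dst (Kabsch/SVD).
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, dst_c - R @ src_c

        sq_errors = []
        for _ in range(10000):
            noisy = fiducials + rng.multivariate_normal(np.zeros(3), fle_cov, size=len(fiducials))
            R, t = rigid_register(noisy, fiducials)    # register noisy fiducials back to truth
            mapped = R @ target + t
            sq_errors.append(np.sum((mapped - target) ** 2))

        print("RMS TRE at target:", np.sqrt(np.mean(sq_errors)))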

  9. Image Signature Based Mean Square Error for Image Quality Assessment

    Institute of Scientific and Technical Information of China (English)

    CUI Ziguan; GAN Zongliang; TANG Guijin; LIU Feng; ZHU Xiuchang

    2015-01-01

    Motivated by the importance of the Human visual system (HVS) in image processing, we propose a novel Image signature based mean square error (ISMSE) metric for full reference Image quality assessment (IQA). An efficient image signature based descriptor is used to predict the visual saliency map of the reference image. The saliency map is incorporated into the luminance difference between the reference and distorted images to obtain the image quality score. The effect of luminance difference on visual quality at larger saliency values, which usually correspond to foreground objects, is highlighted. Experimental results on LIVE database release 2 show that by integrating the effects of image signature based saliency on luminance difference, the proposed ISMSE metric outperforms several state-of-the-art HVS-based IQA metrics but with lower complexity.
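
    The core idea, weighting the luminance error by a saliency map, can be sketched compactly. The Python snippet below is not the published ISMSE implementation; it assumes a precomputed saliency map (here a synthetic Gaussian blob standing in for an image-signature saliency detector) and normalizes it into per-pixel weights for the squared luminance difference.

        import numpy as np

        def saliency_weighted_mse(ref_luma, dist_luma, saliency):
            # ref_luma, dist_luma : 2-D float arrays (reference / distorted luminance)
            # saliency            : 2-D non-negative array from any saliency model
            w = saliency / (saliency.sum() + 1e-12)          # normalize weights
            return float(np.sum(w * (ref_luma - dist_luma) ** 2))

        # Toy usage with a synthetic image and a centered "foreground" saliency blob.
        rng = np.random.default_rng(0)
        ref = rng.uniform(0, 255, size=(64, 64))
        dist = ref + rng.normal(0, 5, size=ref.shape)
        yy, xx = np.mgrid[0:64, 0:64]
        sal = np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 12.0 ** 2))
        print("plain MSE    :", np.mean((ref - dist) ** 2))
        print("weighted MSE :", saliency_weighted_mse(ref, dist, sal))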

  10. Human errors evaluation for muster in emergency situations applying human error probability index (HEPI, in the oil company warehouse in Hamadan City

    Directory of Open Access Journals (Sweden)

    2012-12-01

    Full Text Available Introduction: An emergency situation is one of the factors influencing human error. The aim of this research was to evaluate human error in an emergency situation of fire and explosion at the oil company warehouse in Hamadan city, applying the human error probability index (HEPI). Material and Method: First, the scenario of the fire-and-explosion emergency at the oil company warehouse was designed and a maneuver against it was performed. The scaled muster questionnaire for the maneuver was completed in the next stage. The collected data were analyzed to calculate the probability of success for the 18 actions required in the emergency situation, from the starting point of the muster until the final action of reaching the temporary safe shelter. Result: The results showed that the highest probability of error occurrence was related to making the workplace safe (evaluation phase, 32.4%) and the lowest probability of error occurrence was in detecting the alarm (awareness phase, 1.8%). The highest severity of error was in the evaluation phase and the lowest severity was in the awareness and recovery phases. The maximum risk level was related to evaluating exit routes, selecting one route, and choosing an alternative exit route, and the minimum risk level was related to the four evaluation phases. Conclusion: To reduce the risk of reaction in the exit phases of an emergency situation, the following actions are recommended, based on the findings of this study: periodic evaluation of the exit phases and modifying them if necessary, and conducting more maneuvers and analyzing their results along with sufficient feedback to the employees.

  11. Efficient Error Calculation for Multiresolution Texture-Based Volume Visualization

    Energy Technology Data Exchange (ETDEWEB)

    LaMar, E; Hamann, B; Joy, K I

    2001-10-16

    Multiresolution texture-based volume visualization is an excellent technique to enable interactive rendering of massive data sets. Interactive manipulation of a transfer function is necessary for proper exploration of a data set. However, multiresolution techniques require assessing the accuracy of the resulting images, and re-computing the error after each change in a transfer function is very expensive. They extend their existing multiresolution volume visualization method by introducing a method for accelerating error calculations for multiresolution volume approximations. Computing the error for an approximation requires adding individual error terms. One error value must be computed once for each original voxel and its corresponding approximating voxel. For byte data, i.e., data sets where integer function values between 0 and 255 are given, they observe that the set of error pairs can be quite large, yet the set of unique error pairs is small. Instead of evaluating the error function for each original voxel, they construct a table of the unique combinations and the number of their occurrences. To evaluate the error, they add the products of the error function for each unique error pair and the frequency of each error pair. This approach dramatically reduces the amount of computation time involved and allows them to re-compute the error associated with a new transfer function quickly.
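
    The speed-up described above comes from collapsing the per-voxel sum into a sum over unique (original value, approximation value) pairs, which for byte data is at most a 256 x 256 table. The Python sketch below illustrates that bookkeeping with a made-up squared-difference error term standing in for the transfer-function-dependent error; it is a simplified reading of the approach, not the authors' code.

        import numpy as np

        # One-time pass: joint histogram of (original voxel value, approximating voxel value).
        def build_pair_table(original, approx):
            original = original.ravel().astype(np.intp)
            approx = approx.ravel().astype(np.intp)
            table = np.zeros((256, 256), dtype=np.int64)
            np.add.at(table, (original, approx), 1)
            return table

        # Per transfer-function pass: sum the error over the 256x256 unique pairs only.
        def approximation_error(table, transfer):
            # transfer: length-256 array mapping a voxel value to, e.g., an opacity scalar
            diff = transfer[:, None] - transfer[None, :]
            return float(np.sum(table * diff ** 2))

        rng = np.random.default_rng(0)
        orig = rng.integers(0, 256, size=(64, 64, 64), dtype=np.uint8)
        appr = np.clip(orig.astype(int) + rng.integers(-3, 4, size=orig.shape), 0, 255)

        table = build_pair_table(orig, appr)
        tf = np.linspace(0.0, 1.0, 256)          # a new transfer function
        print(approximation_error(table, tf))    # cheap to re-evaluate for each new tf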

  12. Color Error Diffusion Halftoning Method Based on Image Tone and Human Visual System

    Institute of Scientific and Technical Information of China (English)

    易尧华; 于晓庆

    2009-01-01

    In the process of color error diffusion halftoning, the design of the error diffusion filter for the different color channels directly affects the quality of the color halftone image. This paper studied the error diffusion method based on tone and the human visual system (HVS), optimized the filter coefficients and the threshold by applying luminance and chrominance HVS models, and obtained a color error diffusion halftoning method based on image tone and the HVS. The results show that this method can effectively reduce the artifacts in color halftone images and significantly improve the accuracy of color rendition.
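
    For readers unfamiliar with error diffusion itself, the Python sketch below shows the classic Floyd-Steinberg scheme applied independently to each color channel. It uses the standard fixed weights and threshold as a baseline; the method described in the abstract instead adapts the filter coefficients and threshold to the image tone and to luminance/chrominance HVS models, which is not reproduced here.

        import numpy as np

        def floyd_steinberg(channel, levels=2):
            # Classic Floyd-Steinberg error diffusion for one channel with values in [0, 1],
            # using the standard (7, 3, 5, 1)/16 weights.
            img = channel.astype(float).copy()
            h, w = img.shape
            out = np.zeros_like(img)
            for y in range(h):
                for x in range(w):
                    old = img[y, x]
                    new = np.round(old * (levels - 1)) / (levels - 1)
                    out[y, x] = new
                    err = old - new
                    if x + 1 < w:
                        img[y, x + 1] += err * 7 / 16
                    if y + 1 < h:
                        if x > 0:
                            img[y + 1, x - 1] += err * 3 / 16
                        img[y + 1, x] += err * 5 / 16
                        if x + 1 < w:
                            img[y + 1, x + 1] += err * 1 / 16
            return out

        rng = np.random.default_rng(0)
        rgb = rng.uniform(size=(32, 32, 3))
        halftone = np.stack([floyd_steinberg(rgb[..., c]) for c in range(3)], axis=-1)
        print(np.unique(halftone))   # binary output per channel: [0. 1.]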

  13. HUMAN ERROR QUANTIFICATION USING PERFORMANCE SHAPING FACTORS IN THE SPAR-H METHOD

    Energy Technology Data Exchange (ETDEWEB)

    Harold S. Blackman; David I. Gertman; Ronald L. Boring

    2008-09-01

    This paper describes a cognitively based human reliability analysis (HRA) quantification technique for estimating the human error probabilities (HEPs) associated with operator and crew actions at nuclear power plants. The method described here, Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H) method, was developed to aid in characterizing and quantifying human performance at nuclear power plants. The intent was to develop a defensible method that would consider all factors that may influence performance. In the SPAR-H approach, calculation of HEP rates is especially straightforward, starting with pre-defined nominal error rates for cognitive vs. action-oriented tasks, and incorporating performance shaping factor multipliers upon those nominal error rates.
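
    The quantification scheme can be illustrated with a short worked example. In the Python sketch below, the nominal HEPs (roughly 1e-2 for diagnosis and 1e-3 for action in SPAR-H) and the PSF multipliers are illustrative placeholders reflecting the method's general structure rather than any specific worksheet, and the adjustment applied when several negative PSFs are present follows the commonly cited SPAR-H form.

        def spar_h_hep(nominal_hep, psf_multipliers):
            # nominal_hep     : nominal error rate for the task type
            #                   (illustratively ~1e-2 for diagnosis, ~1e-3 for action)
            # psf_multipliers : performance shaping factor multipliers (>1 degrades performance)
            composite = 1.0
            for m in psf_multipliers:
                composite *= m
            hep = nominal_hep * composite
            # SPAR-H applies an adjustment when several negative PSFs are present,
            # which keeps the result below 1.0.
            negative = sum(1 for m in psf_multipliers if m > 1.0)
            if negative >= 3:
                hep = (nominal_hep * composite) / (nominal_hep * (composite - 1.0) + 1.0)
            return hep

        # Illustrative action task with poor procedures, high stress, and poor ergonomics.
        print(spar_h_hep(1e-3, [5, 2, 10]))   # adjusted HEP, roughly 0.09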

  14. Safety coaches in radiology: decreasing human error and minimizing patient harm

    Energy Technology Data Exchange (ETDEWEB)

    Dickerson, Julie M.; Adams, Janet M. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Koch, Bernadette L.; Donnelly, Lane F. [Cincinnati Children's Hospital Medical Center, Department of Radiology, MLC 5031, Cincinnati, OH (United States); Cincinnati Children's Hospital Medical Center, Department of Pediatrics, Cincinnati, OH (United States); Goodfriend, Martha A. [Cincinnati Children's Hospital Medical Center, Department of Quality Improvement, Cincinnati, OH (United States)

    2010-09-15

    Successful programs to improve patient safety require a component aimed at improving safety culture and environment, resulting in a reduced number of human errors that could lead to patient harm. Safety coaching provides peer accountability. It involves observing for safety behaviors and use of error prevention techniques and provides immediate feedback. For more than a decade, behavior-based safety coaching has been a successful strategy for reducing error within the context of occupational safety in industry. We describe the use of safety coaches in radiology. Safety coaches are an important component of our comprehensive patient safety program. (orig.)

  15. The Relationship between Human Operators' Psycho-physiological Condition and Human Errors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Arryum; Jang, Inseok; Kang, Hyungook; Seong, Poonghyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2013-05-15

    The safe operation of nuclear power plants (NPPs) is substantially dependent on the performance of the human operators who operate the systems. In this environment, human errors caused by the inappropriate performance of operators have been considered to be critical, since they may lead to serious problems in safety-critical plants. In order to provide meaningful insights to prevent human errors and enhance human performance, operators' physiological conditions such as stress and workload have been investigated. Physiological measurements are considered reliable tools for assessing stress and workload. T. Q. Tran et al. and J. B. Brooking et al. pointed out that operators' workload can be assessed using eye tracking, galvanic skin response, electroencephalograms (EEGs), heart rate, respiration and other measurements. The purpose of this study is to investigate the effect of the human operators' tension level and knowledge level on the number of human errors. For this study, experiments were conducted in a mimic of the main control rooms (MCR) in an NPP. They utilized the compact nuclear simulator (CNS), which is modeled based on the three-loop Pressurized Water Reactor, 993 MWe, Kori units 3 and 4 in Korea, and the subjects were asked to follow the tasks described in the emergency operating procedures (EOP). During the simulation, three kinds of physiological measurements were utilized: electrocardiogram (ECG), EEG and nose temperature. Also, subjects were divided into three groups based on their knowledge of plant operation. The results show that subjects who are tense make fewer errors. In addition, subjects with a higher knowledge level tend to be tense and make fewer errors. For the ECG data, subjects who make fewer human errors tend to be located in the higher-tension area of high SNS activity and low PSNS activity. The EEG results are similar to the ECG results. The beta power ratio of subjects who made fewer errors was higher. Since beta

  16. Causation chain of ship collision accident due to human error based on data mining technology

    Institute of Scientific and Technical Information of China (English)

    李红喜; 张连丰; 郑中义

    2014-01-01

    To effectively analyze the formation mechanism of ship collision accidents, an accident causation chain was structured based on a Bayesian network and a data mining algorithm. 128 typical ship collision accidents caused by human error were analyzed, and the network structure of accident causes was built according to the formation process of the driver's cognitive behavior. The Apriori algorithm and a Java program were employed to identify frequently occurring human errors, and the accident causation chain was formed.
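
    The frequent-combination mining step can be illustrated with a toy example. The Python sketch below counts the support of factor combinations by brute force (a real Apriori implementation additionally prunes supersets of infrequent sets); the case records and factor labels are invented and do not come from the 128-case data set.

        from itertools import combinations
        from collections import Counter

        # Each record lists the human-error factors identified in one collision case
        # (labels are made up for illustration).
        cases = [
            {"fatigue", "poor lookout", "late action"},
            {"poor lookout", "late action"},
            {"fatigue", "poor lookout"},
            {"poor lookout", "late action", "misjudged speed"},
            {"fatigue", "late action"},
        ]

        def frequent_itemsets(transactions, min_support=0.4, max_size=3):
            # Brute-force support counting over all factor combinations.
            n = len(transactions)
            counts = Counter()
            for t in transactions:
                for k in range(1, min(max_size, len(t)) + 1):
                    for combo in combinations(sorted(t), k):
                        counts[combo] += 1
            return {c: cnt / n for c, cnt in counts.items() if cnt / n >= min_support}

        for itemset, support in sorted(frequent_itemsets(cases).items(), key=lambda kv: -kv[1]):
            print(f"{support:.2f}  {itemset}")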

  17. Frequent Errors in Chinese EFL Learners' Topic-Based Writings

    Science.gov (United States)

    Zhan, Huifang

    2015-01-01

    This paper investigated a large number of errors found in the topic-based writings of Chinese EFL learners, especially provided an analysis on frequent errors, to find useful pedagogical implications for English grammar teaching and writing instruction in Chinese EFL setting. Students' topic-based writings were examined by the author. The findings…

  18. A Conceptual Framework for Predicting Error in Complex Human-Machine Environments

    Science.gov (United States)

    Freed, Michael; Remington, Roger; Null, Cynthia H. (Technical Monitor)

    1998-01-01

    We present a Goals, Operators, Methods, and Selection Rules-Model Human Processor (GOMS-MHP) style model-based approach to the problem of predicting human habit capture errors. Habit captures occur when the model fails to allocate limited cognitive resources to retrieve task-relevant information from memory. Lacking the unretrieved information, decision mechanisms act in accordance with implicit default assumptions, resulting in error when relied upon assumptions prove incorrect. The model helps interface designers identify situations in which such failures are especially likely.

  19. Human and organizational errors in loading and discharge operations at marine terminals: Reduction of tanker oil and chemical spills. Organizing to minimize human and organizational errors

    Energy Technology Data Exchange (ETDEWEB)

    Mannarelli, T.; Roberts, K.; Bea, R.

    1995-11-01

    This report summarizes organizational and managerial findings, and proposes corresponding recommendations, based on a program of research conducted at two major locations: Chevron USA Products Company Refinery in Richmond, California and Arco Marine Incorporated shipping operations in Long Beach, California. The Organizational Behavior and Industrial Relations group from the Business School approached the project with the same objective (of reducing the risk of accidents resulting from human and/or organizational errors), but used a different means of achieving those ends. On the Business side, the aim of the project is to identify organizational and managerial practices, problems, and potential problems, analyze them, and then make recommendations that offer potential solutions to those circumstances which pose a human and/or organizational error (HOE) risk.

  20. Human Error Probabilites (HEPs) for generic tasks and Performance Shaping Factors (PSFs) selected for railway operations

    DEFF Research Database (Denmark)

    Thommesen, Jacob; Andersen, Henning Boje

    ... at task level, which can be performed with fewer resources than a more detailed analysis of specific errors for each task. The generic tasks are presented with estimated Human Error Probabilities (HEPs) based on and extrapolated from the HRA literature, and estimates are compared with samples of measures ... on estimates derived from industries other than rail and the general warning that a task-based analysis is less precise than an error-based one. The authors recommend that estimates be adjusted to actual measures of task failures when feasible. ... collaboration with Banedanmark. The estimates provided are based on the HRA literature and primarily the HEART method, which has recently been adapted for railway tasks by the British Rail Safety and Standards Board (RSSB). The method presented in this report differs from the RSSB tool by supporting an analysis

  1. An error assessment of the kriging based approximation model using a mean square error

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Byeong Hyeon; Cho, Tae Min; Lee, Byung Chai [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Jung, Do Hyun [Korea Automotive Technology Institute, Chonan (Korea, Republic of)

    2006-08-15

    A Kriging model is a sort of approximation model and is used as a deterministic model of a computationally expensive analysis or simulation. Although it has various advantages, it is difficult to assess the accuracy of the approximated model. It is generally known that a Mean Square Error (MSE) obtained from the kriging model cannot provide statistically exact error bounds, contrary to a response surface method, and cross validation is mainly used instead. But cross validation also has many uncertainties. Moreover, cross validation cannot be used when a maximum error is required in the given region. For solving this problem, we first proposed a modified mean square error which can consider relative errors. Using the modified mean square error, we developed a strategy of adding a new sample at the location where the MSE is maximum when the MSE is used for the assessment of the kriging model. Finally, we offer guidelines for the use of the MSE which is obtained from the kriging model. Four test problems show that the proposed strategy is a proper method which can assess the accuracy of the kriging model. Based on the results of the four test problems, a convergence coefficient of 0.01 is recommended for an exact function approximation.
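
    The refinement strategy, adding the next sample where the predictor's MSE peaks, can be sketched with a simple kriging-style predictor. The Python code below uses a zero-mean Gaussian-correlation model with fixed hyperparameters and the plain (not the authors' modified, relative) MSE; the test function and all constants are illustrative.

        import numpy as np

        def kriging_fit_predict(X, y, Xq, theta=10.0, nugget=1e-10):
            # Simple-kriging-style predictor with a Gaussian correlation model,
            # unit process variance, zero mean, and fixed hyperparameters.
            def corr(A, B):
                return np.exp(-theta * (A[:, None] - B[None, :]) ** 2)
            K = corr(X, X) + nugget * np.eye(len(X))
            k = corr(X, Xq)                          # shape (n, q)
            alpha = np.linalg.solve(K, y)
            Kinv_k = np.linalg.solve(K, k)
            mean = k.T @ alpha
            mse = np.clip(1.0 - np.sum(k * Kinv_k, axis=0), 0.0, None)
            return mean, mse

        f = lambda x: np.sin(6 * x) + 0.3 * x          # stand-in for an expensive simulation
        X = np.array([0.0, 0.35, 0.7, 1.0])            # initial samples
        Xq = np.linspace(0.0, 1.0, 201)

        for it in range(5):
            mean, mse = kriging_fit_predict(X, f(X), Xq)
            x_new = Xq[np.argmax(mse)]                  # refine where the MSE peaks
            X = np.sort(np.append(X, x_new))
            print(f"iter {it}: max MSE {mse.max():.4f}, new sample at x = {x_new:.3f}")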

  2. Error Analysis of English Writing Based on Interlanguage Theory

    Institute of Scientific and Technical Information of China (English)

    李玲

    2014-01-01

    The language learning process has long been haunted by learners' errors, which are an unavoidable phenomenon. In the 1950s and 1960s, Contrastive Analysis (CA), based on behaviorism and structuralism, was generally employed in analyzing learners' errors. CA soon lost its popularity. Error Analysis (EA), a branch of applied linguistics, has made great contributions to the study of second language learning and throws some light on the process of second language learning. Careful study of the errors reveals the common problems shared by language learners. Writing is important in the language learning process. In the Chinese context, English writing is always a difficult question for Chinese teachers and students, so errors in students' written works are unavoidable. In this thesis, the author studies error analysis of English writing with interlanguage theory as its theoretical guidance.

  3. Error Analysis of English Writing Based on Interlanguage Theory

    Institute of Scientific and Technical Information of China (English)

    李玲

    2014-01-01

    The language learning process has long been haunted by learners' errors, which are an unavoidable phenomenon. In the 1950s and 1960s, Contrastive Analysis (CA), based on behaviorism and structuralism, was generally employed in analyzing learners' errors. CA soon lost its popularity. Error Analysis (EA), a branch of applied linguistics, has made great contributions to the study of second language learning and throws some light on the process of second language learning. Careful study of the errors reveals the common problems shared by language learners. Writing is important in the language learning process. In the Chinese context, English writing is always a difficult question for Chinese teachers and students, so errors in students' written works are unavoidable. In this thesis, the author studies error analysis of English writing with interlanguage theory as its theoretical guidance.

  4. Human factors and error prevention in emergency medicine.

    Science.gov (United States)

    Bleetman, Anthony; Sanusi, Seliat; Dale, Trevor; Brace, Samantha

    2012-05-01

    Emergency departments are one of the highest risk areas in health care. Emergency physicians have to assemble and manage unrehearsed multidisciplinary teams with little notice and manage critically ill patients. With greater emphasis on management and leadership skills, there is an increasing awareness of the importance of human factors in making changes to improve patient safety. Non-clinical skills are required to achieve this in an information-poor environment and to minimise the risk of errors. Training in these non-clinical skills is a mandatory component in other high-risk industries, such as aviation, and needs to be part of an emergency physician's skill set. Therefore, there remains an educational gap that we need to fill before an emergency physician is equipped to function as a team leader and manager. This review will examine the lessons from aviation and how these are applicable to emergency medicine. Solutions to averting errors are discussed, as is the need for formal human factors training in emergency medicine.

  5. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault tolerant) implementations of certain operations compatible with the error basis.
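
    The generalization referred to here can be written out concretely. Assuming the standard shift-and-clock construction over Z_n (the usual way to generalize the qubit bit-flip/sign-change basis, though not necessarily the paper's exact notation), the error basis is

        X\,|j\rangle = |j + 1 \bmod n\rangle, \qquad
        Z\,|j\rangle = \omega^{j}\,|j\rangle, \qquad \omega = e^{2\pi i/n},

        \mathcal{E} = \{\, X^{a} Z^{b} : a, b \in \mathbb{Z}_{n} \,\}, \qquad
        \operatorname{Tr}\!\left[ (X^{a} Z^{b})^{\dagger} X^{c} Z^{d} \right] = n\, \delta_{ac}\, \delta_{bd},

    so the n^2 operators are pairwise orthogonal under the trace inner product and reduce, for n = 2, to the familiar qubit set {I, X, Z, XZ}.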

  6. Correcting the optimal resampling-based error rate by estimating the error rate of wrapper algorithms.

    Science.gov (United States)

    Bernau, Christoph; Augustin, Thomas; Boulesteix, Anne-Laure

    2013-09-01

    High-dimensional binary classification tasks, for example, the classification of microarray samples into normal and cancer tissues, usually involve a tuning parameter. By reporting the performance of the best tuning parameter value only, over-optimistic prediction errors are obtained. For correcting this tuning bias, we develop a new method which is based on a decomposition of the unconditional error rate involving the tuning procedure, that is, we estimate the error rate of wrapper algorithms as introduced in the context of internal cross-validation (ICV) by Varma and Simon (2006, BMC Bioinformatics 7, 91). Our subsampling-based estimator can be written as a weighted mean of the errors obtained using the different tuning parameter values, and thus can be interpreted as a smooth version of ICV, which is the standard approach for avoiding tuning bias. In contrast to ICV, our method guarantees intuitive bounds for the corrected error. Additionally, we suggest to use bias correction methods also to address the conceptually similar method selection bias that results from the optimal choice of the classification method itself when evaluating several methods successively. We demonstrate the performance of our method on microarray and simulated data and compare it to ICV. This study suggests that our approach yields competitive estimates at a much lower computational price.

  7. Methodological Approach for Performing Human Reliability and Error Analysis in Railway Transportation System

    Directory of Open Access Journals (Sweden)

    Fabio De Felice

    2011-10-01

    Full Text Available Today, billions of dollars are being spent annually worldwide to develop, manufacture, and operate transportation systems such as trains, ships, aircraft, and motor vehicles. Around 70 to 90 percent of transportation crashes are, directly or indirectly, the result of human error. In fact, with the development of technology, system reliability has increased dramatically during the past decades, while human reliability has remained unchanged over the same period. Accordingly, human error is now considered the most significant source of accidents or incidents in safety-critical systems. The aim of the paper is to propose a methodological approach to improve transportation system reliability, and in particular railway transportation system reliability. The methodology presented is based on Failure Modes, Effects and Criticality Analysis (FMECA) and Human Reliability Analysis (HRA).

  8. Support of protective work of human error in a nuclear power plant

    Energy Technology Data Exchange (ETDEWEB)

    Yoshizawa, Yuriko [Tokyo Electric Power Co., Inc. (Japan)

    1999-12-01

    The nuclear power plant human factor group of the Tokyo Electric Power Co., Ltd. supports various human error prevention activities conducted at its nuclear power plants. Its main research themes are studies on human factors in nuclear power plant operation, on recovery, and common basic studies on human factors. In addition, on the basis of the information obtained, assistance to human error prevention work conducted at the nuclear power plants, as well as development for its practical use, was promoted. In particular, for activities that share hazard information, various forms of assistance were promoted, such as a proposal for a case-analysis method to understand hazard information not superficially but faithfully, construction of a database to conveniently share such hazard information, and a survey of non-accident operations to provide hints for effectively promoting the prevention work. This paper introduces the assistance and investigations for effective sharing of hazard information across the various human error prevention activities conducted mainly at nuclear power plants. (G.K.)

  9. Error-analysis Based Second Language Teaching Strategies ...

    African Journals Online (AJOL)

    Error-analysis Based Second Language Teaching Strategies. ... the same number of years of education through primary and secondary education in Imo State. All of the ... written essays and Marked by some English teachers of the school.

  10. Error resilient image transmission based on virtual SPIHT

    Science.gov (United States)

    Liu, Rongke; He, Jie; Zhang, Xiaolin

    2007-02-01

    SPIHT is one of the most efficient image compression algorithms. It has been successfully applied to a wide variety of images, such as medical and remote sensing images. However, it is highly susceptible to channel errors. A single bit error could potentially lead to decoder derailment. In this paper, we integrate new error resilient tools into the wavelet coding algorithm and present an error-resilient image transmission scheme based on virtual set partitioning in hierarchical trees (SPIHT), EREC and a self-truncation mechanism. After wavelet decomposition, the virtual spatial-orientation trees in the wavelet domain are individually encoded using virtual SPIHT. Since the self-similarity across sub bands is preserved, a high source coding efficiency can be achieved. The scheme is essentially a tree-based coding, thus error propagation is limited within each virtual tree. The number of virtual trees may be adjusted according to the channel conditions. When the channel is excellent, we may decrease the number of trees to further improve the compression efficiency; otherwise, we increase the number of trees to guarantee the error resilience to the channel. EREC is also adopted to enhance the error resilience capability of the compressed bit streams. At the receiving side, the self-truncation mechanism based on the self-constraint of set partitioning trees is introduced. The decoding of any sub-tree halts in case a violation of the self-constraint relationship occurs in the tree. So the bits impacted by the error propagation are limited and more likely located in the low bit-layers. In addition, an inter-tree interpolation method is applied, thus some errors are compensated. Preliminary experimental results demonstrate that the proposed scheme can achieve much more benefits on error resilience.

  11. Predicting errors from reconfiguration patterns in human brain networks.

    Science.gov (United States)

    Ekman, Matthias; Derrfuss, Jan; Tittgemeyer, Marc; Fiebach, Christian J

    2012-10-09

    Task preparation is a complex cognitive process that implements anticipatory adjustments to facilitate future task performance. Little is known about quantitative network parameters governing this process in humans. Using functional magnetic resonance imaging (fMRI) and functional connectivity measurements, we show that the large-scale topology of the brain network involved in task preparation shows a pattern of dynamic reconfigurations that guides optimal behavior. This network could be decomposed into two distinct topological structures, an error-resilient core acting as a major hub that integrates most of the network's communication and a predominantly sensory periphery showing more flexible network adaptations. During task preparation, core-periphery interactions were dynamically adjusted. Task-relevant visual areas showed a higher topological proximity to the network core and an enhancement in their local centrality and interconnectivity. Failure to reconfigure the network topology was predictive for errors, indicating that anticipatory network reconfigurations are crucial for successful task performance. On the basis of a unique network decoding approach, we also develop a general framework for the identification of characteristic patterns in complex networks, which is applicable to other fields in neuroscience that relate dynamic network properties to behavior.

  12. Planar straightness error evaluation based on particle swarm optimization

    Science.gov (United States)

    Mao, Jian; Zheng, Huawen; Cao, Yanlong; Yang, Jiangxin

    2006-11-01

    The straightness error generally refers to the deviation between an actual line and an ideal line. According to the characteristics of planar straightness error evaluation, a novel method to evaluate planar straightness errors based on particle swarm optimization (PSO) is proposed. The planar straightness error evaluation problem is formulated as a nonlinear optimization problem. According to the minimum zone condition, the mathematical model of planar straightness, together with the optimization objective function and fitness function, is developed. Compared with the genetic algorithm (GA), the PSO algorithm has several advantages: it is implemented without crossover and mutation, it converges quickly, and fewer parameters need to be set. The results show that the PSO method is well suited to nonlinear optimization problems and provides a promising new method for straightness error evaluation. It can be applied to the measured data of planar straightness obtained by coordinate measuring machines.
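
    A rough sketch of the approach described above is given below: a plain particle swarm searches over the line slope, and each particle is scored by the width of the enclosing zone measured perpendicular to the candidate line. The measured profile, the one-parameter (slope-only) formulation, and all swarm settings are illustrative assumptions, not the authors' exact model.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical measured profile: a nearly straight trace with noise.
      x = np.linspace(0.0, 100.0, 50)
      y = 0.002 * x + rng.normal(scale=0.01, size=x.size)

      def zone_width(slope):
          """Width of the minimum enclosing band (perpendicular) for a given slope."""
          r = y - slope * x                  # signed residuals; the intercept cancels in max-min
          return (r.max() - r.min()) / np.sqrt(1.0 + slope ** 2)

      def pso(fitness, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
          lo, hi = bounds
          pos = rng.uniform(lo, hi, n_particles)
          vel = np.zeros(n_particles)
          pbest = pos.copy()
          pbest_val = np.array([fitness(p) for p in pos])
          gbest = pbest[pbest_val.argmin()]
          for _ in range(n_iter):
              r1, r2 = rng.random(n_particles), rng.random(n_particles)
              vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, lo, hi)
              val = np.array([fitness(p) for p in pos])
              better = val < pbest_val
              pbest[better], pbest_val[better] = pos[better], val[better]
              gbest = pbest[pbest_val.argmin()]
          return gbest, fitness(gbest)

      slope, error = pso(zone_width, bounds=(-0.1, 0.1))
      print(f"minimum-zone straightness error ~ {error:.5f} at slope {slope:.5f}")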

  13. Quality score based identification and correction of pyrosequencing errors.

    Science.gov (United States)

    Iyer, Shyamala; Bouzek, Heather; Deng, Wenjie; Larsen, Brendan; Casey, Eleanor; Mullins, James I

    2013-01-01

    Massively-parallel DNA sequencing using the 454/pyrosequencing platform allows in-depth probing of diverse sequence populations, such as within an HIV-1 infected individual. Analysis of this sequence data, however, remains challenging due to the shorter read lengths relative to that obtained by Sanger sequencing as well as errors introduced during DNA template amplification and during pyrosequencing. The ability to distinguish real variation from pyrosequencing errors with high sensitivity and specificity is crucial to interpreting sequence data. We introduce a new algorithm, CorQ (Correction through Quality), which utilizes the inherent base quality in a sequence-specific context to correct for homopolymer and non-homopolymer insertion and deletion (indel) errors. CorQ also takes uneven read mapping into account for correcting pyrosequencing miscall errors and it identifies and corrects carry forward errors. We tested the ability of CorQ to correctly call SNPs on a set of pyrosequences derived from ten viral genomes from an HIV-1 infected individual, as well as on six simulated pyrosequencing datasets generated using non-zero error rates to emulate errors introduced by PCR. When combined with the AmpliconNoise error correction method developed to remove ambiguities in signal intensities, we attained a 97% reduction in indel errors, a 98% reduction in carry forward errors, and >97% specificity of SNP detection. When compared to four other error correction methods, AmpliconNoise+CorQ performed at equal or higher SNP identification specificity, but the sensitivity of SNP detection was consistently higher (>98%) than other methods tested. This combined procedure will therefore permit examination of complex genetic populations with improved accuracy.

  14. Robot perception errors and human resolution strategies in situated human-robot dialogue

    OpenAIRE

    Schutte, Niels; Kelleher, John; MacNamee, Brian

    2017-01-01

    We performed an experiment in which human participants interacted through a natural language dialogue interface with a simulated robot to fulfil a series of object manipulation tasks. We introduced errors into the robot’s perception, and observed the resulting problems in the dialogues and their resolutions. We then introduced different methods for the user to request information about the robot’s understanding of the environment. We quantify the impact of perception errors on the dialogues, ...

  15. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    Science.gov (United States)

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that (1) adverse physiological states, (2) physical/mental limitations, and (3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed most easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing it using fuzzy TOPSIS. Our analysis process addresses gaps in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology.
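
    For orientation, the sketch below ranks candidate error factors with a crisp TOPSIS closeness coefficient; the factor names echo the abstract, but the scores, weights, and criteria are fabricated placeholders, and the paper's fuzzy extension of TOPSIS is not reproduced here.

      import numpy as np

      factors = ["adverse physiological states",
                 "physical/mental limitations",
                 "coordination, communication, planning"]

      # Rows: factors, columns: criteria (e.g., severity, frequency, detectability, cost to fix).
      scores = np.array([[7.0, 6.0, 5.0, 3.0],
                         [6.0, 5.0, 6.0, 4.0],
                         [8.0, 7.0, 4.0, 2.0]])
      weights = np.array([0.3, 0.3, 0.2, 0.2])
      benefit = np.array([True, True, True, False])     # whether "larger is better" per criterion

      v = weights * scores / np.linalg.norm(scores, axis=0)    # weighted normalized matrix
      ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))  # positive ideal solution
      anti = np.where(benefit, v.min(axis=0), v.max(axis=0))   # negative ideal solution
      closeness = (np.linalg.norm(v - anti, axis=1)
                   / (np.linalg.norm(v - ideal, axis=1) + np.linalg.norm(v - anti, axis=1)))

      # Larger closeness = higher priority for improvement effort.
      for name, c in sorted(zip(factors, closeness), key=lambda t: -t[1]):
          print(f"{c:.3f}  {name}")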

  16. Analysis on critical factor of human error accidents in coal mine based on gray system theory

    Institute of Scientific and Technical Information of China (English)

    兰建义; 乔美英; 周英

    2015-01-01

    Through analysis of the factors causing human error accidents in coal mines, the critical influencing factors were summarized. Applying gray system correlation theory to the statistical data on mine accidents over the past 10 years from the State Administration of Coal Mine Safety, the types of human error accidents in coal mines were analyzed. Taking the number of accidents and the accident death toll as reference indices, the gray correlation degrees of ten factors mainly related to human error accidents in coal mines, such as behavioral error, personal violation, and organization and management error, were calculated and analyzed. The gray correlation order of these factors was derived, and the critical influencing factors of human error accidents in coal mines were determined, yielding a quantitative analysis of the relationship between the critical influencing factors and human error accidents. Using gray correlation theory to analyze the influencing factors of human error in coal mines explains the weighting relationship between human error and each critical factor. It provides a strong reference for preventing and controlling human error accidents in coal mines and a better understanding of their main causal mechanisms.
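
    The following sketch illustrates the gray relational grade computation (Deng's formulation) that underlies such a ranking: each factor's yearly series is compared with the accident series, and factors with larger grades are taken as more closely related. All yearly figures are fabricated placeholders, not the cited statistics.

      import numpy as np

      # Fabricated yearly series: accidents (reference sequence) and factor-attributed counts.
      reference = np.array([87.0, 92.0, 78.0, 70.0, 66.0, 60.0, 55.0, 49.0, 45.0, 40.0])
      factors = {
          "behavioral error":        np.array([50.0, 55.0, 47.0, 43.0, 40.0, 36.0, 33.0, 30.0, 27.0, 25.0]),
          "personal violation":      np.array([30.0, 28.0, 26.0, 22.0, 20.0, 19.0, 17.0, 15.0, 14.0, 12.0]),
          "organization/management": np.array([20.0, 22.0, 18.0, 16.0, 15.0, 13.0, 12.0, 11.0, 10.0,  9.0]),
      }

      def normalize(seq):
          return seq / seq[0]                  # initial-value normalization (one common convention)

      rho = 0.5                                # distinguishing coefficient
      x0 = normalize(reference)
      deltas = {k: np.abs(x0 - normalize(v)) for k, v in factors.items()}
      dmin = min(d.min() for d in deltas.values())
      dmax = max(d.max() for d in deltas.values())

      # Gray relational grade: mean of the relational coefficients over the years.
      grades = {k: float(np.mean((dmin + rho * dmax) / (d + rho * dmax))) for k, d in deltas.items()}
      for name, g in sorted(grades.items(), key=lambda t: -t[1]):
          print(f"gray relational grade {g:.3f}  {name}")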

  17. Method of change management based on dynamic machining error propagation

    Institute of Scientific and Technical Information of China (English)

    FENG Jia; JIANG PingYu

    2009-01-01

    In multistage machining processes (MMPs), the final quality of a part is influenced by a series of machining processes with complex correlations among them. It is therefore necessary to study how machining errors propagate in order to ensure machining quality. To this end, a change management method based on quality control nodes (QC-nodes) for machining error propagation is proposed. A new QC-node framework is proposed, including association analysis of quality attributes, quality closed-loop control, error tracing, and error coordination optimization. A weighted directed network is introduced to describe and analyze the correlations among the machining processes. To establish the dynamic machining error propagation network (D-MEPN), QC-nodes are defined as the network nodes, and the correlations among the QC-nodes are mapped onto the network. Based on network analysis, the dynamic characteristics of machining error propagation are explored. An adaptive control method based on stability theory is introduced for error coordination optimization. Finally, a simple example is used to verify the proposed method.

  19. How to Cope with the Rare Human Error Events Involved with Organizational Factors in Nuclear Power Plants

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sa Kil; Luo, Meiling; Lee, Yong Hee [Korea Atomic Research Institute, Daejeon (Korea, Republic of)

    2014-10-15

    The current human error guidelines (e.g., US DOD handbooks, US NRC guidelines) are representative tools for preventing human errors. These tools, however, have the limitation that they do not cover all operating situations and circumstances, such as design basis events; in other words, they address only foreseeable, standardized operating situations. In this study, our research team proposed an evidence-based approach, such as the UK's safety case, for coping with rare human error events such as the TMI, Chernobyl, and Fukushima accidents, which are representative events involving rare human errors. Our research team defined 'rare human errors' as events characterized by extremely low frequency, extremely complicated structure, and extremely serious damage to human life and property. A safety case is a structured argument, supported by evidence, intended to justify that a system is acceptably safe. The definition in UK defence standard 00-56 issue 4 states that such an evidence-based approach can be contrasted with a prescriptive approach to safety certification, which requires safety to be justified using a prescribed process. Safety management and safety regulatory activities based on the safety case are effective for controlling organizational factors in terms of integrated safety management. In particular, for safety issues relevant to public acceptance, the safety case is useful for providing practical evidence to the public in a reasonable way. The European Union, including the UK, has developed the concept of an engineered safety management system that uses the safety case to address public acceptance. In the Korean nuclear industry, the Korea Atomic Energy Research Institute has performed initial basic research to adapt the safety case in the field of radioactive waste according to IAEA SSG-23 (KAERI/TR-4497, 4531); apart from radioactive waste, no attempt has yet been made to adopt the safety case. Most incidents and accidents involving humans during NPP operation have a tendency

  20. Error-thresholds for qudit-based topological quantum memories

    Science.gov (United States)

    Andrist, Ruben S.; Wootton, James R.; Katzgraber, Helmut G.

    2014-03-01

    Extending the quantum computing paradigm from qubits to higher-dimensional quantum systems allows for increased channel capacity and a more efficient implementation of quantum gates. However, to perform reliable computations an efficient error-correction scheme adapted for these multi-level quantum systems is needed. A promising approach is via topological quantum error correction, where stability to external noise is achieved by encoding quantum information in non-local degrees of freedom. A key figure of merit is the error threshold which quantifies the fraction of physical qudits that can be damaged before logical information is lost. Here we analyze the resilience of generalized topological memories built from d-level quantum systems (qudits) to bit-flip errors. The error threshold is determined by mapping the quantum setup to a classical Potts-like model with bond disorder, which is then investigated numerically using large-scale Monte Carlo simulations. Our results show that topological error correction with qutrits exhibits an improved error threshold in comparison to qubit-based systems.

  1. Human Error Classification for the Permit to Work System by SHERPA in a Petrochemical Industry

    Directory of Open Access Journals (Sweden)

    Arash Ghasemi

    2015-12-01

    Background & objective: Occupational accidents may occur in any type of activity. Daily activities such as repair and maintenance constitute one of the work phases with the highest risk. Despite the issuance of work permits or work license systems for controlling the risks of non-routine activities, the high rate of accidents during such activities indicates the inadequacy of these systems. A large portion of this shortcoming is attributable to human error, so it is necessary to identify and control probable human errors during the issuing of permits. Methods: In the present study, the probable errors for four categories of work permits were identified using the SHERPA method. Then, an expert team analyzed 25500 permits issued over a period of approximately one year, and the most frequent human errors and their types were determined. Results: The "Excavation" and "Entry to confined space" permits had the most errors. Approximately 28.5 percent of all errors were related to excavation permits. Implementation errors were the most frequent error type across the error taxonomy; for every category of permits, about 40% of all errors were attributed to implementation errors. Conclusion: The results indicate weak points in the practical training of the licensing system. Human error identification methods can be used to predict and reduce human errors.

  2. Ising Spin-Based Error Correcting Private-Key Cryptosystems

    Institute of Scientific and Technical Information of China (English)

    ZHENG Dong; ZHENG Yan-fei; FAN Wu-ying

    2006-01-01

    The Ising spin system has been shown to provide a new class of error-correcting codes and can be used to construct public-key cryptosystems by making use of statistical mechanics. The relation between Ising spin systems and private-key cryptosystems is investigated. Two private-key systems are proposed that are based on two predetermined, randomly constructed sparse matrices and exploit physical properties of MacKay-Neal (MN) low-density parity-check (LDPC) error-correcting codes. One is an error-correcting private-key system, which is effective in combating ciphertext errors in communications and computer systems. The other is a private-key system with authentication.

  3. Three-Dimensional Turbulent RANS Adjoint-Based Error Correction

    Science.gov (United States)

    Park, Michael A.

    2003-01-01

    Engineering problems commonly require functional outputs of computational fluid dynamics (CFD) simulations with specified accuracy. These simulations are performed with limited computational resources. Computable error estimates offer the possibility of quantifying accuracy on a given mesh and predicting a fine grid functional on a coarser mesh. Such an estimate can be computed by solving the flow equations and the associated adjoint problem for the functional of interest. An adjoint-based error correction procedure is demonstrated for transonic inviscid and subsonic laminar and turbulent flow. A mesh adaptation procedure is formulated to target uncertainty in the corrected functional and terminate when the error remaining in the calculation is less than a user-specified error tolerance. This adaptation scheme is shown to yield anisotropic meshes with corrected functionals that are more accurate for a given number of grid points than isotropically adapted and uniformly refined grids.
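
    A common form of such an adjoint-based functional correction, written here with assumed notation that need not match the paper's, corrects the coarse-mesh functional by the inner product of the adjoint solution with the fine-space residual of the prolongated coarse solution:

      \[
        f_h(u_h) \;\approx\; f_h\!\left(u_H^{h}\right) \;-\; \psi_h^{\top} R_h\!\left(u_H^{h}\right),
      \]

    where u_H^h denotes the coarse solution prolongated to the fine space, R_h is the fine-space residual operator, and psi_h is the adjoint associated with the functional of interest; the magnitude of the remaining (uncorrected) error term is what the adaptation targets until it falls below the user-specified tolerance.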

  4. Labels affect preschoolers' tool-based scale errors.

    Science.gov (United States)

    Hunley, Samuel B; Hahn, Erin R

    2016-11-01

    Scale errors offer a unique context in which to examine the interdependencies between language and action. Here, we manipulated the presence of labels in a tool-based paradigm previously shown to elicit high rates of scale errors. We predicted that labels would increase children's scale errors with tools by directing attention to shape, function, and category membership. Children between the ages of 2 and 3 years were introduced to an apparatus and shown how to produce its function using a tool (e.g., scooping a toy fish from an aquarium using a net). In each of two test trials, children were asked to choose between two novel tools to complete the same task: one that was a large non-functional version of the tool presented in training and one novel functional object (different in shape). A total of four tool-apparatus sets were tested. The results indicated that without labels, scale errors decreased over the two test trials. In contrast, when labels were present, scale errors remained high in the second test trial. We interpret these findings as evidence that linguistic cues can influence children's action-based errors with tools.

  5. Quadratic Error Metric Mesh Simplification Algorithm Based on Discrete Curvature

    Directory of Open Access Journals (Sweden)

    Li Yao

    2015-01-01

    Complex and highly detailed polygon meshes have been adopted for model representation in many areas of computer graphics. Existing work has mainly focused on quadric-error-metric-based approximation of complex models without taking the retention of important model details into account, which may lead to visual degeneration. In this paper, we improve Garland and Heckbert's quadric error metric based algorithm by using discrete curvature to preserve more features during mesh simplification. Our experiments on various models show that the geometry and topology structure as well as the features of the original models are precisely retained by employing discrete curvature.
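
    As a rough illustration of coupling a vertex quadric with discrete curvature, the sketch below accumulates the standard plane-based quadric over a vertex's one-ring and scales it by an angle-deficit curvature proxy, so that contractions at high-curvature vertices become more expensive. The curvature estimate and the weighting scheme are assumptions for illustration, not the authors' exact formulation.

      import numpy as np

      def face_plane(p0, p1, p2):
          """Unit-normal plane (a, b, c, d) through a triangle."""
          n = np.cross(p1 - p0, p2 - p0)
          n = n / np.linalg.norm(n)
          return np.append(n, -np.dot(n, p0))

      def vertex_quadric(ring_faces, curvature_weight=1.0):
          """Sum of fundamental quadrics K = p p^T over incident faces, scaled by curvature."""
          Q = np.zeros((4, 4))
          for (p0, p1, p2) in ring_faces:
              p = face_plane(p0, p1, p2)
              Q += np.outer(p, p)
          return curvature_weight * Q

      def angle_deficit(ring_faces):
          """Discrete (Gaussian-like) curvature proxy: 2*pi minus the incident angles.
          Each incident face is listed with the central vertex first (as p0)."""
          total = 0.0
          for (p0, p1, p2) in ring_faces:
              u, v = p1 - p0, p2 - p0
              cos_a = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
              total += np.arccos(np.clip(cos_a, -1.0, 1.0))
          return abs(2.0 * np.pi - total)

      def contraction_cost(Q, v):
          """QEM cost of placing the contracted vertex at position v."""
          vh = np.append(v, 1.0)
          return float(vh @ Q @ vh)

      # Toy one-ring around the origin: higher curvature inflates the quadric, so moving
      # this vertex during simplification costs more and the local detail survives.
      v0 = np.zeros(3)
      ring = [(v0, np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])),
              (v0, np.array([0.0, 1.0, 0.0]), np.array([-1.0, 0.0, 0.1]))]
      Q = vertex_quadric(ring, curvature_weight=1.0 + angle_deficit(ring))
      print(contraction_cost(Q, np.array([0.0, 0.0, 0.05])))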

  6. Comparison of risk sensitivity to human errors in the Oconee and LaSalle PRAs

    Energy Technology Data Exchange (ETDEWEB)

    Wong, S.; Higgins, J.

    1991-01-01

    This paper describes the comparative analyses of plant risk sensitivity to human errors in the Oconee and La Salle Probabilistic Risk Assessments (PRAs). These analyses were performed to determine the reasons for the observed differences in the sensitivity of core melt frequency (CMF) to changes in human error probabilities (HEPs). Plant-specific design features, PRA methods, and the level of detail and assumptions in the human error modeling were evaluated to assess their influence on risk estimates and sensitivities.

  7. An error resilient scheme for H.264 video coding based on distortion estimated mode decision and nearest neighbor error concealment

    Institute of Scientific and Technical Information of China (English)

    LEE Tien-hsu; WANG Jong-tzy; CHEN Jhih-bin; CHANG Pao-chi

    2006-01-01

    Although the H.264 video coding standard provides several error resilience tools, the damage caused by error propagation may still be tremendous. This work aims at developing a robust and standard-compliant error-resilient coding scheme for H.264 and uses techniques of mode decision, data hiding, and error concealment to reduce the damage from error propagation. This paper proposes a system with two error resilience techniques that can improve the robustness of H.264 in noisy channels. The first technique is Nearest Neighbor motion-compensated Error Concealment (NNEC), which chooses the nearest neighbors in the reference frames for error concealment. The second technique is Distortion Estimated Mode Decision (DEMD), which selects an optimal mode based on stochastically distorted frames. Simulation results showed that the rate-distortion performance of the proposed algorithms is better than that of the compared algorithms.

  8. Error Sources in Proccessing LIDAR Based Bridge Inspection

    Science.gov (United States)

    Bian, H.; Chen, S. E.; Liu, W.

    2017-09-01

    Bridge inspection is a critical task in infrastructure management and is facing unprecedented challenges after a series of bridge failures. Prevailing visual inspection has been insufficient for providing reliable and quantitative bridge information, even though a systematic quality management framework was built to ensure the quality of visual inspection data and minimize errors during the inspection process. LiDAR-based remote sensing is recommended as an effective tool for overcoming some of the disadvantages of visual inspection. To evaluate the potential of applying this technology to bridge inspection, some of the error sources in LiDAR-based bridge inspection are analysed. Scanning angle variance during field data collection and differences in algorithm design during scan data processing were found to introduce errors into inspection results. Beyond studying the error sources, further consideration should be given to improving inspection data quality, and statistical analysis might be employed in the future to evaluate the inspection process, which contains a series of uncertain factors. Overall, the development of a reliable bridge inspection system requires not only improved data processing algorithms but also systematic measures to mitigate possible errors in the entire inspection workflow. If LiDAR or another technology is accepted as a supplement to visual inspection, the current quality management framework will need to be modified or redesigned, which is as urgent as refining the inspection techniques themselves.

  9. an Efficient Blind Signature Scheme based on Error Correcting Codes

    Directory of Open Access Journals (Sweden)

    Junyao Ye

    Cryptography based on the theory of error correcting codes and lattices has received wide attention in recent years. Shor's algorithm showed that in a world where quantum computers are assumed to exist, number-theoretic cryptosystems are insecure. The ...

  10. An Approach to Human Error Hazard Detection of Unexpected Situations in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Park, Sangjun; Oh, Yeonju; Shin, Youmin; Lee, Yong-Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The Fukushima accident was a typical complex event, including extreme situations induced by the succeeding earthquake, tsunami, explosions, and human errors. From a human engineering point of view, its causes are judged to include deficiencies in system build-up, response manuals and procedures, education and training, team capability, and the discharge of operators' duties. In particular, the guidelines of currently operating NPPs do not sufficiently include countermeasures to human errors in extreme situations. Therefore, this paper describes a trial to detect the hazards of human errors in extreme situations and to define countermeasures that can properly respond to those hazards when individuals, teams, organizations, and other working entities encounter an extreme situation in NPPs. We propose an approach to analyzing and extracting human error hazards so that additional countermeasures to human errors in unexpected situations can be suggested. These might be utilized to develop contingency guidelines, especially for reducing human error accidents in NPPs. The trial application in this study is currently limited, however, since it is not easy to find accident cases described in enough detail to enumerate the proposed steps. We will therefore try to analyze as many cases as possible and consider other environmental factors and human error conditions.

  11. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Errors of omission are related to the omission of a human action that should have been performed but was not. Errors of commission are related to human actions that should not be performed but in fact are. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that enter an operational task without being part of its normal sequence. The identification of actions that are not supposed to occur depends on the operational context, which can induce or facilitate certain unsafe operator actions depending on the behavior of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making, and implementation in human information processing, particularly with the use of THERP error quantification tables. (author)

  12. Error-related EEG patterns during tactile human-machine interaction

    NARCIS (Netherlands)

    Lehne, M.; Ihme, K.; Brouwer, A.M.; Erp, J.B.F. van; Zander, T.O.

    2009-01-01

    Recently, the use of brain-computer interfaces (BCIs) has been extended from active control to passive detection of cognitive user states. These passive BCI systems can be especially useful for automatic error detection in human-machine systems by recording EEG potentials related to human error proc

  13. Structured methods for identifying and correcting potential human errors in space operations.

    Science.gov (United States)

    Nelson, W R; Haney, L N; Ostrom, L T; Richards, R E

    1998-01-01

    Human performance plays a significant role in the development and operation of any complex system, and human errors are significant contributors to degraded performance, incidents, and accidents for technologies as diverse as medical systems, commercial aircraft, offshore oil platforms, nuclear power plants, and space systems. To date, serious accidents attributed to human error have fortunately been rare in space operations. However, as flight rates go up and the duration of space missions increases, the accident rate could increase unless proactive action is taken to identify and correct potential human errors in space operations. The Idaho National Engineering and Environmental Laboratory (INEEL) has developed and applied structured methods of human error analysis to identify potential human errors, assess their effects on system performance, and develop strategies to prevent the errors or mitigate their consequences. These methods are being applied in NASA-sponsored programs to the domain of commercial aviation, focusing on airplane maintenance and air traffic management. The application of human error analysis to space operations could contribute to minimizing the risks associated with human error in the design and operation of future space systems.

  14. Reversible Logic Based Concurrent Error Detection Methodology For Emerging Nanocircuits

    CERN Document Server

    Thapliyal, Himanshu

    2011-01-01

    Reversible logic has promising applications in emerging nanotechnologies such as quantum computing, quantum-dot cellular automata, and optical computing. Faults in reversible logic circuits that result in multi-bit errors at the outputs are very difficult to detect, and thus the literature has only addressed online testing of faults that result in a single-bit error at the outputs, based on parity-preserving logic. In this work, we propose a methodology for concurrent error detection in reversible logic circuits that detects faults resulting in multi-bit errors at the outputs. The methodology is based on the inverse property of reversible logic and is termed the 'inverse and compare' method. By using the inverse property of reversible logic, all the inputs can be regenerated at the outputs; thus, by comparing the original inputs with the regenerated inputs, faults in reversible circuits can be detected. Minimizing the garbage outputs is one of the main goals in reversible logic ...
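
    A toy, classical-bit rendition of the 'inverse and compare' idea is sketched below: the circuit is run forward, the (self-inverse) gate cascade is run in reverse to regenerate the inputs, and a mismatch flags a fault. The gate set and the injected single bit-flip are illustrative; the paper's construction targets reversible circuit hardware rather than a software simulation.

      def cnot(bits, control, target):
          bits[target] ^= bits[control]

      def toffoli(bits, c1, c2, target):
          bits[target] ^= bits[c1] & bits[c2]

      # Each entry is (gate, args); CNOT and Toffoli are their own inverses, so the
      # inverse circuit is simply the gate list applied in reverse order.
      circuit = [
          (cnot, (0, 2)),
          (toffoli, (0, 1, 2)),
          (cnot, (1, 0)),
      ]

      def run(inputs, gates, inject_fault_at=None):
          bits = list(inputs)
          for i, (gate, args) in enumerate(gates):
              gate(bits, *args)
              if inject_fault_at == i:
                  bits[args[-1]] ^= 1          # single bit-flip fault on the gate's target line
          return bits

      def detect_fault(inputs, fault=None):
          outputs = run(inputs, circuit, inject_fault_at=fault)
          regenerated = run(outputs, list(reversed(circuit)))   # the 'inverse' pass
          return regenerated != list(inputs)                    # True => fault detected

      print(detect_fault([1, 0, 1]))           # fault-free run: False
      print(detect_fault([1, 0, 1], fault=1))  # injected fault: True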

  15. LÉVY-BASED ERROR PREDICTION IN CIRCULAR SYSTEMATIC SAMPLING

    Directory of Open Access Journals (Sweden)

    Kristjana Ýr Jónsdóttir

    2013-06-01

    In the present paper, Lévy-based error prediction in circular systematic sampling is developed. A model-based statistical setting as in Hobolth and Jensen (2002) is used, but the assumption that the measurement function is Gaussian is relaxed. The measurement function is represented as a periodic stationary stochastic process X obtained by a kernel smoothing of a Lévy basis. The process X may have an arbitrary covariance function. The distribution of the error predictor, based on measurements in n systematic directions, is derived. Statistical inference is developed for the model parameters in the case where the covariance function follows the celebrated p-order covariance model.

  16. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of human errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for the execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.

  17. Errors in Seismic Hazard Assessment are Creating Huge Human Losses

    Science.gov (United States)

    Bela, J.

    2015-12-01

    The current practice of representing earthquake hazards to the public based upon their perceived likelihood or probability of occurrence is proven now by the global record of actual earthquakes to be not only erroneous and unreliable, but also too deadly! Earthquake occurrence is sporadic, and therefore assumptions of earthquake frequency and return-period are not only misleading, but also categorically false. More than 700,000 people have now lost their lives (2000-2011), and 11 of the World's Deadliest Earthquakes have occurred in locations where probability-based seismic hazard assessments had predicted only low seismic hazard. Unless seismic hazard assessment and the setting of minimum earthquake design safety standards for buildings and bridges are based on a more realistic deterministic recognition of "what can happen" rather than on what mathematical models suggest is "most likely to happen", such huge human losses can only be expected to continue. The actual earthquake events that did occur were at or near the maximum potential-size event that either had already occurred in the past or was geologically known to be possible. Haiti's M7 earthquake of 2010 (with > 222,000 fatalities) meant the dead could not even be buried with dignity. Japan's catastrophic Tohoku earthquake of 2011, an M9 megathrust earthquake, unleashed a tsunami that not only obliterated coastal communities along the northern Japanese coast, but also claimed > 20,000 lives. This tsunami flooded nuclear reactors at Fukushima, causing 4 explosions and 3 reactors to melt down. But while this history of huge human losses due to erroneous and misleading seismic hazard estimates, despite its wrenching pain, cannot be unlived, if faced with courage and a more realistic deterministic estimate of "what is possible", it need not be lived again. An objective testing of the results of global probability based seismic hazard maps against real occurrences has never been done by the

  18. Faces in places: humans and machines make similar face detection errors.

    Directory of Open Access Journals (Sweden)

    Bernard Marius 't Hart

    The human visual system seems to be particularly efficient at detecting faces. This efficiency sometimes comes at the cost of wrongfully seeing faces in arbitrary patterns, including famous examples such as a rock configuration on Mars or a toast's roast patterns. In machine vision, face detection has made considerable progress and has become a standard feature of many digital cameras. The arguably most widespread algorithm for such applications (the "Viola-Jones" algorithm) achieves high detection rates at high computational efficiency. To what extent do the patterns that the algorithm mistakenly classifies as faces also fool humans? We selected three kinds of stimuli from real-life, first-person perspective movies based on the algorithm's output: correct detections ("real faces"), false positives ("illusory faces"), and correctly rejected locations ("non faces"). Observers were shown pairs of these for 20 ms and had to direct their gaze to the location of the face. We found that illusory faces were mistaken for faces more frequently than non faces. In addition, rotation of the real face yielded more errors, while rotation of the illusory face yielded fewer errors. Using colored stimuli increases overall performance but does not change the pattern of results. When the eye movement was replaced by a manual response, however, the preference for illusory faces over non faces disappeared. Taken together, our data show that humans make similar face-detection errors as the Viola-Jones algorithm when directing their gaze to briefly presented stimuli. In particular, the relative spatial arrangement of oriented filters seems to be of relevance. This suggests that efficient face detection in humans is likely to be pre-attentive and based on rather simple features like those encoded in the early visual system.
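
    For readers who want to reproduce the machine side of such a comparison, the sketch below runs OpenCV's stock Haar-cascade (Viola-Jones-style) face detector on a single frame; the file names are placeholders, and the exact training data and parameters used in the study are not assumed here.

      import cv2

      # OpenCV ships a pretrained frontal-face Haar cascade (a Viola-Jones-style detector).
      cascade = cv2.CascadeClassifier(
          cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

      frame = cv2.imread("movie_frame.png")     # placeholder path to a first-person movie frame
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

      # Each detection is an (x, y, w, h) box; lowering minNeighbors yields more
      # detections and, typically, more false positives ("illusory faces").
      faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
      for (x, y, w, h) in faces:
          cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
      cv2.imwrite("detections.png", frame)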

  19. DNA barcoding: error rates based on comprehensive sampling.

    Directory of Open Access Journals (Sweden)

    Christopher P Meyer

    2005-12-01

    DNA barcoding has attracted attention with promises to aid in species identification and discovery; however, few well-sampled datasets are available to test its performance. We provide the first examination of barcoding performance in a comprehensively sampled, diverse group (cypraeid marine gastropods, or cowries). We utilize previous methods for testing performance and employ a novel phylogenetic approach to calculate intraspecific variation and interspecific divergence. Error rates are estimated for (1) identifying samples against a well-characterized phylogeny, and (2) assisting in species discovery for partially known groups. We find that the lowest overall error for species identification is 4%. In contrast, barcoding performs poorly in incompletely sampled groups. Here, species delineation relies on the use of thresholds, set to differentiate between intraspecific variation and interspecific divergence. Whereas proponents envision a "barcoding gap" between the two, we find substantial overlap, leading to minimal error rates of approximately 17% in cowries. Moreover, error rates double if only traditionally recognized species are analyzed. Thus, DNA barcoding holds promise for identification in taxonomically well-understood and thoroughly sampled clades. However, the use of thresholds does not bode well for delineating closely related species in taxonomically understudied groups. The promise of barcoding will be realized only if based on solid taxonomic foundations.
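
    The threshold logic at issue can be made concrete with a small sketch: given fabricated intraspecific and interspecific distance samples that overlap, a single "barcoding gap" threshold is swept and the resulting misclassification rate is reported. The numbers are placeholders, not the cowrie data.

      import numpy as np

      # Fabricated pairwise COI distances (substitutions per site): conspecific pairs
      # (intraspecific variation) and heterospecific pairs (interspecific divergence).
      intraspecific = np.array([0.002, 0.004, 0.010, 0.021, 0.035, 0.048])
      interspecific = np.array([0.015, 0.028, 0.040, 0.065, 0.090, 0.120])

      def error_rate(threshold):
          """Misclassification rate of a single 'barcoding gap' threshold."""
          false_split = np.mean(intraspecific > threshold)   # one species wrongly split in two
          false_lump = np.mean(interspecific <= threshold)   # two species wrongly merged
          return 0.5 * (false_split + false_lump)

      thresholds = np.linspace(0.0, 0.15, 151)
      rates = [error_rate(t) for t in thresholds]
      best = thresholds[int(np.argmin(rates))]
      print(f"best threshold ~ {best:.3f}, error rate ~ {min(rates):.1%}")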

  20. VQ-based model for binary error process

    Science.gov (United States)

    Csóka, Tibor; Polec, Jaroslav; Csóka, Filip; Kotuliaková, Kvetoslava

    2017-05-01

    A variety of complex techniques, such as forward error correction (FEC), automatic repeat request (ARQ), hybrid ARQ, and cross-layer optimization, require in their design and optimization phase a realistic model of the binary error process present in a specific digital channel. Past and more recent modeling approaches focus on capturing one or more stochastic characteristics with precision sufficient for the desired model application, thereby applying concepts and methods that severely limit the model's applicability (e.g., in the form of prerequisite expectations about the modeled process). The proposed novel concept, utilizing a vector quantization (VQ)-based approach to binary process modeling, offers a viable alternative capable of superior modeling of the most commonly observed small- and large-scale stochastic characteristics of a binary error process on a digital channel. The precision of the proposed model was verified using multiple statistical distances against data captured in a wireless sensor network logical channel trace. Furthermore, Pearson's goodness-of-fit test was performed on the output of all model variants to conclusively demonstrate the usability of the model for the captured binary error process. Finally, the presented results demonstrate the proposed model's applicability and its ability to far surpass the capabilities of the reference Elliot model.
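
    To illustrate the general VQ idea (not the authors' exact construction), the sketch below slices a toy bursty error trace into fixed-length blocks, trains a small codebook with k-means, and regenerates a synthetic trace by resampling codewords with their empirical frequencies.

      import numpy as np
      from scipy.cluster.vq import kmeans2

      rng = np.random.default_rng(1)

      # Toy two-state (good/bad) channel producing a bursty binary error trace.
      p_gb, p_bg, p_err = 0.02, 0.2, (0.001, 0.3)
      state, bits = 0, []
      for _ in range(4000):
          if rng.random() < (p_gb if state == 0 else p_bg):
              state = 1 - state
          bits.append(rng.random() < p_err[state])
      trace = np.array(bits, dtype=float)

      block = 20
      blocks = trace[: len(trace) // block * block].reshape(-1, block)

      codebook, labels = kmeans2(blocks, k=8, minit="++", seed=1)   # train the VQ codebook

      # Resample codewords with their empirical frequencies and re-binarize the entries.
      freqs = np.bincount(labels, minlength=len(codebook)) / len(labels)
      drawn = rng.choice(len(codebook), size=len(blocks), p=freqs)
      synthetic = (rng.random(blocks.shape) < codebook[drawn]).astype(int).ravel()

      print("original error rate :", trace.mean())
      print("synthetic error rate:", synthetic.mean())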

  1. Rydberg-interaction-based quantum gates free from blockade error

    CERN Document Server

    Shi, Xiao-Feng

    2016-01-01

    Accurate quantum gates are basic elements for building quantum computers. There has recently been great interest in designing quantum gates using the blockade effect of Rydberg atoms. The fidelity and operation speed of these gates, however, are fundamentally limited by the blockade error. Here we propose another type of quantum gate, which is based on the Rydberg blockade effect yet free from any blockade error. In contrast to the 'blocking' method in previous schemes, we use the Rydberg energy shift to realise a rational generalised Rabi frequency so that a novel $\pi$ phase for one input state of the gate emerges. This leads to an accurate Rydberg-blockade-based two-qubit quantum gate that can operate on a $0.1\mu s$ timescale or faster, thanks to operating with a Rabi frequency comparable to the blockade shift.

  2. Genetic algorithm-based evaluation of spatial straightness error

    Institute of Scientific and Technical Information of China (English)

    崔长彩; 车仁生; 黄庆成; 叶东; 陈刚

    2003-01-01

    A genetic algorithm (GA)-based approach is proposed to evaluate the straightness error of spatial lines. According to the mathematical definition of spatial straightness, a verification model is established for the straightness error, the fitness function of the GA is given, and the implementation techniques of the proposed algorithm are discussed in detail. The implementation techniques include real-number encoding, adaptive variable-range choosing, roulette wheel and elitist combination selection strategies, heuristic crossover, and single-point mutation schemes. An application example is given to validate the proposed algorithm. The computation results show that the GA-based approach is a superior nonlinear parallel optimization method: the performance of the evolving population improves through genetic operations such as reproduction, crossover, and mutation until the optimum goal of the minimum zone solution is obtained. The quality of the solution is better and the efficiency of computation is higher than with other methods.
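
    A compact real-coded GA sketch for the spatial case is given below: a 3D line is encoded as (x0, y0, dx, dy), the fitness is the diameter of the smallest enclosing cylinder around the candidate line, and truncation selection with arithmetic crossover and Gaussian mutation stands in for the paper's roulette-wheel/elitist operators. The data, encoding, and GA settings are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical CMM points along a nominally straight axis.
      z = np.linspace(0.0, 50.0, 40)
      pts = np.column_stack([0.001 * z + rng.normal(0, 0.004, z.size),
                             -0.0005 * z + rng.normal(0, 0.004, z.size),
                             z])

      def zone_diameter(chrom):
          """Diameter of the enclosing cylinder around the encoded 3D line."""
          x0, y0, dx, dy = chrom
          p0 = np.array([x0, y0, 0.0])
          d = np.array([dx, dy, 1.0])
          dist = np.linalg.norm(np.cross(pts - p0, d), axis=1) / np.linalg.norm(d)
          return 2.0 * dist.max()

      def ga(fitness, n_pop=60, n_gen=300, sigma=0.002, bound=0.05):
          pop = rng.uniform(-bound, bound, (n_pop, 4))
          for _ in range(n_gen):
              fit = np.array([fitness(c) for c in pop])
              parents = pop[np.argsort(fit)[: n_pop // 2]]          # truncation selection
              mates = parents[rng.permutation(len(parents))]
              alpha = rng.random((len(parents), 1))
              children = alpha * parents + (1 - alpha) * mates      # arithmetic crossover
              children += rng.normal(0, sigma, children.shape)      # Gaussian mutation
              pop = np.vstack([parents, children])                  # elitism: keep best half
          fit = np.array([fitness(c) for c in pop])
          return pop[fit.argmin()], fit.min()

      best, err = ga(zone_diameter)
      print(f"spatial straightness error ~ {err:.5f}")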

  3. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on part of operators ... and for termination of the search for 'causes'. In addition, the concept of human error is analysed and its intimate relation with human adaptation and learning is discussed. It is concluded that identification of errors as a separate class of behaviour is becoming increasingly difficult in modern work environments ...

  4. Study on Segmented Reflector Lamp Design Based on Error Analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    This paper discusses the basic principle and design method for the light distribution of car lamps, introduces an important development, the highly efficient and flexible car lamp with reflecting light distribution, i.e., the segmented-reflector (multi-patch) car lamp, and presents a design method for segmented reflectors based on error analysis. Unlike a classical car lamp with refractive light distribution, the method of reflecting light distribution gives car lamp design more flexibility. In the case of guaranteeing the li...

  5. Automation of Commanding at NASA: Reducing Human Error in Space Flight

    Science.gov (United States)

    Dorn, Sarah J.

    2010-01-01

    Automation has been implemented in many different industries to improve efficiency and reduce human error. Reducing or eliminating the human interaction in tasks has been proven to increase productivity in manufacturing and lessen the risk of mistakes by humans in the airline industry. Human space flight requires the flight controllers to monitor multiple systems and react quickly when failures occur so NASA is interested in implementing techniques that can assist in these tasks. Using automation to control some of these responsibilities could reduce the number of errors the flight controllers encounter due to standard human error characteristics. This paper will investigate the possibility of reducing human error in the critical area of manned space flight at NASA.

  6. Selective error detection for error-resilient wavelet-based image coding.

    Science.gov (United States)

    Karam, Lina J; Lam, Tuyet-Trang

    2007-12-01

    This paper introduces the concept of a similarity check function for error-resilient multimedia data transmission. The proposed similarity check function provides information about the effects of corrupted data on the quality of the reconstructed image. The degree of data corruption is measured by the similarity check function at the receiver, without explicit knowledge of the original source data. The design of a perceptual similarity check function is presented for wavelet-based coders such as the JPEG2000 standard, and used with a proposed "progressive similarity-based ARQ" (ProS-ARQ) scheme to significantly decrease the retransmission rate of corrupted data while maintaining very good visual quality of images transmitted over noisy channels. Simulation results with JPEG2000-coded images transmitted over the Binary Symmetric Channel show that the proposed ProS-ARQ scheme significantly reduces the number of retransmissions as compared to conventional ARQ-based schemes. The presented results also show that, for the same number of retransmitted data packets, the proposed ProS-ARQ scheme can achieve significantly higher PSNR and better visual quality as compared to the selective-repeat ARQ scheme.

  7. Algebraic Error Based Triangulation and Metric of Lines.

    Science.gov (United States)

    Wu, Fuchao; Zhang, Ming; Wang, Guanghui; Hu, Zhanyi

    2015-01-01

    Line triangulation, a classical geometric problem in computer vision, is to determine the 3D coordinates of a line based on its 2D image projections from more than two views of cameras with known projection matrices. Compared to point features, line segments are more robust to matching errors, occlusions, and image uncertainties. In addition to line triangulation, a better metric is needed to evaluate 3D errors of line triangulation. In this paper, the line triangulation problem is investigated by using the Lagrange multipliers theory. The main contributions include: (i) Based on the Lagrange multipliers theory, a formula to compute the Plücker correction is provided, and from the formula, a new linear algorithm, LINa, is proposed for line triangulation; (ii) two optimal algorithms, OPTa-I and OPTa-II, are proposed by minimizing the algebraic error; and (iii) two metrics on 3D line space, the orthogonal metric and the quasi-Riemannian metric, are introduced for the evaluation of line triangulations. Extensive experiments on synthetic data and real images are carried out to validate and demonstrate the effectiveness of the proposed algorithms.

  8. Experimental evaluation of multiprocessor cache-based error recovery

    Science.gov (United States)

    Janssens, Bob; Fuchs, W. K.

    1991-01-01

    Several variations of cache-based checkpointing for rollback error recovery in shared-memory multiprocessors have been recently developed. By modifying the cache replacement policy, these techniques use the inherent redundancy in the memory hierarchy to periodically checkpoint the computation state. Three schemes, different in the manner in which they avoid rollback propagation, are evaluated. By simulation with address traces from parallel applications running on an Encore Multimax shared-memory multiprocessor, the performance effect of integrating the recovery schemes in the cache coherence protocol is evaluated. The results indicate that the cache-based schemes can provide checkpointing capability with low performance overhead but uncontrollable high variability in the checkpoint interval.

  9. Approximation error in PDE-based modelling of vehicular platoons

    Science.gov (United States)

    Hao, He; Barooah, Prabir

    2012-08-01

    We study the problem of how much error is introduced in approximating the dynamics of a large vehicular platoon by using a partial differential equation, as was done in Barooah, Mehta, and Hespanha [Barooah, P., Mehta, P.G., and Hespanha, J.P. (2009), 'Mistuning-based Decentralised Control of Vehicular Platoons for Improved Closed Loop Stability', IEEE Transactions on Automatic Control, 54, 2100-2113], Hao, Barooah, and Mehta [Hao, H., Barooah, P., and Mehta, P.G. (2011), 'Stability Margin Scaling Laws of Distributed Formation Control as a Function of Network Structure', IEEE Transactions on Automatic Control, 56, 923-929]. In particular, we examine the difference between the stability margins of the coupled-ordinary differential equations (ODE) model and its partial differential equation (PDE) approximation, which we call the approximation error. The stability margin is defined as the absolute value of the real part of the least stable pole. The PDE model has proved useful in the design of distributed control schemes (Barooah et al. 2009; Hao et al. 2011); it provides insight into the effect of gains of local controllers on the closed-loop stability margin that is lacking in the coupled-ODE model. Here we show that the ratio of the approximation error to the stability margin is O(1/N), where N is the number of vehicles. Thus, the PDE model is an accurate approximation of the coupled-ODE model when N is large. Numerical computations are provided to corroborate the analysis.

  10. Correction of placement error in EBL using model based method

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-10-01

    The main source of placement error in maskmaking using electron beam is charging. DISPLACE software provides a method to correct placement errors for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. Unknown physical parameters such as fogging can be found from calibration experiments. A test layout on a single calibration mask was used to calibrate physical parameters used in the correction model. The extracted model parameters were used to verify the correction. As an ultimate test for the correction, a sophisticated layout was used for verification that was very different from the calibration mask. The placement correction results were predicted by DISPLACE, and the mask was fabricated and measured. A good correlation of the measured and predicted values of the correction all over the mask with the complex pattern confirmed the high accuracy of the charging placement error correction.

  11. Dissociating error-based and reinforcement-based loss functions during sensorimotor learning.

    Science.gov (United States)

    Cashaback, Joshua G A; McGregor, Heather R; Mohatarem, Ayman; Gribble, Paul L

    2017-07-01

    It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback.
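
    The dissociation can be made concrete with a few lines of code: for a skewed distribution of cursor shifts, the aim point that minimizes expected squared cursor error sits at minus the mean shift, while the aim point that minimizes the miss probability sits near minus the mode. The distribution and target width below are illustrative, not the experimental values.

      import numpy as np

      rng = np.random.default_rng(3)

      # Skewed lateral cursor shifts (arbitrary units): their mean and mode differ.
      shifts = rng.gamma(shape=2.0, scale=0.75, size=100_000) - 0.5

      aim_points = np.linspace(-3.0, 1.0, 401)
      target_halfwidth = 0.25                   # a "hit" lands within this distance of the target

      # Error-based loss: expected squared cursor error, minimized at aim = -mean(shift).
      sq_error = [np.mean((aim + shifts) ** 2) for aim in aim_points]
      # Reinforcement-based loss: miss probability, minimized near aim = -mode(shift).
      miss_prob = [np.mean(np.abs(aim + shifts) > target_halfwidth) for aim in aim_points]

      print("mean shift                :", round(float(shifts.mean()), 3))
      print("error-based optimal aim   :", aim_points[int(np.argmin(sq_error))])
      print("reinforcement optimal aim :", aim_points[int(np.argmin(miss_prob))])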

  12. Trust in haptic assistance: weighting visual and haptic cues based on error history.

    Science.gov (United States)

    Gibo, Tricia L; Mugge, Winfred; Abbink, David A

    2017-08-01

    To effectively interpret and interact with the world, humans weight redundant estimates from different sensory cues to form one coherent, integrated estimate. Recent advancements in physical assistance systems, where guiding forces are computed by an intelligent agent, enable the presentation of augmented cues. It is unknown, however, if cue weighting can be extended to augmented cues. Previous research has shown that cue weighting is determined by the reliability (inversely related to uncertainty) of cues within a trial, yet augmented cues may also be affected by errors that vary over trials. In this study, we investigate whether people can learn to appropriately weight a haptic cue from an intelligent assistance system based on its error history. Subjects held a haptic device and reached to a hidden target using a visual (Gaussian distributed dots) and haptic (force channel) cue. The error of the augmented haptic cue varied from trial to trial based on a Gaussian distribution. Subjects learned to estimate the target location by weighting the visual and augmented haptic cues based on their perceptual uncertainty and experienced errors. With both cues available, subjects were able to find the target with an improved or equal performance compared to what was possible with one cue alone. Our results show that the brain can learn to reweight augmented cues from intelligent agents, akin to previous observations of the reweighting of naturally occurring cues. In addition, these results suggest that the weighting of a cue is not only affected by its within-trial reliability but also the history of errors.
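
    A minimal sketch of the weighting principle follows: redundant estimates are combined with weights proportional to their inverse variances, and the variance assigned to the haptic cue here is meant to stand for its error history rather than only its within-trial noise. All numbers are illustrative.

      import numpy as np

      def integrate(estimates, variances):
          """Inverse-variance (reliability) weighted combination of redundant cues."""
          w = 1.0 / np.asarray(variances, dtype=float)
          w = w / w.sum()
          return float(np.dot(w, estimates)), w

      # Illustrative single trial: a noisy but unbiased visual dot cloud versus a precise
      # haptic force channel whose past errors inflate the variance assigned to it.
      visual_estimate, visual_var = 10.2, 4.0
      haptic_estimate, haptic_var = 9.1, 1.0    # grows as the haptic cue's error history worsens

      estimate, weights = integrate([visual_estimate, haptic_estimate],
                                    [visual_var, haptic_var])
      print(f"integrated estimate: {estimate:.2f}, weights (visual, haptic): {np.round(weights, 2)}")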

  13. Prediction of human errors by maladaptive changes in event-related brain networks

    NARCIS (Netherlands)

    Eichele, T.; Debener, S.; Calhoun, V.D.; Specht, K.; Engel, A.K.; Hugdahl, K.; Cramon, D.Y. von; Ullsperger, M.

    2008-01-01

    Humans engaged in monotonous tasks are susceptible to occasional errors that may lead to serious consequences, but little is known about brain activity patterns preceding errors. Using functional MRI and applying independent component analysis followed by deconvolution of hemodynamic responses, we

  14. Detection of error related neuronal responses recorded by electrocorticography in humans during continuous movements.

    Directory of Open Access Journals (Sweden)

    Tomislav Milekovic

    BACKGROUND: Brain-machine interfaces (BMIs) can translate the neuronal activity underlying a user's movement intention into movements of an artificial effector. In spite of continuous improvements, errors in movement decoding are still a major problem of current BMI systems. If the difference between the decoded and intended movements becomes noticeable, it may lead to an execution error. Outcome errors, where subjects fail to reach a certain movement goal, are also present during online BMI operation. Detecting such errors can be beneficial for BMI operation: (i) errors can be corrected online after being detected, and (ii) the adaptive BMI decoding algorithm can be updated to make fewer errors in the future. METHODOLOGY/PRINCIPAL FINDINGS: Here, we show that error events can be detected from human electrocorticography (ECoG) during a continuous task with high precision, given a temporal tolerance of 300-400 milliseconds. We quantified the error detection accuracy and showed that, using only a small subset of 2×2 ECoG electrodes, 82% of the detection information for outcome errors and 74% of the detection information for execution errors available from all ECoG electrodes could be retained. CONCLUSIONS/SIGNIFICANCE: The error detection method presented here could be used to correct errors made during BMI operation or to adapt a BMI algorithm to make fewer errors in the future. Furthermore, our results indicate that a smaller ECoG implant could be used for error detection. Reducing the size of an ECoG electrode implant used for BMI decoding and error detection could significantly reduce the medical risk of implantation.

  15. Human Error and the International Space Station: Challenges and Triumphs in Science Operations

    Science.gov (United States)

    Harris, Samantha S.; Simpson, Beau C.

    2016-01-01

    Any system with a human component is inherently risky. Studies in human factors and psychology have repeatedly shown that human operators will inevitably make errors, regardless of how well they are trained. Onboard the International Space Station (ISS) where crew time is arguably the most valuable resource, errors by the crew or ground operators can be costly to critical science objectives. Operations experts at the ISS Payload Operations Integration Center (POIC), located at NASA's Marshall Space Flight Center in Huntsville, Alabama, have learned that from payload concept development through execution, there are countless opportunities to introduce errors that can potentially result in costly losses of crew time and science. To effectively address this challenge, we must approach the design, testing, and operation processes with two specific goals in mind. First, a systematic approach to error and human centered design methodology should be implemented to minimize opportunities for user error. Second, we must assume that human errors will be made and enable rapid identification and recoverability when they occur. While a systematic approach and human centered development process can go a long way toward eliminating error, the complete exclusion of operator error is not a reasonable expectation. The ISS environment in particular poses challenging conditions, especially for flight controllers and astronauts. Operating a scientific laboratory 250 miles above the Earth is a complicated and dangerous task with high stakes and a steep learning curve. While human error is a reality that may never be fully eliminated, smart implementation of carefully chosen tools and techniques can go a long way toward minimizing risk and increasing the efficiency of NASA's space science operations.

  16. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  17. Human oocytes. Error-prone chromosome-mediated spindle assembly favors chromosome segregation defects in human oocytes.

    Science.gov (United States)

    Holubcová, Zuzana; Blayney, Martyn; Elder, Kay; Schuh, Melina

    2015-06-05

    Aneuploidy in human eggs is the leading cause of pregnancy loss and several genetic disorders such as Down syndrome. Most aneuploidy results from chromosome segregation errors during the meiotic divisions of an oocyte, the egg's progenitor cell. The basis for particularly error-prone chromosome segregation in human oocytes is not known. We analyzed meiosis in more than 100 live human oocytes and identified an error-prone chromosome-mediated spindle assembly mechanism as a major contributor to chromosome segregation defects. Human oocytes assembled a meiotic spindle independently of either centrosomes or other microtubule organizing centers. Instead, spindle assembly was mediated by chromosomes and the small guanosine triphosphatase Ran in a process requiring ~16 hours. This unusually long spindle assembly period was marked by intrinsic spindle instability and abnormal kinetochore-microtubule attachments, which favor chromosome segregation errors and provide a possible explanation for high rates of aneuploidy in human eggs.

  18. Phase errors elimination in compact digital holoscope (CDH) based on a reasonable mathematical model

    Science.gov (United States)

    Wen, Yongfu; Qu, Weijuan; Cheng, Cheeyuen; Wang, Zhaomin; Asundi, Anand

    2015-03-01

In the compact digital holoscope (CDH) measurement process, theoretically, we need to ensure that the distances from the reference wave and the object wave to the hologram plane exactly match. However, this is not easy to realize in practice due to human factors, which can lead to a phase error in the reconstruction result. In this paper, a strict theoretical analysis of the wavefront interference is performed to derive the mathematical model of the phase error, and a phase error elimination method is then proposed based on the advanced mathematical model, which has a more explicit physical meaning. Experiments are carried out to verify the performance of the presented method and the results indicate that it is effective and allows the operator to perform the measurement more flexibly.

  19. TerrorCat: a translation error categorization-based MT quality metric

    OpenAIRE

    2012-01-01

    We present TerrorCat, a submission to the WMT’12 metrics shared task. TerrorCat uses frequencies of automatically obtained translation error categories as base for pairwise comparison of translation hypotheses, which is in turn used to generate a score for every translation. The metric shows high overall correlation with human judgements on the system level and more modest results on the level of individual sentences.

  20. Coping with human errors through system design: Implications for ecological interface design

    DEFF Research Database (Denmark)

    Rasmussen, Jens; Vicente, Kim J.

    1989-01-01

    Research during recent years has revealed that human errors are not stochastic events which can be removed through improved training programs or optimal interface design. Rather, errors tend to reflect either systematic interference between various models, rules, and schemata, or the effects...... of the adaptive mechanisms involved in learning. In terms of design implications, these findings suggest that reliable human-system interaction will be achieved by designing interfaces which tend to minimize the potential for control interference and support recovery from errors. In other words, the focus should...... be on control of the effects of errors rather than on the elimination of errors per se. In this paper, we propose a theoretical framework for interface design that attempts to satisfy these objectives. The goal of our framework, called ecological interface design, is to develop a meaningful representation...

  1. A human error identification method based on cognitive process analysis---Application to dispatching system of high-speed railway train%一种基于认知过程分析的人因失误辨识方法--应用于高速铁路列车调度系统

    Institute of Scientific and Technical Information of China (English)

    吴海涛; 庄河; 罗霞

    2014-01-01

By summarizing the existing human error classification methods, the advantages, disadvantages and applicability of the main methods were reviewed. Existing methods mainly focus on human-related accident analysis, but have great difficulty in proactively identifying error modes across the four stages of the human cognitive process, especially the planning and decision-making stage. A skill-rule-knowledge-based (SRK) cognitive model of high-speed railway train dispatching was established by combining the four-stage model of cognitive behavior with skill-rule-knowledge-based behavioral theory (SRK theory). Based on this cognitive model, a new human error classification method for human error identification was proposed. Finally, a case example of temporary speed restriction in high-speed railway train dispatching was presented and the detailed identification results were given. Through interviews with train dispatchers, the identification results were shown to cover the human error types that can occur during temporary speed restriction, demonstrating the practicality of the method.

  2. An experimental approach to validating a theory of human error in complex systems

    Science.gov (United States)

    Morris, N. M.; Rouse, W. B.

    1985-01-01

    The problem of 'human error' is pervasive in engineering systems in which the human is involved. In contrast to the common engineering approach of dealing with error probabilistically, the present research seeks to alleviate problems associated with error by gaining a greater understanding of causes and contributing factors from a human information processing perspective. The general approach involves identifying conditions which are hypothesized to contribute to errors, and experimentally creating the conditions in order to verify the hypotheses. The conceptual framework which serves as the basis for this research is discussed briefly, followed by a description of upcoming research. Finally, the potential relevance of this research to design, training, and aiding issues is discussed.

  3. Integrated Framework for Understanding Relationship Between Human Error and Aviation Safety

    Institute of Scientific and Technical Information of China (English)

    徐锡东

    2009-01-01

This paper introduces a framework for understanding the relationship between human error and aviation safety from multiple perspectives and using multiple models. The first part of the framework is the perspective of the individual operator, using the information processing model. The second part is the group perspective, with the Crew Resource Management (CRM) model. The third and final part is the organization perspective, using Reason's Swiss cheese model. Each of the perspectives and models has been in existence for a long time, but the integrated framework presented allows a systematic understanding of the complex relationship between human error and aviation safety, along with the numerous factors that cause or influence error. The framework also allows the identification of mitigation measures to systematically reduce human error and improve aviation safety.

  4. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage under the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. Efficiency of the methods presented is verified using data from radio-epidemiological studies.

  5. Keyword Query over Error-Tolerant Knowledge Bases

    Institute of Scientific and Technical Information of China (English)

    Yu-Rong Cheng; Ye Yuan; Jia-Yu Li; Lei Chen; Guo-Ren Wang

    2016-01-01

With more and more knowledge provided by the WWW, querying and mining knowledge bases have attracted much research attention. Among all the queries over knowledge bases, which are usually modelled as graphs, a keyword query is the most widely used one. Although the problem of keyword query over graphs has been deeply studied for years, knowledge bases, as special error-tolerant graphs, cause the results of traditionally defined keyword queries to fall short of users' expectations. Thus, in this paper, we define a new keyword query, called the confident r-clique, specific to knowledge bases and based on the r-clique definition for keyword query on general graphs, which has been proved to be the best one. However, as we prove in the paper, finding the confident r-cliques is #P-hard. We propose a filtering-and-verification framework to improve the search efficiency. In the filtering phase, we develop the tightest upper bound of the confident r-clique, and design an index together with its search algorithm, which suits the large scale of knowledge bases well. In the verification phase, we develop an efficient sampling method to verify the final answers from the candidates remaining after the filtering phase. Extensive experiments demonstrate that the results derived from our new definition satisfy the users' requirements better than the traditional r-clique definition, and our algorithms are efficient.

  6. Behind Human Error: Cognitive Systems, Computers and Hindsight

    Science.gov (United States)

    1994-12-01


  7. Examiner error in curriculum-based measurement of oral reading.

    Science.gov (United States)

    Cummings, Kelli D; Biancarosa, Gina; Schaper, Andrew; Reed, Deborah K

    2014-08-01

    Although curriculum based measures of oral reading (CBM-R) have strong technical adequacy, there is still a reason to believe that student performance may be influenced by factors of the testing situation, such as errors examiners make in administering and scoring the test. This study examined the construct-irrelevant variance introduced by examiners using a cross-classified multilevel model. We sought to determine the extent of variance in student CBM-R scores attributable to examiners and, if present, the extent to which it was moderated by students' grade level and English learner (EL) status. Fit indices indicated that a cross-classified random effects model (CCREM) best fits the data with measures nested within students, students nested within schools, and examiners crossing schools. Intraclass correlations of the CCREM revealed that roughly 16% of the variance in student CBM-R scores was associated between examiners. The remaining variance was associated with the measurement level, 3.59%; between students, 75.23%; and between schools, 5.21%. Results were moderated by grade level but not by EL status. The discussion addresses the implications of this error for low-stakes and high-stakes decisions about students, teacher evaluation systems, and hypothesis testing in reading intervention research.
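
    The variance partition reported above follows from simple arithmetic on the estimated variance components of the cross-classified model: each level's share is its component divided by the total. The sketch below illustrates that calculation with invented component values, not the study's estimates.

```python
# Hypothetical variance components (made up for illustration) for a cross-classified
# model of CBM-R scores; the proportion at each level is component / total.
variance_components = {
    "examiner": 4.0,
    "measurement": 0.9,
    "student": 18.8,
    "school": 1.3,
}
total = sum(variance_components.values())
for level, var in variance_components.items():
    print(f"{level:12s}: {100 * var / total:5.1f}% of total CBM-R score variance")
```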

  8. Corpus-based error detection in a multilingual medical thesaurus.

    Science.gov (United States)

    Andrade, Roosewelt L; Pacheco, Edson; Cancian, Pindaro S; Nohama, Percy; Schulz, Stefan

    2007-01-01

    Cross-language document retrieval systems require support by some kind of multilingual thesaurus for semantically indexing documents in different languages. The peculiarities of the medical sublanguage, together with the subjectivism of lexicographers' choices, complicates the thesaurus construction process. It furthermore requires a high degree of communication and interaction between the lexicographers involved. In order to detect errors, a systematic procedure is therefore necessary. We here describe a method which supports the maintenance of the multilingual medical subword repository of the MorphoSaurus system which assigns language-independent semantic identifiers to medical texts. Based on the assumption that the distribution of these semantic identifiers should be similar whenever comparing closely related texts in different languages, our approach identifies those semantic identifiers that vary most in distribution comparing language pairs. The revision of these identifiers and the lexical items related to them revealed multiple errors which were subsequently classified and fixed by the lexicographers. The overall quality improvement of the thesaurus was finally measured using the OHSUMED IR benchmark, resulting in a significant improvement of the retrieval quality for one of the languages tested.
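
    A minimal sketch of the distribution-comparison idea is given below, using invented semantic identifiers and texts: identifiers whose relative frequencies differ most between two languages are flagged as candidate thesaurus errors. This reflects the spirit of the screening step described, not the MorphoSaurus implementation.

```python
from collections import Counter

# Toy data: semantic identifiers assigned to comparable English and Portuguese texts
# (identifier names and counts are invented for illustration).
ids_en = ["#inflamm", "#cardio", "#renal", "#inflamm", "#hepat", "#cardio"]
ids_pt = ["#inflamm", "#cardio", "#cardio", "#cardio", "#hepat", "#gastro"]

def rel_freq(ids):
    counts = Counter(ids)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

f_en, f_pt = rel_freq(ids_en), rel_freq(ids_pt)
divergence = {i: abs(f_en.get(i, 0.0) - f_pt.get(i, 0.0)) for i in set(f_en) | set(f_pt)}

# Identifiers with the largest frequency gap are reviewed first by the lexicographers.
for ident, score in sorted(divergence.items(), key=lambda kv: -kv[1]):
    print(f"{ident:10s} |delta freq| = {score:.2f}")
```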

  9. An Estimation of Human Error Probability of Filtered Containment Venting System Using Dynamic HRA Method

    Energy Technology Data Exchange (ETDEWEB)

    Jang, Seunghyun; Jae, Moosung [Hanyang University, Seoul (Korea, Republic of)

    2016-10-15

The human failure events (HFEs) are considered in the development of system fault trees as well as accident sequence event trees as part of Probabilistic Safety Assessment (PSA). As methods for analyzing human error, several techniques, such as the Technique for Human Error Rate Prediction (THERP), Human Cognitive Reliability (HCR), and Standardized Plant Analysis Risk-Human Reliability Analysis (SPAR-H), are used, and new methods for human reliability analysis (HRA) are under development at this time. This paper presents a dynamic HRA method for assessing human failure events, and an estimation of the human error probability for the filtered containment venting system (FCVS) is performed. The action associated with implementation of containment venting during a station blackout sequence is used as an example. In this report, the dynamic HRA method was used to analyze the FCVS-related operator action. The distributions of the required time and the available time were developed by the MAAP code and LHS sampling. Though the numerical calculations given here are only for illustrative purposes, the dynamic HRA method can be a useful tool for estimating human error probability, and it can be applied to any kind of operator action, including the severe accident management strategy.
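
    The core of such a time-based estimate can be illustrated with a small Monte Carlo sketch: the human error probability is approximated as the probability that the time required for the action exceeds the time available. The lognormal parameters below are invented; the paper derives its distributions from MAAP runs and LHS sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical lognormal distributions (in minutes) for the time required to complete
# the venting action and the time available before the relevant plant damage state.
n = 100_000
t_required  = rng.lognormal(mean=np.log(30.0), sigma=0.4, size=n)
t_available = rng.lognormal(mean=np.log(60.0), sigma=0.3, size=n)

# The human error probability is the chance the action cannot be finished in time.
hep = np.mean(t_required > t_available)
print(f"Estimated human error probability: {hep:.4f}")
```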

  10. Probe Error Modeling Research Based on Bayesian Network

    Institute of Scientific and Technical Information of China (English)

    Wu Huaiqiang; Xing Zilong; Zhang Jian; Yan Yan

    2015-01-01

    Probe calibration is carried out under specific conditions; most of the error caused by the change of speed parameter has not been corrected. In order to reduce the measuring error influence on measurement accuracy, this article analyzes the relationship between speed parameter and probe error, and use Bayesian network to establish the model of probe error. Model takes account of prior knowledge and sample data, with the updating of data, which can reflect the change of the errors of the probe and constantly revised modeling results.

  11. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  12. Galois Field Based Very Fast and Compact Error Correcting Technique

    Directory of Open Access Journals (Sweden)

    Alin Sindhu.A,

    2014-01-01

Full Text Available As technology improves, memory devices are becoming larger, so powerful error correction codes are needed. Error correction codes are commonly used to protect memories from soft errors, which change the logical value of memory cells without damaging the circuit. These codes can correct a large number of errors, but generally require complex decoders. In order to avoid this decoding complexity, this work uses Euclidean geometry LDPC codes with a one-step majority-logic decoding technique. This method detects words containing errors in the first iteration of the majority-logic decoding process, reduces the decoding time by stopping the decoding process when no errors are detected, and also reduces the memory access time. The results obtained with this technique show that it is an effective and compact error correcting technique.

  13. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... of the process in terms of stochastic and deterministic trends as well as stationary components. In particular, the behaviour of the cointegrating relations is described in terms of geometric ergodicity. Despite the fact that no deterministic terms are included, the process will have both stochastic trends...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  14. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    , designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis......Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on part of operators...

  15. Computational analysis of splicing errors and mutations in human transcripts

    Directory of Open Access Journals (Sweden)

    Gelfand Mikhail S

    2008-01-01

Full Text Available Abstract Background Most retained introns found in human cDNAs generated by high-throughput sequencing projects seem to result from underspliced transcripts, and thus they capture intermediate steps of pre-mRNA splicing. On the other hand, mutations in splice sites cause exon skipping of the respective exon or activation of pre-existing cryptic sites. Both types of events reflect properties of the splicing mechanism. Results The retained introns were significantly shorter than constitutive ones, and skipped exons are shorter than exons with cryptic sites. Both donor and acceptor splice sites of retained introns were weaker than splice sites of constitutive introns. The authentic acceptor sites affected by mutations were significantly weaker in exons with activated cryptic sites than in skipped exons. The distance from a mutated splice site to the nearest equivalent site is significantly shorter in cases of activated cryptic sites compared to exon skipping events. The prevalence of retained introns within genes monotonically increased in the 5'-to-3' direction (more retained introns close to the 3'-end), consistent with the model of co-transcriptional splicing. The density of exonic splicing enhancers was higher, and the density of exonic splicing silencers lower in retained introns compared to constitutive ones and in exons with cryptic sites compared to skipped exons. Conclusion Thus the analysis of retained introns in human cDNA, exons skipped due to mutations in splice sites and exons with cryptic sites produced results consistent with the intron definition mechanism of splicing of short introns, co-transcriptional splicing, dependence of splicing efficiency on the splice site strength and the density of candidate exonic splicing enhancers and silencers. These results are consistent with other, recently published analyses.

  16. An Activation-Based Model of Routine Sequence Errors

    Science.gov (United States)

    2015-04-01

Occasionally, after completing a step, the screen cleared and the participants were interrupted to perform a simple arithmetic task; the interruption...accordance with the columnar data, the distribution of errors clusters around the +/-1 errors, and falls away in both directions as the error type gets...has been accessed in working memory, slowly decaying as time passes. Activation strengthening is calculated according to: $A_s = \ln\left(\sum_{j=1}^{n} t_j^{-d}\right)$
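
    The base-level activation formula above can be evaluated directly; the short sketch below, with assumed access times and the conventional decay rate d = 0.5, is only an illustration of the equation, not the report's full model.

```python
import math

def base_level_activation(presentation_ages, decay=0.5):
    """Base-level activation in the ACT-R style: A = ln(sum_j t_j^(-d)),
    where t_j is the time since the j-th access and d is the decay rate."""
    return math.log(sum(t ** (-decay) for t in presentation_ages))

# An item accessed 2, 10 and 60 seconds ago is more active than one accessed once, 60 s ago.
print(base_level_activation([2.0, 10.0, 60.0]))
print(base_level_activation([60.0]))
```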

  17. Human error identification for laparoscopic surgery: Development of a motion economy perspective.

    Science.gov (United States)

    Al-Hakim, Latif; Sevdalis, Nick; Maiping, Tanaphon; Watanachote, Damrongpan; Sengupta, Shomik; Dissaranan, Charuspong

    2015-09-01

This study postulates that traditional human error identification techniques fail to consider motion economy principles and, accordingly, their applicability in operating theatres may be limited. This study addresses this gap in the literature with a dual aim. First, it identifies the principles of motion economy that suit the operative environment and second, it develops a new error mode taxonomy for human error identification techniques which recognises motion economy deficiencies affecting the performance of surgeons and predisposing them to errors. A total of 30 principles of motion economy were developed and categorised into five areas. A hierarchical task analysis was used to break down the main tasks of a urological laparoscopic surgery (hand-assisted laparoscopic nephrectomy) into their elements and the new taxonomy was used to identify errors and their root causes resulting from violation of motion economy principles. The approach was prospectively tested in 12 observed laparoscopic surgeries performed by 5 experienced surgeons. A total of 86 errors were identified and linked to the motion economy deficiencies. Results indicate the developed methodology is promising. Our methodology allows error prevention in surgery and the developed set of motion economy principles could be useful for training surgeons on motion economy principles. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    Directory of Open Access Journals (Sweden)

    Hoda Divsar

    2017-03-01

Full Text Available The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized based on a researcher-developed error-coding scheme into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learners in IELTS essays were identified. The results indicated that the two most frequent errors that IELTS candidates committed were related to word choice and verb forms. Based on the research results, pedagogical implications highlight analyzing EFL learners' writing errors as a useful basis for instructional purposes, including creating pedagogical teaching materials that are in line with learners' linguistic strengths and weaknesses.

  19. Payment Error Rate Measurement (PERM)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The PERM program measures improper payments in Medicaid and CHIP and produces error rates for each program. The error rates are based on reviews of the...

  20. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Steers, Jennifer M. [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 and Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095 (United States); Fraass, Benedick A., E-mail: benedick.fraass@cshs.org [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 (United States)

    2016-04-15

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose
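
    To make the gamma comparison concrete, the toy sketch below computes a naive 1D global gamma index for a shifted and scaled Gaussian dose profile. It only illustrates the %Diff/distance-to-agreement criterion discussed in the abstract; it is not a clinical or ArcCHECK implementation, and all profiles and criteria are illustrative.

```python
import numpy as np

def gamma_index_1d(x, dose_ref, dose_eval, dose_crit=0.03, dta_crit=3.0):
    """Naive 1D gamma index with global normalisation.

    For each reference point, gamma is the minimum over evaluated points of
    sqrt((dose difference / dose criterion)^2 + (distance / DTA criterion)^2);
    a point passes when gamma <= 1."""
    norm = dose_crit * np.max(dose_ref)          # global dose criterion, e.g. 3% of max
    gammas = []
    for xi, dr in zip(x, dose_ref):
        dd = (dose_eval - dr) / norm             # dose-difference term
        dx = (x - xi) / dta_crit                 # distance-to-agreement term (mm)
        gammas.append(np.min(np.sqrt(dd**2 + dx**2)))
    return np.array(gammas)

x = np.linspace(0, 100, 101)                     # positions in mm
ref = np.exp(-((x - 50) / 15) ** 2)              # reference profile
ev = np.exp(-((x - 51) / 15) ** 2) * 1.02        # evaluated profile: 1 mm shift, 2% scaling
gam = gamma_index_1d(x, ref, ev)
print(f"Passing rate (3%/3 mm): {100 * np.mean(gam <= 1):.1f}%")
```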

  1. The effect of retinal image error update rate on human vestibulo-ocular reflex gain adaptation.

    Science.gov (United States)

    Fadaee, Shannon B; Migliaccio, Americo A

    2016-04-01

    The primary function of the angular vestibulo-ocular reflex (VOR) is to stabilise images on the retina during head movements. Retinal image movement is the likely feedback signal that drives VOR modification/adaptation for different viewing contexts. However, it is not clear whether a retinal image position or velocity error is used primarily as the feedback signal. Recent studies examining this signal are limited because they used near viewing to modify the VOR. However, it is not known whether near viewing drives VOR adaptation or is a pre-programmed contextual cue that modifies the VOR. Our study is based on analysis of the VOR evoked by horizontal head impulses during an established adaptation task. Fourteen human subjects underwent incremental unilateral VOR adaptation training and were tested using the scleral search coil technique over three separate sessions. The update rate of the laser target position (source of the retinal image error signal) used to drive VOR adaptation was different for each session [50 (once every 20 ms), 20 and 15/35 Hz]. Our results show unilateral VOR adaptation occurred at 50 and 20 Hz for both the active (23.0 ± 9.6 and 11.9 ± 9.1% increase on adapting side, respectively) and passive VOR (13.5 ± 14.9, 10.4 ± 12.2%). At 15 Hz, unilateral adaptation no longer occurred in the subject group for both the active and passive VOR, whereas individually, 4/9 subjects tested at 15 Hz had significant adaptation. Our findings suggest that 1-2 retinal image position error signals every 100 ms (i.e. target position update rate 15-20 Hz) are sufficient to drive VOR adaptation.

  2. A Human Reliability Analysis of Pre-Accident Human Errors in the Low Power and Shutdown PSA of the KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Daeil; Jang, Seungchul

    2007-03-15

Korea Atomic Energy Research Institute, using the ANS Low Power/Shutdown (LPSD) PRA Standard, evaluated the LPSD PSA model of the KSNP, Younggwang (YGN) Units 5 and 6, and identified the items to be improved. The evaluation results of the human reliability analysis (HRA) of the pre-accident human errors in the LPSD PSA model of the KSNP showed that 13 of the 15 items of supporting requirements for them in the ANS PRA Standard were identified as items to be improved. Thus, we newly carried out an HRA for pre-accident human errors in the LPSD PSA model for the KSNP to improve its quality. We considered potential pre-accident human errors for all manual valves and control/instrumentation equipment of the systems modeled in the KSNP LPSD PSA model except the reactor protection system/engineering safety features actuation system. We reviewed 160 manual valves and 56 pieces of control/instrumentation equipment. The number of newly identified pre-accident human errors is 101. Among them, the number of those related to testing/maintenance tasks is 56. The number of those related to calibration tasks is 45. The number of those related only to shutdown operation is 10. It was shown that the contribution of the pre-accident human errors related only to shutdown operation to the core damage frequency of the LPSD PSA model for the KSNP was negligible. The self-assessment results for the new HRA results of pre-accident human errors using the ANS LPSD PRA Standard show that above 80% of the items of its supporting requirements for post-accident human errors were graded as Category II or III. It is expected that the HRA results for the pre-accident human errors presented in this study will be greatly helpful in improving the PSA quality for the domestic nuclear power plants because they have sufficient PSA quality to meet Category II of the supporting requirements for the post-accident human errors in the ANS LPSD PRA Standard.

  4. Minimum Bayesian error probability-based gene subset selection.

    Science.gov (United States)

    Li, Jian; Yu, Tian; Wei, Jin-Mao

    2015-01-01

Sifting functional genes is crucial to new strategies for drug discovery and prospective patient-tailored therapy. Generally, simply generating a gene subset by selecting the top k individually superior genes may yield an inferior gene combination, for some selected genes may be redundant with respect to others. In this paper, we propose to select a gene subset based on the criterion of minimum Bayesian error probability. The method dynamically evaluates all available genes and sifts only one gene at a time. A gene is selected if its combination with the other selected genes gains better classification information. Within the generated gene subset, each individual gene is the most discriminative one in comparison with those that classify cancers in the same way as this gene does, and different genes are more discriminative in combination than individually. The genes selected in this way are likely to be functional ones from the systems biology perspective, for genes tend to co-regulate rather than regulate individually. Experimental results show that the classifiers induced based on this method are capable of classifying cancers with high accuracy, while only a small number of genes are involved.
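
    A crude sketch of the greedy, one-gene-at-a-time selection idea is shown below on synthetic data. The Bayes error criterion is approximated here by the training error of a nearest-centroid classifier, which is a simplification of the method described; the data, subset size and classifier are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic expression data: 100 samples, 20 "genes", with genes 0 and 1 carrying class signal.
n, p, k = 100, 20, 3
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, p))
X[:, :2] += y[:, None] * 1.5

def error(X_sub, y):
    """Training error of a nearest-centroid classifier on the chosen gene subset."""
    c0, c1 = X_sub[y == 0].mean(axis=0), X_sub[y == 1].mean(axis=0)
    pred = (np.linalg.norm(X_sub - c1, axis=1) < np.linalg.norm(X_sub - c0, axis=1)).astype(int)
    return np.mean(pred != y)

# Greedy forward selection: at each step add the gene that, combined with those already
# chosen, gives the lowest estimated classification error.
selected = []
for _ in range(k):
    candidates = [g for g in range(p) if g not in selected]
    best = min(candidates, key=lambda g: error(X[:, selected + [g]], y))
    selected.append(best)
print("selected genes:", selected)
```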

  5. Human error in medical practice: an unavoidable presence El error en la práctica médica: una presencia ineludible

    OpenAIRE

    Gladis Adriana Vélez Álvarez

    2006-01-01

    Making mistakes is a human characteristic and a mechanism to learn, but at the same time it may become a threat to human beings in some scenarios. Aviation and Medicine are good examples of this. Some data are presented about the frequency of error in Medicine, its ubiquity and the circumstances that favor it. A reflection is done about how the error is being managed and why it is not more often discussed. It is proposed that the first step in learning from an error is to accept it as an unav...

  6. Leak in the breathing circuit: CO2 absorber and human error.

    Science.gov (United States)

    Umesh, Goneppanavar; Jasvinder, Kaur; Sagarnil, Roy

    2010-04-01

A couple of reports in the literature have mentioned CO2 absorbers as the cause of a breathing circuit leak during anesthesia. A defective canister, failure to close the absorber chamber and overfilling of the chamber with soda lime were the problems in these reports. Among these, the last two are reports of human error resulting in problems. We report a case where, despite taking precautions in this regard, we experienced a significant leak in the system due to a problem with the CO2 absorber, secondary to human error.

  7. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  8. Homography-Based Correction of Positional Errors in MRT Survey

    CERN Document Server

    Nayak, Arvind; Shankar, N Udaya

    2009-01-01

    The Mauritius Radio Telescope (MRT) images show systematics in the positional errors of sources when compared to source positions in the Molonglo Reference Catalogue (MRC). We have applied two-dimensional homography to correct positional errors in the image domain and avoid re-processing the visibility data. Positions of bright (above 15-$\\sigma$) sources, common to MRT and MRC catalogues, are used to set up an over-determined system to solve for the 2-D homography matrix. After correction, the errors are found to be within 10% of the beamwidth for these bright sources and the systematics are eliminated from the images.
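
    A minimal sketch of the underlying 2-D homography fit (a direct linear transform from point correspondences) is given below with synthetic positions; it illustrates the image-domain correction idea, not the MRT pipeline, and all coordinates and distortions are invented.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate a 3x3 planar homography H (dst ~ H @ src) from >= 4 point pairs
    using the direct linear transform, the kind of 2-D mapping that can absorb
    systematic positional offsets between two catalogues."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    pts_h = np.c_[pts, np.ones(len(pts))] @ H.T
    return pts_h[:, :2] / pts_h[:, 2:3]

# Toy example: catalogue positions distorted by a small affine error, corrected via H.
true_pos = np.array([[10.0, 5.0], [40.0, 7.0], [15.0, 30.0], [42.0, 33.0], [25.0, 20.0]])
measured = true_pos @ np.array([[1.001, 0.002], [-0.001, 0.999]]).T + np.array([0.05, -0.03])
H = fit_homography(measured, true_pos)
print(np.abs(apply_homography(H, measured) - true_pos).max())   # residual after correction
```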

  9. Fourier transform based dynamic error modeling method for ultra-precision machine tool

    Science.gov (United States)

    Chen, Guoda; Liang, Yingchun; Ehmann, Kornel F.; Sun, Yazhou; Bai, Qingshun

    2014-08-01

In some industrial fields, the workpiece surface needs to meet not only the demand for surface roughness but also strict requirements on multi-scale frequency domain errors. The ultra-precision machine tool is the most important carrier for ultra-precision machining of parts, and its errors are the key factor influencing the multi-scale frequency domain errors of the machined surface. Volumetric error modeling is the important bridge linking machine errors and machined surface errors. However, the error modeling methods available from previous research are hard to use for analyzing the relationship between the dynamic errors of the machine motion components and the multi-scale frequency domain errors of the machined surface, a relationship that plays an important reference role in the design and accuracy improvement of ultra-precision machine tools. In this paper, a Fourier transform based dynamic error modeling method is presented on the theoretical basis of rigid body kinematics and homogeneous transformation matrices. A case study is carried out, which shows that the proposed method can successfully realize a consistent and regular numerical description of the machine dynamic errors and the volumetric errors. The proposed method has strong potential for the prediction of frequency domain errors on the machined surface, extraction of multi-scale frequency domain error information, and analysis of the relationship between the machine motion components and the frequency domain errors of the machined surface.
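
    The sketch below illustrates the general idea under invented geometry and error amplitudes: small dynamic errors of a linear axis are propagated to the tool point through homogeneous transformation matrices, and the resulting tool-tip error is examined in the frequency domain with an FFT. It is only a toy illustration, not the paper's model.

```python
import numpy as np

def htm_translation(x, y, z):
    T = np.eye(4); T[:3, 3] = [x, y, z]; return T

def htm_pitch(b):
    """Homogeneous transformation for a small rotation about the Y axis (pitch)."""
    T = np.eye(4)
    T[0, 0] = T[2, 2] = np.cos(b); T[0, 2] = np.sin(b); T[2, 0] = -np.sin(b)
    return T

travel = np.linspace(0, 100.0, 512)                        # commanded X position (mm)
straightness = 2e-3 * np.sin(2 * np.pi * travel / 20)      # assumed Z straightness error (mm)
pitch = 5e-6 * np.sin(2 * np.pi * travel / 50)             # assumed pitch error (rad)
tool_offset = np.array([0.0, 0.0, 50.0, 1.0])              # tool point in the carriage frame

tip_z_error = []
for x, dz, b in zip(travel, straightness, pitch):
    T = htm_translation(x, 0.0, dz) @ htm_pitch(b)         # commanded motion plus error terms
    tip = T @ tool_offset
    tip_z_error.append(tip[2] - tool_offset[2])            # deviation of the tool tip in Z

spectrum = np.abs(np.fft.rfft(np.asarray(tip_z_error)))    # frequency content of the tip error
print("dominant spatial-frequency bin:", int(np.argmax(spectrum[1:]) + 1))
```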

  10. Vision based error detection for 3D printing processes

    Directory of Open Access Journals (Sweden)

    Baumann Felix

    2016-01-01

Full Text Available 3D printers became more popular in the last decade, partly because of the expiration of key patents and the supply of affordable machines. Their origin lies in rapid prototyping. With Additive Manufacturing (AM) it is possible to create physical objects from 3D model data by layer-wise addition of material. Besides professional use for prototyping and low volume manufacturing, they are becoming widespread amongst end users, starting with the so-called Maker Movement. The most prevalent type of consumer grade 3D printer is Fused Deposition Modelling (FDM), also called Fused Filament Fabrication (FFF). This work focuses on FDM machinery because of its widespread occurrence and the large number of open problems like precision and failure. These 3D printers can fail to print objects at a statistical rate depending on the manufacturer and model of the printer. Failures can occur due to misalignment of the print-bed, the print-head, slippage of the motors, warping of the printed material, lack of adhesion or other reasons. The goal of this research is to provide an environment in which these failures can be detected automatically. Direct supervision is inhibited by the recommended placement of FDM printers in separate rooms away from the user due to ventilation issues. The inability to oversee the printing process leads to late or omitted detection of failures. Rejects cause material waste and wasted time, thus lowering the utilization of printing resources. Our approach consists of a camera based error detection mechanism that provides a web based interface for remote supervision and early failure detection. Early failure detection can lead to reduced time spent on broken prints, less material wasted and in some cases salvaged objects.

  11. Adjoint-Based Forecast Error Sensitivity Diagnostics in Data Assimilation

    Science.gov (United States)

    Langland, R.; Daescu, D.

    2016-12-01

    We present an up-to-date review of the adjoint-data assimilation system (DAS) approach to evaluate the forecast sensitivity to error covariance parameters and provide guidance to flow-dependent adaptive covariance tuning (ACT) procedures. New applications of the forecast sensitivity to observation error covariance (FSR) are investigated including the sensitivity to observation error correlations and a priori first-order assessment to the error correlation impact on the forecasts. Issues related to ambiguities in the a posteriori estimation to the observation error covariance (R) and background error covariance (B) are discussed. A synergistic framework to adaptive covariance tuning is considered that combines R-estimates derived from a posteriori covariance diagnosis and FSR derivative information. The evaluation of the forecast sensitivity to the innovation-weight coefficients is introduced as a computationally-feasible approach to account for the characteristics of both R- and B-parameters and perform direct tuning of the DAS gain operator (K). Theoretical aspects are discussed and recent results are provided with the adjoint versions of the Naval Research Laboratory Atmospheric Variational Data Assimilation System-Accelerated Representer (NAVDAS-AR).

  12. Two Error Models for Calibrating SCARA Robots based on the MDH Model

    Directory of Open Access Journals (Sweden)

    Li Xiaolong

    2017-01-01

Full Text Available This paper describes the process of using two error models for calibrating Selective Compliance Assembly Robot Arm (SCARA) robots based on the modified Denavit-Hartenberg (MDH) model, with the aim of improving the robot's accuracy. One of the error models is the position error model, which uses robot position errors with respect to an accurate robot base frame built before the measurement commenced. The other model is the distance error model, which uses only the robot moving distance to calculate errors. Because calibration requires the end-effector to be accurately measured, a laser tracker was used to measure the robot position and distance errors. After calibrating the robot, the end-effector locations were measured again while compensating with the error model parameters obtained from the calibration. The finding is that the robot's accuracy improved greatly after compensating with the calibrated parameters.

  13. How we learn to make decisions: rapid propagation of reinforcement learning prediction errors in humans.

    Science.gov (United States)

    Krigolson, Olav E; Hassall, Cameron D; Handy, Todd C

    2014-03-01

    Our ability to make decisions is predicated upon our knowledge of the outcomes of the actions available to us. Reinforcement learning theory posits that actions followed by a reward or punishment acquire value through the computation of prediction errors-discrepancies between the predicted and the actual reward. A multitude of neuroimaging studies have demonstrated that rewards and punishments evoke neural responses that appear to reflect reinforcement learning prediction errors [e.g., Krigolson, O. E., Pierce, L. J., Holroyd, C. B., & Tanaka, J. W. Learning to become an expert: Reinforcement learning and the acquisition of perceptual expertise. Journal of Cognitive Neuroscience, 21, 1833-1840, 2009; Bayer, H. M., & Glimcher, P. W. Midbrain dopamine neurons encode a quantitative reward prediction error signal. Neuron, 47, 129-141, 2005; O'Doherty, J. P. Reward representations and reward-related learning in the human brain: Insights from neuroimaging. Current Opinion in Neurobiology, 14, 769-776, 2004; Holroyd, C. B., & Coles, M. G. H. The neural basis of human error processing: Reinforcement learning, dopamine, and the error-related negativity. Psychological Review, 109, 679-709, 2002]. Here, we used the brain ERP technique to demonstrate that not only do rewards elicit a neural response akin to a prediction error but also that this signal rapidly diminished and propagated to the time of choice presentation with learning. Specifically, in a simple, learnable gambling task, we show that novel rewards elicited a feedback error-related negativity that rapidly decreased in amplitude with learning. Furthermore, we demonstrate the existence of a reward positivity at choice presentation, a previously unreported ERP component that has a similar timing and topography as the feedback error-related negativity that increased in amplitude with learning. The pattern of results we observed mirrored the output of a computational model that we implemented to compute reward
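
    The prediction-error signal referred to here is the standard reinforcement-learning quantity delta = r - V. The sketch below shows, for a simple two-option gambling task with illustrative parameters, how delta at feedback shrinks with learning while the learned value available at choice time grows; it is a generic model, not the study's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two options with different reward probabilities; values V are learned from the
# prediction error delta = r - V with learning rate alpha (parameters are illustrative).
n_trials, alpha = 200, 0.1
p_reward = {"A": 0.8, "B": 0.2}
V = {"A": 0.0, "B": 0.0}

for t in range(n_trials):
    # Mostly exploit the higher-valued option, occasionally explore at random.
    choice = max(V, key=V.get) if rng.random() > 0.1 else rng.choice(["A", "B"])
    reward = float(rng.random() < p_reward[choice])
    delta = reward - V[choice]            # reward prediction error at feedback
    V[choice] += alpha * delta            # value update
    if t % 50 == 0:
        print(f"trial {t:3d}  choice={choice}  delta={delta:+.2f}  V={V}")
```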

  14. The Role of Human Error in Design, Construction, and Reliability of Marine Structures.

    Science.gov (United States)

    1994-10-01

The entire process is iterative (the design spiral) [Taggart, 1980]. ...of the MSIP project [Bea, 1993] indicated that there were four general approaches that should be considered in developing human error tolerant

  15. Real time remaining useful life prediction based on nonlinear Wiener based degradation processes with measurement errors

    Institute of Scientific and Technical Information of China (English)

    唐圣金; 郭晓松; 于传强; 周志杰; 周召发; 张邦成

    2014-01-01

Real time remaining useful life (RUL) prediction based on condition monitoring is an essential part of condition based maintenance (CBM). In current methods for real time RUL prediction of nonlinear degradation processes, measurement error is not considered and forecasting uncertainty is large. Therefore, an approximate analytical RUL distribution in closed form was proposed for a nonlinear Wiener based degradation process with measurement errors. The maximum likelihood estimation approach was used to estimate the unknown fixed parameters in the proposed model. When newly observed data are available, the random parameter is updated by the Bayesian method so that the estimation adapts to the item's individual characteristics and the uncertainty of the estimation is reduced. The simulation results show that considering measurement errors in the degradation process can significantly improve the accuracy of real time RUL prediction.
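
    As a rough illustration (not the paper's closed-form solution), the sketch below simulates future paths of a linear Wiener degradation process from a noisily measured current level and reads off the remaining useful life as the first passage over a failure threshold; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed linear Wiener degradation X(t) = mu*t + sigma*B(t), failure when X reaches w.
mu, sigma, noise_sd, w = 0.5, 0.2, 0.1, 10.0
dt, t0 = 0.1, 10.0
x_true = mu * t0                                        # true current degradation level
x_obs = x_true + noise_sd * rng.standard_normal()       # measurement error on the observation

# Simulate future degradation paths starting from the (noisy) observed level.
n_paths, n_steps = 5000, 600
increments = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
paths = x_obs + np.cumsum(increments, axis=1)

# Remaining useful life = first time each path crosses the failure threshold w.
crossed = paths >= w
rul = np.where(crossed.any(axis=1), (crossed.argmax(axis=1) + 1) * dt, np.nan)
rul = rul[~np.isnan(rul)]
print(f"mean RUL ~ {rul.mean():.2f}, 10th/90th percentiles: {np.percentile(rul, [10, 90])}")
```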

  16. Forward error correction supported 150 Gbit/s error-free wavelength conversion based on cross phase modulation in silicon

    DEFF Research Database (Denmark)

    Hu, Hao; Andersen, Jakob Dahl; Rasmussen, Anders

    2013-01-01

    We build a forward error correction (FEC) module and implement it in an optical signal processing experiment. The experiment consists of two cascaded nonlinear optical signal processes, 160 Gbit/s all optical wavelength conversion based on the cross phase modulation (XPM) in a silicon nanowire...... and subsequent 160 Gbit/s-to-10 Gbit/s demultiplexing in a highly nonlinear fiber (HNLF). The XPM based all optical wavelength conversion in silicon is achieved by off-center filtering the red shifted sideband on the CW probe. We thoroughly demonstrate and verify that the FEC code operates correctly after...... the optical signal processing, yielding truly error-free 150 Gbit/s (excl. overhead) optically signal processed data after the two cascaded nonlinear processes. © 2013 Optical Society of America....

  17. Error bounds for surface area estimators based on Crofton's formula

    DEFF Research Database (Denmark)

    Kiderlen, Markus; Meschenmoser, Daniel

    2009-01-01

    and the mean is approximated by a finite weighted sum S(A) of the total projections in these directions. The choice of the weights depends on the selected quadrature rule. We define an associated zonotope Z (depending only on the projection directions and the quadrature rule), and show that the relative error...... in the sense that the relative error of the surface area estimator is very close to the minimal error.......According to Crofton’s formula, the surface area S(A) of a sufficiently regular compact set A in R^d is proportional to the mean of all total projections pA (u) on a linear hyperplane with normal u, uniformly averaged over all unit vectors u. In applications, pA (u) is only measured in k directions...

  18. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rate using different modulation schemes over additive white Gaussian noise channel model. Simulation results of Algebraic-geometric codes bit error rate performance using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah’s algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  19. Development of an improved HRA method: A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.H.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [John Wreathall & Co., Dublin, OH (United States)] [and others

    1996-03-01

    Probabilistic risk assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has long-recognized limitations in the analysis of human actions that constrain the use of PRA. In fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors associated with inappropriate interventions by operators in operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification.

  20. Shape Error Analysis of Functional Surface Based on Isogeometrical Approach

    Science.gov (United States)

    YUAN, Pei; LIU, Zhenyu; TAN, Jianrong

    2017-05-01

    The construction of traditional finite element geometry (i.e., the meshing procedure) is time consuming and creates geometric errors. These drawbacks can be overcome by Isogeometric Analysis (IGA), which integrates computer aided design and structural analysis in a unified way. A new IGA beam element is developed by integrating the displacement field of the element, which is approximated by the NURBS basis, with the internal work formula of Euler-Bernoulli beam theory under the small-deformation and elastic assumptions. Two cases of strong coupling of IGA elements, "beam to beam" and "beam to shell", are also discussed. The maximum relative errors of the deformation in the three directions of the cantilever beam benchmark problem between analytical solutions and IGA solutions are less than 0.1%, which illustrates the good performance of the developed IGA beam element. In addition, the application of the developed IGA beam element in the Root Mean Square (RMS) error analysis of a reflector antenna surface, which is a typical functional surface whose precision is closely related to the product's performance, indicates that no matter how coarse the discretization is, the IGA method is able to achieve an accurate solution with fewer degrees of freedom than standard Finite Element Analysis (FEA). The proposed research provides an effective alternative to standard FEA for shape error analysis of functional surfaces.
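    The element above approximates the displacement field with a NURBS basis. Purely as an illustration of what such a basis looks like (not the authors' beam formulation), the sketch below evaluates B-spline basis functions with the Cox-de Boor recursion and forms the rational (NURBS) basis by weighting and renormalising; the knot vector and weights are arbitrary examples.

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    """Cox-de Boor recursion: value of the i-th B-spline basis of degree p at u."""
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = right = 0.0
    d1 = knots[i + p] - knots[i]
    if d1 > 0.0:
        left = (u - knots[i]) / d1 * bspline_basis(i, p - 1, u, knots)
    d2 = knots[i + p + 1] - knots[i + 1]
    if d2 > 0.0:
        right = (knots[i + p + 1] - u) / d2 * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_basis(u, knots, weights, p):
    """Rational (NURBS) basis obtained by weighting B-splines and renormalising."""
    n = len(weights)
    vals = np.array([weights[i] * bspline_basis(i, p, u, knots) for i in range(n)])
    return vals / vals.sum()

# Quadratic basis on one patch (open knot vector), evaluated at u = 0.4.
knots = [0, 0, 0, 0.5, 1, 1, 1]
weights = [1.0, 1.0, 2.0, 1.0]
print(nurbs_basis(0.4, knots, weights, p=2))
```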

  1. Statistical analysis-based error models for the Microsoft Kinect(TM) depth sensor.

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A

    2014-09-18

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions.
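    Published Kinect studies commonly report axial noise growing roughly quadratically with depth; the sketch below encodes such a depth-dependent Gaussian error model. The coefficients are illustrative placeholders, not the per-pixel statistics fitted in this paper.

```python
import numpy as np

def kinect_depth_noise_std(z_m, a=1.2e-3, b=1.9e-3):
    """Approximate axial noise std (metres) as a quadratic function of depth.

    a and b are illustrative coefficients; real models are fitted per device/pixel.
    """
    return a + b * z_m ** 2

def add_depth_noise(depth_map_m, rng=None):
    """Corrupt an ideal depth map with depth-dependent Gaussian noise."""
    rng = np.random.default_rng(rng)
    sigma = kinect_depth_noise_std(depth_map_m)
    return depth_map_m + rng.normal(0.0, 1.0, depth_map_m.shape) * sigma

ideal = np.full((4, 4), 2.5)   # a flat surface 2.5 m from the sensor
print(add_depth_noise(ideal, rng=1))
```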

  2. Statistical Analysis-Based Error Models for the Microsoft Kinect™ Depth Sensor

    Science.gov (United States)

    Choo, Benjamin; Landau, Michael; DeVore, Michael; Beling, Peter A.

    2014-01-01

    The stochastic error characteristics of the Kinect sensing device are presented for each axis direction. Depth (z) directional error is measured using a flat surface, and horizontal (x) and vertical (y) errors are measured using a novel 3D checkerboard. Results show that the stochastic nature of the Kinect measurement error is affected mostly by the depth at which the object being sensed is located, though radial factors must be considered, as well. Measurement and statistics-based models are presented for the stochastic error in each axis direction, which are based on the location and depth value of empirical data measured for each pixel across the entire field of view. The resulting models are compared against existing Kinect error models, and through these comparisons, the proposed model is shown to be a more sophisticated and precise characterization of the Kinect error distributions. PMID:25237896

  3. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main parameters...

  4. An adjoint-based scheme for eigenvalue error improvement

    Energy Technology Data Exchange (ETDEWEB)

    Merton, S.R.; Smedley-Stevenson, R.P., E-mail: Simon.Merton@awe.co.uk, E-mail: Richard.Smedley-Stevenson@awe.co.uk [AWE plc, Berkshire (United Kingdom); Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G., E-mail: c.pain@imperial.ac.uk, E-mail: a.el-sheikh@imperial.ac.uk, E-mail: andrew.buchan@imperial.ac.uk [Department of Earth Science and Engineering, Imperial College, London (United Kingdom)

    2011-07-01

    A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalisations are applied to the eigenvector. Two such normalisations are considered; the first of these is a fission-source type of normalisation and the second is an eigenvector normalisation. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretization. This is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves fidelity of the calculation, it allows assessment of the reliability of numerical schemes to be made and could be used to guide mesh adaption algorithms or to automate mesh generation schemes. (author)
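    As a small numerical illustration of the idea (not the authors' finite-element implementation), the sketch below applies a first-order defect correction to an approximate eigenvalue of a matrix problem, using the residual weighted by the adjoint (left eigenvector) solution; for simplicity the adjoint is computed exactly here.

```python
import numpy as np

def adjoint_corrected_eigenvalue(A, lam_approx, x_approx):
    """First-order defect correction of an approximate eigenpair of A x = lam x.

    The correction convolves the equation residual with the adjoint (left
    eigenvector) solution, in the spirit of the scheme described above.
    """
    # Adjoint solution: dominant left eigenvector of A (right eigenvector of A^T).
    w, V = np.linalg.eig(A.T)
    y = np.real(V[:, np.argmax(np.real(w))])
    residual = A @ x_approx - lam_approx * x_approx
    correction = (y @ residual) / (y @ x_approx)
    return lam_approx + correction

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
exact = np.max(np.linalg.eigvalsh(A))

# Deliberately crude approximation: one power-iteration step from a random vector.
rng = np.random.default_rng(0)
x = A @ rng.normal(size=3)
lam0 = x @ A @ x / (x @ x)          # Rayleigh quotient estimate
print(exact, lam0, adjoint_corrected_eigenvalue(A, lam0, x))
```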

  5. El error en la práctica médica: una presencia ineludible Human error in medical practice: an unavoidable presence

    Directory of Open Access Journals (Sweden)

    Gladis Adriana Vélez Álvarez

    2006-01-01

    Full Text Available Making mistakes is a human characteristic and a mechanism to learn, but at the same time it may become a threat to human beings in some scenarios; aviation and medicine are good examples of this. Some data are presented about the frequency of error in medicine, its ubiquity and the circumstances that favor it. A reflection is made on how error has been confronted and why it is not discussed more openly. It is proposed that the first step in learning from an error is to accept it as an unavoidable presence.

  6. Analysis of offset error for segmented micro-structure optical element based on optical diffraction theory

    Science.gov (United States)

    Su, Jinyan; Wu, Shibin; Yang, Wei; Wang, Lihua

    2016-10-01

    Micro-structure optical elements are gradually applied in modern optical system due to their characters such as light weight, replicating easily, high diffraction efficiency and many design variables. Fresnel lens is a typical micro-structure optical element. So in this paper we take Fresnel lens as base of research. Analytic solution to the Point Spread Function (PSF) of the segmented Fresnel lens is derived based on the theory of optical diffraction, and the mathematical simulation model is established. Then we take segmented Fresnel lens with 5 pieces of sub-mirror as an example. In order to analyze the influence of different offset errors on the system's far-field image quality, we obtain the analytic solution to PSF of the system under the condition of different offset errors by using Fourier-transform. The result shows the translation error along XYZ axis and tilt error around XY axis will introduce phase errors which affect the imaging quality of system. The translation errors along XYZ axis constitute linear relationship with corresponding phase errors and the tilt errors around XY axis constitute trigonometric function relationship with corresponding phase errors. In addition, the standard deviations of translation errors along XY axis constitute quadratic nonlinear relationship with system's Strehl ratio. Finally, the tolerances of different offset errors are obtained according to Strehl Criteria.

  7. Value-based HR practices, i-deals and clinical error control with CSR as a moderator.

    Science.gov (United States)

    Luu, Tuan; Rowley, Chris; Siengthai, Sununta; Thanh Thao, Vo

    2017-05-08

    Purpose Notwithstanding the rising magnitude of system factors in patient safety improvement, "human factors" such as idiosyncratic deals (i-deals) which also contribute to the adjustment of system deficiencies should not be neglected. The purpose of this paper is to investigate the role of value-based HR practices in catalyzing i-deals, which then influence clinical error control. The research further examines the moderating role of corporate social responsibility (CSR) on the effect of value-based HR practices on i-deals. Design/methodology/approach The data were collected from middle-level clinicians from hospitals in the Vietnam context. Findings The research results confirmed the effect chain from value-based HR practices through i-deals to clinical error control with CSR as a moderator. Originality/value The HRM literature is expanded through enlisting i-deals and clinical error control as the outcomes of HR practices.

  8. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images

    Directory of Open Access Journals (Sweden)

    Henry Joutsijoki

    2016-01-01

    Full Text Available The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC colony images can be classified using error-correcting output codes (ECOC. Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN searching, naïve Bayes, classification tree, and discriminant analysis variants classifiers. We use Scaled Invariant Feature Transformation (SIFT based features in classification. The best accuracy (62.4% is obtained with ternary complete ECOC coding design and k-NN classifier (standardized Euclidean distance measure and inverse weighting. The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice in large-scale. ECOC methods examined are promising techniques for solving this challenging problem.
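    scikit-learn ships a generic ECOC wrapper, so the flavour of the approach (an error-correcting output code around a k-NN base classifier) can be reproduced in a few lines. The sketch below uses synthetic three-class feature vectors as a stand-in for the SIFT-based features and colony image data of the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Synthetic 3-class data standing in for features of bad/semigood/good colonies.
X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Error-correcting output codes around a k-NN base learner.
ecoc = OutputCodeClassifier(
    estimator=KNeighborsClassifier(n_neighbors=5, weights="distance"),
    code_size=2.0,          # code length = code_size * n_classes
    random_state=0,
)
ecoc.fit(X_tr, y_tr)
print("accuracy:", ecoc.score(X_te, y_te))
```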

  9. Error-Correcting Output Codes in Classification of Human Induced Pluripotent Stem Cell Colony Images.

    Science.gov (United States)

    Joutsijoki, Henry; Haponen, Markus; Rasku, Jyrki; Aalto-Setälä, Katriina; Juhola, Martti

    2016-01-01

    The purpose of this paper is to examine how well the human induced pluripotent stem cell (hiPSC) colony images can be classified using error-correcting output codes (ECOC). Our image dataset includes hiPSC colony images from three classes (bad, semigood, and good) which makes our classification task a multiclass problem. ECOC is a general framework to model multiclass classification problems. We focus on four different coding designs of ECOC and apply to each one of them k-Nearest Neighbor (k-NN) searching, naïve Bayes, classification tree, and discriminant analysis variants classifiers. We use Scaled Invariant Feature Transformation (SIFT) based features in classification. The best accuracy (62.4%) is obtained with ternary complete ECOC coding design and k-NN classifier (standardized Euclidean distance measure and inverse weighting). The best result is comparable with our earlier research. The quality identification of hiPSC colony images is an essential problem to be solved before hiPSCs can be used in practice in large-scale. ECOC methods examined are promising techniques for solving this challenging problem.

  10. An Adaptive Finite Element Method Based on Optimal Error Estimates for Linear Elliptic Problems

    Institute of Scientific and Technical Information of China (English)

    汤雁

    2004-01-01

    This work belongs to a series of papers on adaptive finite element methods based on optimal error control estimates. This paper is the third part of the series, treating linear elliptic problems on concave corner domains. In the preceding two papers (part 1: Adaptive finite element method based on optimal error estimate for linear elliptic problems on concave corner domain; part 2: Adaptive finite element method based on optimal error estimate for linear elliptic problems on nonconvex polygonal domains), we presented adaptive finite element methods based on the energy norm and the maximum norm. In this paper, an important result is presented and analyzed: the algorithm for error control in the energy norm and the maximum norm in parts 1 and 2 of this series is based on this result.

  11. Development of a web-based simulator for estimating motion errors in linear motion stages

    Science.gov (United States)

    Khim, G.; Oh, J.-S.; Park, C.-H.

    2017-08-01

    This paper presents a web-based simulator for estimating 5-DOF motion errors in linear motion stages. The main calculation modules of the simulator are stored on the server computer. The client uses the client software to send the input parameters to the server and receive the computed results from the server. By using the simulator, we can predict performance measures such as the 5-DOF motion errors and the bearing and table stiffness by entering the design parameters at the design step, before fabricating the stages. Motion errors are calculated with the transfer function method from the rail form errors, which are the most dominant factor in the motion errors. To verify the simulator, the predicted motion errors are compared to the motion errors actually measured in a linear motion stage.

  12. Positioning Errors Predicting Method of Strapdown Inertial Navigation Systems Based on PSO-SVM

    Directory of Open Access Journals (Sweden)

    Xunyuan Yin

    2013-01-01

    Full Text Available The strapdown inertial navigation systems (SINS) have been widely used in many vehicles, such as commercial airplanes, Unmanned Aerial Vehicles (UAVs), and other types of aircraft. In order to evaluate the navigation errors precisely and efficiently, a prediction method based on the support vector machine (SVM) is proposed for positioning error assessment. Firstly, SINS error models that are used for error calculation are established, considering several error sources with respect to the inertial units. Secondly, flight paths for simulation are designed. Thirdly, an SVR-based prediction method is proposed to predict the positioning errors of the navigation systems, and particle swarm optimization (PSO) is used to optimize the SVM parameters. Finally, 600 sets of error parameters of SINS are utilized to train the SVM model, which is then used for the performance prediction of new navigation systems. Comparing the predicted results with the real errors, the latitudinal prediction accuracy is 92.73% and the longitudinal prediction accuracy is 91.64%, and PSO is effective in increasing the prediction accuracy compared with a traditional SVM with fixed parameters. The method is also demonstrated to be effective for error prediction over an entire flight process. Moreover, the prediction method can save 75% of the calculation time compared with analyses based on error models.
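    The combination described above (support vector regression with particle-swarm-optimised hyper-parameters) can be sketched in miniature. The toy regression target, swarm size, and parameter ranges below are assumptions for illustration, not the authors' 600-sample SINS error data set.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.1, 200)   # toy "error" target

def fitness(params):
    """Mean cross-validated R^2 of an SVR with the given (log C, log gamma)."""
    C, gamma = np.exp(params)
    model = SVR(kernel="rbf", C=C, gamma=gamma, epsilon=0.01)
    return cross_val_score(model, X, y, cv=3, scoring="r2").mean()

# Minimal particle swarm over the 2-D log-parameter space.
n_particles, n_iters = 12, 15
pos = rng.uniform(-3, 3, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[np.argmax(pbest_val)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, -5, 5)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmax(pbest_val)].copy()

print("best log(C), log(gamma):", gbest, " CV R^2:", pbest_val.max())
```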

  13. Analysis of Task Types and Error Types of the Human Actions Involved in the Human-related Unplanned Reactor Trip Events

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Park, Jin Kyun; Jung, Won Dea

    2008-02-15

    This report provides the task types and error types involved in the unplanned reactor trip events that occurred during 1986 - 2006. The events caused by the secondary system of the nuclear power plants amount to 67 %, and the remaining 33 % were caused by the primary system. The contribution of the activities of the plant personnel was identified in the following order: corrective maintenance (25.7 %), planned maintenance (22.8 %), planned operation (19.8 %), periodic preventive maintenance (14.9 %), response to a transient (9.9 %), and design/manufacturing/installation (9.9 %). According to the analysis of error modes, the error modes such as control failure (22.2 %), wrong object (18.5 %), omission (14.8 %), wrong action (11.1 %), and inadequate (8.3 %) take up about 75 % of all the unplanned trip events. The analysis of the cognitive functions involved showed that the planning function makes the highest contribution to the human actions leading to unplanned reactor trips, followed by the observation function (23.4 %), the execution function (17.8 %), and the interpretation function (10.3 %). The results of this report are to be used as important bases for the development of error reduction measures or of an error mode prediction system for the test and maintenance tasks in nuclear power plants.

  14. Toward a cognitive taxonomy of medical errors.

    Science.gov (United States)

    Zhang, Jiajie; Patel, Vimla L; Johnson, Todd R; Shortliffe, Edward H

    2002-01-01

    One critical step in addressing and resolving the problems associated with human errors is the development of a cognitive taxonomy of such errors. In the case of errors, such a taxonomy may be developed (1) to categorize all types of errors along cognitive dimensions, (2) to associate each type of error with a specific underlying cognitive mechanism, (3) to explain why, and even predict when and where, a specific error will occur, and (4) to generate intervention strategies for each type of error. Based on Reason's (1992) definition of human errors and Norman's (1986) cognitive theory of human action, we have developed a preliminary action-based cognitive taxonomy of errors that largely satisfies these four criteria in the domain of medicine. We discuss initial steps for applying this taxonomy to develop an online medical error reporting system that not only categorizes errors but also identifies problems and generates solutions.

  15. Quantum secret sharing based on quantum error-correcting codes

    Institute of Scientific and Technical Information of China (English)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure of sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k-1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k-1) threshold scheme. It also takes advantage of classical enhancement of the [2k-1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels.

  16. Zero phase error control based on neural compensation for flight simulator servo system

    Institute of Scientific and Technical Information of China (English)

    Liu Jinkun; He Peng; Er Lianjie

    2006-01-01

    Using future values of the desired input, a zero phase error controller makes the overall system's frequency response exhibit zero phase shift for all frequencies and a small gain error in the low-frequency range; based on this, a new algorithm is presented to design the feedforward controller. However, the zero phase error controller is only suitable for certain linear systems. To reduce the tracking error and improve robustness, the design of the proposed feedforward controller adds a neural compensation based on a diagonal recurrent neural network. Simulation and real-time control results for a flight simulator servo system show the effectiveness of the proposed approach.

  17. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems.

    Science.gov (United States)

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-05-21

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches.

  18. DISTANCE MEASURING MODELING AND ERROR ANALYSIS OF DUAL CCD VISION SYSTEM SIMULATING HUMAN EYES AND NECK

    Institute of Scientific and Technical Information of China (English)

    Wang Xuanyin; Xiao Baoping; Pan Feng

    2003-01-01

    A dual-CCD simulating human eyes and neck (DSHEN) vision system is put forward. Its structure and principle are introduced. The DSHEN vision system can perform some movements simulating human eyes and neck by means of four rotating joints, and realize precise object recognizing and distance measuring in all orientations. The mathematic model of the DSHEN vision system is built, and its movement equation is solved. The coordinate error and measure precision affected by the movement parameters are analyzed by means of intersection measuring method. So a theoretic foundation for further research on automatic object recognizing and precise target tracking is provided.
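    The record analyses distance-measuring error for a two-camera rig by intersection; as a textbook stand-in for that computation, the sketch below uses the parallel-stereo relation Z = f·B/d and first-order propagation of a disparity (matching) error. The focal length, baseline, and disparity error are illustrative values, not parameters of the DSHEN system.

```python
import numpy as np

def stereo_distance(disparity_px, focal_px, baseline_m):
    """Distance from a parallel stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def distance_error(disparity_px, focal_px, baseline_m, disparity_err_px):
    """First-order propagation: dZ ~= Z^2 / (f * B) * dd."""
    z = stereo_distance(disparity_px, focal_px, baseline_m)
    return z ** 2 / (focal_px * baseline_m) * disparity_err_px

f, b = 800.0, 0.12            # focal length [px], baseline [m] (illustrative)
for d in [40.0, 20.0, 10.0, 5.0]:
    z = stereo_distance(d, f, b)
    dz = distance_error(d, f, b, disparity_err_px=0.5)
    print(f"disparity {d:5.1f} px -> Z = {z:6.2f} m, +/- {dz:5.2f} m")
```

    The quadratic growth of dZ with distance is the usual reason stereo range error dominates at the far end of the working volume.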

  19. Does the A-not-B error in adult pet dogs indicate sensitivity to human communication?

    Science.gov (United States)

    Kis, Anna; Topál, József; Gácsi, Márta; Range, Friederike; Huber, Ludwig; Miklósi, Adám; Virányi, Zsófia

    2012-07-01

    Recent dog-infant comparisons have indicated that the experimenter's communicative signals in object hide-and-search tasks increase the probability of perseverative (A-not-B) errors in both species (Topál et al. 2009). These behaviourally similar results, however, might reflect different mechanisms in dogs and in children. Similar errors may occur if the motor response of retrieving the object during the A trials cannot be inhibited in the B trials or if the experimenter's movements and signals toward the A hiding place in the B trials ('sham-baiting') distract the dogs' attention. In order to test these hypotheses, we tested dogs similarly to Topál et al. (2009) but eliminated the motor search in the A trials and 'sham-baiting' in the B trials. We found that neither an inability to inhibit previously rewarded motor response nor insufficiencies in their working memory and/or attention skills can explain dogs' erroneous choices. Further, we replicated the finding that dogs have a strong tendency to commit the A-not-B error after ostensive-communicative hiding and demonstrated the crucial effect of socio-communicative cues as the A-not-B error diminishes when location B is ostensively enhanced. These findings further support the hypothesis that the dogs' A-not-B error may reflect a special sensitivity to human communicative cues. Such object-hiding and search tasks provide a typical case for how susceptibility to human social signals could (mis)lead domestic dogs.

  20. Numerical experiments on the efficiency of local grid refinement based on truncation error estimates

    CERN Document Server

    Syrakos, Alexandros; Bartzis, John G; Goulas, Apostolos

    2015-01-01

    Local grid refinement aims to optimise the relationship between accuracy of the results and number of grid nodes. In the context of the finite volume method, no single local refinement criterion has been globally established as optimum for the selection of the control volumes to subdivide, since it is not easy to associate the discretisation error with an easily computable quantity in each control volume. Often the grid refinement criterion is based on an estimate of the truncation error in each control volume, because the truncation error is a natural measure of the discrepancy between the algebraic finite-volume equations and the original differential equations. However, it is not a straightforward task to associate the truncation error with the optimum grid density because of the complexity of the relationship between truncation and discretisation errors. In the present work several criteria based on a truncation error estimate are tested and compared on a regularised lid-driven cavity case at various Reynolds numbers...

  1. Ultraspectral sounder data compression using a novel marker-based error-resilient arithmetic coder

    Science.gov (United States)

    Huang, Bormin; Sriraja, Y.; Wei, Shih-Chieh

    2006-08-01

    Entropy coding techniques aim to achieve the entropy of the source data by assigning variable-length codewords to symbols with the code lengths linked to the corresponding symbol probabilities. Entropy coders (e.g. Huffman coding, arithmetic coding), in one form or the other, are commonly used as the last stage in various compression schemes. While these variable-length coders provide better compression than fixed-length coders, they are vulnerable to transmission errors. Even a single bit error in the transmission process can cause havoc in the subsequent decoded stream. To cope with it, this research proposes a marker-based sentinel mechanism in entropy coding for error detection and recovery. We use arithmetic coding as an example to demonstrate this error-resilient technique for entropy coding. Experimental results on ultraspectral sounder data indicate that the marker-based error-resilient arithmetic coder provides remarkable robustness to correct transmission errors without significantly compromising the compression gains.

  2. ATHEANA: "a technique for human error analysis" entering the implementation phase

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.; O'Hara, J.; Luckas, W. [Brookhaven National Lab., Upton, NY (United States)] [and others]

    1997-02-01

    Probabilistic Risk Assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has long-recognized limitations in the analysis of human actions that constrain the use of PRA. In fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors associated with inappropriate interventions by operators in operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification. The purpose of the Brookhaven National Laboratory (BNL) project, entitled "Improved HRA Method Based on Operating Experience," is to develop a new method for HRA which is supported by the analysis of risk-significant operating experience. This approach will allow a more realistic assessment and representation of the human contribution to plant risk, and thereby increase the utility of PRA. The project's completed, ongoing, and future efforts fall into four phases: (1) Assessment phase (FY 92/93); (2) Analysis and Characterization phase (FY 93/94); (3) Development phase (FY 95/96); and (4) Implementation phase (FY 96/97 ongoing).

  3. Error Robust H.264 Video Transmission Schemes Based on Multi-frame

    Institute of Scientific and Technical Information of China (English)

    余红斌; 余松煜; 王慈

    2004-01-01

    Multi-frame coding is supported by the emerging H.264 standard. It is important for the enhancement of both coding efficiency and error robustness. In this paper, error-resilient schemes for H.264 based on multi-frame coding were investigated. Error-robust H.264 video transmission schemes were introduced for applications with and without a feedback channel. The experimental results demonstrate the effectiveness of the proposed schemes.

  4. Evaluation of a Web-based Error Reporting Surveillance System in a Large Iranian Hospital.

    Science.gov (United States)

    Askarian, Mehrdad; Ghoreishi, Mahboobeh; Akbari Haghighinejad, Hourvash; Palenik, Charles John; Ghodsi, Maryam

    2017-08-01

    Proper reporting of medical errors helps healthcare providers learn from adverse incidents and improve patient safety. A well-designed and functioning confidential reporting system is an essential component of this process. There are many error reporting methods; however, web-based systems are often preferred because they can provide comprehensive and more easily analyzed information. This study addresses the use of a web-based error reporting system. This interventional study involved the application of an in-house designed "voluntary web-based medical error reporting system." The system has been used since July 2014 in Nemazee Hospital, Shiraz University of Medical Sciences. The rate and severity of errors reported during the year prior to and the year after system launch were compared. The slope of the error report trend line was steep during the first 12 months (B = 105.727, P = 0.00). However, it slowed following launch of the web-based reporting system and was no longer statistically significant (B = 15.27, P = 0.81) by the end of the second year. Most recorded errors were no-harm laboratory types and were due to inattention. Usually, they were reported by nurses and other permanent employees. Most reported errors occurred during morning shifts. Using a standardized web-based error reporting system can be beneficial. This study reports on the performance of an in-house designed reporting system, which appeared to properly detect and analyze medical errors. The system also generated follow-up reports in a timely and accurate manner. Detection of near-miss errors could play a significant role in identifying areas of system defects.

  5. Analysis of Human Errors in Industrial Incidents and Accidents for Improvement of Work Safety

    DEFF Research Database (Denmark)

    Leplat, J.; Rasmussen, Jens

    1984-01-01

    ...recommendations, the method proposed identifies very explicit countermeasures. Improvements require a change in human decisions during equipment design, work planning, or the execution itself. The use of a model of human behavior drawing a distinction between automated skill-based behavior, rule-based 'know-how' and knowledge-based analysis is proposed for identification of the human decisions which are most sensitive to improvements...

  6. Joint Estimation of Contamination, Error and Demography for Nuclear DNA from Ancient Humans.

    Directory of Open Access Journals (Sweden)

    Fernando Racimo

    2016-04-01

    Full Text Available When sequencing an ancient DNA sample from a hominin fossil, DNA from present-day humans involved in excavation and extraction will be sequenced along with the endogenous material. This type of contamination is problematic for downstream analyses, as it will introduce a bias towards the population of the contaminating individual(s). Quantifying the extent of contamination is a crucial step, as it allows researchers to account for possible biases that may arise in downstream genetic analyses. Here, we present an MCMC algorithm to co-estimate the contamination rate, sequencing error rate and demographic parameters (including drift times and admixture rates) for an ancient nuclear genome obtained from human remains, when the putative contaminating DNA comes from present-day humans. We assume we have a large panel representing the putative contaminant population (e.g. European, East Asian or African). The method is implemented in a C++ program called 'Demographic Inference with Contamination and Error' (DICE). We applied it to simulations and genome data from ancient Neanderthals and modern humans. With reasonable levels of genome sequence coverage (>3X), we find we can recover accurate estimates of all these parameters, even when the contamination rate is as high as 50%.
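    DICE itself is a C++ program with a much richer model; purely to illustrate the flavour of co-estimating a contamination rate and an error rate by MCMC, the sketch below runs a random-walk Metropolis sampler under a simplified model in which the endogenous individual carries no derived alleles at the informative sites. All frequencies, depths, and counts are synthetic.

```python
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(0)

# Synthetic data: sites where the endogenous individual is homozygous ancestral,
# the contaminant panel has derived-allele frequency q, and reads are observed.
n_sites = 2000
q = rng.uniform(0.05, 0.95, n_sites)        # contaminant panel frequencies
depth = rng.poisson(5, n_sites) + 1
true_c, true_e = 0.15, 0.01                 # contamination rate, sequencing error
p_derived = (1 - true_e) * (true_c * q) + true_e * (1 - true_c * q)
k = rng.binomial(depth, p_derived)

def log_lik(c, e):
    if not (0 < c < 1 and 0 < e < 0.1):
        return -np.inf
    m = c * q                                # endogenous contributes no derived alleles
    p = (1 - e) * m + e * (1 - m)
    return binom.logpmf(k, depth, p).sum()

# Random-walk Metropolis over (contamination, error).
c, e = 0.5, 0.05
ll = log_lik(c, e)
chain = []
for it in range(6000):
    c_prop = c + rng.normal(0, 0.02)
    e_prop = e + rng.normal(0, 0.003)
    ll_prop = log_lik(c_prop, e_prop)
    if np.log(rng.random()) < ll_prop - ll:
        c, e, ll = c_prop, e_prop, ll_prop
    if it >= 1000:                           # discard burn-in
        chain.append((c, e))

chain = np.array(chain)
print("posterior mean contamination:", chain[:, 0].mean(), " error:", chain[:, 1].mean())
```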

  7. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory

    2009-01-01

    We present a new technology for generating meshes minimizing the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^(-1) and the gradient of the error is proportional to N_h^(-1/2), which are the optimal asymptotics. The methodology is verified with numerical experiments.

  8. Methods for Quantifying and Characterizing Errors in Pixel-Based 3D Rendering.

    Science.gov (United States)

    Hagedorn, John G; Terrill, Judith E; Peskin, Adele P; Filliben, James J

    2008-01-01

    We present methods for measuring errors in the rendering of three-dimensional points, line segments, and polygons in pixel-based computer graphics systems. We present error metrics for each of these three cases. These methods are applied to rendering with OpenGL on two common hardware platforms under several rendering conditions. Results are presented and differences in measured errors are analyzed and characterized. We discuss possible extensions of this error analysis approach to other aspects of the process of generating visual representations of synthetic scenes.

  9. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, D.W.; Forester, J.A. [Sandia National Labs., Albuquerque, NM (United States); Bley, D.C. [Buttonwood Consulting, Inc. (United States)] [and others]

    1998-03-01

    A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing-contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

  10. OOK power model based dynamic error testing for smart electricity meter

    Science.gov (United States)

    Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu

    2017-02-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Then two types of TDLE sequences and three modes of OOK testing dynamic power are proposed. In addition, a novel algorithm, which helps to solve the problem of dynamic electric energy measurement’s traceability, is derived for dynamic errors. Based on the above researches, OOK TDLE sequence generation equipment is developed and a dynamic error testing system is constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to dynamic power mode and the measurement uncertainty is 0.38%.

  11. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
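    As a generic, self-contained illustration of the particle-filter machinery compared in this study (not the MGWD error model itself), the sketch below runs a bootstrap particle filter on a standard scalar nonlinear benchmark system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Nonlinear scalar benchmark: x_k = 0.5 x + 25 x/(1+x^2) + 8 cos(1.2 k) + w,  y = x^2/20 + v
T, Np = 100, 1000
q_std, r_std = np.sqrt(10.0), 1.0

x_true = np.zeros(T)
y = np.zeros(T)
x = 0.0
for k in range(T):
    x = 0.5 * x + 25 * x / (1 + x ** 2) + 8 * np.cos(1.2 * k) + rng.normal(0, q_std)
    x_true[k] = x
    y[k] = x ** 2 / 20.0 + rng.normal(0, r_std)

# Bootstrap particle filter.
particles = rng.normal(0, 2, Np)
est = np.zeros(T)
for k in range(T):
    # Propagate through the process model (prior as proposal).
    particles = (0.5 * particles + 25 * particles / (1 + particles ** 2)
                 + 8 * np.cos(1.2 * k) + rng.normal(0, q_std, Np))
    # Weight by the measurement likelihood, with a floor to avoid all-zero weights.
    w = np.exp(-0.5 * ((y[k] - particles ** 2 / 20.0) / r_std) ** 2) + 1e-300
    w /= w.sum()
    est[k] = np.sum(w * particles)
    # Multinomial resampling.
    particles = particles[rng.choice(Np, Np, p=w)]

print("RMSE:", np.sqrt(np.mean((est - x_true) ** 2)))
```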

  12. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    Full Text Available The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  13. Modeling Distance and Bandwidth Dependency of TOA-Based UWB Ranging Error for Positioning

    NARCIS (Netherlands)

    Bellusci, G.; Janssen, G.J.M.; Yan, J.; Tiberius, C.C.J.M.

    2009-01-01

    A statistical model for the range error provided by TOA estimation using UWB signals is given, based on UWB channel measurements between 3.1 and 10.6 GHz. The range error has been modeled as a Gaussian random variable for LOS and as a combination of a Gaussian and an exponential random variable for NLOS conditions.
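    A minimal sketch of drawing ranging errors from a model of this form is given below: Gaussian for LOS, and Gaussian plus a positive exponential component (taken here as a sum) for NLOS. The parameter values, and their dependence on distance and bandwidth, are placeholders rather than the values fitted to the 3.1-10.6 GHz measurements.

```python
import numpy as np

def sample_range_error(n, los=True, sigma=0.05, exp_mean=0.4, rng=None):
    """Range error [m]: N(0, sigma) for LOS; N(0, sigma) + Exp(exp_mean) for NLOS."""
    rng = np.random.default_rng(rng)
    err = rng.normal(0.0, sigma, n)
    if not los:
        err += rng.exponential(exp_mean, n)   # NLOS adds a positive bias term
    return err

los_err = sample_range_error(10000, los=True, rng=0)
nlos_err = sample_range_error(10000, los=False, rng=0)
print("LOS  mean/std :", los_err.mean(), los_err.std())
print("NLOS mean/std :", nlos_err.mean(), nlos_err.std())
```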

  14. Unit-based clinical pharmacists' prevention of serious medication errors in pediatric inpatients.

    Science.gov (United States)

    Kaushal, Rainu; Bates, David W; Abramson, Erika L; Soukup, Jane R; Goldmann, Donald A

    2008-07-01

    Rates of serious medication errors in three pediatric inpatient units (intensive care, general medical, and general surgical) were measured before and after introduction of unit-based clinical pharmacists. Error rates on the study units and similar patient care units in the same hospital that served as controls were determined during six- to eight-week baseline periods and three-month periods after the introduction of unit-based clinical pharmacists (full-time in the intensive care unit [ICU] and mornings only on the general units). Nurses trained by the investigators reviewed medication orders, medication administration records, and patient charts daily to detect errors, near misses, and adverse drug events (ADEs) and determine whether near misses were intercepted. Two physicians independently reviewed and rated all data collected by the nurses. Serious medication errors were defined as preventable ADEs and nonintercepted near misses. The baseline rates of serious medication errors per 1000 patient days were 29 for the ICU, 8 for the general medical unit, and 7 for the general surgical unit. With unit-based clinical pharmacists, the ICU rate dropped to 6 per 1000 patient days. In the general care units, there was no reduction from baseline in the rates of serious medication errors. A full-time unit-based clinical pharmacist substantially decreased the rate of serious medication errors in a pediatric ICU, but a part-time pharmacist was not as effective in decreasing errors in pediatric general care units.

  15. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Barriere, M.T.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., Reston, VA (United States); Bley, D.C. [PLG, Inc., Newport Beach, CA (United States); Ramey-Smith, A. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-08-01

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAs. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February, 1992, at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development also are discussed.

  16. Feedback-based error monitoring processes during musical performance: an ERP study.

    Science.gov (United States)

    Katahira, Kentaro; Abla, Dilshat; Masuda, Sayaka; Okanoya, Kazuo

    2008-05-01

    Auditory feedback is important in detecting and correcting errors during sound production when a current performance is compared to an intended performance. In the context of vocal production, a forward model, in which a prediction of action consequence (corollary discharge) is created, has been proposed to explain the dampened activity of the auditory cortex while producing self-generated vocal sounds. However, it is unclear how auditory feedback is processed and what neural mechanism underlies the process during other sound production behavior, such as musical performances. We investigated the neural correlates of human auditory feedback-based error detection using event-related potentials (ERPs) recorded during musical performances. Keyboard players of two different skill levels played simple melodies using a musical score. During the performance, the auditory feedback was occasionally altered. Subjects with early and extensive piano training produced a negative ERP component N210, which was absent in non-trained players. When subjects listened to music that deviated from a corresponding score without playing the piece, N210 did not emerge but the imaginary mismatch negativity (iMMN) did. Therefore, N210 may reflect a process of mismatch between the intended auditory image evoked by motor activity, and actual auditory feedback.

  17. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Taynna Vernalha Rocha [Faculdades Pequeno Principe (FPP), Curitiba, PR (Brazil); Cordova Junior, Arno Lotar; Almeida, Cristiane Maria; Piedade, Pedro Argolo; Silva, Cintia Mara da, E-mail: taynnavra@gmail.com [Centro de Radioterapia Sao Sebastiao, Florianopolis, SC (Brazil); Brincas, Gabriela R. Baseggio [Centro de Diagnostico Medico Imagem, Florianopolis, SC (Brazil); Marins, Priscila; Soboll, Danyel Scheidegger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2016-03-15

    Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the ART-210 head phantom was used. (author)

  18. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom*

    Science.gov (United States)

    Almeida, Taynná Vernalha Rocha; Cordova Junior, Arno Lotar; Piedade, Pedro Argolo; da Silva, Cintia Mara; Marins, Priscila; Almeida, Cristiane Maria; Brincas, Gabriela R. Baseggio; Soboll, Danyel Scheidegger

    2016-01-01

    Objective To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the ART-210 head phantom was used. PMID:27141132

  19. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Directory of Open Access Journals (Sweden)

    Taynná Vernalha Rocha Almeida

    2016-04-01

    Full Text Available Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the ART-210 head phantom was used.

  20. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion.

    Science.gov (United States)

    Ricci, Luca; Taffoni, Fabrizio; Formica, Domenico

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and under typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that the dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it depends on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Upper bounds on the absolute and relative errors are found in the ranges [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°], respectively. Alongside the dynamic accuracy, the static accuracy is thoroughly investigated, also with an emphasis on the convergence behavior of the different algorithms. The reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion related applications.
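    One of the two sensor-fusion families examined is the complementary type; as a generic example of that family (not one of the specific algorithms benchmarked here), the sketch below fuses gyroscope and accelerometer readings into a single tilt estimate with a first-order complementary filter on synthetic data.

```python
import numpy as np

def complementary_tilt(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Fuse gyro rate [rad/s] and accelerometer-derived tilt [rad] into one angle.

    alpha close to 1 trusts the integrated gyro at high frequency and the
    accelerometer at low frequency.
    """
    angle = accel_angle[0]
    out = np.empty(len(gyro_rate))
    for k in range(len(gyro_rate)):
        angle = alpha * (angle + gyro_rate[k] * dt) + (1 - alpha) * accel_angle[k]
        out[k] = angle
    return out

# Synthetic 10 s motion: slow sinusoidal tilt, biased/noisy gyro, noisy accelerometer angle.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 0.01)
true_angle = 0.3 * np.sin(2 * np.pi * 0.5 * t)
gyro = np.gradient(true_angle, t) + 0.02 + rng.normal(0, 0.01, t.size)   # bias + noise
accel_angle = true_angle + rng.normal(0, 0.05, t.size)
est = complementary_tilt(gyro, accel_angle)
print("RMS error [deg]:", np.degrees(np.sqrt(np.mean((est - true_angle) ** 2))))
```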

  1. On the Orientation Error of IMU: Investigating Static and Dynamic Accuracy Targeting Human Motion

    Science.gov (United States)

    Ricci, Luca; Taffoni, Fabrizio

    2016-01-01

    The accuracy in orientation tracking attainable by using inertial measurement units (IMU) when measuring human motion is still an open issue. This study presents a systematic quantification of the accuracy under static conditions and under typical human dynamics, simulated by means of a robotic arm. Two sensor fusion algorithms, selected from the classes of the stochastic and complementary methods, are considered. The proposed protocol implements controlled and repeatable experimental conditions and validates accuracy for an extensive set of dynamic movements that differ in frequency and amplitude. We found that the dynamic performance of the tracking is only slightly dependent on the sensor fusion algorithm. Instead, it depends on the amplitude and frequency of the movement, and a major contribution to the error derives from the orientation of the rotation axis w.r.t. the gravity vector. Upper bounds on the absolute and relative errors are found in the ranges [0.7° ÷ 8.2°] and [1.0° ÷ 10.3°], respectively. Alongside the dynamic accuracy, the static accuracy is thoroughly investigated, also with an emphasis on the convergence behavior of the different algorithms. The reported results emphasize critical issues associated with the use of this technology and provide a baseline level of performance for human motion related applications. PMID:27612100

  2. Statistics-based reconstruction method with high random-error tolerance for integral imaging.

    Science.gov (United States)

    Zhang, Juan; Zhou, Liqiu; Jiao, Xiaoxue; Zhang, Lei; Song, Lipei; Zhang, Bo; Zheng, Yi; Zhang, Zan; Zhao, Xing

    2015-10-01

    A three-dimensional (3D) digital reconstruction method for integral imaging with high random-error tolerance based on statistics is proposed. By statistically analyzing the points reconstructed by triangulation from all corresponding image points in an elemental-image array, 3D reconstruction with high random-error tolerance can be realized. To simulate the impact of random errors, random offsets with different error levels are added to different numbers of elemental images in the simulation and optical experiments. The results of the simulation and optical experiments showed that the proposed statistics-based reconstruction method has more stable and better reconstruction accuracy than the conventional reconstruction method. This verifies that the proposed method can effectively reduce the impact of random errors on 3D reconstruction in integral imaging. The method is simple and very helpful to the development of integral imaging technology.
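
    The statistical idea can be illustrated with a toy sketch: many per-pair triangulated estimates of the same 3D point are aggregated with a median instead of a plain mean, so that elemental images carrying large random offsets do not dominate the result. The geometry, noise levels and outlier fraction below are invented and do not reproduce the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

true_point = np.array([1.0, 2.0, 50.0])           # assumed 3D target position [mm]

# Per-pair triangulated estimates: small Gaussian noise on all of them ...
estimates = true_point + rng.normal(0.0, 0.2, size=(200, 3))
# ... plus large random offsets on 20% of the elemental-image pairs.
bad = rng.choice(200, size=40, replace=False)
estimates[bad] += rng.normal(0.0, 5.0, size=(40, 3))

mean_rec = estimates.mean(axis=0)                 # conventional averaging
median_rec = np.median(estimates, axis=0)         # robust, statistics-based choice

print("mean-based error  :", np.linalg.norm(mean_rec - true_point))
print("median-based error:", np.linalg.norm(median_rec - true_point))
```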

  3. Determining The Factors Causing Human Error Deficiencies At A Public Utility Company

    Directory of Open Access Journals (Sweden)

    F. W. Badenhorst

    2004-11-01

    Full Text Available According to Neff (1977), as cited by Bergh (1995), the westernised culture considers work important for industrial mental health. Most individuals experience work positively, which creates a positive attitude. Should this positive attitude be inhibited, workers could lose concentration and become bored, potentially resulting in some form of human error. The aim of this research was to determine the factors responsible for human error events, which lead to power supply failures at Eskom power stations. Proposals were made for the reduction of these contributing factors towards improving plant performance. The target population was 700 panel operators in Eskom’s Power Generation Group. The results showed that factors leading to human error can be reduced or even eliminated. Summary: Neff (1977), as cited by Bergh (1995), writes that in the western culture work is important for occupational mental health. Most people experience work as positive, which fosters a positive attitude. If this positive attitude is inhibited, it can lead to a lack of concentration among workers; workers can become bored, which in turn can lead to human errors. The aim of this research is to determine the factors that lead to human error and contribute to interruptions in the power supply at Eskom power stations. Proposals were made for reducing these contributing factors in order to improve plant performance. The target population was 700 panel operators in Eskom’s Power Generation Group. The results indicate that the factors giving rise to human error can indeed be reduced or eliminated.

  4. A human error taxonomy for analysing healthcare incident reports: assessing reporting culture and its effects on safety performance

    DEFF Research Database (Denmark)

    Itoh, Kenji; Omata, N.; Andersen, Henning Boje

    2009-01-01

    The present paper reports on a human error taxonomy system developed for healthcare risk management and on its application to evaluating safety performance and reporting culture. The taxonomy comprises dimensions for classifying errors, for performance-shaping factors, and for the maturity...

  5. Formal safety assessment and application of the navigation simulators for preventing human error in ship operations

    Institute of Scientific and Technical Information of China (English)

    2005-01-01

    The International Maritime Organization (IMO) has encouraged its member countries to introduce Formal Safety Assessment (FSA) for ship operations since the end of the last century. FSA can be used, through a series of formal assessment steps, to generate effective recommendations and cautions to control marine risks and improve the safety of ships. On the basis of a brief introduction to FSA, this paper describes the ideas of applying FSA to the prevention of human error in ship operations. It especially discusses the investigation and analysis of information and data using navigation simulators and puts forward some suggestions for the introduction and development of FSA research work for safer ship operations.

  6. Report: Human biochemical genetics: an insight into inborn errors of metabolism

    Institute of Scientific and Technical Information of China (English)

    YU Chunli; SCOTT C. Ronald

    2006-01-01

    Inborn errors of metabolism (IEM) include a broad spectrum of defects of various gene products that affect intermediary metabolism in the body. Studying the molecular and biochemical mechanisms of these inherited disorders, systematically summarizing the disease phenotypes and natural history, and providing the diagnostic rationale, methodology and treatment strategy comprise the content of human biochemical genetics. This session focused on: (1) manifestations of representative metabolic disorders; (2) the emergent technology and application of newborn screening for metabolic disorders using tandem mass spectrometry; (3) principles of managing IEM; (4) the concept of carrier testing aimed at prevention. Early detection of patients with IEM allows early intervention and more options for treatment.

  7. A wavelet-based approach to assessing timing errors in hydrologic predictions

    Science.gov (United States)

    Liu, Yuqiong; Brown, James; Demargne, Julie; Seo, Dong-Jun

    2011-02-01

    Streamflow predictions typically contain errors in both the timing and the magnitude of peak flows. These two types of error often originate from different sources (e.g. rainfall-runoff modeling vs. routing) and hence may have different implications and ramifications for both model diagnosis and decision support. Thus, where possible and relevant, they should be distinguished and separated in model evaluation and forecast verification applications. Distinct information on timing errors in hydrologic prediction could lead to more targeted model improvements in a diagnostic evaluation context, as well as better-informed decisions in many practical applications, such as flood prediction, water supply forecasting, river regulation, navigation, and engineering design. However, information on timing errors in hydrologic predictions is rarely evaluated or provided. In this paper, we discuss the importance of assessing and quantifying timing error in hydrologic predictions and present a new approach, which is based on the cross wavelet transform (XWT) technique. The XWT technique transforms the time series of predictions and corresponding observations into a two-dimensional time-scale space and provides information on scale- and time-dependent timing differences between the two time series. The results for synthetic timing errors (both constant and time-varying) indicate that the XWT-based approach can estimate timing errors in streamflow predictions with reasonable reliability. The approach is then employed to analyze the timing errors in real streamflow simulations for a number of headwater basins in the US state of Texas. The resulting timing error estimates were consistent with the physiographic and climatic characteristics of these basins. A simple post-factum timing adjustment based on these estimates led to considerably improved agreement between streamflow observations and simulations, further illustrating the potential for using the XWT-based approach for
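
    A full cross wavelet transform is beyond a short snippet, but the core idea of time-localized timing-error estimation can be sketched with windowed cross-correlation: in each window, the lag that maximizes the correlation between observed and simulated flow is taken as the local timing error. This is a simplified stand-in for the XWT approach, with all window lengths and data invented.

```python
import numpy as np

def windowed_timing_error(obs, sim, window=48, step=24, max_lag=12, dt=1.0):
    """Estimate a time-varying timing error (in units of dt) between two series.

    Positive values mean the simulation runs early and must be delayed to match
    the observations.
    """
    lags = []
    for start in range(0, len(obs) - window, step):
        o = obs[start:start + window] - np.mean(obs[start:start + window])
        s = sim[start:start + window] - np.mean(sim[start:start + window])
        # circular cross-correlation within the window for lags -max_lag..max_lag
        xcorr = [np.sum(o * np.roll(s, k)) for k in range(-max_lag, max_lag + 1)]
        lags.append((int(np.argmax(xcorr)) - max_lag) * dt)
    return np.array(lags)

# Synthetic hourly flows with a diurnal cycle; the "simulation" peaks 6 h early.
rng = np.random.default_rng(1)
t = np.arange(0, 30 * 24)                          # 30 days of hourly data
obs = np.sin(2 * np.pi * t / 24) + 0.1 * rng.standard_normal(t.size)
sim = np.sin(2 * np.pi * (t + 6) / 24) + 0.1 * rng.standard_normal(t.size)
print("median timing error [h]:", np.median(windowed_timing_error(obs, sim)))
```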

  8. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  9. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Directory of Open Access Journals (Sweden)

    Yingxian Zhang

    2014-01-01

    Full Text Available We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length.

  10. Phase Error Correction for Approximated Observation-Based Compressed Sensing Radar Imaging.

    Science.gov (United States)

    Li, Bo; Liu, Falin; Zhou, Chongbin; Lv, Yuanhao; Hu, Jingqiu

    2017-03-17

    Defocus of the reconstructed image of synthetic aperture radar (SAR) occurs in the presence of the phase error. In this work, a phase error correction method is proposed for compressed sensing (CS) radar imaging based on approximated observation. The proposed method has better image focusing ability with much less memory cost, compared to the conventional approaches, due to the inherent low memory requirement of the approximated observation operator. The one-dimensional (1D) phase error correction for approximated observation-based CS-SAR imaging is first carried out and it can be conveniently applied to the cases of random-frequency waveform and linear frequency modulated (LFM) waveform without any a priori knowledge. The approximated observation operators are obtained by calculating the inverse of Omega-K and chirp scaling algorithms for random-frequency and LFM waveforms, respectively. Furthermore, the 1D phase error model is modified by incorporating a priori knowledge and then a weighted 1D phase error model is proposed, which is capable of correcting two-dimensional (2D) phase error in some cases, where the estimation can be simplified to a 1D problem. Simulation and experimental results validate the effectiveness of the proposed method in the presence of 1D phase error or weighted 1D phase error.

  11. Error Concealment Based on Matching-Principles in MPEG-2 Image

    Institute of Scientific and Technical Information of China (English)

    HAO Luguo; WANG Zhaohua; GUO Hui; SU Hansong

    2004-01-01

    The MPEG-2 compression algorithm is very sensitive to transmission errors due to the use of variable-length coding. Any error can lead to serious, noticeable degradation of image quality, such that part of or even the entire slice information is lost until the next resynchronization point is reached. Error concealment (EC) methods offer one way of dealing with this problem. In this paper, two new algorithms, namely spatial EC based on edge-matching and temporal EC based on block-matching, are presented to reconstruct the corrupted regions. According to the simulation results, the proposed methods can recover high-quality MPEG-2 images.
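
    As a toy illustration of temporal error concealment by block matching (not the authors' exact algorithm), the sketch below fills a lost macroblock with the candidate block from the previous frame whose surrounding one-pixel ring best matches, in the sum-of-absolute-differences sense, the correctly received ring around the lost block.

```python
import numpy as np

def conceal_block_temporal(cur, prev, top, left, bs=16, search=8):
    """Fill a lost bs x bs block in `cur` using boundary matching against `prev`."""
    def ring(img, r, c):
        patch = img[r - 1:r + bs + 1, c - 1:c + bs + 1].astype(np.int32)
        return np.concatenate([patch[0, :], patch[-1, :], patch[1:-1, 0], patch[1:-1, -1]])

    target = ring(cur, top, left)                 # intact pixels around the lost block
    best, best_sad = (top, left), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = top + dr, left + dc
            if r < 1 or c < 1 or r + bs + 1 > prev.shape[0] or c + bs + 1 > prev.shape[1]:
                continue
            sad = np.abs(target - ring(prev, r, c)).sum()
            if sad < best_sad:
                best, best_sad = (r, c), sad
    r, c = best
    cur[top:top + bs, left:left + bs] = prev[r:r + bs, c:c + bs]
    return cur

# Toy frames: a previous frame and a shifted current frame with one lost macroblock.
prev = np.random.randint(0, 255, (64, 64), dtype=np.uint8)
cur = np.roll(prev, shift=2, axis=1).copy()       # simple horizontal motion
cur[24:40, 24:40] = 0                             # lost 16x16 block
conceal_block_temporal(cur, prev, 24, 24)
```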

  12. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
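
    To make the recommended metrics concrete, the short sketch below (not taken from the report) computes MAPE alongside the median log accuracy ratio and the median symmetric accuracy from predicted/observed pairs, using the common definitions based on the accuracy ratio Q = predicted/observed; the example data are invented.

```python
import numpy as np

def accuracy_metrics(pred, obs):
    """MAPE plus accuracy-ratio-based alternatives (values must be positive)."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    mape = 100.0 * np.mean(np.abs((pred - obs) / obs))
    log_q = np.log(pred / obs)                                # log accuracy ratio
    mdlq = np.median(log_q)                                   # bias: 0 means unbiased
    mdsa = 100.0 * (np.exp(np.median(np.abs(log_q))) - 1.0)   # median symmetric accuracy [%]
    return {"MAPE [%]": mape,
            "median log accuracy ratio": mdlq,
            "median symmetric accuracy [%]": mdsa}

# Example: a model that systematically over-predicts by ~50% with lognormal scatter.
obs = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
pred = 1.5 * obs * np.exp(0.1 * np.random.randn(obs.size))
print(accuracy_metrics(pred, obs))
```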

  13. Efficiency of Event-Based Sampling According to Error Energy Criterion

    OpenAIRE

    Marek Miskowicz

    2010-01-01

    The paper belongs to the studies that deal with the effectiveness of the particular event-based sampling scheme compared to the conventional periodic sampling as a reference. In the present study, the event-based sampling according to a constant energy of sampling error is analyzed. This criterion is suitable for applications where the energy of sampling error should be bounded (i.e., in building automation, or in greenhouse climate monitoring and control). Compared to the integral sampling c...

  14. Opportunistic error correction for OFDM-based DVB systems

    NARCIS (Netherlands)

    Shao, Xiaoying; Slump, Cornelis H.

    2013-01-01

    DVB-T2 (second generation terrestrial digital video broadcasting) employs LDPC (Low Density Parity Check) codes combined with BCH (Bose-Chaudhuri-Hocquengham) codes, which has a better performance in comparison to convolutional and Reed-Solomon codes used in other OFDM-based DVB systems. However, th

  15. An Inquiry-Based Density Laboratory for Teaching Experimental Error

    Science.gov (United States)

    Prilliman, Stephen G.

    2012-01-01

    An inquiry-based laboratory exercise is described in which introductory chemistry students measure the density of water five times using either a beaker, a graduated cylinder, or a volumetric pipet. Students are also assigned to use one of two analytical balances, one of which is purposefully miscalibrated by 5%. Each group collects data using…

  16. Sliding mode output feedback control based on tracking error observer with disturbance estimator.

    Science.gov (United States)

    Xiao, Lingfei; Zhu, Yue

    2014-07-01

    For a class of systems that suffer from disturbances, an original output feedback sliding mode control method is presented based on a novel tracking error observer with a disturbance estimator. The mathematical models of the systems are not required to be highly accurate, and the disturbances can be vanishing or nonvanishing, while the bounds of the disturbances are unknown. By constructing a differential sliding surface and employing a reaching law approach, a sliding mode controller is obtained. On the basis of an extended disturbance estimator, the new tracking error observer is constructed. By using the observation of the tracking error and the estimation of the disturbance, the sliding mode controller is implementable. It is proved that the disturbance estimation error and the tracking observation error are bounded, the sliding surface is reachable and the closed-loop system is robustly stable. Simulations on a servomotor positioning system and a five-degree-of-freedom active magnetic bearings system verify the effectiveness of the proposed method.

  17. Calculating the reflected radiation error between turbine blades and vanes based on double contour integral method

    Science.gov (United States)

    Feng, Chi; Li, Dong; Gao, Shan; Daniel, Ketui

    2016-11-01

    This paper presents a CFD (Computational Fluid Dynamics) simulation and experimental results for the reflected radiation error from turbine vanes when measuring a turbine blade's temperature using a pyrometer. In the paper, an accurate reflection model based on discrete irregular surfaces is established. A double contour integral method is used to calculate the view factor between the irregular surfaces. The calculated reflected radiation error was found to change with the relative position between blades and vanes, with the temperature distributions of the vanes and blades simulated using CFD. Simulation results indicated that when the vane suction surface temperature ranged from 860 K to 1060 K and the average temperature of the blade pressure surface was 805 K, the pyrometer measurement error can reach up to 6.35%. Experimental results show that the maximum absolute pyrometer error for three different targets on the blade decreases from 6.52%, 4.15% and 1.35% to 0.89%, 0.82% and 0.69%, respectively, after error correction.

  18. Error-based training and emergent awareness in anosognosia for hemiplegia.

    Science.gov (United States)

    Moro, V; Scandola, M; Bulgarelli, C; Avesani, R; Fotopoulou, A

    2015-01-01

    Residual forms of awareness have recently been demonstrated in subjects affected by anosognosia for hemiplegia, but their potential effects in recovery of awareness remain to date unexplored. Emergent awareness refers to a specific facet of motor unawareness in which anosognosic subjects recognise their motor deficits only when they have been requested to perform an action and they realise their errors. Four participants in the chronic phase after a stroke with anosognosia for hemiplegia were recruited. They took part in an "error-full" or "analysis of error-based" rehabilitative training programme. They were asked to attempt to execute specific actions, analyse their own strategies and errors and discuss the reasons for their failures. Pre- and post-training and follow-up assessments showed that motor unawareness improved in all four patients. These results indicate that unsuccessful action attempts with concomitant error analysis may facilitate the recovery of emergent awareness and, sometimes, of more general aspects of awareness.

  19. Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems

    KAUST Repository

    Asner, Liya

    2012-01-01

    We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.

  20. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.

  1. Human errors and violations in computer and information security: the viewpoint of network administrators and security specialists.

    Science.gov (United States)

    Kraemer, Sara; Carayon, Pascale

    2007-03-01

    This paper describes human errors and violations of end users and network administrators in computer and information security. This information is summarized in a conceptual framework for examining the human and organizational factors contributing to computer and information security. This framework includes human error taxonomies to describe the work conditions that contribute adversely to computer and information security, i.e. to security vulnerabilities and breaches. The issue of human error and violation in computer and information security was explored through a series of 16 interviews with network administrators and security specialists. The interviews were audiotaped, transcribed, and analyzed by coding specific themes in a node structure. The result is an expanded framework that classifies types of human error and identifies specific human and organizational factors that contribute to computer and information security. Network administrators tended to view errors created by end users as more intentional than unintentional, while viewing errors created by network administrators as more unintentional than intentional. Organizational factors, such as communication, security culture, policy, and organizational structure, were the most frequently cited factors associated with computer and information security.

  2. Neural Bases of Unconscious Error Detection in a Chinese Anagram Solution Task: Evidence from ERP Study.

    Directory of Open Access Journals (Sweden)

    Hua-Zhan Yin

    Full Text Available In everyday life, error monitoring and processing are important for improving ongoing performance in response to a changing environment. However, detecting an error is not always a conscious process. The temporal activation patterns of brain areas related to cognitive control in the absence of conscious awareness of an error remain unknown. In the present study, event-related potentials (ERPs) in the brain were used to explore the neural bases of unconscious error detection when subjects solved a Chinese anagram task. Our ERP data showed that the unconscious error detection (UED) response elicited a more negative ERP component (N2) than did no error (NE) and detect error (DE) responses in the 300-400-ms time window, and the DE elicited a greater late positive component (LPC) than did the UED and NE in the 900-1200-ms time window after the onset of the anagram stimuli. Taken together with the results of dipole source analysis, the N2 (anterior cingulate cortex) might reflect unconscious/automatic conflict monitoring, and the LPC (superior/medial frontal gyrus) might reflect conscious error recognition.

  3. Efficiency of event-based sampling according to error energy criterion.

    Science.gov (United States)

    Miskowicz, Marek

    2010-01-01

    The paper belongs to the studies that deal with the effectiveness of the particular event-based sampling scheme compared to the conventional periodic sampling as a reference. In the present study, the event-based sampling according to a constant energy of sampling error is analyzed. This criterion is suitable for applications where the energy of sampling error should be bounded (i.e., in building automation, or in greenhouse climate monitoring and control). Compared to the integral sampling criteria, the error energy criterion gives more weight to extreme sampling error values. The proposed sampling principle extends a range of event-based sampling schemes and makes the choice of particular sampling criterion more flexible to application requirements. In the paper, it is proved analytically that the proposed event-based sampling criterion is more effective than the periodic sampling by a factor defined by the ratio of the maximum to the mean of the cubic root of the signal time-derivative square in the analyzed time interval. Furthermore, it is shown that the sampling according to energy criterion is less effective than the send-on-delta scheme but more effective than the sampling according to integral criterion. On the other hand, it is indicated that higher effectiveness in sampling according to the selected event-based criterion is obtained at the cost of increasing the total sampling error defined as the sum of errors for all the samples taken.
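
    For illustration only, the snippet below mimics the error-energy criterion on a densely sampled signal: the last reported sample is held, the squared deviation from it is integrated, and a new sample is triggered whenever the accumulated error energy crosses a fixed threshold. The signal, rates and threshold are invented; this is not the paper's analytical derivation.

```python
import numpy as np

def error_energy_sampling(signal, dt, energy_threshold):
    """Return indices at which an error-energy-triggered sampler reports a sample."""
    sample_idx = [0]
    held_value = signal[0]
    energy = 0.0
    for i in range(1, len(signal)):
        energy += (signal[i] - held_value) ** 2 * dt   # accumulate sampling-error energy
        if energy >= energy_threshold:
            sample_idx.append(i)                        # report a new sample
            held_value = signal[i]
            energy = 0.0
    return np.array(sample_idx)

dt = 0.001
t = np.arange(0.0, 5.0, dt)
x = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 3.0 * t)
idx = error_energy_sampling(x, dt, energy_threshold=1e-4)
print(f"{len(idx)} event-based samples vs {len(t)} periodic samples")
```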

  4. Error Analysis and Compensation of Gyrocompass Alignment for SINS on Moving Base

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2014-01-01

    Full Text Available An improved method of gyrocompass alignment for a strap-down inertial navigation system (SINS) on a moving base, assisted with a Doppler velocity log (DVL), is proposed in this paper. After analyzing the classical gyrocompass alignment principle on a static base, the implementation of compass alignment on a moving base is given in detail. Furthermore, based on an analysis of the velocity error, latitude error, and acceleration error on a moving base, two improvements are introduced to ensure alignment accuracy and speed: (1) the system parameters are redesigned to decrease the acceleration interference, and (2) a repeated data-calculation algorithm is used in order to shorten the prolonged alignment time caused by the changes in parameters. Simulation and test results indicate that the improved method can realize alignment on a moving base quickly and effectively.

  5. Analysis of error in soot characterization using scattering-based techniques

    Institute of Scientific and Technical Information of China (English)

    Lin Ma

    2011-01-01

    The increasing concern over the health and environmental effects of ultrafine soot particles emitted by modern combustion devices calls for new techniques to monitor such particles. Techniques based on light scattering represent one possible monitoring method. In this study, numerical simulations were conducted to examine the errors involved in soot characterization using light scattering techniques. Specifically, this study focused on examining the error caused by the approximate fractal scattering models based on the Rayleigh-Debye-Gans theory (the RDG-FA model). When the angular scattering properties were used to retrieve parameters of soot aggregates (the radius of gyration and the fractal dimension), the RDG-FA method was observed to cause a relative error of ~10% for a representative set of soot parameters. The effects of measurement uncertainties were also investigated. Our results revealed the pattern of the errors: they consisted of a relatively constant baseline error caused by the RDG-FA approximation and an error increasing with the measurement uncertainties. These results are expected to be useful in the analysis and interpretation of experimental data, and also in determining the accuracy and applicable range of scattering techniques.

  6. Method for correlation analysis between scenario and human error

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 宫二玲; 谢红卫

    2011-01-01

    A new method is proposed to analyze the correlation between scenario and human error. The scenario is decomposed into six aspects: operator, machine, task, organization, environment and auxiliary devices. Based on this scenario decomposition, a taxonomy of performance-shaping factors is constructed, which includes thirty-eight items and can provide a reference template for the investigation of human error causes. Based on the skill-based, rule-based and knowledge-based (SRK) model, the slip/lapse/mistake framework is introduced to classify human errors, which are categorized as skill-based slips, skill-based lapses, rule-based slips, rule-based mistakes, and knowledge-based mistakes. Grey relational analysis is introduced to analyze the correlation between performance-shaping factors and human error types, in which the "consequent-antecedent" and "antecedent-consequent" correlations are both analyzed. With this method, the performance-shaping factors related to a specified human error type and the human error types caused by a specified performance-shaping factor can both be sorted according to their correlation degrees. A case study is provided, which shows that the proposed method is applicable to analyzing the correlation between scenario and human error and can provide important implications for human error prediction and reduction. Abstract (translated from Chinese): A method for analyzing the correspondence between scenario context and human error is presented. The scenario is divided into six aspects (operator, machine, task, organization, environment and auxiliary systems), and a performance-shaping-factor classification containing 38 elements is established, providing a reference template for tracing the causes of human error. On the basis of the SRK (skill-based, rule-based and knowledge-based) model, a slip/lapse/mistake classification framework is introduced, dividing human errors into five basic types: skill-based slips, skill-based lapses, rule-based slips, rule-based mistakes and knowledge-based mistakes. Grey relational analysis is used to analyze, in both the "consequence-to-cause" and "cause-to-consequence" directions, the performance-sh…
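
    To illustrate the grey relational analysis step described above, here is a small self-contained sketch (with made-up data) that computes grey relational grades between a reference series, e.g. occurrence counts of one human error type, and several comparison series, e.g. candidate performance-shaping factors, so that the factors can be ranked by correlation degree.

```python
import numpy as np

def grey_relational_grades(reference, comparisons, rho=0.5):
    """Grey relational grades of each comparison series w.r.t. the reference.

    reference   : 1-D array of length K
    comparisons : 2-D array of shape (n_series, K)
    rho         : distinguishing coefficient, conventionally 0.5
    """
    series = np.vstack([reference, comparisons]).astype(float)
    # min-max normalisation of every series to [0, 1]
    mins, maxs = series.min(axis=1, keepdims=True), series.max(axis=1, keepdims=True)
    norm = (series - mins) / (maxs - mins)
    ref, comp = norm[0], norm[1:]
    delta = np.abs(comp - ref)                                  # deviation sequences
    d_min, d_max = delta.min(), delta.max()
    xi = (d_min + rho * d_max) / (delta + rho * d_max)          # relational coefficients
    return xi.mean(axis=1)                                      # relational grades

# Hypothetical counts of one error type and three performance-shaping factors
# observed over six reporting periods (illustrative numbers only).
error_type = np.array([5, 8, 6, 9, 4, 7])
factors = np.array([[4, 9, 5, 10, 3, 6],        # e.g. time pressure
                    [2, 3, 2, 4, 2, 3],         # e.g. poor interface
                    [7, 1, 8, 2, 9, 1]])        # e.g. training (anti-correlated)
grades = grey_relational_grades(error_type, factors)
print("relational grades:", np.round(grades, 3), "ranking:", np.argsort(-grades))
```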

  7. Position error correction in absolute surface measurement based on a multi-angle averaging method

    Science.gov (United States)

    Wang, Weibo; Wu, Biwei; Liu, Pengfei; Liu, Jian; Tan, Jiubin

    2017-04-01

    We present a method for position error correction in absolute surface measurement based on a multi-angle averaging method. Differences between shear rotation measurements at overlapping areas can be used to estimate the unknown relative position errors of the measurements. The model and the solution of the estimation algorithm are discussed in detail. The estimation algorithm adopts a least-squares technique to eliminate azimuthal errors caused by rotation inaccuracy. The cost functions can be minimized to determine the true values of the unknown Zernike polynomial coefficients and the rotation angle. Experimental results show the validity of the proposed method.

  8. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

    The effects of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics were investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation at the minimum gap. The skew quadrupole error was compensated using a multipole corrector, located downstream of the EPU, to minimize betatron coupling and ensure enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  9. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristic based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, in which a set of rule-based algorithms is derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  10. Human factors evaluation of remote afterloading brachytherapy: Human error and critical tasks in remote afterloading brachytherapy and approaches for improved system performance. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    Callan, J.R.; Kelly, R.T.; Quinn, M.L. [Pacific Science and Engineering Group, San Diego, CA (United States)] [and others]

    1995-05-01

    Remote Afterloading Brachytherapy (RAB) is a medical process used in the treatment of cancer. RAB uses a computer-controlled device to remotely insert and remove radioactive sources close to a target (or tumor) in the body. Some RAB problems affecting the radiation dose to the patient have been reported and attributed to human error. To determine the root cause of human error in the RAB system, a human factors team visited 23 RAB treatment sites in the US. The team observed RAB treatment planning and delivery, interviewed RAB personnel, and performed walk-throughs, during which staff demonstrated the procedures and practices used in performing RAB tasks. Factors leading to human error in the RAB system were identified. The impact of those factors on the performance of RAB was then evaluated and prioritized in terms of safety significance. Finally, the project identified and evaluated alternative approaches for resolving the safety significant problems related to human error.

  11. Adaptive Error Detection Method for P300-based Spelling Using Riemannian Geometry

    Directory of Open Access Journals (Sweden)

    Attaullah Sahito

    2016-11-01

    Full Text Available Brain-computer interface (BCI) systems have become a valuable research area of machine learning (ML), and AI-based techniques have brought significant change to traditional medical diagnostic systems. In particular, the electroencephalogram (EEG) measures the electrical activity of the brain, which results from ionic currents in neurons. A BCI system uses these EEG signals to assist humans in different ways. The P300 signal is one of the most important and most widely studied EEG phenomena in the brain-computer interface domain. For instance, the P300 signal can be used in a BCI to translate the subject's intention, from mere thoughts conveyed by brain waves, into actual commands, which can eventually be used to control different electromechanical devices and artificial human body parts. The low signal-to-noise ratio (SNR) of the P300 is a major challenge, because the concurrent heterogeneous activities and artifacts of the brain make it difficult to interpret human intentions. To address this challenge, this research proposes a system called the Adaptive Error Detection method for P300-Based Spelling using Riemannian Geometry. The system comprises three main steps: first, the raw signal is cleaned by preprocessing; second, the most relevant features are extracted using xDAWN spatial filtering along with covariance matrices for handling high-dimensional data; and finally, an elastic net classification algorithm is applied after mapping from the Riemannian manifold to Euclidean space using tangent-space mapping. Results obtained by the proposed method are comparable to state-of-the-art methods and decrease processing time drastically; the results suggest a six-fold decrease in time and better performance with respect to inter-session and inter-subject variability.
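
    The processing chain described above maps naturally onto the open-source pyriemann and scikit-learn packages; the sketch below is one plausible way to assemble it (xDAWN-based covariance estimation, tangent-space mapping, then an elastic-net-penalised logistic classifier) and is not the authors' code. The epoch array and labels are placeholders, and the package APIs should be checked against the installed versions.

```python
import numpy as np
from pyriemann.estimation import XdawnCovariances
from pyriemann.tangentspace import TangentSpace
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder P300 epochs: (n_epochs, n_channels, n_samples) and target/non-target labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16, 128))
y = rng.integers(0, 2, size=200)

clf = make_pipeline(
    XdawnCovariances(nfilter=4),           # xDAWN spatial filtering + covariance features
    TangentSpace(metric="riemann"),         # map SPD matrices to the Euclidean tangent space
    LogisticRegression(penalty="elasticnet", solver="saga",
                       l1_ratio=0.5, C=1.0, max_iter=5000),
)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

    With real epoched EEG (e.g. from MNE-Python) in place of the random placeholders, the same pipeline can be cross-validated across sessions or subjects.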

  12. Performance of cumulant-based rank reduction estimator in presence of unexpected modeling errors

    Institute of Scientific and Technical Information of China (English)

    王鼎

    2015-01-01

    Compared with the rank reduction estimator (RARE) based on second-order statistics (called SOS-RARE), the RARE based on fourth-order cumulants (referred to as FOC-RARE) can handle more sources and restrain the negative impacts of the Gaussian colored noise. However, the unexpected modeling errors appearing in practice are known to significantly degrade the performance of the RARE. Therefore, the direction-of-arrival (DOA) estimation performance of the FOC-RARE is quantitatively derived. The explicit expression for direction-finding (DF) error is derived via the first-order perturbation analysis, and then the theoretical formula for the mean square error (MSE) is given. Simulation results demonstrate the validation of the theoretical analysis and reveal that the FOC-RARE is more robust to the unexpected modeling errors than the SOS-RARE.

  13. An AFM-based methodology for measuring axial and radial error motions of spindles

    Science.gov (United States)

    Geng, Yanquan; Zhao, Xuesen; Yan, Yongda; Hu, Zhenjiang

    2014-05-01

    This paper presents a novel atomic force microscopy (AFM)-based methodology for measurement of axial and radial error motions of a high precision spindle. Based on a modified commercial AFM system, the AFM tip is employed as a cutting tool by which nano-grooves are scratched on a flat surface with the rotation of the spindle. By extracting the radial motion data of the spindle from the scratched nano-grooves, the radial error motion of the spindle can be calculated after subtracting the tilting errors from the original measurement data. Through recording the variation of the PZT displacement in the Z direction in AFM tapping mode during the spindle rotation, the axial error motion of the spindle can be obtained. Moreover the effects of the nano-scratching parameters on the scratched grooves, the tilting error removal method for both conditions and the method of data extraction from the scratched groove depth are studied in detail. The axial error motion of 124 nm and the radial error motion of 279 nm of a commercial high precision air bearing spindle are achieved by this novel method, which are comparable with the values provided by the manufacturer, verifying this method. This approach does not need an expensive standard part as in most conventional measurement approaches. Moreover, the axial and radial error motions of the spindle can both be obtained, indicating that this is a potential means of measuring the error motions of the high precision moving parts of ultra-precision machine tools in the future.

  14. Evaluating Atlantic tropical cyclone track error distributions based on forecast confidence

    OpenAIRE

    Hauke, Matthew D.

    2006-01-01

    A new Tropical Cyclone (TC) surface wind speed probability product from the National Hurricane Center (NHC) takes into account uncertainty in track, maximum wind speed, and wind radii. A Monte Carlo (MC) model is used that draws from probability distributions based on historic track errors. In this thesis, distributions of forecast track errors conditioned on forecast confidence are examined to determine if significant differences exist in distribution characteristics. Two predictors are ...

  15. Quantification of human error probability with the CREAM method under cognitive control modes

    Institute of Scientific and Technical Information of China (English)

    蒋英杰; 孙志强; 宫二玲; 谢红卫

    2011-01-01

    Human errors have nowadays become the main factor that may reduce the reliability and safety of human-machine systems and therefore deserve special attention. For this reason, the quantification of human error probability, a key ingredient of human reliability analysis (HRA), is the research topic of this paper. We first introduce the cognitive reliability and error analysis method (CREAM), a widely accepted HRA method, together with the basic theory it involves, and then describe the steps for quantifying human error probability in detail. Considering that the cognitive control modes provided by CREAM should be continuous, we put forward two methods by which HRA practitioners can define probabilistic control modes, based on a Bayesian net and on fuzzy logic, respectively. To quantify the human error probability under such probabilistic control modes, the lognormal function is taken as the probability density function of the human error probability within each control mode, and the probability density function under a probabilistic cognitive control mode is expressed as a linear combination of the functions of the individual modes. The human error probability in a probabilistic mode is then quantified through theoretical inference, and the Monte Carlo algorithm is applied to improve the efficiency of the calculation. Finally, the validity of the method is demonstrated by means of a sample study that illustrates the process of the method. Abstract (translated from Chinese): This paper studies human error in human reliability analysis (Human Reliability Analysis, HRA)…
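
    A minimal sketch of the quantification idea, under stated assumptions: each control mode is assigned a lognormal HEP density spanning its nominal interval (the commonly cited CREAM bounds are used below and should be verified against the source), the control-mode probabilities obtained from, e.g., a Bayesian net weight those densities, and Monte Carlo sampling from the mixture yields the HEP distribution for the probabilistic control mode. The mode probabilities in the example are invented.

```python
import numpy as np

# Commonly cited CREAM control-mode HEP intervals (lower, upper); treat as assumptions.
MODE_INTERVALS = {
    "strategic":     (0.5e-5, 1e-2),
    "tactical":      (1e-3,   1e-1),
    "opportunistic": (1e-2,   0.5),
    "scrambled":     (1e-1,   1.0),
}

def sample_hep(mode_probs, n=100_000, rng=np.random.default_rng(0)):
    """Monte Carlo HEP samples from a mixture of per-mode lognormal densities.

    Each mode's lognormal is set so its interval bounds sit at the 5th/95th percentiles.
    `mode_probs` are the control-mode probabilities (e.g. from a Bayesian net).
    """
    modes = list(MODE_INTERVALS)
    probs = np.array([mode_probs[m] for m in modes])
    choices = rng.choice(len(modes), size=n, p=probs / probs.sum())
    samples = np.empty(n)
    for i, mode in enumerate(modes):
        lo, hi = MODE_INTERVALS[mode]
        mu = 0.5 * (np.log(lo) + np.log(hi))             # log-median of the mode
        sigma = (np.log(hi) - np.log(lo)) / (2 * 1.645)  # bounds at ~5th/95th percentiles
        mask = choices == i
        samples[mask] = rng.lognormal(mu, sigma, size=mask.sum())
    return np.clip(samples, 0.0, 1.0)

# Invented mode probabilities for a scenario lying between tactical and opportunistic.
hep = sample_hep({"strategic": 0.1, "tactical": 0.5, "opportunistic": 0.35, "scrambled": 0.05})
print("mean HEP:", hep.mean(), " 5th-95th pct:", np.percentile(hep, [5, 95]))
```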

  16. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix

    National Research Council Canada - National Science Library

    John B Holmes; Ken G Dodds; Michael A Lee

    2017-01-01

    .... While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix...

  17. Understanding Human Error Based on Automated Analyses vol 1

    Data.gov (United States)

    National Aeronautics and Space Administration — A proactive approach to identifying and alleviating life-threatening conditions in the aviation system entails a well-defined process of identifying threats,...

  18. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  19. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors ΔωN was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal ΔωN was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  20. Temporal and Developmental-Stage Variation in the Occurrence of Mitotic Errors in Tripronuclear Human Preimplantation Embryos

    NARCIS (Netherlands)

    Mantikou, Eleni; van Echten-Arends, Jannie; Sikkema-Raddatz, Birgit; van der Veen, Fulco; Repping, Sjoerd; Mastenbroek, Sebastiaan

    2013-01-01

    Mitotic errors during early development of human preimplantation embryos are common, rendering a large proportion of embryos chromosomally mosaic. It is also known that the percentage of diploid cells in human diploid-aneuploid mosaic embryos is higher at the blastocyst than at the cleavage stage. I

  1. AN IV CATHETER FRAGMENTS DURING MDCT SCANNING OF HUMAN ERROR: EXPERIMENTAL AND REPRODUCIBLE MICROSCOPIC MAGNIFICATION ANALYSIS

    Energy Technology Data Exchange (ETDEWEB)

    Kweon, Dae Cheol [Dept. of Radiologic Science, Shin Heung College, Uijeongbu (Korea, Republic of); Lee, Jong Woong [Dept. of of Radiology, Kyung Hee University Hospital at Gang-dong, Seoul (Korea, Republic of); Choi, Ji Won [Dept. of Radiological Science, Jeonju University, Jeonju (Korea, Republic of); Yang, Sung Hwan [Dept. of of Prosthetics and Orthotics, Korean National College of Rehabilitation and Welfare, Pyeongtaek (Korea, Republic of); Dong, Kyung Rae [Dept. of Radiological Technology, Gwangju Health College University, Gwangju (Korea, Republic of); Chung, Won Kwan [Dept. of of Nuclear Engineering, Chosun University, Gwangju (Korea, Republic of)

    2011-12-15

    The use of intravenous catheters is occasionally complicated by intravascular fragments and swelling of the catheter fragments. We present a patient in whom an intravenous catheter fragment was retrieved from the dorsal metacarpal vein following its incidental detection on CT examination. The case demonstrates the utility of microscopy and multi-detector CT in localizing small or subtle intravenous catheter fragments resulting from human error. In this case of IV catheter fragments in the metacarpal vein, reproducible and microscopy data allowed complete localization of the missing fragment and guided surgery with respect to the optimal incision site for fragment removal. These reproducible studies may help to determine the best course of action and treatment for patients who present with such a case.

  2. Adaptation of hybrid human-computer interaction systems using EEG error-related potentials.

    Science.gov (United States)

    Chavarriaga, Ricardo; Biasiucci, Andrea; Forster, Killian; Roggen, Daniel; Troster, Gerhard; Millan, Jose Del R

    2010-01-01

    Performance improvement in both humans and artificial systems strongly relies on the ability to recognize erroneous behavior or decisions. This paper, which builds upon previous studies on EEG error-related signals, presents a hybrid approach for human-computer interaction that uses human gestures to send commands to a computer and exploits brain activity to provide implicit feedback about the recognition of such commands. Using a simple computer game as a case study, we show that EEG activity evoked by erroneous gesture recognition can be classified in single trials above random levels. Automatic artifact rejection techniques are used, taking into account that subjects are allowed to move during the experiment. Moreover, we present a simple adaptation mechanism that uses the EEG signal to label newly acquired samples and can be used to re-calibrate the gesture recognition system in a supervised manner. Offline analyses show that, although the achieved EEG decoding accuracy is far from perfect, these signals convey sufficient information to significantly improve the overall system performance.

  3. Human factors engineering in healthcare systems: the problem of human error and accident management.

    Science.gov (United States)

    Cacciabue, P C; Vella, G

    2010-04-01

    This paper discusses some crucial issues associated with the exploitation of data and information about health care for the improvement of patient safety. In particular, the issues of human factors and safety management are analysed in relation to exploitation of reports about non-conformity events and field observations. A methodology for integrating field observation and theoretical approaches for safety studies is described. Two sample cases are discussed in detail: the first one makes reference to the use of data collected in the aviation domain and shows how these can be utilised to define hazard and risk; the second one concerns a typical ethnographic study in a large hospital structure for the identification of most relevant areas of intervention. The results show that, if national authorities find a way to harmonise and formalize critical aspects, such as the severity of standard events, it is possible to estimate risk and define auditing needs, well before the occurrence of serious incidents, and to indicate practical ways forward for improving safety standards.

  4. Precision and shortcomings of yaw error estimation using spinner-based light detection and ranging

    DEFF Research Database (Denmark)

    Kragh, Knud Abildgaard; Hansen, Morten Hartvig; Mikkelsen, Torben

    2013-01-01

    When extracting energy from the wind using horizontal axis wind turbines, the ability to align the rotor axis with the mean wind direction is crucial. In previous work, a method for estimating the yaw error based on measurements from a spinner-mounted light detection and ranging (LIDAR) device was developed and tested. In this study, the simulation parameter space is extended to include higher levels of turbulence intensity. Furthermore, the method is applied to experimental data and compared with met-mast data corrected for a calibration error that was not discovered during previous work. Finally, the shortcomings of using a spinner-mounted LIDAR for yaw error estimation are discussed. The extended simulation study shows that with the applied method, the yaw error can be estimated with a precision of a few degrees, even in highly turbulent flows. Applying the method to experimental data reveals an average…

  5. Errors in Thermographic Camera Measurement Caused by Known Heat Sources and Depth Based Correction

    Directory of Open Access Journals (Sweden)

    Mark Christian E. Manuel

    2016-03-01

    Full Text Available Thermal imaging has been shown to be a better tool for the quantitative measurement of temperature than single-spot infrared thermometers. However, thermographic cameras can encounter errors in acquiring accurate temperature measurements in the presence of other environmental heat sources. Some of these errors arise due to the inability of the thermal camera to detect objects and features in the infrared domain. In this paper, the thermal image is registered to a stereo image from a Kinect system prior to depth-based correction. Experiments demonstrating the error are presented, together with the determination of the measurement errors under prior knowledge of the thermographed scene. The proposed correction scheme improves the accuracy of the thermal image through augmentation using the Kinect system.

  6. An Enhanced MEMS Error Modeling Approach Based on Nu-Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Deepak Bhatt

    2012-07-01

    Full Text Available Micro Electro Mechanical System (MEMS)-based inertial sensors have made possible the development of civilian land vehicle navigation systems by offering a low-cost solution. However, the accurate modeling of MEMS sensor errors is one of the most challenging tasks in the design of low-cost navigation systems. These sensors exhibit significant errors such as biases, drift and noise, which are negligible for higher-grade units. Different conventional techniques utilizing the Gauss-Markov model and neural network methods have previously been utilized to model the errors. However, the Gauss-Markov model works unsatisfactorily in the case of MEMS units due to the presence of high inherent sensor errors. On the other hand, modeling the random drift utilizing a Neural Network (NN) is time-consuming, thereby affecting real-time implementation. We overcome these existing drawbacks by developing an enhanced Support Vector Machine (SVM)-based error model. Unlike NNs, SVMs do not suffer from local minima or over-fitting problems and deliver a reliable global solution. Experimental results proved that the proposed SVM approach reduced the noise standard deviation by 10–35% for gyroscopes and 61–76% for accelerometers. Further, positional error drift under static conditions improved by 41% and 80% in comparison to the NN and GM approaches.
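
    As a rough illustration of the SVM-based error-modeling idea (not the authors' implementation), the snippet below trains scikit-learn's NuSVR to predict the next value of a synthetic gyro drift sequence from a window of past values; real use would rely on recorded static MEMS data and tuned hyper-parameters.

```python
import numpy as np
from sklearn.svm import NuSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic gyro drift: slowly varying bias (random walk) plus white noise [deg/h].
n = 3000
drift = np.cumsum(0.01 * rng.standard_normal(n)) + 0.5 * rng.standard_normal(n)

# Build lagged samples: predict drift[k] from the previous `lag` values.
lag = 10
X = np.array([drift[i:i + lag] for i in range(n - lag)])
y = drift[lag:]
split = int(0.8 * len(y))

model = make_pipeline(StandardScaler(), NuSVR(nu=0.5, C=1.0, kernel="rbf"))
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])

raw_std = np.std(y[split:])                  # drift std before compensation
residual_std = np.std(y[split:] - pred)      # std after subtracting the SVR prediction
print(f"std before: {raw_std:.3f}, after: {residual_std:.3f} deg/h")
```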

  7. Improved modeling of multivariate measurement errors based on the Wishart distribution.

    Science.gov (United States)

    Wentzell, Peter D; Cleary, Cody S; Kompany-Zareh, M

    2017-03-22

    The error covariance matrix (ECM) is an important tool for characterizing the errors from multivariate measurements, representing both the variance and covariance in the errors across multiple channels. Such information is useful in understanding and minimizing sources of experimental error and in the selection of optimal data analysis procedures. Experimental ECMs, normally obtained through replication, are inherently noisy, inconvenient to obtain, and offer limited interpretability. Significant advantages can be realized by building a model for the ECM based on established error types. Such models are less noisy, reduce the need for replication, mitigate mathematical complications such as matrix singularity, and provide greater insights. While the fitting of ECM models using least squares has been previously proposed, the present work establishes that fitting based on the Wishart distribution offers a much better approach. Simulation studies show that the Wishart method results in parameter estimates with a smaller variance and also facilitates the statistical testing of alternative models using a parameterized bootstrap method. The new approach is applied to fluorescence emission data to establish the acceptability of various models containing error terms related to offset, multiplicative offset, shot noise and uniform independent noise. The implications of the number of replicates, as well as single vs. multiple replicate sets are also described.
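
A minimal sketch of fitting a parameterized error covariance matrix (ECM) model by maximizing a Wishart likelihood, in the spirit of the approach described above. The two-term ECM model (uniform independent noise plus fully correlated offset noise), the dimensions, and all numbers are illustrative assumptions, not the paper's models or data.

```python
# Hedged sketch: Wishart-likelihood fitting of a parameterized ECM model.
import numpy as np
from scipy.stats import wishart
from scipy.optimize import minimize

rng = np.random.default_rng(1)
p, n = 5, 30                       # channels, replicates (assumed)

def ecm(theta):
    """ECM model: uniform independent noise + fully correlated offset noise."""
    s_ind, s_off = np.exp(theta)   # log-parameterization keeps variances positive
    return s_ind * np.eye(p) + s_off * np.ones((p, p))

# Simulate replicate measurement errors from a known "true" ECM.
true_cov = ecm(np.log([0.5, 0.2]))
errors = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
A = (errors - errors.mean(0)).T @ (errors - errors.mean(0))   # scatter matrix

def neg_loglik(theta):
    # A ~ Wishart(n-1, Sigma(theta)) when the errors are Gaussian.
    return -wishart(df=n - 1, scale=ecm(theta)).logpdf(A)

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
print("estimated variances:", np.exp(fit.x))
```

Compared with least-squares fitting of the sample ECM, the Wishart likelihood weights the scatter matrix according to its actual sampling distribution, which is the source of the lower parameter variance reported above.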

  8. Error Evaluation in a Stereovision-Based 3D Reconstruction System

    Directory of Open Access Journals (Sweden)

    Kohler Sophie

    2010-01-01

    Full Text Available The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process of imaged objects, specifically of the set of geometric primitives describing their outline, extracted from a pair of images with known camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step of the whole reconstruction procedure. The fitting parameters describing the geometric features composing the workpiece to be evaluated are used as quality measures to determine error bounds and finally to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks further allows evaluating the quality of the 3D reconstruction. The resulting final error estimates finally make it possible to state whether the reconstruction results fulfill a priori defined criteria, for example dimensional constraints including tolerance information, as required in vision-based quality control applications.

  9. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    Science.gov (United States)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    To improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed that addresses the tracking error and random drift of the gyroscope sensor. Following the principles of time-series analysis of random sequences, an AR model of the gyro random error is established and the gyro output signals are repeatedly filtered with a Kalman filter. An ARM microcontroller drives the servo motor with a fuzzy PID full closed-loop control algorithm, to which lead compensation and feed-forward links are added to reduce the response lag to the angle input; the feed-forward link lets the output follow the input closely, while the lead compensation link shortens the response to input signals and thereby reduces errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) are used to monitor the servo motor state in real time: the video module gathers the video signals and sends them wirelessly to the host computer, where the motor running state is displayed in a Visual Basic 6.0 window. The main error sources are also analyzed in detail; quantitative analysis of the errors arising from the bandwidth and the gyro sensor makes the contribution of each error to the total error more intuitive and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
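
A minimal sketch of the gyro-smoothing step: a scalar Kalman filter whose state follows a first-order AR model of the random drift. The AR coefficient, the noise variances and the synthetic data are illustrative assumptions, not the paper's identified model.

```python
# Hedged sketch: Kalman filtering of gyro output with an AR(1) drift model.
import numpy as np

rng = np.random.default_rng(2)

# Simulate gyro output: AR(1) random drift plus wideband measurement noise.
N, phi, q, r = 2000, 0.98, 1e-4, 1e-2
drift = np.zeros(N)
for k in range(1, N):
    drift[k] = phi * drift[k - 1] + np.sqrt(q) * rng.standard_normal()
measured = drift + np.sqrt(r) * rng.standard_normal(N)

# Scalar Kalman filter for x_k = phi * x_{k-1} + w_k, z_k = x_k + v_k.
x_hat, P = 0.0, 1.0
filtered = np.empty(N)
for k in range(N):
    # Predict with the AR(1) model.
    x_hat, P = phi * x_hat, phi * P * phi + q
    # Update with the measurement.
    K = P / (P + r)
    x_hat = x_hat + K * (measured[k] - x_hat)
    P = (1.0 - K) * P
    filtered[k] = x_hat

print("noise std before filtering: %.4f" % (measured - drift).std())
print("noise std after filtering:  %.4f" % (filtered - drift).std())
```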

  10. Compensation of position errors in passivity based teleoperation over packet switched communication networks

    NARCIS (Netherlands)

    Secchi, C; Stramigioli, Stefano; Fantuzzi, C.

    Because of the use of scattering based communication channels, passivity based telemanipulation systems can be subject to a steady state position error between master and slave robots. In this paper, we consider the case in which the passive master and slave sides communicate through a packet

  11. NEW APPROACH FOR RELIABILITY-BASED DESIGN OPTIMIZATION: MINIMUM ERROR POINT

    Institute of Scientific and Technical Information of China (English)

    LIU Deshun; YUE Wenhui; ZHU Pingyu; DU Xiaoping

    2006-01-01

    Conventional reliability-based design optimization (RBDO) requires the use of the most probable point (MPP) method for a probabilistic analysis of the reliability constraints. A new approach, called the minimum error point (MEP) method or the MEP-based method, is presented for reliability-based design optimization; its idea is to minimize the error produced by approximating the performance functions. The MEP-based method uses the first-order Taylor expansion at the MEP instead of the MPP. Examples demonstrate that MEP-based design optimization can ensure product reliability at the required level, which is imperative for many important engineering systems. The MEP-based reliability design optimization method is feasible and is considered an alternative for solving reliability design optimization problems. It is more robust than the commonly used MPP-based method for some irregular performance functions.

  12. Lossless compression of hyperspectral images based on the prediction error block

    Science.gov (United States)

    Li, Yongjun; Li, Yunsong; Song, Juan; Liu, Weijia; Li, Jiaojiao

    2016-05-01

    A lossless compression algorithm for hyperspectral images based on distributed source coding is proposed for the effective compression of spaceborne hyperspectral data. To make full use of both intra-frame and inter-frame correlation, a prediction error block scheme is introduced. Compared with the scalar coset based distributed compression method (s-DSC) proposed by E. Magli et al., in which the bitrate of the whole block is determined by its maximum prediction error, and with the s-DSC-classify scheme proposed by Song Juan, which is based on classification and coset coding, the prediction error block scheme reduces the bitrate efficiently. Experimental results on hyperspectral images show that the proposed scheme offers both high compression performance and low encoder and decoder complexity, making it suitable for on-board compression of hyperspectral images.
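
A minimal sketch of why coding prediction errors in small blocks can lower the bitrate when each block's rate is driven by its worst-case prediction error, as in the scalar-coset scheme referenced above. The synthetic error data and block size are illustrative assumptions.

```python
# Hedged sketch: worst-case-driven rate of whole-band vs. block-based coding.
import numpy as np

rng = np.random.default_rng(3)

# Synthetic inter-band prediction errors: mostly small, a few large outliers.
errors = rng.integers(-3, 4, size=4096)
errors[rng.choice(errors.size, 8, replace=False)] += 200

def bits_per_sample(e):
    """Bits needed when every sample in the block is coded with the range of the
    maximum-magnitude prediction error in that block."""
    return int(np.ceil(np.log2(2 * np.abs(e).max() + 1)))

whole = bits_per_sample(errors)
block = 64
per_block = [bits_per_sample(errors[i:i + block]) for i in range(0, errors.size, block)]

print("whole-band rate : %d bits/sample" % whole)
print("block-based rate: %.2f bits/sample (average)" % np.mean(per_block))
```

Because only the few blocks containing outliers pay the high worst-case rate, the average rate over all blocks is well below the whole-band rate.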

  13. Error-detection-based quantum fault tolerance against discrete Pauli noise

    CERN Document Server

    Reichardt, B W

    2006-01-01

    A quantum computer -- i.e., a computer capable of manipulating data in quantum superposition -- would find applications including factoring, quantum simulation and tests of basic quantum theory. Since quantum superpositions are fragile, the major hurdle in building such a computer is overcoming noise. Developed over the last couple of years, new schemes for achieving fault tolerance based on error detection, rather than error correction, appear to tolerate as much as 3-6% noise per gate -- an order of magnitude better than previous procedures. But proof techniques could not show that these promising fault-tolerance schemes tolerated any noise at all. With an analysis based on decomposing complicated probability distributions into mixtures of simpler ones, we rigorously prove the existence of constant tolerable noise rates ("noise thresholds") for error-detection-based schemes. Numerical calculations indicate that the actual noise threshold this method yields is lower-bounded by 0.1% noise per gate.

  14. A study on fatigue measurement of operators for human error prevention in NPPs

    Energy Technology Data Exchange (ETDEWEB)

    Ju, Oh Yeon; Il, Jang Tong; Meiling, Luo; Hee, Lee Young [KAERI, Daejeon (Korea, Republic of)

    2012-10-15

    The identification and analysis of individual factors of operators, which are among the various causes of degraded human performance, is not easy in NPPs. Individual factors for operators include work type (including shift work), environment, personality, qualification, training, education, cognition, fatigue, job stress, and workload. Research at the Finnish Institute of Occupational Health (FIOH) reported that 'burnout (extreme fatigue)' is related to alcohol-dependent habits and must be dealt with using a stress management program. The USNRC (U.S. Nuclear Regulatory Commission) developed FFD (Fitness for Duty) requirements for improving task efficiency and preventing human errors. 'Managing Fatigue' in 10CFR26 presents requirements to control operator fatigue in NPPs. The committee explained that excessive fatigue is due to stressful work environments, working hours, shifts, sleep disorders, and unstable circadian rhythms. In addition, the International Labour Organization (ILO) developed and suggested a checklist to manage fatigue and job stress. Domestically, a systematic evaluation approach is presented in chapter 18, Human Factors, of the Final Safety Analysis Report (FSAR) in the licensing process. However, it focuses mostly on interface design such as the HMI (Human Machine Interface), not on individual factors. In particular, because Korea is in the process of exporting NPPs to the UAE, the development and establishment of a fatigue management technique is important and urgent in order to present the technical standard and FFD criteria to the UAE. It is also anticipated that the domestic regulatory body will apply the FFD program as a regulatory requirement, so preparation for that situation is required. In this paper, advanced research is investigated to find fatigue measurement and evaluation methods for operators in high-reliability industries. This study also reviews the NRC report and discusses the causal factors and

  15. Systematic analysis of video data from different human-robot interaction studies: a categorization of social signals during error situations.

    Science.gov (United States)

    Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred

    2015-01-01

    Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking), when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders need to consider including modules for recognition and classification of head movements to the robot input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.

  16. Nonlinear signal-based control with an error feedback action for nonlinear substructuring control

    Science.gov (United States)

    Enokida, Ryuta; Kajiwara, Koichi

    2017-01-01

    A nonlinear signal-based control (NSBC) method utilises the 'nonlinear signal' that is obtained from the outputs of a controlled system and its linear model under the same input signal. Although this method has been examined in numerical simulations of nonlinear systems, its application in physical experiments has not been studied. In this paper, we study an application of NSBC in physical experiments and incorporate an error feedback action into the method to minimise the error and enhance the feasibility in practice. Focusing on NSBC in substructure testing methods, we propose nonlinear substructuring control (NLSC), which is a more general form of the linear substructuring control (LSC) developed for dynamical substructured systems. In this study, we experimentally and numerically verified the proposed NLSC via substructuring tests on a rubber bearing used in base-isolated structures. In the examinations, NLSC succeeded in obtaining accurate results despite significant nonlinear hysteresis and unknown parameters in the substructures. The nonlinear signal feedback action in NLSC was found to be notably effective in minimising the error caused by nonlinearity or unknown properties in the controlled system. In addition, the error feedback action in NLSC was found to be essential for maintaining stability. A stability analysis based on the Nyquist criterion, which is used particularly for linear systems, was also found to be efficient for predicting the instability conditions of substructuring tests with NLSC and useful for the error feedback controller design.

  17. Taxonomy of errors in Cuban databases

    Directory of Open Access Journals (Sweden)

    Ramiro Pérez Vázquez

    2011-10-01

    Full Text Available Data cleaning, the process of detecting and correcting errors in data, is widely used in environments where information is integrated from different sources, although it is also applied to operational files or databases. The first task within the data cleaning process is error detection, so it must be established what is meant by an error. An important line of work in data cleaning is determining what constitutes an anomaly or error in the data; in general this depends on the context being analyzed and on the business rules specific to the universe of work in question. This article presents the analysis carried out on several databases and proposes a taxonomy of errors in Cuban databases, which will enable the development of tools aimed at cleaning these types of data anomalies.

  18. A Novel Error Correcting System Based on Product Codes for Future Magnetic Recording Channels

    CERN Document Server

    Van, Vo Tam

    2012-01-01

    We propose a novel construction of product codes for high-density magnetic recording based on binary low-density parity-check (LDPC) codes and the binary image of Reed-Solomon (RS) codes. Moreover, two novel algorithms are proposed to decode the codes in the presence of both AWGN errors and scattered hard errors (SHEs). Simulation results show that at a bit error rate (bER) of approximately 10^-8, our method improves the error performance by approximately 1.9 dB compared with a hard-decision decoder of RS codes of the same length and code rate. For the mixed error channel including random noise and SHEs, the signal-to-noise ratio (SNR) is set at 5 dB and 150 to 400 SHEs are randomly generated. The bit error performance of the proposed product code shows a significant improvement over that of equivalent random LDPC codes or a serial concatenation of LDPC and RS codes.

  19. A New Approach of Error Compensation on NC Machining Based on Memetic Computation

    Directory of Open Access Journals (Sweden)

    Huanglin Zeng

    2013-04-01

    Full Text Available This paper studies the application of Memetic computation, which integrates and coordinates intelligent algorithms, to the error compensation problem of a high-precision numerical control machining system. The primary focus is the development of an integrated intelligent computation approach that builds an error compensation system for a numerical control machine tool based on a dynamic feedback neural network. The error measurement points of the machine tool are optimized by applying attribute reduction of the error variables based on rough set theory. Principal component analysis is used for data compression and feature extraction to reduce the input dimension of the dynamic feedback neural network. The dynamic feedback neural network is trained with an ant colony algorithm so that the network converges to a global optimum. The positioning error caused by thermal deformation and the compensation capability were tested using industry-standard equipment and procedures. The results show that this approach can effectively improve the precision and real-time performance of error compensation on machine tools.

  20. Detection and correction of inconsistency-based errors in non-rigid registration

    Science.gov (United States)

    Gass, Tobias; Szekely, Gabor; Goksel, Orcun

    2014-03-01

    In this paper we present a novel post-processing technique to detect and correct inconsistency-based errors in non-rigid registration. While deformable registration is ubiquitous in medical image computing, assessing its quality has remained an open problem. We propose a method that predicts local registration errors of existing pairwise registrations between a set of images, while simultaneously estimating corrected registrations. In the solution the error is constrained to be small in areas of high post-registration image similarity, while local registrations are constrained to be consistent between direct and indirect registration paths. The latter is a critical property of an ideal registration process, and has been frequently used to assess the performance of registration algorithms. In our work, the consistency is used as a target criterion, for which we efficiently find a solution using a linear least-squares model on a coarse grid of registration control points. We show experimentally that the local errors estimated by our algorithm correlate strongly with true registration errors in experiments with known, dense ground-truth deformations. Additionally, the estimated corrected registrations consistently improve over the initial registrations in terms of average deformation error or TRE for different registration algorithms on both simulated and clinical data, independent of modality (MRI/CT), dimensionality (2D/3D) and employed primary registration method (demons/Markov random field).
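
A strongly simplified, one-dimensional, single-control-point analogue of consistency-based correction: each observed pairwise displacement d_ij is modeled as p_j - p_i for latent per-image offsets p_i, and solving for p in a least-squares sense yields corrected, transitively consistent displacements. The synthetic data and this scalar formulation are illustrative assumptions, not the paper's full deformable-registration model.

```python
# Hedged sketch: least-squares consistency correction of pairwise displacements.
import numpy as np

rng = np.random.default_rng(4)
n = 6                                    # number of images (assumed)
p_true = rng.normal(0.0, 5.0, n)         # latent offsets at one control point (mm)

# Observed pairwise displacements with registration error.
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
d_obs = {(i, j): p_true[j] - p_true[i] + rng.normal(0.0, 0.8) for i, j in pairs}

# Least squares: rows encode d_ij ≈ p_j - p_i; fix p_0 = 0 to remove the global shift.
A = np.zeros((len(pairs), n))
b = np.zeros(len(pairs))
for row, (i, j) in enumerate(pairs):
    A[row, i], A[row, j], b[row] = -1.0, 1.0, d_obs[(i, j)]
A, p0 = A[:, 1:], 0.0                    # drop the column of image 0 (gauge fixing)
p_est = np.concatenate([[p0], np.linalg.lstsq(A, b, rcond=None)[0]])

err_obs = np.mean([abs(d_obs[(i, j)] - (p_true[j] - p_true[i])) for i, j in pairs])
err_cor = np.mean([abs((p_est[j] - p_est[i]) - (p_true[j] - p_true[i])) for i, j in pairs])
print("mean pairwise error before correction: %.3f" % err_obs)
print("mean pairwise error after correction : %.3f" % err_cor)
```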

  1. Error Detection-Based Model to Assess Educational Outcomes in Crisis Resource Management Training: A Pilot Study.

    Science.gov (United States)

    Bouhabel, Sarah; Kay-Rivest, Emily; Nhan, Carol; Bank, Ilana; Nugus, Peter; Fisher, Rachel; Nguyen, Lily Hp

    2017-06-01

    Otolaryngology-head and neck surgery (OTL-HNS) residents face a variety of difficult, high-stress situations, which may occur early in their training. Since these events occur infrequently, simulation-based learning has become an important part of residents' training and is already well established in fields such as anesthesia and emergency medicine. In the domain of OTL-HNS, it is gradually gaining in popularity. Crisis Resource Management (CRM), a program adapted from the aviation industry, aims to improve outcomes of crisis situations by attempting to mitigate human errors. Some examples of CRM principles include cultivating situational awareness; promoting proper use of available resources; and improving rapid decision making, particularly in high-acuity, low-frequency clinical situations. Our pilot project sought to integrate CRM principles into an airway simulation course for OTL-HNS residents, but most importantly, it evaluated whether learning objectives were met, through use of a novel error identification model.

  2. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    Science.gov (United States)

    Pan, B.; Wang, B.; Lubineau, G.

    2016-07-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work.

  3. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  4. Minimising human error in malaria rapid diagnosis: clarity of written instructions and health worker performance.

    Science.gov (United States)

    Rennie, Waverly; Phetsouvanh, Rattanaxay; Lupisan, Socorro; Vanisaveth, Viengsay; Hongvanthong, Bouasy; Phompida, Samlane; Alday, Portia; Fulache, Mila; Lumagui, Richard; Jorgensen, Pernille; Bell, David; Harvey, Steven

    2007-01-01

    The usefulness of rapid diagnostic tests (RDT) in malaria case management depends on the accuracy of the diagnoses they provide. Despite their apparent simplicity, previous studies indicate that RDT accuracy is highly user-dependent. As malaria RDTs will frequently be used in remote areas with little supervision or support, minimising mistakes is crucial. This paper describes the development of new instructions (job aids) to improve health worker performance, based on observations of common errors made by remote health workers and villagers in preparing and interpreting RDTs in the Philippines and Laos. Initial preparation using the instructions provided by the manufacturer was poor, but improved significantly with the job aids (e.g. correct use of both the dipstick and cassette increased by 17% in the Philippines). However, mistakes in preparation remained commonplace, especially for dipstick RDTs, as did mistakes in interpretation of results. A short orientation on correct use and interpretation further improved accuracy, from 70% to 80%. The results indicate that apparently simple diagnostic tests can be poorly performed and interpreted, but provision of clear, simple instructions can reduce these errors. Preparation of appropriate instructions and training, as well as monitoring of user behaviour, are an essential part of rapid test implementation.

  5. Automated contouring error detection based on supervised geometric attribute distribution models for radiation therapy: A general strategy

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Hsin-Chen; Tan, Jun; Dolly, Steven; Kavanaugh, James; Harold Li, H.; Altman, Michael; Gay, Hiram; Thorstad, Wade L.; Mutic, Sasa; Li, Hua, E-mail: huli@radonc.wustl.edu [Department of Radiation Oncology, Washington University, St. Louis, Missouri 63110 (United States); Anastasio, Mark A. [Department of Biomedical Engineering, Washington University, St. Louis, Missouri 63110 (United States); Low, Daniel A. [Department of Radiation Oncology, University of California Los Angeles, Los Angeles, California 90095 (United States)

    2015-02-15

    Purpose: One of the most critical steps in radiation therapy treatment is accurate tumor and critical organ-at-risk (OAR) contouring. Both manual and automated contouring processes are prone to errors and to a large degree of inter- and intraobserver variability. These are often due to the limitations of imaging techniques in visualizing human anatomy as well as to inherent anatomical variability among individuals. Physicians/physicists have to reverify all the radiation therapy contours of every patient before using them for treatment planning, which is tedious, laborious, and still not an error-free process. In this study, the authors developed a general strategy based on novel geometric attribute distribution (GAD) models to automatically detect radiation therapy OAR contouring errors and facilitate the current clinical workflow. Methods: Considering the radiation therapy structures’ geometric attributes (centroid, volume, and shape), the spatial relationship of neighboring structures, as well as anatomical similarity of individual contours among patients, the authors established GAD models to characterize the interstructural centroid and volume variations, and the intrastructural shape variations of each individual structure. The GAD models are scalable and deformable, and constrained by their respective principal attribute variations calculated from training sets with verified OAR contours. A new iterative weighted GAD model-fitting algorithm was developed for contouring error detection. Receiver operating characteristic (ROC) analysis was employed in a unique way to optimize the model parameters to satisfy clinical requirements. A total of forty-four head-and-neck patient cases, each of which includes nine critical OAR contours, were utilized to demonstrate the proposed strategy. Twenty-nine out of these forty-four patient cases were utilized to train the inter- and intrastructural GAD models. These training data and the remaining fifteen testing data sets

  6. Error analysis for satellite gravity field determination based on two-dimensional Fourier methods

    CERN Document Server

    Cai, Lin; Hsu, Houtse; Gao, Fang; Zhu, Zhu; Luo, Jun

    2012-01-01

    The time-wise and space-wise approaches are generally applied to data processing and error analysis for satellite gravimetry missions. But both approaches, which are based on least-squares collocation, address the whole effect of measurement errors and estimate the resolution of gravity field models mainly from an indirect, numerical point of view. Moreover, the requirement for higher-accuracy, higher-resolution gravity field models makes the computation more difficult, and serious numerical instabilities can arise. In order to overcome these problems, this study focuses on constructing a direct relationship between the power spectral density of the satellite gravimetry measurements and the coefficients of the Earth's gravity potential. Based on the two-dimensional Fourier transform, the relationship is concluded analytically. By taking advantage of the analytical expression, parameter estimation and error analysis of missions become efficient and distinct. From the relationship and the simulations, it is analytically confir...

  7. Error analysis and feasibility study of dynamic stiffness matrix-based damping matrix identification

    Science.gov (United States)

    Ozgen, Gokhan O.; Kim, Jay H.

    2009-02-01

    Developing a method to formulate a damping matrix that represents the actual spatial distribution and mechanism of damping of the dynamic system has been an elusive goal. The dynamic stiffness matrix (DSM)-based damping identification method proposed by Lee and Kim is attractive and promising because it identifies the damping matrix from the measured DSM without relying on any unfounded assumptions. However, in ensuing works it was found that damping matrices identified from the method had unexpected forms and showed traces of large variance errors. The causes and possible remedies of the problem are sought for in this work. The variance and leakage errors are identified as the major sources of the problem, which are then related to system parameters through numerical and experimental simulations. An improved experimental procedure is developed to reduce the effect of these errors in order to make the DSM-based damping identification method a practical option.

  8. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving its phase based on the ER algorithm. Specifically, by monitoring errors converged in the ER algorithm, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. Then, the Fourier transform magnitude of the target patch is estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitudes and phases to reconstruct the missing areas.
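
A minimal one-dimensional sketch of an error-reduction (ER) iteration for filling a missing region, alternating between a Fourier-magnitude constraint and a known-sample constraint. In the method above the magnitude is estimated from similar known patches; here, purely for brevity, the true magnitude is assumed to be given, and the signal and gap are synthetic assumptions.

```python
# Hedged sketch: ER (phase-retrieval-style) reconstruction of a missing region.
import numpy as np

N = 128
n = np.arange(N)
x_true = np.cos(2 * np.pi * 5 * n / N) + 0.5 * np.sin(2 * np.pi * 11 * n / N)

known = np.ones(N, dtype=bool)
known[50:70] = False                      # missing area
magnitude = np.abs(np.fft.fft(x_true))    # assumed-known Fourier transform magnitude

x = np.where(known, x_true, 0.0)          # initial estimate
for _ in range(500):
    X = np.fft.fft(x)
    X = magnitude * np.exp(1j * np.angle(X))   # keep the phase, impose the magnitude
    x = np.real(np.fft.ifft(X))
    x[known] = x_true[known]                   # re-impose the known samples

print("RMS error in missing area: %.4f"
      % np.sqrt(np.mean((x[~known] - x_true[~known]) ** 2)))
```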

  9. Enriched goal-oriented error estimation for fracture problems solved by continuum-based shell extended finite element method

    Institute of Scientific and Technical Information of China (English)

    Zhi-jia LIN; Zhuo ZHUANG BU

    2014-01-01

    An enriched goal-oriented error estimation method with extended degrees of freedom is developed to estimate the error in the continuum-based shell extended finite element method. It leads to high quality local error bounds in three-dimensional fracture mechanics simulation which involves enrichments to solve the singularity in crack tip. This enriched goal-oriented error estimation gives a chance to evaluate this continuum-based shell extended finite element method simulation. With comparisons of reliability to the stress intensity factor calculation in stretching and bending, the accuracy of the continuum-based shell extended finite element method simulation is evaluated, and the reason of error is discussed.

  10. The propagation of inventory-based positional errors into statistical landslide susceptibility models

    Science.gov (United States)

    Steger, Stefan; Brenning, Alexander; Bell, Rainer; Glade, Thomas

    2016-12-01

    There is unanimous agreement that a precise spatial representation of past landslide occurrences is a prerequisite to produce high quality statistical landslide susceptibility models. Even though perfectly accurate landslide inventories rarely exist, investigations of how landslide inventory-based errors propagate into subsequent statistical landslide susceptibility models are scarce. The main objective of this research was to systematically examine whether and how inventory-based positional inaccuracies of different magnitudes influence modelled relationships, validation results, variable importance and the visual appearance of landslide susceptibility maps. The study was conducted for a landslide-prone site located in the districts of Amstetten and Waidhofen an der Ybbs, eastern Austria, where an earth-slide point inventory was available. The methodological approach comprised an artificial introduction of inventory-based positional errors into the present landslide data set and an in-depth evaluation of subsequent modelling results. Positional errors were introduced by artificially changing the original landslide position by a mean distance of 5, 10, 20, 50 and 120 m. The resulting differently precise response variables were separately used to train logistic regression models. Odds ratios of predictor variables provided insights into modelled relationships. Cross-validation and spatial cross-validation enabled an assessment of predictive performances and permutation-based variable importance. All analyses were additionally carried out with synthetically generated data sets to further verify the findings under rather controlled conditions. The results revealed that an increasing positional inventory-based error was generally related to increasing distortions of modelling and validation results. However, the findings also highlighted that interdependencies between inventory-based spatial inaccuracies and statistical landslide susceptibility models are complex. The

  11. Previous estimates of mitochondrial DNA mutation level variance did not account for sampling error: comparing the mtDNA genetic bottleneck in mice and humans.

    Science.gov (United States)

    Wonnapinij, Passorn; Chinnery, Patrick F; Samuels, David C

    2010-04-09

    In cases of inherited pathogenic mitochondrial DNA (mtDNA) mutations, a mother and her offspring generally have large and seemingly random differences in the amount of mutated mtDNA that they carry. Comparisons of measured mtDNA mutation level variance values have become an important issue in determining the mechanisms that cause these large random shifts in mutation level. These variance measurements have been made with samples of quite modest size, which should be a source of concern because higher-order statistics, such as variance, are poorly estimated from small sample sizes. We have developed an analysis of the standard error of variance from a sample of size n, and we have defined error bars for variance measurements based on this standard error. We calculate variance error bars for several published sets of measurements of mtDNA mutation level variance and show how the addition of the error bars alters the interpretation of these experimental results. We compare variance measurements from human clinical data and from mouse models and show that the mutation level variance is clearly higher in the human data than it is in the mouse models at both the primary oocyte and offspring stages of inheritance. We discuss how the standard error of variance can be used in the design of experiments measuring mtDNA mutation level variance. Our results show that variance measurements based on fewer than 20 measurements are generally unreliable and ideally more than 50 measurements are required to reliably compare variances with less than a 2-fold difference.
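
A minimal sketch of attaching error bars to a measured variance. Under an approximate normality assumption, Var(s^2) = 2*sigma^4/(n-1), so SE(s^2) ≈ s^2*sqrt(2/(n-1)); the paper derives its own standard-error expression, and this sketch only illustrates how strongly the reliability of a variance estimate depends on sample size.

```python
# Hedged sketch: standard-error-based error bars for a sample variance.
import numpy as np

def variance_with_error_bar(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    s2 = x.var(ddof=1)
    se = s2 * np.sqrt(2.0 / (n - 1))   # normal-theory approximation
    return s2, se

rng = np.random.default_rng(6)
for n in (10, 20, 50, 200):
    sample = rng.normal(0.0, 1.0, n)          # true variance = 1
    s2, se = variance_with_error_bar(sample)
    print("n=%3d  variance = %.3f +/- %.3f" % (n, s2, se))
```

Running the sketch makes the paper's point concrete: with fewer than about 20 measurements the error bar is of the same order as the variance itself, so comparisons of variances from such small samples are unreliable.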

  12. Quantum Watermarking by Frequency of Error when Observing Qubits in Dissimilar Bases

    CERN Document Server

    Worley, G G

    2004-01-01

    We present a so-called fuzzy watermarking scheme based on the relative frequency of error when observing qubits in a basis dissimilar from the one in which they were written. We then discuss possible attacks on the system and speculate on how to implement this watermarking scheme for particular kinds of messages (images, formatted text, etc.).

  13. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    Science.gov (United States)

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  14. A meshless based method for solution of integral equations: Improving the error analysis

    OpenAIRE

    Mirzaei, Davoud

    2015-01-01

    This draft concerns the error analysis of a collocation method based on the moving least squares (MLS) approximation for integral equations, which improves the results of [2] in the analysis part. This is mainly a translation from Persian of some parts of Chapter 2 of the author's PhD thesis in 2011.

  15. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  16. Likelihood-Based Cointegration Analysis in Panels of Vector Error Correction Models

    NARCIS (Netherlands)

    J.J.J. Groen (Jan); F.R. Kleibergen (Frank)

    1999-01-01

    textabstractWe propose in this paper a likelihood-based framework for cointegration analysis in panels of a fixed number of vector error correction models. Maximum likelihood estimators of the cointegrating vectors are constructed using iterated Generalized Method of Moments estimators. Using these

  17. Mean square error approximation for wavelet-based semiregular mesh compression.

    Science.gov (United States)

    Payan, Frédéric; Antonini, Marc

    2006-01-01

    The objective of this paper is to propose an efficient model-based bit allocation process optimizing the performances of a wavelet coder for semiregular meshes. More precisely, this process should compute the best quantizers for the wavelet coefficient subbands that minimize the reconstructed mean square error for one specific target bitrate. In order to design a fast and low complex allocation process, we propose an approximation of the reconstructed mean square error relative to the coding of semiregular mesh geometry. This error is expressed directly from the quantization errors of each coefficient subband. For that purpose, we have to take into account the influence of the wavelet filters on the quantized coefficients. Furthermore, we propose a specific approximation for wavelet transforms based on lifting schemes. Experimentally, we show that, in comparison with a "naive" approximation (depending on the subband levels), using the proposed approximation as distortion criterion during the model-based allocation process improves the performances of a wavelet-based coder for any model, any bitrate, and any lifting scheme.
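
A minimal sketch of classical model-based bit allocation across wavelet subbands under a high-rate distortion model D_i ≈ w_i·σ_i²·2^(−2R_i). The subband weights w_i stand in for the influence of the wavelet synthesis filters that the paper's MSE approximation accounts for; all numbers are illustrative assumptions.

```python
# Hedged sketch: model-based bit allocation minimizing an approximated MSE.
import numpy as np

def allocate_bits(sigma2, weights, sizes, target_rate):
    """Per-subband rates minimizing sum(sizes*w*sigma2*2^(-2R)) at the target
    average rate (bits per coefficient); non-negativity of rates is not enforced."""
    sigma2, weights, sizes = map(np.asarray, (sigma2, weights, sizes))
    frac = sizes / sizes.sum()
    log_terms = np.log2(weights * sigma2)
    return target_rate + 0.5 * (log_terms - np.sum(frac * log_terms))

sigma2 = np.array([40.0, 10.0, 3.0, 1.0])     # coarse-to-fine subband variances (assumed)
weights = np.array([4.0, 2.0, 1.0, 0.5])      # synthesis-filter weighting (assumed)
sizes = np.array([1, 3, 12, 48]) * 1000       # coefficients per subband (assumed)
rates = allocate_bits(sigma2, weights, sizes, target_rate=2.0)
for i, r in enumerate(rates):
    print("subband %d: %.2f bits/coefficient" % (i, r))
```

The quantizer step for each subband then follows from its allocated rate; a better distortion model (the paper's contribution) changes the weights, and hence the allocation, without changing this overall procedure.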

  18. On the Security of Digital Signature Schemes Based on Error-Correcting Codes

    NARCIS (Netherlands)

    Xu, Sheng-bo; Doumen, J.M.; van Tilborg, Henk

    We discuss the security of digital signature schemes based on error-correcting codes. Several attacks to the Xinmei scheme are surveyed, and some reasons given to explain why the Xinmei scheme failed, such as the linearity of the signature and the redundancy of public keys. Another weakness is found

  19. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    Science.gov (United States)

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article was written based on the results of a study evaluating students' errors in solving permutation and combination problems in terms of the problem solving steps according to Polya. Twenty-five students were asked to do four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  20. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  1. A Preliminary Study on the Measures to Assess the Organizational Safety: The Cultural Impact on Human Error Potential

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yong Hee; Lee, Yong Hee [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2011-10-15

    The Fukushima I nuclear accident following the Tohoku earthquake and tsunami on 11 March 2011 occurred twelve years after the JCO accident, which was caused by an error made by JCO employees. These accidents, along with the Chernobyl accident, are associated with characteristic problems of various organizations; they caused severe social and economic disruption and have had significant environmental and health impacts. Cultural problems leading to human error occur for various reasons, and different actions are needed to prevent different errors. Unfortunately, much of the research on organizations and human error has produced widely varying results that call for different approaches. In other words, we have to find more practical solutions from various research efforts for nuclear safety and take a systematic approach to the organizational deficiencies that cause human error. This paper reviews Hofstede's criteria, the IAEA safety culture, the safety areas of the periodic safety review (PSR), teamwork and performance, and an evaluation of the HANARO safety culture to verify the measures used to assess organizational safety

  2. Lexis in Chinese-English Translation of Drug Package Inserts: Corpus-based Error Analysis and Its Translation Strategies

    OpenAIRE

    Ying, Lin; Yumei, Zhou

    2010-01-01

    Error analysis (EA) has been broadly applied to the researches of writing, speaking, second language acquisition (SLA) and translation. This study was carried out based on Carl James’ error taxonomy to investigate the distribution of lexical errors in Chinese-English (C-E) translation of drug package inserts (DPIs)(1), explore the underlying causes and propose some translation strategies for correction and reduction of lexical errors in DPIs. A translation corpus consisting of 25 DPIs transla...

  3. [Analysis, identification and correction of some errors of model refseqs appeared in NCBI Human Gene Database by in silico cloning and experimental verification of novel human genes].

    Science.gov (United States)

    Zhang, De-Li; Ji, Liang; Li, Yan-Da

    2004-05-01

    Through homology BLAST of our cloned genes against the non-redundant (nr) database, we found that human genome coding regions annotated by computer in the public domain contain many different kinds of errors, including insertions, deletions or mutations of one base pair or of a segment at the cDNA level, or various permutations and combinations of these errors. We used three main means for validating and identifying errors in the model genes appearing in the NCBI GENOME ANNOTATION PROJECT REFSEQs: (1) evaluating the degree of support from human EST clustering and BLAST against the draft human genome; (2) preparing chromosomal mappings of our verified genes and analyzing the genomic organization of the genes, where all exon/intron boundaries should be consistent with the GT/AG rule and consensus sequences surrounding the splice boundaries should be found; and (3) experimental verification of the in silico cloned genes by RT-PCR and further by cDNA sequencing. We then used three further means as reference: (1) web searching or in silico cloning of the genes of different species, especially mouse and rat homologous genes, thereby judging the existence of the gene by ontology; (2) using the released genes in the public domain as a standard, which should be highly homologous to our verified genes, especially the released human genes appearing in the NCBI GENOME ANNOTATION PROJECT REFSEQs, we tried to clone a highly homologous complete gene similar to each released gene according to the strategy developed in this paper; if we could not obtain it, our verified gene may be correct and the released gene in the public domain may be wrong; and (3) to find more evidence, we verified our cloned genes by RT-PCR or hybridization techniques. Here we list some errors we found in the NCBI GENOME ANNOTATION PROJECT REFSEQs: (1) insertion of a base into the ORF by mistake, which causes a frame shift of the coded amino acids. In detail, a base in the ORF of a gene is a redundant insertion, which causes a reading frame

  4. Approximation-error-ADP-based optimal tracking control for chaotic systems with convergence proof

    Science.gov (United States)

    Song, Rui-Zhuo; Xiao, Wen-Dong; Sun, Chang-Yin; Wei, Qing-Lai

    2013-09-01

    In this paper, an optimal tracking control scheme is proposed for a class of discrete-time chaotic systems using the approximation-error-based adaptive dynamic programming (ADP) algorithm. Via the system transformation, the optimal tracking problem is transformed into an optimal regulation problem, and then the novel optimal tracking control method is proposed. It is shown that for the iterative ADP algorithm with finite approximation error, the iterative performance index functions can converge to a finite neighborhood of the greatest lower bound of all performance index functions under some convergence conditions. Two examples are given to demonstrate the validity of the proposed optimal tracking control scheme for chaotic systems.

  5. An Investigation into Error Source Identification of Machine Tools Based on Time-Frequency Feature Extraction

    Directory of Open Access Journals (Sweden)

    Dongju Chen

    2016-01-01

    Full Text Available This paper presents a new method to identify the main errors of a machine tool in the time-frequency domain. The low- and high-frequency signals of the workpiece surface are decomposed using the Daubechies wavelet transform. With power spectral density analysis, the main features of the high-frequency signal, corresponding to the imbalance of the spindle system, are extracted from the surface topography of the workpiece in the frequency domain. With the cross-correlation analysis method, the relationship between the guideway error of the machine tool and the low-frequency signal of the surface topography is calculated in the time domain.
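
A minimal sketch of the processing chain described above: a Daubechies wavelet decomposition separates low- and high-frequency content of a measured surface profile, the high-frequency part is examined with a power spectral density, and the low-frequency part is related to a guideway error signal by correlation. The synthetic profile, sampling rate and wavelet settings are illustrative assumptions.

```python
# Hedged sketch: wavelet separation + PSD + cross-correlation of a surface profile.
import numpy as np
import pywt
from scipy.signal import welch

rng = np.random.default_rng(7)
n, fs = 4096, 1000.0                       # samples and sampling rate along the feed axis
x = np.arange(n) / fs

guideway_error = 2.0 * np.sin(2 * np.pi * 0.5 * x)          # slow straightness error
spindle_imbalance = 0.3 * np.sin(2 * np.pi * 120.0 * x)      # once-per-revolution ripple
profile = guideway_error + spindle_imbalance + 0.05 * rng.standard_normal(n)

# Daubechies decomposition: approximation = low-frequency, details = high-frequency.
coeffs = pywt.wavedec(profile, "db4", level=5)
low = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]], "db4")[:n]
high = profile - low

# PSD of the high-frequency part exposes the spindle-imbalance component.
f, pxx = welch(high, fs=fs, nperseg=1024)
print("dominant high-frequency component: %.1f Hz" % f[np.argmax(pxx)])

# Correlation of the low-frequency part with the guideway error signal.
rho = np.corrcoef(low, guideway_error)[0, 1]
print("correlation of low-frequency profile with guideway error: %.3f" % rho)
```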

  6. Differential Laser Doppler based Non-Contact Sensor for Dimensional Inspection with Error Propagation Evaluation

    Directory of Open Access Journals (Sweden)

    Ketsaya Vacharanukul

    2006-06-01

    Full Text Available To achieve dynamic error compensation in CNC machine tools, a non-contact laser probe capable of dimensional measurement of a workpiece while it is being machined has been developed and is presented in this paper. The measurements are automatically fed back to the machine controller for intelligent error compensation. Based on a well resolved laser Doppler technique and real time data acquisition, the probe delivers a very promising dimensional accuracy of a few microns over a range of 100 mm. The developed optical measuring apparatus employs a differential laser Doppler arrangement allowing acquisition of information from the workpiece surface. In addition, the measurements are traceable to standards of frequency, allowing higher precision.

  7. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    Science.gov (United States)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
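
A minimal sketch of computing a finite-time Lyapunov exponent (FTLE) field, the quantity whose forecast sensitivity is studied above, for the standard analytical double-gyre test flow. Forecast wind fields would replace the analytical velocity here; the grid size, integration time and flow parameters are illustrative assumptions.

```python
# Hedged sketch: FTLE field of the double-gyre flow from the flow-map gradient.
import numpy as np

A, eps, omega = 0.1, 0.25, 2 * np.pi / 10.0   # double-gyre parameters (assumed)

def velocity(x, y, t):
    a, b = eps * np.sin(omega * t), 1.0 - 2.0 * eps * np.sin(omega * t)
    f = a * x**2 + b * x
    dfdx = 2.0 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

def flow_map(x0, y0, t0, T, steps=200):
    """Advect a grid of particles from t0 to t0+T with RK4 integration."""
    x, y, t, dt = x0.copy(), y0.copy(), t0, T / steps
    for _ in range(steps):
        k1 = velocity(x, y, t)
        k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], t + 0.5 * dt)
        k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], t + 0.5 * dt)
        k4 = velocity(x + dt * k3[0], y + dt * k3[1], t + dt)
        x = x + dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y = y + dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += dt
    return x, y

nx, ny, T = 201, 101, 15.0
X, Y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
Xf, Yf = flow_map(X, Y, t0=0.0, T=T)

# FTLE from the largest eigenvalue of the Cauchy-Green tensor of the flow map.
dx, dy = 2.0 / (nx - 1), 1.0 / (ny - 1)
dXdy, dXdx = np.gradient(Xf, dy, dx)   # gradients along y (axis 0) and x (axis 1)
dYdy, dYdx = np.gradient(Yf, dy, dx)
ftle = np.zeros_like(Xf)
for i in range(ny):
    for j in range(nx):
        F = np.array([[dXdx[i, j], dXdy[i, j]], [dYdx[i, j], dYdy[i, j]]])
        lam = np.linalg.eigvalsh(F.T @ F).max()
        ftle[i, j] = np.log(np.sqrt(lam)) / abs(T)
print("max forward-time FTLE: %.3f" % ftle.max())
```

Ridges of the resulting FTLE field mark the repelling (forward-time) LCSs; integrating backward in time gives the attracting features that the study finds to be more robust to forecast error.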

  8. Block Recovery Rate-Based Unequal Error Protection for Three-Screen TV

    Directory of Open Access Journals (Sweden)

    Hojin Ha

    2017-02-01

    Full Text Available This paper describes a three-screen television system using block recovery rate (BRR)-based unequal error protection (UEP). The proposed in-home wireless network uses scalable video coding (SVC) and UEP with forward error correction (FEC) to maximize the quality of service (QoS) over error-prone wireless networks. For efficient FEC packet assignment, this paper proposes a simple and efficient performance metric, the BRR, defined as the recovery rate of the temporal and quality layers from FEC assignment, obtained by analyzing the hierarchical prediction structure including the current packet loss. It also explains the SVC layer switching scheme according to network conditions such as packet loss rate (PLR) and available bandwidth (ABW). In the experiments conducted, gains in video quality with the proposed UEP scheme vary from 1 to 3 dB in Y-peak signal-to-noise ratio (PSNR), with corresponding subjective video quality improvements.

  9. Spatial multi-level interacting particle simulations and information theory-based error quantification

    CERN Document Server

    Kalligiannaki, Evangelia; Plechac, Petr

    2012-01-01

    We propose a hierarchy of multi-level kinetic Monte Carlo methods for sampling high-dimensional, stochastic lattice particle dynamics with complex interactions. The method is based on the efficient coupling of different spatial resolution levels, taking advantage of the low sampling cost in a coarse space and by developing local reconstruction strategies from coarse-grained dynamics. Microscopic reconstruction corrects possibly significant errors introduced through coarse-graining, leading to the controlled-error approximation of the sampled stochastic process. In this manner, the proposed multi-level algorithm overcomes known shortcomings of coarse-graining of particle systems with complex interactions such as combined long and short-range particle interactions and/or complex lattice geometries. Specifically, we provide error analysis for the approximation of long-time stationary dynamics in terms of relative entropy and prove that information loss in the multi-level methods is growing linearly in time, whic...

  10. Stable 1-Norm Error Minimization Based Linear Predictors for Speech Modeling

    DEFF Research Database (Denmark)

    Giacobello, Daniele; Christensen, Mads Græsbøll; Jensen, Tobias Lindstrøm;

    2014-01-01

    In linear prediction of speech, the 1-norm error minimization criterion has been shown to provide a valid alternative to the 2-norm minimization criterion. However, unlike 2-norm minimization, 1-norm minimization does not guarantee the stability of the corresponding all-pole filter and can generate...... of the shift operator associated with the particular prediction problem considered. The second method uses the alternative Cauchy bound to impose a convex constraint on the predictor in the 1-norm error minimization. These methods are compared with two existing methods: the Burg method, based on the 1-norm...... minimization of the forward and backward prediction error, and the iteratively reweighted 2-norm minimization known to converge to the 1-norm minimization with an appropriate selection of weights. The evaluation gives proof of the effectiveness of the new methods, performing as well as unconstrained 1-norm...
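
A minimal sketch of 1-norm-error linear prediction obtained via iteratively reweighted 2-norm minimization, one of the comparison methods named above. The prediction order, reweighting scheme and test signal are illustrative assumptions, and no stability constraint (the paper's contribution) is enforced here.

```python
# Hedged sketch: 1-norm linear prediction by iteratively reweighted least squares.
import numpy as np

def lp_1norm_irls(x, order=10, iters=30, eps=1e-8):
    """Estimate LP coefficients a minimizing sum |x[n] - sum_k a[k] x[n-k]|."""
    X = np.column_stack([x[order - k - 1:len(x) - k - 1] for k in range(order)])
    y = x[order:]
    w = np.ones_like(y)
    for _ in range(iters):
        Xw = X * w[:, None]
        a = np.linalg.lstsq(Xw, y * w, rcond=None)[0]   # weighted least squares
        r = y - X @ a
        w = 1.0 / np.sqrt(np.abs(r) + eps)               # reweight toward the 1-norm
    return a

rng = np.random.default_rng(8)
# Synthetic voiced-speech-like signal: damped resonance plus sparse excitation spikes.
n = 2000
excitation = np.zeros(n)
excitation[::80] = 1.0
x = np.zeros(n)
for k in range(2, n):
    x[k] = 1.8 * np.cos(2 * np.pi * 0.05) * x[k - 1] - 0.81 * x[k - 2] + excitation[k]
x += 0.01 * rng.standard_normal(n)

a = lp_1norm_irls(x, order=4)
X4 = np.column_stack([x[4 - k - 1:n - k - 1] for k in range(4)])
residual = x[4:] - X4 @ a
print("mean absolute prediction error: %.4f" % np.mean(np.abs(residual)))
```

The sparse, spiky residual that the 1-norm criterion produces on voiced speech is exactly why the unconstrained solution can place poles outside the unit circle, motivating the constrained formulations proposed above.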

  11. Asynchronous error-correcting secure communication scheme based on fractional-order shifting chaotic system

    Science.gov (United States)

    Chao, Luo

    2015-11-01

    In this paper, a novel digital secure communication scheme is proposed. Unlike the usual secure communication schemes based on chaotic synchronization, the proposed scheme employs asynchronous communication, which avoids the weakness of synchronous systems of being susceptible to environmental interference. Moreover, with regard to transmission errors and data loss in the communication process, the proposed scheme is able to perform error checking and error correction in real time. To guarantee security, a fractional-order complex chaotic system with shifting of the order is utilized to modulate the transmitted signal, which has high nonlinearity and complexity in both the frequency and time domains. The corresponding numerical simulations demonstrate the effectiveness and feasibility of the scheme.

  12. Thermal Error Modeling of the CNC Machine Tool Based on Data Fusion Method of Kalman Filter

    Directory of Open Access Journals (Sweden)

    Haitong Wang

    2017-01-01

    Full Text Available This paper presents a modeling methodology for the thermal error of a machine tool. The temperatures predicted by a modified lumped-mass method and the temperatures measured by sensors are fused by the Kalman filter data fusion method. The fused temperatures, instead of the measured temperatures used in traditional methods, are applied to predict the thermal error. A genetic algorithm is implemented to optimize the parameters in the modified lumped-mass method and the covariances in the Kalman filter. The simulations indicate that the proposed method performs much better than the traditional MRA method in terms of prediction accuracy and robustness under a variety of operating conditions. A compensation system is developed based on the Siemens 840D control system. The compensation experiment validates that the thermal error after compensation is reduced dramatically.
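    A minimal scalar Kalman-filter sketch of the fusion step described above, treating the lumped-mass prediction as the a priori temperature and the sensor reading as the observation. The noise covariances q and r are illustrative stand-ins for the values the paper tunes with a genetic algorithm.

```python
import numpy as np

def fuse_temperatures(t_model, t_measured, q=0.05, r=0.5, p0=1.0):
    """Scalar Kalman filter: the lumped-mass model output is the state
    prediction, the sensor reading is the observation; returns fused values."""
    p = p0
    fused = []
    for pred, meas in zip(t_model, t_measured):
        x, p = pred, p + q                # prediction step: model supplies a priori temperature
        k = p / (p + r)                   # Kalman gain
        x = x + k * (meas - x)            # update step: correct with the measurement
        p = (1.0 - k) * p
        fused.append(x)
    return np.array(fused)

# Toy usage with a slightly biased model and a noisy sensor.
true = np.linspace(20.0, 35.0, 50)
model = true + 0.8                        # systematic model error
sensor = true + np.random.default_rng(1).normal(0.0, 0.5, 50)
print(fuse_temperatures(model, sensor)[:5])
```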

  13. Treatment of multiple network parameter errors through a genetic-based algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Stacchini de Souza, Julio C.; Do Coutto Filho, Milton B.; Meza, Edwin B. Mitacc [Department of Electrical Engineering, Institute of Computing, Fluminense Federal University, Rua Passo da Patria, 156 - Sao Domingos, 24210-240 Niteroi, Rio de Janeiro (Brazil)

    2009-11-15

    This paper proposes a genetic algorithm-based methodology for network parameter estimation and correction. Network parameter errors may come from many different sources, such as: imprecise data provided by manufacturers, poor estimation of transmission lines lengths and changes in transmission network design which are not adequately updated in the corresponding database. Network parameter data are employed by almost all power system analysis tools, from real time monitoring to long-term planning. The presence of parameter errors contaminates the results obtained by these tools and compromises decision-making processes. To get rid of single or multiple network parameter errors, a methodology that combines genetic algorithms and power system state estimation is proposed. Tests with the IEEE 14-bus system and a real Brazilian system are performed to illustrate the proposed method. (author)

  14. Angular discretization errors in transport theory; An information-based approach

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, P. (Texas A and M Univ., College Station, TX (United States). Dept. of Computer Science); Yu, F. (Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy)

    1992-11-01

    Elements of the information-based complexity theory are computed for several types of information and associated algorithms for angular approximations in the setting of a one-dimensional model problem. For point-evaluation information, the local and global radii of information are computed, a (trivial) optimal algorithm is determined, and the local and global error of a discrete ordinates algorithm are shown to be infinite. For average cone-integral information, the local and global radii of information are computed, and the local and global error tends to zero as the underlying partition is indefinitely refined. A central algorithm for such information and an optimal partition (of given cardinality) are described. It is further shown that the analytic first-collision source method has zero error (for the purely absorbing model problem). Implications of the restricted problem domains suitable for the various types of information are discussed.

  15. Objective Error Criterion for Evaluation of Mapping Accuracy Based on Sensor Time-of-Flight Measurements

    Directory of Open Access Journals (Sweden)

    Billur Barshan

    2008-12-01

    Full Text Available An objective error criterion is proposed for evaluating the accuracy of maps of unknown environments acquired by making range measurements with different sensing modalities and processing them with different techniques. The criterion can also be used for the assessment of goodness of fit of curves or shapes fitted to map points. A demonstrative example from ultrasonic mapping is given based on experimentally acquired time-of-flight measurements and compared with a very accurate laser map, considered as an absolute reference. The results of the proposed criterion are compared with the Hausdorff metric and the median error criterion results. The error criterion is sufficiently general and flexible that it can be applied to discrete point maps acquired with other mapping techniques and sensing modalities as well.
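    A small sketch of two of the comparison quantities mentioned in this record, the Hausdorff metric and a median point-to-reference error, for an acquired point map versus a reference map. The point sets below are synthetic and only illustrate the computation.

```python
import numpy as np
from scipy.spatial.distance import cdist, directed_hausdorff

def map_errors(map_pts, ref_pts):
    """Compare an acquired 2-D point map against a reference map."""
    d = cdist(map_pts, ref_pts)            # all pairwise distances
    nearest = d.min(axis=1)                # each map point to its closest reference point
    median_error = np.median(nearest)
    hausdorff = max(directed_hausdorff(map_pts, ref_pts)[0],
                    directed_hausdorff(ref_pts, map_pts)[0])
    return median_error, hausdorff

# Example: an "ultrasonic" map as a noisy, sparse copy of a laser reference map.
rng = np.random.default_rng(0)
ref = np.column_stack([np.linspace(0, 5, 200), np.sin(np.linspace(0, 5, 200))])
acquired = ref[::4] + rng.normal(0, 0.02, (50, 2))
print(map_errors(acquired, ref))
```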

  16. A population-based survey of the prevalence of refractive error in Malawi.

    Science.gov (United States)

    Lewallen, S; Lowdon, R; Courtright, P; Mehl, G L

    1995-12-01

    Refractive errors, particularly myopia, are a common problem in industrialized countries, but the impression exists that myopia may be relatively uncommon in non-industrialized societies. We conducted a population-based survey of refractive error in two groups of Malawians: a group of rural agricultural workers (n = 510) and a group of students at an urban teachers' college (n = 534). The overall prevalence of myopia was low; 2.5% (95% confidence interval 1.3%, 3.7%) of participants had an error of -0.5 D or greater. The mean refractive error (right eye) in the urban student group was +0.52 D compared to +0.62 D among the rural agricultural workers and the excess myopia was accounted for by significant myopia (> or = -0.75 D) in a few individuals, rather than an overall shift towards myopia within the urban student group. Among the rural agricultural workers, literacy predicted refractive error (right eye), with a mean of +0.59 D in the rural literate compared to +0.67 D in the rural illiterate. These findings support the notion that myopia is uncommon in non-industrialized societies and that it is associated with increased literacy but we have not identified specific risk factors within this group to predict the occurrence of significant myopia. In settings such as Malawi, refractive services should be targeted to urban centers, where more educated populations are likely to be found.

  17. Conditional probability distribution (CPD) method in temperature based death time estimation: Error propagation analysis.

    Science.gov (United States)

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2014-05-01

    Bayesian estimation applied to temperature based death time estimation was recently introduced as the conditional probability distribution or CPD-method by Biermann and Potente. The CPD-method is useful if there is external information that sets the boundaries of the true death time interval (victim last seen alive and found dead). CPD allows computation of probabilities for small time intervals of interest (e.g. no-alibi intervals of suspects) within the large true death time interval. In the light of the importance of the CPD for conviction or acquittal of suspects, the present study identifies a potential error source. Deviations in death time estimates will cause errors in the CPD-computed probabilities. We derive formulae to quantify the CPD error as a function of the input error. Moreover, we observed the paradox that in cases in which the small no-alibi time interval is located at the boundary of the true death time interval, adjacent to the erroneous death time estimate, the CPD-computed probabilities for that small no-alibi interval will increase with increasing input deviation; otherwise the CPD-computed probabilities will decrease. We therefore advise not to use CPD if there is an indication of an error or a contra-empirical deviation in the death time estimates, especially if the death time estimates fall outside the true death time interval, even if the 95% confidence intervals of the estimate still overlap the true death time interval.

  18. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Full Text Available Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of accuracy compensation implementation are closely related to the choice of sampling points. Therefore, based on the error-similarity error compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps of a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can be used to effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.

  19. Multi-bit soft error tolerable L1 data cache based on characteristic of data value

    Institute of Scientific and Technical Information of China (English)

    WANG Dang-hui; LIU He-peng; CHEN Yi-ran

    2015-01-01

    Due to continuously decreasing feature size and increasing device density, on-chip caches have become susceptible to single event upsets, which result in multi-bit soft errors. The increasing rate of multi-bit errors could result in a high risk of data corruption and even application crashes. Traditionally, L1 D-caches have been protected from soft errors using simple parity to detect errors, with recovery by reading correct data from the L2 cache, which induces a performance penalty. This work proposes to exploit redundancy based on the characteristics of data values. In the case of a small data value, the replica is stored in the upper half of the word. The replica of a big data value is stored in a dedicated cache line, which sacrifices some capacity of the data cache. Experimental results show that the reliability of the L1 D-cache is improved by 65% at the cost of 1% in performance.

  20. Error Estimations in an Approximation on a Compact Interval with a Wavelet Bases

    Directory of Open Access Journals (Sweden)

    Dr. Marco Schuchmann

    2013-11-01

    Full Text Available In an approximation with a wavelet basis, there is in practice not only an error when the function y is not in Vj; a second error arises because not all basis functions are used. If the wavelet has compact support, using only part of the basis functions introduces no additional error. If an approximation is needed on a compact interval I (which is possible even if y is not square integrable on R, because in that case it need only be square integrable on I), calculating an orthogonal projection of 1I y onto Vj can lead to poor approximations. Much better approximations can be obtained by applying a least squares approximation with points in I. We show that this approximation can be much better than an orthogonal projection of y or 1I y onto Vj. With the Shannon wavelet, which has no compact support, many simulations showed that a least squares approximation can lead to much better results than with well-known compactly supported wavelets. In this article we therefore derive an error estimate for the Shannon wavelet when not all basis coefficients are used.

  1. XCO2 Retrieval Errors from a PCA-based Approach to Fast Radiative Transfer

    Science.gov (United States)

    Somkuti, Peter; Boesch, Hartmut; Natraj, Vijay; Kopparla, Pushkar

    2017-04-01

    Multiple-scattering radiative transfer (RT) calculations are an integral part of forward models used to infer greenhouse gas concentrations in the shortwave-infrared spectral range from satellite missions such as GOSAT or OCO-2. Such calculations are, however, computationally expensive and, combined with the recent growth in data volume, necessitate the use of acceleration methods in order to make retrievals feasible on an operational level. The principal component analysis (PCA)-based approach to fast radiative transfer introduced by Natraj et al. 2005 is a spectral binning method, in which the many line-by-line monochromatic calculations are replaced by a small set of representative ones. From the PCA performed on the optical layer properties for a scene-dependent atmosphere, the results of the representative calculations are mapped onto all spectral points in the given band. Since this RT scheme is an approximation, the computed top-of-atmosphere radiances exhibit errors compared to the "full" line-by-line calculation. These errors ultimately propagate into the final retrieved greenhouse gas concentrations, and their magnitude depends on scene-dependent parameters such as aerosol loadings or viewing geometry. An advantage of this method is the ability to choose the degree of accuracy by increasing or decreasing the number of empirical orthogonal functions used for the reconstruction of the radiances. We have performed a large set of global simulations based on real GOSAT scenes and assess the retrieval errors induced by the fast RT approximation through linear error analysis. We find that across a wide range of geophysical parameters, the errors are for the most part smaller than ± 0.2 ppm and ± 0.06 ppm (out of roughly 400 ppm) for ocean and land scenes respectively. A fast RT scheme that produces low errors is important, since regional biases in XCO2 even in the low sub-ppm range can cause significant changes in carbon fluxes obtained from inversions.

  2. Sensitivity analysis of FBMC-based multi-cellular networks to synchronization errors and HPA nonlinearities

    Science.gov (United States)

    Elmaroud, Brahim; Faqihi, Ahmed; Aboutajdine, Driss

    2017-01-01

    In this paper, we study the performance of asynchronous and nonlinear FBMC-based multi-cellular networks. The considered system includes a reference mobile perfectly synchronized with its reference base station (BS) and K interfering BSs. Both synchronization errors and high-power amplifier (HPA) distortions will be considered and a theoretical analysis of the interference signal will be conducted. On the basis of this analysis, we will derive an accurate expression of signal-to-noise-plus-interference ratio (SINR) and bit error rate (BER) in the presence of a frequency-selective channel. In order to reduce the computational complexity of the BER expression, we applied an interesting lemma based on the moment generating function of the interference power. Finally, the proposed model is evaluated through computer simulations which show a high sensitivity of the asynchronous FBMC-based multi-cellular network to HPA nonlinear distortions.

  3. Modified Redundancy based Technique—a New Approach to Combat Error Propagation Effect of AES

    Science.gov (United States)

    Sarkar, B.; Bhunia, C. T.; Maulik, U.

    2012-06-01

    Advanced encryption standard (AES) is a great research challenge. It has been developed to replace the data encryption standard (DES). AES suffers from a major limitation: the error propagation effect. To tackle this limitation, two methods are available. One is the redundancy-based technique and the other is the bit-based parity technique. The first has the significant advantage over the second of correcting any error on a definite term, but at the cost of a higher level of overhead and hence a lower processing speed. In this paper, a new approach based on the redundancy-based technique is proposed that would certainly speed up the process of reliable encryption and hence secured communication.

  4. Methodical errors of measurement of the human body tissues electrical parameters

    OpenAIRE

    Antoniuk, O.; Pokhodylo, Y.

    2015-01-01

    Sources of methodical measurement errors of immittance parameters of biological tissues are described. Modeling of the measurement errors of the RC parameters of biological tissue equivalent circuits over the frequency range is analyzed. Recommendations on the choice of test signal frequency for measurement of these elements are provided.

  5. A Histogram-Based Static-Error Correction Technique for Flash ADCs

    Institute of Scientific and Technical Information of China (English)

    Armin Jalili; J Jacob Wikner; Sayed Masoud Sayedi; Rasoul Dehghani

    2011-01-01

    High-speed, high-accuracy data converters are attractive for use in most RF applications. Such converters allow direct conversion to occur between the digital baseband and the antenna. However, high speed and high accuracy make the analog components in a converter more complex, and this complexity causes more power to be dissipated than if a traditional approach were taken. A static calibration technique for flash analog-to-digital converters (ADCs) is discussed in this paper. The calibration is based on histogram test methods, and equivalent errors in the flash ADC comparators are estimated in the digital domain without any significant changes being made to the ADC comparators. In the trimming process, reference voltages are adjusted to compensate for static errors. Behavioral-level simulations of a moderate-resolution 8-bit flash ADC show that, for typical errors, ADC performance is considerably improved by the proposed technique. As a result of calibration, the differential nonlinearities (DNLs) are reduced on average from 4 LSB to 0.5 LSB, and the integral nonlinearities (INLs) are reduced on average from 4.2 LSB to 0.35 LSB. Implementation issues for this proposed technique are discussed in our subsequent paper, "A Histogram-Based Static-Error Correction Technique for Flash ADCs: Implementation Aspects."
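    A sketch of the code-density (histogram) test that underlies this kind of calibration: with an input that uniformly covers the full scale, per-code hit counts yield the DNL and, by accumulation, the INL. The resolution and record length below are illustrative, and the example feeds an ideal quantizer rather than a real ADC.

```python
import numpy as np

def dnl_inl_from_histogram(codes, n_bits=8):
    """Estimate DNL and INL (in LSB) from a code-density histogram,
    assuming the input uniformly covers the converter's full scale."""
    n_codes = 2 ** n_bits
    hist = np.bincount(codes, minlength=n_codes).astype(float)
    core = hist[1:-1]                     # end codes absorb everything beyond full scale
    ideal = core.sum() / len(core)        # expected hits per code for an ideal ADC
    dnl = core / ideal - 1.0
    inl = np.cumsum(dnl)
    return dnl, inl

# Example: feed a uniform ramp through an ideal 8-bit quantizer.
ramp = np.linspace(0.0, 1.0, 200000)
codes = np.clip((ramp * 256).astype(int), 0, 255)
dnl, inl = dnl_inl_from_histogram(codes)
print(dnl.max(), inl.max())
```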

  6. Probability-Based Diagnostic Imaging Technique Using Error Functions for Active Structural Health Monitoring

    Directory of Open Access Journals (Sweden)

    Rahim Gorgin,

    2014-07-01

    Full Text Available This study presents a novel probability-based diagnostic imaging (PDI) technique using error functions for active structural health monitoring (SHM). To achieve this, first the changes between the baseline and current signals of each sensing path are measured, and by taking the root mean square of these changes, the energy of the scattered signal at different times can be calculated. Then, for different pairs of signal acquisition paths, an error function based on the energy of the scattered signals is introduced. Finally, the resultant error function is fused into the final estimate of the probability of damage presence in the monitoring area. As for applications, the developed method was applied to various damage identification cases, including cracks located in regions within an active sensor network with different configurations (pulse-echo and pitch-catch), and holes located in regions outside the active sensor network with the pitch-catch configuration. The results obtained using experimental Lamb wave signals at different central frequencies corroborated that the developed PDI technique using error functions is capable of monitoring structural damage, regardless of its shape, size and location. The developed method does not need direct interpretation of overlaid and dispersed Lamb wave components for damage identification and can monitor damage located anywhere in the structure. These advantages qualify the presented PDI method for online structural health monitoring.

  7. Age-related changes in error processing in young children: A school-based investigation

    Directory of Open Access Journals (Sweden)

    Jennie K. Grammer

    2014-07-01

    Full Text Available Growth in executive functioning (EF) skills plays a role in children's academic success, and the transition to elementary school is an important time for the development of these abilities. Despite this, evidence concerning the development of the ERP components linked to EF, including the error-related negativity (ERN) and the error positivity (Pe), over this period is inconclusive. Data were recorded in a school setting from 3- to 7-year-old children (N = 96, mean age = 5 years 11 months) as they performed a Go/No-Go task. Results revealed the presence of the ERN and Pe on error relative to correct trials at all age levels. Older children showed increased response inhibition as evidenced by faster, more accurate responses. Although developmental changes in the ERN were not identified, the Pe increased with age. In addition, girls made fewer mistakes and showed elevated Pe amplitudes relative to boys. Based on a representative school-based sample, findings indicate that the ERN is present in children as young as 3, and that development can be seen in the Pe between ages 3 and 7. Results varied as a function of gender, providing insight into the range of factors associated with developmental changes in the complex relations between behavioral and electrophysiological measures of error processing.

  8. A Corpus-based Study on High Frequency Grammatical Errors in the Written Production by Chinese EFL Learners

    Institute of Scientific and Technical Information of China (English)

    陈思; 蔡丽慧

    2014-01-01

    The present study is a corpus-based error analysis of IL written production by Chinese EFL learners. Based on the Interlanguage theory, the research chooses the tagged WECCL as the database, and makes an overall investigation of the high-frequency grammatical errors committed by Chinese EFL learners.

  9. Influence of measurement errors on temperature-based death time determination.

    Science.gov (United States)

    Hubig, Michael; Muggenthaler, Holger; Mall, Gita

    2011-07-01

    Temperature-based methods represent essential tools in forensic death time determination. Empirical double exponential models have gained wide acceptance because they are highly flexible and simple to handle. The most established model commonly used in forensic practice was developed by Henssge. It contains three independent variables: the body mass, the environmental temperature, and the initial body core temperature. The present study investigates the influence of variations in the input data (environmental temperature, initial body core temperature, core temperature, time) on the standard deviation of the model-based estimates of the time since death. Two different approaches were used for calculating the standard deviation: the law of error propagation and the Monte Carlo method. Errors in environmental temperature measurements as well as deviations of the initial rectal temperature were identified as major sources of inaccuracies in model based death time estimation.
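    The Monte Carlo approach named in this record can be sketched generically: perturb the measured inputs, invert a cooling curve for each draw, and summarise the spread of the resulting death-time estimates. The double-exponential curve and all constants below are illustrative placeholders, not Henssge's published parameterisation, and the uncertainty levels are assumptions.

```python
import numpy as np
from scipy.optimize import brentq

def cooling_model(t, t_env, t0=37.2, k=0.08, p=5.0):
    """Illustrative double-exponential cooling curve (NOT the published Henssge
    parameterisation): body core temperature after t hours."""
    a = p / (p - 1.0)
    return t_env + (t0 - t_env) * (a * np.exp(-k * t) - (a - 1.0) * np.exp(-p * k * t))

def death_time_mc(t_rect_meas, t_env_meas, sd_rect=0.1, sd_env=1.0, n=10000, rng=None):
    """Monte Carlo propagation of measurement errors into the death-time estimate."""
    rng = np.random.default_rng(0) if rng is None else rng
    estimates = []
    for _ in range(n):
        t_rect = rng.normal(t_rect_meas, sd_rect)    # perturbed rectal temperature
        t_env = rng.normal(t_env_meas, sd_env)       # perturbed environmental temperature
        f = lambda t: cooling_model(t, t_env) - t_rect
        try:
            estimates.append(brentq(f, 0.01, 50.0))  # invert the cooling curve for t
        except ValueError:
            continue                                  # no root in the bracket: skip sample
    est = np.array(estimates)
    return est.mean(), est.std()

print(death_time_mc(t_rect_meas=30.0, t_env_meas=18.0))
```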

  10. Management and Evaluation System on Human Error, Licence Requirements, and Job-aptitude in Rail and the Other Industries

    Energy Technology Data Exchange (ETDEWEB)

    Koo, In Soo; Suh, S. M.; Park, G. O. (and others)

    2006-07-15

    The rail system is closely tied to public life. When an accident happens, members of the public using the system can be injured or even killed. The accident that recently took place in the Taegu subway system, caused by inappropriate human task performance, demonstrated how tragic the consequences can be. Many studies have shown that most accidents occur because personnel perform their tasks in an inappropriate way. It is generally recognised that a rail system without a human element will not exist for quite a long time, so the human element will remain a major factor in the next tragic accident. This state-of-the-art report studied cases of management and evaluation systems related to human errors, license requirements, and job aptitude in rail and other industries, with the aim of improving the task performance of personnel and ultimately enhancing rail safety. Systems for managing human errors, license requirements, and job-aptitude evaluation of people working in rail-related agencies do much to develop and preserve their abilities; however, owing to various internal and external factors, they may to some extent be limited in how quickly they reflect overall trends in society, technology, and values. The case studies in this report show that removing and controlling the factors behind human errors can play a decisive role in the safety of the rail system. The analytical results of these case studies will be used in the project 'Development of Management Criteria on Human Error and Evaluation Criteria on Job-aptitude of Rail Safe-operation Personnel', which has been carried out as part of the 'Integrated R and D Program for Railway Safety'.

  11. Science, practice, and human errors in controlling Clostridium botulinum in heat-preserved food in hermetic containers.

    Science.gov (United States)

    Pflug, Irving J

    2010-05-01

    The incidence of botulism in canned food in the last century is reviewed along with the background science; a few conclusions are reached based on analysis of published data. There are two primary aspects to botulism control: the design of an adequate process and the delivery of the adequate process to containers of food. The probability that the designed process will not be adequate to control Clostridium botulinum is very small, probably less than 1.0 x 10^-6, based on containers of food, whereas the failure of the operator of the processing equipment to deliver the specified process to containers of food may be of the order of 1 in 40 to 1 in 100, based on processing units (retort loads). In the commercial food canning industry, failure to deliver the process will probably be of the order of 1.0 x 10^-4 to 1.0 x 10^-6 when U.S. Food and Drug Administration (FDA) regulations are followed. Botulism incidents have occurred in food canning plants that have not followed the FDA regulations. It is possible but very rare to have botulism result from postprocessing contamination. It may thus be concluded that botulism incidents in canned food are primarily the result of human failure in the delivery of the designed or specified process to containers of food, which in turn results in the survival, outgrowth, and toxin production of C. botulinum spores. Therefore, efforts in C. botulinum control should be concentrated on reducing human errors in the delivery of the specified process to containers of food.

  12. Analytical/experimental methods supporting an "error budget"-based design of the PATS/PFMS

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, J.G.; Rhorer, R.L.; Stevens, R.R.

    1992-09-01

    The design of a Precision Automated Turning System (PATS) for use in the Department of Energy weapons complex is being carried out under the umbrella of the Precision Flexible Manufacturing System (PFMS) at the Los Alamos National Laboratory. From the beginning of this project, the design has been based on a finished part "error budget" concept that allows the contribution of the elastic displacements from the PATS machine base to be included in the machining error and the spindle-to-spindle part transfer error. Thus, a number of different analytical models have been used to perform studies of the displacements of key points when the base is subjected to the loadings that occur during movement of the slides on the way system. Finite Element (FE) models have been used in parameter studies. This model has pointer "beams" at the part-machining position along the spindle centerline and at the spindle-to-spindle part transfer position. These rigid pointers indicate the difference in the actual displacements, as opposed to the assumed displacements (thus, the "error"), at these points caused by the base deformation. For example, one of their uses was to study the effects of the locations and the number of base supports. The maximum relative pointer displacements, when the model is loaded by the way system for 3, 6, 11, and a crippled 11-point support scheme, were studied. Using this information, a decision was made to depart from the conventional 3-point ("milkstool") support concept, and support the base at 5 points.

  13. Vector data error analysis for remote sensing-based urban change mapping

    Science.gov (United States)

    Feng, Xiuli; Wang, Ke; Lou, Liming; Zhou, Bin

    2005-10-01

    Nowadays, rapid urban growth has large social, environmental, economic and public health impacts in China. There is therefore a need for timely spatial information on urban change, i.e. urban change mapping. Aviation survey and satellite remote sensing are the main ways to obtain information about the earth's surface. Compared to aviation survey, satellite remote sensing makes urban change mapping at 1:10,000 scale cost-efficient. SPOT5 remotely sensed images are currently better suited to this application than other satellite remotely sensed data. The key technical problem is whether the vector data error of urban change is within the mapping error limitation at this scale. In order to obtain the vector data precision and provide a precision reference for the application of SPOT5 images to urban change mapping, a case study was conducted in Yangxunqiao, Shaoxing city. Based on the SPOT5 images acquired and the ground control points (GCPs) and check points taken by differential GPS through field survey, the geometric correction of images and urban change mapping at 1:10,000 scale were performed. The relevant indices were used to evaluate the point position error, line feature error and polygon feature error of the urban change vector data against field survey data and simultaneous IKONOS images. The point position precision results were: root mean square (RMS) of X = 3.93 m, RMS of Y = 4.13 m, planar RMS = 5.71 m, and the average relative polygon area accuracy was 88.05%. Finally, the conclusion was drawn that urban change mapping based on SPOT5 images can satisfy the precision requirements of 1:10,000-scale mapping.

  14. Artificial Error Tuning Based on Design a Novel SISO Fuzzy Backstepping Adaptive Variable Structure Control

    Directory of Open Access Journals (Sweden)

    Samaneh Zahmatkesh

    2013-10-01

    Full Text Available This paper examines single input single output (SISO) chattering-free variable structure control (VSC) whose controller coefficient is tuned online by a fuzzy backstepping algorithm to control a continuum robot manipulator. The variable structure methodology is selected as the framework to construct the control law and to address the stability and robustness of the closed-loop system based on a Lyapunov formulation. The main goal is to guarantee an acceptable error result and to adjust the trajectory following. The proposed approach effectively combines the design technique of a Lyapunov-based variable structure controller with a modified proportional-plus-derivative (P+D) fuzzy estimator, which estimates the nonlinearity of the undefined system dynamics in the backstepping controller. The inputs represent the variable structure function, the error, and the modified rate of error; the output represents the joint torque. The fuzzy backstepping methodology tunes the variable structure function online based on an adaptive methodology. The performance of the SISO VSC tuned online by the fuzzy backstepping algorithm (FBSAVSC) is validated through comparison with VSC. Simulation results show good trajectory performance in the presence of an uncertain joint torque load.

  15. The essential component in DNA-based information storage system: robust error-tolerating module

    Directory of Open Access Journals (Sweden)

    Aldrin Kay-Yuen eYim

    2014-11-01

    Full Text Available The size of digital data is ever increasing and is expected to grow to 40,000 EB by 2020, yet the estimated global information storage capacity in 2011 was less than 300 EB, indicating that most data are transient. DNA, as a very stable nano-molecule, is an ideal massive storage device for long-term data archiving. The two most notable illustrations are from Church et al. and Goldman et al., whose approaches are well optimized for most sequencing platforms: short synthesized DNA fragments without homopolymers. Here we suggest improvements to the error-handling methodology that could enable the integration of DNA-based computational processes, e.g. algorithms based on self-assembly of DNA. As a proof of concept, a picture of 438 bytes was encoded to DNA with a Low-Density Parity-Check error-correction code. We salvaged a significant portion of sequencing reads with mutations generated during DNA synthesis and sequencing and successfully reconstructed the entire picture. A modular programming framework, DNAcodec, with an XML-based data format was also introduced. Our experiments demonstrate the practicability of long DNA message recovery with high error tolerance, which opens the field to biocomputing and synthetic biology.
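    A minimal sketch of the basic byte-to-nucleotide mapping (2 bits per base) on which such storage schemes build. The homopolymer-avoiding encodings of Goldman et al. and the LDPC error-correction layer used in the paper are intentionally omitted here; the message is arbitrary.

```python
# Map bytes to/from DNA bases, 2 bits per nucleotide, most-significant pair first.
BASES = "ACGT"

def bytes_to_dna(data: bytes) -> str:
    out = []
    for byte in data:
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_bytes(seq: str) -> bytes:
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for ch in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(ch)
        out.append(byte)
    return bytes(out)

msg = b"error-tolerant"
encoded = bytes_to_dna(msg)
assert dna_to_bytes(encoded) == msg          # round-trip check
print(encoded[:24], "...", len(encoded), "nt")
```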

  17. Errors in Radiologic Reporting

    Directory of Open Access Journals (Sweden)

    Esmaeel Shokrollahi

    2010-05-01

    Full Text Available Given that the report is a professional document and bears the associated responsibilities, all of the radiologist's errors appear in it, either directly or indirectly. It is not easy to distinguish and classify the mistakes made when a report is prepared, because in most cases the errors are complex and attributable to more than one cause, and because many errors depend on the individual radiologist's professional, behavioral and psychological traits. In fact, anyone can make a mistake, but some radiologists make more mistakes, and some types of mistakes are predictable to some extent. Reporting errors can be categorized in different ways:
    Universal vs. individual
    Human related vs. system related
    Perceptive vs. cognitive errors
    1. Descriptive
    2. Interpretative
    3. Decision related
    Perceptive errors:
    1. False positive
    2. False negative
       - Nonidentification
       - Erroneous identification
    Cognitive errors:
       - Knowledge-based
       - Psychological

  18. A new accuracy measure based on bounded relative error for time series forecasting.

    Science.gov (United States)

    Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M

    2017-01-01

    Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
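    A small sketch of how the bounded-relative-error idea can be computed. The construction here (bounding each error against a benchmark forecast's error, then unscaling the mean) follows our reading of the UMBRAE description and should be verified against the original paper before use; the data are synthetic.

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (assumed definition):
    bound each error relative to a user-selected benchmark, then unscale."""
    e = np.abs(np.asarray(actual) - np.asarray(forecast))        # method errors
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))  # benchmark errors
    brae = e / (e + e_star)            # bounded in [0, 1]; ties (both zero) need special handling
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)       # < 1: better than benchmark; > 1: worse

# Example: compare a forecast against a naive (persistence) benchmark.
y = np.array([10.0, 12.0, 13.0, 12.5, 14.0])
f = np.array([10.5, 11.8, 13.2, 12.4, 13.7])
naive = np.array([9.8, 10.0, 12.0, 13.0, 12.5])   # previous observations
print(umbrae(y, f, naive))
```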

  19. Research on vision-based error detection system for optic fiber winding

    Science.gov (United States)

    Lu, Wenchao; Li, Huipeng; Yang, Dewei; Zhang, Min

    2011-11-01

    Optic fiber coils are the hearts of fiber optic gyroscopes (FOGs). To detect the unavoidable errors that occur while winding optical fibers, such as gaps, climbs and partial rises between fibers, when fiber-optic winding machines are operated, and to enable fully automated winding, we researched and designed this vision-based error detection system for optic fiber winding on the basis of digital image collection and processing[1]. When a fiber-optic winding machine is operated, background light is used as the illumination system to strengthen the contrast between fibers and background. A microscope and a CCD are then used as the imaging and image collection systems to receive analog images of the fibers. The analog images are converted into digital images, which can be processed and analyzed by computer. Canny edge detection and a contour-tracing algorithm are used as the main image processing methods. The distances between the fiber peaks are then measured and compared with the desired values. If these values fall outside a predetermined tolerance zone, an error is detected and classified either as a gap, climb or rise. We used the OpenCV and MATLAB libraries as the basic function library and VC++6.0 as the platform to display the results. The test results showed that the system was useful and that the edge detection and contour-tracing algorithms were effective, because of the high rate of accuracy. At the same time, the results of error detection were correct.
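    A rough sketch of the processing chain described above (Canny edges, contour tracing, peak-to-peak spacing check), assuming the OpenCV 4 Python API. The thresholds, expected pitch, tolerance and file name are illustrative assumptions, not values from the paper.

```python
import cv2

def detect_winding_errors(image_path, expected_pitch_px=40.0, tolerance_px=5.0):
    """Locate fiber cross-sections in a back-lit image and flag windings whose
    spacing falls outside the tolerance (gap or climb/rise candidates)."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)                      # edge map of the fiber profile
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Use each contour's centroid as one fiber "peak", sorted along the winding axis.
    peaks = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            peaks.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    peaks.sort(key=lambda p: p[0])
    errors = []
    for (x0, _), (x1, _) in zip(peaks, peaks[1:]):
        spacing = x1 - x0
        if abs(spacing - expected_pitch_px) > tolerance_px:
            kind = "gap" if spacing > expected_pitch_px else "climb/rise"
            errors.append((x0, x1, spacing, kind))
    return errors

# Usage (path is hypothetical):
# print(detect_winding_errors("winding_frame_0001.png"))
```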

  1. Model based correction of placement error in EBL and its verification

    Science.gov (United States)

    Babin, Sergey; Borisov, Sergey; Militsin, Vladimir; Komagata, Tadashi; Wakatsuki, Tetsuro

    2016-05-01

    In maskmaking, the main source of error contributing to placement error is charging. DISPLACE software corrects the placement error for any layout, based on a physical model. The charge of a photomask and multiple discharge mechanisms are simulated to find the charge distribution over the mask. The beam deflection is calculated for each location on the mask, creating data for the placement correction. The software considers the mask layout, EBL system setup, resist, and writing order, as well as other factors such as fogging and proximity effects correction. The output of the software is the data for placement correction. One important step is the calibration of the physical model. A test layout on a single calibration mask was used for calibration. The extracted model parameters were used to verify the correction. As an ultimate test of the correction, a sophisticated layout, very different from the calibration mask, was used for verification. The placement correction results were predicted by DISPLACE. A good correlation between the measured and predicted values of the correction confirmed the high accuracy of the charging placement error correction.

  2. Equilibrating errors: reliable estimation of information transmission rates in biological systems with spectral analysis-based methods.

    Science.gov (United States)

    Ignatova, Irina; French, Andrew S; Immonen, Esa-Ville; Frolov, Roman; Weckström, Matti

    2014-06-01

    Shannon's seminal approach to estimating information capacity is widely used to quantify information processing by biological systems. However, the Shannon information theory, which is based on power spectrum estimation, necessarily contains two sources of error: time delay bias error and random error. These errors are particularly important for systems with relatively large time delay values and for responses of limited duration, as is often the case in experimental work. The window function type and size chosen, as well as the values of inherent delays cause changes in both the delay bias and random errors, with possibly strong effect on the estimates of system properties. Here, we investigated the properties of these errors using white-noise simulations and analysis of experimental photoreceptor responses to naturalistic and white-noise light contrasts. Photoreceptors were used from several insect species, each characterized by different visual performance, behavior, and ecology. We show that the effect of random error on the spectral estimates of photoreceptor performance (gain, coherence, signal-to-noise ratio, Shannon information rate) is opposite to that of the time delay bias error: the former overestimates information rate, while the latter underestimates it. We propose a new algorithm for reducing the impact of time delay bias error and random error, based on discovering, and then using that size of window, at which the absolute values of these errors are equal and opposite, thus cancelling each other, allowing minimally biased measurement of neural coding.
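    One common way to turn the spectral quantities discussed above into an information-rate estimate is via the stimulus-response coherence, R = -∫ log2(1 - γ²(f)) df. The sketch below uses an artificial stimulus/response pair and a few trial window lengths only to illustrate how the window-size choice enters the estimate; it does not reproduce the error-equilibration algorithm proposed in the paper.

```python
import numpy as np
from scipy.signal import coherence

def info_rate(stimulus, response, fs, nperseg):
    """Shannon information-rate estimate from stimulus-response coherence."""
    f, gamma2 = coherence(stimulus, response, fs=fs, nperseg=nperseg)
    gamma2 = np.clip(gamma2, 0.0, 1.0 - 1e-12)
    return np.trapz(-np.log2(1.0 - gamma2), f)   # bits per second

# Toy "photoreceptor": a filtered, noisy copy of a white-noise contrast stimulus.
rng = np.random.default_rng(0)
fs = 1000.0
stim = rng.standard_normal(2 ** 15)
resp = np.convolve(stim, np.ones(5) / 5.0, mode="same") + 0.5 * rng.standard_normal(stim.size)

# The window length trades off the two error sources discussed in the record.
for nperseg in (256, 512, 1024, 2048):
    print(nperseg, round(info_rate(stim, resp, fs, nperseg), 1), "bit/s")
```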

  3. Study on Cell Error Rate of a Satellite ATM System Based on CDMA

    Institute of Scientific and Technical Information of China (English)

    赵彤宇; 张乃通

    2003-01-01

    In this paper, the cell error rate (CER) of a CDMA-based satellite ATM system is analyzed. Two fading models, i.e. the partial fading model and the total fading model, are presented according to multi-path propagation fading and the shadow effect. Based on the total fading model, the relation of CER vs. the number of subscribers at various elevations under 2D-RAKE receiving and non-diversity receiving is obtained. The impact of pseudo noise (PN) code length on the cell error rate is also considered. It is found that the maximum likelihood combination of the multi-path signal does not improve system performance when multiple access interference (MAI) is small; on the contrary, the performance may be even worse.

  4. Data-aided efficient synchronization for UWB signals based on minimum average error probability

    Institute of Scientific and Technical Information of China (English)

    SUN Qiang; LÜ Tie-jun

    2008-01-01

    One of the biggest challenges in ultra-wideband (UWB) radio is accurate timing acquisition at the receiver. In this article, we develop a novel data-aided synchronization algorithm for pulse amplitude modulation (PAM) UWB systems. Pilot and information symbols are transmitted simultaneously by an orthogonal code division multiplexing (OCDM) scheme. In the receiver, an algorithm based on the minimum average error probability (MAEP) of the coherent detector is applied to estimate the timing offset. The multipath interference (MI) problem for timing offset estimation is considered. The mean-square-error (MSE) and bit-error-rate (BER) performances of our proposed scheme are simulated. The results show that our algorithm outperforms the algorithm based on the maximum correlator output (MCO) in multipath channels.

  5. Accelerating Time-Varying Hardware Volume Rendering Using TSP Trees and Color-Based Error Metrics

    Science.gov (United States)

    Ellsworth, David; Chiang, Ling-Jen; Shen, Han-Wei; Kwak, Dochan (Technical Monitor)

    2000-01-01

    This paper describes a new hardware volume rendering algorithm for time-varying data. The algorithm uses the Time-Space Partitioning (TSP) tree data structure to identify regions within the data that have spatial or temporal coherence. By using this coherence, the rendering algorithm can improve performance when the volume data is larger than the texture memory capacity by decreasing the amount of textures required. This coherence can also allow improved speed by appropriately rendering flat-shaded polygons instead of textured polygons, and by not rendering transparent regions. To reduce the polygonization overhead caused by the use of the hierarchical data structure, we introduce an optimization method using polygon templates. The paper also introduces new color-based error metrics, which more accurately identify coherent regions compared to the earlier scalar-based metrics. By showing experimental results from runs using different data sets and error metrics, we demonstrate that the new methods give substantial improvements in volume rendering performance.

  6. Fast Training of Support Vector Machines Using Error-Center-Based Optimization

    Institute of Scientific and Technical Information of China (English)

    L. Meng; Q. H. Wu

    2005-01-01

    This paper presents a new algorithm for Support Vector Machine (SVM) training, which trains a machine based on the cluster centers of the errors caused by the current machine. Experiments with various training sets show that the computation time of this new algorithm scales almost linearly with training set size, and thus it may be applied to much larger training sets, in comparison to standard quadratic programming (QP) techniques.

  7. The localization and correction of errors in models: a constraint-based approach

    OpenAIRE

    Piechowiak, S.; Rodriguez, J

    2005-01-01

    Model-based diagnosis and constraint-based reasoning are well-known generic paradigms for which the most difficult task lies in the construction of the models used. We consider the problem of localizing and correcting the errors in a model. We present a method to debug a model. To help the debugging task, we propose to use the model-based diagnosis solver. This method has been used in a real application: the development of a model of a railway signalling system.

  8. Novel Ontologies-based Optical Character Recognition-error Correction Cooperating with Graph Component Extraction

    Directory of Open Access Journals (Sweden)

    Sarunya Kanjanawattana

    2017-01-01

    Full Text Available literature. Extracting graph information clearly contributes to readers who are interested in graph information interpretation, because significant information presented in the graph can be obtained. A typical tool used to transform image-based characters into computer-editable characters is optical character recognition (OCR). Unfortunately, OCR cannot guarantee perfect results, because it is sensitive to noise and input quality. This becomes a serious problem because misrecognition provides misleading information to readers and causes miscommunication. In this study, we present a novel method for OCR-error correction based on bar graphs using semantics, such as ontologies and dependency parsing. Moreover, we used a graph component extraction method proposed in our previous study to omit irrelevant parts from graph components. It was applied to clean and prepare the input data for this OCR-error correction. The main objectives of this paper are to extract significant information from the graph using OCR and to correct OCR errors using semantics. As a result, our method provided remarkable performance with the highest accuracies and F-measures. Moreover, we verified that our input data contained less noise because of the efficiency of our graph component extraction. Based on the evidence, we conclude that our solution to the OCR problem achieves the objectives.

  9. Adaptive control of machining process based on extended entropy square error and wavelet neural network

    Institute of Scientific and Technical Information of China (English)

    LAI Xing-yu; YE Bang-yan; LI Wei-guang; YAN Chun-yan

    2007-01-01

    Combining information entropy and wavelet analysis with a neural network, an adaptive control system and an adaptive control algorithm are presented for the machining process based on the extended entropy square error (EESE) and a wavelet neural network (WNN). The extended entropy square error function is defined and its availability is proved theoretically. Replacing the mean square error criterion of the BP algorithm with the EESE criterion, the proposed system is then applied to the on-line control of the cutting force with variable cutting parameters by adaptively searching the wavelet basis function and self-adjusting the scaling parameter and translation parameter of the wavelet as well as the neural network weights. Simulation results show that the designed system has fast response and no overshoot, and that it is more effective than conventional adaptive control of the machining process based on a neural network. The suggested algorithm can adaptively adjust the feed rate on-line until a constant cutting force approaching the reference force is achieved under varied cutting conditions, thus improving machining efficiency and protecting the tool.

  10. Ensemble-based algorithm for error reduction in hydraulics in the context of flood forecasting

    Directory of Open Access Journals (Sweden)

    Barthélémy Sébastien

    2016-01-01

    Full Text Available Over the last few years, a collaborative work between CERFACS, LNHE (EDF R&D), SCHAPI and CEREMA resulted in the implementation of a Data Assimilation (DA) method on top of MASCARET in the framework of real-time forecasting. This prototype was based on a simplified Kalman filter where the description of the background error covariances is prescribed from an off-line climatology that is constant over time. This approach showed promising results on the Adour and Marne catchments, as it improves the forecast skill of the hydraulic model using water level and discharge in-situ observations. An ensemble-based DA algorithm has recently been implemented to improve the modelling of the background error covariance matrix used, when observations are assimilated, to distribute the correction to the water level and discharge states from the observation points to the entire state. It was demonstrated that the flow-dependent description of the background error covariances with the EnKF algorithm leads to a more realistic correction of the hydraulic state, with a significant impact of the hydraulic network characteristics.
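    A generic stochastic EnKF analysis step is sketched below; it is not the operational MASCARET implementation. The state layout (water levels along a reach), observation indices, ensemble size and error levels are illustrative assumptions.

```python
import numpy as np

def enkf_analysis(X, y, obs_idx, obs_std, rng=None):
    """Stochastic EnKF analysis step.
    X: (n_state, n_members) forecast ensemble; y: observations at state indices obs_idx."""
    rng = np.random.default_rng(0) if rng is None else rng
    n_state, n_members = X.shape
    A = X - X.mean(axis=1, keepdims=True)             # ensemble anomalies
    HA = A[obs_idx, :]                                # anomalies at observation points
    # Flow-dependent covariances estimated from the ensemble itself.
    P_HT = A @ HA.T / (n_members - 1)                 # cross-covariance state/obs
    S = HA @ HA.T / (n_members - 1) + np.diag(np.full(len(obs_idx), obs_std ** 2))
    K = P_HT @ np.linalg.inv(S)                       # Kalman gain
    # Perturb observations (stochastic EnKF) and update each member.
    Y = y[:, None] + rng.normal(0.0, obs_std, (len(obs_idx), n_members))
    return X + K @ (Y - X[obs_idx, :])

# Toy usage: 100-point water-level state, 20 members, 2 gauging stations.
rng = np.random.default_rng(1)
X = 2.0 + 0.1 * rng.standard_normal((100, 20))
Xa = enkf_analysis(X, y=np.array([2.3, 2.1]), obs_idx=np.array([10, 60]), obs_std=0.05)
print(Xa.shape)
```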

  11. Study of geometric errors detection method for NC machine tools based on non-contact circular track

    Science.gov (United States)

    Yan, Kejun; Liu, Jun; Gao, Feng; Wang, Huan

    2008-12-01

    This paper presents a non-contact measuring method for the geometric errors of NC machine tools based on a circular track testing method. When the machine spindle moves along a circular path, the position error at every tested position on the circle can be obtained using two laser interferometers. With a volumetric error model, the 12 geometric error components apart from the angular error components can be derived. The method has the characteristics of a wide detection range and high precision. Obtaining the geometric errors individually is of great significance for the error compensation of NC machine tools. This method has been tested on a MCV-510 NC machine tool. The experimental results prove the feasibility of the method.

  12. Object Detection and Tracking-Based Camera Calibration for Normalized Human Height Estimation

    Directory of Open Access Journals (Sweden)

    Jaehoon Jung

    2016-01-01

    Full Text Available This paper presents a normalized human height estimation algorithm using an uncalibrated camera. To estimate the normalized human height, the proposed algorithm detects a moving object and performs tracking-based automatic camera calibration. The proposed method consists of three steps: (i) moving human detection and tracking, (ii) automatic camera calibration, and (iii) human height estimation and error correction. The proposed method automatically calibrates the camera by detecting moving humans and estimates the human height using error correction. The proposed method can be applied to object-based video surveillance systems and digital forensics.

  13. Projection based image restoration, super-resolution and error correction codes

    Science.gov (United States)

    Bauer, Karl Gregory

    Super-resolution is the ability of a restoration algorithm to restore meaningful spatial frequency content beyond the diffraction limit of the imaging system. The Gerchberg-Papoulis (GP) algorithm is one of the most celebrated algorithms for super-resolution. The GP algorithm is conceptually simple and demonstrates the importance of using a priori information in the formation of the object estimate. In the first part of this dissertation the continuous GP algorithm is discussed in detail and shown to be a projection on convex sets algorithm. The discrete GP algorithm is shown to converge in the exactly-, over- and under-determined cases. A direct formula for the computation of the estimate at the kth iteration and at convergence is given. This analysis of the discrete GP algorithm sets the stage to connect super-resolution to error-correction codes. Reed-Solomon codes are used for error-correction in magnetic recording devices, compact disk players and by NASA for space communications. Reed-Solomon codes have a very simple description when analyzed with the Fourier transform. This signal processing approach to error-correction codes allows the error-correction problem to be compared with the super-resolution problem. The GP algorithm for super-resolution is shown to be equivalent to the correction of errors with a Reed-Solomon code over an erasure channel. The Restoration from Magnitude (RFM) problem seeks to recover a signal from the magnitude of the spectrum. This problem has applications to imaging through a turbulent atmosphere. The turbulent atmosphere causes localized changes in the index of refraction and introduces different phase delays in the data collected. Synthetic aperture radar (SAR) and hyperspectral imaging systems are capable of simultaneously recording multiple images of different polarizations or wavelengths. Each of these images will experience the same turbulent atmosphere and have a common phase distortion. A projection based restoration
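    A 1-D sketch of the discrete GP iteration viewed as a projection-onto-convex-sets scheme: alternately re-impose the measured in-band spectrum and the known finite support. The band, support and object below are toy values chosen only to illustrate the two projections.

```python
import numpy as np

def gerchberg_papoulis(measured_spectrum, band_mask, support_mask, n_iter=200):
    """Discrete GP iteration: keep the measured in-band Fourier coefficients,
    enforce the known spatial support, and let out-of-band content build up."""
    x = np.fft.ifft(measured_spectrum)                # initial band-limited estimate
    for _ in range(n_iter):
        x = x * support_mask                          # projection 1: known support
        X = np.fft.fft(x)
        X[band_mask] = measured_spectrum[band_mask]   # projection 2: known in-band data
        x = np.fft.ifft(X)
    return x.real

# Toy example: a short object observed through an ideal low-pass system.
n = 256
obj = np.zeros(n)
obj[100:130] = 1.0                                    # true object, known support 100:130
band = np.zeros(n, dtype=bool)
band[:20] = True
band[-19:] = True                                     # only low frequencies are measured
spec = np.fft.fft(obj) * band
support = np.zeros(n)
support[100:130] = 1.0
rec = gerchberg_papoulis(spec, band, support)
print(np.abs(rec - obj).max())
```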

  14. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    Energy Technology Data Exchange (ETDEWEB)

    Malinowski, Kathleen T. [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); McAvoy, Thomas J. [Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States); Department of Chemical and Biomolecular Engineering and Institute of Systems Research, University of Maryland, College Park, MD (United States); George, Rohini [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Dieterich, Sonja [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA (United States); D' Souza, Warren D., E-mail: wdsou001@umaryland.edu [Department of Radiation Oncology, University of Maryland School of Medicine, Baltimore, MD (United States); Fischell Department of Bioengineering, University of Maryland, College Park, MD (United States)

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
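    A sketch of the partial-least-squares inferential-model idea using scikit-learn; the marker/tumor data below are synthetic stand-ins for the Synchrony log files, and the number of latent components is an illustrative choice, not a value from the study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

# Illustrative training data: 3 external markers (9 coordinates) -> 3-D tumor position.
n_train = 300
markers = rng.standard_normal((n_train, 9))
true_map = rng.standard_normal((9, 3)) * 0.5
tumor = markers @ true_map + 0.05 * rng.standard_normal((n_train, 3))

model = PLSRegression(n_components=4)        # partial-least-squares inferential model
model.fit(markers, tumor)

# Predict tumor position from new marker readings and report the mean 3-D error.
new_markers = rng.standard_normal((50, 9))
pred = model.predict(new_markers)
err = pred - (new_markers @ true_map)
print(np.sqrt((err ** 2).sum(axis=1)).mean())
```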

  15. Medical errors: legal and ethical responses.

    Science.gov (United States)

    Dickens, B M

    2003-04-01

    Liability to err is a human, often unavoidable, characteristic. Errors can be classified as skill-based, rule-based, knowledge-based and other errors, such as errors of judgment. In law, a key distinction is between negligent and non-negligent errors. To describe a mistake as an error of clinical judgment is legally ambiguous, since an error that a physician might have made when acting with ordinary care and the professional skill the physician claims is not deemed negligent in law. If errors prejudice patients' recovery from treatment and/or future care, in physical or psychological ways, it is legally and ethically required that they be informed of them in a timely manner. Senior colleagues, facility administrators and others such as medical licensing authorities should be informed of serious forms of error, so that preventive education and strategies can be designed. Errors for which clinicians may be legally liable may originate in systemically defective institutional administration.

  16. Reliability-Based Marginal Cost Pricing Problem Case with Both Demand Uncertainty and Travelers’ Perception Errors

    Directory of Open Access Journals (Sweden)

    Shaopeng Zhong

    2013-01-01

    Full Text Available Focusing on the first-best marginal cost pricing (MCP) in a stochastic network with both travel demand uncertainty and stochastic perception errors within the travelers’ route choice decision processes, this paper develops a perceived risk-based stochastic network marginal cost pricing (PRSN-MCP) model. Numerical examples based on an integrated method combining the moment analysis approach, the fitting distribution method, and the reliability measures are also provided to demonstrate the importance and properties of the proposed model. The main finding is that ignoring the effect of travel time reliability and travelers’ perception errors may significantly reduce the performance of the first-best MCP tolls, especially under high travelers’ confidence and network congestion levels. The analysis result could also enhance our understanding of (1) the effect of stochastic perception error (SPE) on the perceived travel time distribution and the components of road toll; (2) the effect of road toll on the actual travel time distribution and its reliability measures; (3) the effect of road toll on the total network travel time distribution and its statistics; and (4) the effect of travel demand level and the value of reliability (VoR) level on the components of road toll.

  17. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Directory of Open Access Journals (Sweden)

    Tianshuang Qiu

    2007-12-01

    Full Text Available This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in terms of “biased” or “unbiased” is discussed. The scale factors, which may be used to evaluate NLOS propagation level, can be estimated by the proposed method. AOA seen at base stations may be corrected to some degree. The performance comparisons among the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS error effectively, and it is attractive for location estimation in cellular networks.
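
    The closed-form and Lagrange-multiplier solutions of the record are not reproduced here, but the underlying idea of fusing TOA and AOA while treating NLOS as a positive range bias can be sketched as a small nonlinear least-squares problem. Everything below (base-station layout, noise levels, weighting) is an invented simplification, not the paper's method.

```python
import numpy as np
from scipy.optimize import least_squares

# Base-station positions (metres) and simulated TOA ranges / AOA bearings.
bs = np.array([[0.0, 0.0], [1000.0, 0.0], [0.0, 1000.0]])
ms_true = np.array([420.0, 310.0])

rng = np.random.default_rng(1)
ranges = np.linalg.norm(ms_true - bs, axis=1) + rng.uniform(0, 80, 3)   # NLOS only adds delay
bearings = np.arctan2(ms_true[1] - bs[:, 1], ms_true[0] - bs[:, 0])
bearings += rng.normal(0, np.deg2rad(2), 3)                             # AOA noise / scattering

def residuals(p):
    dx, dy = p[0] - bs[:, 0], p[1] - bs[:, 1]
    r = np.hypot(dx, dy)
    ang = np.arctan2(dy, dx)
    # NLOS can only lengthen a range, so the true position should lie inside the measured
    # range circles: penalise r > range fully, r < range only weakly (a crude stand-in
    # for the paper's geometric constraints on the enclosed feasible region).
    range_res = np.where(r > ranges, r - ranges, 0.3 * (r - ranges))
    angle_res = 500.0 * np.arctan2(np.sin(ang - bearings), np.cos(ang - bearings))
    return np.concatenate([range_res, angle_res])

estimate = least_squares(residuals, x0=bs.mean(axis=0)).x
print("estimated MS position:", estimate.round(1))
```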

  18. Degradation data analysis based on a generalized Wiener process subject to measurement error

    Science.gov (United States)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into consideration simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability density function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is carried out to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive reasonable results with enhanced inference precision.
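
    A heavily simplified sketch of the modeling idea, assuming a single unit, a linear time scale, and no unit-to-unit variation: simulate a drifted Wiener path observed with additive measurement error, then recover the drift, diffusion and error parameters by maximizing the exact Gaussian likelihood. All parameter values are invented.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulate one degradation path: Wiener process with drift plus measurement error.
t = np.linspace(0.5, 50, 100)
mu_true, sigma_true, tau_true = 0.20, 0.30, 0.15
increments = rng.normal(mu_true * np.diff(np.r_[0, t]),
                        sigma_true * np.sqrt(np.diff(np.r_[0, t])))
y = np.cumsum(increments) + rng.normal(0, tau_true, t.size)    # observed degradation

def neg_log_lik(params):
    """Exact Gaussian likelihood: cov = sigma^2 * min(t_i, t_j) + tau^2 * I."""
    mu, log_sigma, log_tau = params
    sigma, tau = np.exp(log_sigma), np.exp(log_tau)
    cov = sigma ** 2 * np.minimum.outer(t, t) + tau ** 2 * np.eye(t.size)
    resid = y - mu * t
    _, logdet = np.linalg.slogdet(cov)
    return 0.5 * (logdet + resid @ np.linalg.solve(cov, resid))

fit = minimize(neg_log_lik, x0=[0.1, np.log(0.2), np.log(0.1)], method="Nelder-Mead")
mu_hat, sigma_hat, tau_hat = fit.x[0], np.exp(fit.x[1]), np.exp(fit.x[2])
print("MLE (drift, diffusion, error std):", round(mu_hat, 3), round(sigma_hat, 3), round(tau_hat, 3))
```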

  19. Worst-case analysis of target localization errors in fiducial-based rigid body registration

    Science.gov (United States)

    Shamir, Reuben R.; Joskowicz, Leo

    2009-02-01

    Fiducial-based rigid registration is the preferred method for aligning the preoperative image with the intra-operative physical anatomy in existing image-guided surgery systems. After registration, the target locations usually cannot be measured directly, so the Target Registration Error (TRE) is often estimated with the Fiducial Registration Error (FRE), or with Fitzpatrick's TRE (FTRE) estimation formula. However, large discrepancies between the FRE and the TRE have been exemplified in hypothetical setups and have been observed in the clinic. In this paper, we formally prove that in the worst case the FRE and the TRE, and the FTRE and the TRE are independent, regardless of the target location, the number of fiducials, and their configuration. The worst case occurs when the unknown Fiducial Localization Error (FLE) is modeled as an affine anisotropic inhomogeneous bias. Our results generalize previous examples, contribute to the mathematical understanding of TRE estimation in fiducial-based rigid-body registration, and strengthen the need for realistic and reliable FLE models and effective TRE estimation methods.
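
    The following toy simulation (not from the paper) shows how FRE and TRE are typically computed in fiducial-based rigid registration: register noisy physical fiducials to their image counterparts with a least-squares (Kabsch) fit and evaluate both error measures. The fiducial layout, FLE magnitude and target location are arbitrary choices.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src points onto dst points."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(0) - r @ src.mean(0)
    return r, t

rng = np.random.default_rng(3)
fiducials = rng.uniform(-50, 50, (6, 3))           # image-space fiducials (mm)
target = np.array([10.0, 20.0, 60.0])              # target away from the fiducial cloud

# Physical-space points: same geometry corrupted by fiducial localization error (FLE).
physical_fids = fiducials + rng.normal(0, 1.0, fiducials.shape)

r, t = rigid_register(fiducials, physical_fids)
fre = np.sqrt(np.mean(np.sum((fiducials @ r.T + t - physical_fids) ** 2, axis=1)))
tre = np.linalg.norm(target @ r.T + t - target)    # the true target did not move in this toy setup
print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")
```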

  20. A TOA-AOA-Based NLOS Error Mitigation Method for Location Estimation

    Science.gov (United States)

    Tang, Hong; Park, Yongwan; Qiu, Tianshuang

    2007-12-01

    This paper proposes a geometric method to locate a mobile station (MS) in a mobile cellular network when both the range and angle measurements are corrupted by non-line-of-sight (NLOS) errors. The MS location is restricted to an enclosed region by geometric constraints from the temporal-spatial characteristics of the radio propagation channel. A closed-form equation of the MS position, time of arrival (TOA), angle of arrival (AOA), and angle spread is provided. The solution space of the equation is very large because the angle spreads are random variables in nature. A constrained objective function is constructed to further limit the MS position. A Lagrange multiplier-based solution and a numerical solution are proposed to resolve the MS position. The estimation quality of the estimator in terms of "biased" or "unbiased" is discussed. The scale factors, which may be used to evaluate NLOS propagation level, can be estimated by the proposed method. AOA seen at base stations may be corrected to some degree. The performance comparisons among the proposed method and other hybrid location methods are investigated on different NLOS error models and with two scenarios of cell layout. It is found that the proposed method can deal with NLOS error effectively, and it is attractive for location estimation in cellular networks.

  1. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    Science.gov (United States)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of (a) the statistical stationarity of the relevant process and (b) the error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semiconductors near their Debye temperatures, where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of the correlation function, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
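
    A bare-bones Green-Kubo sketch, not the authors' adaptive estimator: integrate the flux autocorrelation function for each replica and use the spread across replicas as a crude error estimate (the physical prefactors and the Zwanzig-Ailawadi bounds are omitted). The Ornstein-Uhlenbeck-like test data are synthetic.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Biased sample autocorrelation of a zero-mean series up to max_lag."""
    x = x - x.mean()
    return np.array([np.mean(x[: x.size - k] * x[k:]) for k in range(max_lag)])

def green_kubo(replicas, dt, max_lag):
    """Transport coefficient ~ integral of the flux autocorrelation, averaged over replicas."""
    acfs = np.array([autocorrelation(r, max_lag) for r in replicas])
    integrals = np.trapz(acfs, dx=dt, axis=1)            # integral truncated at max_lag
    return integrals.mean(), integrals.std(ddof=1) / np.sqrt(len(replicas))

# Toy flux data: Ornstein-Uhlenbeck-like series standing in for heat-flux output.
rng = np.random.default_rng(4)
dt, n, tau = 0.01, 20000, 0.5
replicas = []
for _ in range(8):
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = x[i - 1] * (1 - dt / tau) + rng.normal(0, np.sqrt(dt))
    replicas.append(x)

coeff, stderr = green_kubo(replicas, dt, max_lag=500)
print(f"estimated coefficient: {coeff:.3f} +/- {stderr:.3f}")
```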

  2. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
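
    To make the error-propagation setting concrete, the sketch below solves a 2-D pressure Poisson equation (with a common textbook source term and homogeneous Dirichlet boundaries, which need not match the paper's setup) from a synthetic velocity field, then repeats the solve with noise added to the velocities and reports the propagated pressure error.

```python
import numpy as np

def pressure_poisson(u, v, dx, n_iter=2000, rho=1.0):
    """Jacobi solution of  lap(p) = -rho * [(du/dx)^2 + 2*(du/dy)*(dv/dx) + (dv/dy)^2]
    on a uniform grid with homogeneous Dirichlet boundaries (p = 0 on the edges)."""
    dudy, dudx = np.gradient(u, dx)
    dvdy, dvdx = np.gradient(v, dx)
    b = -rho * (dudx ** 2 + 2.0 * dudy * dvdx + dvdy ** 2)
    p = np.zeros_like(u)
    for _ in range(n_iter):
        p[1:-1, 1:-1] = 0.25 * (p[1:-1, 2:] + p[1:-1, :-2] +
                                p[2:, 1:-1] + p[:-2, 1:-1] -
                                dx ** 2 * b[1:-1, 1:-1])
    return p

# Synthetic "PIV" velocity field (Taylor-Green-like) and a noisy copy of it.
n, dx = 64, 1.0 / 63
y, x = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
u = np.sin(np.pi * x) * np.cos(np.pi * y)
v = -np.cos(np.pi * x) * np.sin(np.pi * y)
rng = np.random.default_rng(5)
u_noisy = u + rng.normal(0, 0.02, u.shape)
v_noisy = v + rng.normal(0, 0.02, v.shape)

p_clean = pressure_poisson(u, v, dx)
p_noisy = pressure_poisson(u_noisy, v_noisy, dx)
print("propagated pressure error (RMS):", float(np.sqrt(np.mean((p_noisy - p_clean) ** 2))))
```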

  3. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    CERN Document Server

    Pan, Zhao; Thomson, Scott; Truscott, Tadd

    2016-01-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.

  4. Error propagation dynamics of PIV-based pressure field calculations: How well does the pressure Poisson solver perform inherently?

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.

  5. An Adaptive Systematic Lossy Error Protection Scheme for Broadcast Applications Based on Frequency Filtering and Unequal Picture Protection

    Directory of Open Access Journals (Sweden)

    Marie Ramon

    2009-01-01

    Full Text Available Systematic lossy error protection (SLEP) is a robust error resilient mechanism based on principles of Wyner-Ziv (WZ) coding for video transmission over error-prone networks. In an SLEP scheme, the video bitstream is separated into two parts: a systematic part consisting of a video sequence transmitted without channel coding, and additional information consisting of a WZ supplementary stream. This paper presents an adaptive SLEP scheme in which the WZ stream is obtained by frequency filtering in the transform domain. Additionally, error resilience varies adaptively depending on the characteristics of compressed video. We show that the proposed SLEP architecture achieves graceful degradation of reconstructed video quality in the presence of increasing transmission errors. Moreover, it provides good performance in terms of error protection as well as reconstructed video quality compared to solutions based on coarser quantization, while offering an interesting embedded scheme to apply digital video format conversion.

  6. An analytical method for error analysis of GRACE-like missions based on spectral analysis

    CERN Document Server

    Cai, Lin; Li, Qiong; Luo, Zhicai; Hsu, Houtse

    2016-01-01

    The aim of this paper is to present an analytical relationship between the power spectral density of GRACE-like mission measurements and the accuracies of the gravity field coefficients, mainly from a signals-and-systems point of view, which indicates the one-to-one correspondence between spherical harmonic error degree variances and frequencies of the measurement noise. In order to establish this relationship, the average power of the errors due to gravitational acceleration difference and the relationship between perturbing forces and range-rate perturbations are derived, based on the orthogonality property of associated Legendre functions and the linear orbit perturbation theory, respectively. This method provides a physical insight into the relation between mission parameters and scientific requirements. By taking GRACE-FO as the object of research, the effects of sensor noises and time-variable gravity signals are analyzed. If LRI measurements are applied, a mission goal with a geoid accuracy...

  7. An Integrated Method of Multiradar Quantitative Precipitation Estimation Based on Cloud Classification and Dynamic Error Analysis

    Directory of Open Access Journals (Sweden)

    Yong Huang

    2017-01-01

    Full Text Available Relationships between radar reflectivity factor and rainfall are different in various precipitation cloud systems. In this study, the cloud systems are first classified into five categories with radar and satellite data to improve the radar quantitative precipitation estimation (QPE) algorithm. Secondly, the errors of multiradar QPE algorithms are assumed to be different in convective and stratiform clouds. The QPE data are then derived with methods of Z-R, Kalman filter (KF), optimum interpolation (OI), Kalman filter plus optimum interpolation (KFOI), and average calibration (AC) based on error analysis on the Huaihe River Basin. In the case of the flood in early July 2007, the KFOI is applied to obtain the QPE product. Applications show that the KFOI can improve the precision of precipitation estimation for multiple precipitation types.

  8. Estimation of random errors for lidar based on noise scale factor

    Science.gov (United States)

    Wang, Huan-Xue; Liu, Jian-Guo; Zhang, Tian-Shu

    2015-08-01

    Estimation of random errors, which are due to shot noise of photomultiplier tube (PMT) or avalanche photodiode (APD) detectors, is essential in lidar observation. Due to the Poisson distribution of incident electrons, there exists a proportional relationship between the standard deviation of the signal and the square root of its mean value. Based on this relationship, the noise scale factor (NSF) is introduced into the estimation, which only needs a single data sample. This method avoids the interference of atmospheric fluctuations in the calculation of random errors. The results show that this method is feasible and reliable. Project supported by the Strategic Priority Research Program of the Chinese Academy of Sciences (Grant No. XDB05040300) and the National Natural Science Foundation of China (Grant No. 41205119).
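
    A simplified reading of the noise-scale-factor idea, with an invented synthetic profile: estimate the NSF from a far-range window where the signal is expected to be flat, then assign each range bin a random-error estimate proportional to the square root of its counts.

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic photon-counting lidar profile: exponentially decaying signal plus background.
range_bins = np.arange(1, 2001)
mean_counts = 500.0 * np.exp(-range_bins / 400.0) + 5.0
profile = rng.poisson(mean_counts).astype(float)

# Estimate the noise scale factor (NSF) from a far-range window where the signal is
# expected to be flat (pure background), so std and sqrt(mean) can be compared directly.
bg = profile[-300:]
nsf = bg.std(ddof=1) / np.sqrt(bg.mean())

# Random-error estimate for every bin of this single profile: sigma ~ NSF * sqrt(counts).
sigma = nsf * np.sqrt(np.clip(profile, 1, None))
relative_error = sigma / np.clip(profile, 1, None)
print(f"NSF = {nsf:.2f}, relative error at bin 100 = {relative_error[100]:.3%}")
```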

  9. Mining discriminative class codes for multi-class classification based on minimizing generalization errors

    Science.gov (United States)

    Eiadon, Mongkon; Pipanmaekaporn, Luepol; Kamonsantiroj, Suwatchai

    2016-07-01

    Error Correcting Output Code (ECOC) has emerged as one of the most promising techniques for solving multi-class classification. In the ECOC framework, a multi-class problem is decomposed into several binary ones with a coding design scheme. Nevertheless, finding a suitable multi-class decomposition scheme is still an open research question in machine learning. In this work, we propose a novel multi-class coding design method to mine effective and compact class codes for multi-class classification. For a given n-class problem, this method decomposes the classes into subsets by embedding a structure of binary trees. We put forward a novel splitting criterion based on minimizing generalization errors across the classes. Then, a greedy search procedure is applied to explore the optimal tree structure for the problem domain. We run experiments on many multi-class UCI datasets. The experimental results show that our proposed method can achieve better classification performance than the common ECOC design methods.
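
    The tree-structured code design of the paper is not reproduced below; the sketch only shows a generic ECOC baseline with random class codes using scikit-learn, which is the kind of common design the proposed method is compared against.

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OutputCodeClassifier

# Generic ECOC baseline with random class codes; the paper's tree-structured,
# generalization-error-driven code design would replace the random code matrix.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ecoc = OutputCodeClassifier(LogisticRegression(max_iter=2000), code_size=2.0, random_state=0)
ecoc.fit(X_train, y_train)
print("ECOC accuracy with random codes:", round(ecoc.score(X_test, y_test), 3))
```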

  10. The analysis of human error as causes in the maintenance of machines: a case study in mining companies

    Directory of Open Access Journals (Sweden)

    Kovacevic, Srdja

    2016-12-01

    Full Text Available This paper describes the two-step method used to analyse the factors and aspects influencing human error during the maintenance of mining machines. The first step is the cause-effect analysis, supported by brainstorming, where five factors and 21 aspects are identified. During the second step, the group fuzzy analytic hierarchy process is used to rank the identified factors and aspects. A case study is done on mining companies in Serbia. The key aspects are ranked according to an analysis that included experts who assess risks in mining companies (a maintenance engineer, a technologist, an ergonomist, a psychologist, and an organisational scientist). Failure to follow technical maintenance instructions, poor organisation of the training process, inadequate diagnostic equipment, and a lack of understanding of the work process are identified as the most important causes of human error.

  11. The application of SHERPA (Systematic Human Error Reduction and Prediction Approach) in the development of compensatory cognitive rehabilitation strategies for stroke patients with left and right brain damage.

    Science.gov (United States)

    Hughes, Charmayne M L; Baber, Chris; Bienkiewicz, Marta; Worthington, Andrew; Hazell, Alexa; Hermsdörfer, Joachim

    2015-01-01

    Approximately 33% of stroke patients have difficulty performing activities of daily living, often committing errors during the planning and execution of such activities. The objective of this study was to evaluate the ability of the human error identification (HEI) technique SHERPA (Systematic Human Error Reduction and Prediction Approach) to predict errors during the performance of daily activities in stroke patients with left and right hemisphere lesions. Using SHERPA we successfully predicted 36 of the 38 observed errors, with analysis indicating that the proportion of predicted and observed errors was similar for all sub-tasks and severity levels. HEI results were used to develop compensatory cognitive strategies that clinicians could employ to reduce or prevent errors from occurring. This study provides evidence for the reliability and validity of SHERPA in the design of cognitive rehabilitation strategies in stroke populations.

  12. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    Energy Technology Data Exchange (ETDEWEB)

    Ma, T; Kumaraswamy, L [Roswell Park Cancer Institute, Buffalo, NY (United States)

    2015-06-15

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software’s ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference error up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3-degree uniform shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating the fact that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original and error-induced plans in both Eclipse and 3DVH analysis. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, which a conventional gamma-based pre-treatment QA might not necessarily detect.

  13. A comparative evaluation of emerging methods for errors of commission based on applications to the Davis-Besse (1985) event

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B.; Dang, V.N.; Hirschberg, S. [Paul Scherrer Inst., Nuclear Energy and Safety Research Dept., CH-5232 Villigen PSI (Switzerland); Straeter, O. [Gesellschaft fur Anlagen- und Reaktorsicherheit (Germany)

    1999-12-01

    In considering the human role in accidents, the classical PSA methodology applied today focuses primarily on the omissions of actions required of the operators at specific points in the scenario models. A practical, proven methodology is not available for systematically identifying and analyzing the scenario contexts in which the operators might perform inappropriate actions that aggravate the scenario. As a result, typical PSAs do not comprehensively treat these actions, referred to as errors of commission (EOCs). This report presents the results of a joint project of the Paul Scherrer Institut (PSI, Villigen, Switzerland) and the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, Garching, Germany) that examined some methods recently proposed for addressing the EOC issue. Five methods were investigated: 1) ATHEANA, 2) the Borssele screening methodology, 3) CREAM, 4) CAHR, and 5) CODA. In addition to a comparison of their scope, basic assumptions, and analytical approach, the methods were each applied in the analysis of PWR Loss of Feedwater scenarios based on the 1985 Davis-Besse event, in which the operator response included actions that can be categorized as EOCs. The aim was to compare how the methods consider a concrete scenario in which EOCs have in fact been observed. These case applications show how the methods are used in practical terms and constitute a common basis for comparing the methods and the insights that they provide. The identification of the potentially significant EOCs to be analysed in the PSA is currently the central problem for their treatment. The identification or search scheme has to consider an extensive set of potential actions that the operators may take. These actions may take place instead of required actions, for example, because the operators fail to assess the plant state correctly, or they may occur even when no action is required. As a result of this broad search space, most methodologies apply multiple schemes to

  14. Online Adaptive Error Compensation SVM-Based Sliding Mode Control of an Unmanned Aerial Vehicle

    Directory of Open Access Journals (Sweden)

    Kaijia Xue

    2016-01-01

    Full Text Available Unmanned Aerial Vehicle (UAV) is a nonlinear dynamic system with uncertainties and noises. Therefore, an appropriate control system is required to ensure the stabilization and navigation of the UAV. This paper mainly discusses the control problem of a quad-rotor UAV system, which is influenced by unknown parameters and noises. In addition, a sliding mode control based on an online adaptive error compensation support vector machine (SVM) is proposed for stabilizing the quad-rotor UAV system. The sliding mode controller is established by analyzing the quad-rotor dynamics model, in which the unknown parameters are computed by an offline SVM. During this process, the online adaptive error compensation SVM method is applied in this paper. As modeling errors and noises both exist in the process of flight, an SVM trained once offline cannot predict the uncertainties and noises accurately. The control law is adjusted in real time by introducing new training sample data to the online adaptive SVM during the control process, so that the stability and robustness of flight are ensured. Simulation experiments demonstrate that the UAV with the online adaptive SVM can track a changing path faster according to its dynamic model. Consequently, the proposed method is shown to achieve better control performance for the UAV system.

  15. RTP-based broadcast streaming of high definition H.264/AVC video: an error robustness evaluation

    Institute of Scientific and Technical Information of China (English)

    HILLESTAD Odd Inge; JETLUND Ola; PERKIS Andrew

    2006-01-01

    In this work, we present an evaluation of the performance and error robustness of RTP-based broadcast streaming of high-quality high-definition (HD) H.264/AVC video. Using a fully controlled IP test bed (Hillestad et al., 2005), we broadcast high-definition video over RTP/UDP, and use an IP network emulator to introduce a varying amount of randomly distributed packet loss. A high-performance network interface monitoring card is used to capture the video packets into a trace file. Purpose-built software parses the trace file, analyzes the RTP stream and assembles the correctly received NAL units into an H.264/AVC Annex B byte stream file, which is subsequently decoded by JVT JM 10.1 reference software. The proposed measurement setup is a novel, practical and intuitive approach to perform error resilience testing of real-world H.264/AVC broadcast applications. Through a series of experiments, we evaluate some of the error resilience features of the H.264/AVC standard, and see how they perform at packet loss rates from 0.01% to 5%. The results confirmed that an appropriate slice partitioning scheme is essential for graceful degradation of received quality in the case of packet loss. While flexible macroblock ordering reduces the compression efficiency by about 1 dB for our test material, reconstructed video quality is improved for loss rates above 0.25%.

  16. Systematic errors of mapping functions which are based on the VMF1 concept

    Science.gov (United States)

    Zus, Florian; Dick, Galina; Dousa, Jan; Wickert, Jens

    2014-05-01

    Precise GNSS positioning requires an accurate Mapping Function (MF) to model the tropospheric delay. To date, the most accurate MF is the Vienna Mapping Function 1 (VMF1). It utilizes data from a numerical weather model which is known for high predictive skill (the Integrated Forecast System of the European Centre for Medium-Range Weather Forecasts). Still, the VMF1, or any other MF which is based on the VMF1 concept, is a parameterized mapping approach, which means that it is tuned for specific elevation angles, station altitudes, and orbital altitudes. In this study we analyse the systematic errors caused by such tuning on a global scale. We find that in particular the parameterization of the station altitude dependency is a major concern regarding airborne applications. For the moment we do not provide an improved parameterized mapping approach to mitigate these systematic errors, but instead we propose a rapid, direct, and therefore error-free mapping approach: the so-called Potsdam Mapping Factors (PMFs).

  17. Predictor-based error correction method in short-term climate prediction

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Based on the idea of combining dynamical and statistical methods in short-term climate prediction, a new prediction method, predictor-based error correction (PREC), is put forward in order to make effective use of statistical experience in dynamical prediction. Analyses show that the PREC can reasonably utilize the significant correlations between predictors and model prediction errors and correct prediction errors by establishing a statistical prediction model. Furthermore, the PREC is applied to the cross-validation experiments of dynamical seasonal prediction on the operational atmosphere-ocean coupled general circulation model of China Meteorological Administration/National Climate Center by selecting the sea surface temperature index in the Niño3 region as the physical predictor that represents the prevailing ENSO-cycle mode of interannual variability in the climate system. It is shown from the prediction results of summer mean circulation and total precipitation that the PREC can improve predictive skill to some extent. Thus the PREC provides a new approach for improving short-term climate prediction.
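
    A toy illustration of the PREC idea (simplified; the dataset and coefficients are invented): regress historical hindcast errors on the Niño3 predictor, then add the predicted error to a new dynamical forecast.

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy hindcast archive: observed summer rainfall anomaly, model hindcast, and a
# predictor (Nino3 SST index) whose relationship to the model error we exploit.
years = 30
nino3 = rng.normal(0, 1, years)
observed = 0.8 * nino3 + rng.normal(0, 0.5, years)
hindcast = observed - 0.6 * nino3 + rng.normal(0, 0.3, years)   # model misses part of the ENSO signal
hindcast_error = observed - hindcast

# PREC idea (simplified): fit error = a*predictor + b over the training years,
# then add the predicted error to a new dynamical forecast.
a, b = np.polyfit(nino3, hindcast_error, 1)

new_nino3, new_forecast = 1.2, 0.1                   # hypothetical values for a target year
corrected_forecast = new_forecast + (a * new_nino3 + b)
print(f"raw forecast {new_forecast:+.2f} -> corrected {corrected_forecast:+.2f}")
```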

  18. Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil.

    Science.gov (United States)

    Gao, Zhongxing; Zhang, Yonggang; Zhang, Yunhao

    2016-07-21

    Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical applications. This paper presents a mathematical model for IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The definition of asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that the high ASD and the variable fiber elastic modulus in the large-strain regime are the two dominant causes of the nonreciprocal phase shift in IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that vibration errors of both open-loop and closed-loop IFOG increase with increasing vibrational amplitude, vibrational frequency, and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is carried out according to the proposed model. Our work is useful for designing IFOG coils with better anti-vibration performance.

  19. Modeling for IFOG Vibration Error Based on the Strain Distribution of Quadrupolar Fiber Coil

    Directory of Open Access Journals (Sweden)

    Zhongxing Gao

    2016-07-01

    Full Text Available Improving the performance of interferometric fiber optic gyroscope (IFOG) in harsh environments, especially vibrational environments, is necessary for its practical applications. This paper presents a mathematical model for IFOG to theoretically compute the short-term rate errors caused by mechanical vibration. The computational procedures are mainly based on the strain distribution of the quadrupolar fiber coil measured by a stress analyzer. The definition of asymmetry of strain distribution (ASD) is given in the paper to evaluate the winding quality of the coil. The established model reveals that the high ASD and the variable fiber elastic modulus in the large-strain regime are the two dominant causes of the nonreciprocal phase shift in IFOG under vibration. Furthermore, theoretical analysis and computational results indicate that vibration errors of both open-loop and closed-loop IFOG increase with increasing vibrational amplitude, vibrational frequency, and ASD. Finally, an estimation of vibration-induced IFOG errors in aircraft is carried out according to the proposed model. Our work is useful for designing IFOG coils with better anti-vibration performance.

  20. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    Science.gov (United States)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to the target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance can then be calculated from these values. This method achieves much higher theoretical measurement accuracy than the phase-difference method because it avoids phase measurement. However, the actual accuracy of the system is limited, since additional phase retardation occurs in the measurement optical path when optical elements are imperfectly manufactured and installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the optical path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, together with their impact on measurement performance, is analyzed. Theoretical results show that the wave plate's azimuth error is the dominant limitation on ranging accuracy. According to the system design specifications, element tolerances and an error-correction method for the system are proposed, a ranging system is built, and ranging experiments are performed. Experimental results show that, with the proposed tolerances, the system can satisfy the accuracy requirement. The present work provides guidance for further research on system design and error distribution.
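
    A simplified numerical sketch of the ranging principle as described above, ignoring the polarization optics: at an in-phase frequency the round trip holds an integer number of modulation periods, so consecutive in-phase frequencies are spaced by c/(2d) and the distance follows from the mean spacing. The target distance and frequency jitter are invented.

```python
import numpy as np

C = 299_792_458.0          # speed of light in vacuum (m/s)

def distance_from_inphase_frequencies(f_inphase):
    """Distance estimate from the modulation frequencies at which the transmitted and
    received signals are in phase.  At such a frequency the round trip 2d holds an
    integer number of modulation periods, so consecutive in-phase frequencies are
    spaced by c / (2d); averaging the spacings gives a single robust estimate."""
    df = np.diff(np.sort(f_inphase))
    return C / (2.0 * df.mean())

# Toy measurement: a target at 37.500 m, with a little jitter on each in-phase frequency.
d_true = 37.5
k = np.arange(50, 56)                                          # consecutive in-phase orders
rng = np.random.default_rng(8)
f_meas = k * C / (2 * d_true) + rng.normal(0, 200.0, k.size)   # 200 Hz read-out jitter
print(f"estimated distance: {distance_from_inphase_frequencies(f_meas):.3f} m")
```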

  1. Geometrical analysis of registration errors in point-based rigid-body registration using invariants.

    Science.gov (United States)

    Shamir, Reuben R; Joskowicz, Leo

    2011-02-01

    Point-based rigid registration is the method of choice for aligning medical datasets in diagnostic and image-guided surgery systems. The most clinically relevant localization error measure is the Target Registration Error (TRE), which is the distance between the image-defined target and the corresponding target defined on another image or on the physical anatomy after registration. The TRE directly depends on the Fiducial Localization Error (FLE), which is the discrepancy between the selected and the actual (unknown) fiducial locations. Since the actual locations of targets usually cannot be measured after registration, the TRE is often estimated by the Fiducial Registration Error (FRE), which is the RMS distance between the fiducials in both datasets after registration, or with Fitzpatrick's TRE (FTRE) formula. However, low FRE-TRE and FTRE-TRE correlations have been reported in clinical practice and in theoretical studies. In this article, we show that for realistic FLE classes, the TRE and the FRE are uncorrelated, regardless of the target location and the number of fiducials and their configuration, and regardless of the FLE magnitude distribution. We use a geometrical approach and classical invariant theory to model the FLE and derive its relation to the TRE and FRE values. We show that, for these FLE classes, the FTRE and TRE are also uncorrelated. Finally, we show with simulations on clinical data that the FRE-TRE correlation is low also in the neighborhood of the FLE-FRE invariant classes. Consequently, and contrary to common practice, the FRE and FTRE may not always be used as surrogates for the TRE.

  2. Error compensation in computer generated hologram-based form testing of aspheres.

    Science.gov (United States)

    Stuerwald, Stephan

    2014-12-10

    Computer-generated holograms (CGHs) are used relatively often to test aspheric surfaces in the case of medium and high lot sizes. To date, various modified measurement setups for optical form-testing interferometry have been presented, such as subaperture stitching interferometry and scanning interferometry. In contrast, for testing low to medium lot sizes in research and development, a variety of other tactile and nontactile measurement methods have been developed. In the case of CGH-based interferometric form testing, measurement deviations in the region of several tens of nanometers typically occur. Deviations arise especially from imperfect alignment of the asphere relative to the testing wavefront. Therefore, the null test is user- and adjustment-dependent, which results in insufficient repeatability and reproducibility of the form errors. When adjusting a CGH, an operator usually performs a minimization of the spatial frequency of the fringe pattern. An adjustment to the ideal position, however, often cannot be performed with sufficient precision by the operator as the position of minimum spatial fringe density is often not unique, which also depends on the asphere. Thus, the scientific and technical objectives of this paper comprise the development of a simulation-based approach to explain and quantify typical experimental errors due to misalignment of the specimen relative to the CGH in an optical form testing measurement system. A further step is the programming of an iterative method to realize a virtual optimized realignment of the system on the basis of Zernike polynomial decomposition, which should allow for the calculation of the measured form for an ideal alignment and thus a careful subtraction of a typical alignment-based form error. To validate the simulation-based findings, a series of systematic experiments is performed with a recently developed hexapod positioning system in order to allow an exact and reproducible positioning of the optical CGH-based

  3. A residual-based a posteriori error estimator for single-phase Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-12-09

    In this paper we develop an a posteriori error estimator for a mixed finite element method for single-phase Darcy flow in a two-dimensional fractured porous media. The discrete fracture model is applied to model the fractures by one-dimensional fractures in a two-dimensional domain. We consider Raviart–Thomas mixed finite element method for the approximation of the coupled Darcy flows in the fractures and the surrounding porous media. We derive a robust residual-based a posteriori error estimator for the problem with non-intersecting fractures. The reliability and efficiency of the a posteriori error estimator are established for the error measured in an energy norm. Numerical results verifying the robustness of the proposed a posteriori error estimator are given. Moreover, our numerical results indicate that the a posteriori error estimator also works well for the problem with intersecting fractures.

  4. A Corpus-Based System of Error Detection and Revision Suggestion for Spanish Learners in Taiwan: A Case Study

    Science.gov (United States)

    Lu, Hui-Chuan; Chu, Yu-Hsin; Chang, Cheng-Yu

    2013-01-01

    Compared with English learners, Spanish learners have fewer resources for automatic error detection and revision. Following the current integrative Computer Assisted Language Learning (CALL) approach, we combined a corpus-based approach with CALL to create the System of Error Detection and Revision Suggestion (SEDRS) for learning Spanish. Through…

  5. Feasibility of neuro-morphic computing to emulate error-conflict based decision making.

    Energy Technology Data Exchange (ETDEWEB)

    Branch, Darren W.

    2009-09-01

    A key aspect of decision making is determining when errors or conflicts exist in information and knowing whether to continue or terminate an action. Understanding the error-conflict processing is crucial in order to emulate higher brain functions in hardware and software systems. Specific brain regions, most notably the anterior cingulate cortex (ACC) are known to respond to the presence of conflicts in information by assigning a value to an action. Essentially, this conflict signal triggers strategic adjustments in cognitive control, which serve to prevent further conflict. The most probable mechanism is the ACC reports and discriminates different types of feedback, both positive and negative, that relate to different adaptations. Unique cells called spindle neurons that are primarily found in the ACC (layer Vb) are known to be responsible for cognitive dissonance (disambiguation between alternatives). Thus, the ACC through a specific set of cells likely plays a central role in the ability of humans to make difficult decisions and solve challenging problems in the midst of conflicting information. In addition to dealing with cognitive dissonance, decision making in high consequence scenarios also relies on the integration of multiple sets of information (sensory, reward, emotion, etc.). Thus, a second area of interest for this proposal lies in the corticostriatal networks that serve as an integration region for multiple cognitive inputs. In order to engineer neurological decision making processes in silicon devices, we will determine the key cells, inputs, and outputs of conflict/error detection in the ACC region. The second goal is understand in vitro models of corticostriatal networks and the impact of physical deficits on decision making, specifically in stressful scenarios with conflicting streams of data from multiple inputs. We will elucidate the mechanisms of cognitive data integration in order to implement a future corticostriatal-like network in silicon

  6. Likelihood-based inference for cointegration with nonlinear error-correction

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbek, Anders Christian

    2010-01-01

    We consider a class of nonlinear vector error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties...... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters, and the short-run parameters. Asymptotic theory is provided for these and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study...

  7. Cryptanalysis on AW digital signature scheme based on error-correcting codes

    Institute of Scientific and Technical Information of China (English)

    张振峰; 冯登国; 戴宗锋

    2002-01-01

    In 1993, Alabhadi and Wicker gave a modification to the Xinmei digital signature scheme based on error-correcting codes, usually denoted the AW scheme. In this paper we show that the AW scheme is actually not secure: anyone holding the signatory's public keys can obtain equivalent private keys and then successfully forge digital signatures for arbitrary messages. We also point out that one can hardly construct a digital signature scheme with high-level security based on the difficulty of decomposing large matrices.

  8. Fountain code-based error control scheme for dimmable visible light communication systems

    Science.gov (United States)

    Feng, Lifang; Hu, Rose Qingyang; Wang, Jianping; Xu, Peng

    2015-07-01

    In this paper, a novel error control scheme using Fountain codes is proposed for on-off keying (OOK) based visible light communication (VLC) systems. By using Fountain codes, feedback information needs to be sent back to the transmitter only when transmitted messages are successfully recovered. Therefore, improved transmission efficiency, reduced protocol complexity, and relatively little wireless link-layer delay are achieved. By employing scrambling techniques and complementing symbols, the fewest complemented symbols are needed to support arbitrary dimming target values, and the entropy of the encoded message is increased.
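
    The paper's specific scheme (with scrambling and dimming support) is not reproduced here; the sketch below only illustrates the generic fountain-code mechanism it builds on, with a toy LT-style encoder (XOR of random source packets) and a peeling decoder. Packet sizes and the degree distribution are arbitrary simplifications.

```python
import numpy as np

rng = np.random.default_rng(9)

K = 8                                          # number of source packets
source = [rng.integers(0, 2, 16, dtype=np.uint8) for _ in range(K)]

def encode_symbol():
    """One LT-style encoded packet: XOR of a random subset of source packets.
    (A real implementation would draw the degree from a robust soliton distribution.)"""
    degree = int(rng.integers(1, 4))
    neighbours = set(int(i) for i in rng.choice(K, size=degree, replace=False))
    payload = np.bitwise_xor.reduce([source[i] for i in sorted(neighbours)])
    return neighbours, payload.copy()

def peel_decode(symbols):
    """Classic peeling decoder: repeatedly resolve symbols with one unknown neighbour."""
    recovered = {}
    progress = True
    while progress and len(recovered) < K:
        progress = False
        for neighbours, payload in symbols:
            unresolved = neighbours - set(recovered)
            if len(unresolved) == 1:
                idx = unresolved.pop()
                parts = [recovered[j] for j in neighbours if j in recovered] + [payload]
                recovered[idx] = np.bitwise_xor.reduce(parts)
                progress = True
    return recovered

received = [encode_symbol() for _ in range(20)]    # the VLC receiver keeps whatever arrives
decoded = peel_decode(received)
print("recovered", len(decoded), "of", K, "source packets")
```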

  9. Probabilistic model based error correction in a set of various mutant sequences analyzed by next-generation sequencing.

    Science.gov (United States)

    Aita, Takuyo; Ichihashi, Norikazu; Yomo, Tetsuya

    2013-12-01

    To analyze the evolutionary dynamics of a mutant population in an evolutionary experiment, it is necessary to sequence a vast number of mutants by high-throughput (next-generation) sequencing technologies, which enable rapid and parallel analysis of multikilobase sequences. However, the observed sequences include many base-call errors. Therefore, if next-generation sequencing is applied to analysis of a heterogeneous population of various mutant sequences, it is necessary to discriminate between true bases arising from point mutations and base-call errors in the observed sequences, and to subject the sequences to error-correction processes. To address this issue, we have developed a novel method of error correction based on the Potts model and a maximum a posteriori probability (MAP) estimate of its parameters corresponding to the "true sequences". Our method of error correction utilizes (1) the "quality scores" which are assigned to individual bases in the observed sequences and (2) the neighborhood relationship among the observed sequences mapped in sequence space. Computer experiments on the error correction of artificially generated sequences supported the effectiveness of our method, showing that 50-90% of errors were removed. Interestingly, this method is analogous to a probabilistic-model-based method of image restoration developed in the field of information engineering.
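
    The Potts-model MAP estimation of the paper is well beyond a short snippet, so the sketch below shows only the general flavor of quality-score-aware correction: a low-quality base is replaced by a strongly supported column consensus. The thresholds and reads are invented toy values.

```python
from collections import Counter

def correct_reads(reads, quals, min_qual=20, min_support=0.8):
    """A much-simplified stand-in for quality-score-aware error correction: a base is
    replaced by the column consensus when its own quality is low and the consensus is
    strongly supported.  (The paper's Potts-model MAP estimate additionally exploits
    the neighbourhood structure of the mutant sequences in sequence space.)"""
    corrected = []
    length = len(reads[0])
    for read, qual in zip(reads, quals):
        new = list(read)
        for pos in range(length):
            column = Counter(r[pos] for r in reads)
            base, count = column.most_common(1)[0]
            if qual[pos] < min_qual and count / len(reads) >= min_support and read[pos] != base:
                new[pos] = base
        corrected.append("".join(new))
    return corrected

# Toy data: the third read has a likely base-call error (low quality at position 3).
reads = ["ACGTACGT", "ACGTACGT", "ACGAACGT", "ACGTACGT", "ACGTACGT"]
quals = [[35] * 8, [35] * 8, [35, 35, 35, 10, 35, 35, 35, 35], [35] * 8, [35] * 8]
print(correct_reads(reads, quals))
```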

  10. Eliminating Obliquity Error from the Estimation of Ionospheric Delay in a Satellite-Based Augmentation System

    Science.gov (United States)

    Sparks, Lawrence

    2013-01-01

    Current satellite-based augmentation systems estimate ionospheric delay using algorithms that assume the electron density of the ionosphere is non-negligible only in a thin shell located near the peak of the actual profile. In its initial operating capability, for example, the Wide Area Augmentation System incorporated the thin shell model into an estimation algorithm that calculates vertical delay using a planar fit. Under disturbed conditions or at low latitude where ionospheric structure is complex, however, the thin shell approximation can serve as a significant source of estimation error. A recent upgrade of the system replaced the planar fit algorithm with an algorithm based upon kriging. The upgrade owes its success, in part, to the ability of kriging to mitigate the error due to this approximation. Previously, alternative delay estimation algorithms have been proposed that eliminate the need for invoking the thin shell model altogether. Prior analyses have compared the accuracy achieved by these methods to the accuracy achieved by the planar fit algorithm. This paper extends these analyses to include a comparison with the accuracy achieved by kriging. It concludes by examining how a satellite-based augmentation system might be implemented without recourse to the thin shell approximation.
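
    For reference, the thin-shell approximation mentioned above maps vertical to slant delay through an obliquity factor; the sketch below evaluates the standard factor for a few elevation angles, assuming a 350 km shell height (a typical value, not a system constant), and shows how strongly low-elevation measurements are scaled.

```python
import numpy as np

R_E = 6371.0      # mean Earth radius (km)
H_SHELL = 350.0   # assumed thin-shell height (km); an illustrative typical value

def obliquity_factor(elevation_deg, shell_height_km=H_SHELL):
    """Thin-shell obliquity factor mapping vertical ionospheric delay to slant delay:
       OF = 1 / sqrt(1 - (R_E * cos(E) / (R_E + h))**2)."""
    cos_e = np.cos(np.radians(elevation_deg))
    return 1.0 / np.sqrt(1.0 - (R_E * cos_e / (R_E + shell_height_km)) ** 2)

for el in (5, 15, 30, 60, 90):
    print(f"elevation {el:2d} deg -> obliquity factor {obliquity_factor(el):.2f}")
```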

  11. On calibrating the sensor errors of a PDR-based indoor localization system.

    Science.gov (United States)

    Lan, Kun-Chan; Shih, Wen-Yuah

    2013-04-10

    Many studies utilize the signal strength of short-range radio systems (such as WiFi, ultrasound and infrared) to build a radio map for indoor localization, by deploying a large number of beacon nodes within a building. The drawback of such an infrastructure-based approach is that the deployment and calibration of the system are costly and labor-intensive. Some prior studies proposed the use of Pedestrian Dead Reckoning (PDR) for indoor localization, which does not require the deployment of beacon nodes. In a PDR system, a small number of sensors are put on the pedestrian. These sensors (such as a G-sensor and gyroscope) are used to estimate the distance and direction that a user travels. The effectiveness of a PDR system lies in its success in accurately estimating the user's moving distance and direction. In this work, we propose a novel waist-mounted PDR that can measure the user's step lengths with high accuracy. We utilize vertical acceleration of the body to calculate the user's change in height during walking. Based on the Pythagorean Theorem, we can then estimate each step length using this data. Furthermore, we design a map matching algorithm to calibrate the direction errors from the gyro using building floor plans. The results of our experiment show that we can achieve about 98.26% accuracy in estimating the user's walking distance, with an overall location error of about 0.48 m.
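
    One plausible reading of the Pythagorean step-length model (an assumption, not necessarily the authors' exact formula) treats the hip as dropping by h while one leg of length l stays planted, giving a step of 2*sqrt(2*l*h - h^2). The sketch uses invented height drops and a 0.9 m leg length.

```python
import numpy as np

def step_length_from_height_change(height_drop_m, leg_length_m=0.9):
    """Pythagorean step-length model: if the hip drops by h while one leg of length l
    stays planted, the half-step is sqrt(l^2 - (l - h)^2), so the full step is twice
    that.  (An assumed geometric reading of the abstract; leg length 0.9 m is a guess.)"""
    h = np.asarray(height_drop_m, dtype=float)
    l = leg_length_m
    return 2.0 * np.sqrt(np.clip(2.0 * l * h - h ** 2, 0.0, None))

# Height drops per step would come from double-integrating the waist-mounted
# accelerometer's vertical axis over each detected step; here they are given directly.
height_drops = [0.035, 0.040, 0.032, 0.038]
steps = step_length_from_height_change(height_drops)
print("step lengths (m):", np.round(steps, 2), " walked distance:", round(float(steps.sum()), 2), "m")
```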

  12. Temporal dynamics of prediction error processing during reward-based decision making.

    Science.gov (United States)

    Philiastides, Marios G; Biele, Guido; Vavatzanidis, Niki; Kazzer, Philipp; Heekeren, Hauke R

    2010-10-15

    Adaptive decision making depends on the accurate representation of rewards associated with potential choices. These representations can be acquired with reinforcement learning (RL) mechanisms, which use the prediction error (PE, the difference between expected and received rewards) as a learning signal to update reward expectations. While EEG experiments have highlighted the role of feedback-related potentials during performance monitoring, important questions about the temporal sequence of feedback processing and the specific function of feedback-related potentials during reward-based decision making remain. Here, we hypothesized that feedback processing starts with a qualitative evaluation of outcome valence, which is subsequently complemented by a quantitative representation of PE magnitude. Results of a model-based single-trial analysis of EEG data collected during a reversal learning task showed that, around 220 ms after feedback, outcomes are initially evaluated categorically with respect to their valence (positive vs. negative). Around 300 ms, and in parallel to the maintained valence evaluation, the brain also represents quantitative information about PE magnitude, thus providing the complete information needed to update reward expectations and to guide adaptive decision making. Importantly, our single-trial EEG analysis based on PEs from an RL model showed that the feedback-related potentials do not merely reflect error awareness, but rather quantitative information crucial for learning reward contingencies.
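
    A minimal reinforcement-learning sketch of the quantities discussed above: in a toy reversal task, the prediction error PE = reward - expectation carries both the valence (its sign) and the magnitude (its absolute value) used to update expectations. The learning rate and task parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(10)

# Simple RL model of a two-option reversal task: expectations are updated with the
# reward prediction error (PE); the sign of the PE is the outcome valence and its
# absolute value is the magnitude discussed in the abstract.
alpha = 0.2                           # learning rate (an assumed value)
values = np.zeros(2)                  # expected reward for options A and B
reward_prob = np.array([0.8, 0.2])    # contingencies, reversed halfway through

for trial in range(200):
    if trial == 100:
        reward_prob = reward_prob[::-1]              # reversal
    choice = int(rng.choice(2)) if rng.random() < 0.1 else int(np.argmax(values))
    reward = float(rng.random() < reward_prob[choice])
    pe = reward - values[choice]                     # prediction error
    values[choice] += alpha * pe                     # learning update
    if trial in (0, 99, 100, 199):
        print(f"trial {trial:3d}: valence {'+' if pe >= 0 else '-'}  |PE| = {abs(pe):.2f}")
```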

  13. VR-based training and assessment in ultrasound-guided regional anesthesia: from error analysis to system design.

    LENUS (Irish Health Repository)

    2011-01-01

    If VR-based medical training and assessment is to improve patient care and safety (i.e. a genuine health gain), it has to be based on clinically relevant measurement of performance. Metrics on errors are particularly useful for capturing and correcting undesired behaviors before they occur in the operating room. However, translating clinically relevant metrics and errors into meaningful system design is a challenging process. This paper discusses how an existing task and error analysis was translated into the system design of a VR-based training and assessment environment for Ultrasound Guided Regional Anesthesia (UGRA).

  14. Reduction of error influence in a crisis: Agent-based modeling

    Directory of Open Access Journals (Sweden)

    Danijela D. Protić

    2012-01-01

    Full Text Available Crises caused by human or system errors vary in intensity and duration, and may cause adverse changes in the functioning of an organization. Prevention, a good response to incidents, and a fast reaction when a crisis escalates are all essential. Therefore, all decisions during the crisis must be precise and concise, and the reaction of the information and communication systems as well as that of personnel must be adequate. The agent technology described in this paper is used to model the education agent for establishing, maintaining and upgrading IT systems, as well as for staff training. The education agent consists of an indicator of changes in the environment (a status indication agent) and a management system (a management agent) for reacting to crises. Through an illustrative example of the VORG organization, the functions of these agents are described.

  15. Measure short separation for space debris based on radar angle error measurement information

    Science.gov (United States)

    Zhang, Yao; Wang, Qiao; Zhou, Lai-jian; Zhang, Zhuo; Li, Xiao-long

    2016-11-01

    With increasingly frequent human activities in space, the number of dead satellites and space debris has increased dramatically, bringing greater risks to operational spacecraft. However, the measuring equipment currently in widespread use for measurements between space targets has many problems, such as high development costs or limited conditions of use. To solve this problem, the angle error measurement information of a multi-target radar is used and, by combining it with the viewing geometry between the targets and the radar station, a horizontal-distance decoding model is built. By adopting an increased signal quantization bit depth, timing synchronization, and outlier processing methods, the measurement precision is improved so that the requirements of multi-target short-separation measurement are satisfied, and the usage efficiency is analyzed. A validation test is conducted to verify the feasibility and effectiveness of the proposed methods.

  16. Distinguishing science from pseudoscience in school psychology: science and scientific thinking as safeguards against human error.

    Science.gov (United States)

    Lilienfeld, Scott O; Ammirati, Rachel; David, Michal

    2012-02-01

    Like many domains of professional psychology, school psychology continues to struggle with the problem of distinguishing scientific from pseudoscientific and otherwise questionable clinical practices. We review evidence for the scientist-practitioner gap in school psychology and provide a user-friendly primer on science and scientific thinking for school psychologists. Specifically, we (a) outline basic principles of scientific thinking, (b) delineate widespread cognitive errors that can contribute to belief in pseudoscientific practices within school psychology and allied professions, (c) provide a list of 10 key warning signs of pseudoscience, illustrated by contemporary examples from school psychology and allied disciplines, and (d) offer 10 user-friendly prescriptions designed to encourage scientific thinking among school psychology practitioners and researchers. We argue that scientific thinking, although fallible, is ultimately school psychologists' best safeguard against a host of errors in thinking.

  17. A gender-based analysis of Iranian EFL learners' types of written errors

    Directory of Open Access Journals (Sweden)

    Faezeh Boroomand

    2013-05-01

    Full Text Available Committing errors is inevitable in the process of language acquisition and learning. Analysis of learners' errors from different perspectives contributes to the improvement of language learning and teaching. Although the issue of gender differences has received considerable attention in the context of second or foreign language learning and teaching, few studies on the relationship between gender and EFL learners' written errors have been carried out. The present study, conducted on the written errors of 100 Iranian advanced EFL learners (50 male and 50 female), presents different classifications and subdivisions of errors and analyzes them. Detecting the most frequently committed errors in each classification, the findings reveal significant differences between the error frequencies of the male and female groups (with a higher error frequency in the female written productions).

  18. Error-associated behaviors and error rates for robotic geology

    Science.gov (United States)

    Anderson, Robert C.; Thomas, Geb; Wagner, Jacob; Glasgow, Justin

    2004-01-01

    This study explores human error as a function of the decision-making process. One of many models for human decision-making is Rasmussen's decision ladder [9]. The decision ladder identifies the multiple tasks and states of knowledge involved in decision-making. The tasks and states of knowledge can be classified by the level of cognitive effort required to make the decision, leading to the skill, rule, and knowledge taxonomy (Rasmussen, 1987). Skill based decisions require the least cognitive effort and knowledge based decisions require the greatest cognitive effort. Errors can occur at any of the cognitive levels.

  20. Is human muscle spindle afference dependent on perceived size of error in visual tracking?

    Science.gov (United States)

    Kakuda, N; Wessberg, J; Vallbo, A B

    1997-04-01

    Impulses of 16 muscle spindle afferents from finger extensor muscles were recorded from the radial nerve along with electromyographic (EMG) activity and kinematics of joint movement. Twelve units were classified as Ia and 4 as II spindle afferents. Subjects were requested to perform precision movements at a single metacarpophalangeal joint in an indirect visual tracking task. Similar movements were executed under two different conditions, i.e. with high and low error gain. The purpose was to explore whether different precision demands were associated with different spindle firing rates. With high error gain, a small but significantly higher impulse rate was found in pooled data from Ia afferents during lengthening movements but not during shortening movements, nor with II afferents. EMG was also significantly higher with high error gain in recordings with Ia afferents. When the effect of EMG was factored out, using partial correlation analysis, the significant difference in Ia firing rate vanished. The findings suggest that fusimotor drive as well as skeletomotor activity were both marginally higher when the precision demand was higher, whereas no indication of independent fusimotor adjustments was found. These results are discussed with respect to data from behaving animals and the role of fusimotor independence in various limb muscles proposed.

  1. Universal geometric error modeling of the CNC machine tools based on the screw theory

    Science.gov (United States)

    Tian, Wenjie; He, Baiyan; Huang, Tian

    2011-05-01

    The methods to improve the precision of CNC (Computerized Numerical Control) machine tools fall into two categories: error prevention and error compensation. Error prevention improves precision through high accuracy in manufacturing and assembly. Error compensation analyzes the source errors that affect the machining error, establishes an error model, and reaches the ideal position and orientation by modifying the trajectory in real time. Error modeling is the key to compensation, so the error modeling method is of great significance. Many researchers have focused on this topic and proposed many methods, but these methods can hardly describe the full 6-dimensional configuration error of the machine tools. In this paper, a universal geometric error model of CNC machine tools is obtained using screw theory. The 6-dimensional error vector is expressed as a twist, and the error vector is transformed between different frames with the adjoint transformation matrix. This model describes the overall position and orientation errors of the tool relative to the workpiece entirely. It provides the mathematical model for compensation, and also a guideline for the manufacture, assembly and precision synthesis of machine tools.
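
    As a rough illustration of the kinematics involved (a sketch, not the authors' model), the snippet below builds the 6x6 adjoint of a homogeneous transform and uses it to express a small error twist (v, ω) in another frame; the rotation and translation used are made-up example values.

```python
import numpy as np

def skew(w):
    """3x3 skew-symmetric matrix of a 3-vector."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def adjoint(R, p):
    """6x6 adjoint of the transform (R, p) acting on twists ordered as (v, w)."""
    Ad = np.zeros((6, 6))
    Ad[:3, :3] = R
    Ad[:3, 3:] = skew(p) @ R
    Ad[3:, 3:] = R
    return Ad

# Hypothetical small error twist (v, w) expressed in one frame.
xi = np.array([1e-3, 0.0, 2e-3, 0.0, 1e-4, 0.0])
R = np.eye(3)                      # example frame rotation
p = np.array([0.5, 0.0, 0.2])      # example frame translation (m)
print(adjoint(R, p) @ xi)          # the same error twist seen from the other frame
```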

  2. El error en la práctica médica: una presencia ineludible Human error in medical practice: an unavoidable presence

    OpenAIRE

    Gladis Adriana Vélez Álvarez

    2006-01-01

    Erring, which is a human characteristic and a learning mechanism, becomes a threat to humans themselves in some settings, such as aviation and medicine. Some data are presented on the frequency of error in medicine, its ubiquity and the circumstances that favor it, together with a reflection on how error has been confronted and why it is not discussed openly. It is proposed that the first step in learning from error is to accept it as a pres...

  3. Design and validation of HABTA: Human Attention-Based Task Allocator

    NARCIS (Netherlands)

    Maanen, P.P. van; Koning, L. de; Dongen, C.J. van

    2007-01-01

    This paper addresses the development of an adaptive cooperative agent for a domain that suffers from human error in the allocation of attention. It discusses the design of a component of this adaptive agent, called the Human Attention-Based Task Allocator (HABTA), which is capable of managing agent and human atte

  4. ERROR COUNTER-BASED NEGATIVE ACKNOWLEDGEMENT MODE IN CCSDS FILE DELIVERY PROTOCOL

    Institute of Scientific and Technical Information of China (English)

    Xiao Shijie; Yang Mingchuan; Guo Qing

    2011-01-01

    Deep space communication has its own features, such as long propagation delays, heavy noise, asymmetric link rates, and intermittent connectivity, so the TCP/IP protocol cannot perform as well as it does in terrestrial communications. Accordingly, the Consultative Committee for Space Data Systems (CCSDS) developed the CCSDS File Delivery Protocol (CFDP), which sets standards for an efficient file delivery service capable of transferring files to and from mass memory located in the space segment. In CFDP, four optional acknowledgement modes are supported to make the communication more reliable. In this paper, we give a general introduction to the typical communication process in CFDP and analyze its four Negative Acknowledgement (NAK) modes with respect to file delivery delay and number of retransmissions. We find that, despite having the shortest file delivery delay, the immediate NAK mode suffers from the problem that frequent retransmission may lead to network congestion. Thus, we propose a new mode, the error counter-based NAK mode. Simulating the link between a deep space probe on Mars and a terrestrial station on Earth, we conclude that the error counter-based NAK mode successfully reduces the number of retransmissions at the negligible cost of a certain amount of file delivery delay.
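
    A toy sketch of the counting idea (not the CCSDS specification): a NAK is only issued once the number of accumulated missing segments reaches a hypothetical threshold, trading a small delivery delay for fewer retransmission requests.

```python
def error_counter_nak(received_flags, threshold=3):
    """Return the batches of missing-segment indices for which a NAK is issued.
    received_flags[i] is True if segment i arrived, False if it is missing.
    The threshold value is an assumption for illustration."""
    naks, missing = [], []
    for i, ok in enumerate(received_flags):
        if not ok:
            missing.append(i)
        if len(missing) >= threshold:      # error counter reaches the threshold
            naks.append(list(missing))     # request all accumulated gaps at once
            missing.clear()
    if missing:                            # flush any remaining gaps at end of file
        naks.append(list(missing))
    return naks

print(error_counter_nak([True, False, True, False, False, True, False]))
```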

  5. OCR Context-Sensitive Error Correction Based on Google Web 1T 5-Gram Data Set

    CERN Document Server

    Bassil, Youssef

    2012-01-01

    Since the dawn of the computing era, information has been represented digitally so that it can be processed by electronic computers. Paper books and documents were abundant and widely published at that time, and hence there was a need to convert them into digital format. OCR, short for Optical Character Recognition, was conceived to translate paper-based books into digital e-books. Regrettably, OCR systems are still erroneous and inaccurate as they produce misspellings in the recognized text, especially when the source document is of low printing quality. This paper proposes a post-processing OCR context-sensitive error correction method for detecting and correcting non-word and real-word OCR errors. The cornerstone of this proposed approach is the use of the Google Web 1T 5-gram data set as a dictionary of words to spell-check OCR text. The Google data set incorporates a very large vocabulary and word statistics entirely reaped from the Internet, making it a reliable source to perform dictionary-based erro...
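
    A minimal sketch of frequency-based candidate ranking in the same spirit: the tiny count table stands in for the Google Web 1T statistics, and the edit-distance-1 candidate generator is a common simplification rather than the paper's full context-sensitive method.

```python
from itertools import chain

# Hypothetical unigram counts standing in for the Google Web 1T statistics.
COUNTS = {"hello": 950_000, "help": 720_000, "hell": 310_000, "held": 150_000}

def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings at edit distance 1 from `word` (deletes, substitutions, inserts)."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = (a + b[1:] for a, b in splits if b)
    subs = (a + c + b[1:] for a, b in splits if b for c in alphabet)
    inserts = (a + c + b for a, b in splits for c in alphabet)
    return set(chain(deletes, subs, inserts))

def correct(word):
    """Pick the known candidate with the highest corpus frequency."""
    candidates = [w for w in edits1(word) | {word} if w in COUNTS]
    return max(candidates, key=COUNTS.get) if candidates else word

print(correct("helo"))   # -> "hello" under this toy frequency table
```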

  6. Error analysis in a stereo vision-based pedestrian detection sensor for collision avoidance applications.

    Science.gov (United States)

    Llorca, David F; Sotelo, Miguel A; Parra, Ignacio; Ocaña, Manuel; Bergasa, Luis M

    2010-01-01

    This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.

  7. Error Analysis in a Stereo Vision-Based Pedestrian Detection Sensor for Collision Avoidance Applications

    Directory of Open Access Journals (Sweden)

    David F. Llorca

    2010-04-01

    Full Text Available This paper presents an analytical study of the depth estimation error of a stereo vision-based pedestrian detection sensor for automotive applications such as pedestrian collision avoidance and/or mitigation. The sensor comprises two synchronized and calibrated low-cost cameras. Pedestrians are detected by combining a 3D clustering method with Support Vector Machine-based (SVM) classification. The influence of the sensor parameters in the stereo quantization errors is analyzed in detail providing a point of reference for choosing the sensor setup according to the application requirements. The sensor is then validated in real experiments. Collision avoidance maneuvers by steering are carried out by manual driving. A real time kinematic differential global positioning system (RTK-DGPS) is used to provide ground truth data corresponding to both the pedestrian and the host vehicle locations. The performed field test provided encouraging results and proved the validity of the proposed sensor for being used in the automotive sector towards applications such as autonomous pedestrian collision avoidance.
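
    For a feel of the quantization analysis mentioned, the standard first-order relation Δz ≈ z²·Δd / (f·b) links depth error to disparity error; the focal length and baseline below are invented and need not match the paper's sensor setup.

```python
def depth_quantization_error(z, focal_px, baseline_m, disparity_step_px=1.0):
    """First-order depth error caused by a one-step disparity error:
    dz ≈ z**2 * dd / (f * b). All parameter values here are illustrative."""
    return z ** 2 * disparity_step_px / (focal_px * baseline_m)

f_px, b_m = 800.0, 0.30          # hypothetical focal length (pixels) and baseline (m)
for z in (5, 10, 20, 40):        # pedestrian distances in metres
    err = depth_quantization_error(z, f_px, b_m)
    print(f"z = {z:2d} m  ->  depth quantization error ≈ {err:.2f} m")
```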

  8. Automatic detection of MLC relative position errors for VMAT using the EPID-based picket fence test

    Science.gov (United States)

    Christophides, Damianos; Davies, Alex; Fleckney, Mark

    2016-12-01

    Multi-leaf collimators (MLCs) ensure the accurate delivery of treatments requiring complex beam fluences like intensity modulated radiotherapy and volumetric modulated arc therapy. The purpose of this work is to automate the detection of MLC relative position errors ⩾0.5 mm using electronic portal imaging device-based picket fence tests and compare the results to the qualitative assessment currently in use. Picket fence tests with and without intentional MLC errors were measured weekly on three Varian linacs. The picket fence images analysed covered a time period ranging from 14 to 20 months depending on the linac. An algorithm was developed that calculated the MLC error for each leaf-pair present in the picket fence images. The baseline error distributions of each linac were characterised for an initial period of 6 months and compared with the intentional MLC errors using statistical metrics. The distributions of median and one-sample Kolmogorov-Smirnov test p-value exhibited no overlap between baseline and intentional errors and were used retrospectively to automatically detect MLC errors in routine clinical practice. Agreement was found between the MLC errors detected by the automatic method and the fault reports during clinical use, as well as interventions for MLC repair and calibration. In conclusion, the method presented provides for full automation of MLC quality assurance, based on individual linac performance characteristics. The use of the automatic method has been shown to provide early warning for MLC errors that resulted in clinical downtime.
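
    A rough sketch of how such a per-linac baseline could be turned into an automatic flag. The data, thresholds and the use of a two-sample KS test (as a stand-in for the paper's one-sample test) are all assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-leaf-pair MLC position errors (mm): 26 baseline weeks and one new test.
baseline_images = [rng.normal(0.0, 0.05, size=60) for _ in range(26)]
new_image = rng.normal(0.0, 0.05, size=60)
new_image[:30] += 0.6                     # simulate a mis-calibrated leaf bank

baseline_medians = np.array([np.median(np.abs(img)) for img in baseline_images])
threshold = baseline_medians.mean() + 2 * baseline_medians.std()   # 2 SD criterion

median_err = np.median(np.abs(new_image))
pooled = np.concatenate(baseline_images)
ks_p = stats.ks_2samp(new_image, pooled).pvalue   # stand-in for the one-sample test

flag = (median_err > threshold) or (ks_p < 0.01)
print(f"median |error| = {median_err:.3f} mm, KS p = {ks_p:.3g}, flagged = {flag}")
```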

  9. Sparsity-based Image Error Concealment via Adaptive Dual Dictionary Learning and Regularization.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhou, Jiantao; Wang, Shiqi; Zhao, Debin; Gao, Huijun

    2016-10-31

    In this paper, we propose a novel sparsity-based image error concealment (EC) algorithm through Adaptive Dual dictionary Learning and Regularization (ADLR). We define two feature spaces: the observed space and the latent space, corresponding to the available regions and the missing regions of image under test, respectively. We learn adaptive and complete dictionaries individually for each space, where the training data are collected via an adaptive template matching mechanism. Based on the piecewise stationarity of natural images, a local correlation model is learned to bridge the sparse representations of the aforementioned dual spaces, allowing us to transfer the knowledge of the available regions to the missing regions for EC purpose. Eventually, the EC task is formulated as a unified optimization problem, where the sparsity of both spaces and the learned correlation model are incorporated. Experimental results show that the proposed method outperforms the state-of-the-art techniques in terms of both objective and perceptual metrics.

  10. Beam-Based Error Identification and Correction Methods for Particle Accelerators

    CERN Document Server

    AUTHOR|(SzGeCERN)692826; Tomas, Rogelio; Nilsson, Thomas

    2014-06-10

    Modern particle accelerators have tight tolerances on the acceptable deviation from their desired machine parameters. The control of the parameters is of crucial importance for safe machine operation and performance. This thesis focuses on beam-based methods and algorithms to identify and correct errors in particle accelerators. The optics measurements and corrections of the Large Hadron Collider (LHC), which resulted in an unprecedented low β-beat for a hadron collider is described. The transverse coupling is another parameter which is of importance to control. Improvement in the reconstruction of the coupling from turn-by-turn data has resulted in a significant decrease of the measurement uncertainty. An automatic coupling correction method, which is based on the injected beam oscillations, has been successfully used in normal operation of the LHC. Furthermore, a new method to measure and correct chromatic coupling that was applied to the LHC, is described. It resulted in a decrease of the chromatic coupli...

  11. Impact Analysis of Human Error on Protection System Reliability

    Institute of Scientific and Technical Information of China (English)

    张晶晶; 丁明; 李生虎

    2012-01-01

    In view of single main protection and main-and-backup protection systems, a protection system reliability model that considers the impact of human error is developed in detail for the first time, based on a condition-based maintenance environment. Corresponding reliability indices are defined, and the impact of human error on protection system reliability is analyzed through an example. The analysis results show that human error has a great impact on the reliability of both single main protection and main-and-backup protection systems, so human error must be reduced as far as possible during normal operation and maintenance, and both human reliability and protection system reliability must be improved. In a multiple protection system, the reliability of the backup protection should be increased as well as that of the main protection, with the prevention of malfunction as the guiding principle.

  12. A GPS-Based Pitot-Static Calibration Method Using Global Output-Error Optimization

    Science.gov (United States)

    Foster, John V.; Cunningham, Kevin

    2010-01-01

    Pressure-based airspeed and altitude measurements for aircraft typically require calibration of the installed system to account for pressure sensing errors such as those due to local flow field effects. In some cases, calibration is used to meet requirements such as those specified in Federal Aviation Regulation Part 25. Several methods are used for in-flight pitot-static calibration including tower fly-by, pacer aircraft, and trailing cone methods. In the 1990s, the introduction of satellite-based positioning systems to the civilian market enabled new in-flight calibration methods based on accurate ground speed measurements provided by Global Positioning Systems (GPS). Use of GPS for airspeed calibration has many advantages such as accuracy, ease of portability (e.g. hand-held) and the flexibility of operating in airspace without the limitations of test range boundaries or ground telemetry support. The current research was motivated by the need for a rapid and statistically accurate method for in-flight calibration of pitot-static systems for remotely piloted, dynamically-scaled research aircraft. Current calibration methods were deemed not practical for this application because of confined test range size and limited flight time available for each sortie. A method was developed that uses high data rate measurements of static and total pressure, and GPS-based ground speed measurements to compute the pressure errors over a range of airspeed. The novel application of this approach is the use of system identification methods that rapidly compute optimal pressure error models with defined confidence intervals in near-real time. This method has been demonstrated in flight tests and has shown 2-σ bounds of approximately 0.2 kts with an order of magnitude reduction in test time over other methods. As part of this experiment, a unique database of wind measurements was acquired concurrently with the flight experiments, for the purpose of experimental validation of the

  13. Comparison of ensemble post-processing approaches, based on empirical and dynamical error modelisation of rainfall-runoff model forecasts

    Science.gov (United States)

    Chardon, J.; Mathevet, T.; Le Lay, M.; Gailhard, J.

    2012-04-01

    In the context of a national energy company (EDF: Electricité de France), hydro-meteorological forecasts are necessary to ensure the safety and security of installations, meet environmental standards and improve water resources management and decision making. Hydrological ensemble forecasts allow a better representation of meteorological and hydrological forecast uncertainties and improve the human expertise of hydrological forecasts, which is essential to synthesize the available information coming from different meteorological and hydrological models and from human experience. An operational hydrological ensemble forecasting chain has been developed at EDF since 2008 and has been used since 2010 on more than 30 watersheds in France. This ensemble forecasting chain is characterized by ensemble pre-processing (rainfall and temperature) and post-processing (streamflow), where a large amount of human expertise is solicited. The aim of this paper is to compare two hydrological ensemble post-processing methods developed at EDF in order to improve ensemble forecast reliability (similar to Montanari & Brath, 2004; Schaefli et al., 2007). The aim of the post-processing methods is to dress hydrological ensemble forecasts with hydrological model uncertainties, based on perfect forecasts. The first method (called the empirical approach) is based on a statistical model of the empirical error of perfect forecasts, using streamflow sub-samples by quantile class and lead time. The second method (called the dynamical approach) is based on streamflow sub-samples by quantile class, streamflow variation and lead time. On a set of 20 watersheds used for operational forecasts, results show that both approaches are necessary to ensure a good post-processing of the hydrological ensemble, allowing a good improvement of the reliability, skill and sharpness of the ensemble forecasts. The comparison of the empirical and dynamical approaches shows the limits of the empirical approach, which is not able to take into account hydrological
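
    As a very rough sketch of the "empirical" idea (dressing a deterministic forecast with past errors drawn from the same streamflow quantile class), with entirely made-up data and a single lead time:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical archive of past "perfect-forcing" forecasts and observations (m3/s).
past_fcst = rng.gamma(2.0, 50.0, size=2000)
past_obs = past_fcst * rng.lognormal(0.0, 0.15, size=2000)   # multiplicative model error
rel_err = past_obs / past_fcst

# Quantile classes of the forecast value.
edges = np.quantile(past_fcst, [0.0, 0.25, 0.5, 0.75, 1.0])

def dress(forecast, n_members=50):
    """Dress a new deterministic forecast with errors sampled from its quantile class."""
    k = int(np.clip(np.searchsorted(edges, forecast) - 1, 0, len(edges) - 2))
    in_class = rel_err[(past_fcst >= edges[k]) & (past_fcst <= edges[k + 1])]
    return forecast * rng.choice(in_class, size=n_members)

members = dress(120.0)
print(f"ensemble mean {members.mean():.1f} m3/s, 10-90% range "
      f"[{np.percentile(members, 10):.1f}, {np.percentile(members, 90):.1f}]")
```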

  14. Online machining error estimation method of numerical control gear grinding machine tool based on data analysis of internal sensors

    Science.gov (United States)

    Zhao, Fei; Zhang, Chi; Yang, Guilin; Chen, Chinyin

    2016-12-01

    This paper presents an online estimation method for cutting error based on the analysis of internal sensor readings. The internal sensors of the numerical control (NC) machine tool are selected to avoid installation problems. A mathematical model for cutting error estimation is proposed to compute the relative position of the cutting point and the tool center point (TCP) from internal sensor readings, based on the cutting theory of gears. In order to verify the effectiveness of the proposed model, it was simulated and tested experimentally in a gear generation grinding process. The cutting error of the gear was estimated and the factors that induce cutting error were analyzed. The simulations and experiments verify that the proposed approach is an efficient way to estimate the cutting error of the work-piece during the machining process.

  15. SU-E-T-363: Error Detection Comparison of EPID and MLC Log File Based IMRT QA Systems

    Energy Technology Data Exchange (ETDEWEB)

    Defoor, D; Obeidat, M; Linden, P; Kirby, N; Papanikolaou, N; Stathakis, S [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States); Mavroidis, P [University of North Carolina, Chapel Hill, NC (United States)

    2015-06-15

    Purpose: In this study we will compare the ability of three QA methods (Delta4, MU-EPID, Dynalog QA) to detect specific errors. Methods: A Varian Novalis Tx with a HD120 MLC and aS1000 Electronic Portal Imaging Device (EPID) was used in our study. Multi-leaf collimator (MLC) errors, gantry angle and dose errors were introduced into 5 volumetric arc therapy (VMAT) plans. 3D dose distributions calculated with data from the EPID and Dynalog QA methods were compared with the planned dose distribution. The gamma passing percentages as well as percentage error of planning target volume (PTV) dose were used for passing determination. Baselines for gamma passing percentages and PTV dose were established by measuring the original plan 5 times consecutively. Standard passing thresholds as well as thresholds derived from receiver operator characteristic (ROC) analysis and 2 standard deviation (SD) criteria were used. Results: When applying the standard 95% pass rate at 3%/3 mm gamma analysis, 14, 21 and 8 of 30 errors were detected by the Delta4, MU-EPID and Dynalog QA methods respectively. Thresholds set at 2 SD from our baseline measurements resulted in the detection of 18, 9 and 14 of 30 errors for the Delta4, MU-EPID and Dynalog QA methods respectively. When using D2 of the PTV as a metric, the Dynalog QA detected 20 of 30 errors while the EPID method detected 14 of 30 errors. Using D98 of the PTV, Dynalog QA detected 13 of 30 while the EPID detected 3 of 30 errors. Conclusion: Although MU-EPID detected the most errors at the standard 95% cutoff, it also produced the most false detections in the baseline data. The Dynalog QA was the most effective when the ROC-adjusted passing threshold was used. D2 was more effective as a metric for detecting errors than D98.

  16. Current error vector based prediction control of the section winding permanent magnet linear synchronous motor

    Energy Technology Data Exchange (ETDEWEB)

    Hong Junjie, E-mail: hongjjie@mail.sysu.edu.cn [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China); Li Liyi, E-mail: liliyi@hit.edu.cn [Dept. Electrical Engineering, Harbin Institute of Technology, Harbin 150000 (China); Zong Zhijian; Liu Zhongtu [School of Engineering, Sun Yat-Sen University, Guangzhou 510006 (China)

    2011-10-15

    Highlights: → The structure of the permanent magnet linear synchronous motor (SW-PMLSM) is new. → A new current control method, CEVPC, is employed in this motor. → The sectional power supply method is different from the others and effective. → The performance gets worse with voltage and current limitations. - Abstract: To achieve features such as greater thrust density and higher efficiency without reducing thrust stability, this paper proposes a section winding permanent magnet linear synchronous motor (SW-PMLSM), whose iron core is continuous whereas the winding is divided. The discrete system model of the motor is derived. With the definition of the current error vector and the selection of the value function, the theory of the current error vector based prediction control (CEVPC) for the motor currents is explained clearly. According to the winding section feature, the motion region of the mover is divided into five zones, in which the implementation of the current predictive control method is proposed. Finally, an experimental platform is constructed and experiments are carried out. The results show that the current control has a good dynamic response and that the thrust on the mover remains basically constant.

  17. Stochastic thermodynamics based on incomplete information: generalized Jarzynski equality with measurement errors with or without feedback

    Science.gov (United States)

    Wächtler, Christopher W.; Strasberg, Philipp; Brandes, Tobias

    2016-11-01

    In the derivation of fluctuation relations, and in stochastic thermodynamics in general, it is tacitly assumed that we can measure the system perfectly, i.e., without measurement errors. We here demonstrate for a driven system immersed in a single heat bath, for which the classic Jarzynski equality ⟨exp[-β(W-ΔF)]⟩ = 1 holds, how to relax this assumption. Based on a general measurement model akin to Bayesian inference we derive a general expression for the fluctuation relation of the measured work and we study the case of an overdamped Brownian particle and of a two-level system in particular. We then generalize our results further and incorporate feedback in our description. We show and argue that, if measurement errors are fully taken into account by the agent who controls and observes the system, the standard Jarzynski-Sagawa-Ueda relation should be formulated differently. We again explicitly demonstrate this for an overdamped Brownian particle and a two-level system where the fluctuation relation of the measured work differs significantly from the efficacy parameter introduced by Sagawa and Ueda. Instead, the generalized fluctuation relation under feedback control, ⟨exp[-β(W-ΔF)-I]⟩ = 1, holds only for a superobserver having perfect access to both the system and detector degrees of freedom, independently of whether or not the detector yields a noisy measurement record and whether or not we perform feedback.

  18. Errores ortográficos en el ingreso en bases de datos

    Directory of Open Access Journals (Sweden)

    Spinak, Ernesto

    1995-09-01

    Full Text Available The problems of orthographic quality control in the data entry of records in Spanish-language databases are analyzed. The pros and cons of four control methods are evaluated: double entry, hapax legomena, trigrams, and the use of dictionaries, with a view to determining which of these procedures offers the best cost/result ratio. The work focuses on manual data-entry processes; the errors of data entry with scanners are not analyzed.

  19. A method for multiplex gene synthesis employing error correction based on expression.

    Directory of Open Access Journals (Sweden)

    Timothy H-C Hsiau

    Full Text Available Our ability to engineer organisms with new biosynthetic pathways and genetic circuits is limited by the availability of protein characterization data and the cost of synthetic DNA. With new tools for reading and writing DNA, there are opportunities for scalable assays that more efficiently and cost effectively mine for biochemical protein characteristics. To that end, we have developed the Multiplex Library Synthesis and Expression Correction (MuLSEC) method for rapid assembly, error correction, and expression characterization of many genes as a pooled library. This methodology enables gene synthesis from microarray-synthesized oligonucleotide pools with a one-pot technique, eliminating the need for robotic liquid handling. Post assembly, the gene library is subjected to an ampicillin-based quality control selection, which serves as both an error correction step and a selection for proteins that are properly expressed and folded in E. coli. Next generation sequencing of post selection DNA enables quantitative analysis of gene expression characteristics. We demonstrate the feasibility of this approach by building and testing over 90 genes for empirical evidence of soluble expression. This technique reduces the problem of part characterization to multiplex oligonucleotide synthesis and deep sequencing, two technologies under extensive development with projected cost reduction.

  20. Measurement error in epidemiologic studies of air pollution based on land-use regression models.

    Science.gov (United States)

    Basagaña, Xavier; Aguilera, Inmaculada; Rivera, Marcela; Agis, David; Foraster, Maria; Marrugat, Jaume; Elosua, Roberto; Künzli, Nino

    2013-10-15

    Land-use regression (LUR) models are increasingly used to estimate air pollution exposure in epidemiologic studies. These models use air pollution measurements taken at a small set of locations and modeling based on geographical covariates for which data are available at all study participant locations. The process of LUR model development commonly includes a variable selection procedure. When LUR model predictions are used as explanatory variables in a model for a health outcome, measurement error can lead to bias of the regression coefficients and to inflation of their variance. In previous studies dealing with spatial predictions of air pollution, bias was shown to be small while most of the effect of measurement error was on the variance. In this study, we show that in realistic cases where LUR models are applied to health data, bias in health-effect estimates can be substantial. This bias depends on the number of air pollution measurement sites, the number of available predictors for model selection, and the amount of explainable variability in the true exposure. These results should be taken into account when interpreting health effects from studies that used LUR models.
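
    A toy simulation of the mechanism discussed: a predicted exposure that only partly explains the true exposure is substituted into a health model, and the estimated health-effect coefficient deviates from the true one. All numbers are invented and the error structure is a simplification.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta_true = 5000, 0.10

true_exposure = rng.normal(30, 8, size=n)                    # "true" pollutant level
# LUR-like prediction: captures only part of the exposure variability plus noise.
predicted = 0.6 * (true_exposure - 30) + 30 + rng.normal(0, 3, size=n)
health = beta_true * true_exposure + rng.normal(0, 2, size=n)

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    x, y = x - x.mean(), y - y.mean()
    return (x @ y) / (x @ x)

print(f"true beta                        = {beta_true:.3f}")
print(f"beta using true exposure         = {slope(true_exposure, health):.3f}")
print(f"beta using LUR-type predictions  = {slope(predicted, health):.3f}")
```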

  1. A method for multiplex gene synthesis employing error correction based on expression.

    Science.gov (United States)

    Hsiau, Timothy H-C; Sukovich, David; Elms, Phillip; Prince, Robin N; Strittmatter, Tobias; Stritmatter, Tobias; Ruan, Paul; Curry, Bo; Anderson, Paige; Sampson, Jeff; Anderson, J Christopher

    2015-01-01

    Our ability to engineer organisms with new biosynthetic pathways and genetic circuits is limited by the availability of protein characterization data and the cost of synthetic DNA. With new tools for reading and writing DNA, there are opportunities for scalable assays that more efficiently and cost effectively mine for biochemical protein characteristics. To that end, we have developed the Multiplex Library Synthesis and Expression Correction (MuLSEC) method for rapid assembly, error correction, and expression characterization of many genes as a pooled library. This methodology enables gene synthesis from microarray-synthesized oligonucleotide pools with a one-pot technique, eliminating the need for robotic liquid handling. Post assembly, the gene library is subjected to an ampicillin based quality control selection, which serves as both an error correction step and a selection for proteins that are properly expressed and folded in E. coli. Next generation sequencing of post selection DNA enables quantitative analysis of gene expression characteristics. We demonstrate the feasibility of this approach by building and testing over 90 genes for empirical evidence of soluble expression. This technique reduces the problem of part characterization to multiplex oligonucleotide synthesis and deep sequencing, two technologies under extensive development with projected cost reduction.

  2. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi K [Los Alamos National Laboratory; Stepanov, Mikhail G [Los Alamos National Laboratory; Vasic, Bane [SENIOR MEMBER, IEEE

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  3. Optimization-based Analysis and Training of Human Decision Making

    OpenAIRE

    Engelhart, Michael

    2015-01-01

    In the research domain Complex Problem Solving (CPS) in psychology, computer-supported tests are used to analyze complex human decision making and problem solving. The approach is to use computer-based microworlds and to evaluate the performance of participants in such test-scenarios and correlate it to certain characteristics. However, these test-scenarios have usually been defined on a trial-and-error basis, until certain characteristics became apparent. The more complex models ...

  4. Evaluating Example-based Pose Estimation: Experiments on the HumanEva Sets

    NARCIS (Netherlands)

    Poppe, Ronald

    2007-01-01

    We present an example-based approach to pose recovery, using histograms of oriented gradients as image descriptors. Tests on the HumanEva-I and HumanEva-II data sets provide us insight into the strengths and limitations of an example-based approach. We report mean relative 3D errors of approximately

  5. Research on the Mechanism of Human Error in Ship Building

    Institute of Scientific and Technical Information of China (English)

    石小岗; 周宏; 莫一峰

    2014-01-01

    The complexity of the man-machine-environment system in ship building results in a high probability of human error events during construction. How to prevent and reduce human error and improve human reliability has become a main factor in ensuring safe shipbuilding production. This paper studies the characteristics of human error, classifies human error in ship building according to human cognitive behavior, summarizes the factors influencing human error in the shipbuilding process, and, for these factors, puts forward effective measures to prevent human error in ship building.

  6. Lexis in Chinese-English Translation of Drug Package Inserts: Corpus-based Error Analysis and Its Translation Strategies.

    Science.gov (United States)

    Ying, Lin; Yumei, Zhou

    2010-12-01

    Error analysis (EA) has been broadly applied to research on writing, speaking, second language acquisition (SLA) and translation. This study was carried out based on Carl James' error taxonomy to investigate the distribution of lexical errors in Chinese-English (C-E) translation of drug package inserts (DPIs), explore the underlying causes and propose some translation strategies for correction and reduction of lexical errors in DPIs. A translation corpus consisting of 25 DPIs translated from Chinese into English was established. Lexical errors in the corpus and the error causes were analyzed qualitatively and quantitatively. Some examples were used to analyze the lexical errors and their causes, and some strategies for translating vocabulary in DPIs were proposed according to Eugene Nida's translation theory. This study will not only help translators and medical workers reduce errors in C-E translation of vocabulary in DPIs and other types of medical texts but also shed light on the learning and teaching of C-E translation of medical texts.

  7. FPGA Based Efficient Multiplier for Image Processing Applications Using Recursive Error Free Mitchell Log Multiplier and KOM Architecture

    Directory of Open Access Journals (Sweden)

    Satish S Bhairannawar

    2014-06-01

    Full Text Available Digital image processing applications such as medical imaging, satellite imaging and biometric trait images rely on multipliers to improve image quality. However, existing multiplication techniques introduce errors in the output and consume more time; hence error-free high-speed multipliers have to be designed. In this paper we propose an FPGA-based Recursive Error-Free Mitchell Log Multiplier (REFMLM) for image filters. The 2x2 error-free Mitchell log multiplier is designed with zero error by introducing an error correction term, and is used in higher-order Karatsuba-Ofman Multiplier (KOM) architectures. The higher-order KOM multipliers are decomposed into a number of lower-order multipliers using radix 2, down to the basic 2x2 multiplier block, which is designed with the error-free Mitchell log multiplier. The 8x8 REFMLM is tested with a Gaussian filter to remove noise in a fingerprint image. The multiplier is synthesized using the Spartan 3 FPGA family device XC3S1500-5fg320. It is observed that performance parameters such as area utilization, speed, error and PSNR are better for the proposed architecture compared to existing architectures.
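
    For orientation, the classic Mitchell approximation (which the error-free variant above extends with an explicit correction term) multiplies two numbers by adding approximate base-2 logarithms. The sketch below is the plain Mitchell algorithm, not the proposed REFMLM.

```python
def mitchell_multiply(a, b):
    """Classic Mitchell logarithmic multiplication of two positive integers.
    log2(x) is approximated as k + m, where k is the index of the leading one
    bit and m is the fractional mantissa; the product is the antilog of the sum."""
    def approx_log2(x):
        k = x.bit_length() - 1
        return k + (x - (1 << k)) / (1 << k)

    s = approx_log2(a) + approx_log2(b)
    k = int(s)
    return (1 << k) * (1 + (s - k))        # approximate antilog

for a, b in [(13, 9), (100, 37), (255, 255)]:
    approx, exact = mitchell_multiply(a, b), a * b
    print(f"{a}*{b}: exact={exact}, Mitchell≈{approx:.1f}, "
          f"error={100 * (exact - approx) / exact:.2f}%")
```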

  8. A Novel Systematic Error Compensation Algorithm Based on Least Squares Support Vector Regression for Star Sensor Image Centroid Estimation

    Directory of Open Access Journals (Sweden)

    Jingyan Song

    2011-07-01

    Full Text Available The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and truncation error which resulted from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on the least squares support vector regression (LSSVR) with Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10−5 pixels.

  9. On Calibrating the Sensor Errors of a PDR-Based Indoor Localization System

    Directory of Open Access Journals (Sweden)

    Wen-Yuah Shih

    2013-04-01

    Full Text Available Many studies utilize the signal strength of short-range radio systems (such as WiFi, ultrasound and infrared) to build a radio map for indoor localization, by deploying a large number of beacon nodes within a building. The drawback of such an infrastructure-based approach is that the deployment and calibration of the system are costly and labor-intensive. Some prior studies proposed the use of Pedestrian Dead Reckoning (PDR) for indoor localization, which does not require the deployment of beacon nodes. In a PDR system, a small number of sensors are put on the pedestrian. These sensors (such as a G-sensor and gyroscope) are used to estimate the distance and direction that a user travels. The effectiveness of a PDR system lies in its success in accurately estimating the user's moving distance and direction. In this work, we propose a novel waist-mounted PDR that can measure the user's step lengths with a high accuracy. We utilize vertical acceleration of the body to calculate the user's change in height during walking. Based on the Pythagorean Theorem, we can then estimate each step length using this data. Furthermore, we design a map matching algorithm to calibrate the direction errors from the gyro using building floor plans. The results of our experiment show that we can achieve about 98.26% accuracy in estimating the user's walking distance, with an overall location error of about 0.48 m.
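
    A sketch of the geometric relationship mentioned: if the waist drops by h at mid-step and the leg length is l, the Pythagorean theorem gives a half-step of sqrt(2*l*h - h^2). The leg-length value is an invented parameter, and the full method also involves integrating vertical acceleration to obtain h.

```python
from math import sqrt

def step_length_from_height_drop(height_drop_m, leg_length_m=0.9):
    """Estimate step length from the drop in waist height at mid-step.
    With leg length l and height drop h, half the step length d satisfies
    l^2 = (l - h)^2 + d^2, so d = sqrt(2*l*h - h^2). Leg length is illustrative."""
    l, h = leg_length_m, height_drop_m
    return 2.0 * sqrt(2.0 * l * h - h * h)

for h in (0.01, 0.02, 0.04):     # hypothetical waist-height drops in metres
    print(f"height drop {h * 100:.0f} cm -> step length ≈ "
          f"{step_length_from_height_drop(h):.2f} m")
```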

  10. The Measure of Human Error: Direct and Indirect Performance Shaping Factors

    Energy Technology Data Exchange (ETDEWEB)

    Ronald L. Boring; Candice D. Griffith; Jeffrey C. Joe

    2007-08-01

    The goal of performance shaping factors (PSFs) is to provide measures to account for human performance. PSFs fall into two categories—direct and indirect measures of human performance. While some PSFs such as “time to complete a task” are directly measurable, other PSFs, such as “fitness for duty,” can only be measured indirectly through other measures and PSFs, such as through fatigue measures. This paper explores the role of direct and indirect measures in human reliability analysis (HRA) and the implications that measurement theory has on analyses and applications using PSFs. The paper concludes with suggestions for maximizing the reliability and validity of PSFs.

  11. Human error probability quantification using fuzzy methodology in nuclear plants; Aplicacao da metodologia fuzzy na quantificacao da probabilidade de erro humano em instalacoes nucleares

    Energy Technology Data Exchange (ETDEWEB)

    Nascimento, Claudio Souza do

    2010-07-01

    This work obtains Human Error Probability (HEP) estimates for operator actions in response to hypothesized emergency situations at the IEA-R1 research reactor at IPEN. An evaluation of Performance Shaping Factors (PSFs) was also carried out in order to classify them according to their level of influence on the operators' actions and to determine the actual state of these PSFs at the plant. Both the HEP estimation and the PSF evaluation were based on expert judgment, using interviews and questionnaires. The group of specialists was composed of selected IEA-R1 operators. The representation of the specialists' knowledge as linguistic variables and the group evaluation values were obtained through Fuzzy Logic and Fuzzy Set Theory. The HEP values obtained show good agreement with data published in the literature, corroborating the proposed methodology as a good alternative for use in Human Reliability Analysis (HRA). (author)

  12. A physiologically based pharmacokinetic model to predict the pharmacokinetics of highly protein-bound drugs and the impact of errors in plasma protein binding.

    Science.gov (United States)

    Ye, Min; Nagar, Swati; Korzekwa, Ken

    2016-04-01

    Predicting the pharmacokinetics of highly protein-bound drugs is difficult. Also, since historical plasma protein binding data were often collected using unbuffered plasma, the resulting inaccurate binding data could contribute to incorrect predictions. This study uses a generic physiologically based pharmacokinetic (PBPK) model to predict human plasma concentration-time profiles for 22 highly protein-bound drugs. Tissue distribution was estimated from in vitro drug lipophilicity data, plasma protein binding and the blood: plasma ratio. Clearance was predicted with a well-stirred liver model. Underestimated hepatic clearance for acidic and neutral compounds was corrected by an empirical scaling factor. Predicted values (pharmacokinetic parameters, plasma concentration-time profile) were compared with observed data to evaluate the model accuracy. Of the 22 drugs, less than a 2-fold error was obtained for the terminal elimination half-life (t1/2 , 100% of drugs), peak plasma concentration (Cmax , 100%), area under the plasma concentration-time curve (AUC0-t , 95.4%), clearance (CLh , 95.4%), mean residence time (MRT, 95.4%) and steady state volume (Vss , 90.9%). The impact of fup errors on CLh and Vss prediction was evaluated. Errors in fup resulted in proportional errors in clearance prediction for low-clearance compounds, and in Vss prediction for high-volume neutral drugs. For high-volume basic drugs, errors in fup did not propagate to errors in Vss prediction. This is due to the cancellation of errors in the calculations for tissue partitioning of basic drugs. Overall, plasma profiles were well simulated with the present PBPK model. Copyright © 2016 John Wiley & Sons, Ltd.

  13. The role of usability in the evaluation of accidents: human error or design flaw?

    Science.gov (United States)

    Correia, Walter; Soares, Marcelo; Barros, Marina; Campos, Fábio

    2012-01-01

    This article aims to highlight the role of consumer product companies at the heart of, and in the extent of, accidents involving these types of products, and how such undesired events act as an agent influencing consumers' and users' decisions to purchase products of that nature. Through references, interviews and case studies, the article demonstrates how poorly designed products and design errors can influence the usage behavior of users, thus leading to accidents, and can also negatively affect a company's image. The full explanation of these questions aims to raise awareness, through reliable usability planning, among users and consumers in general about the safe use of consumer products, and also to safeguard their rights under a legal system of consumer protection such as the CDC - Code of Consumer Protection.

  14. Electrophysiological correlates of reward prediction error recorded in the human prefrontal cortex

    Science.gov (United States)

    Oya, Hiroyuki; Adolphs, Ralph; Kawasaki, Hiroto; Bechara, Antoine; Damasio, Antonio; Howard, Matthew A.

    2005-01-01

    Lesion and functional imaging studies have shown that the ventromedial prefrontal cortex is critically involved in the avoidance of risky choices. However, detailed descriptions of the mechanisms that underlie the establishment of such behaviors remain elusive, due in part to the spatial and temporal limitations of available research techniques. We investigated this issue by recording directly from prefrontal depth electrodes in a rare neurosurgical patient while he performed the Iowa Gambling Task, and we concurrently measured behavioral, autonomic, and electrophysiological responses. We found a robust alpha-band component of event-related potentials that reflected the mismatch between expected outcomes and actual outcomes in the task, correlating closely with the reward-related error obtained from a reinforcement learning model of the patient's choice behavior. The finding implicates this brain region in the acquisition of choice bias by means of a continuous updating of expectations about reward and punishment. PMID:15928095

  15. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images in patients with lung cancer using a finite element method (FEM). The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-Spline based (Elastix) registrations were performed from reference to FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from R-DVF. The magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using 'in-house-developed' FEM software. A nonlinear regression model was used based on imaging voxel data and the analysis considered clustered voxel data within images. Results: A regression model analysis showed that UE was significantly correlated with registration error, DVF and the product of registration error and DVF respectively with R^2 = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of UE values and R-DVF*R-ERR has been established. The mean registration error (N = 8) was 0.9 mm. 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in

  16. When errors are rewarding

    NARCIS (Netherlands)

    Bruijn, E.R.A. de; Lange, F.P. de; Cramon, D.Y. von; Ullsperger, M.

    2009-01-01

    For social beings like humans, detecting one's own and others' errors is essential for efficient goal-directed behavior. Although one's own errors are always negative events, errors from other persons may be negative or positive depending on the social context. We used neuroimaging to disentangle br

  17. Fundamental building blocks for molecular biowire based forward error-correcting biosensors

    Energy Technology Data Exchange (ETDEWEB)

    Liu Yang [Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824 (United States); Chakrabartty, Shantanu [Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824 (United States); Alocilja, Evangelyn C [Biosystems Engineering, Michigan State University, East Lansing, MI 48824 (United States)

    2007-10-24

    This paper describes the fabrication, characterization and modeling of fundamental logic gates that can be used for designing biosensors with embedded forward error-correction (FEC). The proposed logic gates (AND and OR) are constructed by patterning antibodies at different spatial locations along the substrate of a lateral flow immunosensor assay. The logic gates operate by converting binding events between an antigen and an antibody into a measurable electrical signal using polyaniline nanowires as the transducer. In this study, B. cereus and E. coli have been chosen as model pathogens. The functionality of the AND and OR logic gates has been validated using conductance measurements with different pathogen concentrations. Experimental results show that the change in conductance across the gates can be modeled as a log-linear response with respect to varying pathogen concentration. Equivalent circuits models for AND and OR logic gates have been derived based on measured results.

  18. Early warning system for coffee rust disease based on error correcting output codes: a proposal

    Directory of Open Access Journals (Sweden)

    David Camilo Corrales

    2014-12-01

    Full Text Available Colombian coffee producers have had to face the severe consequences of the coffee rust disease since it was first reported in the country in 1983. Recently, machine learning researchers have tried to predict infection through classifiers such as decision trees, regression, Support Vector Machines (SVM), non-deterministic classifiers and Bayesian networks, but it has been theoretically and empirically demonstrated that combining multiple classifiers can substantially improve the classification performance of the constituent members. An Early Warning System (EWS) for coffee rust disease is therefore proposed, based on Error Correcting Output Codes (ECOC) and SVM, to compute the binary functions of Plant Density, Shadow Level, Soil Acidity, Last Nighttime Rainfall Intensity and Last Days Relative Humidity.
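
    A minimal illustration of combining SVMs under an error-correcting output code scheme, using scikit-learn rather than the authors' implementation; the features and the three risk classes are random placeholders.

```python
import numpy as np
from sklearn.multiclass import OutputCodeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder features standing in for plant density, shade level, soil acidity,
# last nighttime rainfall intensity and relative humidity; 3 infection-risk classes.
X = rng.normal(size=(300, 5))
y = rng.integers(0, 3, size=300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
ecoc = OutputCodeClassifier(SVC(kernel="rbf"), code_size=2.0, random_state=0)
ecoc.fit(X_tr, y_tr)
print("held-out accuracy on random placeholder data:", ecoc.score(X_te, y_te))
```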

  19. Streaming Media over a Color Overlay Based on Forward Error Correction Technique

    Institute of Scientific and Technical Information of China (English)

    张晓瑜; 沈国斌; 李世鹏; 钟玉琢

    2004-01-01

    The number of clients that receive high-quality streaming video from a source is greatly limited by the application requirements, such as the high bandwidth and reliability. In this work, a method was developed to construct a color overlay, which enables clients to receive data across multiple paths, based on the forward error correction technique. The color overlay enlarges system capacity by reducing the bottlenecks and extending the bandwidth, improves reliability against node failure, and is more resilient to fluctuations of network metrics. A light-weight protocol for building the overlay is also presented. Extensive simulations were conducted and the results clearly support the claimed advantages.
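    Forward error correction in its simplest form can be illustrated with a single XOR parity packet that recovers any one packet lost on an overlay path; this toy sketch only conveys the principle and is not the scheme used in the paper.

```python
import numpy as np

def xor_parity(packets):
    """Return a parity packet that can recover any single lost packet."""
    parity = np.zeros_like(packets[0])
    for p in packets:
        parity ^= p
    return parity

def recover(received, parity):
    """Recover the single missing packet (marked as None) from the rest."""
    missing = parity.copy()
    for p in received:
        if p is not None:
            missing ^= p
    return missing

rng = np.random.default_rng(1)
packets = [rng.integers(0, 256, 8, dtype=np.uint8) for _ in range(4)]
parity = xor_parity(packets)

# Simulate the loss of packet 2 on one of the overlay paths.
received = list(packets)
received[2] = None
recovered = recover(received, parity)
print("recovered correctly:", np.array_equal(recovered, packets[2]))
```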

  20. Minimum Error Thresholding Segmentation Algorithm Based on 3D Grayscale Histogram

    Directory of Open Access Journals (Sweden)

    Jin Liu

    2014-01-01

    Full Text Available Threshold segmentation is a very important technique. The existing threshold algorithms do not work efficiently for noisy grayscale images. This paper proposes a novel algorithm called three-dimensional minimum error thresholding (3D-MET), which is used to solve the problem. The proposed approach is implemented by an optimal threshold discriminant based on the relative entropy theory and the 3D histogram. The histogram is comprised of gray distribution information of pixels and relevant information of neighboring pixels in an image. Moreover, a fast recursive method is proposed to reduce the time complexity of 3D-MET from O(L^6) to O(L^3), where L stands for gray levels. Experimental results demonstrate that the proposed approach can provide superior segmentation performance compared to other methods for gray image segmentation.
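    For orientation, the sketch below implements the classical one-dimensional minimum-error (Kittler-Illingworth-style) threshold criterion on a grayscale histogram; the paper's 3D-MET extends this idea to a 3D histogram and adds a fast recursion, which is not reproduced here.

```python
import numpy as np

def minimum_error_threshold(hist):
    """1D minimum-error thresholding: pick the threshold minimizing the fitting
    criterion of two Gaussian-modelled classes. `hist` is a length-L histogram."""
    hist = hist.astype(float) / hist.sum()
    levels = np.arange(len(hist))
    best_t, best_j = None, np.inf
    for t in range(1, len(hist) - 1):
        p0, p1 = hist[:t].sum(), hist[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        m0 = (levels[:t] * hist[:t]).sum() / p0
        m1 = (levels[t:] * hist[t:]).sum() / p1
        v0 = ((levels[:t] - m0) ** 2 * hist[:t]).sum() / p0
        v1 = ((levels[t:] - m1) ** 2 * hist[t:]).sum() / p1
        if v0 <= 0 or v1 <= 0:
            continue
        j = 1 + 2 * (p0 * np.log(np.sqrt(v0)) + p1 * np.log(np.sqrt(v1))) \
              - 2 * (p0 * np.log(p0) + p1 * np.log(p1))
        if j < best_j:
            best_t, best_j = t, j
    return best_t

# Bimodal synthetic histogram (two Gaussian-like modes).
rng = np.random.default_rng(0)
samples = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 5000)])
hist, _ = np.histogram(np.clip(samples, 0, 255), bins=256, range=(0, 256))
print("estimated threshold:", minimum_error_threshold(hist))
```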

  1. Threshold-Based Bit Error Rate for Stopping Iterative Turbo Decoding in a Varying SNR Environment

    Science.gov (United States)

    Mohamad, Roslina; Harun, Harlisya; Mokhtar, Makhfudzah; Adnan, Wan Azizun Wan; Dimyati, Kaharudin

    2017-01-01

    Online bit error rate (BER) estimation (OBE) has been used as a stopping criterion for iterative turbo decoding. However, the stopping criteria only work at high signal-to-noise ratios (SNRs), and fail to terminate early at low SNRs, which contributes additional iterations and an increase in computational complexity. The failure of the stopping criteria is caused by an unsuitable BER threshold, which is obtained by estimating the expected BER performance at high SNRs; this threshold does not indicate the correct termination according to convergence and non-convergence outputs (CNCO). Hence, in this paper, a threshold computation based on the BER of the CNCO is proposed for an OBE stopping criterion (OBEsc). From the results, OBEsc is capable of terminating early in a varying SNR environment. The optimum number of iterations achieved by the OBEsc allows substantial savings in the number of decoding iterations and decreases the delay of iterative turbo decoding.

  2. QFT control based on zero phase error compensation for flight simulator

    Institute of Scientific and Technical Information of China (English)

    Liu Jinkun; He Yuzhu

    2007-01-01

    To improve the robustness of high-precision servo systems, quantitative feedback theory (QFT), which aims to achieve a desired robust design over a specified region of plant uncertainty, is proposed. The robust design problem can be solved using QFT, but it fails to guarantee high-precision tracking. This problem is solved by a robust digital QFT control scheme based on zero phase error (ZPE) feed-forward compensation. This scheme consists of two parts: a QFT controller in the closed-loop system and a ZPE feed-forward compensator. The digital QFT controller is designed to overcome the uncertainties in the system. The digital ZPE feed-forward controller is used to improve the tracking precision. Simulation and real-time examples for a flight simulator servo system indicate that this control scheme can guarantee both high robust performance and high position tracking precision.

  3. Random and bias errors in simple regression-based calculations of sea-level acceleration

    Science.gov (United States)

    Howd, P.; Doran, K. J.; Sallenger, A. H.

    2012-12-01

    We examine the random and bias errors associated with three simple regression-based methods used to calculate the acceleration of sea-level elevation (SL). These methods are: (1) using ordinary least-squares regression (OLSR) to fit a single second-order (in time) equation to an entire elevation time series; (2) using a sliding regression window with OLSR 2nd-order fits to provide time- and window-length-dependent estimates; and (3) using a sliding regression window with OLSR 1st-order fits to provide time- and window-length-dependent estimates of sea-level rate differences (SLRD). A Monte Carlo analysis using synthetic elevation time series with 9 different noise formulations (red, AR(1), and white noise at 3 variance levels) is used to examine the error structure associated with the three analysis methods. We show that, as expected, the single-fit method (1), while providing statistically unbiased estimates of the mean acceleration over an interval, by statistical design does not provide estimates of time-varying acceleration. This technique cannot be expected to detect recent changes in SL acceleration, such as those predicted by some climate models. The two sliding window techniques show similar qualitative results for the test time series, but differ dramatically in their statistical significance. Estimates of acceleration based on the 2nd-order fits (2) are numerically smaller than the rate differences (3), and in the presence of near-equal residual noise, are more difficult to detect with statistical significance. We show, using the SLRD estimates from tide gauge data, how statistically significant changes in sea level accelerations can be detected at different temporal and spatial scales.
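    Method (2), the sliding-window second-order OLS fit, can be sketched in a few lines of NumPy; the synthetic sea-level series and window length below are illustrative only.

```python
import numpy as np

def sliding_acceleration(t, sl, window):
    """Slide a window over the sea-level series and fit a 2nd-order polynomial
    by OLS; the acceleration estimate is twice the quadratic coefficient."""
    half = window // 2
    acc = np.full_like(sl, np.nan, dtype=float)
    for i in range(half, len(sl) - half):
        ti = t[i - half:i + half + 1]
        yi = sl[i - half:i + half + 1]
        coeffs = np.polyfit(ti - ti.mean(), yi, deg=2)  # centred for conditioning
        acc[i] = 2.0 * coeffs[0]                        # d^2(SL)/dt^2
    return acc

# Synthetic monthly sea-level record: trend + weak acceleration + white noise.
rng = np.random.default_rng(3)
t = np.arange(0, 60, 1 / 12)                            # 60 years, monthly
sl = 3.0 * t + 0.01 * t ** 2 + rng.normal(0, 5, t.size)  # mm
acc = sliding_acceleration(t, sl, window=241)            # ~20-year windows
print("median estimated acceleration (mm/yr^2):", np.nanmedian(acc))
```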

  4. Effect of Video-Based versus Personalized Instruction on Errors during Elastic Tubing Exercises for Musculoskeletal Pain

    DEFF Research Database (Denmark)

    Andersen, Kenneth Jay; Schraefel, M. C.; Brandt, M.;

    2014-01-01

    only instruction in four typical neck/shoulder/arm rehabilitation exercises using elastic tubing. At a 2-week follow-up, the participants' technical execution was assessed by two blinded physical therapists using a reliable error assessment tool. The error assessment was based on ordinal deviation.......002). For the remaining three exercises the normalized error score did not differ. In conclusion, when instructing simple exercises to reduce musculoskeletal pain the use of video material is a cost-effective solution that can be implemented easily in corporations with challenging work schedules not allowing for a fixed...

  5. Residual-based a posteriori error estimates of nonconforming finite element method for elliptic problems with Dirac delta source terms

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    Two residual-based a posteriori error estimators of the nonconforming Crouzeix-Raviart element are derived for elliptic problems with Dirac delta source terms. One estimator is shown to be reliable and efficient, yielding global upper and lower bounds for the error in the piecewise W^{1,p} seminorm. The other one is proved to give a global upper bound of the error in the L^p-norm. By taking the two estimators as refinement indicators, adaptive algorithms are suggested, which are experimentally shown to attain optimal convergence orders.

  6. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    Science.gov (United States)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it still remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or the ASR grammar network. However, this approach easily falls into a trade-off between the coverage of errors and the increase in perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, to achieve both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.

  7. Rigid-body point-based registration: The distribution of the target registration error when the fiducial registration errors are given.

    Science.gov (United States)

    Seginer, A

    2011-08-01

    Medical guidance systems often employ several data sources using different coordinate systems. In order to map positions from one coordinate system to the other, these guidance systems usually employ rigid-body point-based registration, using pairs of fiducial points: pairs which describe the same physical positions, but in different coordinate systems. The customary test for the quality of the registration is the fiducial registration error (FRE), which is the root-mean-square of the mismatch between the fiducials in each pair (after the registration). The FRE, however, does not give an answer to the question which is usually of interest, and that is the accuracy at a "target" point which is not part of the set of fiducial points. The statistics of the target registration error (TRE) have been studied before and approximate expressions were derived, but those expressions require as input the unknown true fiducial positions. In the present paper, it is proven that by replacing these unknowable true positions with the known measured positions in the expression for mean-square TRE, a higher order approximation is achieved. In other words, it is shown that more accurate estimates are obtained by using less accurate, but available, inputs. Furthermore, in previous approximations FRE and TRE were shown to be statistically independent, whereas here, due to the higher approximation level, it is shown that a slight dependence exists. Thus, the knowledge of FRE can in fact be employed to improve predictions of the TRE statistics. These results are supported by simulations and hold even for fiducial localization error (FLE) distributions with large standard deviations.
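    The registration step underlying FRE and TRE can be illustrated with a standard SVD-based (Kabsch/Procrustes) rigid-body fit; the fiducial layout, localization-error level and target point below are hypothetical, and the statistical expressions derived in the paper are not reproduced.

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid-body (rotation + translation) registration of paired
    fiducials via SVD (Kabsch/Procrustes). src, dst: (N, 3) arrays."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(0) - R @ src.mean(0)
    return R, t

rng = np.random.default_rng(7)
fid_true = rng.uniform(-50, 50, (6, 3))               # fiducials in frame 1 (mm)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
t_true = np.array([10.0, -5.0, 2.0])
fle = rng.normal(0, 0.5, fid_true.shape)              # fiducial localization error
fid_meas = (R_true @ fid_true.T).T + t_true + fle     # measured in frame 2

R, t = rigid_register(fid_true, fid_meas)

# FRE: RMS mismatch of the fiducial pairs after registration.
fre = np.sqrt(np.mean(np.sum(((R @ fid_true.T).T + t - fid_meas) ** 2, axis=1)))

# TRE at a target point that is not part of the fiducial set.
target = np.array([0.0, 0.0, 80.0])
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(f"FRE = {fre:.3f} mm, TRE at target = {tre:.3f} mm")
```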

  8. Maintenance error reduction strategies in nuclear power plants, using root cause analysis.

    Science.gov (United States)

    Wu, T M; Hwang, S L

    1989-06-01

    This study proposes a conceptual model of maintenance tasks to facilitate the identification of root causes of human errors in carrying out such tasks in nuclear power plants. Based on this model, an external/internal classification scheme was developed to discover the root causes of human errors. As a consequence, certain policies pertaining to human error prevention or correction were proposed.

  9. The Error Is the Clue: Breakdown In Human-Machine Interaction

    Science.gov (United States)

    2006-01-01

    prolonged vowel on line 35 above, utterance 16 in Figure 1. After two unsuccessful attempts to book a train the user tries one more time. At that point she has... “...is because it is sought in fusion,” writes Levinas in his essay “The Other in Proust” [10]. Levinas meant fusion of humans, of views, of perspectives... styles’ or to get their hats and leave. Thus, on one hand, we don’t need to work for fusion between humans and machines by frenetically trying to

  10. Expanding research to provide an evidence base for nutritional interventions for the management of inborn errors of metabolism☆

    OpenAIRE

    Camp, Kathryn M; Lloyd-Puryear, Michele A.; Yao, Lynne; Groft, Stephen C.; Parisi, Melissa A.; Mulberg, Andrew; Gopal-Srivastava, Rashmi; Cederbaum, Stephen; Enns, Gregory M.; Ershow, Abby G.; Frazier, Dianne M.; Gohagan, John; Harding, Cary; Howell, R. Rodney; Regan, Karen

    2013-01-01

    A trans-National Institutes of Health initiative, Nutrition and Dietary Supplement Interventions for Inborn Errors of Metabolism (NDSI-IEM), was launched in 2010 to identify gaps in knowledge regarding the safety and utility of nutritional interventions for the management of inborn errors of metabolism (IEM) that need to be filled with evidence-based research. IEM include inherited biochemical disorders in which specific enzyme defects interfere with the normal metabolism of exogenous (dietar...

  11. Biological bases of human musicality.

    Science.gov (United States)

    Perrone-Capano, Carla; Volpicelli, Floriana; di Porzio, Umberto

    2017-01-20

    Music is a universal language, present in all human societies. It pervades the lives of most human beings and can recall memories and feelings of the past, can exert positive effects on our mood, can be strongly evocative and ignite intense emotions, and can establish or strengthen social bonds. In this review, we summarize the research and recent progress on the origins and neural substrates of human musicality as well as the changes in brain plasticity elicited by listening or performing music. Indeed, music improves performance in a number of cognitive tasks and may have beneficial effects on diseased brains. The emerging picture begins to unravel how and why particular brain circuits are affected by music. Numerous studies show that music affects emotions and mood, as it is strongly associated with the brain's reward system. We can therefore assume that an in-depth study of the relationship between music and the brain may help to shed light on how the mind works and how the emotions arise and may improve the methods of music-based rehabilitation for people with neurological disorders. However, many facets of the mind-music connection still remain to be explored and enlightened.

  12. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement

    Science.gov (United States)

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-10-01

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed on only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit.

  13. Single Epoch GPS Deformation Signals Extraction and Gross Error Detection Technique Based on Wavelet Transform

    Institute of Scientific and Technical Information of China (English)

    WANG Jian; GAO Jingxiang; XU Changhui

    2006-01-01

    Wavelet theory is an adequate and efficient tool for analyzing single-epoch GPS deformation signals. A wavelet analysis technique for gross error detection and recovery is advanced. Criteria for choosing the wavelet function and deciding the number of Mallat decomposition levels are discussed. An effective deformation signal extraction method is proposed, namely a wavelet noise reduction technique that takes gross error recovery into account and combines the results of wavelet multi-resolution gross error detection. Recognition of the time positions of gross errors and their repair are realized. In the experiment, a compactly supported orthogonal wavelet with a short support is more efficient than a longer one when discerning gross errors and yields a finer analysis. The shape of a gross error discerned with the short-support wavelet is also simpler than that obtained with the longer one. Meanwhile, the time scale is easier to identify.
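    A minimal sketch of wavelet-based gross error detection using PyWavelets: the finest-scale detail coefficients are screened against a robust threshold to flag outlier epochs. The wavelet choice, threshold factor and index mapping are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np
import pywt  # PyWavelets

def detect_gross_errors(signal, wavelet="db2", level=3, k=5.0):
    """Flag samples whose finest-scale detail coefficients are outliers.
    A compactly supported wavelet with short support ('db2') is used, in the
    spirit of the abstract's finding for gross-error discrimination."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    d1 = coeffs[-1]                                   # finest detail coefficients
    sigma = np.median(np.abs(d1)) / 0.6745            # robust noise estimate
    outliers = np.where(np.abs(d1) > k * sigma)[0]
    # Each level-1 detail coefficient covers roughly two samples of the signal.
    return np.unique(np.clip(outliers * 2, 0, len(signal) - 1))

# Synthetic single-epoch GPS deformation series with two injected gross errors.
rng = np.random.default_rng(5)
t = np.linspace(0, 10, 1024)
signal = 5.0 * np.sin(0.5 * t) + rng.normal(0, 0.3, t.size)
signal[300] += 8.0
signal[700] -= 6.0
print("flagged sample indices:", detect_gross_errors(signal))
```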

  14. Five-wave-packet quantum error correction based on continuous-variable cluster entanglement.

    Science.gov (United States)

    Hao, Shuhong; Su, Xiaolong; Tian, Caixing; Xie, Changde; Peng, Kunchi

    2015-10-26

    Quantum error correction protects the quantum state against noise and decoherence in quantum communication and quantum computation, which enables one to perform fault-tolerant quantum information processing. We experimentally demonstrate a quantum error correction scheme with a five-wave-packet code against a single stochastic error, the original theoretical model of which was first proposed by S. L. Braunstein and T. A. Walker. Five submodes of a continuous variable cluster entangled state of light are used for five encoding channels. In particular, in our encoding scheme the information of the input state is distributed on only three of the five channels, and thus any error appearing in the remaining two channels never affects the output state, i.e. the output quantum state is immune to errors in those two channels. The stochastic error on a single channel is corrected for both vacuum and squeezed input states, and the achieved fidelities of the output states are beyond the corresponding classical limit.

  15. Mini-corpus Based Analysis of Errors in Higher Vocational College Students ’Writing

    Institute of Scientific and Technical Information of China (English)

    查静

    2014-01-01

    Errors are of significance to language learners in that they are an unavoidable and necessary part of learning. We collect 120 HVC students' in-class compositions. Writing errors are identified, marked and annotated in line with the error tagging system used by Gui in CLEC. A mini-corpus is created and tokens are counted and analyzed with SPSS. A factor analysis, together with a follow-up interview, is conducted to determine whether common factors can account for certain types of errors.

  16. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  17. Surface errors without semantic impairment in acquired dyslexia: a voxel-based lesion-symptom mapping study.

    Science.gov (United States)

    Binder, Jeffrey R; Pillay, Sara B; Humphries, Colin J; Gross, William L; Graves, William W; Book, Diane S

    2016-05-01

    Patients with surface dyslexia have disproportionate difficulty pronouncing irregularly spelled words (e.g. pint), suggesting impaired use of lexical-semantic information to mediate phonological retrieval. Patients with this deficit also make characteristic 'regularization' errors, in which an irregularly spelled word is mispronounced by incorrect application of regular spelling-sound correspondences (e.g. reading plaid as 'played'), indicating over-reliance on sublexical grapheme-phoneme correspondences. We examined the neuroanatomical correlates of this specific error type in 45 patients with left hemisphere chronic stroke. Voxel-based lesion-symptom mapping showed a strong positive relationship between the rate of regularization errors and damage to the posterior half of the left middle temporal gyrus. Semantic deficits on tests of single-word comprehension were generally mild, and these deficits were not correlated with the rate of regularization errors. Furthermore, the deep occipital-temporal white matter locus associated with these mild semantic deficits was distinct from the lesion site associated with regularization errors. Thus, in contrast to patients with surface dyslexia and semantic impairment from anterior temporal lobe degeneration, surface errors in our patients were not related to a semantic deficit. We propose that these patients have an inability to link intact semantic representations with phonological representations. The data provide novel evidence for a post-semantic mechanism mediating the production of surface errors, and suggest that the posterior middle temporal gyrus may compute an intermediate representation linking semantics with phonology.

  18. A robust static decoupling algorithm for 3-axis force sensors based on coupling error model and ε-SVR.

    Science.gov (United States)

    Ma, Junqing; Song, Aiguo; Xiao, Jing

    2012-10-29

    Coupling errors are major threats to the accuracy of 3-axis force sensors. Design of decoupling algorithms is a challenging topic due to the uncertainty of coupling errors. The conventional nonlinear decoupling algorithms by a standard Neural Network (NN) are sometimes unstable due to overfitting. In order to avoid overfitting and minimize the negative effect of random noises and gross errors in calibration data, we propose a novel nonlinear static decoupling algorithm based on the establishment of a coupling error model. Instead of regarding the whole system as a black box in conventional algorithm, the coupling error model is designed by the principle of coupling errors, in which the nonlinear relationships between forces and coupling errors in each dimension are calculated separately. Six separate Support Vector Regressions (SVRs) are employed for their ability to perform adaptive, nonlinear data fitting. The decoupling performance of the proposed algorithm is compared with the conventional method by utilizing obtained data from the static calibration experiment of a 3-axis force sensor. Experimental results show that the proposed decoupling algorithm gives more robust performance with high efficiency and decoupling accuracy, and can thus be potentially applied to the decoupling application of 3-axis force sensors.
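    A reduced sketch of the idea using scikit-learn's epsilon-SVR: one regressor per axis learns the coupling error from simulated calibration data, and the prediction is subtracted from the raw readings. The coupling model and constants below are invented for illustration; the paper itself uses six SVRs over a dedicated coupling-error model.

```python
import numpy as np
from sklearn.svm import SVR

# Simulated calibration data for a 3-axis force sensor: the raw reading on each
# axis is the true force plus a nonlinear coupling from the other axes.
rng = np.random.default_rng(0)
F = rng.uniform(-50, 50, (400, 3))                        # applied forces (N)
coupling = np.column_stack([0.05 * F[:, 1] ** 2 / 50,
                            0.04 * F[:, 0] * F[:, 2] / 50,
                            0.06 * np.abs(F[:, 1])])
raw = F + coupling + rng.normal(0, 0.2, F.shape)          # sensor outputs

# One epsilon-SVR per output axis learns the coupling error from the raw readings.
models = []
for axis in range(3):
    err = raw[:, axis] - F[:, axis]                       # coupling error target
    models.append(SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(raw, err))

decoupled = raw - np.column_stack([m.predict(raw) for m in models])
rmse_before = np.sqrt(np.mean((raw - F) ** 2))
rmse_after = np.sqrt(np.mean((decoupled - F) ** 2))
print(f"RMSE before: {rmse_before:.3f} N, after decoupling: {rmse_after:.3f} N")
```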

  19. Evaluation of roundness error using a new method based on a small displacement screw

    Science.gov (United States)

    Nouira, Hichem; Bourdet, Pierre

    2014-04-01

    In relation to industrial need and the progress of technology, LNE would like to improve the measurement of its primary pressure, spherical and flick standards. The spherical and flick standards are respectively used to calibrate the spindle motion error and the probe which equips commercial conventional cylindricity measuring machines. The primary pressure standards are obtained using pressure balances equipped with rotary pistons with an uncertainty of 5 nm for a piston diameter of 10 mm. Conventional machines are not able to reach such an uncertainty level. That is why the development of a new machine is necessary. To ensure such a level of uncertainty, stability and performance of the machine alone are not sufficient, and the data processing should also be done with an accuracy of less than a nanometre. In this paper, a new method based on the small displacement screw (SDS) model is proposed. A first validation of this method is proposed on a theoretical dataset published by the European Community Bureau of Reference (BCR) in report no 3327. Then, an experiment is prepared in order to validate the new method on real datasets. Specific environment conditions are taken into account and many precautions are considered. The new method is applied to analyse the least-squares circle, minimum zone circle, maximum inscribed circle and minimum circumscribed circle. The results are compared to those obtained by the reference Chebyshev best-fit method and reveal perfect agreement. The sensitivities of the SDS and Chebyshev methodologies are investigated, and it is revealed that results remain unchanged when the value of the diameter exceeds 700 times the form error.
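    The least-squares circle referenced above can be computed with a simple algebraic fit; the sketch below does so for a synthetic roundness profile and reports the resulting peak-to-valley form error. It is not the SDS method itself, only the reference LSC element.

```python
import numpy as np

def least_squares_circle(x, y):
    """Algebraic least-squares circle fit: solve for centre (a, b) and radius r
    from x^2 + y^2 = 2ax + 2by + (r^2 - a^2 - b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x ** 2 + y ** 2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    r = np.sqrt(c + a ** 2 + b ** 2)
    return a, b, r

# Synthetic roundness profile: nominal circle plus a small lobed form error.
rng = np.random.default_rng(2)
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
form_error = 0.0005 * np.sin(5 * theta)                  # 0.5 micrometre lobing
radius = 5.0 + form_error + rng.normal(0, 5e-5, theta.size)
x, y = radius * np.cos(theta), radius * np.sin(theta)

a, b, r = least_squares_circle(x, y)
residuals = np.hypot(x - a, y - b) - r
print(f"LSC roundness (peak-to-valley): {residuals.max() - residuals.min():.6f} mm")
```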

  20. Correlated measurement error hampers association network inference.

    Science.gov (United States)

    Kaduk, Mateusz; Hoefsloot, Huub C J; Vis, Daniel J; Reijmers, Theo; van der Greef, Jan; Smilde, Age K; Hendriks, Margriet M W B

    2014-09-01

    Modern chromatography-based metabolomics measurements generate large amounts of data in the form of abundances of metabolites. An increasingly popular way of representing and analyzing such data is by means of association networks. Ideally, such a network can be interpreted in terms of the underlying biology. A property of chromatography-based metabolomics data is that the measurement error structure is complex: apart from the usual (random) instrumental error there is also correlated measurement error. This is intrinsic to the way the samples are prepared and the analyses are performed and cannot be avoided. The impact of correlated measurement errors on (partial) correlation networks can be large and is not always predictable. The interplay between relative amounts of uncorrelated measurement error, correlated measurement error and biological variation defines this impact. Using chromatography-based time-resolved lipidomics data obtained from a human intervention study we show how partial correlation based association networks are influenced by correlated measurement error. We show how the effect of correlated measurement error on partial correlations is different for direct and indirect associations. For direct associations the correlated measurement error usually has no negative effect on the results, while for indirect associations, depending on the relative size of the correlated measurement error, results can become unreliable. The aim of this paper is to generate awareness of the existence of correlated measurement errors and their influence on association networks. Time series lipidomics data is used for this purpose, as it makes it possible to visually distinguish the correlated measurement error from a biological response. Underestimating the phenomenon of correlated measurement error will result in the suggestion of biologically meaningful results that in reality rest solely on complicated error structures. Using proper experimental designs that allow
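    The effect can be reproduced in a small simulation: adding a shared (correlated) measurement error to three metabolite abundances inflates the partial correlation of an indirect association that should be near zero. The generative model below is purely illustrative.

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given z (all 1-D arrays)."""
    rx = x - np.polyval(np.polyfit(z, x, 1), z)
    ry = y - np.polyval(np.polyfit(z, y, 1), z)
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(11)
n = 500
# Biological signal: metabolite A drives B, B drives C (an indirect A-C link).
a = rng.normal(0, 1, n)
b = 0.8 * a + rng.normal(0, 0.5, n)
c = 0.8 * b + rng.normal(0, 0.5, n)

# Correlated measurement error: a shared per-sample offset (e.g., sample prep).
shared = rng.normal(0, 0.7, n)
a_m, b_m, c_m = a + shared, b + shared, c + shared

print("partial corr A-C | B, no corr. error:", round(partial_corr(a, c, b), 3))
print("partial corr A-C | B, corr. error   :", round(partial_corr(a_m, c_m, b_m), 3))
```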

  1. A stale challenge to the philosophy of science: commentary on "Is psychology based on a methodological error?" by Michael Schwarz.

    Science.gov (United States)

    Ruck, Nora; Slunecko, Thomas

    2010-06-01

    In his article "Is psychology based on a methodological error?", drawing on a quite convincing empirical basis, Michael Schwarz offers a methodological critique of one of mainstream psychology's key test-theoretical axioms, i.e., that of the in-principle normal distribution of personality variables. It is characteristic of this paper--and at first seems to be a strength of it--that the author positions his critique within a frame of philosophy of science, particularly positioning himself in the tradition of Karl Popper's critical rationalism. When scrutinizing Schwarz's arguments, however, we find his critique profound only as an immanent critique of test-theoretical axioms. We raise doubts, however, as to Schwarz's alleged 'challenge' to the philosophy of science, because the author seems not at all to be in touch with the state of the art of contemporary philosophy of science. Above all, we question the universalist undercurrent that Schwarz's 'bio-psycho-social model' of human judgment boils down to. In contrast to such a position, we close our commentary with a plea for a context- and culture-sensitive philosophy of science.

  2. Common errors in textbook descriptions of muscle fiber size in nontrained humans.

    Science.gov (United States)

    Chalmers, Gordon R; Row, Brandi S

    2011-09-01

    Exercise science and human anatomy and physiology textbooks commonly report that type IIB muscle fibers have the largest cross-sectional area of the three fiber types. These descriptions of muscle fiber sizes do not match with the research literature examining muscle fibers in young adult nontrained humans. For men, most commonly type IIA fibers were significantly larger than other fiber types (six out of 10 cases across six different muscles). For women, either type I, or both I and IIA muscle fibers were usually significantly the largest (five out of six cases across four different muscles). In none of these reports were type IIB fibers significantly larger than both other fiber types. In 27 studies that did not include statistical comparisons of mean fiber sizes across fiber types, in no cases were type IIB or fast glycolytic fibers larger than both type I and IIA, or slow oxidative and fast oxidative glycolytic fibers. The likely reason for mistakes in textbook descriptions of human muscle fiber sizes is that animal data were presented without being labeled as such, and without any warning that there are interspecies differences in muscle fiber properties. Correct knowledge of muscle fiber sizes may facilitate interpreting training and aging adaptations.

  3. Spectral demixing avoids registration errors and reduces noise in multicolor localization-based super-resolution microscopy

    Science.gov (United States)

    Lampe, André; Tadeus, Georgi; Schmoranzer, Jan

    2015-09-01

    Multicolor single molecule localization-based super-resolution microscopy (SMLM) approaches are challenged by channel crosstalk and errors in multi-channel registration. We recently introduced a spectral demixing-based variant of direct stochastic optical reconstruction microscopy (SD-dSTORM) to perform multicolor SMLM with minimal color crosstalk. Here, we demonstrate that the spectral demixing procedure is inherently free of errors in multicolor registration and therefore does not require multicolor channel alignment. Furthermore, spectral demixing significantly reduces single molecule noise and is applicable to astigmatism-based 3D multicolor imaging achieving 25 nm lateral and 66 nm axial resolution on cellular nanostructures.

  4. A convenient look-up-table based method for the compensation of non-linear error in digital fringe projection

    Directory of Open Access Journals (Sweden)

    Chen Xiong

    2016-01-01

    Full Text Available Although the structured light system that uses digital fringe projection has been widely implemented in three-dimensional surface profile measurement, the measurement system is susceptible to non-linear error. In this work, we propose a convenient look-up-table-based (LUT-based) method to compensate for the non-linear error in captured fringe patterns. Without extra calibration, this LUT-based method completely utilizes the captured fringe pattern by recording the full-field differences. Then, a phase compensation map is established to revise the measured phase. Experimental results demonstrate that this method works effectively.

  5. Drug errors and related interventions reported by United States clinical pharmacists: the American College of Clinical Pharmacy practice-based research network medication error detection, amelioration and prevention study.

    Science.gov (United States)

    Kuo, Grace M; Touchette, Daniel R; Marinac, Jacqueline S

    2013-03-01

    To describe and evaluate drug errors and related clinical pharmacist interventions. Cross-sectional observational study with an online data collection form. American College of Clinical Pharmacy practice-based research network (ACCP PBRN). A total of 62 clinical pharmacists from the ACCP PBRN who provided direct patient care in the inpatient and outpatient practice settings. Clinical pharmacist participants identified drug errors in their usual practices and submitted online error reports over a period of 14 consecutive days during 2010. The 62 clinical pharmacists submitted 924 reports; of these, 779 reports from 53 clinical pharmacists had complete data. Drug errors occurred in both the inpatient (61%) and outpatient (39%) settings. Therapeutic categories most frequently associated with drug errors were systemic antiinfective (25%), hematologic (21%), and cardiovascular (19%) drugs. Approximately 95% of drug errors did not result in patient harm; however, 33 drug errors resulted in treatment or medical intervention, 6 resulted in hospitalization, 2 required treatment to sustain life, and 1 resulted in death. The types of drug errors were categorized as prescribing (53%), administering (13%), monitoring (13%), dispensing (10%), documenting (7%), and miscellaneous (4%). Clinical pharmacist interventions included communication (54%), drug changes (35%), and monitoring (9%). Approximately 89% of clinical pharmacist recommendations were accepted by the prescribers: 5% with drug therapy modifications, 28% due to clinical pharmacist prescriptive authority, and 56% without drug therapy modifications. This study provides insight into the role clinical pharmacists play with regard to drug error interventions using a national practice-based research network. Most drug errors reported by clinical pharmacists in the United States did not result in patient harm; however, severe harm and death due to drug errors were reported. Drug error types, therapeutic categories, and

  6. Profiles in patient safety: when an error occurs.

    Science.gov (United States)

    Hobgood, Cherri; Hevia, Armando; Hinchey, Paul

    2004-07-01

    Medical error is now clearly established as one of the most significant problems facing the American health care system. Anecdotal evidence, studies of human cognition, and analysis of high-reliability organizations all predict that despite excellent training, human error is unavoidable. When an error occurs and is recognized, providers have a duty to disclose the error. Yet disclosure of error to patients, families, and hospital colleagues is a difficult and/or threatening process for most physicians. A more thorough understanding of the ethical and social contract between physicians and their patients, as well as the professional milieu surrounding an error, may improve the likelihood of its disclosure. Key among these is the identification of institutional factors that support disclosure and recognize error as an unavoidable part of the practice of medicine. Using a case-based format, this article focuses on the communication of error with patients, families, and colleagues and grounds error disclosure in the cultural milieu of medical ethics.

  7. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving the accuracy of a machine tool by increasing the precision of its key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the objective function for machine tool accuracy optimization is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method reasonably and appropriately relaxes the range of tolerance values, thereby reducing the manufacturing cost of machine tools.

  8. Optimal Threshold-Based Multi-Trial Error/Erasure Decoding with the Guruswami-Sudan Algorithm

    CERN Document Server

    Senger, Christian; Bossert, Martin; Zyablov, Victor V

    2011-01-01

    Traditionally, multi-trial error/erasure decoding of Reed-Solomon (RS) codes is based on Bounded Minimum Distance (BMD) decoders with an erasure option. Such decoders have error/erasure tradeoff factor L=2, which means that an error is twice as expensive as an erasure in terms of the code's minimum distance. The Guruswami-Sudan (GS) list decoder can be considered as state of the art in algebraic decoding of RS codes. Besides an erasure option, it allows to adjust L to values in the range 1=1 times. We show that BMD decoders with z_BMD decoding trials can result in lower residual codeword error probability than GS decoders with z_GS trials, if z_BMD is only slightly larger than z_GS. This is of practical interest since BMD decoders generally have lower computational complexity than GS decoders.

  9. A Developmental Model of Reading Acquisition Based upon Early Scaffolding Errors and Subsequent Vowel Inferences

    Science.gov (United States)

    Savage, Robert; Stuart, Morag

    2006-01-01

    This paper investigates the processes that predict reading acquisition. Associations between (a) scaffolding errors (e.g., "torn" misread as "town" or "tarn"), other reading errors, and later reading and (b) vowel and rime inferences and later reading were explored. To assess both of these issues, 50 6-year-old children were shown a number of CVC…

  10. An international collaborative family-based whole genome quantitative trait linkage scan for myopic refractive error

    DEFF Research Database (Denmark)

    Abbott, Diana; Li, Yi-Ju; Guggenheim, Jeremy A;

    2012-01-01

    To investigate quantitative trait loci linked to refractive error, we performed a genome-wide quantitative trait linkage analysis using single nucleotide polymorphism markers and family data from five international sites.

  11. Motor-Based Treatment with and without Ultrasound Feedback for Residual Speech-Sound Errors

    Science.gov (United States)

    Preston, Jonathan L.; Leece, Megan C.; Maas, Edwin

    2017-01-01

    Background: There is a need to develop effective interventions and to compare the efficacy of different interventions for children with residual speech-sound errors (RSSEs). Rhotics (the r-family of sounds) are frequently in error in American English-speaking children with RSSEs and are commonly targeted in treatment. One treatment approach involves…

  12. [Responsibility due to medication errors in France: a study based on SHAM insurance data].

    Science.gov (United States)

    Theissen, A; Orban, J-C; Fuz, F; Guerin, J-P; Flavin, P; Albertini, S; Maricic, S; Saquet, D; Niccolai, P

    2015-03-01

    Safe medication practice in hospitals constitutes a major public health issue. The drug supply chain is a complex process and a potential source of errors and harm for the patient. SHAM is the biggest French provider of medical liability insurance and a relevant source of data on health care complications. The main objective of the study was to analyze the type and cause of medication errors declared to SHAM that led to a conviction by a court. We did a retrospective study of insurance claims provided by SHAM involving a medication error and leading to a condemnation over a 6-year period (between 2005 and 2010). Thirty-one cases were analysed, 21 for scheduled activity and 10 for emergency activity. Consequences of the claims were mostly serious (12 deaths, 14 serious complications, 5 simple complications). The types of medication errors were a drug monitoring error (11 cases), an administration error (5 cases), an overdose (6 cases), an allergy (4 cases), a contraindication (3 cases) and an omission (2 cases). The intravenous route of administration was involved in 19 of 31 cases (61%). The causes identified by the court expert were an error related to service organization (11), an error related to medical practice (11) or nursing practice (13). Only one claim was due to the hospital pharmacy. Claims related to the drug supply chain are infrequent but potentially serious. These data should help strengthen the quality approach in risk management. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  13. Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data

    Directory of Open Access Journals (Sweden)

    Jinhua Han

    2017-01-01

    Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, used to collect the road profiles experienced by a vehicle riding on unstructured roads. A method of correcting road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse impacts of vehicle vibration and attitude changes. Because the power spectral density (PSD) of the gyro attitudes concentrates in the low frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high frequency part and a low frequency part. The vibration error of the road profiles is corrected using displacement data obtained through double integration of the measured acceleration data. After building the mathematical model between the gyro attitudes and the road profiles, the gyro attitude signals are separated from the low frequency road profile by a sliding-block overlap method based on correlation analysis. The accuracy and limitations of the system have been analyzed, and its validity has been verified by implementing the system on wheeled equipment for road profile measurement at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
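    The frequency-division step can be sketched with a zero-phase Butterworth low-pass filter that splits a measured profile into a low-frequency (attitude-dominated) part and a high-frequency remainder; the cutoff and synthetic profile below are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def frequency_division(profile, fs, cutoff_hz=0.1, order=4):
    """Split a measured profile into low- and high-frequency parts: slow
    attitude effects concentrate in the low band, vibration in the high band."""
    b, a = butter(order, cutoff_hz / (fs / 2), btype="low")
    low = filtfilt(b, a, profile)       # zero-phase low-pass
    high = profile - low
    return low, high

# Synthetic 'measured' profile: true road + slow attitude drift + vibration.
rng = np.random.default_rng(4)
fs = 100.0                              # samples per metre
x = np.arange(0, 200, 1 / fs)
road = 0.02 * np.sin(2 * np.pi * 0.8 * x)
attitude_drift = 0.05 * np.sin(2 * np.pi * 0.02 * x)
vibration = 0.01 * rng.normal(size=x.size)
measured = road + attitude_drift + vibration

low, high = frequency_division(measured, fs)
print("low-band RMS (attitude-dominated):", round(np.std(low), 4))
print("high-band RMS (vibration + road) :", round(np.std(high), 4))
```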

  14. Error analysis of marker-based object localization using a single-plane XRII

    Energy Technology Data Exchange (ETDEWEB)

    Habets, Damiaan F.; Pollmann, Steven I.; Yuan, Xunhua; Peters, Terry M.; Holdsworth, David W. [Imaging Research Laboratories, Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, 100 Perth Drive, London, Ontario N6A 5K8 (Canada) and Department of Medical Biophysics, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario N6A 5C1 (Canada); Imaging Research Laboratories, Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, 100 Perth Drive, London, Ontario N6A 5K8 (Canada); Imaging Research Laboratories, Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, 100 Perth Drive, London, Ontario N6A 5K8 (Canada) and Department of Medical Imaging, Department of Medical Biophysics, and Department of Biomedical Engineering, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario N6A 5C1 (Canada); Imaging Research Laboratories, Robarts Research Institute, Schulich School of Medicine and Dentistry, University of Western Ontario, 100 Perth Drive, London, Ontario N6A 5K8 (Canada) and Department of Surgery, and Department of Medical Biophysics, Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario N6A 5C1 (Canada)

    2009-01-15

    The role of imaging and image guidance is increasing in surgery and therapy, including treatment planning and follow-up. Fluoroscopy is used for two-dimensional (2D) guidance or localization; however, many procedures would benefit from three-dimensional (3D) guidance or localization. Three-dimensional computed tomography (CT) using a C-arm mounted x-ray image intensifier (XRII) can provide high-quality 3D images; however, patient dose and the required acquisition time restrict the number of 3D images that can be obtained. C-arm based 3D CT is therefore limited in applications for x-ray based image guidance or dynamic evaluations. 2D-3D model-based registration, using a single-plane 2D digital radiographic system, does allow for rapid 3D localization. It is our goal to investigate - over a clinically practical range - the impact of x-ray exposure on the resulting range of 3D localization precision. In this paper it is assumed that the tracked instrument incorporates a rigidly attached 3D object with a known configuration of markers. A 2D image is obtained by a digital fluoroscopic x-ray system and corrected for XRII distortions (±0.035 mm) and mechanical C-arm shift (±0.080 mm). A least-square projection-Procrustes analysis is then used to calculate the 3D position using the measured 2D marker locations. The effect of x-ray exposure on the precision of 2D marker localization and on 3D object localization was investigated using numerical simulations and x-ray experiments. The results show a nearly linear relationship between 2D marker localization precision and the 3D localization precision. However, a significant amplification of error, nonuniformly distributed among the three major axes, occurs, and that is demonstrated. To obtain a 3D localization error of less than ±1.0 mm for an object with 20 mm marker spacing, the 2D localization precision must be better than ±0.07 mm. This requirement was met for all investigated nominal x-ray exposures at 28 cm FOV.

  15. Error analysis of marker-based object localization using a single-plane XRII.

    Science.gov (United States)

    Habets, Damiaan F; Pollmann, Steven I; Yuan, Xunhua; Peters, Terry M; Holdsworth, David W

    2009-01-01

    The role of imaging and image guidance is increasing in surgery and therapy, including treatment planning and follow-up. Fluoroscopy is used for two-dimensional (2D) guidance or localization; however, many procedures would benefit from three-dimensional (3D) guidance or localization. Three-dimensional computed tomography (CT) using a C-arm mounted x-ray image intensifier (XRII) can provide high-quality 3D images; however, patient dose and the required acquisition time restrict the number of 3D images that can be obtained. C-arm based 3D CT is therefore limited in applications for x-ray based image guidance or dynamic evaluations. 2D-3D model-based registration, using a single-plane 2D digital radiographic system, does allow for rapid 3D localization. It is our goal to investigate-over a clinically practical range-the impact of x-ray exposure on the resulting range of 3D localization precision. In this paper it is assumed that the tracked instrument incorporates a rigidly attached 3D object with a known configuration of markers. A 2D image is obtained by a digital fluoroscopic x-ray system and corrected for XRII distortions (+/- 0.035 mm) and mechanical C-arm shift (+/- 0.080 mm). A least-square projection-Procrustes analysis is then used to calculate the 3D position using the measured 2D marker locations. The effect of x-ray exposure on the precision of 2D marker localization and on 3D object localization was investigated using numerical simulations and x-ray experiments. The results show a nearly linear relationship between 2D marker localization precision and the 3D localization precision. However, a significant amplification of error, nonuniformly distributed among the three major axes, occurs, and that is demonstrated. To obtain a 3D localization error of less than +/- 1.0 mm for an object with 20 mm marker spacing, the 2D localization precision must be better than +/- 0.07 mm. This requirement was met for all investigated nominal x-ray exposures at 28 cm FOV

  16. Identification of chromosomal errors in human preimplantation embryos with oligonucleotide DNA microarray.

    Directory of Open Access Journals (Sweden)

    Lifeng Liang

    Full Text Available A previous study comparing the performance of different platforms for DNA microarray found that the oligonucleotide (oligo) microarray platform containing 385K isothermal probes had the best performance when evaluating dosage sensitivity, precision, specificity, sensitivity and copy number variations border definition. Although the oligo microarray platform has been used in some research fields and clinics, it has not been used for aneuploidy screening in human embryos. The present study was designed to use this new microarray platform for preimplantation genetic screening in the human. A total of 383 blastocysts from 72 infertility patients with either advanced maternal age or with previous miscarriage were analyzed after biopsy and microarray. Euploid blastocysts were transferred to patients and clinical pregnancy and implantation rates were measured. Chromosomes in some aneuploid blastocysts were further analyzed by fluorescence in-situ hybridization (FISH) to evaluate accuracy of the results. We found that most (58.1%) of the blastocysts had chromosomal abnormalities that included single or multiple gains and/or losses of chromosome(s), partial chromosome deletions and/or duplications in both euploid and aneuploid embryos. Transfer of normal euploid blastocysts in 34 cycles resulted in 58.8% clinical pregnancy and 54.4% implantation rates. Examination of abnormal blastocysts by FISH showed that all embryos had matching results comparing microarray and FISH analysis. The present study indicates that oligo microarray conducted with a higher resolution and a greater number of probes is able to detect not only aneuploidy, but also minor chromosomal abnormalities, such as partial chromosome deletion and/or duplication in human embryos. Preimplantation genetic screening of the aneuploidy by DNA microarray is an advanced technology used to select embryos for transfer and improved embryo implantation can be obtained after transfer of the screened normal

  17. A non-orthogonal SVD-based decomposition for phase invariant error-related potential estimation.

    Science.gov (United States)

    Phlypo, Ronald; Jrad, Nisrine; Rousseau, Sandra; Congedo, Marco

    2011-01-01

    The estimation of the Error Related Potential from a set of trials is a challenging problem. Indeed, the Error Related Potential is of low amplitude compared to the ongoing electroencephalographic activity. In addition, simple summing over the different trials is prone to errors, since the waveform does not appear at an exact latency with respect to the trigger. In this work, we propose a method to cope with the discrepancy of these latencies of the Error Related Potential waveform and offer a framework in which the estimation of the Error Related Potential waveform reduces to a simple Singular Value Decomposition of an analytic waveform representation of the observed signal. The followed approach is promising, since we are able to explain a higher portion of the variance of the observed signal with fewer components in the expansion.
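    A minimal sketch of the idea: form the analytic (Hilbert) representation of the trials and take the leading right singular vector of the resulting complex matrix as the waveform estimate, which tolerates small latency jitter across trials. The simulated ERP shape, jitter and noise levels below are illustrative.

```python
import numpy as np
from scipy.signal import hilbert

rng = np.random.default_rng(6)
fs, n_trials, n_samples = 250, 60, 250
t = np.arange(n_samples) / fs

# Simulated error-related potential with trial-to-trial latency jitter, in noise.
trials = np.empty((n_trials, n_samples))
for i in range(n_trials):
    jitter = rng.normal(0, 0.02)                          # ~20 ms latency jitter
    erp = -3.0 * np.exp(-((t - 0.25 - jitter) ** 2) / (2 * 0.03 ** 2))
    trials[i] = erp + rng.normal(0, 2.0, n_samples)

# Analytic (complex) representation; the leading right singular vector of the
# trial matrix serves as a latency-tolerant waveform estimate.
analytic = hilbert(trials, axis=1)
U, s, Vh = np.linalg.svd(analytic, full_matrices=False)
waveform_estimate = np.real(Vh[0]) * s[0] / np.sqrt(n_trials)

print(f"explained variance of first component: {s[0] ** 2 / np.sum(s ** 2):.2f}")
print("estimated peak latency (s):", t[np.argmax(np.abs(waveform_estimate))])
```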

  18. Detection of prescription errors by a unit-based clinical pharmacist in a nephrology ward.

    Science.gov (United States)

    Vessal, Ghazal

    2010-02-01

    To determine the impact of a clinical pharmacist on detection and prevention of prescription errors at the nephrology ward of a referral hospital. Nephrology ward of a major referral hospital in Southern Iran. During a 4-month period, a clinical pharmacist was assigned to review medication order sheets and drug orders three times a week at the nephrology ward. Besides chart review, the clinical pharmacist participated in medical rounds once a week. The occurrence of prescribing errors, and related harm, was determined in patients hospitalized in this ward during the 4-month period. When an error was detected, an intervention was made after agreement of the attending physician. Number and types of prescribing errors, level of harm, and number of interventions were determined. Seventy-six patient charts were reviewed during the 4-month period. A total of 818 medications were ordered for these patients. Eighty-six prescribing errors were detected in 46 hospital admissions. The mean age of the patients was 47.7 +/- 17.2. Fifty-five percent were male and 45% were female. The different types of prescribing errors and their frequencies were as follows: wrong frequency (37.2%), wrong drug selection (19.8%), overdose (12.8%), failure to discontinue (10.5%), failure to order (7%), under-dose (3.5%), wrong time (3.5%), monitoring (3.5%), wrong route (1.2%), and drug interaction (1.2%). The attending physician agreed to 96.5% of the prescription errors detected, and interventions were made. Although 89.5% of the detected errors caused no harm, 4 (4.7%) of the errors increased the need for monitoring, 2 (2.3%) increased length of stay, and 2 (2.3%) led to permanent patient harm. Presence of a clinical pharmacist at the nephrology ward helps in early detection of prescription errors, and therefore potential prevention of negative consequences due to drug administration.

  19. Study on the Human Error Mechanism in Ship Accidents

    Institute of Scientific and Technical Information of China (English)

    彭陈; 张圆圆

    2015-01-01

    Due to the complexity of the man-machine-environment system in ship accidents, the probability of human error is high; therefore, reducing human error is important for the prevention of ship accidents. This essay analyzes the causes of human error, constructs a human error model and a mathematical model of human reliability in ship accidents, and gives an outlook on future directions for the study of human error in ship accidents.

  20. [Genetic Bases of Human Comorbidity].

    Science.gov (United States)

    Puzyrev, V P

    2015-04-01

    In this review, the development of ideas focused on the phenomenon of disease combination (comorbidity) in humans is discussed. The genetic bases of the three forms of the phenomenon, comorbidity (syntropias), inverse comorbidity (dystropias), and comorbidity of Mendelian and multifactorial diseases, are analyzed. The results of personal genome-wide association studies of the genetic risk profile that may predispose an individual to cardiovascular disease continuum (CDC), including coronary heart disease, type 2 diabetes, hypertension, and hypercholesterolemia (CDC syntropy), as well as the results of bioinformatic analysis of common genes and the networks of molecular interactions for two (bronchial asthma and pulmonary tuberculosis) diseases rarely found in one patient (dystropy), are presented. The importance of the diseasome and network medicine concepts in the study of comorbidity is emphasized. Promising areas in genomic studies of comorbidities for disease classification and the development of personalized medicine are designated.

  1. Procedures for using expert judgment to estimate human-error probabilities in nuclear power plant operations. [PWR; BWR

    Energy Technology Data Exchange (ETDEWEB)

    Seaver, D.A.; Stillwell, W.G.

    1983-03-01

    This report describes and evaluates several procedures for using expert judgment to estimate human-error probabilities (HEPs) in nuclear power plant operations. These HEPs are currently needed for several purposes, particularly for probabilistic risk assessments. Data do not exist for estimating these HEPs, so expert judgment can provide these estimates in a timely manner. Five judgmental procedures are described here: paired comparisons, ranking and rating, direct numerical estimation, indirect numerical estimation and multiattribute utility measurement. These procedures are evaluated in terms of several criteria: quality of judgments, difficulty of data collection, empirical support, acceptability, theoretical justification, and data processing. Situational constraints such as the number of experts available, the number of HEPs to be estimated, the time available, the location of the experts, and the resources available are discussed in regard to their implications for selecting a procedure for use.

  2. De novo centriole formation in human cells is error-prone and does not require SAS-6 self-assembly.

    Science.gov (United States)

    Wang, Won-Jing; Acehan, Devrim; Kao, Chien-Han; Jane, Wann-Neng; Uryu, Kunihiro; Tsou, Meng-Fu Bryan

    2015-11-26

    Vertebrate centrioles normally propagate through duplication, but in the absence of preexisting centrioles, de novo synthesis can occur. Consistently, centriole formation is thought to strictly rely on self-assembly, involving self-oligomerization of the centriolar protein SAS-6. Here, through reconstitution of de novo synthesis in human cells, we surprisingly found that normal looking centrioles capable of duplication and ciliation can arise in the absence of SAS-6 self-oligomerization. Moreover, whereas canonically duplicated centrioles always form correctly, de novo centrioles are prone to structural errors, even in the presence of SAS-6 self-oligomerization. These results indicate that centriole biogenesis does not strictly depend on SAS-6 self-assembly, and may require preexisting centrioles to ensure structural accuracy, fundamentally deviating from the current paradigm.

  3. A Bayesian method for using simulator data to enhance human error probabilities assigned by existing HRA methods

    Energy Technology Data Exchange (ETDEWEB)

    Katrina M. Groth; Curtis L. Smith; Laura P. Swiler

    2014-08-01

    In the past several years, several international organizations have begun to collect data on human performance in nuclear power plant simulators. The data collected provide a valuable opportunity to improve human reliability analysis (HRA), but these improvements will not be realized without implementation of Bayesian methods. Bayesian methods are widely used to incorporate sparse data into models in many parts of probabilistic risk assessment (PRA), but Bayesian methods have not been adopted by the HRA community. In this paper, we provide a Bayesian methodology to formally use simulator data to refine the human error probabilities (HEPs) assigned by existing HRA methods. We demonstrate the methodology with a case study, wherein we use simulator data from the Halden Reactor Project to update the probability assignments from the SPAR-H method. The case study demonstrates the ability to use performance data, even sparse data, to improve existing HRA methods. Furthermore, this paper also serves as a demonstration of the value of Bayesian methods to improve the technical basis of HRA.
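    The sketch below illustrates the general flavor of such an update with the simplest possible conjugate model: the HEP assigned by an existing HRA method is encoded as the mean of a Beta prior, and sparse simulator observations update it through a binomial likelihood. This is only a schematic stand-in for the paper's methodology; every number is hypothetical.

```python
# Conjugate beta-binomial update of an HEP with sparse simulator data (illustrative only).
from scipy.stats import beta

spar_h_hep = 1e-2        # prior point estimate from the HRA method (hypothetical)
prior_strength = 20.0    # "pseudo-trials" expressing confidence in the prior
a0 = spar_h_hep * prior_strength           # prior alpha
b0 = (1.0 - spar_h_hep) * prior_strength   # prior beta

errors, trials = 3, 50   # simulator data: 3 crew failures in 50 trials (hypothetical)
a1, b1 = a0 + errors, b0 + (trials - errors)

posterior_mean = a1 / (a1 + b1)
lo, hi = beta.ppf([0.05, 0.95], a1, b1)
print(f"prior HEP = {spar_h_hep:.3g}")
print(f"posterior = {posterior_mean:.3g}  (90% credible interval {lo:.3g} - {hi:.3g})")
```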

  4. Engineering the electronic health record for safety: a multi-level video-based approach to diagnosing and preventing technology-induced error arising from usability problems.

    Science.gov (United States)

    Borycki, Elizabeth M; Kushniruk, Andre W; Kuwata, Shigeki; Kannry, Joseph

    2011-01-01

    Electronic health records (EHRs) promise to improve and streamline healthcare through electronic entry and retrieval of patient data. Furthermore, based on a number of studies showing their positive benefits, they promise to reduce medical error and make healthcare safer. However, a growing body of literature has clearly documented that if EHRs are not designed properly, with usability as an important design goal, their deployment has the potential to actually increase rather than reduce medical error. In this paper we describe our approach to engineering (and reengineering) EHRs in order to increase their beneficial potential while at the same time improving their safety. The approach described in this paper integrates the methods of usability analysis with video analysis of end users interacting with EHR systems, and extends the evaluation of the usability of EHRs to include the assessment of the impact of these systems on work practices. Using clinical simulations, we analyze human-computer interaction in real healthcare settings (in a portable, low-cost and high-fidelity manner) and include both artificial and naturalistic data collection to identify potential usability problems and sources of technology-induced error prior to widespread system release. Two case studies where the methods we have developed and refined have been applied at different levels of user-computer interaction are described.

  5. TECHNOLOGY VS NATURE: HUMAN ERROR IN DEALING WITH NATURE IN CRICHTON'S JURASSIC PARK

    Directory of Open Access Journals (Sweden)

    Sarah Prasasti

    2000-01-01

    Full Text Available Witnessing the euphoria of the era of biotechnology in the late twentieth century, Crichton explores the theme of biotechnology in his works. In Jurassic Park, he voices his concern about the impact of using biotechnology to preserve nature and its living creatures. He further describes how the purpose of preserving nature and its creatures has turned out to be destructive. This article discusses Crichton's main character, Hammond, who attempts to control nature by genetically recreating extinct fossil animals. The attempt, it seems, ignores his human limitations. Although he is confident that he is equipped with the technology, he forgets to get along with nature. His way of using technology to accomplish his purpose proves not to be in harmony with nature. As a consequence, nature fights back, and he is conquered.

  6. Reducing errors benefits the field-based learning of a fundamental movement skill in children.

    Science.gov (United States)

    Capio, C M; Poolton, J M; Sit, C H P; Holmstrom, M; Masters, R S W

    2013-03-01

    Proficient fundamental movement skills (FMS) are believed to form the basis of more complex movement patterns in sports. This study examined the development of the FMS of overhand throwing in children through either an error-reduced (ER) or error-strewn (ES) training program. Students (n = 216), aged 8-12 years (M = 9.16, SD = 0.96), practiced overhand throwing in either a program that reduced errors during practice (ER) or one that was error-strewn (ES). The ER program reduced errors by incrementally raising task difficulty, while the ES program incrementally lowered task difficulty. Process-oriented assessment of throwing movement form (Test of Gross Motor Development-2) and product-oriented assessment of throwing accuracy (absolute error) were performed. Changes in performance were examined among children in the upper and lower quartiles of the pretest throwing accuracy scores. ER training participants showed greater gains in movement form and accuracy, and performed throwing more effectively with a concurrent secondary cognitive task. Movement form improved among girls, while throwing accuracy improved among children with low ability. Reducing performance errors in FMS training resulted in greater learning than a program that did not restrict errors. The reduced cognitive processing costs (effective dual-task performance) associated with such an approach suggest its potential benefits for children with developmental conditions.

  7. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
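    A toy reconstruction of the latency idea can make the methodology concrete: faults are injected at random times into a synthetic memory-access trace, and each fault's latency is the delay until its page is next touched. The trace, page counts, and fault counts below are invented; the study itself worked from sampled physical memory activity on a VAX 11/780.

```python
# Monte Carlo sketch of error-latency estimation on a synthetic access trace.
import numpy as np

rng = np.random.default_rng(0)
n_pages, horizon_h = 200, 72.0                      # 200 pages, 3 days of workload

# Synthetic access trace: sorted access times (hours) and the page touched at each access.
n_accesses = 50_000
times = np.sort(rng.uniform(0.0, horizon_h, n_accesses))
pages = rng.integers(0, n_pages, n_accesses)

latencies = []
n_faults = 2_000
for _ in range(n_faults):                           # inject hypothetical faults
    t_fault = rng.uniform(0.0, horizon_h)
    page = rng.integers(0, n_pages)
    hit = times[(times > t_fault) & (pages == page)]
    if hit.size:                                    # fault is discovered at the next access
        latencies.append(hit[0] - t_fault)

latencies = np.array(latencies)
print(f"faults discovered within the trace: {latencies.size / n_faults:.0%}")
print(f"mean latency {latencies.mean():.2f} h, median {np.median(latencies):.2f} h")
for d in (24, 48, 72):
    print(f"share of discovered faults found within {d} h: {(latencies <= d).mean():.0%}")
```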

  9. 电梯检验过程人因失误及其影响因素的实证研究%Empirical Study on Influencing Factors of Human Errors in the Process of Elevator Inspection

    Institute of Scientific and Technical Information of China (English)

    胡晓; 黄端; 石岿然; 蒋凤

    2014-01-01

    This paper presents an empirical test of the key factors affecting human errors, based on a sample of 248 senior and middle managers and front-line technical staff from foreign-owned and state-owned elevator firms. The results show that personnel ability is negatively associated with human errors; similarly, organizational communication and organizational culture also have a direct and significant negative impact. In addition, individual age, work experience and marital status are correlated with human errors. The research provides a sound basis for the elevator industry to improve organizational management and reduce human errors.

  10. A Hybrid Prediction Method of Thermal Extension Error for Boring Machine Based on PCA and LS-SVM

    Directory of Open Access Journals (Sweden)

    Cheng Qiang

    2017-01-01

    Full Text Available Thermal extension error of the boring bar in the z-axis is one of the key factors that degrade the machining accuracy of a boring machine, so establishing an accurate relationship between thermal extension length and temperature, and predicting how the thermal error changes, are prerequisites for thermal extension error compensation. In this paper, a method for predicting the thermal extension length of the boring bar is proposed based on principal component analysis (PCA) and a least squares support vector machine (LS-SVM) model. To avoid multicollinearity and coupling among the large number of temperature input variables, PCA is first introduced to extract the principal components of the temperature data samples. Then, LS-SVM is used to predict the changing tendency of the thermally induced extension error of the boring bar. Finally, experiments are conducted on a boring machine; the results show that the residual axial thermal elongation error of the boring bar dropped below 5 μm, with a minimum residual error of only 0.5 μm. The method not only improves the efficiency of temperature data acquisition and analysis, but also improves modeling accuracy and robustness.
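    A minimal sketch of this two-stage pipeline is given below using synthetic temperature data: PCA compresses the correlated sensor channels, and a kernel regression model maps the retained components to the axial elongation. Scikit-learn's KernelRidge is used as a stand-in for LS-SVM (both solve a least-squares problem with an RBF kernel); the sensor layout, sample counts and hyperparameters are illustrative, not those of the paper.

```python
# PCA + kernel regression sketch for thermal-elongation prediction (synthetic data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n_samples, n_sensors = 300, 16
# Strongly correlated temperature channels driven by two latent heat sources.
latent = rng.normal(size=(n_samples, 2)).cumsum(axis=0)
temps = latent @ rng.normal(size=(2, n_sensors)) + 0.1 * rng.normal(size=(n_samples, n_sensors))
elongation_um = 3.0 * latent[:, 0] - 1.5 * latent[:, 1] + 0.5 * rng.normal(size=n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(temps, elongation_um, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=0.95),          # keep components explaining 95% of the variance
    KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1),
)
model.fit(X_tr, y_tr)
residual = np.abs(model.predict(X_te) - y_te)
print(f"max residual {residual.max():.2f} µm, mean residual {residual.mean():.2f} µm")
```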

  11. Thermal-Induced Errors Prediction and Compensation for a Coordinate Boring Machine Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2014-01-01

    Full Text Available To improve the precision of CNC machine tools, a thermal error model for the motorized spindle was proposed based on time series analysis, considering the length of the cutting tools and the thermal declination angles, and real-time error compensation was implemented. A five-point method was applied to measure the radial thermal declinations and axial expansion of the spindle with eddy current sensors, solving the problem that a three-point measurement cannot capture the radial thermal angle errors. The stationarity of the thermal error sequences was then determined with the Augmented Dickey-Fuller test, and the autocorrelation/partial autocorrelation functions were used to identify the model pattern. By combining Yule-Walker equations with information criteria, the order and parameters of the models were determined effectively, which improved the prediction accuracy and generalization ability. The results indicated that the prediction accuracy of the time series model could reach up to 90%. In addition, the maximum axial error decreased from 39.6 μm to 7 μm after error compensation, and the machining accuracy was improved by 89.7%. Moreover, the X/Y-direction accuracy reached up to 77.4% and 86%, respectively, which demonstrated that the proposed measurement, modeling and compensation methods were effective.
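    The general workflow (stationarity check, order selection by an information criterion, prediction) can be sketched as follows on a synthetic thermal-error sequence; statsmodels' ADF test and AutoReg order selection stand in for the Yule-Walker/information-criterion identification described in the paper, and none of the numbers correspond to the reported experiments.

```python
# Time-series sketch: ADF stationarity check, AR order selection, short-horizon prediction.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ar_model import AutoReg, ar_select_order

rng = np.random.default_rng(2)
# Synthetic axial thermal error (µm): an AR(2) process around a warm-up trend.
n = 240
e = np.zeros(n)
for t in range(2, n):
    e[t] = 0.7 * e[t - 1] + 0.2 * e[t - 2] + rng.normal(scale=0.8)
error_um = e + 30.0 * (1 - np.exp(-np.arange(n) / 60.0))   # warm-up drift

adf_stat, p_value, *_ = adfuller(error_um)
series = np.diff(error_um) if p_value > 0.05 else error_um  # difference if non-stationary

sel = ar_select_order(series, maxlag=8, ic="aic")
lags = sel.ar_lags if sel.ar_lags else [1]
model = AutoReg(series, lags=lags).fit()
print(f"ADF p-value {p_value:.3f}, chosen lags {lags}")
print("next 5 predicted values:", np.round(model.forecast(5), 2))
```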

  12. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.;

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles governing the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy, gives mathematical analysis and experimental results to support these principles, and quantifies the effect of each error source. It may serve as guidance for designers to minimize the estimation error and make proper on-line parameter estimations.
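    One of the effects discussed, a stator-resistance mismatch feeding into the voltage-model back-EMF estimate, can be reproduced numerically with a few lines; the sketch below uses hypothetical machine parameters and a fixed operating point, and is not the authors' analysis.

```python
# Sensitivity sketch: rotor-angle error of a voltage-model back-EMF estimator
# under a stator-resistance mismatch (hypothetical SPMSM parameters).
import numpy as np

R_true, L, lam = 0.5, 2e-3, 0.08          # stator resistance (ohm), inductance (H), PM flux (Wb)
omega_e = 2 * np.pi * 50.0                # electrical speed (rad/s)
I_d, I_q = 2.0, 5.0                       # operating-point currents (A)

t = np.linspace(0, 0.04, 4000)
theta = omega_e * t
i_a = I_d * np.cos(theta) - I_q * np.sin(theta)
i_b = I_d * np.sin(theta) + I_q * np.cos(theta)
di_a, di_b = np.gradient(i_a, t), np.gradient(i_b, t)
e_a, e_b = -omega_e * lam * np.sin(theta), omega_e * lam * np.cos(theta)
v_a = R_true * i_a + L * di_a + e_a       # terminal voltages consistent with the machine model
v_b = R_true * i_b + L * di_b + e_b

R_est = 1.2 * R_true                      # estimator assumes a resistance 20% too high
e_a_hat = v_a - R_est * i_a - L * di_a
e_b_hat = v_b - R_est * i_b - L * di_b

# The true back-EMF vector leads the rotor d-axis by 90 deg, so the angle error is
# the wrapped difference between the estimated EMF angle and theta + pi/2.
ang_err = np.angle(np.exp(1j * (np.arctan2(e_b_hat, e_a_hat) - theta - np.pi / 2)))
print(f"mean rotor-angle estimation error: {np.rad2deg(ang_err[100:-100].mean()):.2f} deg")
```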

  13. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with the added benefit of robustness in tackling noise.
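    The overall pipeline (patch-statistics features, supervised kernel dimensionality reduction, nearest-centroid classification) can be sketched as follows. Scikit-learn does not implement spectral regression kernel discriminant analysis, so KernelPCA is used here purely as a placeholder for the reduction step, and the feature vectors are synthetic rather than statistics of real error-diffused halftone patches.

```python
# Placeholder pipeline: feature vectors -> kernel dimensionality reduction -> nearest centroid.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestCentroid
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_per_class, n_features, n_classes = 200, 32, 4   # e.g., 4 error-diffusion kernels
X = np.vstack([rng.normal(loc=c, scale=1.0 + 0.2 * c, size=(n_per_class, n_features))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = make_pipeline(StandardScaler(),
                    KernelPCA(n_components=8, kernel="rbf", gamma=0.05),
                    NearestCentroid())
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2%}")
```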

  14. A New Model for Intrusion Detection based on Reduced Error Pruning Technique

    Directory of Open Access Journals (Sweden)

    Mradul Dhakar

    2013-09-01

    Full Text Available The increasing misuse of the internet has raised concerns among security agencies, which work hard to diminish the presence of abnormal users on the web. The motive of these illicit users (called intruders) is to harm the system or the network, either by gaining access to the system or by preventing genuine users from accessing resources. Hence, to tackle these abnormalities, the Intrusion Detection System (IDS) combined with Data Mining has evolved as the most demanded approach. On one end, an IDS aims to detect intrusions by monitoring a given environment, while on the other end, Data Mining allows these intrusions, hidden among genuine users, to be mined. In this regard, IDS with Data Mining has been through several revisions to meet current requirements for efficient detection of intrusions, and several models have been proposed to enhance system performance. In this context, the paper presents a new model for intrusion detection. This improved model, named the REP (Reduced Error Pruning) based Intrusion Detection Model, achieves higher accuracy along with an increased number of correctly classified instances.
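    Reduced error pruning itself is simple to sketch: grow a decision tree on training data, then, working from the bottom of the tree upward, replace an internal node by a leaf whenever doing so does not hurt accuracy on a held-out pruning set. The sketch below applies this to a scikit-learn tree on synthetic two-class data standing in for labelled network-connection records; it illustrates the pruning technique only, not the cited model or its dataset.

```python
# Reduced-error pruning on top of a scikit-learn decision tree (synthetic data).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=20, n_informative=8,
                           n_classes=2, random_state=0)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_prune, X_te, y_prune, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
t = tree.tree_
pruned = set()          # node ids treated as leaves after pruning

def predict(X_, pruned_nodes):
    out = np.empty(len(X_), dtype=int)
    for i, x in enumerate(X_):
        node = 0
        while t.children_left[node] != -1 and node not in pruned_nodes:
            node = (t.children_left[node] if x[t.feature[node]] <= t.threshold[node]
                    else t.children_right[node])
        out[i] = np.argmax(t.value[node])   # majority class at the (possibly pruned) node
    return out

def accuracy(pruned_nodes):
    return np.mean(predict(X_prune, pruned_nodes) == y_prune)

# Bottom-up pass: prune an internal node if pruning-set accuracy does not decrease.
base = accuracy(pruned)
for node in reversed(range(t.node_count)):
    if t.children_left[node] == -1:         # already a leaf
        continue
    cand = accuracy(pruned | {node})
    if cand >= base:
        pruned.add(node)
        base = cand

print(f"nodes in grown tree: {t.node_count}, internal nodes pruned to leaves: {len(pruned)}")
print(f"test accuracy before pruning: {np.mean(predict(X_te, set()) == y_te):.3f}")
print(f"test accuracy after pruning:  {np.mean(predict(X_te, pruned) == y_te):.3f}")
```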

  15. Human physiologically based pharmacokinetic model for propofol

    Directory of Open Access Journals (Sweden)

    Schnider Thomas W

    2005-04-01

    Full Text Available Abstract Background Propofol is widely used for both short-term anesthesia and long-term sedation. It has unusual pharmacokinetics because of its high lipid solubility. The standard approach to describing the pharmacokinetics is a multi-compartmental model. This paper presents the first detailed human physiologically based pharmacokinetic (PBPK) model for propofol. Methods PKQuest, a freely distributed software routine (http://www.pkquest.com), was used for all the calculations. The "standard human" PBPK parameters developed in previous applications are used. It is assumed that blood and tissue binding is determined by simple partition into the tissue lipid, which is characterized by two previously determined sets of parameters: (1) the value of the propofol oil/water partition coefficient; (2) the lipid fraction in the blood and tissues. The model was fit to the individual experimental data of Schnider et al. (Anesthesiology, 1998; 88:1170), in which an initial bolus dose was followed 60 minutes later by a one-hour constant infusion. Results The PBPK model provides a good description of the experimental data over a large range of input dosage, subject age and fat fraction. Only one adjustable parameter (the liver clearance) is required to describe the constant-infusion phase for each individual subject. In order to fit the bolus-injection phase, for 10 of the 24 subjects it was necessary to assume that a fraction of the bolus dose was sequestered and then slowly released from the lungs (characterized by two additional parameters). The average weighted residual error (WRE) of the PBPK model fit to both the bolus and infusion phases was 15%, similar to the WRE for just the constant-infusion phase obtained by Schnider et al. using a 6-parameter NONMEM compartmental model. Conclusion A PBPK model using standard human parameters and a simple description of tissue binding provides a good description of human propofol kinetics. The major advantage of a
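    The flow-limited structure underlying such a model can be sketched compactly: well-stirred tissue compartments exchange drug with blood in proportion to tissue blood flow and a lipid-based partition coefficient, the liver clears drug from blood, and dosing is a bolus followed by a constant infusion. The compartment volumes, flows, partition coefficients, clearance and doses below are rough illustrative values, not the "standard human" parameter set used by PKQuest.

```python
# Generic flow-limited PBPK sketch with a bolus plus delayed constant infusion (illustrative values).
import numpy as np
from scipy.integrate import solve_ivp

tissues = ["fat", "muscle", "liver", "rest"]
V = {"blood": 5.0, "fat": 13.0, "muscle": 29.0, "liver": 1.8, "rest": 20.0}   # volumes (L)
Q = {"fat": 0.3, "muscle": 1.0, "liver": 1.5, "rest": 2.6}                    # blood flows (L/min)
K = {"fat": 40.0, "muscle": 3.0, "liver": 4.0, "rest": 3.0}                   # tissue:blood partition
CL_liver = 1.8                                                                # hepatic clearance (L/min)

bolus_mg, infusion_mg_min = 140.0, 10.0                                       # dosing (illustrative)

def rhs(t_min, a):                     # a = drug amounts (mg) in blood and tissues
    c_b = a[0] / V["blood"]
    da_b = infusion_mg_min if 60.0 <= t_min <= 120.0 else 0.0                 # infusion window
    da = [0.0] * len(tissues)
    for i, tis in enumerate(tissues):
        c_t = a[i + 1] / V[tis]
        flux = Q[tis] * (c_b - c_t / K[tis])                                  # flow-limited exchange
        da[i] += flux
        da_b -= flux
    da_b -= CL_liver * c_b                # hepatic elimination, applied to blood as a simplification
    return [da_b] + da

a0 = [bolus_mg] + [0.0] * len(tissues)    # bolus into blood at t = 0
sol = solve_ivp(rhs, (0.0, 240.0), a0, t_eval=np.linspace(0, 240, 241), max_step=1.0)
for t_chk in (2, 30, 90, 180):
    print(f"t = {t_chk:3d} min   blood propofol ≈ {sol.y[0, t_chk] / V['blood']:.2f} mg/L")
```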

  16. Inborn errors of the Krebs cycle: a group of unusual mitochondrial diseases in human.

    Science.gov (United States)

    Rustin, P; Bourgeron, T; Parfait, B; Chretien, D; Munnich, A; Rötig, A

    1997-08-22

    Krebs cycle disorders constitute a group of rare human diseases which present an amazing complexity considering our current knowledge of Krebs cycle function and biogenesis. Acting as a turntable of cell metabolism, the cycle is ubiquitously distributed in the organism, and its enzyme components are encoded by supposedly typical housekeeping genes. However, the investigation of patients presenting specific defects of Krebs cycle enzymes, resulting from deleterious mutations of the corresponding genes, leads us to reconsider this simple picture by revealing organ-specific impairments, mostly affecting the neuromuscular system. This often leaves aside organs whose metabolism also strongly depends on mitochondrial energy metabolism, such as the heart, kidney or liver. Additionally, in some patients, a complex pattern of tissue-specific enzyme defects was also observed. The lack of functional additional copies of Krebs cycle genes suggests that this complex expression pattern should be ascribed to tissue-specific regulation of transcriptional and/or translational activities, together with variable cell adaptability to Krebs cycle functional defects.

  17. Error Analysis for a Navigation Algorithm based on Optical-Flow and a Digital Terrain Map

    CERN Document Server

    Kupervasser, Oleg; Rivlin, Ehud; 10.1109/PLANS.2008.4570040

    2011-01-01

    The paper deals with the error analysis of a navigation algorithm that uses as input a sequence of images acquired by a moving camera and a Digital Terrain Map (DTM) of the region being imaged by the camera during the motion. The main sources of error are more or less straightforward to identify: camera resolution, structure of the observed terrain and DTM accuracy, field of view and camera trajectory. After characterizing and modeling these error sources in the framework of the CDTM algorithm, a closed-form expression for their effect on the pose and motion errors of the camera can be found. The analytic expression provides a priori estimates of the accuracy in terms of the parameters mentioned above.

  18. Securing Relay Networks with Artificial Noise: An Error Performance-Based Approach

    National Research Council Canada - National Science Library

    Ying Liu; Liang Li; George C Alexandropoulos; Marius Pesavento

    2017-01-01

    ... (AN) symbols to jam the relay reception. The objective of our considered AN design is to degrade the error probability performance at the untrusted relay, for different types of channel state information (CSI) at the destination...

  19. Field-balanced adaptive optics error function for wide field-of-view space-based systems

    Science.gov (United States)

    McComas, Brian K.; Friedman, Edward J.

    2002-03-01

    Adaptive optics are regularly used in ground-based astronomical telescopes. These applications are characterized by a very narrow (approximately 1 arcmin) field of view. For economic reasons, commercial space-based earth-observing optical systems must have a field of view as large as possible. We develop a new error function that is an extension of conventional adaptive optics for wide field-of-view optical systems and show that this new error function enables diffraction-limited performance across a large field of view with only one deformable mirror. This new error function allows for reprogramming of aberration control algorithms for particular applications by the use of an addressable weighting function.
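    The idea of a field-weighted error function can be made concrete with a small least-squares sketch: given wavefront aberrations sampled at several field points and the influence matrix of a single deformable mirror, the actuator command that minimizes a weighted sum of residual wavefront variances across the field is found in closed form. The field points, weights and influence matrix below are random stand-ins, not the error function proposed in the paper.

```python
# Field-weighted wavefront-error sketch for a single deformable mirror (random placeholder data).
import numpy as np

rng = np.random.default_rng(4)
n_modes, n_act, n_field = 36, 20, 5            # wavefront modes, DM actuators, field points

A = rng.normal(size=(n_modes, n_act))          # DM influence matrix (modal response per unit actuator)
phi = rng.normal(size=(n_field, n_modes))      # aberration at each field point, in modal coefficients
w = np.array([1.0, 1.0, 2.0, 1.0, 0.5])        # addressable field weighting function
w = w / w.sum()

# Minimizing sum_f w_f * ||phi_f - A x||^2 reduces to fitting the weighted mean
# field aberration, because the influence matrix A is the same at every field point.
phi_bar = w @ phi                              # weighted average aberration across the field
x, *_ = np.linalg.lstsq(A, phi_bar, rcond=None)

residual = phi - (A @ x)                       # residual aberration at each field point
rms_before = np.sqrt((phi ** 2).sum(axis=1))
rms_after = np.sqrt((residual ** 2).sum(axis=1))
for f in range(n_field):
    print(f"field point {f}: RMS {rms_before[f]:.2f} -> {rms_after[f]:.2f} (weight {w[f]:.2f})")
print(f"field-weighted error before: {(w * rms_before**2).sum():.2f}, after: {(w * rms_after**2).sum():.2f}")
```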

  20. Trellis and turbo coding iterative and graph-based error control coding

    CERN Document Server

    Schlegel, Christian B

    2015-01-01

    This new edition has been extensively revised to reflect the progress in error control coding over the past few years. Over 60% of the material has been completely reworked, and 30% of the material is original. New features include: convolutional, turbo, low-density parity-check (LDPC) and polar codes treated in a unified framework; advanced research-related developments such as spatial coupling; and a focus on algorithmic and implementation aspects of error control coding.