WorldWideScience

Sample records for error analysis based

  1. Effective training based on the cause analysis of operation errors

    International Nuclear Information System (INIS)

    Fujita, Eimitsu; Noji, Kunio; Kobayashi, Akira.

    1991-01-01

The authors have investigated typical error types observed in simulator training and analyzed their causes. The error types are: (1) lack of knowledge, or inability to apply knowledge to actual operation; (2) defective mastery of skill-based operation; (3) rote or stereotyped operation; (4) mind-set, or lack of redundant verification; (5) lack of teamwork; (6) misjudgment of overall plant conditions by the operation chief, who directs a reactor operator and a turbine operator in the training. The paper describes training methods used by Japanese BWR utilities to overcome these error types.

  2. A methodology for collection and analysis of human error data based on a cognitive model: IDA

    International Nuclear Information System (INIS)

    Shen, S.-H.; Smidts, C.; Mosleh, A.

    1997-01-01

This paper presents a model-based human error taxonomy and data collection. The underlying model, IDA (described in two companion papers), is a cognitive model of behavior developed for analysis of the actions of nuclear power plant operating crews during abnormal situations. The taxonomy is established with reference to three external reference points (i.e., plant status, procedures, and crew) and four reference points internal to the model (i.e., information collected, diagnosis, decision, action). The taxonomy helps the analyst: (1) recognize errors as such; (2) categorize the error in terms of generic characteristics such as 'error in selection of problem solving strategies'; and (3) identify the root causes of the error. The data collection methodology is summarized in post-event operator interview and analysis summary forms. The root cause analysis methodology is illustrated using a subset of an actual event. Statistics that extract generic characteristics of error-prone behaviors and error-prone situations are presented. Finally, applications of the human error data collection are reviewed. A primary benefit of this methodology is to define better symptom-based and other auxiliary procedures, with associated training, to minimize or preclude certain human errors. It also helps in the design of control rooms and in the assessment of human error probabilities in the probabilistic risk assessment framework. (orig.)

  3. Frame-based safety analysis approach for decision-based errors

    International Nuclear Information System (INIS)

Fan, Chin-Feng; Yih, Swu

    1997-01-01

A frame-based approach is proposed to analyze decision-based errors made by automatic controllers or human operators due to erroneous reference frames. An integrated framework, the Two Frame Model (TFM), is first proposed to model the dynamic interaction between the physical process and the decision-making process. Two important issues, consistency and competing processes, are raised. Consistency between the physical and logic frames makes a TFM-based system work properly. Loss of consistency refers to the failure mode in which the logic frame does not accurately reflect the state of the controlled processes. Once such a failure occurs, hazards may arise. Among potential hazards, the competing effect between the controller and the controlled process is the most severe one, which may jeopardize a defense-in-depth design. When the logic and physical frames are inconsistent, conventional safety analysis techniques are inadequate. We propose Frame-based Fault Tree Analysis (FFTA) and Frame-based Event Tree Analysis (FETA) under TFM to deduce the context for decision errors and to separately generate the evolution of the logical frame as opposed to that of the physical frame. This multi-dimensional analysis approach, different from the conventional correctness-centred approach, provides a panoramic view in scenario generation. Case studies using the proposed techniques are also given to demonstrate their usage and feasibility.

  4. Analysis of measured data of human body based on error correcting frequency

    Science.gov (United States)

    Jin, Aiyan; Peipei, Gao; Shang, Xiaomei

    2014-04-01

Anthropometry measures all parts of the human body surface; the measured data form the basis for analysis and study of the human body, for the establishment and modification of garment sizes, and for the design and operation of online clothing stores. In this paper, several groups of measured data are obtained, and the data error is analyzed by examining error frequency and applying the analysis of variance method from mathematical statistics. The paper also determines the accuracy of the measured data, identifies the body parts that are difficult to measure, studies the causes of data errors, and summarizes the key points for minimizing errors. By analyzing measured data on the basis of error frequency, the paper provides reference material for the development of the garment industry.
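Not the paper's actual procedure, but a minimal illustration of the variance-analysis step it describes: a one-way ANOVA can test whether the variation between measurement groups exceeds the variation within them. All names and numbers below are invented.

```python
import numpy as np
from scipy import stats

# Hypothetical repeated waist measurements (cm) by three different measurers.
group_a = np.array([72.1, 72.4, 71.9, 72.3, 72.0])
group_b = np.array([72.8, 73.1, 72.7, 73.0, 72.9])
group_c = np.array([72.2, 72.0, 72.5, 72.1, 72.3])

# One-way ANOVA: does between-measurer variance exceed within-measurer variance?
f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# A small p-value suggests a systematic difference between measurers,
# i.e., a candidate source of measurement error worth investigating.
```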

  5. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    International Nuclear Information System (INIS)

Cooper, S.E.; Wreathall, J.; Thompson, C.M.; Drouin, M.; Bley, D.C.

    1996-01-01

This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, ''A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst.

  6. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    Science.gov (United States)

    Yang, F.; Dong, Z. H.

    2017-10-01

The integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising nearly a hundred kinds of software and providing various services such as email, short messages, and drawing and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affect usage. Because these errors are varied and may occur in different operation phases and stages, TCP/IP protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS. This paper studies network error analysis and locating based on IIDS, providing theoretical and technical support for the running and communication of IIDS.

  7. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    Science.gov (United States)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

In order to improve the precision of an optical-electric tracking device, an improved MEMS-based design is proposed. To address the tracking error and random drift of the gyroscope sensor, an AR model of the gyro random error is established according to the principles of time series analysis of random sequences, and the gyro output signals are filtered repeatedly with a Kalman filter. An ARM microcontroller drives the servo motor under a fuzzy PID full closed-loop control algorithm, with lead-correction and feed-forward links added to reduce the response lag to angle inputs: feed-forward makes the output follow the input closely, while the lead-compensation link shortens the response to input signals and thereby reduces errors. A wireless video module gathers video signals and sends them to an upper computer, where remote monitoring software (Visual Basic 6.0) displays the servo motor state in real time. The main error sources are then analyzed in detail; quantitative analysis of the errors from bandwidth and the gyro sensor makes the proportion of each error in the overall error budget more intuitive and thereby helps decrease the system error. Simulation and experiment results show that the system has good following characteristics and is valuable for engineering application.
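A minimal sketch of the filtering idea described above, assuming a scalar AR(1) model for the gyro drift (the abstract does not state the actual model order or parameters); all numbers are illustrative.

```python
import numpy as np

# Assumed AR(1) model of gyro random drift: x_k = a * x_{k-1} + w_k,
# observed as z_k = x_k + v_k. Parameters below are illustrative only.
a, q, r = 0.95, 1e-4, 1e-2   # AR coefficient, process noise var, measurement noise var

def kalman_filter_ar1(z, a, q, r):
    """Scalar Kalman filter for an AR(1) state observed in white noise."""
    x_hat, p = 0.0, 1.0
    estimates = []
    for zk in z:
        # Predict
        x_hat, p = a * x_hat, a * a * p + q
        # Update
        k = p / (p + r)
        x_hat += k * (zk - x_hat)
        p *= (1 - k)
        estimates.append(x_hat)
    return np.array(estimates)

# Simulated noisy gyro output around a slowly drifting bias.
rng = np.random.default_rng(0)
true_drift = np.cumsum(rng.normal(0, 0.01, 1000)) * 0.01
z = true_drift + rng.normal(0, 0.1, 1000)
filtered = kalman_filter_ar1(z, a, q, r)
print("raw RMS error:     ", np.sqrt(np.mean((z - true_drift) ** 2)))
print("filtered RMS error:", np.sqrt(np.mean((filtered - true_drift) ** 2)))
```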

  8. Error tolerance analysis of wave diagnostic based on coherent modulation imaging in high power laser system

    Science.gov (United States)

    Pan, Xingchen; Liu, Cheng; Zhu, Jianqiang

    2018-02-01

Coherent modulation imaging, providing fast convergence and high resolution from a single diffraction pattern, is a promising technique to satisfy the urgent demand for on-line multi-parameter diagnostics with a single setup in high power laser facilities (HPLF). However, the influence of noise on the final calculated parameters has not been investigated yet. Based on a series of simulations with twenty different sampling beams generated from the practical parameters and performance of the HPLF, a quantitative statistical analysis covering five different error sources was carried out for the first time. We found that the background noise of the detector and high quantization error seriously affect the final accuracy, and that different parameters have different sensitivities to different noise sources. The simulation results and the corresponding analysis indicate potential directions for further improving the accuracy of parameter diagnostics, which is critically important for formal application in the daily routines of the HPLF.

  9. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  10. Error analysis of satellite attitude determination using a vision-based approach

    Science.gov (United States)

    Carozza, Ludovico; Bevilacqua, Alessandro

    2013-09-01

Improvements in communication and processing technologies have opened the doors to exploiting on-board cameras to compute an object's spatial attitude using only the visual information from sequences of remotely sensed images. The strategies and the algorithmic approach used to extract such information affect the estimation accuracy of the three-axis orientation of the object. This work presents a method for analyzing the most relevant error sources, including numerical ones, possible drift effects and their influence on the overall accuracy, with reference to vision-based approaches. The method in particular focuses on the analysis of the image registration algorithm, carried out through purpose-built simulations. The overall accuracy has been assessed on a challenging case study, for which accuracy represents the fundamental requirement. In particular, attitude determination has been analyzed for small satellites, by comparing theoretical findings to metric results from simulations on realistic ground-truth data. Significant laboratory experiments, using a numerical control unit, have further confirmed the outcome. We believe that our analysis approach, as well as our findings in terms of error characterization, can be useful at the proof-of-concept design and planning levels, since they emphasize the main sources of error for vision-based approaches employed for satellite attitude estimation. The approach we present is also of general interest for related application domains which require an accurate estimation of three-dimensional orientation parameters (e.g., robotics, airborne stabilization).

  11. Residents' surgical performance during the laboratory years: an analysis of rule-based errors.

    Science.gov (United States)

    Nathwani, Jay N; Wise, Brett J; Garren, Margaret E; Mohamadipanah, Hossein; Van Beek, Nicole; DiMarco, Shannon M; Pugh, Carla M

    2017-11-01

Nearly one-third of surgical residents will enter into academic development during their surgical residency by dedicating time to a research fellowship for 1-3 y. Major interest lies in understanding how laboratory residents' surgical skills are affected by minimal clinical exposure during academic development. A widely held concern is that the time away from clinical exposure results in surgical skills decay. This study examines the impact of the academic development years on residents' operative performance. We hypothesize that the use of repeated, annual assessments may result in learning even without individual feedback on participants' simulated performance. Surgical performance data were collected from laboratory residents (postgraduate years 2-5) during the summers of 2014, 2015, and 2016. Residents had 15 min to complete a shortened, simulated laparoscopic ventral hernia repair procedure. Final hernia repair skins from all participants were scored using a previously validated checklist. An analysis of variance test compared the mean performance scores of repeat participants to those of first-time participants. Twenty-seven (37% female) laboratory residents provided 2-year assessment data over the 3-year span of the study. Second-time performance revealed improvement from a mean score of 14 (standard error = 1.0) in the first year to 17.2 (standard error = 0.9) in the second year (F[1, 52] = 5.6, P = 0.022). Detailed analysis demonstrated improvement in performance for 3 grading criteria that were considered to be rule-based errors. There was no improvement in operative strategy errors. Analysis of longitudinal performance of laboratory residents shows higher scores for repeat participants in the category of rule-based errors. These findings suggest that laboratory residents can learn from rule-based mistakes when provided with annual performance-based assessments. This benefit was not seen with operative strategy errors and has important implications for

  12. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    Science.gov (United States)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
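A bare-bones sketch of a plain Green-Kubo estimate (not the authors' adaptive algorithm, which adds stationarity checks and error bounds): the transport coefficient is proportional to the time integral of an equilibrium flux autocorrelation function. The flux series and prefactor below are placeholders.

```python
import numpy as np

def autocorrelation(x, max_lag):
    """Unnormalized autocorrelation <x(0) x(t)> up to max_lag samples."""
    x = x - x.mean()
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(max_lag)])

# Placeholder flux series standing in for, e.g., a heat-flux component from MD.
rng = np.random.default_rng(1)
flux = rng.normal(size=100_000)
for _ in range(3):                     # crude correlated-noise surrogate
    flux = 0.5 * (flux + np.roll(flux, 1))

dt = 1e-3                              # sampling interval (placeholder units)
acf = autocorrelation(flux, max_lag=2000)
running_integral = np.cumsum(acf) * dt # Green-Kubo running integral

prefactor = 1.0                        # e.g., V / (k_B T^2) for thermal conductivity
kappa_estimate = prefactor * running_integral[-1]
print("transport coefficient estimate:", kappa_estimate)
```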

  13. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    Science.gov (United States)

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.

  14. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  15. SLC beam line error analysis using a model-based expert system

    International Nuclear Information System (INIS)

    Lee, M.; Kleban, S.

    1988-02-01

Commissioning a particle beam line is usually a very time-consuming and labor-intensive task for accelerator physicists. To aid in commissioning, we developed a model-based expert system that identifies error-free regions and localizes beam line errors. This paper gives examples of the use of our system for SLC commissioning. 8 refs., 5 figs

  16. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

In consideration of the fact that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the Human Factors Analysis and Classification System (HFACS) model. Using data on accidents that happened worldwide between 2008 and 2011, the correlations between human-error factors are analyzed quantitatively using grey relational analysis. The results show that the main factors affecting pilot human error are, in order: preconditions for unsafe acts, unsafe supervision, organizational influences, and unsafe acts. The second-level index related most closely to pilot human error is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and the relevancy among second-level indexes, can also be analyzed quantitatively.
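A compact sketch of the grey relational analysis step (the index system and accident data are the paper's; the toy numbers below are not). Grey relational coefficients compare each normalized factor series against a normalized reference series.

```python
import numpy as np

def grey_relational_grades(reference, factors, rho=0.5):
    """Grey relational grade of each factor series w.r.t. a reference series.

    reference: shape (n,); factors: shape (m, n); rho: distinguishing coefficient.
    """
    def norm(x):  # range-normalize each series to [0, 1]
        return (x - x.min()) / (x.max() - x.min())

    ref = norm(reference)
    fac = np.array([norm(f) for f in factors])

    delta = np.abs(fac - ref)                            # absolute differences
    dmin, dmax = delta.min(), delta.max()
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)   # relational coefficients
    return coeff.mean(axis=1)                            # grade = mean coefficient

# Toy example: yearly counts for a reference (pilot errors) and three factors.
reference = np.array([30.0, 28.0, 35.0, 31.0])
factors = np.array([
    [12.0, 11.0, 15.0, 13.0],   # preconditions for unsafe acts
    [ 6.0,  7.0,  6.0,  8.0],   # unsafe supervision
    [ 3.0,  2.0,  5.0,  2.0],   # organizational influences
])
print(grey_relational_grades(reference, factors))  # higher grade = closer relation
```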

  17. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Taynna Vernalha Rocha [Faculdades Pequeno Principe (FPP), Curitiba, PR (Brazil); Cordova Junior, Arno Lotar; Almeida, Cristiane Maria; Piedade, Pedro Argolo; Silva, Cintia Mara da, E-mail: taynnavra@gmail.com [Centro de Radioterapia Sao Sebastiao, Florianopolis, SC (Brazil); Brincas, Gabriela R. Baseggio [Centro de Diagnostico Medico Imagem, Florianopolis, SC (Brazil); Marins, Priscila; Soboll, Danyel Scheidegger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2016-03-15

Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used. (author)

  18. Instanton-based techniques for analysis and reduction of error floors of LDPC codes

    International Nuclear Information System (INIS)

    Chertkov, Michael; Chilappagari, Shashi K.; Stepanov, Mikhail G.; Vasic, Bane

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  19. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    Energy Technology Data Exchange (ETDEWEB)

Chertkov, Michael [Los Alamos National Laboratory]; Chilappagari, Shashi K. [Los Alamos National Laboratory]; Stepanov, Mikhail G. [Los Alamos National Laboratory]; Vasic, Bane [Senior Member, IEEE]

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instanton of the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  20. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with an added benefit of robustness in tackling noise.
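The abstract's pipeline (feature extraction, kernel discriminant projection, nearest-centroid labeling) can be approximated with off-the-shelf components. The sketch below substitutes scikit-learn's KernelPCA plus LDA for the paper's spectral-regression kernel discriminant analysis, and uses random placeholder data in place of halftone patch statistics.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import NearestCentroid

# Placeholder "patch statistics" features for 3 halftone classes.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(100, 16)) for c in range(3)])
y = np.repeat([0, 1, 2], 100)

# Kernel projection + linear discriminant step approximates kernel discriminant
# analysis; nearest-centroid then labels samples in the reduced space.
clf = make_pipeline(
    KernelPCA(n_components=8, kernel="rbf", gamma=0.05),
    LinearDiscriminantAnalysis(n_components=2),
    NearestCentroid(),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```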

  1. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

For motor control applications, the speed loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used in incremental-encoder-based speed measurement. However, the inherent encoder optical grating error...
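For context (standard M/T background, not taken from the truncated abstract): the M/T method counts M1 encoder pulses and M2 high-frequency clock pulses over a gate interval synchronized to encoder edges, so speed follows from both counts. A minimal sketch, with an assumed clock frequency and encoder resolution:

```python
def mt_speed_rpm(m1, m2, f_clk=1_000_000, ppr=2048):
    """Speed estimate from the M/T method.

    m1:    encoder pulses counted in the gate interval
    m2:    high-frequency clock pulses counted in the same interval
    f_clk: clock frequency in Hz (assumed value)
    ppr:   encoder pulses per revolution (assumed value)

    The gate time is T = m2 / f_clk, so speed = (m1 / ppr) / T revolutions
    per second, i.e. 60 * f_clk * m1 / (ppr * m2) rpm.
    """
    return 60.0 * f_clk * m1 / (ppr * m2)

# Example: 512 encoder pulses against 250,000 clock counts in one gate.
print(mt_speed_rpm(512, 250_000))  # -> 60.0 rpm
```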

  2. VR-based training and assessment in ultrasound-guided regional anesthesia: from error analysis to system design.

    LENUS (Irish Health Repository)

    2011-01-01

    If VR-based medical training and assessment is to improve patient care and safety (i.e. a genuine health gain), it has to be based on clinically relevant measurement of performance. Metrics on errors are particularly useful for capturing and correcting undesired behaviors before they occur in the operating room. However, translating clinically relevant metrics and errors into meaningful system design is a challenging process. This paper discusses how an existing task and error analysis was translated into the system design of a VR-based training and assessment environment for Ultrasound Guided Regional Anesthesia (UGRA).

  3. Accuracy improvement of the H-drive air-levitating wafer inspection stage based on error analysis and compensation

    Science.gov (United States)

    Zhang, Fan; Liu, Pinkuan

    2018-04-01

In order to improve the inspection precision of the H-drive air-bearing stage for wafer inspection, in this paper the geometric error of the stage is analyzed and compensated. The relationship between the positioning errors and the error sources is initially modeled, and seven error components are identified that are closely related to the inspection accuracy. The factor that most strongly affects the geometric error is identified by error sensitivity analysis. Then, the Spearman rank correlation method is applied to find the correlation between different error components, aiming at guiding the accuracy design and error compensation of the stage. Finally, different compensation methods, including the three-error curve interpolation method, the polynomial interpolation method, the Chebyshev polynomial interpolation method, and the B-spline interpolation method, are employed within the full range of the stage, and their results are compared. Simulation and experiment show that the B-spline interpolation method based on the error model has better compensation results. In addition, the research result is valuable for promoting wafer inspection accuracy and will greatly benefit the semiconductor industry.
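A schematic of the compensation idea for a single axis (invented data): measure the positioning error at sampled commanded positions, fit a B-spline to the error curve, and subtract the predicted error from subsequent commands. scipy's make_interp_spline stands in here for whatever fitting routine the authors implemented.

```python
import numpy as np
from scipy.interpolate import make_interp_spline

# Measured positioning error (µm) at sampled commanded positions (mm); invented data.
positions = np.linspace(0, 300, 13)
errors = 2.0 * np.sin(positions / 60.0) + 0.5   # smooth systematic error profile

# Cubic B-spline model of the error curve.
error_model = make_interp_spline(positions, errors, k=3)

def compensated_command(target_mm):
    """Offset the commanded position by the predicted error at the target."""
    return target_mm - 1e-3 * error_model(target_mm)   # µm -> mm

print(error_model(125.0))         # predicted error at 125 mm
print(compensated_command(125.0)) # command adjusted by the model
```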

  4. Phakic intraocular lenses for the treatment of refractive errors: an evidence-based analysis.

    Science.gov (United States)

    2009-01-01

.6 per 10,000 people in Canada have low vision. The 2001 Participation and Activity Limitation Survey (PALS) found that 594,350 (2.5%) Canadians had "difficulty seeing ordinary newsprint or clearly seeing the face of someone from 4 m," and the Canadian National Institute for the Blind (CNIB) registry classified 105,000 (0.35%) Canadians as visually disabled. PHAKIC INTRAOCULAR LENSES (PIOL): A phakic intraocular lens (pIOL) is a supplementary lens that is inserted into the anterior or posterior chamber of the eye to correct refractive errors (myopia, hyperopia, and astigmatism). Unlike in cataract surgery, the eye's natural crystalline lens is not removed when the pIOL is inserted, so the eye retains its accommodative ability. In Canada and the United States, iris-fixated (anterior chamber lenses that are anchored to the iris with a claw) and posterior chamber lenses are the only types of pIOLs that are licensed by Health Canada and the Food and Drug Administration, respectively. EVIDENCE-BASED ANALYSIS METHOD: RESEARCH QUESTIONS: What are the effectiveness, cost-effectiveness, and safety of pIOLs for the treatment of myopia, hyperopia, and astigmatism? Do certain subgroups (e.g., high myopia and low vision) benefit more from pIOLs? How do pIOLs compare with alternative surgical treatment options (LASIK, PRK, and CLE)? Using appropriate keywords, a literature search was conducted up to January 2009. Systematic reviews, meta-analyses, randomized controlled trials, and observational studies with more than 20 eyes receiving pIOLs were eligible for inclusion. The primary outcomes of interest were uncorrected visual acuity (UCVA), predictability of manifest refraction spherical equivalent (MRSE), and adverse events. The GRADE approach was used to systematically and explicitly evaluate the quality of evidence. The search identified 1,131 citations published between January 1, 2003, and January 16, 2009. Including a health technology assessment (HTA) identified in

  5. Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.

    Science.gov (United States)

    Masalski, Marcin; Kręcicki, Tomasz

    2013-04-12

Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB, with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally, at the frequency of 250 Hz, by the frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error could broaden the scope of Web-based pure-tone audiometry applications.

  6. Analysis of Point Based Image Registration Errors With Applications in Single Molecule Microscopy.

    Science.gov (United States)

    Cohen, E A K; Ober, R J

    2013-12-15

We present an asymptotic treatment of errors involved in point-based image registration where control point (CP) localization is subject to heteroscedastic noise; a suitable model for image registration in fluorescence microscopy. Assuming an affine transform, CPs are used to solve a multivariate regression problem. With measurement errors existing for both sets of CPs this is an errors-in-variables problem and linear least squares is inappropriate; the correct method being generalized least squares. To allow for point-dependent errors the equivalence of a generalized maximum likelihood and heteroscedastic generalized least squares model is achieved allowing previously published asymptotic results to be extended to image registration. For a particularly useful model of heteroscedastic noise where covariance matrices are scalar multiples of a known matrix (including the case where covariance matrices are multiples of the identity) we provide closed form solutions to estimators and derive their distribution. We consider the target registration error (TRE) and define a new measure called the localization registration error (LRE) believed to be useful, especially in microscopy registration experiments. Assuming Gaussianity of the CP localization errors, it is shown that the asymptotic distributions of the TRE and LRE are themselves Gaussian and the parameterized distributions are derived. Results are successfully applied to registration in single molecule microscopy to derive the key dependence of the TRE and LRE variance on the number of CPs and their associated photon counts. Simulations show the asymptotic results are robust for low CP numbers and non-Gaussianity. The method presented here is shown to outperform GLS on real imaging data.
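For intuition, a generic sketch (not the paper's estimator): when per-point covariances are known scalar multiples of a common matrix, generalized least squares reduces to weighting each control point by the inverse of its noise variance. Everything below, including the transform and weights, is illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# True affine transform: y = A x + b, estimated from noisy control points.
A_true = np.array([[1.02, -0.05], [0.04, 0.98]])
b_true = np.array([5.0, -3.0])

n = 40
x = rng.uniform(0, 100, size=(n, 2))
sigma2 = rng.uniform(0.01, 1.0, size=n)            # per-point noise variances
y = x @ A_true.T + b_true + rng.normal(size=(n, 2)) * np.sqrt(sigma2)[:, None]

# Design matrix for [A | b]; weighted (generalized) least squares with
# diagonal weights 1 / sigma_i^2 applied row-wise.
X = np.hstack([x, np.ones((n, 1))])
w = 1.0 / sigma2
XtW = X.T * w                                       # (3, n)
theta = np.linalg.solve(XtW @ X, XtW @ y)           # rows: A columns, then b

print("estimated A:\n", theta[:2].T)
print("estimated b:", theta[2])
```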

  7. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on the mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images of patients with lung cancer using FEM. The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-spline based (Elastix) registrations were performed from reference to FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from the R-DVF, and the magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model was used based on imaging voxel data, and the analysis considered clustered voxel data within images. Results: A regression model analysis showed that UE was significantly correlated with registration error, DVF, and the product of registration error and DVF, respectively, with R² = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of the UE values and R-DVF × R-ERR has been established. The mean registration error (N = 8) was 0.9 mm. 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in

  8. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

Instead of blindly improving the accuracy of a machine tool by increasing the precision of key components in the production process, this paper adopts a method that combines an SNR quality loss function with correlation analysis of machine tool geometric errors to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and appropriately relaxes the range of tolerance values, so as to reduce the manufacturing cost of machine tools.

  9. Error analysis of marker-based object localization using a single-plane XRII

    International Nuclear Information System (INIS)

    Habets, Damiaan F.; Pollmann, Steven I.; Yuan, Xunhua; Peters, Terry M.; Holdsworth, David W.

    2009-01-01

The role of imaging and image guidance is increasing in surgery and therapy, including treatment planning and follow-up. Fluoroscopy is used for two-dimensional (2D) guidance or localization; however, many procedures would benefit from three-dimensional (3D) guidance or localization. Three-dimensional computed tomography (CT) using a C-arm mounted x-ray image intensifier (XRII) can provide high-quality 3D images; however, patient dose and the required acquisition time restrict the number of 3D images that can be obtained. C-arm based 3D CT is therefore limited in applications for x-ray based image guidance or dynamic evaluations. 2D-3D model-based registration, using a single-plane 2D digital radiographic system, does allow for rapid 3D localization. Our goal is to investigate, over a clinically practical range, the impact of x-ray exposure on the resulting 3D localization precision. In this paper it is assumed that the tracked instrument incorporates a rigidly attached 3D object with a known configuration of markers. A 2D image is obtained by a digital fluoroscopic x-ray system and corrected for XRII distortions (±0.035 mm) and mechanical C-arm shift (±0.080 mm). A least-squares projection-Procrustes analysis is then used to calculate the 3D position from the measured 2D marker locations. The effect of x-ray exposure on the precision of 2D marker localization and on 3D object localization was investigated using numerical simulations and x-ray experiments. The results show a nearly linear relationship between 2D marker localization precision and 3D localization precision; however, a significant amplification of error, nonuniformly distributed among the three major axes, occurs and is demonstrated. To obtain a 3D localization error of less than ±1.0 mm for an object with 20 mm marker spacing, the 2D localization precision must be better than ±0.07 mm. This requirement was met for all investigated nominal x-ray exposures at 28 cm FOV, and

  10. Degradation data analysis based on a generalized Wiener process subject to measurement error

    Science.gov (United States)

    Li, Junxing; Wang, Zhihua; Zhang, Yongbo; Fu, Huimin; Liu, Chengrui; Krishnaswamy, Sridhar

    2017-09-01

    Wiener processes have received considerable attention in degradation modeling over the last two decades. In this paper, we propose a generalized Wiener process degradation model that takes unit-to-unit variation, time-correlated structure and measurement error into considerations simultaneously. The constructed methodology subsumes a series of models studied in the literature as limiting cases. A simple method is given to determine the transformed time scale forms of the Wiener process degradation model. Then model parameters can be estimated based on a maximum likelihood estimation (MLE) method. The cumulative distribution function (CDF) and the probability distribution function (PDF) of the Wiener process with measurement errors are given based on the concept of the first hitting time (FHT). The percentiles of performance degradation (PD) and failure time distribution (FTD) are also obtained. Finally, a comprehensive simulation study is accomplished to demonstrate the necessity of incorporating measurement errors in the degradation model and the efficiency of the proposed model. Two illustrative real applications involving the degradation of carbon-film resistors and the wear of sliding metal are given. The comparative results show that the constructed approach can derive a reasonable result and an enhanced inference precision.
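A simulation sketch of the kind of model the abstract describes, assuming the common special case X(t) = λΛ(t) + σB(Λ(t)) with a power-law transformed time scale and i.i.d. Gaussian measurement error (the paper's full model and MLE procedure are richer); all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_degradation(t, lam=0.8, sigma=0.3, sigma_eps=0.1, power=1.2):
    """One degradation path of a Wiener process with transformed time scale
    Lambda(t) = t**power, plus i.i.d. Gaussian measurement error."""
    lam_t = t ** power
    dB = rng.normal(0, np.sqrt(np.diff(lam_t, prepend=0.0)))
    x_true = lam * lam_t + sigma * np.cumsum(dB)       # latent degradation
    return x_true + rng.normal(0, sigma_eps, len(t))   # observed with error

t = np.linspace(0.0, 10.0, 101)
paths = np.array([simulate_degradation(t) for _ in range(50)])

# Crude empirical first-hitting-time estimate against a failure threshold D
# (the paper derives the FHT distribution analytically instead).
D = 8.0
hits = [t[np.argmax(p >= D)] if (p >= D).any() else np.nan for p in paths]
print("median empirical first hitting time:", np.nanmedian(hits))
```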

  11. Fault Analysis of Wind Turbines Based on Error Messages and Work Orders

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Larsen, Jesper Abildgaard; Stoustrup, Jakob

    2012-01-01

In this paper, data describing the operation and maintenance of an offshore wind farm is presented and analysed. Two different sets of data are presented: the first is auto-generated error messages from the Supervisory Control and Data Acquisition (SCADA) system; the other is the work orders describing the service performed at the individual turbines. The auto-generated alarms are analysed by applying a cleaning procedure to identify the alarms related to components. A severity, occurrence, and detection analysis is performed on the work orders. The outcomes of the two analyses are then compared to identify common fault types and areas where further data analysis would be beneficial for improving the operation and maintenance of wind turbines in the future.
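The severity/occurrence/detection analysis mentioned above is FMEA-style; a risk priority number (RPN) is conventionally the product of the three scores. A toy sketch (component names and scores invented):

```python
# FMEA-style scoring of work-order fault records: RPN = S * O * D.
work_orders = [
    {"component": "gearbox",   "severity": 8, "occurrence": 3, "detection": 6},
    {"component": "pitch",     "severity": 6, "occurrence": 5, "detection": 4},
    {"component": "converter", "severity": 7, "occurrence": 4, "detection": 3},
]

for wo in work_orders:
    wo["rpn"] = wo["severity"] * wo["occurrence"] * wo["detection"]

# Rank components by risk priority to target maintenance effort.
for wo in sorted(work_orders, key=lambda w: w["rpn"], reverse=True):
    print(f'{wo["component"]:10s} RPN = {wo["rpn"]}')
```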

  12. Thermal-Induced Errors Prediction and Compensation for a Coordinate Boring Machine Based on Time Series Analysis

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2014-01-01

To improve CNC machine tool precision, a thermal error model for the motorized spindle was proposed based on time series analysis, considering the length of the cutting tools and the thermal tilt angles, and real-time error compensation was implemented. A five-point method was applied to measure the radial thermal tilts and axial expansion of the spindle with eddy current sensors, solving the problem that a three-point measurement cannot obtain the radial thermal angle errors. Then the stationarity of the thermal error sequences was determined by the augmented Dickey-Fuller test, and the autocorrelation/partial autocorrelation functions were applied to identify the model pattern. By combining Yule-Walker equations with information criteria, the order and parameters of the models were solved effectively, which improved the prediction accuracy and generalization ability. The results indicated that the prediction accuracy of the time series model could reach up to 90%. In addition, the maximum axial error decreased from 39.6 μm to 7 μm after error compensation, and the machining accuracy was improved by 89.7%. Moreover, the X/Y-direction accuracy can reach up to 77.4% and 86%, respectively, which demonstrates that the proposed methods of measurement, modeling, and compensation are effective.
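A condensed sketch of the modeling steps named above, using statsmodels in place of the authors' implementation; the thermal-error series is synthetic, and the AR order is fixed at 2 for brevity rather than selected via ACF/PACF and information criteria.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.ar_model import AutoReg

# Synthetic stand-in for a measured spindle thermal-error sequence (µm).
rng = np.random.default_rng(5)
e = np.zeros(500)
for k in range(2, 500):                          # AR(2)-like, stationary
    e[k] = 1.4 * e[k - 1] - 0.45 * e[k - 2] + rng.normal(0, 0.2)

# 1) Stationarity check (augmented Dickey-Fuller test).
adf_stat, p_value = adfuller(e)[:2]
print(f"ADF statistic = {adf_stat:.2f}, p = {p_value:.3f}")

# 2) Fit an AR model (order selection omitted here for brevity).
model = AutoReg(e, lags=2).fit()
print(model.params)

# 3) One-step-ahead predictions, as would feed real-time compensation.
print(model.predict(start=490, end=499))
```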

  13. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  14. A model-based and computer-aided approach to analysis of human errors in nuclear power plants

    International Nuclear Information System (INIS)

    Yoon, Wan C.; Lee, Yong H.; Kim, Young S.

    1996-01-01

Since the operator's mission in NPPs is increasingly defined by cognitive tasks such as monitoring, diagnosis and planning, the focus of human error analysis should also move from external actions to internal decision-making processes. While more elaborate analysis of cognitive aspects of human errors will help understand their causes and derive effective countermeasures, a lack of framework and an arbitrary resolution of description may hamper the effectiveness of such analysis. This paper presents new model-based schemes of event description and error classification as well as an interactive computerized support system. The schemes and the support system were produced in an effort to develop an improved version of HPES. The use of a decision-making model enables the analyst to document cognitive aspects of human performance explicitly and in a proper resolution. The stage-specific terms used in the proposed schemes make the task of characterizing human errors easier and more confident for field analysts. The support system was designed to help the analyst achieve a contextually well-integrated analysis throughout the different parts of HPES.

  15. Analysis of error patterns in clinical radiotherapy

    International Nuclear Information System (INIS)

    Macklis, Roger; Meier, Tim; Barrett, Patricia; Weinhous, Martin

    1996-01-01

Purpose: Until very recently, prescription errors and adverse treatment events have rarely been studied or reported systematically in oncology. We wished to understand the spectrum and severity of radiotherapy errors that take place on a day-to-day basis in a high-volume academic practice and to understand the resource needs and quality assurance challenges placed on a department by rapid upswings in contract-based clinical volumes requiring additional operating hours, procedures, and personnel. The goal was to define clinical benchmarks for operating safety and to detect error-prone treatment processes that might function as 'early warning' signs. Methods: A multi-tiered prospective and retrospective system for clinical error detection and classification was developed, with formal analysis of the antecedents and consequences of all deviations from prescribed treatment delivery, no matter how trivial. A department-wide record-and-verify system was operational during this period and was used as one method of treatment verification and error detection. Brachytherapy discrepancies were analyzed separately. Results: During the analysis year, over 2000 patients were treated with over 93,000 individual fields. A total of 59 errors affecting a total of 170 individual treated fields were reported or detected during this period. After review, all of these errors were classified as Level 1 (minor discrepancy with essentially no potential for negative clinical implications). This total treatment delivery error rate (170/93,332, or 0.18%) is significantly better than corresponding error rates reported for other hospital and oncology treatment services, perhaps reflecting the relatively sophisticated error avoidance and detection procedures used in modern clinical radiation oncology. Error rates were independent of linac model and manufacturer, time of day (normal operating hours versus late evening or early morning) or clinical machine volumes. There was some relationship to

  16. Multivariate analysis for the estimation of target localization errors in fiducial marker-based radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Takamiya, Masanori [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501, Japan and Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Nakamura, Mitsuhiro, E-mail: m-nkmr@kuhp.kyoto-u.ac.jp; Akimoto, Mami; Ueki, Nami; Yamada, Masahiro; Matsuo, Yukinori; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Graduate School of Medicine, Kyoto University, Kyoto 606-8507 (Japan); Tanabe, Hiroaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047 (Japan); Kokubo, Masaki [Division of Radiation Oncology, Institute of Biomedical Research and Innovation, Kobe 650-0047, Japan and Department of Radiation Oncology, Kobe City Medical Center General Hospital, Kobe 650-0047 (Japan); Itoh, Akio [Department of Nuclear Engineering, Graduate School of Engineering, Kyoto University, Kyoto 606-8501 (Japan)

    2016-04-15

Purpose: To assess the target localization error (TLE) in terms of the distance between the target and the localization point estimated from the surrogates (|TMD|), the average of respiratory motion for the surrogates and the target (|aRM|), and the number of fiducial markers used for estimating the target (n). Methods: This study enrolled 17 lung cancer patients who subsequently underwent four fractions of real-time tumor tracking irradiation. Four or five fiducial markers were implanted around the lung tumor. The three-dimensional (3D) distance between the tumor and markers was at maximum 58.7 mm. One of the markers was used as the target (P_t), and those markers with a 3D |TMD_n| ≤ 58.7 mm at end-exhalation were then selected. The estimated target position (P_e) was calculated from a localization point consisting of one to three markers except P_t. Respiratory motion for P_t and P_e was defined as the root mean square of each displacement, and |aRM| was calculated from the mean value. TLE was defined as the root mean square of each difference between P_t and P_e during the monitoring of each fraction. These procedures were performed repeatedly using the remaining markers. To provide the best guidance on the answer with n and |TMD|, fiducial markers with a 3D |aRM| ≥ 10 mm were selected. Finally, a total of 205, 282, and 76 TLEs that fulfilled the 3D |TMD| and 3D |aRM| criteria were obtained for n = 1, 2, and 3, respectively. Multiple regression analysis (MRA) was used to evaluate TLE as a function of |TMD| and |aRM| in each n. Results: |TMD| for n = 1 was larger than that for n = 3. Moreover, |aRM| was almost constant for all n, indicating a similar scale for the marker's motion near the lung tumor. MRA showed that |aRM| in the left–right direction was the major cause of TLE; however, the contribution made little difference to the 3D TLE because of the small amount of motion in the left–right direction. The TLE

  17. Research on Electronic Transformer Data Synchronization Based on Interpolation Methods and Their Error Analysis

    Directory of Open Access Journals (Sweden)

    Pang Fubin

    2015-09-01

In this paper, the origin of the data synchronization problem is analyzed first, and then three common interpolation methods are introduced to solve it. Allowing for the most general situation, the paper divides the interpolation error into harmonic and transient components, and the error expression of each method is derived and analyzed. Besides, the interpolation errors of the linear, quadratic, and cubic methods are computed at different sampling rates, harmonic orders, and transient components. Further, the interpolation accuracy and computational cost of each method are compared. The research results provide theoretical guidance for selecting the interpolation method in data synchronization applications of electronic transformers.
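A quick numerical illustration of the comparison the abstract describes (a synthetic 50 Hz signal with one harmonic, resampled at a half-sample offset; these are not the paper's test conditions):

```python
import numpy as np
from scipy.interpolate import interp1d

f, fs = 50.0, 4000.0                       # signal frequency and sampling rate (Hz)
t = np.arange(0, 0.04, 1 / fs)             # two cycles of samples
x = np.sin(2 * np.pi * f * t) + 0.2 * np.sin(2 * np.pi * 5 * f * t)

# Resample halfway between original samples, as in data synchronization.
t_new = t[3:-3] + 0.5 / fs
truth = np.sin(2 * np.pi * f * t_new) + 0.2 * np.sin(2 * np.pi * 5 * f * t_new)

for kind in ("linear", "quadratic", "cubic"):
    est = interp1d(t, x, kind=kind)(t_new)
    rms = np.sqrt(np.mean((est - truth) ** 2))
    print(f"{kind:9s} RMS interpolation error: {rms:.2e}")
```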

  18. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
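A toy reconstruction of the latency bookkeeping the methodology implies (simulate fault occurrence times, then record discovery whenever the faulted region is next touched by the workload); all distributions and counts are invented, not the VAX 11/780 measurements.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical access times (hours) to each of 200 memory regions over 3 days.
n_regions, horizon = 200, 72.0
access_times = [np.sort(rng.uniform(0, horizon, rng.integers(1, 40)))
                for _ in range(n_regions)]

# Inject one fault per region at a random time; latency = time until next access.
latencies = []
for accesses in access_times:
    fault_t = rng.uniform(0, horizon)
    later = accesses[accesses > fault_t]
    if later.size:                       # discovered on the next touch
        latencies.append(later[0] - fault_t)
# Regions never accessed again stay undiscovered, analogous to the paper's
# estimate of the percentage of undiscovered errors.
print(f"mean error latency: {np.mean(latencies):.1f} h, "
      f"undiscovered: {100 * (1 - len(latencies) / n_regions):.0f}%")
```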

  19. Research on the Reliability Analysis of the Integrated Modular Avionics System Based on the AADL Error Model

    Directory of Open Access Journals (Sweden)

    Peng Wang

    2018-01-01

    Full Text Available In recent years, the integrated modular avionics (IMA) concept has been introduced to replace traditional federated avionics. Different avionics functions are hosted on a shared IMA platform, which adopts partition technologies to provide logical isolation among them. The IMA architecture can provide more sophisticated and powerful avionics functionality; meanwhile, resource sharing introduces unintended interconnections among functions, making the failure propagation modes more complex. Therefore, this paper proposes an architecture analysis and design language (AADL) based method to establish a reliability model of the IMA platform. The error behavior of individual software and hardware components in the IMA system is modeled, and the corresponding AADL error model of failure propagation among components, and between software and hardware, is given. Finally, the display function of the IMA platform is taken as an example to illustrate the effectiveness of the proposed method.

  20. Feasibility of theoretical formulas on the anisotropy of shale based on laboratory measurement and error analysis

    Science.gov (United States)

    Xie, Jianyong; Di, Bangrang; Wei, Jianxin; Luan, Xinyuan; Ding, Pinbo

    2015-04-01

    This paper designs a total-angle ultrasonic test method to measure the P-wave velocities (vp) and the vertically and horizontally polarized shear-wave velocities (vsv and vsh) at all angles to the bedding plane on different kinds of strongly anisotropic shale. The observations are compared with the corresponding theoretical curves calculated from various vertical transversely isotropic (TI) medium theories, in order to assess how well each theory characterizes the dynamic behavior of a TI medium and to identify, among the various theoretical formulas, a more accurate and precise theory together with its range of validity for characterizing the strong anisotropy of shale. At low phase angles theta, the Berryman expressions provide a much better agreement with the measured data for vp and vsv on shale. All three theories lead to larger deviations in the approximation of vsv than of vp and vsh. Furthermore, we created synthetic comparative ideal physical models (from coarse bakelite, cambric bakelite, and paper bakelite) as supplements to natural shale, which are used to model shale with different anisotropy, in order to study the effects of the anisotropic parameters on the applicability of the preferred TI theories, especially for vsv. We found that when the P-wave anisotropy ɛ and the S-wave anisotropy γ exceed 0.25, the Berryman curve is the best fit for vp and vsv on shale.
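
    For reference, one widely used TI formulation of the kind compared in this work is Thomsen's weak-anisotropy approximation for the phase velocities; the sketch below uses assumed velocities and Thomsen parameters, not the paper's measurements.

```python
import numpy as np

# Thomsen weak-anisotropy phase velocities for a TI medium (illustrative values).
vp0, vs0 = 3800.0, 2200.0              # vertical P and S velocities (m/s), assumed
eps, delta, gamma = 0.30, 0.10, 0.28   # Thomsen parameters, assumed

theta = np.radians(np.arange(0, 91, 5))
s, c = np.sin(theta), np.cos(theta)

vp = vp0 * (1 + delta * s**2 * c**2 + eps * s**4)
vsv = vs0 * (1 + (vp0 / vs0)**2 * (eps - delta) * s**2 * c**2)
vsh = vs0 * (1 + gamma * s**2)

for th, p, sv, sh in zip(np.degrees(theta), vp, vsv, vsh):
    print(f"theta={th:4.0f}  vp={p:7.1f}  vsv={sv:7.1f}  vsh={sh:7.1f}")
```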

  1. Web-Based Information Management System for the Investigation, Reporting, and Analysis of Human Error in Naval Aviation Maintenance

    National Research Council Canada - National Science Library

    Boex, Anthony

    2001-01-01

    .... The Human Factors Analysis and Classification System-Maintenance Extension (HFACS-ME) taxonomy, a framework for classifying and analyzing the presence of maintenance errors that lead to mishaps, is the foundation of this tool...

  2. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and variable measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature...... and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles for the performance of the back-EMF based sensorless algorithm embedded in a surface mounted PMSM system adopting a vector control strategy......

  3. Flyby Error Analysis Based on Contour Plots for the Cassini Tour

    Science.gov (United States)

    Stumpf, P. W.; Gist, E. M.; Goodson, T. D.; Hahn, Y.; Wagner, S. V.; Williams, P. N.

    2008-01-01

    The maneuver cancellation analysis consists of cost contour plots employed by the Cassini maneuver team. The plots are two-dimensional linear representations of a larger six-dimensional solution to a multi-maneuver, multi-encounter mission at Saturn. By plotting contours against the B·R and B·T components, it is possible to view the delta-V effects for various encounter positions in the B-plane. The plot is used in operations to help determine whether the Approach Maneuver (ensuing encounter minus three days) and/or the Cleanup Maneuver (ensuing encounter plus three days) can be cancelled, and it also provides a linear check of an integrated solution.

  4. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [OHIO STATE UNIV.]; Covey, Curt [LLNL]; Ghattas, Omar [UNIV OF TEXAS]; Graziani, Carlo [UNIV OF CHICAGO]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC/BERKELEY]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  5. LAW DISTRIBUTION APPROXIMATION ON EIGENSTATE ERRORS OF ADS-B BASED ON CUMULANT ANALYSIS OF ADS-B-RAD SYSTEM DATA DISPARITY

    Directory of Open Access Journals (Sweden)

    2017-01-01

    Full Text Available The article deals with a new approximation method for the error distribution of an enhanced-accuracy measurement system. The method is based upon the mistie analysis of this system and more robust design data. The method is considered on the example of comparing Automatic Dependent Surveillance - Broadcast (ADS-B) with the ground radar warning system used at present. The peculiarity of the considered problem is that the target parameter (aircraft swerve value) may drastically change on the scale of both measurement systems' errors during observation. That is why it is impossible to determine the position of the aircraft by repeatedly observing it with the ground radar warning system. It is only possible to compare the systems' one-shot measurements, which are called errors here. The article assumes that the error probability density of the robust measurement system (the system that has been continuously in operation) is known and that a histogram of errors is given, and the task is to obtain an asymptotic estimate of the error distribution for a new improved measurement system. This approach is based on cumulant analysis of the measurement systems' error distribution functions, which allows a proper truncation of the corresponding infinite series. The author shows that, because the measurement systems are independent, the cumulants of their error distributions are connected by a simple ratio, which allows the values to be calculated easily. To reconstruct the initial form of the distribution, Edgeworth's asymptotic series is used, with a derivative of the normal distribution as the basis function. The latter is proportional to a Hermite polynomial, so the series can be considered an orthogonal decomposition. The author presents the results of calculating the distribution of the coordinate error component measured along the normal to the aircraft path, using experimental error statistics obtained in ”RI of
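
    A compact sketch of the Edgeworth reconstruction described above, built from the first four cumulants with probabilists' Hermite polynomials; the cumulant values are illustrative, not the ADS-B/radar statistics.

```python
import numpy as np
from scipy.special import eval_hermitenorm
from scipy.stats import norm

def edgeworth_pdf(x, mean, var, k3, k4):
    """Edgeworth approximation of a pdf from its first four cumulants."""
    s = np.sqrt(var)
    z = (x - mean) / s
    g1 = k3 / s**3                  # skewness
    g2 = k4 / s**4                  # excess kurtosis
    series = (1
              + g1 / 6 * eval_hermitenorm(3, z)
              + g2 / 24 * eval_hermitenorm(4, z)
              + g1**2 / 72 * eval_hermitenorm(6, z))
    return norm.pdf(z) * series / s

# Example: mildly skewed error distribution (cumulants are illustrative).
x = np.linspace(-4, 4, 9)
print(edgeworth_pdf(x, mean=0.0, var=1.0, k3=0.3, k4=0.2))
```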

  6. Error Estimate of the Ares I Vehicle Longitudinal Aerodynamic Characteristics Based on Turbulent Navier-Stokes Analysis

    Science.gov (United States)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2011-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on the unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, with an assumption that the flow is fully turbulent over the entire vehicle. This effort was designed to complement the prior computational activities conducted over the past five years in support of the Ares I Project with the emphasis on the vehicle's last design cycle designated as the A106 configuration. Due to a lack of flight data for this particular design's outer mold line, the initial vehicle's aerodynamic predictions and the associated error estimates were first assessed and validated against the available experimental data at representative wind tunnel flow conditions pertinent to the ascent phase of the trajectory without including any propulsion effects. Subsequently, the established procedures were then applied to obtain the longitudinal aerodynamic predictions at the selected flight flow conditions. Sample computed results and the correlations with the experimental measurements are presented. In addition, the present analysis includes the relevant data to highlight the balance between the prediction accuracy against the grid size and, thus, the corresponding computer resource requirements for the computations at both wind tunnel and flight flow conditions. NOTE: Some details have been removed from selected plots and figures in compliance with the sensitive but unclassified (SBU) restrictions. However, the content still conveys the merits of the technical approach and the relevant results.
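
    One standard way to turn solutions on three systematically refined grids into an observed convergence order and a discretization-error estimate is Richardson extrapolation; the sketch below uses illustrative values and is not necessarily the exact procedure applied to the USM3D results.

```python
import numpy as np

# Solutions of an integrated quantity (e.g., an aerodynamic coefficient) on
# three systematically refined grids; values and refinement ratio are illustrative.
f_coarse, f_medium, f_fine = 0.5124, 0.5061, 0.5042
r = 2.0                                        # grid refinement ratio

# Observed order of convergence and Richardson-extrapolated limit value.
p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)

# Discretization-error estimate for the fine grid.
err = abs(f_fine - f_exact)
print(f"observed order p = {p:.2f}, extrapolated f = {f_exact:.5f}, error ≈ {err:.1e}")
```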

  7. Error analysis of stochastic gradient descent ranking.

    Science.gov (United States)

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
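
    A toy instance of kernel-based stochastic gradient descent for ranking with a least squares loss on pairwise score differences is sketched below; the Gaussian kernel, step-size schedule, regularization and data are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_kernel(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2, axis=-1) / (2 * sigma**2))

# Toy data: scores y depend smoothly on 1-D inputs x.
X = rng.uniform(0, 1, size=(200, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.normal(size=200)

centers, alphas = [], []        # kernel expansion of the ranking function f

def f(x):
    if not centers:
        return 0.0
    return float(np.dot(alphas, gauss_kernel(np.asarray(centers), x)))

eta, lam = 0.5, 1e-3            # step size and regularization (assumed)
for t in range(1, 2001):
    i, j = rng.integers(0, len(X), size=2)
    resid = (y[i] - y[j]) - (f(X[i]) - f(X[j]))   # pairwise least squares residual
    step = eta / np.sqrt(t)
    # Shrink existing coefficients (regularization), then add the sampled pair.
    alphas = [(1 - step * lam) * a for a in alphas]
    centers += [X[i], X[j]]
    alphas += [step * resid, -step * resid]

pairs = rng.integers(0, len(X), size=(500, 2))
correct = np.mean([(f(X[i]) - f(X[j])) * (y[i] - y[j]) > 0 for i, j in pairs])
print(f"pairwise ordering accuracy: {correct:.2f}")
```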

  8. Robust Online State of Charge Estimation of Lithium-Ion Battery Pack Based on Error Sensitivity Analysis

    Directory of Open Access Journals (Sweden)

    Ting Zhao

    2015-01-01

    Full Text Available Accurate and reliable state of charge (SOC) estimation is a key enabling technique for large-format lithium-ion battery packs due to its vital role in battery safety and effective management. This paper makes three contributions to the existing literature through robust algorithms. (1) An observer-based SOC estimation error model is established, in which the parameters crucial to SOC estimation accuracy are determined by quantitative analysis, providing a basis for parameter updates. (2) An estimation method for a battery pack that takes the inconsistency of cells into consideration is proposed, ensuring that every battery's SOC ranges from 0 to 1 and effectively avoiding overcharging/overdischarging. Online estimation of the parameters is also presented in this paper. (3) The SOC estimation accuracy of the battery pack is verified using a hardware-in-the-loop simulation platform. The experimental results at various dynamic test conditions, temperatures, and initial SOC differences between two cells demonstrate the efficacy of the proposed method.

  9. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  10. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    Full Text Available The main purpose of the present study is to identify and describe TFL learners' writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by obtaining the C1 (advanced) certificate at TÖMER at Gaziantep University. The data of the present study were collected from 14 students' writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors

  11. Game Design Principles based on Human Error

    Directory of Open Access Journals (Sweden)

    Guilherme Zaffari

    2016-03-01

    Full Text Available This paper presents the results of the authors' research on the incorporation of Human Error, through design principles, into video game design. In general, designers must consider Human Error factors throughout video game interface development; however, when the core design is concerned, adaptations are needed, since challenge is an important factor for fun, and from the perspective of Human Error, challenge can be considered a flaw in the system. The research used Human Error classifications, data triangulation via predictive human error analysis, and the expanded flow theory to design a set of principles that match the design of playful challenges with the principles of Human Error. From the results, it was possible to conclude that the application of Human Error in game design has a positive effect on player experience, allowing the player to interact only with errors associated with the intended aesthetics of the game.

  12. Group representations, error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E

    1996-01-01

    This report continues the discussion of unitary error bases and quantum codes. Nice error bases are characterized in terms of the existence of certain characters in a group. A general construction for error bases which are non-abelian over the center is given. The method for obtaining codes due to Calderbank et al. is generalized and expressed purely in representation theoretic terms. The significance of the inertia subgroup both for constructing codes and obtaining the set of transversally implementable operations is demonstrated.

  13. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.; Wang, Bo; Lubineau, Gilles

    2016-01-01

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances

  14. An analysis of errors of commission in the Nordic nuclear power plants based on plant operating experience

    International Nuclear Information System (INIS)

    Pyy, P.; Bento, J.P.; Flodin, Y.

    2001-12-01

    The report presents the methodology followed, the material used and conclusions drawn in a study of active human failures. First, the report discusses the concept of active human failures in the context of human errors. Then, a simplified methodology is presented applicable to analysis of operating experience and documenting all kinds of human failures. Also the material and analysis procedure used in the three parts of the study are discussed. Finally, some selected highlights of results are presented with common conclusions and recommendations. (au)

  15. Real-time prediction of atmospheric Lagrangian coherent structures based on forecast data: An application and error analysis

    Science.gov (United States)

    BozorgMagham, Amir E.; Ross, Shane D.; Schmale, David G.

    2013-09-01

    The language of Lagrangian coherent structures (LCSs) provides a new means for studying transport and mixing of passive particles advected by an atmospheric flow field. Recent observations suggest that LCSs govern the large-scale atmospheric motion of airborne microorganisms, paving the way for more efficient models and management strategies for the spread of infectious diseases affecting plants, domestic animals, and humans. In addition, having reliable predictions of the timing of hyperbolic LCSs may contribute to improved aerobiological sampling of microorganisms with unmanned aerial vehicles and LCS-based early warning systems. Chaotic atmospheric dynamics lead to unavoidable forecasting errors in the wind velocity field, which compounds errors in LCS forecasting. In this study, we reveal the cumulative effects of errors of (short-term) wind field forecasts on the finite-time Lyapunov exponent (FTLE) fields and the associated LCSs when realistic forecast plans impose certain limits on the forecasting parameters. Objectives of this paper are to (a) quantify the accuracy of prediction of FTLE-LCS features and (b) determine the sensitivity of such predictions to forecasting parameters. Results indicate that forecasts of attracting LCSs exhibit less divergence from the archive-based LCSs than the repelling features. This result is important since attracting LCSs are the backbone of long-lived features in moving fluids. We also show under what circumstances one can trust the forecast results if one merely wants to know if an LCS passed over a region and does not need to precisely know the passage time.
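
    An FTLE field of the kind underlying LCS analysis can be computed from the gradient of a flow map; the sketch below uses the standard double-gyre test flow rather than atmospheric forecast data.

```python
import numpy as np

# Double-gyre velocity field, a standard test case for FTLE/LCS computation.
A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def vel(t, x, y):
    a, b = eps * np.sin(om * t), 1 - 2 * eps * np.sin(om * t)
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

# Advect a grid of tracers with RK4 to obtain the flow map over [0, T].
nx, ny, T, dt = 201, 101, 15.0, 0.05
x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
px, py, t = x.copy(), y.copy(), 0.0
while t < T - 1e-9:
    k1 = vel(t, px, py)
    k2 = vel(t + dt/2, px + dt/2*k1[0], py + dt/2*k1[1])
    k3 = vel(t + dt/2, px + dt/2*k2[0], py + dt/2*k2[1])
    k4 = vel(t + dt, px + dt*k3[0], py + dt*k3[1])
    px += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    py += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt

# FTLE from the largest eigenvalue of the Cauchy-Green tensor of the flow map.
dx, dy = 2 / (nx - 1), 1 / (ny - 1)
dpx_dy, dpx_dx = np.gradient(px, dy, dx)
dpy_dy, dpy_dx = np.gradient(py, dy, dx)
C11 = dpx_dx**2 + dpy_dx**2
C12 = dpx_dx * dpx_dy + dpy_dx * dpy_dy
C22 = dpx_dy**2 + dpy_dy**2
lmax = 0.5 * (C11 + C22) + np.sqrt(0.25 * (C11 - C22)**2 + C12**2)
ftle = np.log(np.sqrt(lmax)) / T
print("max FTLE:", float(ftle.max()))
```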

  16. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their role in industrial systems are discussed. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and "criteria for analyzability" which must be met by industrial systems so that a systematic analysis can be performed are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  17. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production...... of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the ‘error’ in a pedagogically insightful way or to empirically investigate

  18. An Analysis of Medication Errors at the Military Medical Center: Implications for a Systems Approach for Error Reduction

    National Research Council Canada - National Science Library

    Scheirman, Katherine

    2001-01-01

    An analysis was performed of all inpatient medication errors at a military academic medical center during the year 2000, based on the causes of medication errors as described by current research in the field...

  19. Bit Error Rate Performance Analysis of a Threshold-Based Generalized Selection Combining Scheme in Nakagami Fading Channels

    Directory of Open Access Journals (Sweden)

    Kousa Maan

    2005-01-01

    Full Text Available The severity of fading on mobile communication channels calls for the combining of multiple diversity sources to achieve acceptable error rate performance. Traditional approaches perform the combining of the different diversity sources using either the conventional selective diversity combining (CSC), equal-gain combining (EGC), or maximal-ratio combining (MRC) schemes. CSC and MRC are the two extremes of compromise between performance quality and complexity. Some researchers have proposed a generalized selection combining scheme (GSC) that combines the best subset of branches out of the available diversity resources. In this paper, we analyze a generalized selection combining scheme based on a threshold criterion rather than a fixed-size subset of the best channels. In this scheme, only those diversity branches whose energy levels are above a specified threshold are combined. Closed-form analytical solutions for the BER performance of this scheme over Nakagami fading channels are derived. We also discuss the merits of this scheme over GSC.
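
    A Monte Carlo sketch of the threshold-combining idea: branches whose instantaneous SNR exceeds a threshold are MRC-combined, and the BPSK bit error rate over Nakagami-m fading is estimated by simulation. The parameters and the fall-back to the strongest branch are assumptions, not the paper's closed-form analysis.

```python
import numpy as np
from scipy.special import erfc

rng = np.random.default_rng(7)

m, L = 2.0, 4             # Nakagami fading parameter and number of branches
snr_db, mu = 8.0, 0.5     # average branch SNR and threshold fraction (assumed)
gbar = 10 ** (snr_db / 10)
trials = 200_000

# Per-branch instantaneous SNRs: Nakagami-m power gains are Gamma(m, 1/m).
g = gbar * rng.gamma(shape=m, scale=1.0 / m, size=(trials, L))

# Threshold combining: MRC-combine every branch above mu*gbar; if none
# qualifies, fall back to the strongest branch (an assumed outage rule).
mask = g > mu * gbar
combined = np.where(mask.any(axis=1), (g * mask).sum(axis=1), g.max(axis=1))

ber = np.mean(0.5 * erfc(np.sqrt(combined)))   # exact BPSK BER given the SNR
print(f"threshold-GSC BER over Nakagami-{m:g} fading: {ber:.2e}")
```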

  20. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    Full Text Available A scheme is presented for calculating the errors of dry matter values which occur during the approximation of data with growth curves determined by the analytical method (logistic function) and by the numerical method (Richards function). Formulae are then given which describe the absolute errors of the growth characteristics: growth rate (GR), relative growth rate (RGR), unit leaf rate (ULR) and leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the obtained estimates has been carried out, and the usefulness of the joint application of statistical methods and error calculus in plant growth analysis has been demonstrated.
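
    A minimal example of the error calculus for one growth characteristic, the relative growth rate RGR = (ln W2 - ln W1)/(t2 - t1); the dry matter values and their absolute errors are illustrative.

```python
import math

# Absolute error of the relative growth rate (RGR) by differential error calculus.
W1, dW1 = 1.8, 0.05     # dry matter at t1 (g) and its absolute error
W2, dW2 = 4.6, 0.09     # dry matter at t2 (g) and its absolute error
t1, t2 = 10.0, 20.0     # days

rgr = (math.log(W2) - math.log(W1)) / (t2 - t1)
# |dRGR/dW1|*dW1 + |dRGR/dW2|*dW2
drgr = (dW1 / W1 + dW2 / W2) / (t2 - t1)
print(f"RGR = {rgr:.4f} ± {drgr:.4f} day^-1")
```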

  1. Trial application of a technique for human error analysis (ATHEANA)

    International Nuclear Information System (INIS)

    Bley, D.C.; Cooper, S.E.; Parry, G.W.

    1996-01-01

    The new method for HRA, ATHEANA, has been developed based on a study of the operating history of serious accidents and an understanding of the reasons why people make errors. Previous publications associated with the project have dealt with the theoretical framework under which errors occur and the retrospective analysis of operational events. This is the first attempt to use ATHEANA in a prospective way, to select and evaluate human errors within the PSA context

  2. Initialization Errors in Quantum Data Base Recall

    OpenAIRE

    Natu, Kalyani

    2016-01-01

    This paper analyzes the relationship between initialization error and recall of a specific memory in the Grover algorithm for quantum database search. It is shown that the correct memory is obtained with high probability even when the initial state is far removed from the correct one. The analysis is done by relating the variance of error in the initial state to the recovery of the correct memory and the surprising result is obtained that the relationship between the two is essentially linear.

  3. Human Error Analysis by Fuzzy-Set

    International Nuclear Information System (INIS)

    Situmorang, Johnny

    1996-01-01

    In conventional HRA the probability of error is treated as a single exact value obtained by constructing an event tree, but here Fuzzy-Set Theory is used instead. Fuzzy set theory treats the probability of error as a plausibility described by a linguistic variable. Most parameters or variables in human engineering are defined verbally: good, fairly good, worst, etc., each describing a range of probability values. As an example, this analysis quantifies the human error in a calibration task, where the probability of miscalibration is very low

  4. Analysis of field errors in existing undulators

    International Nuclear Information System (INIS)

    Kincaid, B.M.

    1990-01-01

    The Advanced Light Source (ALS) and other third generation synchrotron light sources have been designed for optimum performance with undulator insertion devices. The performance requirements for these new undulators are explored, with emphasis on the effects of errors on source spectral brightness. Analysis of magnetic field data for several existing hybrid undulators is presented, decomposing errors into systematic and random components. An attempt is made to identify the sources of these errors, and recommendations are made for designing future insertion devices. 12 refs., 16 figs

  5. Analysis of ionospheric structure influences on residual ionospheric errors in GNSS radio occultation bending angles based on ray tracing simulations

    Science.gov (United States)

    Liu, Congliang; Kirchengast, Gottfried; Sun, Yueqiang; Zhang, Kefei; Norman, Robert; Schwaerz, Marc; Bai, Weihua; Du, Qifei; Li, Ying

    2018-04-01

    The Global Navigation Satellite System (GNSS) radio occultation (RO) technique is widely used to observe the atmosphere for applications such as numerical weather prediction and global climate monitoring. The ionosphere is a major error source to RO at upper stratospheric altitudes, and a linear dual-frequency bending angle correction is commonly used to remove the first-order ionospheric effect. However, the higher-order residual ionospheric error (RIE) can still be significant, so it needs to be further mitigated for high-accuracy applications, especially from 35 km altitude upward, where the RIE is most relevant compared to the decreasing magnitude of the atmospheric bending angle. In a previous study we quantified RIEs using an ensemble of about 700 quasi-realistic end-to-end simulated RO events, finding typical RIEs at the 0.1 to 0.5 µrad noise level, but were left with 26 exceptional events with anomalous RIEs at the 1 to 10 µrad level that remained unexplained. In this study, we focused on investigating the causes of the high RIE of these exceptional events, employing detailed along-ray-path analyses of atmospheric and ionospheric refractivities, impact parameter changes, and bending angles and RIEs under asymmetric and symmetric ionospheric structures. We found that the main causes of the high RIEs are a combination of physics-based effects - where asymmetric ionospheric conditions play the primary role, more than the ionization level driven by solar activity - and technical ray tracer effects due to occasions of imperfect smoothness in ionospheric refractivity model derivatives. We also found that along-ray impact parameter variations of more than 10 to 20 m are possible due to ionospheric asymmetries and, depending on prevailing horizontal refractivity gradients, are positive or negative relative to the initial impact parameter at the GNSS transmitter. Furthermore, mesospheric RIEs are found generally higher than upper-stratospheric ones, likely due to

  6. Learning from Past Classification Errors: Exploring Methods for Improving the Performance of a Deep Learning-based Building Extraction Model through Quantitative Analysis of Commission Errors for Optimal Sample Selection

    Science.gov (United States)

    Swan, B.; Laverdiere, M.; Yang, L.

    2017-12-01

    In the past five years, deep Convolutional Neural Networks (CNN) have been increasingly favored for computer vision applications due to their high accuracy and ability to generalize well in very complex problems; however, details of how they function and in turn how they may be optimized are still imperfectly understood. In particular, their complex and highly nonlinear network architecture, including many hidden layers and self-learned parameters, as well as their mathematical implications, presents open questions about how to effectively select training data. Without knowledge of the exact ways the model processes and transforms its inputs, intuition alone may fail as a guide to selecting highly relevant training samples. Working in the context of improving a CNN-based building extraction model used for the LandScan USA gridded population dataset, we have approached this problem by developing a semi-supervised, highly scalable approach to select training samples from a dataset of identified commission errors. Due to the large scope of this project, tens of thousands of potential samples could be derived from identified commission errors. To efficiently trim those samples down to a manageable and effective set for creating additional training samples, we statistically summarized the spectral characteristics of areas with high rates of commission errors at the image tile level and grouped these tiles using affinity propagation. Highly representative members of each commission-error cluster were then used to select sites for training sample creation. The model will be incrementally re-trained with the new training data to allow an assessment of how the addition of different types of samples affects model performance, such as precision and recall rates. By using quantitative analysis and data clustering techniques to select highly relevant training samples, we hope to improve model performance in a manner that is resource efficient, both in terms of training process
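
    The tile-grouping step can be sketched with scikit-learn's affinity propagation on per-tile feature summaries; the features below are hypothetical stand-ins for the spectral statistics described above.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(2)

# Per-tile spectral summaries of commission-error regions (synthetic stand-ins):
# e.g., means of a few image bands plus the tile's commission-error rate.
tiles = np.vstack([
    rng.normal([0.3, 0.1, 0.5, 0.2, 0.05], 0.02, size=(40, 5)),
    rng.normal([0.7, 0.3, 0.2, 0.6, 0.20], 0.02, size=(40, 5)),
    rng.normal([0.5, 0.8, 0.4, 0.1, 0.10], 0.02, size=(40, 5)),
])

ap = AffinityPropagation(random_state=0).fit(tiles)
print("clusters found:", len(ap.cluster_centers_indices_))
# Exemplar tiles (highly representative members) guide training-sample creation.
print("exemplar tile indices:", ap.cluster_centers_indices_)
```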

  7. Analysis and Compensation of Modulation Angular Rate Error Based on Missile-Borne Rotation Semi-Strapdown Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Jiayu Zhang

    2018-05-01

    Full Text Available The Semi-Strapdown Inertial Navigation System (SSINS) provides a new solution to attitude measurement of a high-speed rotating missile. However, micro-electro-mechanical-systems (MEMS) inertial measurement unit (MIMU) outputs are corrupted by significant sensor errors. In order to improve the navigation precision, a rotation modulation technology method called Rotation Semi-Strapdown Inertial Navigation System (RSSINS) is introduced into SINS. In fact, the stability of the modulation angular rate is difficult to achieve in a high-speed rotation environment. The changing rotary angular rate has an impact on inertial sensor error self-compensation. In this paper, the influence of modulation angular rate error, including the acceleration-deceleration process and the instability of the angular rate, on the navigation accuracy of RSSINS is deduced, and the error characteristics of the reciprocating rotation scheme are analyzed. A new compensation method is proposed to remove or reduce sensor errors so as to make it possible to maintain high-precision autonomous navigation performance by MIMU when there is no external aid. Experiments have been carried out to validate the performance of the method. In addition, the proposed method is applicable for modulation angular rate error compensation under various dynamic conditions.

  8. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm

  9. Numeracy, Literacy and Newman's Error Analysis

    Science.gov (United States)

    White, Allan Leslie

    2010-01-01

    Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…

  10. Sensitivity analysis of periodic errors in heterodyne interferometry

    International Nuclear Information System (INIS)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-01-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors
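
    First-order Sobol' indices of the kind used in the global analysis can be estimated by a pick-freeze Monte Carlo scheme; the sketch below applies it to a toy two-input periodic-error model, not the paper's interferometer model.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy periodic-error model e = a1*sin(x1) + a2*sin(x2)^2 (coefficients assumed).
def model(x):
    return 1.0 * np.sin(x[:, 0]) + 0.4 * np.sin(x[:, 1]) ** 2

N, d = 100_000, 2
A = rng.uniform(-np.pi, np.pi, size=(N, d))
B = rng.uniform(-np.pi, np.pi, size=(N, d))
fA, fB = model(A), model(B)
var = np.concatenate([fA, fB]).var()

# Saltelli pick-freeze estimator of the first-order Sobol' index S_i.
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # replace input i only
    Si = np.mean(fB * (model(ABi) - fA)) / var
    print(f"S_{i + 1} ≈ {Si:.3f}")
```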

  11. Sensitivity analysis of periodic errors in heterodyne interferometry

    Science.gov (United States)

    Ganguly, Vasishta; Kim, Nam Ho; Kim, Hyo Soo; Schmitz, Tony

    2011-03-01

    Periodic errors in heterodyne displacement measuring interferometry occur due to frequency mixing in the interferometer. These nonlinearities are typically characterized as first- and second-order periodic errors which cause a cyclical (non-cumulative) variation in the reported displacement about the true value. This study implements an existing analytical periodic error model in order to identify sensitivities of the first- and second-order periodic errors to the input parameters, including rotational misalignments of the polarizing beam splitter and mixing polarizer, non-orthogonality of the two laser frequencies, ellipticity in the polarizations of the two laser beams, and different transmission coefficients in the polarizing beam splitter. A local sensitivity analysis is first conducted to examine the sensitivities of the periodic errors with respect to each input parameter about the nominal input values. Next, a variance-based approach is used to study the global sensitivities of the periodic errors by calculating the Sobol' sensitivity indices using Monte Carlo simulation. The effect of variation in the input uncertainty on the computed sensitivity indices is examined. It is seen that the first-order periodic error is highly sensitive to non-orthogonality of the two linearly polarized laser frequencies, while the second-order error is most sensitive to the rotational misalignment between the laser beams and the polarizing beam splitter. A particle swarm optimization technique is finally used to predict the possible setup imperfections based on experimentally generated values for periodic errors.

  12. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Matsuo, Yukinori, E-mail: ymatsuo@kuhp.kyoto-u.ac.jp; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro [Department of Radiation Oncology and Image-applied Therapy, Kyoto University, 54 Shogoin-Kawaharacho, Sakyo, Kyoto 606-8507 (Japan)

    2016-09-15

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
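
    A sketch of the ANOVA-based computation for the balanced one-factor random effect model: the within-patient mean square estimates the random component, and the between-patient mean square yields the systematic component. The simulated setup errors and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated setup errors (mm): p patients, n fractions each (balanced design).
p, n = 20, 4
Sigma_true, sigma_true = 2.0, 1.5
mu = rng.normal(0.0, Sigma_true, size=(p, 1))          # systematic offsets
e = mu + rng.normal(0.0, sigma_true, size=(p, n))      # observed setup errors

patient_means = e.mean(axis=1)
ms_between = n * patient_means.var(ddof=1)             # between-patient mean square
ms_within = e.var(axis=1, ddof=1).mean()               # within-patient mean square

sigma_random = np.sqrt(ms_within)
Sigma_systematic = np.sqrt(max((ms_between - ms_within) / n, 0.0))
print(f"random sigma ≈ {sigma_random:.2f} mm, systematic Sigma ≈ {Sigma_systematic:.2f} mm")
```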

  13. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  14. A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge- base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  15. A technique for human error analysis (ATHEANA)

    International Nuclear Information System (INIS)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge- base was developed which describes the links between performance shaping factors and resulting unsafe actions

  16. SU-E-I-83: Error Analysis of Multi-Modality Image-Based Volumes of Rodent Solid Tumors Using a Preclinical Multi-Modality QA Phantom

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Y [University of Kansas Hospital, Kansas City, KS (United States); Fullerton, G; Goins, B [University of Texas Health Science Center at San Antonio, San Antonio, TX (United States)

    2015-06-15

    Purpose: In our previous study a preclinical multi-modality quality assurance (QA) phantom that contains five tumor-simulating test objects with 2, 4, 7, 10 and 14 mm diameters was developed for accurate tumor size measurement by researchers during cancer drug development and testing. This study analyzed the errors during tumor volume measurement from preclinical magnetic resonance (MR), micro-computed tomography (micro- CT) and ultrasound (US) images acquired in a rodent tumor model using the preclinical multi-modality QA phantom. Methods: Using preclinical 7-Tesla MR, US and micro-CT scanners, images were acquired of subcutaneous SCC4 tumor xenografts in nude rats (3–4 rats per group; 5 groups) along with the QA phantom using the same imaging protocols. After tumors were excised, in-air micro-CT imaging was performed to determine reference tumor volume. Volumes measured for the rat tumors and phantom test objects were calculated using formula V = (π/6)*a*b*c where a, b and c are the maximum diameters in three perpendicular dimensions determined by the three imaging modalities. Then linear regression analysis was performed to compare image-based tumor volumes with the reference tumor volume and known test object volume for the rats and the phantom respectively. Results: The slopes of regression lines for in-vivo tumor volumes measured by three imaging modalities were 1.021, 1.101 and 0.862 for MRI, micro-CT and US respectively. For phantom, the slopes were 0.9485, 0.9971 and 0.9734 for MRI, micro-CT and US respectively. Conclusion: For both animal and phantom studies, random and systematic errors were observed. Random errors were observer-dependent and systematic errors were mainly due to selected imaging protocols and/or measurement method. In the animal study, there were additional systematic errors attributed to ellipsoidal assumption for tumor shape. The systematic errors measured using the QA phantom need to be taken into account to reduce measurement
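
    The ellipsoid volume formula and the regression step can be sketched as follows; the diameters are synthetic stand-ins for the phantom test objects.

```python
import numpy as np
from scipy import stats

# Ellipsoidal tumor volume from three perpendicular maximum diameters (mm):
# V = (pi/6) * a * b * c.  Diameters below are illustrative.
diam = np.array([
    [2.1, 2.0, 1.9], [4.2, 3.9, 4.0], [7.1, 6.8, 7.3],
    [10.2, 9.7, 10.4], [14.3, 13.8, 14.1],
])
v_img = np.pi / 6 * diam.prod(axis=1)                          # image-based volumes
v_ref = np.pi / 6 * np.array([2, 4, 7, 10, 14], float) ** 3    # reference spheres

# Slope of the regression line quantifies systematic over/underestimation.
res = stats.linregress(v_ref, v_img)
print(f"slope = {res.slope:.4f}, r^2 = {res.rvalue**2:.4f}")
```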

  17. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  18. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    Science.gov (United States)

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-I/, /æ-ε/, and /ɑ-Λ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  19. Comprehensive analysis of a medication dosing error related to CPOE.

    Science.gov (United States)

    Horsky, Jan; Kuperman, Gilad J; Patel, Vimla L

    2005-01-01

    This case study of a serious medication error demonstrates the necessity of a comprehensive methodology for the analysis of failures in interaction between humans and information systems. The authors used a novel approach to analyze a dosing error related to computer-based ordering of potassium chloride (KCl). The method included a chronological reconstruction of events and their interdependencies from provider order entry usage logs, semistructured interviews with involved clinicians, and interface usability inspection of the ordering system. Information collected from all sources was compared and evaluated to understand how the error evolved and propagated through the system. In this case, the error was the product of faults in interaction among human and system agents that methods limited in scope to their distinct analytical domains would not identify. The authors characterized errors in several converging aspects of the drug ordering process: confusing on-screen laboratory results review, system usability difficulties, user training problems, and suboptimal clinical system safeguards that all contributed to a serious dosing error. The results of the authors' analysis were used to formulate specific recommendations for interface layout and functionality modifications, suggest new user alerts, propose changes to user training, and address error-prone steps of the KCl ordering process to reduce the risk of future medication dosing errors.

  20. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.

  1. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in a nuclear power plant can cause large and small events or incidents. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced human-error prevention techniques and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, this survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the training level of employees and the training effect on actual work were in good condition. Also, employees answered that the root causes of human error were due to the working environment, including tight processes, manpower shortages, and excessive workload, rather than personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey, the best methods of preventing human error are personal equipment, substantial training and education, private mental health checks before starting work, prohibition of performing multiple tasks, compliance with procedures, and enhancement of job site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere including communication between managers and workers, and between employees and bosses

  2. Study on temperature measurement of gas turbine blade based on analysis of error caused by the reflected radiation and emission angle

    Science.gov (United States)

    Li, Dong; Feng, Chi; Gao, Shan; Chen, Liwei; Daniel, Ketui

    2018-06-01

    Accurate measurement of gas turbine blade temperature is of great significance as far as blade health monitoring is concerned. An important method for measuring this temperature is the use of a radiation pyrometer. In this research, the error of the pyrometer caused by reflected radiation from the surfaces surrounding the target and by the emission angle of the target was analyzed. Important parameters for this analysis were the view factor between interacting surfaces, the spectral directional emissivity, the pyrometer operating wavelength and the surface temperature distribution on the blades and the vanes. The interacting surfaces of the rotor blade and the vane models were discretized using triangular surface elements, from which contour integration was used to calculate the view factor between the surface elements. Spectral directional emissivities were obtained from an experimental setup of Ni-based alloy samples. A pyrometer operating wavelength of 1.6 μm was chosen. Computational fluid dynamics software was used to simulate the temperature distribution of the rotor blade and the guide vane based on actual gas turbine input parameters. Results obtained in this analysis show that the temperature error introduced by reflected radiation and emission angle ranges from  ‑23 K to 49 K.

  3. Detecting Soft Errors in Stencil based Computations

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, V. [Univ. of Utah, Salt Lake City, UT (United States); Gopalkrishnan, G. [Univ. of Utah, Salt Lake City, UT (United States); Bronevetsky, G. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-05-06

    Given the growing emphasis on system resilience, it is important to develop software-level error detectors that help trap hardware-level faults with reasonable accuracy while minimizing false alarms as well as the performance overhead introduced. We present a technique that approaches this idea by taking stencil computations as our target, and synthesizing detectors based on machine learning. In particular, we employ linear regression to generate computationally inexpensive models which form the basis for error detection. Our technique has been incorporated into a new open-source library called SORREL. In addition to reporting encouraging experimental results, we demonstrate techniques that help reduce the size of training data. We also discuss the efficacy of various detectors synthesized, as well as our future plans.
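
    In the spirit of the approach above, the sketch below trains a linear regression on a simple 1-D stencil's neighbor-to-update map and flags cells whose prediction residual exceeds a threshold; the stencil, threshold and injected fault are illustrative, not SORREL's implementation.

```python
import numpy as np

rng = np.random.default_rng(5)

# 1-D heat-equation stencil: u_new[i] = u[i] + k*(u[i-1] - 2*u[i] + u[i+1]).
k, n = 0.25, 400
u = rng.random(n)

def step(u):
    un = u.copy()
    un[1:-1] = u[1:-1] + k * (u[:-2] - 2 * u[1:-1] + u[2:])
    return un

# Train a linear model mapping (u[i-1], u[i], u[i+1]) -> u_new[i].
X = np.column_stack([u[:-2], u[1:-1], u[2:]])
y = step(u)[1:-1]
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Inject a bit-flip-like corruption into one updated cell, then detect it by
# flagging residuals that exceed a small threshold.
u_new = step(u)
u_new[137] += 0.3                       # simulated soft error
resid = np.abs(X @ w - u_new[1:-1])
flagged = np.flatnonzero(resid > 1e-6) + 1
print("cells flagged as corrupted:", flagged)
```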

  4. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is put forward and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  5. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee correctness of the analysis because they utilize approximate real number representations and pseudo-random numbers, and are thus not well suited for analyzing safety-critical applications. In this paper, we present a higher-order-logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  6. Error and Performance Analysis of MEMS-based Inertial Sensors with a Low-cost GPS Receiver

    Directory of Open Access Journals (Sweden)

    Yang Gao

    2008-03-01

    Global Navigation Satellite Systems (GNSS), such as the Global Positioning System (GPS), have been widely utilized and their applications are becoming popular, not only in military or commercial applications, but also in everyday life. Although GPS measurements are the essential information for currently developed land vehicle navigation systems (LVNS), GPS signals are often unavailable or unreliable due to signal blockage in certain environments such as urban canyons. This situation must be compensated for in order to provide continuous navigation solutions. To overcome the problems of unavailability and unreliability of GPS, and to be cost and size effective as well, Micro Electro Mechanical Systems (MEMS) based inertial sensor technology has been pushing the development of low-cost integrated navigation systems for land vehicle navigation and guidance applications. This paper analyzes the characterization of MEMS-based inertial sensors and the performance of an integrated system prototype of MEMS-based inertial sensors, a low-cost GPS receiver and a digital compass. The influence of the stochastic variation of the sensors is assessed and modeled by two different methods, namely Gauss-Markov (GM) and AutoRegressive (AR) models, with GPS signal blockages of different lengths. Numerical results from kinematic testing have been used to assess the performance of the different modeling schemes.
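
    A sketch of the two stochastic-modelling options mentioned, assuming a first-order Gauss-Markov process for a sensor bias; the correlation time and noise level are illustrative, not taken from the study:

        import numpy as np

        rng = np.random.default_rng(1)
        dt, n = 0.01, 20000          # 100 Hz samples, illustrative
        tau, sigma = 300.0, 0.05     # assumed GM correlation time (s) and std

        # Simulate a first-order Gauss-Markov bias: x[k] = phi*x[k-1] + noise.
        phi = np.exp(-dt / tau)
        w = rng.normal(0.0, sigma * np.sqrt(1.0 - phi**2), n)
        x = np.empty(n)
        x[0] = 0.0
        for k in range(1, n):
            x[k] = phi * x[k - 1] + w[k]

        # Fit an AR(1) model by least squares and recover the correlation time.
        phi_hat = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])
        print(f"true phi={phi:.6f}, AR(1) estimate={phi_hat:.6f}, "
              f"tau_hat={-dt / np.log(phi_hat):.1f} s")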

  7. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
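
    A minimal sketch of the sphere-intersection idea: solving for receiver position and clock bias from four pseudoranges by Newton-Raphson iteration. The satellite geometry and receiver state are invented for illustration:

        import numpy as np

        def gps_fix(sat_pos, pseudoranges, iters=10):
            """Newton-Raphson solution for (x, y, z, clock bias) from pseudoranges."""
            est = np.zeros(4)                          # initial guess at Earth centre
            for _ in range(iters):
                d = np.linalg.norm(sat_pos - est[:3], axis=1)
                f = d + est[3] - pseudoranges          # residuals of the sphere equations
                J = np.hstack([(est[:3] - sat_pos) / d[:, None],
                               np.ones((len(d), 1))])  # Jacobian of the residuals
                est -= np.linalg.lstsq(J, f, rcond=None)[0]
            return est

        # Invented satellite positions (km) and a true receiver state.
        sats = np.array([[15600, 7540, 20140], [18760, 2750, 18610],
                         [17610, 14630, 13480], [19170, 610, 18390]], float)
        truth = np.array([-40.0, 100.0, 6370.0, 0.85])     # x, y, z, clock bias
        rho = np.linalg.norm(sats - truth[:3], axis=1) + truth[3]

        est = gps_fix(sats, rho)
        print("absolute position error (km):", np.linalg.norm(est[:3] - truth[:3]))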

  8. Evaluation of Analysis by Cross-Validation, Part II: Diagnostic and Optimization of Analysis Error Covariance

    Directory of Open Access Journals (Sweden)

    Richard Ménard

    2018-02-01

    We present a general theory of estimation of analysis error covariances based on cross-validation, as well as a geometric interpretation of the method. In particular, we use the variance of passive observation-minus-analysis residuals and show that the true analysis error variance can be estimated without relying on the optimality assumption. This approach is used to obtain near-optimal analyses that are then used to evaluate the air quality analysis error using several different methods at active and passive observation sites. We compare the estimates obtained with the method of Hollingsworth and Lönnberg, that of Desroziers et al., a new diagnostic we developed, and the perceived analysis error computed from the analysis scheme, and conclude that, as long as the analysis is near optimal, all estimates agree within a certain error margin.

  9. An Error Analysis of Structured Light Scanning of Biological Tissue

    DEFF Research Database (Denmark)

    Jensen, Sebastian Hoppe Nesgaard; Wilm, Jakob; Aanæs, Henrik

    2017-01-01

    This paper presents an error analysis and correction model for four structured light methods applied to three common types of biological tissue: skin, fat and muscle. Despite its many advantages, structured light is based on the assumption of direct reflection at the object surface only. This assumption is violated by most biological material, e.g. human skin, which exhibits subsurface scattering. In this study, we find that in general, structured light scans of biological tissue deviate significantly from the ground truth. We show that a large portion of this error can be predicted with a simple, statistical linear model based on the scan geometry. As such, scans can be corrected without introducing any specially designed pattern strategy or hardware. We can effectively reduce the error in a structured light scanner applied to biological tissue by as much as a factor of two or three.

  10. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005 (general requirements for the competence of testing and calibration laboratories) during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of an unfair decision, they should receive more attention than false-negative errors.

  11. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    2001-01-01

    unprecedented accuracy and vertical resolution. A major part of the error analysis also applies to refractive (e.g., Global Navigation Satellite System based) occultations as well as to any temperature profile retrieval based on air density or major species density measurements (e.g., from Rayleigh lidar or falling sphere techniques).

    Key words. Atmospheric composition and structure (pressure, density, and temperature); instruments and techniques – Radio science (remote sensing)

  12. Preventing Medication Error Based on Knowledge Management Against Adverse Event

    OpenAIRE

    Hastuti, Apriyani Puji; Nursalam, Nursalam; Triharini, Mira

    2017-01-01

    Introduction: Medication error is one of many types of errors that can decrease the quality and safety of healthcare. An increasing number of adverse events (AE) reflects the number of medication errors. This study aimed to develop a model of medication error prevention based on knowledge management. This model is expected to improve the knowledge and skill of nurses to prevent medication errors, which is characterized by a decrease in adverse events (AE). Methods: This study consisted of two sta...

  13. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces, rather than a primary deficit in inhibition, the preferred account of intrusion errors in free recall.

  14. AGAPE-ET for human error analysis of emergency tasks and its application

    International Nuclear Information System (INIS)

    Kim, J. H.; Jeong, W. D.

    2002-01-01

    The paper presents a proceduralised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), covering both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. The AGAPE-ET method is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified, considering the characteristics of the performance of that cognitive function and the influencing mechanism of the performance influencing factors (PIFs) on the cognitive function. Error analysis items have then been determined from the identified error causes or error-likely situations, and a human error analysis procedure based on these items is organised to cue and guide the analysts through the overall human error analysis. The basic scheme for the quantification of HEP consists of multiplying the BHEP assigned to the error analysis item by the weight from the influencing factors decision tree (IFDT) constructed for each cognitive function. The method is characterised by the structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need only work through the necessary cognitive functions. The paper also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results.
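
    A toy illustration of that quantification scheme, with invented BHEP and decision-tree weights (the real method derives a single weight per cognitive function from its IFDT; the multiplicative combination of several factors here is only a simplification):

        # Toy AGAPE-ET-style quantification: HEP = BHEP * weight(PIF path).
        # The BHEP and the influencing-factor weights below are invented values.
        bhep = 3.0e-3   # basic HEP assigned to one error analysis item

        # Weight looked up from an influencing factors decision tree (IFDT);
        # each relevant performance influencing factor (PIF) scales the base value.
        ifdt_weights = {"poor procedure": 3.0, "high time pressure": 2.0,
                        "adequate training": 0.5}
        weight = 1.0
        for pif in ("poor procedure", "high time pressure"):
            weight *= ifdt_weights[pif]

        hep = min(bhep * weight, 1.0)   # probabilities are capped at 1
        print(f"HEP = {hep:.1e}")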

  15. Teacher knowledge of error analysis in differential calculus

    Directory of Open Access Journals (Sweden)

    Eunice K. Moru

    2014-12-01

    The study investigated teacher knowledge of error analysis in differential calculus. Two teachers were the sample of the study: one a subject specialist and the other a mathematics education specialist. Questionnaires and interviews were used for data collection. The findings of the study reflect that the teachers’ knowledge of error analysis was characterised by the following assertions, which are backed up with some evidence: (1) teachers identified the errors correctly, (2) the generalised error identification resulted in opaque analysis, (3) some of the identified errors were not interpreted from multiple perspectives, (4) teachers’ evaluation of errors was either local or global and (5) in remedying errors, accuracy and efficiency were emphasised more than conceptual understanding. The implications of the findings of the study for teaching include engaging in error analysis continuously, as this is one way of improving knowledge for teaching.

  16. A navigation system for percutaneous needle interventions based on PET/CT images: design, workflow and error analysis of soft tissue and bone punctures.

    Science.gov (United States)

    Oliveira-Santos, Thiago; Klaeser, Bernd; Weitzel, Thilo; Krause, Thomas; Nolte, Lutz-Peter; Peterhans, Matthias; Weber, Stefan

    2011-01-01

    Percutaneous needle intervention based on PET/CT images is effective, but exposes the patient to unnecessary radiation due to the increased number of CT scans required. Computer assisted intervention can reduce the number of scans, but requires handling, matching and visualization of two different datasets. While one dataset is used for target definition according to metabolism, the other is used for instrument guidance according to anatomical structures. No navigation systems capable of handling such data and performing PET/CT image-based procedures while following clinically approved protocols for oncologic percutaneous interventions are available. The need for such systems is emphasized in scenarios where the target can be located in different types of tissue such as bone and soft tissue. These two tissues require different clinical protocols for puncturing and may therefore give rise to different problems during the navigated intervention. Studies comparing the performance of navigated needle interventions targeting lesions located in these two types of tissue are not often found in the literature. Hence, this paper presents an optical navigation system for percutaneous needle interventions based on PET/CT images. The system provides viewers for guiding the physician to the target with real-time visualization of PET/CT datasets, and is able to handle targets located in both bone and soft tissue. The navigation system and the required clinical workflow were designed taking into consideration clinical protocols and requirements, and the system is thus operable by a single person, even during transition to the sterile phase. Both the system and the workflow were evaluated in an initial set of experiments simulating 41 lesions (23 located in bone tissue and 18 in soft tissue) in swine cadavers. We also measured and decomposed the overall system error into distinct error sources, which allowed for the identification of particularities involved in the process as well

  17. Accommodating error analysis in comparison and clustering of molecular fingerprints.

    OpenAIRE

    Salamon, H.; Segal, M. R.; Ponce de Leon, A.; Small, P. M.

    1998-01-01

    Molecular epidemiologic studies of infectious diseases rely on pathogen genotype comparisons, which usually yield patterns comprising sets of DNA fragments (DNA fingerprints). We use a highly developed genotyping system, IS6110-based restriction fragment length polymorphism analysis of Mycobacterium tuberculosis, to develop a computational method that automates comparison of large numbers of fingerprints. Because error in fragment length measurements is proportional to fragment length and is ...

  18. Fixturing error measurement and analysis using CMMs

    International Nuclear Information System (INIS)

    Wang, Y; Chen, X; Gindy, N

    2005-01-01

    The influence of the fixture on the errors of a machined surface can be very significant. The surface errors generated during machining can be measured using a coordinate measurement machine (CMM) through the displacements of three coordinate systems on a fixture-workpiece pair in relation to the deviation of the machined surface. The surface errors consist of the component movement, the component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study.

  19. Asteroid orbital error analysis: Theory and application

    Science.gov (United States)

    Muinonen, K.; Bowell, Edward

    1992-01-01

    We present a rigorous Bayesian theory for asteroid orbital error estimation in which the probability density of the orbital elements is derived from the noise statistics of the observations. For Gaussian noise in a linearized approximation the probability density is also Gaussian, and the errors of the orbital elements at a given epoch are fully described by the covariance matrix. The law of error propagation can then be applied to calculate past and future positional uncertainty ellipsoids (Cappellari et al. 1976, Yeomans et al. 1987, Whipple et al. 1991). To our knowledge, this is the first time a Bayesian approach has been formulated for orbital element estimation. In contrast to the classical Fisherian school of statistics, the Bayesian school allows a priori information to be formally present in the final estimation. However, Bayesian estimation gives the same results as Fisherian estimation when no a priori information is assumed (Lehtinen 1988, and references therein).
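
    The linearized machinery referred to here is the Gaussian error-propagation law: if the elements have covariance P, a mapping x = f(e) has, to first order, covariance J P Jᵀ with J = ∂f/∂e. A generic numerical sketch, with an invented two-parameter "orbit" map standing in for a real propagator:

        import numpy as np

        def propagate_cov(f, e0, P, h=1e-6):
            """Build the Jacobian of f at e0 by finite differences and
            propagate the covariance P:  P_x = J P J^T."""
            e0 = np.asarray(e0, float)
            x0 = np.asarray(f(e0))
            J = np.empty((x0.size, e0.size))
            for j in range(e0.size):
                de = np.zeros_like(e0)
                de[j] = h
                J[:, j] = (np.asarray(f(e0 + de)) - x0) / h
            return J @ P @ J.T

        # Invented toy "orbit": position angle after time t for elements (a, M0).
        t = 50.0
        f = lambda e: [np.cos(e[1] + t * e[0] ** -1.5),
                       np.sin(e[1] + t * e[0] ** -1.5)]

        P_elements = np.diag([1e-8, 1e-6])     # assumed element covariance
        print(propagate_cov(f, [1.0, 0.3], P_elements))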

  1. Analysis of errors of radiation relay, (1)

    International Nuclear Information System (INIS)

    Koyanagi, Takami; Nakajima, Sinichi

    1976-01-01

    The statistical error of a liquid level controlled by a radiation relay is analysed and a method of minimizing the error is proposed. The method reduces to the problem of optimally setting the time constant of the radiation relay. The equations for obtaining the value of the time constant are presented and the numerical results are shown in a table and plotted in a figure. The optimum time constant of the upper level control relay is entirely different from that of the lower level control relay. (auth.)

  2. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.

    2015-05-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model based on the Equivalent-in-Distribution (EiD) approach succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes the key system parameters into consideration while remaining simple and tractable. In this paper, we extend this framework to study the effect of different interference management techniques in downlink cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.

  3. On Error Analysis of ORIGEN Decay Data Library Based on ENDF/B-VII.1 via Decay Heat Estimation after a Fission Event

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Do Heon; Gil, Choong-Sup; Lee, Young-Ouk [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    The method is strongly dependent on the available nuclear structure data, i.e., fission product yield data and decay data. Consequently, improvements in the nuclear structure data could guarantee more reliable decay heat estimation for short cooling times after fission. The SCALE-6.1.3 code package includes the ENDF/B-VII.0-based fission product yield data and ENDF/B-VII.1-based decay data libraries for the ORIGEN-S code. The generation and validation of the new ORIGEN-S yield data libraries based on the recently available fission product yield data such as ENDF/B-VII.1, JEFF-3.1.1, JENDL/FPY-2011, and JENDL-4.0 were presented in a previous study. According to that study, the yield data library in SCALE-6.1.3 can be regarded as the latest one because it produced almost the same outcomes as ENDF/B-VII.1. A research project on the production of nuclear structure data for decay heat estimation of nuclear fuel has been carried out at the Korea Atomic Energy Research Institute (KAERI). The data errors contained in the ORIGEN-S decay data library of SCALE-6.1.3 have been clearly identified by their changing variables. The impacts of the decay data errors have also been analyzed by estimating the decay heats of the fission product nuclides and their daughters after ²³⁵U thermal-neutron fission. Although the impacts of the decay data errors are quite small, they remind us of the possible importance of decay data when estimating the decay heat for short cooling times after a fission event.
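
    The decay heat estimation referred to is the classic summation method: total power is the sum over nuclides of decay constant × inventory × mean energy per decay. A self-contained sketch for an unchained, made-up three-nuclide inventory (real calculations sum over hundreds of nuclides with Bateman chains):

        import numpy as np

        # Summation method: P(t) = sum_i lambda_i * N_i(t) * E_i.
        # Nuclide data below are invented for illustration only.
        half_life = np.array([8.0, 120.0, 3600.0])          # s
        energy    = np.array([1.2e-13, 6.0e-14, 2.5e-13])   # J per decay (beta+gamma)
        n0        = np.array([1e15, 5e15, 2e16])            # initial atoms

        lam = np.log(2.0) / half_life
        for t in (1.0, 10.0, 100.0, 1000.0):                # cooling times (s)
            power = np.sum(lam * n0 * np.exp(-lam * t) * energy)
            print(f"t = {t:7.1f} s   decay heat = {power:.3e} W")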

  4. Prediction-error of Prediction Error (PPE)-based Reversible Data Hiding

    OpenAIRE

    Wu, Han-Zhou; Wang, Hong-Xia; Shi, Yun-Qing

    2016-01-01

    This paper presents a novel reversible data hiding (RDH) algorithm for gray-scale images, in which the prediction-error of prediction error (PPE) of a pixel is used to carry the secret data. In the proposed method, the pixels to be embedded are first predicted from their neighboring pixels to obtain the corresponding prediction errors (PEs). Then, by exploiting the PEs of the neighboring pixels, the prediction of the PEs of the pixels can be determined. And, a sorting technique based on th...
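
    A sketch of the two-level prediction idea under simple assumptions (mean-of-neighbours predictors at both levels; the paper's actual predictors and sorting step are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(2)
        img = rng.integers(0, 256, (8, 8)).astype(int)

        # First level: predict each interior pixel from its four neighbours.
        pred = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]) // 4
        pe = img[1:-1, 1:-1] - pred                 # prediction errors (PEs)

        # Second level: predict a pixel's PE from neighbouring PEs, giving the
        # prediction-error of prediction error (PPE) that carries the payload.
        pe_pred = (pe[:-2, 1:-1] + pe[2:, 1:-1] + pe[1:-1, :-2] + pe[1:-1, 2:]) // 4
        ppe = pe[1:-1, 1:-1] - pe_pred
        print("PPE magnitude counts (peaked near 0):",
              np.bincount(np.abs(ppe).ravel())[:4])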

  5. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian-shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel of less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
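
    A much-simplified 1D analogue of that simulation, assuming Gaussian spots with random position and intensity perturbations (all magnitudes invented), showing how a per-voxel rms dose error is estimated from repeated simulated deliveries:

        import numpy as np

        rng = np.random.default_rng(3)
        x = np.arange(0.0, 80.1, 2.5)          # voxel centres (mm)
        spots = np.arange(5.0, 76.0, 5.0)      # nominal spot positions (mm)
        sigma = 6.0                            # assumed pencil-beam sigma (mm)

        def delivered_dose(pos_err_mm, int_err_frac):
            """Sum Gaussian spots with random position/intensity delivery errors."""
            d = np.zeros_like(x)
            for s in spots:
                c = s + rng.normal(0.0, pos_err_mm)
                w = 1.0 + rng.normal(0.0, int_err_frac)
                d += w * np.exp(-0.5 * ((x - c) / sigma) ** 2)
            return d

        nominal = delivered_dose(0.0, 0.0)
        runs = np.array([delivered_dose(1.0, 0.02) for _ in range(200)])
        rms_err = np.sqrt(((runs - nominal) ** 2).mean(axis=0))
        print(f"max per-voxel rms error: "
              f"{100 * rms_err.max() / nominal.max():.1f}% of max dose")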

  6. Error analysis of short term wind power prediction models

    International Nuclear Information System (INIS)

    De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco

    2011-01-01

    The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be stored, owing to the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different lengths/periods of time is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast the power production of a wind farm with three wind turbines, using real load data and comparing different time prediction periods. This comparative analysis takes in, for the first time, various forecasting methods, time horizons and a deep performance analysis focused upon the normalised mean error and its statistical distribution, in order to identify the forecasting methods whose error distributions are confined to a narrower curve and with which it is therefore less probable to make prediction errors. (author)

  7. Error analysis of short term wind power prediction models

    Energy Technology Data Exchange (ETDEWEB)

    De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco [Dipartimento di Ingegneria dell' Innovazione, Universita del Salento, Via per Monteroni, 73100 Lecce (Italy)

    2011-04-15

    The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be stored, owing to the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different lengths/periods of time is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast the power production of a wind farm with three wind turbines, using real load data and comparing different time prediction periods. This comparative analysis takes in, for the first time, various forecasting methods, time horizons and a deep performance analysis focused upon the normalised mean error and its statistical distribution, in order to identify the forecasting methods whose error distributions are confined to a narrower curve and with which it is therefore less probable to make prediction errors. (author)
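
    For the autoregressive side of such a comparison, a minimal sketch: a synthetic normalised power series, an AR(2) model fitted by least squares, and the mean absolute error evaluated per forecast horizon (all data and coefficients invented):

        import numpy as np

        rng = np.random.default_rng(4)
        n = 2000
        p = np.full(n, 0.5)                  # synthetic normalised power series
        for k in range(2, n):                # invented persistence + noise
            p[k] = 0.5 + 0.9 * (p[k-1] - 0.5) - 0.15 * (p[k-2] - 0.5) \
                   + rng.normal(0, 0.05)
        p = np.clip(p, 0.0, 1.0)

        # Fit an AR(2) model, p[k] ~ c + a1*p[k-1] + a2*p[k-2], on the first half.
        train = n // 2
        A = np.column_stack([np.ones(train - 2), p[1:train-1], p[:train-2]])
        c, a1, a2 = np.linalg.lstsq(A, p[2:train], rcond=None)[0]

        # Iterated multi-step forecasts on the test half; error per horizon.
        for h in (1, 3, 6):
            errs = []
            for k in range(train, n - h):
                x1, x2 = p[k-1], p[k-2]
                for _ in range(h):                   # roll the model h steps ahead
                    x1, x2 = c + a1 * x1 + a2 * x2, x1
                errs.append(abs(x1 - p[k + h - 1]))
            print(f"horizon {h}: mean absolute error = {np.mean(errs):.3f}")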

  8. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.
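
    The radial/along-track/cross-track decomposition used in such analyses can be sketched as follows, assuming predicted and actual inertial state vectors are available (the state values here are invented, roughly LEO-like):

        import numpy as np

        def rsw_error(r_true, v_true, r_pred):
            """Decompose a position error into radial, along-track, cross-track."""
            r_hat = r_true / np.linalg.norm(r_true)      # radial direction
            w_hat = np.cross(r_true, v_true)
            w_hat /= np.linalg.norm(w_hat)               # cross-track (orbit normal)
            s_hat = np.cross(w_hat, r_hat)               # along-track direction
            err = r_pred - r_true
            return np.array([err @ r_hat, err @ s_hat, err @ w_hat])

        r = np.array([6778.0, 0.0, 0.0])            # invented position (km)
        v = np.array([0.0, 6.62, 3.82])             # invented velocity (km/s)
        r_pred = r + np.array([-0.05, 0.9, 0.1])    # prediction leaping ahead
        print("radial, along-track, cross-track error (km):",
              rsw_error(r, v, r_pred))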

  9. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays, using reconstruction algorithms in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed, based on the LMS algorithm, for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is performed for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately using the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition and data processing systems as a general error analysis method.
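
    A generic LMS identification sketch of the kind alluded to: an adaptive filter tracks the parameters of an unknown linear error model from streaming data. The system, noise level and step size are invented:

        import numpy as np

        rng = np.random.default_rng(5)
        true_w = np.array([0.6, -0.3, 0.1])        # unknown model parameters
        w = np.zeros(3)                            # LMS estimate
        mu = 0.05                                  # assumed step size

        for _ in range(5000):
            x = rng.normal(size=3)                 # input regressor (e.g. strains)
            d = true_w @ x + rng.normal(0, 0.01)   # measured output with noise
            e = d - w @ x                          # a-priori error
            w += mu * e * x                        # LMS update
        print("identified parameters:", np.round(w, 3))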

  10. Error Analysis of Galerkin's Method for Semilinear Equations

    Directory of Open Access Journals (Sweden)

    Tadashi Kawanago

    2012-01-01

    We establish a general existence result for Galerkin's approximate solutions of abstract semilinear equations and conduct an error analysis. Our results may be regarded as an extension of a precedent work (Schultz, 1969). The derivation of our results is, however, different from the discussion in his paper and is essentially based on the convergence theorem of Newton's method and some techniques for deriving it. Some of our results may be applicable for investigating the quality of numerical verification methods for solutions of ordinary and partial differential equations.

  11. ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS

    Directory of Open Access Journals (Sweden)

    Rentauli Mariah Silalahi

    2015-03-01

    Students’ error analysis is very important for helping EFL teachers to develop their teaching materials, assessments and methods. However, it takes much time and effort for teachers to carry out such an error analysis of their students’ language. This study seeks to identify the common errors made by one class of 28 freshman students studying English in their first semester at an IT university. The data were collected from their writing assignments over eight consecutive weeks. The errors found were classified into 24 types, and the ten most common errors committed by the students were: article, preposition, spelling, word choice, subject-verb agreement, auxiliary verb, plural form, verb form, capital letter, and meaningless sentences. The findings about the students’ frequency of committing errors were then contrasted with their midterm test results, and in order to find out the reasons behind the error recurrence, the students were asked to answer a questionnaire. Most of the students admitted that carelessness was the major reason for their errors, with lack of understanding coming next. This study suggests that EFL teachers devote time to continuously checking the students’ language and giving corrections, so that the students can learn from their errors and stop committing the same errors.

  12. The error model and experiment of measuring angular position error based on laser collimation

    Science.gov (United States)

    Cai, Yangyang; Yang, Jing; Li, Jiakun; Feng, Qibo

    2018-01-01

    The rotary axis is the reference component of rotational motion. Angular position error is the most critical factor impairing machining precision among the six degree-of-freedom (DOF) geometric errors of a rotary axis. In this paper, a method for measuring the angular position error of a rotary axis based on laser collimation is thoroughly researched, the error model is established, and 360° full-range measurement is realized by using a high-precision servo turntable. The change of spatial attitude of each moving part is described accurately by 3×3 transformation matrices, and the influences of various factors on the measurement results are analyzed in detail. Experimental results show that the measurement method can achieve high measurement accuracy and a large measurement range.
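
    A small sketch of the kind of 3×3 attitude bookkeeping described: composing a nominal rotation with a small error rotation and recovering the angular position error from the residual matrix (the angles are invented):

        import numpy as np

        def rot_z(theta):
            """3x3 rotation matrix about the z (rotary) axis."""
            c, s = np.cos(theta), np.sin(theta)
            return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

        nominal = np.deg2rad(30.0)          # commanded table position
        err = np.deg2rad(8.0 / 3600.0)      # invented 8 arcsec position error

        R_actual = rot_z(nominal + err)
        R_resid = rot_z(nominal).T @ R_actual   # residual commanded-vs-actual
        recovered = np.arctan2(R_resid[1, 0], R_resid[0, 0])
        print(f"recovered angular position error: "
              f"{np.rad2deg(recovered) * 3600:.2f} arcsec")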

  13. Error analysis of the freshmen Criminology students’ grammar in the written English

    Directory of Open Access Journals (Sweden)

    Maico Demi Banate Aperocho

    2017-12-01

    This study identifies the various syntactical errors of fifty (50) freshman B.S. Criminology students of the University of Mindanao in Davao City. Specifically, this study aims to answer the following: (1) What are the common errors present in the argumentative essays of the respondents? (2) What are the reasons for the existence of these errors? This study is descriptive-qualitative. It also uses error analysis to point out the syntactical errors present in the compositions of the participants. The fifty essays are subjected to error analysis. Errors are classified based on Chanquoy’s Classification of Writing Errors. Furthermore, Hourani’s Common Reasons of Grammatical Errors Checklist was also used to determine the common reasons for the identified syntactical errors. To create a meaningful interpretation of data and to solicit further ideas from the participants, a focus group discussion was also done. Findings show that the students’ most common errors are on the grammatical aspect. In the grammatical aspect, students have more frequently committed errors in the verb aspect (tense, subject agreement, and auxiliary and linker choice) compared to the spelling and punctuation aspects. Moreover, there are three topmost reasons for committing errors in a paragraph: mother tongue interference, incomprehensibility of the grammar rules, and incomprehensibility of the writing mechanics. Despite the difficulty in learning English as a second language, students are still very motivated to master the concepts and applications of the language.

  14. Ionospheric error analysis in gps measurements

    Directory of Open Access Journals (Sweden)

    G. Pugliano

    2008-06-01

    The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group comprises "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared both on the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, and on temporal variations, by examining two periods of differing ionospheric activity, respectively coinciding with the maximum of solar cycle 23 and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.

  15. Analysis of Human Error Types and Performance Shaping Factors in the Next Generation Main Control Room

    International Nuclear Information System (INIS)

    Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.

    2008-04-01

    The main control rooms of nuclear power plants have been computerized and digitalized in new and modernized plants, as information and digital technologies make great progress and mature. Human factors engineering issues in advanced MCRs were surveyed with two approaches: a model-based approach and a literature-survey-based approach. Human error types and performance shaping factors were then analysed for three human errors. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in HRA.

  16. Boundary error analysis and categorization in the TRECVID news story segmentation task

    NARCIS (Netherlands)

    Arlandis, J.; Over, P.; Kraaij, W.

    2005-01-01

    In this paper, an error analysis based on boundary error popularity (frequency), including semantic boundary categorization, is applied in the context of the news story segmentation task from TRECVID. Clusters of systems were defined based on the input resources they used, including video, audio and

  17. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    Science.gov (United States)

    Zhu, Honglin

    2010-01-01

    This article is based on a survey of students' attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers' understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  18. Investigation on coupling error characteristics in angular rate matching based ship deformation measurement approach

    Science.gov (United States)

    Yang, Shuai; Wu, Wei; Wang, Xingshu; Xu, Zhiguang

    2018-01-01

    The coupling error in the measurement of ship hull deformation can significantly influence the attitude accuracy of shipborne weapons and equipment. It is therefore important to study the characteristics of the coupling error. In this paper, a comprehensive investigation of the coupling error is reported, which has the potential of reducing the coupling error in the future. Firstly, the causes and characteristics of the coupling error are analyzed theoretically based on the basic theory of measuring ship deformation. Then, simulations are conducted to verify the correctness of the theoretical analysis. Simulation results show that the cross-correlation between dynamic flexure and ship angular motion leads to the coupling error in measuring ship deformation, and that the coupling error increases with the correlation value between them. All the simulation results coincide with the theoretical analysis.

  19. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

    Directory of Open Access Journals (Sweden)

    Evelyn Porter

    2014-07-01

    Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by trainees and to identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of prescribing medication errors. We performed univariate analysis of each variable against the medication error rate and a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7%-39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping more than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed that only the presence of a pharmacist remained significantly associated with decreased medication error, with an odds ratio of 0.09 (95% CI 0.01-0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.

  20. Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...

    Indian Academy of Sciences (India)

    Abstract. In this paper, an error analysis has been done for the linear approximate transformation between two tangent planes on the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirements of high-precision astrometry under some conditions, so the ...

  1. Error analysis of nuclear power plant operator cognitive behavior

    International Nuclear Information System (INIS)

    He Xuhong; Zhao Bingquan; Chen Yulong

    2001-01-01

    A nuclear power plant is a complex human-machine system integrating many advanced machines, electronic devices and automatic controls. It demands high cognitive ability and correct analysis skills from its operators. The author divides the operator's cognitive process into five stages for analysis. With this cognitive model, operator cognitive errors are analysed to identify their root causes and the stages at which the errors happen. The results of the analysis serve as a basis for the design of control rooms and for the training and evaluation of operators.

  2. Spatial-temporal analysis of wind power forecast errors for West-Coast Norway

    Energy Technology Data Exchange (ETDEWEB)

    Revheim, Paal Preede; Beyer, Hans Georg [Agder Univ. (UiA), Grimstad (Norway). Dept. of Engineering Sciences

    2012-07-01

    In this paper the spatial-temporal structure of forecast errors for wind power in West-Coast Norway is analyzed. Starting from a qualitative analysis of the forecast error reduction, with respect to single-site data, for the lumped conditions of groups of sites, the spatial and temporal correlations of the wind power forecast errors within and between the same groups are studied in detail. Based on this, time-series regression models that analytically describe the error reduction are set up. The models give an expected reduction in forecast error of between 48.4% and 49%. (orig.)

  3. The application of two recently developed human reliability techniques to cognitive error analysis

    International Nuclear Information System (INIS)

    Gall, W.

    1990-01-01

    Cognitive error can lead to catastrophic consequences for manned systems, including those whose design renders them immune to the effects of physical slips made by operators. Four such events, pressurized water and boiling water reactor accidents which occurred recently, were analysed. The analysis identifies the factors which contributed to the errors and suggests practical strategies for error recovery or prevention. Two types of analysis were conducted: an unstructured analysis based on the analyst's knowledge of psychological theory, and a structured analysis using two recently-developed human reliability analysis techniques. In general, the structured techniques required less effort to produce results and these were comparable to those of the unstructured analysis. (author)

  4. Non-binary unitary error bases and quantum codes

    Energy Technology Data Exchange (ETDEWEB)

    Knill, E.

    1996-06-01

    Error operator bases for systems of any dimension are defined and natural generalizations of the bit-flip/sign-change error basis for qubits are given. These bases allow generalizing the construction of quantum codes based on eigenspaces of Abelian groups. As a consequence, quantum codes can be constructed from linear codes over Z_n for any n. The generalization of the punctured code construction leads to many codes which permit transversal (i.e. fault-tolerant) implementations of certain operations compatible with the error basis.
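
    A concrete rendering of the natural generalization mentioned: the shift and clock matrices X and Z on a dimension-n system, whose products X^a Z^b form a unitary error basis, orthonormal under the trace inner product (checked numerically here for n = 3):

        import numpy as np

        n = 3
        omega = np.exp(2j * np.pi / n)
        X = np.roll(np.eye(n), 1, axis=0)        # shift: |j> -> |j+1 mod n>
        Z = np.diag(omega ** np.arange(n))       # clock: |j> -> omega^j |j>

        # The n^2 operators X^a Z^b generalize the qubit bit-flip/sign-change basis.
        basis = [np.linalg.matrix_power(X, a) @ np.linalg.matrix_power(Z, b)
                 for a in range(n) for b in range(n)]

        # Check orthonormality under <E, F> = tr(E^dagger F) / n.
        gram = np.array([[np.trace(E.conj().T @ F) / n for F in basis]
                         for E in basis])
        print("orthonormal unitary error basis:",
              np.allclose(gram, np.eye(n * n)))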

  5. Analysis of Students' Errors on Linear Programming at Secondary ...

    African Journals Online (AJOL)

    The purpose of this study was to identify secondary school students' errors on linear programming at 'O' level. It is based on the fact that students' errors inform teaching and are hence an essential tool for any serious mathematics teacher who intends to improve mathematics teaching. The study was guided by a descriptive survey ...

  6. Human reliability analysis during PSA at Trillo NPP: main characteristics and analysis of diagnostic errors

    International Nuclear Information System (INIS)

    Barquin, M.A.; Gomez, F.

    1998-01-01

    The design differences between Trillo NPP and other Spanish nuclear power plants (basic Westinghouse and General Electric designs) were made clear in the Human Reliability Analysis of the Probabilistic Safety Analysis (PSA) for Trillo NPP. The object of this paper is to describe the most significant characteristics of the Human Reliability Analysis carried out in the PSA, with special emphasis on possible diagnostic errors and their consequences, based on the characteristics of the Emergency Operations Manual for Trillo NPP. - In the case of human errors before the initiating event (type 1), the existence of four redundancies in most of the plant safety systems means that the impact of this type of error on the final results of the PSA is insignificant. However, in the case of common cause errors, especially certain calibration errors, some actions are significant in the final equation for core damage. - The number of human actions that the operator has to carry out during the accidents (type 3) modelled is relatively small in comparison with the corresponding value in other PSAs. This is basically due to the high level of automation at Trillo NPP. - The Plant Operations Manual cannot strictly be considered a symptoms-based procedure. The Operations Group must select the chapter from the Operations Manual to be followed after having diagnosed the perturbing event, using for this purpose an Emergency and Anomaly Decision Tree (M.O.3.0.1) based on the different indications, alarms and symptoms present in the plant after the perturbing event. For this reason, it was decided to analyse the possible diagnosis errors. In the bibliography on diagnosis and commission errors available at the present time, there is no precise methodology for the analysis of this type of error and its incorporation into PSAs. The method used in the PSA for Trillo NPP to evaluate this type of interaction is to develop a Diagnosis Error Table, the object of which is to identify the situations in

  7. Errors of DWPF frit analysis: Final report

    International Nuclear Information System (INIS)

    Schumacher, R.F.

    1993-01-01

    Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis from a commercial analytical laboratory. The following effort provides additional quantitative information on the variability of frit chemical analyses at two commercial laboratories. Identical samples of IDMS Frit 202 were chemically analyzed at two commercial laboratories and at three different times over a period of four months. The SRL-ADS analyses, after correction with the reference standard and normalization, provided confirmatory information, but did not detect the low silica level in one of the frit samples. A methodology utilizing elliptical limits for confirming the certificate of conformance or confirmatory analysis was introduced and recommended for use when the analysis values are close but not within the specification limits. It was also suggested that the lithia specification limits might be reduced as long as CELS is used to confirm the analysis

  8. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    Science.gov (United States)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  9. IMRT QA: Selecting gamma criteria based on error detection sensitivity

    Energy Technology Data Exchange (ETDEWEB)

    Steers, Jennifer M. [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 and Physics and Biology in Medicine IDP, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California 90095 (United States); Fraass, Benedick A., E-mail: benedick.fraass@cshs.org [Department of Radiation Oncology, Cedars-Sinai Medical Center, Los Angeles, California 90048 (United States)

    2016-04-15

    Purpose: The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. Methods: A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. Results: This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose
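
    For reference, a minimal 1D version of the gamma comparison itself, using a global dose-difference criterion (% of the reference maximum) and a distance-to-agreement (DTA) criterion in mm; the profiles, perturbation and criteria are invented, and this sketch is not the ArcCHECK analysis of the study:

        import numpy as np

        def gamma_1d(x, d_ref, d_eval, dd=0.03, dta=3.0):
            """1D global gamma: per reference point, minimise the combined
            dose-difference / distance-to-agreement metric over evaluated points."""
            norm = dd * d_ref.max()                      # global dose normalisation
            dx = x[None, :] - x[:, None]                 # distances ref -> eval
            dD = d_eval[None, :] - d_ref[:, None]        # dose differences
            g2 = (dx / dta) ** 2 + (dD / norm) ** 2
            return np.sqrt(g2.min(axis=1))

        x = np.arange(0.0, 100.0, 1.0)                      # positions (mm)
        ref = np.exp(-0.5 * ((x - 50) / 15) ** 2)           # invented reference
        meas = 1.02 * np.exp(-0.5 * ((x - 51) / 15) ** 2)   # 2% / 1 mm perturbed

        gamma = gamma_1d(x, ref, meas)
        print(f"pass rate (gamma <= 1): {100 * (gamma <= 1).mean():.1f}%")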

  10. Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.

    Science.gov (United States)

    Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro

    2010-01-01

    This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were written to produce the graphics automatically. The results indicate that the proposed method is advantageous as it offers a more complete analysis of positional accuracy, covering the angular error component, the uniformity of the vector distribution and error isotropy, in addition to the modular error component given by linear statistics.
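
    A sketch of the vectorial treatment described: decompose each 3D error into a module and two angles, and summarise the directions with the spherical mean resultant length (the error vectors below are invented, deliberately anisotropic):

        import numpy as np

        rng = np.random.default_rng(6)
        err = rng.normal(0.0, [1.0, 1.0, 3.0], size=(53, 3))  # invented errors (mm)

        # Modular component (linear statistics) ...
        module = np.linalg.norm(err, axis=1)
        # ... and angular components (spherical statistics): colatitude, longitude.
        theta = np.arccos(err[:, 2] / module)
        phi = np.arctan2(err[:, 1], err[:, 0])

        # Mean resultant length R: ~0 for uniform directions, ~1 if concentrated.
        R = np.linalg.norm((err / module[:, None]).sum(axis=0)) / len(err)
        print(f"mean module = {module.mean():.2f} mm, "
              f"median colatitude = {np.degrees(np.median(theta)):.1f} deg, "
              f"R = {R:.3f}")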

  11. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By the inversion method, the contour curve of the wheel envelope is deduced, with the distance from the centre of the eccentric circle held constant. Simulation software for eccentric shaft grinding was developed and the correctness of the model proved; the influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analysed, and corresponding error calculation models are proposed. The simulation analysis provides the basis for contour error compensation.

  12. AN ERROR ANALYSIS OF ARGUMENTATIVE ESSAY (CASE STUDY AT UNIVERSITY MUHAMMADIYAH OF METRO

    Directory of Open Access Journals (Sweden)

    Fenny - Thresia

    2015-10-01

    The purpose of this study was to analyze the students' errors in writing argumentative essays. The researcher focuses on errors of verb use, concord and learner language. This study took 20 students from the third semester as its subjects. The data were taken from observation and documentation. Based on the results of the data analysis, errors are still found in the students' argumentative essays in English. The most common errors, which appear repeatedly, concern verbs; the second is concord, and learner language errors are the least frequent. Of the 20 samples taken, the frequencies of the errors are: verbs, 12 items (60%); concord, 8 items (40%); learner language, 7 items (35%). As a result, verbs show the biggest number of common errors.

  13. Detection method of nonlinearity errors by statistical signal analysis in heterodyne Michelson interferometer.

    Science.gov (United States)

    Hu, Juju; Hu, Haijiang; Ji, Yinghua

    2010-03-15

    Periodic nonlinearity, ranging from tens of nanometers down to a few nanometers, limits the use of heterodyne interferometers in high-accuracy measurement. A novel method is studied to detect the nonlinearity errors based on electrical subdivision and statistical signal analysis in a heterodyne Michelson interferometer. With the micropositioning platform moving at uniform velocity, the method can detect the nonlinearity errors by using regression analysis and Jackknife estimation. Based on the analysis of the simulations, the method can estimate the influence of nonlinearity errors and other noise on dimensional measurements in a heterodyne Michelson interferometer.

  14. Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students

    Science.gov (United States)

    Muzangwa, Jonatan; Chifamba, Peter

    2012-01-01

    This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 B.Ed. Mathematics students at Great Zimbabwe University. Data were gathered through two exercises on Calculus 1 & 2. The analysis of the results from the tests showed that a majority of the errors were due…

  15. Reliability and error analysis on xenon/CT CBF

    International Nuclear Information System (INIS)

    Zhang, Z.

    2000-01-01

    This article provides a quantitative error analysis of a simulation model of xenon/CT CBF in order to investigate the behavior and effect of different types of errors such as CT noise, motion artifacts, lower percentage of xenon supply, lower tissue enhancements, etc. A mathematical model is built to simulate these errors. By adjusting the initial parameters of the simulation model, we can scale the Gaussian noise, control the percentage of xenon supply, and change the tissue enhancement with different kVp settings. The motion artifact will be treated separately by geometrically shifting the sequential CT images. The input function is chosen from an end-tidal xenon curve of a practical study. Four kinds of cerebral blood flow, 10, 20, 50, and 80 cc/100 g/min, are examined under different error environments and the corresponding CT images are generated following the currently popular timing protocol. The simulated studies will be fed to a regular xenon/CT CBF system for calculation and evaluation. A quantitative comparison is given to reveal the behavior and effect of individual error resources. Mixed error testing is also provided to inspect the combination effect of errors. The experiment shows that CT noise is still a major error resource. The motion artifact affects the CBF results more geometrically than quantitatively. Lower xenon supply has a lesser effect on the results, but will reduce the signal/noise ratio. The lower xenon enhancement will lower the flow values in all areas of brain. (author)

  16. Improving patient safety in radiotherapy through error reporting and analysis

    International Nuclear Information System (INIS)

    Findlay, Ú.; Best, H.; Ottrey, M.

    2016-01-01

    Aim: To improve patient safety in radiotherapy (RT) through the analysis and publication of radiotherapy errors and near misses (RTE). Materials and methods: RTE are submitted on a voluntary basis by NHS RT departments throughout the UK to the National Reporting and Learning System (NRLS) or directly to Public Health England (PHE). RTE are analysed by PHE staff using frequency trend analysis based on the classification and pathway coding from Towards Safer Radiotherapy (TSRT). PHE, in conjunction with the Patient Safety in Radiotherapy Steering Group, publishes learning from these events three times a year, with a biennial summary, so their occurrence might be mitigated. Results: Since the introduction of this initiative in 2010, over 30,000 RTE reports have been submitted. The number of RTE reported in each biennial cycle has grown, ranging from 680 (2010) to 12,691 (2016). The vast majority of the RTE reported are lower-level events, thus not affecting the outcome of patient care. Of the level 1 and 2 incidents reported, the majority are known to have affected only one fraction of a course of treatment. This means that corrective action could be taken over the remaining treatment fractions, so the incident did not have a significant impact on the patient or the outcome of their treatment. Analysis of the RTE reports demonstrates that generation of error is not confined to one professional group or to any particular point in the pathway. It also indicates that the pattern of errors is replicated across service providers in the UK. Conclusion: Use of the terminology, classification and coding of TSRT, together with implementation of the national voluntary reporting system described within this report, allows clinical departments to compare their local analysis to the national picture. Further opportunities to improve learning from this dataset must be exploited through development of the analysis and of proactive risk management strategies.

  17. ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM

    Directory of Open Access Journals (Sweden)

    Vika Agustina

    2015-05-01

    Full Text Available This study was conducted to identify the kinds of errors in the surface strategy taxonomy and to determine the dominant type of errors made by fifth-semester students of the English Department of one state university in Malang, Indonesia, in producing their travel writing. The study is a document analysis, since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students, classified according to surface strategy taxonomy theory, consist of four types: (1) omission, (2) addition, (3) misformation and (4) misordering. The most frequent misformation errors occur in the use of tense forms. The second most frequent are omissions of noun/verb inflections. Next, many clauses contain unnecessarily added phrases.

  18. Accommodating error analysis in comparison and clustering of molecular fingerprints.

    Science.gov (United States)

    Salamon, H; Segal, M R; Ponce de Leon, A; Small, P M

    1998-01-01

    Molecular epidemiologic studies of infectious diseases rely on pathogen genotype comparisons, which usually yield patterns comprising sets of DNA fragments (DNA fingerprints). We use a highly developed genotyping system, IS6110-based restriction fragment length polymorphism analysis of Mycobacterium tuberculosis, to develop a computational method that automates comparison of large numbers of fingerprints. Because error in fragment length measurements is proportional to fragment length and is positively correlated for fragments within a lane, an align-and-count method that compensates for relative scaling of lanes reliably counts matching fragments between lanes. Results of a two-step method we developed to cluster identical fingerprints agree closely with 5 years of computer-assisted visual matching among 1,335 M. tuberculosis fingerprints. Fully documented and validated methods of automated comparison and clustering will greatly expand the scope of molecular epidemiology.
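
    The align-and-count idea can be sketched compactly. The snippet below is an illustration under simplifying assumptions (the median-ratio scale estimate and the 1.5% proportional tolerance are invented placeholders): lanes are first brought to a common scale, then fragments are counted as matching when they agree within a tolerance proportional to fragment length, mirroring the error model described above:

```python
import numpy as np

def count_matches(lane_a, lane_b, rel_tol=0.015):
    """Align two lanes by a median length ratio, then count fragments that
    match within a tolerance proportional to fragment length."""
    a, b = np.sort(lane_a)[::-1], np.sort(lane_b)[::-1]
    m = min(len(a), len(b))
    scale = np.median(a[:m] / b[:m])
    b = b * scale                      # compensate relative scaling of lanes
    used, matches = set(), 0
    for frag in a:
        for j, cand in enumerate(b):
            if j not in used and abs(frag - cand) <= rel_tol * frag:
                used.add(j)
                matches += 1
                break
    return matches

print(count_matches([1200, 850, 430], [1230, 858, 440]))   # -> 3
```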

  19. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  20. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    Science.gov (United States)

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  1. An advanced human reliability analysis methodology: analysis of cognitive errors focused on

    International Nuclear Information System (INIS)

    Kim, J. H.; Jeong, W. D.

    2001-01-01

    The conventional Human Reliability Analysis (HRA) methods such as THERP/ASEP, HCR and SLIM have been criticised for their deficiency in analysing cognitive errors which occur during the operator's decision-making process. In order to overcome the limitations of the conventional methods, an advanced HRA method, the so-called second-generation HRA method, covering both qualitative analysis and quantitative assessment of cognitive errors, has been developed based on the state-of-the-art theory of cognitive systems engineering and error psychology. The method was developed on the basis of a human decision-making model and the relation between the cognitive functions and the performance influencing factors. The application of the proposed method to two emergency operation tasks is presented.

  2. An Analysis of Students' Errors In Solving PISA 2012 And Its Scaffolding

    Directory of Open Access Journals (Sweden)

    Yurizka Melia Sari

    2017-08-01

    Full Text Available Based on the 2012 PISA survey, Indonesia placed only 64th out of 65 participating countries. The survey suggests that the students’ abilities in reasoning, spatial orientation, and problem solving are lower than those of other participating countries, especially in South East Asia. Nevertheless, the PISA results do not clearly reveal the sources of students’ inability to solve PISA problems, such as the location and the types of students’ errors. Therefore, analyzing students’ errors in solving PISA problems is an essential countermeasure to help students solve mathematics problems and to develop scaffolding. Based on the data analysis, five types of errors made by the subjects were found: reading errors, comprehension errors, transformation errors, process skill errors, and encoding errors. The most common error the subjects made was the encoding error, with a percentage of 26%, while the reading error was the least frequent, at only 12%. The scaffolding given consisted of explaining the problem carefully, making a summary of new words and finding their meanings, restructuring problem-solving strategies, and reviewing the results of the solution.

  3. A human error analysis methodology, AGAPE-ET, for emergency tasks in nuclear power plants and its application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    This report presents a procedural human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. AGAPE-ET is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified, considering the characteristics of the performance of each cognitive function and the influencing mechanism of PIFs on the cognitive function. Error analysis items have then been determined from the identified error causes or error-likely situations to cue or guide the analysts through the overall human error analysis. A human error analysis procedure based on the error analysis items is organised. The basic scheme for the quantification of HEP consists of multiplying the basic HEP (BHEP) assigned to the error analysis item by the weight obtained from the influencing factors decision tree (IFDT) constructed for each cognitive function. The method is characterised by structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need only work through the relevant cognitive functions. The report also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results. 42 refs., 7 figs., 36 tabs. (Author)
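
    The quantification scheme reduces to a simple product, which the following hypothetical sketch illustrates (the BHEP values and the IFDT weight are invented placeholders, not values from the report):

```python
# Hypothetical AGAPE-ET-style quantification: HEP = BHEP x IFDT weight.
BHEP_TABLE = {
    "information_collection": 3.0e-3,   # placeholder basic HEPs per analysis item
    "diagnosis": 1.0e-2,
    "response_execution": 1.0e-3,
}

def hep(error_item: str, ifdt_weight: float) -> float:
    """Multiply the basic HEP for an error analysis item by the weight
    obtained from the influencing factors decision tree (IFDT)."""
    return min(1.0, BHEP_TABLE[error_item] * ifdt_weight)

# Poor procedures and high stress might yield a weight of 5 on this branch.
print(hep("diagnosis", ifdt_weight=5.0))   # -> 0.05
```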

  4. The analysis and compensation of errors of precise simple harmonic motion control under high speed and large load conditions based on servo electric cylinder

    Science.gov (United States)

    Ma, Chen-xi; Ding, Guo-qing

    2017-10-01

    Simple harmonic waves and synthesized simple harmonic waves are widely used in the testing of instruments. However, because of the errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to control a servo electric cylinder in precise simple harmonic motion under high-speed, high-frequency and large-load conditions. To solve the problem, a method of error compensation is proposed in this paper. In the method, a displacement sensor is fitted on the piston rod of the electric cylinder. Using the displacement sensor, the real-time displacement of the piston rod is obtained and fed back to the input of the servo motor, realizing closed-loop control; compensation pulses are then issued in the next period of the synthesized waves. This paper uses an FPGA as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module and an error compensation module. A shock-absorber durability test rig is used as the testing platform; it mainly comprises a single electric cylinder, a servo motor for driving the electric cylinder, and the servo motor driver.

  5. WORKING MEMORY STRUCTURE REVEALED IN ANALYSIS OF RECALL ERRORS

    Directory of Open Access Journals (Sweden)

    Regina V Ershova

    2017-12-01

    Full Text Available We analyzed working memory errors stemming from 193 Russian college students taking the Tarnow Unchunkable Test, which utilizes double-digit items on a visual display. In three-item trials with at most one error per trial, single incorrect tens and ones digits (“singlets”) were overrepresented and made up the majority of errors, indicating a base-10 organization. These errors indicate that there are separate memory maps for each position and that there are pointers that move primarily within these maps. Several pointers make up a pointer collection. The number of pointer collections possible is the working memory capacity limit. A model for self-organizing maps is constructed in which the organization is created by turning common pointer collections into maps, thereby replacing a pointer collection with a single pointer. The factors 5 and 11 were underrepresented in the errors, presumably because base-10 properties beyond positional order were used for error correction, perhaps reflecting the existence of additional maps of integers divisible by 5 and integers divisible by 11.

  6. Technical Note: New ground-based FTIR measurements at Ile de La Réunion: observations, error analysis, and comparisons with independent data

    Directory of Open Access Journals (Sweden)

    C. Senten

    2008-07-01

    Full Text Available Ground-based high spectral resolution Fourier-transform infrared (FTIR) solar absorption spectroscopy is a powerful remote sensing technique to obtain information on the total column abundances and on the vertical distribution of various constituents in the atmosphere. This work presents results from two FTIR measurement campaigns in 2002 and 2004, held at Ile de La Réunion (21° S, 55° E). These campaigns represent the first FTIR observations carried out at a southern (subtropical) site. They serve the initiation of regular, long-term FTIR monitoring at this site in the near future. To demonstrate the capabilities of the FTIR measurements at this location for tropospheric and stratospheric monitoring, a detailed report is given on the retrieval strategy, information content and corresponding full error budget evaluation for ozone (O3), methane (CH4), nitrous oxide (N2O), carbon monoxide (CO), ethane (C2H6), hydrogen chloride (HCl), hydrogen fluoride (HF) and nitric acid (HNO3) total and partial column retrievals. Moreover, we have made a thorough comparison of the capabilities at sea level altitude (St.-Denis) and at 2200 m a.s.l. (Maïdo). It is shown that the performance of the technique is such that the atmospheric variability can be observed at both locations and in distinct altitude layers. Comparisons with the literature and with correlative data from ozone sonde and satellite (i.e., ACE-FTS, HALOE and MOPITT) measurements are given to confirm the results. Despite the short time series available at present, we have been able to detect the seasonal variation of CO in the biomass burning season, as well as the impact of particular biomass burning events in Africa and Madagascar on the atmospheric composition above Ile de La Réunion. We also show that differential measurements between St.-Denis and Maïdo provide useful information about the concentrations in the boundary layer.

  7. SHEAN (Simplified Human Error Analysis code) and automated THERP

    International Nuclear Information System (INIS)

    Wilson, J.R.

    1993-01-01

    One of the most widely used human error analysis tools is THERP (Technique for Human Error Rate Prediction). Unfortunately, this tool has disadvantages. The Nuclear Regulatory Commission, realizing these drawbacks, commissioned Dr. Swain, the author of THERP, to create a simpler, more consistent tool for deriving human error rates. That effort produced the Accident Sequence Evaluation Program Human Reliability Analysis Procedure (ASEP), which is more conservative than THERP but a valuable screening tool. ASEP involves answering simple questions about the scenario in question and then looking up the appropriate human error rate in the indicated table (THERP also uses look-up tables, but four times as many). The advantages of ASEP are that human factors expertise is not required and the training to use the method is minimal. Although not originally envisioned by Dr. Swain, the ASEP approach actually begs to be computerized. That WINCO did, calling the code SHEAN, for Simplified Human Error ANalysis. The code was written in Turbo Basic for IBM or IBM-compatible MS-DOS, for fast execution. WINCO is now in the process of comparing this code against THERP for various scenarios. This report provides a discussion of SHEAN.
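
    A purely illustrative sketch of such a look-up approach (the question keys and rates below are invented placeholders; the actual ASEP tables must be consulted for real values):

```python
# Hypothetical ASEP-style screening lookup: answers to simple scenario
# questions index into a table of conservative human error rates.
SCREENING_TABLE = {
    # (step is critical, written procedure used, ample time available): HEP
    (True,  True,  True):  0.02,
    (True,  True,  False): 0.05,
    (True,  False, True):  0.05,
    (True,  False, False): 0.25,
}

def screening_hep(critical: bool, procedure: bool, ample_time: bool) -> float:
    # Unlisted combinations fall back to 1.0, the most conservative value.
    return SCREENING_TABLE.get((critical, procedure, ample_time), 1.0)

print(screening_hep(critical=True, procedure=True, ample_time=False))  # -> 0.05
```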

  8. Two-component model application for error calculus in the environmental monitoring data analysis

    International Nuclear Information System (INIS)

    Carvalho, Maria Angelica G.; Hiromoto, Goro

    2002-01-01

    Analysis and interpretation of the results of an environmental monitoring program are often based on the evaluation of the mean value of a particular set of data, which is strongly affected by the analytical errors associated with each measurement. A model proposed by Rocke and Lorenzato assumes two error components, one additive and one multiplicative, to deal with lower and higher concentration values in a single model. In this communication, an application of this method for re-evaluation of the errors reported in a large set of results of total alpha measurements in an environmental sample is presented. The results show that the mean value calculated taking the new errors into account is higher than that obtained with the original errors, indicating that the analytical errors reported before were underestimated in the region of lower concentrations. (author)
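
    Under the two-component model, the measurement standard deviation combines an additive term that dominates near zero concentration with a multiplicative term that dominates at high concentrations. A minimal sketch (parameter values are illustrative assumptions):

```python
import numpy as np

def two_component_sd(x, sd_add, rsd_mult):
    """Approximate measurement SD under a two-component error model:
    additive noise dominates near zero, multiplicative noise at high levels."""
    return np.sqrt(sd_add**2 + (rsd_mult * x) ** 2)

conc = np.array([0.0, 0.5, 5.0, 50.0])        # total-alpha results (arbitrary units)
sd = two_component_sd(conc, sd_add=0.2, rsd_mult=0.08)

# A weighted mean with weights 1/sd^2 down-weights the noisier high values.
w = 1.0 / sd**2
print(np.sum(w * conc) / np.sum(w))
```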

  9. Applications of human error analysis to aviation and space operations

    International Nuclear Information System (INIS)

    Nelson, W.R.

    1998-01-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) we have been working to apply methods of human error analysis to the design of complex systems. We have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. We are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. These applications lead to different requirements when compared with HRAs performed as part of a PSA. For example, because the analysis will begin early during the design stage, the methods must be usable when only partial design information is available. In addition, the ability to perform numerous "what if" analyses to identify and compare multiple design alternatives is essential. Finally, since the goals of such human error analyses focus on proactive design changes rather than the estimation of failure probabilities for PRA, there is more emphasis on qualitative evaluations of error relationships and causal factors than on quantitative estimates of error frequency. The primary vehicle we have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. The first NASA-sponsored project had the goal of evaluating human errors caused by advanced cockpit automation. Our next aviation project focused on the development of methods and tools to apply human error analysis to the design of commercial aircraft. This project was performed by a consortium comprising INEEL, NASA, and Boeing Commercial Airplane Group. The focus of the project was aircraft design and procedures that could lead to human errors during airplane maintenance.

  10. ERROR DETECTION BY ANTICIPATION FOR VISION-BASED CONTROL

    Directory of Open Access Journals (Sweden)

    A ZAATRI

    2001-06-01

    Full Text Available A vision-based control system has been developed. It enables a human operator to remotely direct a robot, equipped with a camera, towards targets in 3D space by simply pointing at their images with a pointing device. This paper presents an anticipatory system designed to improve the safety and effectiveness of the vision-based commands. It simulates these commands in a virtual environment and attempts to detect hard contacts that may occur between the robot and its environment, which can be caused by machine errors or operator errors alike.

  11. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
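
    Regression calibration, the first approach mentioned, can be sketched compactly: the error-prone covariate is replaced by its expected value given the observed measurements before the Cox model is fitted. The snippet below is a minimal illustration on simulated data (the variable names, the simulated outcome and the use of the lifelines package are assumptions, not the study's actual analysis):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
x_true = rng.normal(5.0, 0.6, 300)            # unobserved true log(total REM)
w1 = x_true + rng.normal(0, 0.4, 300)         # replicate 1, measured with error
w2 = x_true + rng.normal(0, 0.4, 300)         # replicate 2

# Regression calibration: E[X|W] = mu + lambda * (W - mu), with reliability
# lambda = Var(X) / Var(Wbar); the replicates give the error variance.
wbar = (w1 + w2) / 2
sigma2_me = np.var(w1 - w2, ddof=1) / 2       # measurement-error variance
lam = (np.var(wbar, ddof=1) - sigma2_me / 2) / np.var(wbar, ddof=1)
x_calib = wbar.mean() + lam * (wbar - wbar.mean())

df = pd.DataFrame({"time": rng.exponential(10, 300),
                   "event": rng.integers(0, 2, 300),
                   "log_rem": x_calib})
CoxPHFitter().fit(df, duration_col="time", event_col="event").print_summary()
```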

  12. Savannah River Site human error data base development for nonreactor nuclear facilities

    International Nuclear Information System (INIS)

    Benhardt, H.C.; Held, J.E.; Olsen, L.M.; Vail, R.E.; Eide, S.A.

    1994-01-01

    As part of an overall effort to upgrade and streamline methodologies for safety analyses of nonreactor nuclear facilities at the Savannah River Site (SRS), a human error data base has been developed and is presented in this report. The data base fulfills several needs of risk analysts supporting safety analysis report (SAR) development. First, it provides a single source for probabilities or rates for a wide variety of human errors associated with the SRS nonreactor nuclear facilities. Second, it provides a documented basis for human error probabilities or rates. And finally, it provides actual SRS-specific human error data to support many of the error probabilities or rates. Use of a single, documented reference source for human errors, supported by SRS-specific human error data, will improve the consistency and accuracy of human error modeling by SRS risk analysts. It is envisioned that SRS risk analysts will use this report both as a guide to identifying the types of human errors that may need to be included in risk models such as fault and event trees, and as a source for human error probabilities or rates. For each human error in this report, several different mean probabilities or rates are presented to cover a wide range of conditions and influencing factors. The risk analysts must decide which mean value is most appropriate for each particular application. If other types of human errors are needed for the risk models, the analyst must use other sources. Finally, if human errors are dominant in the quantified risk models (based on the values obtained from this report), then it may be appropriate to perform detailed human reliability analyses (HRAs) for the dominant events. This document does not provide guidance for such refined HRAs; in such cases experienced human reliability analysts should be involved.

  13. Uncertainty quantification in a chemical system using error estimate-based mesh adaption

    International Nuclear Information System (INIS)

    Mathelin, Lionel; Le Maitre, Olivier P.

    2012-01-01

    This paper describes a rigorous a posteriori error analysis for the stochastic solution of non-linear uncertain chemical models. The dual-based a posteriori stochastic error analysis extends the methodology developed in the deterministic finite elements context to stochastic discretization frameworks. It requires the resolution of two additional (dual) problems to yield the local error estimate. The stochastic error estimate can then be used to adapt the stochastic discretization. Different anisotropic refinement strategies are proposed, leading to a cost-efficient tool suitable for multi-dimensional problems of moderate stochastic dimension. The adaptive strategies allow both refinement and coarsening of the stochastic discretization, as needed to satisfy a prescribed error tolerance. The adaptive strategies were successfully tested on a model of hydrogen oxidation in supercritical conditions having 8 random parameters. The proposed methodologies are, however, general enough to be applicable to a wide class of models such as uncertain fluid flows. (authors)

  14. Error Analysis Of Clock Time (T), Declination (δ) And Latitude ...

    African Journals Online (AJOL)

    ), latitude (Φ), longitude (λ) and azimuth (A); which are aimed at establishing fixed positions and orientations of survey points and lines on the earth surface. The paper attempts the analysis of the individual and combined effects of error in time ...

  15. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  16. Theory of error for target factor analysis with applications to mass spectrometry and nuclear magnetic resonance spectrometry

    International Nuclear Information System (INIS)

    Malinowski, E.R.

    1978-01-01

    Based on the theory of error for abstract factor analysis described earlier, a theory of error for target factor analysis is developed. The theory shows how the error in the data matrix mixes with the error in the target test vector. The apparent error in a target test is found to be a vector sum of the real error in the target vector and the real error in the predicted vector. The theory predicts the magnitudes of these errors without requiring any a priori knowledge of the error in the data matrix or the target vector. A reliability function and a spoil function are developed for the purpose of assessing the validity and the worthiness of a target vector. Examples from model data, mass spectrometry and nuclear magnetic resonance spectrometry are presented. (Auth.)

  17. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicates the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...
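
    The test for statistical control amounts to a chi-square comparison of the observed variability of replicates against the predicted standard deviation. A minimal sketch (the data values are illustrative):

```python
import numpy as np
from scipy.stats import chi2

def in_statistical_control(replicates, predicted_sd, alpha=0.05):
    """Compare the observed variability of replicate results with the SD
    predicted from the error budget of the analytical method."""
    reps = np.asarray(replicates, dtype=float)
    dof = reps.size - 1
    t = dof * np.var(reps, ddof=1) / predicted_sd**2  # ~ chi2(dof) if in control
    return chi2.ppf(alpha / 2, dof) < t < chi2.ppf(1 - alpha / 2, dof)

print(in_statistical_control([10.2, 9.8, 10.1, 10.4], predicted_sd=0.25))
```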

  18. SYNTACTIC ERRORS ANALYSIS IN THE CASUAL CONVERSATION 60 COMMITTED BY TWO SENIOR HIGH STUDENTS

    Directory of Open Access Journals (Sweden)

    Anjar Setiawan

    2017-12-01

    Full Text Available Syntactic structures are the basis of English grammar. This study aimed to analyze the syntactic errors in casual conversation committed by two senior high students of MAN 2 Semarang. The researcher used a qualitative approach to analyze and interpret the meaning of the casual conversation. The collected data were transcribed and analyzed based on the areas of syntactic error analysis. The findings of the study showed that all areas of syntactic errors occurred during the conversation, including auxiliaries, tenses, articles, prepositions, and conjunctions. Both speakers also had relatively weak vocabularies, and their sentences were sometimes incomprehensible to the interlocutor.

  19. Determining Bounds on Assumption Errors in Operational Analysis

    Directory of Open Access Journals (Sweden)

    Neal M. Bengtson

    2014-01-01

    Full Text Available The technique of operational analysis (OA) is used in the study of systems performance, mainly for estimating mean values of various measures of interest, such as the number of jobs at a device and response times. The basic principles of operational analysis allow errors in assumptions to be quantified over a time period. The assumptions which are used to derive the operational analysis relationships are studied. Using Karush-Kuhn-Tucker (KKT) conditions, bounds on error measures of these OA relationships are found. Examples of these bounds are used for representative performance measures to show limits on the difference between true performance values and those estimated by operational analysis relationships. A technique for finding tolerance limits on the bounds is demonstrated with a simulation example.

  20. Airborne LIDAR boresight error calibration based on surface coincidence

    International Nuclear Information System (INIS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Li, Dong; Qi, Zengying; Qiu, Wen; Tan, Junxiang

    2014-01-01

    Light Detection and Ranging (LIDAR) is a system which directly collects three-dimensional coordinate information of ground points and laser reflection strength information. With the wide application of LIDAR systems, users expect increasingly accurate results. Boresight error has an important effect on data accuracy, and eliminating this error is therefore considered very important. In recent years, many methods have been proposed to eliminate the error. Generally, they can be categorized into tie-point methods and surface-matching methods. In this paper, we propose another method, called the try value method, based on surface coincidence, which is used in actual production by many companies. The method is simple and operable. Further, the efficacy of the method is demonstrated by analyzing data from Zhangye city.

  1. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information in historical data to estimate model errors is one of the frontier research topics in science. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM), including self-organization, self-adaptation and self-learning, the dynamic information contained in the historical data can be identified and extracted automatically by computer. On this basis, a new approach to estimating model errors based on EM is proposed in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can realize the combination of statistics and dynamics to a certain extent.
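
    The twin-experiment setup described can be sketched briefly: a perturbed Lorenz-63 system plays the role of "reality" and the unperturbed system that of the imperfect model (the periodic forcing below is an illustrative stand-in for the paper's evolutionary function):

```python
import numpy as np

def lorenz63(state, t, amp=0.0):
    """Classic Lorenz (1963) system; the optional periodic forcing term
    stands in for the evolutionary function of the 'truth' model."""
    x, y, z = state
    r = 28.0 + amp * np.sin(0.1 * t)
    return np.array([10.0 * (y - x), x * r - y - x * z, x * y - 8.0 / 3.0 * z])

def rk4(f, state, dt, steps, **kw):
    """Fixed-step 4th-order Runge-Kutta integration."""
    out = [np.asarray(state, dtype=float)]
    for i in range(steps):
        s, t = out[-1], i * dt
        k1 = f(s, t, **kw)
        k2 = f(s + dt / 2 * k1, t + dt / 2, **kw)
        k3 = f(s + dt / 2 * k2, t + dt / 2, **kw)
        k4 = f(s + dt * k3, t + dt, **kw)
        out.append(s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4))
    return np.array(out)

x0 = [1.0, 1.0, 1.0]
truth = rk4(lorenz63, x0, 0.01, 2000, amp=2.0)   # "reality" with periodic term
model = rk4(lorenz63, x0, 0.01, 2000)            # imperfect prediction model
model_error = truth - model                      # signal the EM scheme learns from
```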

  2. Interactive analysis of human error factors in NPP operation events

    International Nuclear Information System (INIS)

    Zhang Li; Zou Yanhua; Huang Weigang

    2010-01-01

    Interactions of human error factors in NPP operation events are introduced, and 645 WANO operation event reports from 1999 to 2008 were analyzed, among which 432 were found to be related to human errors. After classifying these errors by root causes or causal factors and then applying SPSS for correlation analysis, we concluded: (1) Personnel work practices are restricted by many factors, and forming good personnel work practices is systematic work that needs support in many respects. (2) Verbal communications, personnel work practices, the man-machine interface, and written procedures and documents play great roles. They are four interacting factors which often come in a bundle; if improvements are to be made to one of them, synchronous measures are also necessary for the others. (3) Management direction and the decision process, which are related to management, have a significant interaction with personnel factors. (authors)

  3. Predicting positional error of MLC using volumetric analysis

    International Nuclear Information System (INIS)

    Hareram, E.S.

    2008-01-01

    IMRT normally uses multiple beamlets (small beam widths) for a particular field, so it is imperative to maintain the positional accuracy of the MLC in order to deliver the integrated computed dose accurately. Different manufacturers have reported high precision for MLC devices, with leaf positional accuracy nearing 0.1 mm, but measuring and rectifying deviations at this accuracy is very difficult. Various methods are used to check MLC position, and among these, volumetric analysis is one technique. A volumetric approach was adopted in our method using a Primus machine and a 0.6 cc chamber at 5 cm depth in Perspex. An MLC error of 1 mm introduces an error of 20%, more sensitive than other methods.

  4. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of second language acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in their written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are found more frequently than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it is advantageous for teachers to know which errors students frequently make in their learning, so that they can offer solutions to the students for better English language learning achievement. DOI: https://doi.org/10.24071/llt.2015.180205

  5. Critical slowing down and error analysis in lattice QCD simulations

    International Nuclear Information System (INIS)

    Virotta, Francesco

    2012-01-01

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.

  6. Diffraction analysis of sidelobe characteristics of optical elements with ripple error

    Science.gov (United States)

    Zhao, Lei; Luo, Yupeng; Bai, Jian; Zhou, Xiangdong; Du, Juan; Liu, Qun; Luo, Yujie

    2018-03-01

    The ripple errors of a lens lead to optical damage in high-energy laser systems. The analysis of the sidelobe on the focal plane caused by ripple error provides a reference for evaluating the error and the imaging quality. In this paper, we analyze the diffraction characteristics of the sidelobe of optical elements with ripple errors. First, we analyze the characteristics of ripple error and build the relationship between ripple error and sidelobe: the sidelobe results from the diffraction of ripple errors, and the ripple error tends to be periodic due to the fabrication method of the optical surface. Simulated experiments are carried out based on the angular spectrum method, characterizing the ripple error as rotationally symmetric periodic structures. The influence of the two major ripple parameters, spatial frequency and peak-to-valley value, on the sidelobe is discussed. The results indicate that spatial frequency and peak-to-valley value both affect the sidelobe at the image plane. The peak-to-valley value is the major factor affecting the energy proportion of the sidelobe, while the spatial frequency is the major factor affecting the distribution of the sidelobe at the image plane.

  7. A stochastic dynamic model for human error analysis in nuclear power plants

    Science.gov (United States)

    Delgado-Loperena, Dharma

    Nuclear disasters like Three Mile Island and Chernobyl indicate that human performance is a critical safety issue, sending a clear message about the need to include environmental press and competence aspects in research. This investigation was undertaken to serve as a roadmap for studying human behavior through the formulation of a general solution equation. The theoretical model integrates models from two heretofore dissociated disciplines (behavioral specialists and technical specialists) that have historically studied the nature of error and human behavior independently; it includes concepts derived from fractal and chaos theory and suggests re-evaluation of base theory regarding human error. The results of this research were based on comprehensive analysis of patterns of error, with the omnipresent underlying structure of chaotic systems. The study of patterns led to a dynamic formulation that can serve any other formula used to study the consequences of human error. The literature search regarding error yielded insight into the need to include concepts rooted in chaos theory and strange attractors, heretofore unconsidered by mainstream researchers who investigated human error in nuclear power plants or who employed the ecological model in their work. The study of patterns obtained from the simulation of a steam generator tube rupture (SGTR) event provided a direct application to aspects of control room operations in nuclear power plants. In doing so, a conceptual foundation based on understanding the patterns of human error can be gleaned, resulting in undesirable events being reduced and prevented.

  8. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    OpenAIRE

    Hoda Divsar; Robab Heydari

    2017-01-01

    The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized based on a researcher-developed error-coding scheme into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learne...

  9. A New Error Analysis and Accuracy Synthesis Method for Shoe Last Machine

    Directory of Open Access Journals (Sweden)

    Bian Xiangjuan

    2014-05-01

    Full Text Available In order to improve the manufacturing precision of the shoe last machine, a new error-computing model is put forward. First, based on the special topological structure of the shoe last machine and multi-rigid-body system theory, a spatial error-calculating model of the system was built. Then, the law of error distribution over the whole workspace was discussed and the maximum-error position of the system was found. Finally, the sensitivities of the error parameters were analyzed at the maximum-error position and accuracy synthesis was conducted using the Monte Carlo method. Taking the error sensitivity analysis into account, the accuracy of the main parts was allocated. Results show that the probability of the maximal volume error being less than 0.05 mm improved from 0.6592 for the old scheme to 0.7021 for the new scheme; the precision of the system was improved obviously. The model can be used for the error analysis and accuracy synthesis of complex multi-branch kinematic chain systems, and to improve the manufacturing precision of such systems.
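
    Monte Carlo accuracy synthesis of this kind is brief to sketch: sample the error parameters from candidate tolerances, push them through the error model at the maximum-error position, and estimate the probability that the volume error stays within specification. The sketch below is illustrative (the linearized sensitivities and tolerance values are invented placeholders):

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative error sensitivities (mm of volume error per unit parameter
# error) at the maximum-error position, and candidate part tolerances.
sensitivities = np.array([0.8, 0.5, 1.2, 0.3])
tolerances    = np.array([0.02, 0.03, 0.015, 0.05])   # +/- 3-sigma bounds (mm)

def pass_probability(tol, spec=0.05, n=100_000):
    """Probability that the synthesized volume error stays below `spec`."""
    params = rng.normal(0.0, tol / 3.0, size=(n, tol.size))  # 3-sigma tolerances
    volume_error = np.abs(params @ sensitivities)            # linearized model
    return np.mean(volume_error < spec)

print(pass_probability(tolerances))
```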

  10. Spectrogram Image Analysis of Error Signals for Minimizing Impulse Noise

    Directory of Open Access Journals (Sweden)

    Jeakwan Kim

    2016-01-01

    Full Text Available This paper presents a theoretical and experimental study of the spectrogram image analysis of error signals for minimizing impulse input noises in active noise suppression. Impulse inputs with specific wave patterns are applied as primary noises to a one-dimensional duct with a length of 1800 mm. The convergence speed of the adaptive feedforward algorithm, based on the least mean square approach, is controlled by a normalized step size incorporated into the algorithm. Variations of the step size govern the stability as well as the convergence speed; for this reason, the normalized step size is introduced as a new method for the control of impulse noise. Spectrogram images, which indicate the degree of attenuation of the impulse input noises, are used to represent the attenuation achieved with the new method. The algorithm is extensively investigated in both simulation and real-time control experiments. It is demonstrated that the suggested algorithm works with good stability and performance against impulse noises. The results of this study can be used for practical active noise control systems.
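
    The normalization referred to divides the LMS step size by the instantaneous energy of the reference signal, which keeps the weight update bounded when an impulse arrives. A minimal single-channel sketch (the filter length, step size and signals are illustrative; a real ANC system would also model the secondary path):

```python
import numpy as np

def nlms(reference, desired, n_taps=64, mu=0.5, eps=1e-6):
    """Normalized-LMS adaptive filter driven by the residual error signal.
    Normalizing by the reference energy bounds the step under impulses."""
    w = np.zeros(n_taps)
    y = np.zeros(len(reference))
    for n in range(n_taps, len(reference)):
        x = reference[n - n_taps:n][::-1]      # most recent samples first
        y[n] = w @ x                           # adaptive filter output
        e = desired[n] - y[n]                  # residual at the error sensor
        w += (mu / (eps + x @ x)) * e * x      # normalized step size
    return y, w

rng = np.random.default_rng(0)
ref = rng.normal(size=4000)
ref[1000] += 25.0                              # impulse in the primary noise
out, weights = nlms(ref, 0.7 * ref)            # filter converges to gain 0.7
```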

  11. Analysis of Employee's Survey for Preventing Human-Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Human errors in nuclear power plants can cause events or incidents, large and small. These events or incidents are among the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro and Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, the survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the level of employee training and its effect on actual work were in good condition. Employees also answered that the root causes of human error lie in the working environment, including tight processes, manpower shortage, and excessive workload, rather than in personal negligence or lack of personal knowledge; consideration of the working environment is certainly needed. At the present time, based on this survey analysis, the best methods of preventing human error are personal equipment, substantial training/education, private mental health checks before starting work, prohibition of performing multiple tasks, compliance with procedures, and enhancement of job-site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers, and between employees and bosses.

  12. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    Science.gov (United States)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies the knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the fast-turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run, in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, masscons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
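
    The core of such a tool is propagating the state covariance through linearized dynamics instead of simulating individual trajectories. A bare-bones sketch of that single step (the two-state model and noise values are illustrative placeholders, far smaller than G-CAT's 120+ states):

```python
import numpy as np

dt = 0.1
F = np.array([[1.0, dt],          # linearized dynamics: position, velocity
              [0.0, 1.0]])
Q = np.diag([0.0, 1e-4])          # process noise (e.g., accelerometer error)

P = np.diag([1.0, 0.01])          # initial knowledge covariance
for _ in range(600):              # propagate the statistics for 60 s
    P = F @ P @ F.T + Q           # a single run replaces many Monte Carlo draws

# 3-sigma position knowledge error after propagation
print(3.0 * np.sqrt(P[0, 0]))
```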

  13. Fault tree model of human error based on error-forcing contexts

    International Nuclear Information System (INIS)

    Kang, Hyun Gook; Jang, Seung Cheol; Ha, Jae Joo

    2004-01-01

    In safety-critical systems such as nuclear power plants, safety-feature actuation is fully automated. In an emergency, the human operator can also play the role of a backup for the automated systems. That is, the failure of safety-feature-actuation signal generation implies the concurrent failure of the automated systems and of manual actuation. The human operator's manual actuation failure is largely affected by error-forcing contexts (EFC), among which the failures of sensors and automated systems are the most important ones. The sensors, the automated actuation system and the human operators are correlated in a complex manner that is hard to model properly. In this paper, we explain the condition-based human reliability assessment (CBHRA) method as a practical way to treat these complicated conditions. In this study, we apply the CBHRA method to the manual actuation of safety features such as reactor trip and safety injection in Korean Standard Nuclear Power Plants.

  14. Radiological error: analysis, standard setting, targeted instruction and teamworking

    International Nuclear Information System (INIS)

    FitzGerald, Richard

    2005-01-01

    Diagnostic radiology does not have objective benchmarks for acceptable levels of missed diagnoses [1]. Until now, data collection of radiological discrepancies has been very time consuming. The culture within the specialty did not encourage it. However, public concern about patient safety is increasing. There have been recent innovations in compiling radiological interpretive discrepancy rates which may facilitate radiological standard setting. However standard setting alone will not optimise radiologists' performance or patient safety. We must use these new techniques in radiological discrepancy detection to stimulate greater knowledge sharing, targeted instruction and teamworking among radiologists. Not all radiological discrepancies are errors. Radiological discrepancy programmes must not be abused as an instrument for discrediting individual radiologists. Discrepancy rates must not be distorted as a weapon in turf battles. Radiological errors may be due to many causes and are often multifactorial. A systems approach to radiological error is required. Meaningful analysis of radiological discrepancies and errors is challenging. Valid standard setting will take time. Meanwhile, we need to develop top-up training, mentoring and rehabilitation programmes. (orig.)

  15. Error function attack of chaos synchronization based encryption schemes.

    Science.gov (United States)

    Wang, Xingang; Zhan, Meng; Lai, C-H; Gang, Hu

    2004-03-01

    Different chaos synchronization based encryption schemes are reviewed and compared from the practical point of view. As an efficient cryptanalysis tool for chaos encryption, a proposal based on the error function attack is presented systematically and used to evaluate system security. We define a quantitative measure (quality factor) of the effective applicability of a chaos encryption scheme, which takes into account the security, the encryption speed, and the robustness against channel noise. A comparison is made of several encryption schemes and it is found that a scheme based on one-way coupled chaotic map lattices performs outstandingly well, as judged from quality factor. Copyright 2004 American Institute of Physics.
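
    The attack can be illustrated with a toy one-dimensional cipher: for each trial key the attacker decodes the ciphertext and accumulates the decoding error, and the true key appears as a sharp minimum of this error function. A toy sketch (the logistic-map stream cipher is an illustrative stand-in, not the coupled-map-lattice scheme the paper favors):

```python
import numpy as np

def logistic_stream(r, n, x0=0.3):
    """Keystream from a logistic map x <- r*x*(1-x), keyed by the parameter r."""
    x, out = x0, np.empty(n)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = x
    return out

def error_function(trial_key, ciphertext):
    """Mean decoding error for a trial key; the attack exploits the sharp
    minimum of this function at the true key."""
    guess = ciphertext - logistic_stream(trial_key, len(ciphertext))
    # Binary plaintext: distance to the nearest valid symbol is the error.
    return np.mean(np.abs(guess - np.round(guess)))

keys = np.linspace(3.80, 4.00, 201)     # attacker's scan grid
true_key = keys[110]                    # 3.91; chosen on-grid so the demo is exact

bits = np.random.default_rng(3).integers(0, 2, 2000).astype(float)
ciphertext = bits + logistic_stream(true_key, 2000)

efa = np.array([error_function(k, ciphertext) for k in keys])
print(keys[efa.argmin()])               # recovers the key, 3.91
```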

  16. Neural Bases of Unconscious Error Detection in a Chinese Anagram Solution Task: Evidence from ERP Study.

    Directory of Open Access Journals (Sweden)

    Hua-Zhan Yin

    Full Text Available In everyday life, error monitoring and processing are important for improving ongoing performance in response to a changing environment. However, detecting an error is not always a conscious process. The temporal activation patterns of brain areas related to cognitive control in the absence of conscious awareness of an error remain unknown. In the present study, event-related potentials (ERPs) in the brain were used to explore the neural bases of unconscious error detection when subjects solved a Chinese anagram task. Our ERP data showed that the unconscious error detection (UED) response elicited a more negative ERP component (N2) than did no error (NE) and detect error (DE) responses in the 300-400-ms time window, and the DE elicited a greater late positive component (LPC) than did the UED and NE in the 900-1200-ms time window after the onset of the anagram stimuli. Taken together with the results of dipole source analysis, the N2 (anterior cingulate cortex) might reflect unconscious/automatic conflict monitoring, and the LPC (superior/medial frontal gyrus) might reflect conscious error recognition.

  17. Error Evaluation in a Stereovision-Based 3D Reconstruction System

    Directory of Open Access Journals (Sweden)

    Kohler Sophie

    2010-01-01

    Full Text Available The work presented in this paper deals with the performance analysis of the whole 3D reconstruction process for imaged objects, specifically of the set of geometric primitives describing their outline, extracted from a pair of images with known associated camera models. The proposed analysis focuses on error estimation for the edge detection process, the starting step of the whole reconstruction procedure. The fitting parameters describing the geometric features composing the workpiece to be evaluated are used as quality measures to determine error bounds and, finally, to estimate the edge detection errors. These error estimates are then propagated up to the final 3D reconstruction step. The suggested error analysis procedure for stereovision-based reconstruction tasks further allows evaluating the quality of the 3D reconstruction. The resulting final error estimates make it possible to state whether the reconstruction results fulfill a priori defined criteria, for example dimensional constraints including tolerance information, in vision-based quality control applications.

  18. Short-term wind power combined forecasting based on error forecast correction

    International Nuclear Information System (INIS)

    Liang, Zhengtang; Liang, Jun; Wang, Chengfu; Dong, Xiaoming; Miao, Xiaofeng

    2016-01-01

    Highlights: • The correlation relationships of short-term wind power forecast errors are studied. • The correlation analysis method of the multi-step forecast errors is proposed. • A strategy selecting the input variables for the error forecast models is proposed. • Several novel combined models based on error forecast correction are proposed. • The combined models have improved the short-term wind power forecasting accuracy. - Abstract: With the increasing contribution of wind power to electric power grids, accurate forecasting of short-term wind power has become particularly valuable for wind farm operators, utility operators and customers. The aim of this study is to investigate the interdependence structure of errors in short-term wind power forecasting that is crucial for building error forecast models with regression learning algorithms to correct predictions and improve final forecasting accuracy. In this paper, several novel short-term wind power combined forecasting models based on error forecast correction are proposed in the one-step ahead, continuous and discontinuous multi-step ahead forecasting modes. First, the correlation relationships of forecast errors of the autoregressive model, the persistence method and the support vector machine model in various forecasting modes have been investigated to determine whether the error forecast models can be established by regression learning algorithms. Second, according to the results of the correlation analysis, the range of input variables is defined and an efficient strategy for selecting the input variables for the error forecast models is proposed. Finally, several combined forecasting models are proposed, in which the error forecast models are based on support vector machine/extreme learning machine, and correct the short-term wind power forecast values. The data collected from a wind farm in Hebei Province, China, are selected as a case study to demonstrate the effectiveness of the proposed
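
    The correction idea is compact enough to sketch: fit a regression model to historical forecast errors, then subtract the predicted error from new forecasts. The snippet below uses a support vector machine, as in one of the paper's variants, on simulated data (the feature layout and all data are illustrative placeholders):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
T = 600
base_fc = rng.uniform(0, 50, T)                 # base-model forecasts (MW)
err = np.zeros(T)                               # autocorrelated forecast errors
for t in range(1, T):
    err[t] = 0.8 * err[t - 1] + rng.normal(0, 1.0)
observed = base_fc - err

# Error-forecast model: predict the next error from the previous error
# and the current base forecast.
X = np.column_stack([err[:-1], base_fc[1:]])
y = err[1:]
svr = SVR(kernel="rbf", C=10.0).fit(X[:450], y[:450])

corrected = base_fc[1:][450:] - svr.predict(X[450:])
mae = lambda f: np.mean(np.abs(f - observed[1:][450:]))
print(mae(corrected), mae(base_fc[1:][450:]))   # corrected MAE should be lower
```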

  19. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Virotta, Francesco

    2012-02-21

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.

  20. Utilizing Integrated Prediction Error Filter Analysis (INPEFA) to divide base-level cycle of fan-deltas: A case study of the Triassic Baikouquan Formation in Mabei Slope Area, Mahu Depression, Junggar Basin, China

    Science.gov (United States)

    Yuan, Rui; Zhu, Rui; Qu, Jianhua; Wu, Jun; You, Xincai; Sun, Yuqiu; Zhou, Yuanquan (Nancy)

    2018-05-01

    The Mahu Depression is an important hydrocarbon-bearing foreland sag located at the northwestern margin of the Junggar Basin, China. On the northern slope of the depression, large coarse-grained proximal fan-delta depositional systems developed in the Lower Triassic Baikouquan Formation (T1b). Some lithologic hydrocarbon reservoirs have been found in the conglomerates of the formation in recent years. However, the rapid vertical and horizontal lithology variations make it difficult to divide the base-level cycles of the formation using conventional methods. Spectral analysis technologies, such as Integrated Prediction Error Filter Analysis (INPEFA), provide an effective way to overcome this difficulty. In this paper, conventional resistivity logs processed by INPEFA are utilized to study the base-level cycles of the fan-delta depositional systems. A negative trend of the INPEFA curve indicates base-level fall semi-cycles; conversely, a positive trend suggests rise semi-cycles. Base-level cycles of the Baikouquan Formation are divided in single wells and correlated between wells. One long-term base-level rise semi-cycle, comprising three medium-term base-level cycles, is identified across the entire Baikouquan Formation. The medium-term base-level cycles are characterized mainly as rise semi-cycles in the fan-delta plain, symmetric cycles in the fan-delta front and fall semi-cycles in the pro-fan-delta. The short-term base-level rise semi-cycles are developed mostly in the braided channels, sub-aqueous distributary channels and sheet sands, while the interdistributary bays and pro-fan-delta mud indicate short-term base-level fall semi-cycles. Finally, based on the INPEFA method, a sequence filling model of the Baikouquan Formation is established.
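
    A rough sketch of the INPEFA computation under the simplest reading of the method: fit an autoregressive prediction-error filter to the log by least squares and integrate the prediction errors; trends of the resulting curve are then read as base-level semi-cycles. The filter order, sign conventions and the synthetic log are assumptions; production INPEFA implementations differ in detail.

```python
import numpy as np

def inpefa(log, order=10):
    """Fit an AR prediction-error filter to the log by least squares and
    return the integrated (cumulative) prediction error curve."""
    x = np.asarray(log, float)
    x = x - x.mean()
    # Lagged design matrix: predict x[t] from the previous `order` samples.
    X = np.column_stack([x[order - k - 1:-k - 1] for k in range(order)])
    y = x[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.cumsum(y - X @ coef)

# Toy resistivity log: two stacked upward trends plus noise; trend breaks in
# the INPEFA curve would be read as semi-cycle boundaries.
rng = np.random.default_rng(2)
log = np.concatenate([np.linspace(10, 40, 300), np.linspace(15, 50, 300)])
curve = inpefa(log + rng.normal(0, 1, log.size))
```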

  1. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Yuanshui, E-mail: yuanshui.zheng@okc.procure.com [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 and Department of Physics, Oklahoma State University, Stillwater, Oklahoma 74078-3072 (United States); Johnson, Randall; Larson, Gary [ProCure Proton Therapy Center, 5901 W Memorial Road, Oklahoma City, Oklahoma 73142 (United States)

    2016-06-15

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA to improve the safety of proton treatment planning at their center. Methods: The authors performed an FMEA of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected, and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment-planning-related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of the FMEA framework and the implementation of an ongoing error tracking system at their center help minimize treatment planning errors before they affect patient care.
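
    The RPN bookkeeping underlying the method is just the product of the occurrence, detectability and severity scores. A minimal sketch with invented failure modes and scores (none of the names or numbers are from the study):

```python
# Hypothetical failure modes scored 1-10 for occurrence (O), detectability
# (D, 10 = hardest to detect) and severity (S); names and numbers invented.
failure_modes = [
    ("wrong CT-MR image fusion",    3, 6, 9),
    ("contour on wrong image set",  2, 7, 8),
    ("beam range miscalculated",    2, 5, 10),
    ("plan exported to wrong ID",   1, 8, 9),
]

# RPN = O * D * S; rank failure modes to prioritize quality-management effort.
for name, o, d, s in sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True):
    print(f"{name:28s} RPN = {o * d * s:4d}")
```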

  2. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    International Nuclear Information System (INIS)

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-01-01

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA to improve the safety of proton treatment planning at their center. Methods: The authors performed an FMEA of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected, and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment-planning-related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of the FMEA framework and the implementation of an ongoing error tracking system at their center help minimize treatment planning errors before they affect patient care.

  3. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Stefan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Sommer, Rainer; Virotta, Francesco [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2010-09-15

    We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular, for the squared topological charge we find it to be very severe, with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops, which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)

  4. Critical slowing down and error analysis in lattice QCD simulations

    International Nuclear Information System (INIS)

    Schaefer, Stefan; Sommer, Rainer; Virotta, Francesco

    2010-09-01

    We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular, for the squared topological charge we find it to be very severe, with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops, which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)

  5. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors
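
    A crude sketch of the sensitivity-index idea on a toy error synthesis model: one-at-a-time variance ratios approximate first-order sensitivity indices of the volumetric error magnitude. The model form, coefficients and error spreads are invented, and the paper's analytical method is replaced here by Monte Carlo for brevity.

```python
import numpy as np

# Toy stand-in for an error synthesis model: position volumetric error
# magnitude at one workspace point from five geometric error components.
def volumetric_error(g, x=200.0, y=150.0):
    dx = g[0] + g[2] * y      # X positioning error + XY squareness * y
    dy = g[1] + g[2] * x      # Y positioning error + XY squareness * x
    dz = g[3] + g[4] * x      # Z positioning error + angular (Abbe) term * x
    return np.sqrt(dx ** 2 + dy ** 2 + dz ** 2)

rng = np.random.default_rng(3)
sigma = np.array([5e-3, 5e-3, 2e-5, 5e-3, 1e-5])   # spread of each component
N = 20000
full = np.array([volumetric_error(g) for g in rng.normal(0, sigma, (N, 5))])
V = full.var()
for i in range(5):
    G = np.zeros((N, 5))
    G[:, i] = rng.normal(0, sigma[i], N)           # vary only component i
    Vi = np.array([volumetric_error(g) for g in G]).var()
    print(f"component {i}: index ~ {Vi / V:.2f}")
```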

  6. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied to the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of the satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  7. Pollutant forecasting error based on persistence of wind direction

    International Nuclear Information System (INIS)

    Cooper, R.E.

    1978-01-01

    The purpose of this report is to provide a means of estimating the reliability of forecasts of downwind pollutant concentrations from atmospheric puff releases. These forecasts are based on assuming the persistence of wind direction as determined at the time of release. This initial forecast will be used to deploy survey teams, to predict population centers that may be affected, and to estimate the amount of time available for emergency response. Reliability of forecasting is evaluated by developing a cumulative probability distribution of error as a function of lapsed time following an assumed release. The cumulative error is determined by comparing the forecast pollutant concentration with the concentration measured by sampling along the real-time meteorological trajectory. It may be concluded that the assumption of meteorological persistence for emergency response is not very good for periods longer than 3 hours. Even within this period, the possibility for large error exists due to wind direction shifts. These shifts could affect population areas totally different from those areas first indicated.

  8. Tropical systematic and random error energetics based on NCEP ...

    Indian Academy of Sciences (India)

    Systematic error growth rate peak is observed at wavenumber 2 up to the 4-day forecast, then ... the influence of summer systematic error and random ... total exchange. When the error energy budgets are examined in the spectral domain, one may ask questions on the error growth at a certain wavenumber from its interaction with ...

  9. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-06

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Lévy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Lévy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. We present a novel bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound for a dissipative and pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable to, and superior to, the existing literature.
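
    A minimal numerical sketch of the pricing route (not the paper's method or bound): the density of the log-price is recovered by numerically inverting the characteristic function, and the discounted payoff is integrated against it. Black-Scholes is used as a degenerate Lévy example so the result can be checked against the closed form; all grids and parameters are assumptions.

```python
import numpy as np

# European call priced by Fourier inversion of the characteristic function of
# the log-price; Black-Scholes parameters so the answer is checkable (~13.35).
S0, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0
mu = (r - 0.5 * sigma ** 2) * T

def phi(u):
    """Characteristic function of X = log(S_T / S_0)."""
    return np.exp(1j * u * mu - 0.5 * sigma ** 2 * T * u ** 2)

# Recover the density of X on a grid, then integrate the discounted payoff.
x = np.linspace(-2.0, 2.0, 1601)
u = np.linspace(-100.0, 100.0, 2001)
du = u[1] - u[0]
dens = np.real(np.exp(-1j * np.outer(x, u)) @ phi(u)) * du / (2 * np.pi)
payoff = np.maximum(S0 * np.exp(x) - K, 0.0)
price = np.exp(-r * T) * np.trapz(payoff * dens, x)
print(price)
```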

  10. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s, a primary focus of human reliability analysis was the estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables was often lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid-1990s, Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA-sponsored Advanced Concepts grant to assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included a method to identify and prioritize task and contextual characteristics affecting human reliability, as well as comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant, FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling, are offered as a means to help direct useful data collection strategies.

  11. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as with GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEM) have been constructed as if they were of additive random errors. We simulate a model landslide, which is assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative error measurements on DEM construction and its effect on the estimate of landslide mass volume from the constructed DEM.
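
    The core distinction can be seen in a few lines: with multiplicative errors the variance of a measurement grows with its true value, so ordinary LS is no longer efficient and weights proportional to 1/value² are appropriate. A sketch on simulated data (model and numbers invented, not the paper's adjustments):

```python
import numpy as np

# Measurements with multiplicative errors: y_i = f_i * (1 + eps_i), so
# Var(y_i) grows with the signal. Compare ordinary and weighted LS for the
# slope of f(x) = a * x.
rng = np.random.default_rng(4)
x = np.linspace(1, 10, 200)
truth = 2.0 * x
y = truth * (1 + rng.normal(0, 0.05, x.size))

a_ols = np.sum(x * y) / np.sum(x * x)          # ignores the value-dependent variance
w = 1.0 / y ** 2                               # weights ~ 1 / Var(y_i)
a_wls = np.sum(w * x * y) / np.sum(w * x * x)
print(a_ols, a_wls)                            # both near 2; WLS is more efficient
```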

  12. Research on Error Modelling and Identification of 3 Axis NC Machine Tools Based on Cross Grid Encoder Measurement

    International Nuclear Information System (INIS)

    Du, Z C; Lv, C F; Hong, M S

    2006-01-01

    A new error modelling and identification method based on the cross grid encoder is proposed in this paper. Generally, there are 21 error components in the geometric error of 3 axis NC machine tools. However, according to our theoretical analysis, the squareness error among different guide ways affects not only the translational error components, but also the rotational ones. Therefore, a revised synthetic error model is developed, and the mapping relationship between the error components and the radial motion error of a round workpiece manufactured on the NC machine tool is deduced. This mapping relationship shows that the radial error of circular motion is the comprehensive result of all the error components of the link, worktable, sliding table and main spindle block. To overcome the solution singularity shortcoming of traditional error component identification methods, a new multi-step identification method of error components using the cross grid encoder measurement technology is proposed based on the kinematic error model of the NC machine tool. Firstly, the 12 translational error components of the NC machine tool are measured and identified by the least squares method (LSM) when the NC machine tool performs linear motions in the three orthogonal planes: the XOY plane, XOZ plane and YOZ plane. Secondly, the circular error tracks are measured when the NC machine tool performs circular motions in the same orthogonal planes using the cross grid encoder Heidenhain KGM 182, from which the 9 rotational errors can be identified by the LSM. Finally, the experimental validation of the above modelling theory and identification method is carried out on the 3 axis CNC vertical machining centre Cincinnati 750 Arrow. All 21 error components have been successfully measured by the above method. The research shows that the multi-step modelling and identification method is very suitable for on-machine measurement.

  13. Likelihood-Based Inference in Nonlinear Error-Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    We consider a class of vector nonlinear error correction models where the transfer function (or loadings) of the stationary relationships is nonlinear. This includes in particular the smooth transition models. A general representation theorem is given which establishes the dynamic properties ... and a linear trend in general. Gaussian likelihood-based estimators are considered for the long-run cointegration parameters and the short-run parameters. Asymptotic theory is provided for these, and it is discussed to what extent asymptotic normality and mixed normality can be found. A simulation study ...

  14. A Posteriori Error Analysis of Stochastic Differential Equations Using Polynomial Chaos Expansions

    KAUST Repository

    Butler, T.; Dawson, C.; Wildey, T.

    2011-01-01

    We develop computable a posteriori error estimates for linear functionals of a solution to a general nonlinear stochastic differential equation with random model/source parameters. These error estimates are based on a variational analysis applied to stochastic Galerkin methods for forward and adjoint problems. The result is a representation for the error estimate as a polynomial in the random model/source parameter. The advantage of this method is that we use polynomial chaos representations for the forward and adjoint systems to cheaply produce error estimates by simple evaluation of a polynomial. By comparison, the typical method of producing such estimates requires repeated forward/adjoint solves for each new choice of random parameter. We present numerical examples showing that there is excellent agreement between these methods. © 2011 Society for Industrial and Applied Mathematics.
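
    The practical payoff the abstract describes is that, once computed, the error estimate is a polynomial in the random parameter, so new evaluations are polynomial evaluations rather than forward/adjoint solves. A trivial sketch with invented coefficients:

```python
import numpy as np

# Error estimate represented as a polynomial in the random parameter theta;
# the coefficients stand in for those produced by the stochastic Galerkin solves.
error_poly = np.polynomial.Polynomial([1.2e-3, -4.0e-4, 7.5e-5])

thetas = np.linspace(-1.0, 1.0, 5)   # new parameter values to query
print(error_poly(thetas))            # cheap evaluations, no forward/adjoint solves
```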

  15. Detailed semantic analyses of human error incidents occurring at nuclear power plants. Extraction of periodical transition of error occurrence patterns by applying multivariate analysis

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Suzuki, Kunihiko; Takano, Kenichi; Kojima, Mitsuhiro

    2000-01-01

    It is essential for preventing the recurrence of human error incidents to analyze and evaluate them with an emphasis on human factors. Detailed and structured analyses of all incidents at domestic nuclear power plants (NPPs) reported during the last 31 years have been conducted based on J-HPES, in which a total of 193 human error cases were identified. Results obtained by the analyses have been stored in the J-HPES database. In a previous study, by applying multivariate analysis to the above case studies, it was suggested that there were several identifiable patterns of how errors occur at NPPs. It was also clarified that the causes related to each human error differ depending on the period of occurrence. This paper describes the results obtained with respect to the periodic transition of human error occurrence patterns. By applying multivariate analysis to the above data, it was suggested that there were two types of error occurrence patterns for each human error type: the first consists of common occurrence patterns that do not depend on the period, and the second is influenced by period-specific characteristics. (author)

  16. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  17. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Up to now, research in this field has lacked a systematic and universal analysis. This paper proposes in detail an approach for the synthetic error analysis of the star tracker, without complicated theoretical derivation. This approach can determine the error propagation relationship of the star tracker, and can build an error model intuitively and systematically. The analysis results can be used as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted. Excellent calibration results are achieved based on the calibration model. To summarize, the error analysis approach and the calibration method are proved to be adequate and precise, and could provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  18. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan, E-mail: liushuhuan@mail.xjtu.edu.cn; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-21

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the reliability and soft errors of a system-on-chip, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, several parameters used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as the failure rate, unavailability and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.
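
    The reliability bookkeeping mentioned (failure rate, MTTF, unavailability) reduces to a few formulas when the top gate is an OR over independent, exponentially failing blocks. A sketch with invented rates, not the measured Zynq-7010 values:

```python
# Constant failure rates (per hour) for a few SoC blocks combined through a
# top OR gate: any block failure fails the system. Rates are illustrative.
rates = {"OCM": 2e-7, "BRAM": 5e-7, "config memory": 1e-6, "PS caches": 3e-7}

lam = sum(rates.values())                 # OR gate over independent blocks: rates add
mttf = 1.0 / lam                          # mean time to failure (hours)
mttr = 8.0                                # assumed mean time to repair (hours)
unavailability = lam * mttr / (1.0 + lam * mttr)  # steady state, repairable system
print(f"failure rate = {lam:.2e}/h, MTTF = {mttf:.3g} h, "
      f"unavailability = {unavailability:.3g}")
```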

  19. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    Science.gov (United States)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak, as pointed out by the results of international assessments such as TIMSS. This might be caused by the various types of errors students make. Hence, this study aimed at identifying students' errors in solving TIMSS mathematical problems on the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students with the most errors on the test indicators, taken from 34 8th-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving Applying-level problems, students made operational errors. In addition, for Reasoning-level problems, students made three types of errors: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  20. Error-Based Accidents and Security Incidents in Nuclear Materials Management

    International Nuclear Information System (INIS)

    Pond, Daniel J.; Greitzer, Frank L.

    2005-01-01

    Hazard and risk assessments, along with human error analysis and mitigation techniques, have long been mainstays of effective safety programs. These tools have revealed that worker errors contributing to or resulting in accidents are often the consequence of ineffective system conditions, process features, or individual employee characteristics. At Los Alamos National Laboratory (LANL), security, safety, human error, and organizational analysts determined that the system-induced human errors that make accidents more likely also are contributing to security incidents. A similar set of system conditions has been found to underlie deliberate, non-malevolent deviations from proper security practices - termed breaches - that also can result in a security incident. In fiscal-year (FY) 2002, LANL's Security Division therefore established the ESTHER (Enhanced Security Through Human Error Reduction) program to identify and reduce the influence of the factors that underlie employee errors and breaches and, in turn, security incidents. Recognizing the potential benefits of this program and approach, in FY2004 the Department of Energy (DOE) Office of Security Policy (DOE-SO) funded an expansion of ESTHER implementation to the causal assessment and reporting of security incidents at other DOE sites. This presentation will focus on three applications of error/breach assessment and mitigation techniques. One use is proactive, accomplished through the elimination of contributors to error, whereas two are reactive, implemented in response to accidents or security incidents as well as to near misses, to prevent recurrence. The human performance and safety bases of these techniques will be detailed. Associated tools - including computer-based assessment training and web-based incident reporting modules developed by Pacific Northwest National Laboratory - will be discussed

  1. A Simulation-Based Soft Error Estimation Methodology for Computer Systems

    OpenAIRE

    Sugihara, Makoto; Ishihara, Tohru; Hashimoto, Koji; Muroyama, Masanori

    2006-01-01

    This paper proposes a simulation-based soft error estimation methodology for computer systems. Accumulating the soft error rates (SERs) of all memories in a computer system results in pessimistic soft error estimation. This is because memory cells are used spatially and temporally, and not all soft errors in them make the computer system faulty. Our soft-error estimation methodology considers the locations and the timings of soft errors occurring at every level of the memory hierarchy and estimates the SER of the computer system accordingly.

  2. Error Analysis of CM Data Products Sources of Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-01

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE's Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  3. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in the time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient-based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured results of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and the time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20, with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
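
    A compressed sketch of the two-stage idea on simulated data: a regression predicts one analyte from the rest of the panel, a logistic model scores each result as a possible error, and a one-sided CUSUM tallies the scores. Day-of-week and time-of-day features are omitted, the classifier is trained on the same labeled shift it later detects purely for illustration, and all numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(5)
n, k = 5000, 13
panel = rng.normal(0, 1, (n, k))                       # the other 13 panel tests
target = panel[:, :4].sum(axis=1) + rng.normal(0, 0.5, n)

# Stage 1: predict the analyte from the rest of the panel.
pred = LinearRegression().fit(panel, target).predict(panel)

# Inject a calibration shift in the second half to emulate an instrument error.
measured = target.copy()
measured[n // 2:] += 1.0
labels = np.zeros(n)
labels[n // 2:] = 1

# Stage 2: logistic model scoring each result as a possible error.
feats = np.column_stack([measured, pred])
p_err = LogisticRegression().fit(feats, labels).predict_proba(feats)[:, 1]

# Stage 3: one-sided CUSUM tally of the error scores.
h, ref, s, alarm = 5.0, p_err[: n // 2].mean(), 0.0, None
for i, p in enumerate(p_err):
    s = max(0.0, s + p - ref)
    if s > h:
        alarm = i
        break
print("first alarm at sample", alarm)                  # shortly after sample 2500
```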

  4. Linear and nonlinear magnetic error measurements using action and phase jump analysis

    Directory of Open Access Journals (Sweden)

    Javier F. Cardona

    2009-01-01

    Full Text Available "Action and phase jump" analysis is presented: a beam-based method that uses amplitude and phase knowledge of a particle trajectory to locate and measure magnetic errors in an accelerator lattice. The expected performance of the method is first tested using single-particle simulations in the optical lattice of the Relativistic Heavy Ion Collider (RHIC). Such simulations predict that under ideal conditions typical quadrupole errors can be estimated within an uncertainty of 0.04%. Other simulations suggest that sextupole errors can be estimated within a 3% uncertainty. The action and phase jump analysis is then applied to real RHIC orbits with known quadrupole errors, and to real Super Proton Synchrotron (SPS) orbits with known sextupole errors. It is possible to estimate the strength of a skew quadrupole error from measured RHIC orbits within a 1.2% uncertainty, and to estimate the strength of a strong sextupole component from the measured SPS orbits within a 7% uncertainty.

  5. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by a small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and an attitude recording of just 4 Hz, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, nor is detailed satellite orientation information. There is a tendency towards systematic deformation in a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors in object space, but systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory leveling of height models, but low-frequency height deformations can also be seen. In theory, a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct leveling of the height model. In addition, a model deformation at GCP locations may lead to suboptimal DHM leveling. Supported by reference height models, better accuracy has been reached. As reference height models, the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS PRISM imagery, can be used.
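
    The leveling step supported by a reference height model can be sketched as a best-fit plane through the height differences: estimate bias and tilt by least squares and subtract them. Grids are assumed co-registered; the data are synthetic.

```python
import numpy as np

ny, nx = 200, 200
yy, xx = np.mgrid[0:ny, 0:nx]
rng = np.random.default_rng(6)
ref = 500.0 + 0.1 * xx + rng.normal(0, 0.5, (ny, nx))   # reference DSM heights
dhm = ref + 2.0 + 1e-3 * xx - 5e-4 * yy                 # DHM with bias and tilt

# Fit bias + tilt to the height differences and remove them (leveling).
diff = (dhm - ref).ravel()
A = np.column_stack([np.ones(diff.size), xx.ravel(), yy.ravel()])
coef, *_ = np.linalg.lstsq(A, diff, rcond=None)         # [bias, tilt_x, tilt_y]
leveled = dhm - (A @ coef).reshape(ny, nx)
print("RMS before:", np.sqrt(np.mean((dhm - ref) ** 2)),
      "after:", np.sqrt(np.mean((leveled - ref) ** 2)))
```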

  6. Spinocerebellar ataxia type 2 neurodegeneration differentially affects error-based and strategic-based visuomotor learning.

    Science.gov (United States)

    Vaca-Palomares, Israel; Díaz, Rosalinda; Rodríguez-Labrada, Roberto; Medrano-Montero, Jacqeline; Vázquez-Mojena, Yaimé; Velázquez-Pérez, Luis; Fernandez-Ruiz, Juan

    2013-12-01

    There are different types of visuomotor learning. Among the most studied is motor error-based learning, where the sign and magnitude of the error are used to update motor commands. However, there are other instances where individuals show visuomotor learning even if the sign or magnitude of the error is precluded. Studies with patients suggest that the former learning is impaired after cerebellar lesions, while basal ganglia lesions disrupt the latter. Nevertheless, the cerebellar role is not restricted to error-based learning; it also contributes to several cognitive processes. Therefore, here we tested whether cerebellar ataxia patients are affected in two tasks, one that depends on error-based learning and one that prevents the use of error-based learning. Our results showed that cerebellar patients have deficits in both visuomotor tasks; however, while deficits in the error-based learning task correlated with the motor impairments, deficits in the task that precluded error-based learning did not correlate with any motor measure.

  7. Radiation risk estimation based on measurement error models

    CERN Document Server

    Masiuk, Sergii; Shklyar, Sergiy; Chepurny, Mykola; Likhtarov, Illya

    2017-01-01

    This monograph discusses statistics and risk estimates applied to radiation damage in the presence of measurement errors. The first part covers nonlinear measurement error models, with a particular emphasis on the efficiency of regression parameter estimators. In the second part, risk estimation in models with measurement errors is considered. The efficiency of the methods presented is verified using data from radio-epidemiological studies.

  8. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors.

  9. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    Full Text Available This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors of RADAR neighbor matching localization can be an effective tool to explore alternative deployments of fingerprint-based neighbor matching localization systems in the future.
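
    A toy Monte Carlo version of the setup, assuming a one-dimensional corridor, a log-distance RSS model and nearest-neighbor fingerprint matching; the geometry, path-loss exponent and noise level are all invented, and the paper's closed-form analysis is replaced by simulation.

```python
import numpy as np

rng = np.random.default_rng(7)
P0, n_exp = -40.0, 3.0                              # path-loss model parameters
aps = np.array([0.0, 25.0, 50.0])                   # access point positions (m)
rps = np.linspace(1.0, 49.0, 25)                    # calibrated reference points

def rss(pos):
    d = np.abs(pos - aps) + 0.5                     # avoid log10(0)
    return P0 - 10.0 * n_exp * np.log10(d)

fingerprints = np.array([rss(p) for p in rps])

# Monte Carlo error of nearest-neighbor matching under RSS shadowing noise.
errors = []
for _ in range(2000):
    true = rng.uniform(1.0, 49.0)
    obs = rss(true) + rng.normal(0, 2.0, aps.size)  # noisy observed fingerprint
    idx = np.argmin(np.linalg.norm(fingerprints - obs, axis=1))
    errors.append(abs(rps[idx] - true))
print("mean localization error (m):", np.mean(errors))
```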

  10. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Science.gov (United States)

    Tian, Zengshan; Xu, Kunjie; Yu, Xiang

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors of RADAR neighbor matching localization can be an effective tool to explore alternative deployments of fingerprint-based neighbor matching localization systems in the future. PMID:24683349

  11. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    Science.gov (United States)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    Full-field heterodyne interferometric measurement is now better applied by employing low-frequency heterodyne acousto-optical modulators instead of complex electro-mechanical scanning devices. The optical element surface can be directly acquired by synchronously detecting the received signal phases of each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in a heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described, and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequencies to produce interference, which brings a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two inescapable kinds of heterodyne frequency error. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between the phase extraction accuracy and the errors is calculated. The tolerance of the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light is given. The error of phase extraction by Fourier analysis caused by beat frequency shifting is derived and calculated. We also propose an improved phase extraction method based on spectrum correction: an amplitude-ratio spectrum correction algorithm using a Hanning window is used to correct the heterodyne signal phase extraction. The simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
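
    The per-pixel Fourier phase extraction can be sketched as follows, with the phase taken relative to a reference pixel so that the window's linear-phase term and a small beat-frequency error largely cancel. The Hanning window matches the abstract; the signal model and numbers are assumptions, and the amplitude-ratio spectrum correction itself is not reproduced here.

```python
import numpy as np

fs, n = 10_000.0, 1024               # sampling rate (Hz), samples per pixel
f_beat = 250.3                       # actual beat frequency (nominal 250 Hz)
t = np.arange(n) / fs
win = np.hanning(n)

def peak_phase(sig):
    spec = np.fft.rfft(sig * win)
    k = np.argmax(np.abs(spec))      # bin nearest the beat frequency
    return np.angle(spec[k])

ref_px = np.cos(2 * np.pi * f_beat * t)          # reference pixel, phase 0
pixel = np.cos(2 * np.pi * f_beat * t + 0.8)     # surface adds 0.8 rad here

dphi = peak_phase(pixel) - peak_phase(ref_px)
print(f"extracted phase difference = {dphi:.3f} rad (true 0.8)")
```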

  12. Estimation error algorithm at analysis of beta-spectra

    International Nuclear Information System (INIS)

    Bakovets, N.V.; Zhukovskij, A.I.; Zubarev, V.N.; Khadzhinov, E.M.

    2005-01-01

    This work describes an algorithm for estimating errors in operations with beta spectra, and compares the theoretical and experimental errors obtained in the processing of beta-channel data. (authors)

  13. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    dictionary plays a key role in the speech recognition accuracy. ... A sophisticated microphone is used for recording the speech corpus in a noise-free environment. ... values, the word error rate (WER) and error rate will be calculated as follows: ...
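
    The WER referred to is the word-level Levenshtein distance normalized by the reference length. A self-contained sketch:

```python
import numpy as np

def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1, j - 1] + (r[i - 1] != h[j - 1])
            d[i, j] = min(sub, d[i - 1, j] + 1, d[i, j - 1] + 1)
    return d[len(r), len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sat mat"))  # 2 deletions -> 0.333...
```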

  14. Design of a single-borehole hydraulic test programme allowing for interpretation-based errors

    International Nuclear Information System (INIS)

    Black, J.H.

    1987-07-01

    Hydraulic testing using packers in single boreholes is one of the most important sources of data for safety assessment modelling in connection with the disposal of radioactive waste. It is also one of the most time-consuming and expensive. It is important that the results are as reliable as possible and as accurate as necessary for the use that is made of them. There are many causes of possible error and inaccuracy, ranging from poor field practice to inappropriate interpretation procedures. The report examines and attempts to quantify the size of error arising from the accidental use of an inappropriate or inadequate interpretation procedure. In doing so, it can be seen which interpretation procedure or combination of procedures results in the least error. Lastly, the report attempts to use the conclusions from interpretation to propose forms of field test procedure in which interpretation-based errors will be minimised. Hydraulic tests (sometimes known as packer tests) come in three basic forms: slug/pulse, constant flow and constant head. They have different characteristics, some measuring a variable volume of rock (dependent on hydraulic conductivity) and some having a variable duration (dependent on hydraulic conductivity). A combination of different tests in the same interval is seen as desirable. For the purposes of assessing interpretation-based errors, slug and pulse tests are considered together, as are constant flow and constant head tests. The same method is used in each case to assess errors. The method assumes that the simplest analysis procedure (cylindrical flow in homogeneous isotropic porous rock) will be used on each set of field data. The error is assessed by calculating synthetic data for alternative configurations (e.g. fissured rock, anisotropic rock, inhomogeneous rock - i.e. skin - etc.) and then analyzing these data using the simplest analysis procedure. 28 refs., 26 figs

  15. A trend analysis of human error events for proactive prevention of accidents. Methodology development and effective utilization

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Ebisu, Mitsuhiro; Aikawa, Takeshi; Matsubara, Katsuyuki

    2006-01-01

    This paper describes methods for analyzing the human error events that have accumulated at an individual plant and for utilizing the results to prevent accidents proactively. Firstly, a categorization framework for the trigger actions and causal factors of human error events was reexamined, and the procedure to analyze human error events was reviewed based on this framework. Secondly, a method for identifying the common characteristics of the trigger action data and causal factor data accumulated by analyzing human error events was clarified. In addition, to utilize the results of the trend analysis effectively, methods to develop teaching material for safety education, to develop checkpoints for error prevention and to introduce an error management process for strategic error prevention were proposed. (author)

  16. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah; Amin, Osama; Ikki, Salama S.; Alouini, Mohamed-Slim

    2018-01-01

    Error probability study of hardware impaired (HWI) systems highly depends on the adopted model. Recent models have proved that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver a sub-optimal receiver, because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed, and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions, as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, various numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.

  17. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah

    2018-04-26

    Error probability study of hardware impaired (HWI) systems highly depends on the adopted model. Recent models have proved that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver a sub-optimal receiver, because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed, and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions, as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, various numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds, and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.

  18. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and considering Rayleigh fading channels. © 2011 IEEE.

  19. Kitchen Physics: Lessons in Fluid Pressure and Error Analysis

    Science.gov (United States)

    Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano

    2017-02-01

    Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between the pressure and temperature of an ideal gas in a constant-volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts and for investigating a variety of error analysis techniques that are frequently overlooked in introductory physics courses.
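
    For the first experiment, the analysis reduces to a linear fit of pressure against depth, since P = P0 + ρgh. A sketch with made-up readings:

```python
import numpy as np

depth = np.array([0.00, 0.05, 0.10, 0.15, 0.20])                 # m
pressure = np.array([101325, 101815, 102305, 102800, 103290.0])  # Pa
slope, intercept = np.polyfit(depth, pressure, 1)
rho = slope / 9.81                                               # P = P0 + rho*g*h
print(f"density ~ {rho:.0f} kg/m^3")                             # ~1000 for water
```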

  20. Error analysis of acceleration control loops of a synchrotron

    International Nuclear Information System (INIS)

    Zhang, S.Y.; Weng, W.T.

    1991-01-01

    For beam control during acceleration, it is conventional to derive the frequency from an external reference, be it a field marker or an external oscillator, and to provide phase and radius feedback loops to ensure the phase stability, radial position and emittance integrity of the beam. The open- and closed-loop behaviors of both feedback controls and their responses to possible frequency, phase and radius errors are derived from fundamental principles and equations. The stability of the loops is investigated under a wide range of variations of the gains and time delays. The actual system performance of the AGS Booster is analyzed and compared to commissioning experiences. Such analysis is useful for setting design criteria and tolerances for new proton synchrotrons. 4 refs., 13 figs.

  1. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    Science.gov (United States)

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Noise, distortions, and binning were applied to the patterns prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixels in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
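
    The matching step of the dictionary approach can be sketched as a normalized dot product between the experimental pattern and every dictionary entry; the orientation of the best-scoring entry is returned. Here both sides are random stand-ins rather than physics-based simulations, and the sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(8)
n_dict, npix = 20_000, 25 * 25       # dictionary size, 25 x 25 binned patterns
dictionary = rng.normal(size=(n_dict, npix)).astype(np.float32)
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)

# "Experimental" pattern: dictionary entry 1234 plus pixel noise.
exp = dictionary[1234] + 0.02 * rng.normal(size=npix).astype(np.float32)
exp /= np.linalg.norm(exp)

scores = dictionary @ exp            # normalized dot product with every entry
best = int(np.argmax(scores))
print(best, scores[best])            # expect 1234 with the top score
```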

  2. A Method and Support Tool for the Analysis of Human Error Hazards in Digital Devices

    International Nuclear Information System (INIS)

    Lee, Yong Hee; Kim, Seon Soo; Lee, Yong Hee

    2012-01-01

    In recent years, many nuclear power plants have adopted modern digital I and C technologies, since these are expected to significantly improve both the economic efficiency and the safety of the plants. However, the introduction of an advanced main control room (MCR) is accompanied by many changes in the form and features of new digital devices. User-friendly displays and new features in digital devices are not by themselves enough to prevent human errors in nuclear power plants (NPPs). It is an urgent matter to identify the human error potentials due to digital devices, and their detailed mechanisms, so that they can be considered during the design of digital devices and their interfaces. The characteristics of digital technologies and devices offer many opportunities for interface management, and they can be integrated into a compact single workstation in an advanced MCR, such that workers can operate the plant with minimal burden under any operating condition. However, these devices may introduce new types of human errors, and thus a means is needed to evaluate and prevent such errors, especially within digital devices for NPPs. This research suggests a new method named HEA-BIS (Human Error Analysis based on Interaction Segment) to confirm and detect human errors associated with digital devices. The method can be facilitated by support tools when used to ensure safety in applying digital devices in NPPs.

  3. Scanner qualification with IntenCD based reticle error correction

    Science.gov (United States)

    Elblinger, Yair; Finders, Jo; Demarteau, Marcel; Wismans, Onno; Minnaert Janssen, Ingrid; Duray, Frank; Ben Yishai, Michael; Mangan, Shmoolik; Cohen, Yaron; Parizat, Ziv; Attal, Shay; Polonsky, Netanel; Englard, Ilan

    2010-03-01

    Scanner introduction into the fab production environment is a challenging task. An efficient evaluation of scanner performance metrics during the factory acceptance test (FAT) and later during the site acceptance test (SAT) is crucial for minimizing the cycle time of pre- and post-production-start activities. If done effectively, the baseline performance metrics established during the SAT are used as a reference for scanner performance and fleet-matching monitoring and maintenance in the fab environment. Key elements which can influence the cycle time of the SAT, FAT and maintenance cycles are the imaging, process and mask characterizations involved in those cycles. Discrete mask measurement techniques are currently used to create across-mask CDU maps. By subtracting these maps from their final wafer-measurement CDU map counterparts, it is possible to assess the real scanner-induced printed errors, within certain limitations. The current discrete measurement methods are time consuming, and some techniques also overlook mask-based effects other than line width variations, such as transmission and phase variations, all of which influence the final printed CD variability. The Applied Materials Aera2 mask inspection tool with IntenCD technology can scan the mask at high speed, offering full mask coverage and accurate assessment of all mask-induced sources of error simultaneously, making it beneficial for scanner qualification and performance monitoring. In this paper we report on a study that was done to improve a scanner introduction and qualification process using the IntenCD application to map mask-induced CD non-uniformity. We present the results of six scanners in production and discuss the benefits of the new method.

  4. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions, which occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern. This is impossible in practice on working systems. Therefore, under memoryless binary symmetric channel (MBSC) assumptions, performance evaluation or estimation for digital signal 1 (DS1) transmission systems carrying live traffic is an interesting and important problem. This study presents some analytical methods leading to efficient burst-error detection algorithms using cyclic redundancy check (CRC) codes. The definition of burst error is introduced using three different models; among the three, the mathematical model is used in this study. A probability density function f(b) for burst errors of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within a CRC block. The simulation results show that the mean block burst error tends to approach the pattern generated by random bit errors.
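
    For context, a CRC's guarantee that every burst no longer than the check width is detected can be demonstrated directly. The sketch below implements a bitwise CRC-16 (CCITT polynomial assumed; the record does not specify one) and flips contiguous runs of message bits:

      import random

      def crc16(bits, poly=0x11021, n=16):
          """Bitwise CRC over a list of 0/1 message bits (poly includes the x^16 term)."""
          reg = 0
          for b in bits + [0] * n:          # append n zero bits for the remainder
              reg = (reg << 1) | b
              if reg & (1 << n):
                  reg ^= poly
          return reg

      random.seed(0)
      msg = [random.randint(0, 1) for _ in range(256)]
      check = crc16(msg)

      detected, trials = 0, 1000
      for _ in range(trials):
          burst_len = random.randint(1, 16)          # bursts up to the CRC width
          start = random.randrange(len(msg) - burst_len)
          corrupted = msg[:]
          for i in range(start, start + burst_len):  # flip a contiguous run of bits
              corrupted[i] ^= 1
          if crc16(corrupted) != check:
              detected += 1
      print(f"{detected}/{trials} bursts detected")   # expect all: CRC-16 catches bursts <= 16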

  5. SPACE-BORNE LASER ALTIMETER GEOLOCATION ERROR ANALYSIS

    Directory of Open Access Journals (Sweden)

    Y. Wang

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed through simulation experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and, to satisfy the accuracy requirements of laser control points, a design index for each error source is put forward.
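
    As a rough illustration of how such an error propagation equation behaves, the first-order, small-angle sketch below converts assumed pointing, range and orbit errors into horizontal and vertical geolocation errors for a near-nadir altimeter. The magnitudes and the simplified relations are illustrative assumptions, not the paper's ICESAT values:

      import numpy as np

      H = 600e3                          # orbit altitude, m (assumed)
      theta = np.deg2rad(0.3)            # off-nadir pointing angle (assumed)

      d_point = np.deg2rad(1.5 / 3600)   # 1.5 arcsec pointing knowledge error
      d_range = 0.10                     # 10 cm range error
      d_pos = 0.05                       # 5 cm orbit position error

      # Horizontal error is dominated by pointing: a footprint shift of ~H * d_theta.
      horiz = np.hypot(H * d_point, d_pos)
      # Vertical error couples pointing through the tilt: dz ~ H * tan(theta) * d_theta.
      vert = np.hypot(d_range, H * np.tan(theta) * d_point)
      print(f"horizontal ~ {horiz:.2f} m, vertical ~ {vert:.3f} m")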

  6. Error analysis of isotope dilution mass spectrometry method with internal standard

    International Nuclear Information System (INIS)

    Rizhinskii, M.W.; Vitinskii, M.Y.

    1989-02-01

    The computation algorithms of the normalized isotopic ratios and element concentration by isotope dilution mass spectrometry with internal standard are presented. A procedure based on the Monte-Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out in the case of the certification of uranium and plutonium reference materials as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
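
    The proposed Monte-Carlo prediction of expected errors can be sketched generically: sample every measured input from its stated uncertainty, push the samples through the measurement equation, and read off the spread of the output. The single-ratio IDMS form and all numbers below are illustrative assumptions, not the certified procedure:

      import numpy as np

      rng = np.random.default_rng(42)
      N = 100_000

      # Hypothetical IDMS inputs, each sampled with its stated 1-sigma uncertainty.
      R_spike = rng.normal(100.0, 0.20, N)        # isotope ratio of the spike
      R_sample = rng.normal(0.0073, 0.00002, N)   # natural ratio of the sample
      R_mix = rng.normal(1.10, 0.005, N)          # measured ratio of the blend
      c_spike = rng.normal(5.00, 0.01, N)         # spike concentration, ug/g

      # Simplified single-ratio IDMS equation (illustrative form only).
      c_sample = c_spike * (R_spike - R_mix) / (R_mix - R_sample)

      rel_std = 100 * c_sample.std() / c_sample.mean()
      print(f"mean = {c_sample.mean():.2f} ug/g, relative std = {rel_std:.3f}%")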

  7. A human error taxonomy and its application to an automatic method of accident analysis

    International Nuclear Information System (INIS)

    Matthews, R.H.; Winter, P.W.

    1983-01-01

    Commentary is provided on the quantification aspects of human factors analysis in risk assessment. Methods for quantifying human error in a plant environment are discussed and their application to system quantification explored. Such a programme entails consideration of the data base and a taxonomy of factors contributing to human error. A multi-levelled approach to system quantification is proposed, each level being treated differently drawing on the advantages of different techniques within the fault/event tree framework. Management, as controller of organization, planning and procedure, is assigned a dominant role. (author)

  8. Human errors identification using the human factors analysis and classification system technique (HFACS)

    Directory of Open Access Journals (Sweden)

    G. A. Shirali

    2013-12-01

    Results: In this study, 158 accident reports from the Ahvaz steel industry were analyzed using the HFACS technique. The analysis showed that most human errors were related, at the first level, to skill-based errors; at the second level, to the physical environment; at the third level, to inadequate supervision; and at the fourth level, to resource management. Conclusion: Studying and analyzing past events using the HFACS technique can identify the major and root causes of accidents and can be effective in preventing the repetition of such mishaps. It can also be used as a basis for developing strategies to prevent future events in steel industries.

  9. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    Science.gov (United States)

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors, but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors was made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate

  10. Error Analysis of Explicit Partitioned Runge–Kutta Schemes for Conservation Laws

    KAUST Repository

    Hundsdorfer, Willem

    2014-08-27

    An error analysis is presented for explicit partitioned Runge–Kutta methods and multirate methods applied to conservation laws. The interfaces across which different methods or time steps are used lead to order reduction of the schemes. In addition to cell-based decompositions, flux-based decompositions are also studied. In the latter case mass conservation is guaranteed, but it will be seen that the accuracy may deteriorate.

  11. Error Analysis of Explicit Partitioned Runge–Kutta Schemes for Conservation Laws

    KAUST Repository

    Hundsdorfer, Willem; Ketcheson, David I.; Savostianov, Igor

    2014-01-01

    An error analysis is presented for explicit partitioned Runge–Kutta methods and multirate methods applied to conservation laws. The interfaces across which different methods or time steps are used lead to order reduction of the schemes. In addition to cell-based decompositions, flux-based decompositions are also studied. In the latter case mass conservation is guaranteed, but it will be seen that the accuracy may deteriorate.

  12. Analysis of error functions in speckle shearing interferometry

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2001-01-01

    Electronic Speckle Pattern Shearing Interferometry (ESPSI), or shearography, has successfully been used in NDT for slope (∂w/∂x and/or ∂w/∂y) measurement, while strain measurement (∂u/∂x, ∂v/∂y, ∂u/∂y and ∂v/∂x) is still under investigation. The method is well accepted in industrial applications, especially in the aerospace industry, and demand for it is increasing due to the complexity of test materials and objects. ESPSI has so far been applied successfully in NDT only for qualitative measurement, whilst quantitative measurement is the current aim of many manufacturers. Industrial use of such equipment proceeds without considering the errors arising from numerous sources, including wavefront divergence. The majority of commercial systems are operated with diverging object illumination wavefronts, without considering the curvature of the object illumination wavefront or the object geometry when calculating the interferometer fringe function and quantifying data. This thesis reports a novel approach to quantifying the maximum phase-change difference for derivative out-of-plane (OOP) and in-plane (IP) cases arising from a divergent illumination wavefront, compared with collimated illumination. The theoretical maximum phase difference is formulated by means of several dependent variables: the object distance, illuminated diameter, centre of the illuminated area, camera distance and illumination angle. The relative maximum phase-change difference that may contribute to error in the measurement analysis within this scope of research is defined as the difference between the maximum phase difference measured with a divergent illumination wavefront and that of a collimated illumination wavefront, taken at the edge of the illuminated area. Experimental validation using test objects for derivative out-of-plane and derivative in-plane deformation, using a single illumination wavefront

  13. Analysis of strain error sources in micro-beam Laue diffraction

    International Nuclear Information System (INIS)

    Hofmann, Felix; Eve, Sophie; Belnoue, Jonathan; Micha, Jean-Sébastien; Korsunsky, Alexander M.

    2011-01-01

    Micro-beam Laue diffraction is an experimental method that allows the measurement of local lattice orientation and elastic strain within individual grains of engineering alloys, ceramics, and other polycrystalline materials. Unlike other analytical techniques, e.g. based on electron microscopy, it is not limited to surface characterisation or thin sections, but rather allows non-destructive measurements in the material bulk. This is of particular importance for in situ loading experiments where the mechanical response of a material volume (rather than just surface) is studied and it is vital that no perturbation/disturbance is introduced by the measurement technique. Whilst the technique allows lattice orientation to be determined to a high level of precision, accurate measurement of elastic strains and estimating the errors involved is a significant challenge. We propose a simulation-based approach to assess the elastic strain errors that arise from geometrical perturbations of the experimental setup. Using an empirical combination rule, the contributions of different geometrical uncertainties to the overall experimental strain error are estimated. This approach was applied to the micro-beam Laue diffraction setup at beamline BM32 at the European Synchrotron Radiation Facility (ESRF). Using a highly perfect germanium single crystal, the mechanical stability of the instrument was determined and hence the expected strain errors predicted. Comparison with the actual strain errors found in a silicon four-point beam bending test showed good agreement. The simulation-based error analysis approach makes it possible to understand the origins of the experimental strain errors and thus allows a directed improvement of the experimental geometry to maximise the benefit in terms of strain accuracy.

  14. Bootstrap-Based Improvements for Inference with Clustered Errors

    OpenAIRE

    Doug Miller; A. Colin Cameron; Jonah B. Gelbach

    2006-01-01

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence when estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject considerably.
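
    One of the remedies discussed in this literature is the pairs cluster bootstrap, which resamples whole clusters with replacement so that within-cluster correlation is preserved. A minimal sketch on simulated data with few clusters (all parameters assumed):

      import numpy as np

      rng = np.random.default_rng(0)
      G, n_per = 10, 30                       # few clusters, as in the paper's setting
      g = np.repeat(np.arange(G), n_per)
      u_g = rng.standard_normal(G)            # cluster effect -> within-cluster correlation
      x = rng.standard_normal(G * n_per)
      y = 1.0 + 0.5 * x + u_g[g] + rng.standard_normal(G * n_per)

      def ols_slope(x, y):
          X = np.column_stack([np.ones_like(x), x])
          return np.linalg.lstsq(X, y, rcond=None)[0][1]

      # Pairs cluster bootstrap: resample whole clusters with replacement.
      B = 2000
      boot = np.empty(B)
      for b in range(B):
          draw = rng.integers(0, G, G)
          idx = np.concatenate([np.where(g == c)[0] for c in draw])
          boot[b] = ols_slope(x[idx], y[idx])
      print("cluster-bootstrap SE of slope:", boot.std(ddof=1))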

  15. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  16. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the memory required of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; and 3) without applying any strategy to eliminate learning errors, there is a threshold of learning error rate above which convergence is impaired. These findings may help to better understand the role of learning errors in the naming game, as well as in human language development, from a network science perspective.
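
    A stripped-down version of such a model is easy to reproduce. The sketch below runs a minimal naming game with a transmission error probability on a fully-mixed population (a simplification of the paper's random-graph, small-world and scale-free topologies; all parameters are assumptions):

      import random

      random.seed(1)
      N, p_err, max_steps = 200, 0.02, 200_000
      lexicon = [set() for _ in range(N)]
      next_word = 0

      for step in range(1, max_steps + 1):
          s, h = random.sample(range(N), 2)        # speaker, hearer (fully mixed)
          if not lexicon[s]:                       # empty lexicon: invent a new word
              lexicon[s].add(next_word)
              next_word += 1
          word = random.choice(tuple(lexicon[s]))
          if random.random() < p_err:              # learning error: word corrupted in transit
              word = next_word
              next_word += 1
          if word in lexicon[h]:                   # success: both collapse to the word
              lexicon[s] = {word}
              lexicon[h] = {word}
          else:                                    # failure: hearer memorizes the word
              lexicon[h].add(word)
          if all(len(lx) == 1 for lx in lexicon) and len(set.union(*lexicon)) == 1:
              print("consensus at step", step)
              break
      else:
          print("no global consensus; words in circulation:", len(set.union(*lexicon)))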

  17. Study of the Ultimate Error of the Axis Tolerance Feature and Its Pose Decoupling Based on an Area Coordinate System

    Directory of Open Access Journals (Sweden)

    Qungui Du

    2018-03-01

    Manufacturing error and assembly error should be taken into consideration when evaluating and analyzing product performance accurately in the design phase. Traditional tolerance analysis methods establish an error propagation model based on dimension chains, with tolerance values regarded as error boundaries, and obtain the limit of the target feature error through optimization methods, or conduct statistical analysis with the tolerance domain as the boundary. As deviations of the tolerance feature (TF) on its degrees of freedom (DOF) have coupling relations, accurate deviations on all DOF may not be obtained, even though these deviations constitute the basis for product performance analysis. Therefore, taking the widely used shaft-hole fit as an example, a pose decoupling model of the axis TF is proposed based on an area coordinate system. This model realizes decoupling analysis of any pose of the axis TF within the tolerance domain. By combining it with a tolerance analysis model based on tracking local coordinate systems, proposed earlier by the authors, ultimate pose analysis of the closed-loop system, namely the target feature, as well as statistical analysis, can be further implemented. This approach supports analysis of true product performance under arbitrary error in the product design phase from the angle of tolerance, therefore shortening the product research and development cycle. The method is demonstrated through a real-life example.

  18. Accuracy Improvement of Multi-Axis Systems Based on Laser Correction of Volumetric Geometric Errors

    Science.gov (United States)

    Teleshevsky, V. I.; Sokolov, V. A.; Pimushkin, Ya I.

    2018-04-01

    The article describes a volumetric geometric error correction method for CNC-controlled multi-axis systems (machine tools, CMMs, etc.). Kalman's concept of “control and observation” is used. A versatile multi-function laser interferometer is used as the observer in order to measure the machine's error functions. A systematic error map of the machine's workspace is produced based on measurements of the error functions, and the error map feeds into an error correction strategy. The article proposes a new method of forming the error correction strategy, based on the error distribution within the machine's workspace and a CNC-program postprocessor. The postprocessor provides minimal error values within a maximal zone of the workspace. The results are confirmed by error correction of precision CNC machine tools.

  19. Quaternion error-based optimal control applied to pinpoint landing

    Science.gov (United States)

    Ghiglino, Pablo

    Accurate control for pinpoint planetary landing (the goal of achieving landing errors on the order of 100 m for unmanned missions) is a complex problem that has been tackled in different ways in the available literature. Among other challenges, this kind of control is affected by the well-known trade-off in UAV control that, for complex underlying models, the control is sub-optimal, while optimal control is applied to simplified models. The goal of this research has been the development of new control algorithms able to tackle these challenges, and the results are two novel optimal control algorithms, namely OQTAL and HEX2OQTAL. These controllers share three key properties that are thoroughly proven and shown in this thesis: stability, accuracy and adaptability. Stability is rigorously demonstrated for both controllers. Accuracy is shown by comparing these novel controllers with industry-standard algorithms in several different scenarios: there is a gain in accuracy of at least 15% for each controller, and in many cases much more than that. A new tuning algorithm based on swarm heuristics optimisation was also developed as part of this research, in order to tune online the standard Proportional-Integral-Derivative (PID) controllers used for benchmarking. Finally, the adaptability of these controllers can be seen as a combination of four elements: mathematical model extensibility, cost matrix tuning, reduced computation time required, and no prior knowledge of the navigation or guidance strategies needed. Further simulations on real planetary landing trajectories have shown that these controllers can achieve landing errors on the order of pinpoint landing requirements, making them not only very precise UAV controllers, but also potential candidates for pinpoint landing unmanned missions.

  20. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost, light-weight, has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent rejection of errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the effects of the position of the current-carrying conductor, including un-centeredness and un-perpendicularity, have not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of relative error, the un-center offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. The relative error caused by the position of the current-carrying conductor is compared between four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
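
    The central idea, approximating Ampere's law by a discrete sum of tangential field samples, can be sketched directly. The toy model below (2-D, infinitely long straight conductor, illustrative dimensions) reproduces the qualitative four-versus-eight-sensor comparison for an off-center conductor:

      import numpy as np

      MU0 = 4e-7 * np.pi

      def current_estimate(n_sensors, radius, conductor_xy, current=100.0):
          """Discrete Ampere's law around a circle of tangentially oriented sensors."""
          phi = 2 * np.pi * np.arange(n_sensors) / n_sensors
          sx, sy = radius * np.cos(phi), radius * np.sin(phi)
          tx, ty = -np.sin(phi), np.cos(phi)           # sensing axes: tangential
          rx, ry = sx - conductor_xy[0], sy - conductor_xy[1]
          r2 = rx ** 2 + ry ** 2
          # Field of an infinite wire: B = mu0*I/(2*pi*r), perpendicular to r.
          bx = -MU0 * current * ry / (2 * np.pi * r2)
          by = MU0 * current * rx / (2 * np.pi * r2)
          b_t = bx * tx + by * ty
          return np.sum(b_t) * (2 * np.pi * radius / n_sensors) / MU0

      for n in (4, 8):                                 # off-center conductor at 40% of radius
          est = current_estimate(n, radius=0.05, conductor_xy=(0.02, 0.0))
          print(f"N={n}: relative error = {abs(est - 100.0) / 100.0:.2e}")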

  1. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    Science.gov (United States)

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
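
    Because the cephalometric angles are simple functions of landmark coordinates, the sensitivity to an error in the x-coordinate of Point A is easy to reproduce numerically. The landmark coordinates below are hypothetical, chosen only to give plausible angle values:

      import numpy as np

      def angle_at(vertex, p1, p2):
          """Angle (degrees) at `vertex` between the rays to p1 and p2."""
          u, v = p1 - vertex, p2 - vertex
          c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
          return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

      # Hypothetical landmark coordinates (mm) traced on a lateral cephalogram.
      S, N = np.array([0.0, 0.0]), np.array([80.0, 5.0])
      A, B = np.array([78.0, -45.0]), np.array([72.0, -75.0])

      sna, snb = angle_at(N, S, A), angle_at(N, S, B)
      print(f"SNA={sna:.1f}  SNB={snb:.1f}  ANB={sna - snb:.1f}")

      # Shift Point A along x by +1 mm (the error direction the study flags) and re-measure.
      A2 = A + np.array([1.0, 0.0])
      print(f"dSNA from a 1 mm x-error in Point A: {angle_at(N, S, A2) - sna:+.2f} deg")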

  2. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Herzog, Hans

    2013-01-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking each of the eight heads in turn as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 without brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with that of water (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed, revealing very little influence of the skull lesions (less than 3%), while the filled

  3. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    Science.gov (United States)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of the template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking each of the eight heads in turn as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 without brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with that of water (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from one another. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed, revealing very little influence of the skull lesions (less than 3%), while the filled nasal

  4. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.

    Science.gov (United States)

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H

    2016-11-10

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users, to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the estimation error and the variance in the force difference between handlebars and the support on the handlebars, which relates to the user condition; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  5. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator

    Directory of Open Access Journals (Sweden)

    Joaquin Ballesteros

    2016-11-01

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users, to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the estimation error and the variance in the force difference between handlebars and the support on the handlebars, which relates to the user condition; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  6. SU-E-T-789: Validation of 3DVH Accuracy On Quantifying Delivery Errors Based On Clinical Relevant DVH Metrics

    International Nuclear Information System (INIS)

    Ma, T; Kumaraswamy, L

    2015-01-01

    Purpose: Detection of treatment delivery errors is important in radiation therapy. However, accurate quantification of delivery errors is also of great importance. This study aims to evaluate the 3DVH software's ability to accurately quantify delivery errors. Methods: Three VMAT plans (prostate, H&N and brain) were randomly chosen for this study. First, we evaluated whether delivery errors could be detected by gamma evaluation. Conventional per-beam IMRT QA was performed with the ArcCHECK diode detector for the original plans and for the following modified plans: (1) induced dose difference errors up to ±4.0%, (2) control point (CP) deletion (3 to 10 CPs were deleted), and (3) gantry angle shift error (3-degree uniform shift). 2D and 3D gamma evaluations were performed for all plans through SNC Patient and 3DVH, respectively. Subsequently, we investigated the accuracy of the 3DVH analysis for all cases. This part evaluated, using the Eclipse TPS plans as the standard, whether 3DVH can accurately model the changes in clinically relevant metrics caused by the delivery errors. Results: 2D evaluation seemed to be more sensitive to delivery errors. The average differences between Eclipse-predicted and 3DVH results for each pair of specific DVH constraints were within 2% for all three types of error-induced treatment plans, illustrating that 3DVH is fairly accurate in quantifying the delivery errors. Another interesting observation was that even though the gamma pass rates for the error plans are high, the DVHs showed significant differences between the original plan and the error-induced plans in both the Eclipse and 3DVH analyses. Conclusion: The 3DVH software is shown to accurately quantify the error in delivered dose based on clinically relevant DVH metrics, where a conventional gamma-based pre-treatment QA might not necessarily detect them.
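
    For readers unfamiliar with the gamma evaluation referred to throughout, a 1-D global gamma index can be sketched in a few lines; the profiles, the 3%/3 mm criteria and the induced errors below are illustrative, not the study's VMAT data:

      import numpy as np

      def gamma_1d(x, dose_ref, dose_eval, dta=3.0, dd=0.03):
          """Global 1-D gamma index (dta in mm, dd as a fraction of max reference dose)."""
          dmax = dose_ref.max()
          g = np.empty_like(dose_ref)
          for i, (xi, di) in enumerate(zip(x, dose_ref)):
              dist2 = ((x - xi) / dta) ** 2
              dose2 = ((dose_eval - di) / (dd * dmax)) ** 2
              g[i] = np.sqrt(np.min(dist2 + dose2))     # search over all evaluated points
          return g

      x = np.linspace(0, 100, 201)                      # positions in mm
      ref = np.exp(-((x - 50) / 15) ** 2)               # toy reference dose profile
      ev = 1.02 * np.exp(-((x - 51) / 15) ** 2)         # 2% dose error plus 1 mm shift
      g = gamma_1d(x, ref, ev)
      print(f"gamma pass rate (3%/3 mm): {100 * np.mean(g <= 1):.1f}%")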

  7. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes of the kind typically used in sheet metal forming simulation, are evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping (IVR) method, which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of each donor element's Gauss volume that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is performed incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three remapping strategies is addressed with two tests. The first remapping test was taken from a literature work; it consists in remapping a rotating symmetrical mesh successively, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregularly shaped target mesh from a given regularly shaped donor mesh and proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low, with a stable evolution along the number of remapping procedures, when compared with the
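
    The premise of the IVR method, target values formed as overlap-weighted averages of donor-cell values, is easiest to see in one dimension, where the "intersecting volume" is just an interval length. A minimal, conservative 1-D sketch (meshes and values are illustrative, not the paper's FE data):

      import numpy as np

      def remap_1d(donor_edges, donor_vals, target_edges):
          """Overlap-weighted remap of cell-average values between 1-D meshes,
          assuming each donor cell's value is uniform over its volume."""
          out = np.zeros(len(target_edges) - 1)
          for j in range(len(out)):
              lo, hi = target_edges[j], target_edges[j + 1]
              # Length of intersection between target cell j and every donor cell.
              w = np.clip(np.minimum(hi, donor_edges[1:]) -
                          np.maximum(lo, donor_edges[:-1]), 0.0, None)
              out[j] = np.sum(w * donor_vals) / (hi - lo)
          return out

      donor = np.linspace(0.0, 1.0, 11)                      # 10 regular donor cells
      vals = np.sin(np.pi * 0.5 * (donor[:-1] + donor[1:]))  # cell-centred samples
      target = np.sort(np.random.default_rng(3).uniform(0, 1, 9))
      target = np.concatenate([[0.0], target, [1.0]])        # 10 irregular target cells
      print(remap_1d(donor, vals, target))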

  8. An error taxonomy system for analysis of haemodialysis incidents.

    Science.gov (United States)

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting an error taxonomy system that assumed no specific specialty to haemodialysis situations. It was applied to 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to the dialyser, circuit, medication and setting of dialysis conditions. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. The error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified under staff human factors, communication, task and organisational factors were found in most dialysis incidents. Devices/equipment/materials, medicines and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicines and documents, whereas dialysis technologists made more errors with devices/equipment/materials. This error taxonomy system is able not only to investigate incidents and adverse events occurring in the dialysis setting but also to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  9. Constrained motion estimation-based error resilient coding for HEVC

    Science.gov (United States)

    Guo, Weihan; Zhang, Yongfei; Li, Bo

    2018-04-01

    Unreliable communication channels can lead to packet losses and bit errors in the videos transmitted through them, causing severe degradation of video quality. This is even worse for HEVC, since more advanced and powerful motion estimation methods are introduced to further remove inter-frame redundancy and thus improve coding efficiency. Once a motion vector (MV) is lost or corrupted, it will cause distortion in the decoded frame. More importantly, due to motion compensation, the error will propagate along the motion prediction path, accumulate over time, and significantly degrade the overall video presentation quality. To address this problem, we study encoder-side error resilient coding for HEVC and propose a constrained motion estimation scheme to mitigate the propagation of errors to subsequent frames. The approach works by cutting off MV dependencies and limiting the block regions that are predicted by temporal motion vectors. The experimental results show that the proposed method can effectively suppress the error propagation caused by bit errors in motion vectors and can improve the robustness of the stream over bit-error channels. When the bit error probability is 10^-5, an increase in decoded video quality (PSNR) of up to 1.310 dB, and on average 0.762 dB, can be achieved compared to the reference HEVC.

  10. A slicing-based approach for locating type errors

    NARCIS (Netherlands)

    T.B. Dinesh; F. Tip (Frank)

    1998-01-01

    The effectiveness of a type checking tool strongly depends on the accuracy of the positional information that is associated with type errors. We present an approach where the location associated with an error message e is defined as a slice P_e of the program P being type checked. We

  11. A slicing-based approach for locating type errors

    NARCIS (Netherlands)

    T.B. Dinesh; F. Tip (Frank)

    1998-01-01

    The effectiveness of a type checking tool strongly depends on the accuracy of the positional information that is associated with type errors. We present an approach where the location associated with an error message e is defined as a slice P_e of the program P being type checked. We

  12. A Corpus-based Study of EFL Learners’ Errors in IELTS Essay Writing

    Directory of Open Access Journals (Sweden)

    Hoda Divsar

    2017-03-01

    The present study analyzed different types of errors in EFL learners' IELTS essays. In order to determine the major types of errors, a corpus of 70 IELTS examinees' writings was collected, and their errors were extracted and categorized qualitatively. Errors were categorized, based on a researcher-developed error-coding scheme, into 13 aspects. Based on the descriptive statistical analyses, the frequency of each error type was calculated and the commonest errors committed by the EFL learners in IELTS essays were identified. The results indicated that the two most frequent errors that IELTS candidates committed were related to word choice and verb forms. Based on the research results, the pedagogical implications highlight analyzing EFL learners' writing errors as a useful basis for instructional purposes, including creating pedagogical teaching materials that are in line with learners' linguistic strengths and weaknesses.

  13. Generalization error analysis: deep convolutional neural network in mammography

    Science.gov (United States)

    Richter, Caleb D.; Samala, Ravi K.; Chan, Heang-Ping; Hadjiiski, Lubomir; Cha, Kenny

    2018-02-01

    We conducted a study to gain understanding of the generalizability of deep convolutional neural networks (DCNNs) given their inherent capability to memorize data. We examined empirically a specific DCNN trained for classification of masses on mammograms. Using a data set of 2,454 lesions from 2,242 mammographic views, a DCNN was trained to classify masses into malignant and benign classes using transfer learning from ImageNet LSVRC-2010. We performed experiments with varying amounts of label corruption and types of pixel randomization to analyze the generalization error for the DCNN. Performance was evaluated using the area under the receiver operating characteristic curve (AUC) with an N-fold cross validation. Comparisons were made between the convergence times, the inference AUCs for both the training set and the test set of the original image patches without corruption, and the root-mean-squared difference (RMSD) in the layer weights of the DCNN trained with different amounts and methods of corruption. Our experiments observed trends which revealed that the DCNN overfitted by memorizing corrupted data. More importantly, this study improved our understanding of DCNN weight updates when learning new patterns or new labels. Although we used a specific classification task with the ImageNet as example, similar methods may be useful for analysis of the DCNN learning processes, especially those that employ transfer learning for medical image analysis where sample size is limited and overfitting risk is high.
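
    The label-corruption experiment generalizes beyond DCNNs: a sufficiently flexible classifier fitted to partially corrupted labels can memorize them (training AUC stays near 1) while its test AUC degrades. A small stand-in sketch using a random forest on synthetic data (not the mammography DCNN or its dataset; all parameters assumed):

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=40, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)
      rng = np.random.default_rng(0)

      for frac in (0.0, 0.2, 0.4):
          yc = ytr.copy()
          flip = rng.choice(len(yc), int(frac * len(yc)), replace=False)
          yc[flip] = 1 - yc[flip]                  # corrupt a fraction of training labels
          clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, yc)
          auc_tr = roc_auc_score(yc, clf.predict_proba(Xtr)[:, 1])   # vs corrupted labels
          auc_te = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])  # vs clean held-out labels
          print(f"corruption {frac:.0%}: train AUC {auc_tr:.3f}  test AUC {auc_te:.3f}")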

  14. A Simple Exact Error Rate Analysis for DS-CDMA with Arbitrary Pulse Shape in Flat Nakagami Fading

    Science.gov (United States)

    Rahman, Mohammad Azizur; Sasaki, Shigenobu; Kikuchi, Hisakazu; Harada, Hiroshi; Kato, Shuzo

    A simple exact error rate analysis is presented for random binary direct sequence code division multiple access (DS-CDMA), considering a general pulse shape and a flat Nakagami fading channel. First, a simple model is developed for the multiple access interference (MAI). Based on this, a simple exact expression for the characteristic function (CF) of the MAI is developed in a straightforward manner. Finally, an exact expression for the error rate is obtained following the CF method of error rate analysis. The exact error rate so obtained can be evaluated much more easily than the only reliable approximate error rate expression currently available, which is based on the Improved Gaussian Approximation (IGA).

  15. A Human Error Analysis Procedure for Identifying Potential Error Modes and Influencing Factors for Test and Maintenance Activities

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2010-01-01

    Periodic and non-periodic test and maintenance (T and M) activities in large, complex systems such as nuclear power plants (NPPs) are essential for sustaining stable and safe operation of the systems. On the other hand, it has also been pointed out that erroneous human actions occurring during T and M activities have the potential to incur unplanned reactor trips (RTs) or power derating, to make safety-related systems unavailable, or to degrade the reliability of components. The contribution of human errors during normal and abnormal activities of NPPs to unplanned RTs is known to be about 20% of the total events. This paper introduces a procedure for predictively analyzing human error potentials when maintenance personnel perform T and M tasks based on a work procedure or their work plan. This procedure helps a plant maintenance team prepare for plausible human errors. The procedure focuses on the recurrent error forms (or modes) in execution-based errors, such as wrong object, omission, too little, and wrong action.

  16. THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS

    Directory of Open Access Journals (Sweden)

    Natalia Bakhova

    2011-03-01

    The most important practical questions in reliably estimating the errors of the finite element method are considered. Rules for defining the necessary calculation accuracy are developed. Methods and calculation approaches are offered that allow the best final results to be obtained at an economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.

  17. Error Analysis in a Written Composition

    Directory of Open Access Journals (Sweden)

    David Alberto Londoño Vásquez

    2008-12-01

    Learners make errors in both comprehension and production. Some theoreticians have pointed out the difficulty of assigning the cause of failures in comprehension to an inadequate knowledge of a particular syntactic feature of a misunderstood utterance. Indeed, an error can be defined as a deviation from the norms of the target language. In this investigation, based on personal and professional experience, a written composition entitled "My Life in Colombia" is analyzed based on clinical elicitation (CE) research. CE involves getting the informant to produce data of any sort, for example, by means of a general interview or by asking the learner to write a composition. Some errors produced by a foreign language learner in her acquisition process are analyzed, identifying their possible sources. Finally, four kinds of errors are classified: omission, addition, misinformation, and misordering.

  18. Determination of optimal samples for robot calibration based on error similarity

    Directory of Open Access Journals (Sweden)

    Tian Wei

    2015-06-01

    Industrial robots are used for automatic drilling and riveting. The absolute position accuracy of an industrial robot is one of the key performance indexes in aircraft assembly, and can be improved through error compensation to meet aircraft assembly requirements. The achievable accuracy and the difficulty of implementing accuracy compensation are closely related to the choice of sampling points. Therefore, based on the error-similarity compensation method, a method for choosing sampling points on a uniform grid is proposed. A simulation is conducted to analyze the influence of the sample point locations on error compensation. In addition, the grid steps of the sampling points are optimized using a statistical analysis method. The method is used to generate grids and optimize the grid steps for a Kuka KR-210 robot. The experimental results show that the method for planning sampling data can effectively optimize the sampling grid. After error compensation, the position accuracy of the robot meets the position accuracy requirements.
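
    A compensation scheme of this kind can be sketched as: measure the position error on a grid of sampling points, then predict the error at an arbitrary pose from nearby samples. The sketch below uses simple inverse-distance weighting as a stand-in for the paper's error-similarity model; the error field, workspace and grid are all assumed:

      import numpy as np

      def true_error(p):
          """Hypothetical smooth position-error field of the robot (mm), unknown in practice."""
          return 0.5 * np.sin(p[..., 0] / 400) + 0.3 * np.cos(p[..., 1] / 300)

      def compensate(q, samples, errs, power=2):
          """Estimate the error at query point q by inverse-distance weighting of grid samples."""
          d = np.linalg.norm(samples - q, axis=1)
          if np.any(d < 1e-9):
              return errs[np.argmin(d)]
          w = 1.0 / d ** power
          return np.sum(w * errs) / np.sum(w)

      rng = np.random.default_rng(7)
      gx, gy = np.meshgrid(np.linspace(0, 2000, 9), np.linspace(0, 1500, 7))  # grid (mm)
      samples = np.column_stack([gx.ravel(), gy.ravel()])
      errs = true_error(samples)                        # measured grid errors

      queries = rng.uniform([0, 0], [2000, 1500], size=(500, 2))
      residual = [true_error(q) - compensate(q, samples, errs) for q in queries]
      print("RMS residual after compensation:", np.sqrt(np.mean(np.square(residual))), "mm")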

  19. Evaluation and Error Analysis for a Solar Thermal Receiver

    International Nuclear Information System (INIS)

    Pfander, M.

    2001-01-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs

  20. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module in dependence on the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  1. Error analysis for determination of accuracy of an ultrasound navigation system for head and neck surgery.

    Science.gov (United States)

    Kozak, J; Krysztoforski, K; Kroll, T; Helbig, S; Helbig, M

    2009-01-01

    The use of conventional CT- or MRI-based navigation systems for head and neck surgery is unsatisfactory due to tissue shift. Moreover, changes occurring during surgical procedures cannot be visualized. To overcome these drawbacks, we developed a novel ultrasound-guided navigation system for head and neck surgery. A comprehensive error analysis was undertaken to determine the accuracy of this new system. The evaluation of the system accuracy was essentially based on the method of error definition for well-established fiducial marker registration methods (point-pair matching) as used in, for example, CT- or MRI-based navigation. This method was modified in accordance with the specific requirements of ultrasound-guided navigation. The Fiducial Localization Error (FLE), Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. In our navigation system, the real error (the TRE actually measured) did not exceed a volume of 1.58 mm³ with a probability of 0.9. A mean value of 0.8 mm (standard deviation: 0.25 mm) was found for the FRE. The quality of the coordinate tracking system (Polaris localizer) could be defined with an FLE of 0.4 ± 0.11 mm (mean ± standard deviation). The quality of the coordinates of the crosshairs of the phantom was determined with a deviation of 0.5 mm (standard deviation: 0.07 mm). The results demonstrate that our newly developed ultrasound-guided navigation system shows only very small system deviations and therefore provides very accurate data for practical applications.
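
    The FRE/TRE machinery referenced here follows standard point-pair (fiducial) registration. The sketch below estimates a rigid transform by the Kabsch/SVD method from noisy fiducials and reports FRE and TRE at a distant target; the geometry, noise level and pose are illustrative assumptions:

      import numpy as np

      def rigid_register(src, dst):
          """Least-squares rigid (point-pair) registration via the Kabsch/SVD method."""
          cs, cd = src.mean(0), dst.mean(0)
          H = (src - cs).T @ (dst - cd)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
          R = Vt.T @ D @ U.T
          return R, cd - R @ cs

      rng = np.random.default_rng(5)
      fiducials = rng.uniform(-50, 50, (6, 3))          # marker positions, mm (assumed)
      target = np.array([0.0, 0.0, 80.0])               # surgical target away from markers

      theta = 0.3                                       # ground-truth pose
      R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0, 0.0, 1.0]])
      t_true = np.array([10.0, -5.0, 2.0])
      # Fiducial localization error of ~0.4 mm, matching the FLE reported in the record.
      measured = fiducials @ R_true.T + t_true + rng.normal(0, 0.4, fiducials.shape)

      R, t = rigid_register(fiducials, measured)
      fre = np.sqrt(np.mean(np.sum((fiducials @ R.T + t - measured) ** 2, axis=1)))
      tre = np.linalg.norm((R_true @ target + t_true) - (R @ target + t))
      print(f"FRE = {fre:.2f} mm, TRE = {tre:.2f} mm")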

  2. Application of human error theory in case analysis of wrong procedures.

    Science.gov (United States)

    Duthie, Elizabeth A

    2010-06-01

    The goal of this study was to contribute to the emerging body of literature about the role of human behaviors and cognitive processes in the commission of wrong procedures. Case analysis of 5 wrong procedures in operative and nonoperative settings using James Reason's human error theory was performed. The case analysis showed that cognitive underspecification, cognitive flips, automode processing, and skill-based errors were contributory to wrong procedures. Wrong-site procedures accounted for the preponderance of the cases. Front-line supervisory staff used corrective actions that focused on the performance of the individual without taking into account cognitive factors. System fixes using human cognition concepts have a greater chance of achieving sustainable safety outcomes than those that are based on the traditional approach of counseling, education, and disciplinary action for staff.

  3. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. In contrast, naive procedures that ignore such complexity in the data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Discontinuous Galerkin methods and a posteriori error analysis for heterogenous diffusion problems

    International Nuclear Information System (INIS)

    Stephansen, A.F.

    2007-12-01

    In this thesis we analyse a discontinuous Galerkin (DG) method and two computable a posteriori error estimators for the linear and stationary advection-diffusion-reaction equation with heterogeneous diffusion. The DG method considered, the SWIP method, is a variation of the Symmetric Interior Penalty Galerkin method. The difference is that the SWIP method uses weighted averages with weights that depend on the diffusion. The a priori analysis shows optimal convergence with respect to mesh-size and robustness with respect to heterogeneous diffusion, which is confirmed by numerical tests. Both a posteriori error estimators are of the residual type and control the energy (semi-)norm of the error. Local lower bounds are obtained showing that almost all indicators are independent of heterogeneities. The exception is for the non-conforming part of the error, which has been evaluated using the Oswald interpolator. The second error estimator is sharper in its estimate with respect to the first one, but it is slightly more costly. This estimator is based on the construction of an H(div)-conforming Raviart-Thomas-Nedelec flux using the conservativeness of DG methods. Numerical results show that both estimators can be used for mesh-adaptation. (author)

  5. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present a bit error rate (BER) and achievable spectral efficiency (ASE) performance analysis of a free-space optical (FSO) link with pointing errors, based on intensity modulation/direct detection (IM/DD) and heterodyne detection over general Malaga turbulence channels. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  6. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    Science.gov (United States)

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitudes smaller than the number of random effect levels, the computational requirements for our method should be reduced.
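
    The quantity at the heart of this record can be made concrete with a toy example. The sketch below builds Henderson's mixed model equations for a small model with contemporary-group fixed effects and one random effect, and extracts the prediction error variance-covariance (PEV) matrix by direct inversion; the paper's contribution is precisely to avoid this inversion, so the sketch only illustrates the target quantity on hypothetical data.

```python
import numpy as np

# Toy mixed model y = Xb + Zu + e with contemporary-group fixed effects (b)
# and animal random effects (u); var(u) = sigma_u^2 I, var(e) = sigma_e^2 I.
rng = np.random.default_rng(1)
n_groups, n_animals, n_obs = 3, 8, 24
X = np.zeros((n_obs, n_groups)); X[np.arange(n_obs), rng.integers(0, n_groups, n_obs)] = 1
Z = np.zeros((n_obs, n_animals)); Z[np.arange(n_obs), rng.integers(0, n_animals, n_obs)] = 1
y = rng.normal(size=n_obs)
sigma_e2, sigma_u2 = 1.0, 0.5
lam = sigma_e2 / sigma_u2

# Henderson's mixed model equations: C [b; u] = [X'y; Z'y]
C = np.block([[X.T @ X,            X.T @ Z],
              [Z.T @ X, Z.T @ Z + lam * np.eye(n_animals)]])
Cinv = np.linalg.pinv(C)        # pseudo-inverse, since X may be rank-deficient

# Prediction error variance-covariance of u-hat: sigma_e^2 * C^uu
Cuu = Cinv[n_groups:, n_groups:]
PEV = sigma_e2 * Cuu
print("diag(PEV):", np.round(np.diag(PEV), 3))
```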

  7. The Optimal Confidence Intervals for Agricultural Products’ Price Forecasts Based on Hierarchical Historical Errors

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-12-01

    Full Text Available With the levels of confidence and system complexity, interval forecasts and entropy analysis can deliver more information than point forecasts. In this paper, we take receivers' demands as our starting point, use the trade-off model between accuracy and informativeness as the criterion to construct the optimal confidence interval, derive the theoretical formula of the optimal confidence interval and propose a practical and efficient algorithm based on entropy theory and complexity theory. In order to improve the estimation precision of the error distribution, the point prediction errors are stratified according to prices and the complexity of the system; the corresponding prediction error samples are obtained by the price stratification; and the error distributions are estimated by the kernel function method and the stability of the system. In a stable and orderly environment for price forecasting, we obtain point prediction error samples by the weighted local region and RBF (Radial Basis Function) neural network methods, forecast the intervals of the soybean meal and non-GMO (Genetically Modified Organism) soybean continuous futures closing prices and implement unconditional coverage, independence and conditional coverage tests for the simulation results. The empirical results are compared across various interval evaluation indicators, different levels of noise, several target confidence levels and different point prediction methods. The analysis shows that the optimal interval construction method is better than the equal probability method and the shortest interval method and has good anti-noise ability with the reduction of system entropy; the hierarchical estimation error method can obtain higher accuracy and better interval estimation than the non-hierarchical method in a stable system.
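
    A minimal sketch of the interval-construction step, assuming a sample of historical point-forecast errors: estimate the error density with a kernel method and scan for the shortest interval at the target coverage. The error sample, grid, and forecast value are illustrative, and the stratification and entropy machinery of the paper are omitted.

```python
import numpy as np
from scipy.stats import gaussian_kde

def shortest_interval(errors, level=0.9, grid=2000):
    """Shortest forecast-error interval at the given coverage level,
    estimated from a kernel density of historical point-forecast errors."""
    kde = gaussian_kde(errors)
    xs = np.linspace(errors.min() - 3 * errors.std(), errors.max() + 3 * errors.std(), grid)
    pdf = kde(xs)
    cdf = np.cumsum(pdf); cdf /= cdf[-1]
    best = (xs[0], xs[-1])
    for i in range(grid):                       # scan all left endpoints
        j = np.searchsorted(cdf, cdf[i] + level)
        if j < grid and xs[j] - xs[i] < best[1] - best[0]:
            best = (xs[i], xs[j])
    return best

rng = np.random.default_rng(2)
errors = rng.normal(0.0, 1.5, size=500)         # hypothetical price-forecast errors
point_forecast = 300.0                          # hypothetical closing-price forecast
lo, hi = shortest_interval(errors, 0.9)
print(f"90% interval: [{point_forecast + lo:.1f}, {point_forecast + hi:.1f}]")
```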

  8. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    Science.gov (United States)

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
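
    The ranking step can be illustrated with a stripped-down fuzzy TOPSIS on made-up data: triangular fuzzy ratings are weighted, distances to the fuzzy positive and negative ideal solutions are computed with the vertex method, and factors are ranked by closeness coefficient. All numbers, and the element-wise ideal solutions, are illustrative simplifications rather than the study's actual ratings.

```python
import numpy as np

# Ratings of 3 error factors on 4 criteria as triangular fuzzy numbers (l, m, u),
# already normalized to [0, 1]; the values are illustrative only.
ratings = np.array([
    [[.5, .7, .9], [.7, .9, 1.], [.3, .5, .7], [.5, .7, .9]],   # factor A
    [[.3, .5, .7], [.5, .7, .9], [.7, .9, 1.], [.3, .5, .7]],   # factor B
    [[.7, .9, 1.], [.3, .5, .7], [.5, .7, .9], [.7, .9, 1.]],   # factor C
])
weights = np.array([[.2, .3, .4], [.15, .25, .35], [.2, .3, .4], [.1, .15, .2]])

weighted = ratings * weights[None, :, :]          # element-wise fuzzy product (approx.)
fpis = weighted.max(axis=0)                       # fuzzy positive ideal solution
fnis = weighted.min(axis=0)                       # fuzzy negative ideal solution

def vertex_dist(a, b):
    """Vertex-method distance between triangular fuzzy numbers."""
    return np.sqrt(((a - b) ** 2).mean(axis=-1))

d_plus = vertex_dist(weighted, fpis[None]).sum(axis=1)
d_minus = vertex_dist(weighted, fnis[None]).sum(axis=1)
closeness = d_minus / (d_plus + d_minus)          # rank factors by closeness coefficient
for name, cc in zip("ABC", closeness):
    print(f"factor {name}: CC = {cc:.3f}")
```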

  9. ERM model analysis for adaptation to hydrological model errors

    Science.gov (United States)

    Baymani-Nezhad, M.; Han, D.

    2018-05-01

    Hydrological conditions change continuously, and these phenomena introduce errors into flood forecasting models and lead to unrealistic results. To overcome these difficulties, a concept called model updating has been proposed in hydrological studies. Real-time model updating is one of the challenging processes in the hydrological sciences and has not been entirely solved due to a lack of knowledge about the future state of the catchment under study. Basically, in terms of the flood forecasting process, errors propagated from the rainfall-runoff model are regarded as the main source of uncertainty in the forecasting model. Hence, to manage the existing errors, several methods have been proposed by researchers to update rainfall-runoff models, such as parameter updating, model state updating, and correction of input data. The current study focuses on the ability of rainfall-runoff model parameters to cope with three types of existing errors: timing, shape and volume, the common errors in hydrological modelling. The new lumped model, the ERM model, has been selected for this study to evaluate its parameters for use in model updating to cope with the stated errors. Investigation of ten events shows that the ERM model parameters can be updated to cope with the errors without the need to recalibrate the model.

  10. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage the human errors likely to occur in such operations. Methods: This study was performed at a demining site in war zones located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, the job tasks related to clearing landmines were specified. Next, these tasks were studied using HTA, and the related possible errors were assessed using ATHEANA. Results: The de-mining task was composed of four main operations, including primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action taken, errors in the neutralization operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in de-mining operations can be attributed to various factors such as poor weather and operating conditions like outdoor work, inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in de-mining operations, the aforementioned factors should be managed properly.

  11. Complementarity based a posteriori error estimates and their properties

    Czech Academy of Sciences Publication Activity Database

    Vejchodský, Tomáš

    2012-01-01

    Roč. 82, č. 10 (2012), s. 2033-2046 ISSN 0378-4754 R&D Projects: GA ČR(CZ) GA102/07/0496; GA AV ČR IAA100760702 Institutional research plan: CEZ:AV0Z10190503 Keywords : error majorant * a posteriori error estimates * method of hypercircle Subject RIV: BA - General Mathematics Impact factor: 0.836, year: 2012 http://www.sciencedirect.com/science/article/pii/S0378475411001509

  12. Vision based error detection for 3D printing processes

    Directory of Open Access Journals (Sweden)

    Baumann Felix

    2016-01-01

    Full Text Available 3D printers became more popular in the last decade, partly because of the expiration of key patents and the supply of affordable machines. Their origin lies in rapid prototyping. With Additive Manufacturing (AM), it is possible to create physical objects from 3D model data by layer-wise addition of material. Besides professional use for prototyping and low-volume manufacturing, they are becoming widespread amongst end users, starting with the so-called Maker Movement. The most prevalent type of consumer-grade 3D printer is Fused Deposition Modelling (FDM), also known as Fused Filament Fabrication (FFF). This work focuses on FDM machinery because of its widespread occurrence and large number of open problems like precision and failure. These 3D printers can fail to print objects at a statistical rate depending on the manufacturer and model of the printer. Failures can occur due to misalignment of the print-bed or the print-head, slippage of the motors, warping of the printed material, lack of adhesion, or other reasons. The goal of this research is to provide an environment in which these failures can be detected automatically. Direct supervision is inhibited by the recommended placement of FDM printers in separate rooms away from the user due to ventilation issues. The inability to oversee the printing process leads to late or omitted detection of failures. Rejects cause material waste and wasted time, thus lowering the utilization of printing resources. Our approach consists of a camera-based error detection mechanism that provides a web-based interface for remote supervision and early failure detection. Early failure detection can lead to reduced time spent on broken prints, less material wasted, and in some cases salvaged objects.
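
    A minimal sketch of a camera-based check of this kind, assuming a reference image of the expected layer appearance and hypothetical file names: the frame is differenced against the reference, and the print is flagged when too large an area deviates. A real system would also need image registration, lighting compensation, and the web front end, all of which are omitted here.

```python
import cv2
import numpy as np

def check_layer(reference_path, frame_path, pixel_threshold=40, area_ratio=0.02):
    """Compare a camera frame of the current layer against a reference image;
    flag a failure if too large a fraction of pixels differs strongly."""
    ref = cv2.cvtColor(cv2.imread(reference_path), cv2.COLOR_BGR2GRAY)
    frame = cv2.cvtColor(cv2.imread(frame_path), cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(ref, frame)
    _, mask = cv2.threshold(diff, pixel_threshold, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))  # drop specks
    ratio = cv2.countNonZero(mask) / mask.size
    return ratio > area_ratio, ratio

# Hypothetical file names for one layer of a print.
failed, ratio = check_layer("layer_042_expected.png", "layer_042_camera.png")
print(f"deviating area: {ratio:.1%} -> {'FAIL: notify user' if failed else 'ok'}")
```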

  13. Human Reliability Analysis Using the Cognitive Reliability and Error Analysis Method (CREAM)

    Directory of Open Access Journals (Sweden)

    Zahirah Alifia Maulida

    2015-01-01

    Full Text Available Workplace accidents in the grinding and welding areas have ranked highest over the last five years at PT. X. These accidents are caused by human error, which occurs under the influence of both the physical and non-physical working environment. This study uses scenarios to predict and reduce the likelihood of human error with the CREAM (Cognitive Reliability and Error Analysis Method) approach. CREAM is a human reliability analysis method for obtaining the Cognitive Failure Probability (CFP), which can be calculated in two ways: the basic method and the extended method. The basic method yields only a general failure probability, whereas the extended method yields a CFP for each task. The results show that the factors influencing the occurrence of errors in grinding and welding work are the adequacy of the organisation, the adequacy of the Man-Machine Interface (MMI) and operational support, the availability of procedures and plans, and the adequacy of training and experience. The cognitive aspect with the highest error value in grinding work is planning, with a CFP of 0.3, and in welding work it is execution, with a CFP of 0.18. To reduce cognitive error in grinding and welding work, the recommendations are to provide regular training, more detailed work instructions, and familiarization with the tools. Keywords: CREAM (Cognitive Reliability and Error Analysis Method), HRA (human reliability analysis), cognitive error
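
    A minimal sketch of the CFP-adjustment idea in the extended-method style: a nominal failure probability per cognitive activity is scaled by multipliers reflecting the common performance conditions (CPCs). The nominal values and multipliers below are placeholders, not values from the study or from the published CREAM tables.

```python
# Nominal cognitive failure probabilities (CFP0) per cognitive activity and
# illustrative CPC multipliers; all numbers here are placeholders.
nominal_cfp = {"observation": 0.003, "interpretation": 0.01,
               "planning": 0.01, "execution": 0.003}

cpc_multipliers = {              # >1 degrades performance, <1 improves it
    "adequacy_of_organisation": 1.2,
    "adequacy_of_MMI": 1.5,
    "availability_of_procedures": 2.0,
    "adequacy_of_training": 1.0,
}

def adjusted_cfp(activity):
    """Scale the nominal CFP by the product of the CPC multipliers."""
    cfp = nominal_cfp[activity]
    for mult in cpc_multipliers.values():
        cfp *= mult
    return min(cfp, 1.0)         # a probability cannot exceed 1

for act in ("planning", "execution"):
    print(f"{act}: CFP = {adjusted_cfp(act):.3f}")
```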

  14. Mars gravity field error analysis from simulated radio tracking of Mars Observer

    International Nuclear Information System (INIS)

    Smith, D.E.; Lerch, F.J.; Chan, J.C.; Chinn, D.S.; Iz, H.B.; Mallama, A.; Patel, G.B.

    1990-01-01

    The Mars Observer (MO) Mission, in a near-polar orbit at 360-410 km altitude for nearly a 2-year observing period, will greatly improve our understanding of the geophysics of Mars, including its gravity field. To assess the expected improvement of the gravity field, the authors have conducted an error analysis based upon the mission plan for the Mars Observer radio tracking data from the Deep Space Network. Their results indicate that it should be possible to obtain a high-resolution model (spherical harmonics complete to degree and order 50, corresponding to a 200-km horizontal resolution) for the gravitational field of the planet. This model, in combination with topography from MO altimetry, should provide for an improved determination of the broad-scale density structure and stress state of the Martian crust and upper mantle. The mathematical model for the error analysis is based on the representation of Doppler tracking data as a function of the Martian gravity field in spherical harmonics, solar radiation pressure, atmospheric drag, angular momentum desaturation residual acceleration (AMDRA) effects, tracking station biases, and the MO orbit parameters. Two approaches are employed. In the first case, the error covariance matrix of the gravity model is estimated including the effects from all the nongravitational parameters (noise-only case). In the second case, the gravity recovery error is computed as above but includes unmodelled systematic effects from atmospheric drag, AMDRA, and solar radiation pressure (biased case). The error spectrum of gravity shows an order of magnitude of improvement over current knowledge, based on a Doppler data precision of 0.3 mm/s from a single station for 1-min integration intervals during three 60-day periods.

  15. SU-E-T-392: Evaluation of Ion Chamber/film and Log File Based QA to Detect Delivery Errors

    International Nuclear Information System (INIS)

    Nelson, C; Mason, B; Kirsner, S; Ohrt, J

    2015-01-01

    Purpose: Ion chamber and film (ICAF) is a method used to verify patient dose prior to treatment. More recently, log file based QA has been shown to be an alternative to measurement based QA. In this study, we delivered VMAT plans with and without errors to determine if ICAF and/or log file based QA was able to detect the errors. Methods: Using two VMAT patients, the original treatment plan plus 7 additional plans with delivery errors introduced were generated and delivered. The erroneous plans had gantry, collimator, MLC, gantry and collimator, collimator and MLC, MLC and gantry, and gantry, collimator, and MLC errors. The gantry and collimator errors were off by 4° for one of the two arcs. The MLC error introduced was one in which the opening aperture didn't move throughout the delivery of the field. For each delivery, an ICAF measurement was made as well as a dose comparison based upon log files. Passing criteria for the plans were an ion chamber difference of less than 5% and at least 90% of film pixels passing the 3 mm/3% gamma analysis (GA). For the log file analysis, the criteria were that 90% of voxels pass the 3 mm/3% 3D GA and that the beam parameters match those in the plan. Results: The two original plans were delivered and passed both ICAF and log file based QA. Both ICAF and log file QA met the dosimetry criteria on 4 of the 12 erroneous cases analyzed (2 cases were not analyzed). For the log file analysis, all 12 erroneous plans triggered an alert for a mismatch between the delivery and the plan. The 8 plans that didn't meet the criteria all had MLC errors. Conclusion: Our study demonstrates that log file based pre-treatment QA was able to detect small errors that may not be detected using ICAF, and that both methods were able to detect larger delivery errors.
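
    The 3 mm/3% criterion used here can be illustrated with a brute-force global gamma analysis on a toy dose plane; the planes, grid spacing, and noise level are synthetic, and clinical implementations use interpolation and search-radius optimizations that are omitted here.

```python
import numpy as np

def gamma_pass_rate(ref, meas, spacing_mm, dd=0.03, dta_mm=3.0):
    """Global 2D gamma analysis: fraction of points with gamma <= 1 for a
    dd dose-difference / dta_mm distance-to-agreement criterion."""
    ny, nx = ref.shape
    yy, xx = np.meshgrid(np.arange(ny), np.arange(nx), indexing="ij")
    norm = dd * ref.max()                   # global dose normalization
    gamma = np.full(ref.shape, np.inf)
    for j in range(ny):                     # brute force; fine for small grids
        for i in range(nx):
            dist2 = (((yy - j) * spacing_mm) ** 2 + ((xx - i) * spacing_mm) ** 2) / dta_mm**2
            dose2 = (meas - ref[j, i]) ** 2 / norm**2
            gamma[j, i] = np.sqrt((dist2 + dose2).min())
    return (gamma <= 1.0).mean()

rng = np.random.default_rng(3)
ref = np.outer(np.hanning(40), np.hanning(40))        # toy planned dose plane
meas = ref * (1 + rng.normal(0, 0.01, ref.shape))     # toy delivered/measured plane
print(f"3%/3mm pass rate: {100 * gamma_pass_rate(ref, meas, spacing_mm=1.0):.1f}%")
```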

  16. Error analysis of mathematics students who are taught by using the book of mathematics learning strategy in solving pedagogical problems based on Polya’s four-step approach

    Science.gov (United States)

    Halomoan Siregar, Budi; Dewi, Izwita; Andriani, Ade

    2018-03-01

    The purpose of this study is to analyse the types and causes of student errors in solving pedagogical problems. This research is qualitative descriptive, conducted on 34 mathematics education students in the 2017 to 2018 academic year. The data in this study were obtained through interviews and tests. The data were then analyzed in three stages: 1) data reduction, 2) data description, and 3) conclusions. The data were reduced by organizing and classifying them in order to obtain meaningful information. After reduction, the data were presented in the simple form of narrative, graphics, and tables to illustrate the students' errors clearly. Conclusions were then drawn from this information. The results of this study indicate that the students made various errors: 1) they answered something other than what the problem asked, because they misunderstood the problem, 2) they failed to plan a learning process based on constructivism, due to a lack of understanding of how to design such learning, and 3) they chose inappropriate learning tools, because they did not understand which learning tools were relevant to use.

  17. US-LHC IR magnet error analysis and compensation

    International Nuclear Information System (INIS)

    Wei, J.; Ptitsin, V.; Pilat, F.; Tepikian, S.; Gelfand, N.; Wan, W.; Holt, J.

    1998-01-01

    This paper studies the impact of the insertion-region (IR) magnet field errors on LHC collision performance. Compensation schemes including magnet orientation optimization, body-end compensation, tuning shims, and local nonlinear correction are shown to be highly effective

  18. Analysis of error in Monte Carlo transport calculations

    International Nuclear Information System (INIS)

    Booth, T.E.

    1979-01-01

    The Monte Carlo method for neutron transport calculations suffers, in part, from the inherent statistical errors associated with the method. Without an estimate of these errors in advance of the calculation, it is difficult to decide what estimator and biasing scheme to use. Recently, integral equations have been derived that, when solved, predict errors in Monte Carlo calculations in nonmultiplying media. The present work allows error prediction in nonanalog Monte Carlo calculations of multiplying systems, even when supercritical. Nonanalog techniques such as biased kernels, particle splitting, and Russian Roulette are incorporated. The equations derived here allow prediction of how much a specific variance reduction technique reduces the number of histories required, to be weighed against the change in time required for the calculation of each history. 1 figure, 1 table
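
    As background to the kind of statistical error the record discusses, the sketch below estimates the relative error of a toy per-history tally and the associated figure of merit, the usual yardstick for weighing a variance reduction technique against its per-history cost. The tally model and run time are made up.

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy analog tally: per-history scores (e.g., leakage contribution per neutron).
scores = (rng.random(100_000) < 0.05) * rng.exponential(1.0, 100_000)

mean = scores.mean()
# Relative error of the mean estimate, as reported by MC transport codes.
rel_err = scores.std(ddof=1) / np.sqrt(scores.size) / mean
time_s = 2.0                                 # hypothetical run time
fom = 1.0 / (rel_err**2 * time_s)            # figure of merit, ~constant for a scheme
print(f"mean = {mean:.4f}, R = {rel_err:.3%}, FOM = {fom:.0f}")
# A variance reduction scheme is worthwhile when it raises the FOM, i.e. when
# the reduction in variance outweighs the extra time per history.
```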

  19. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Hä ppö lä , Juho

    2016-01-01

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE).

  20. Error Analysis of Inertial Navigation Systems Using Test Algorithms

    OpenAIRE

    Vaispacher, Tomáš; Bréda, Róbert; Adamčík, František

    2015-01-01

    Content of this contribution is an issue of inertial sensors errors, specification of inertial measurement units and generating of test signals for Inertial Navigation System (INS). Given the different levels of navigation tasks, part of this contribution is comparison of the actual types of Inertial Measurement Units. Considering this comparison, there is proposed the way of solving inertial sensors errors and their modelling for low – cost inertial navigation applications. The last part is ...

  1. Analysis of gross error rates in operation of commercial nuclear power stations

    International Nuclear Information System (INIS)

    Joos, D.W.; Sabri, Z.A.; Husseiny, A.A.

    1979-01-01

    Experience in the operation of US commercial nuclear power plants is reviewed over a 25-month period. The reports accumulated in that period on events of human error and component failure are examined to evaluate gross operator error rates. The impact of such errors on plant operation and safety is examined through the use of proper taxonomies of errors, tasks and failures. Four categories of human errors are considered; namely, operator, maintenance, installation and administrative. The computed error rates are used to examine appropriate operator models for the evaluation of operator reliability. Human error rates are found to be significant to a varying degree in both BWRs and PWRs. This emphasizes the importance of considering human factors in the safety and reliability analysis of nuclear systems. The results also indicate that human errors, and especially operator errors, do indeed follow the exponential reliability model. (Auth.)
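
    The exponential reliability model mentioned in the conclusion is simple to state: with a constant error rate λ estimated from counts, the probability of error-free operation over time t is R(t) = exp(-λt). The sketch below uses hypothetical counts, not the study's data.

```python
import numpy as np

# Hypothetical counts: operator errors observed over cumulative task hours.
errors, hours = 37, 5_400.0
lam = errors / hours                          # MLE of a constant error rate (per hour)

t = np.array([1.0, 8.0, 24.0])                # mission times of interest (hours)
reliability = np.exp(-lam * t)                # exponential model: no error by time t
for ti, ri in zip(t, reliability):
    print(f"P(no operator error in {ti:4.0f} h) = {ri:.3f}")
```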

  2. Error Analysis of Variations on Larsen's Benchmark Problem

    International Nuclear Information System (INIS)

    Azmy, YY

    2001-01-01

    Error norms for three variants of Larsen's benchmark problem are evaluated using three numerical methods for solving the discrete ordinates approximation of the neutron transport equation in multidimensional Cartesian geometry. The three variants of Larsen's test problem are concerned with the incoming flux boundary conditions: unit incoming flux on the left and bottom edges (Larsen's configuration); unit incoming flux only on the left edge; and unit incoming flux only on the bottom edge. The three methods considered are the Diamond Difference (DD) method and the constant-approximation versions of the Arbitrarily High Order Transport method of the Nodal type (AHOT-N) and of the Characteristic type (AHOT-C). The cell-wise error is computed as the difference between the cell-averaged flux computed by each method and the exact value; then the L1, L2, and L∞ error norms are calculated. The results of this study demonstrate that while the integral error norms, i.e. L1 and L2, converge to zero with mesh refinement, the pointwise L∞ norm does not, due to the solution discontinuity across the singular characteristic. Little difference is observed between the error norm behavior of the three methods considered, in spite of the fact that AHOT-C is locally exact, suggesting that numerical diffusion across the singular characteristic is the major source of error on the global scale. However, AHOT-C possesses a given accuracy in a larger fraction of computational cells than DD.
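
    The norms themselves are straightforward to compute from cell-averaged values; the sketch below evaluates discrete L1, L2, and L∞ norms in a toy refinement study with a stand-in error field, not the benchmark solutions of the record.

```python
import numpy as np

def error_norms(computed, exact, cell_area):
    """Discrete L1, L2 and Linf norms of the cell-averaged flux error."""
    e = np.abs(computed - exact)
    L1 = (e * cell_area).sum()
    L2 = np.sqrt(((e ** 2) * cell_area).sum())
    Linf = e.max()                   # pointwise norm: no integral smoothing
    return L1, L2, Linf

# Toy refinement study on [0,1]^2 with a hypothetical "exact" solution.
for n in (16, 32, 64):
    h = 1.0 / n
    x = (np.arange(n) + 0.5) * h     # cell-center coordinates
    X, Y = np.meshgrid(x, x)
    exact = np.exp(-X - Y)
    computed = exact + h * np.sin(8 * X * Y)   # stand-in numerical error ~ O(h)
    print(n, ["%.2e" % v for v in error_norms(computed, exact, h * h)])
```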

  3. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

    Science.gov (United States)

    Herzberg, Tina

    2010-01-01

    In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

  4. Error analysis for 1-1/2-loop semiscale system isothermal test data

    International Nuclear Information System (INIS)

    Feldman, E.M.; Naff, S.A.

    1975-05-01

    An error analysis was performed on the measurements made during the isothermal portion of the Semiscale Blowdown and Emergency Core Cooling (ECC) Project. A brief description of the measurement techniques employed, an identification of potential sources of errors, and a quantification of the errors associated with the data are presented. (U.S.)

  5. An advanced SEU tolerant latch based on error detection

    Science.gov (United States)

    Xu, Hui; Zhu, Jianwei; Lu, Xiaoping; Li, Jingzhao

    2018-05-01

    This paper proposes a latch that can mitigate SEUs via an error detection circuit. The error detection circuit is hardened by a C-element and a stacked PMOS. In the hold state, a particle strike on the latch or on the error detection circuit may cause a faulty logic state of the circuit. The error detection circuit can detect the upset node in the latch, and the faulty output is then corrected. An upset node in the error detection circuit itself can be corrected by the C-element. The power dissipation and propagation delay of the proposed latch are analyzed by HSPICE simulations. The proposed latch consumes about 77.5% less energy and has 33.1% less propagation delay than the triple modular redundancy (TMR) latch. Simulation results demonstrate that the proposed latch can mitigate SEUs effectively. Project supported by the National Natural Science Foundation of China (Nos. 61404001, 61306046), the Anhui Province University Natural Science Research Major Project (No. KJ2014ZD12), the Huainan Science and Technology Program (No. 2013A4011), and the National Natural Science Foundation of China (No. 61371025).

  6. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    International Nuclear Information System (INIS)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Won Dea

    2014-01-01

    The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces that are based on computer-based technologies. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operation actions in NPP Advanced MCRs are performed by soft control. Using soft controls such as mouse control and touch screens, operators can select a specific screen, then choose the controller, and finally manipulate the given devices. Due to the different interfaces between soft controls and conventional hardwired controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered in the HRA for Advanced MCRs. In other words, new human error modes should be considered for interface management tasks such as navigation tasks and icon (device) selection tasks in monitors, and a new HRA framework taking these newly generated human error modes into account should be considered. In this paper, a conceptual framework for an HRA method for the evaluation of soft control execution human error in Advanced MCRs is suggested by analyzing soft control tasks.

  7. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    Energy Technology Data Exchange (ETDEWEB)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of); Jung, Won Dea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-08-15

    The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces that are based on computer-based technologies. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are particularly important because operation actions in NPP Advanced MCRs are performed by soft control. Using soft controls such as mouse control and touch screens, operators can select a specific screen, then choose the controller, and finally manipulate the given devices. Due to the different interfaces between soft controls and conventional hardwired controls, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered in the HRA for Advanced MCRs. In other words, new human error modes should be considered for interface management tasks such as navigation tasks and icon (device) selection tasks in monitors, and a new HRA framework taking these newly generated human error modes into account should be considered. In this paper, a conceptual framework for an HRA method for the evaluation of soft control execution human error in Advanced MCRs is suggested by analyzing soft control tasks.

  8. Forward error correction based on algebraic-geometric theory

    CERN Document Server

    A Alzubi, Jafar; M Chen, Thomas

    2014-01-01

    This book covers the design, construction, and implementation of algebraic-geometric codes from Hermitian curves. Matlab simulations of algebraic-geometric codes and Reed-Solomon codes compare their bit error rates using different modulation schemes over an additive white Gaussian noise channel model. Simulation results for the bit error rate performance of algebraic-geometric codes using quadrature amplitude modulation (16QAM and 64QAM) are presented for the first time and shown to outperform Reed-Solomon codes at various code rates and channel models. The book proposes algebraic-geometric block turbo codes. It also presents simulation results that show an improved bit error rate performance at the cost of high system complexity due to using algebraic-geometric codes and Chase-Pyndiah's algorithm simultaneously. The book proposes algebraic-geometric irregular block turbo codes (AG-IBTC) to reduce system complexity. Simulation results for AG-IBTCs are presented for the first time.

  9. Measurements and their uncertainties a practical guide to modern error analysis

    CERN Document Server

    Hughes, Ifan G

    2010-01-01

    This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation is presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data...
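
    One of the rules-of-thumb such texts present is quadrature propagation for power-law expressions: for q = Π x_i^(a_i) with independent uncertainties, (δq/q)² = Σ (a_i δx_i/x_i)². A short sketch with made-up measurements:

```python
import numpy as np

def propagate_power_law(value_err_pairs, exponents):
    """Relative-error propagation for q = prod(x_i ** a_i) with independent
    uncertainties: (dq/q)^2 = sum (a_i * dx_i / x_i)^2."""
    rel2 = sum((a * dx / x) ** 2 for (x, dx), a in zip(value_err_pairs, exponents))
    return np.sqrt(rel2)

# Example: density rho = m / (pi r^2 h) from measured mass, radius and height.
m, dm = 52.1, 0.3        # g
r, dr = 1.25, 0.01       # cm
h, dh = 4.80, 0.02       # cm
rho = m / (np.pi * r**2 * h)
drho = rho * propagate_power_law([(m, dm), (r, dr), (h, dh)], [1, -2, -1])
print(f"rho = {rho:.3f} +/- {drho:.3f} g/cm^3")
```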

  10. Analysis of Human Errors in Japanese Nuclear Power Plants using JHPES/JAESS

    International Nuclear Information System (INIS)

    Kojima, Mitsuhiro; Mimura, Masahiro; Yamaguchi, Osamu

    1998-01-01

    CRIEPI (Central Research Institute for Electric Power Industries) / HFC (Human Factors research Center) developed J-HPES (Japanese version of the Human Performance Enhancement System) based on the HPES, which was originally developed by INPO to analyze events resulting from human errors. J-HPES was systematized into a computer program named JAESS (J-HPES Analysis and Evaluation Support System), and both systems were distributed to all Japanese electric power companies so they could analyze events themselves. CRIEPI / HFC also analyzed the incidents in Japanese nuclear power plants (NPPs) that were officially reported and identified as human-error related, using J-HPES / JAESS. These incidents have numbered up to 188 cases over the last 30 years. An outline of this analysis is given, and some preliminary findings are shown. (authors)

  11. Error analysis in predictive modelling demonstrated on mould data.

    Science.gov (United States)

    Baranyi, József; Csernus, Olívia; Beczner, Judit

    2014-01-17

    The purpose of this paper was to develop a predictive model for the effect of temperature and water activity on the growth rate of Aspergillus niger and to determine the sources of the error when the model is used for prediction. Parallel mould growth curves, derived from the same spore batch, were generated and fitted to determine their growth rate. The variances of replicate ln(growth-rate) estimates were used to quantify the experimental variability, inherent to the method of determining the growth rate. The environmental variability was quantified by the variance of the respective means of replicates. The idea is analogous to the "within group" and "between groups" variability concepts of ANOVA procedures. A (secondary) model, with temperature and water activity as explanatory variables, was fitted to the natural logarithm of the growth rates determined by the primary model. The model error and the experimental and environmental errors were ranked according to their contribution to the total error of prediction. Our method can readily be applied to analysing the error structure of predictive models of bacterial growth models, too. © 2013.
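
    The within/between decomposition described here is easy to reproduce on synthetic replicate data; the sketch below separates replicate (experimental) variability from condition-to-condition (environmental) variability for ln(growth-rate) values drawn from a toy model.

```python
import numpy as np

# Replicate ln(growth-rate) estimates at each of several environmental
# conditions (temperature x water activity); the numbers are illustrative.
rng = np.random.default_rng(5)
n_conditions, n_reps = 6, 4
true_means = rng.normal(-2.0, 0.6, n_conditions)             # environmental effect
data = true_means[:, None] + rng.normal(0, 0.15, (n_conditions, n_reps))

within = data.var(axis=1, ddof=1).mean()       # experimental (replicate) variability
between = data.mean(axis=1).var(ddof=1)        # environmental variability of the means
print(f"within-group variance : {within:.4f}")   # ~0.15^2
print(f"between-group variance: {between:.4f}")  # ~0.6^2 plus within/n_reps
```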

  12. Development of an analysis rule of diagnosis error for standard method of human reliability analysis

    International Nuclear Information System (INIS)

    Jeong, W. D.; Kang, D. I.; Jeong, K. S.

    2003-01-01

    This paper presents the status of development of the Korean standard method for Human Reliability Analysis (HRA), and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated using the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far for analyzing diagnosis error probability are suggested as a part of the standard method. A comprehensive application study was also performed to evaluate the suitability of the proposed rules.

  13. ATHEANA: A Technique for Human Error Analysis: An Overview of Its Methodological Basis

    International Nuclear Information System (INIS)

    Wreathall, John; Ramey-Smith, Ann

    1998-01-01

    The U.S. NRC has developed a new human reliability analysis (HRA) method, called A Technique for Human Event Analysis (ATHEANA), to provide a way of modeling the so-called 'errors of commission' - that is, situations in which operators terminate or disable engineered safety features (ESFs) or similar equipment during accident conditions, thereby putting the plant at an increased risk of core damage. In its reviews of operational events, NRC has found that these errors of commission occur with a relatively high frequency (as high as 2 or 3 per year), but are noticeably missing from the scope of most current probabilistic risk assessments (PRAs). This new method was developed through a formalized approach that describes what can occur when operators behave rationally but have inadequate knowledge or poor judgement. In particular, the method is based on models of decision-making and response planning that have been used extensively in the aviation field, and on the analysis of major accidents in both the nuclear and non-nuclear fields. Other papers at this conference present summaries of these event analyses in both the nuclear and non-nuclear fields. This paper presents an overview of ATHEANA and summarizes how the method structures the analysis of operationally significant events, and helps HRA analysts identify and model potentially risk-significant errors of commission in plant PRAs. (authors)

  14. Analysis of Relationships between the Level of Errors in Leg and Monofin Movement and Stroke Parameters in Monofin Swimming

    Science.gov (United States)

    Rejman, Marek

    2013-01-01

    The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. Random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method to estimate the errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements, and the avoidance of errors arising at the distal part of the fin, ensure the progression of swimming speed. An individual distribution of stroke parameters, in which stroke frequency is optimally increased to the maximal level that still enables the stabilization of stroke length, leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement, and the avoidance of errors arising at the distal part of the fin, provide for the progression of swimming speed. The key elements for improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742

  15. Analysis and improvement of gas turbine blade temperature measurement error

    International Nuclear Information System (INIS)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-01-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed. (paper)

  16. Analysis and improvement of gas turbine blade temperature measurement error

    Science.gov (United States)

    Gao, Shan; Wang, Lixin; Feng, Chi; Daniel, Ketui

    2015-10-01

    Gas turbine blade components are easily damaged; they also operate in harsh high-temperature, high-pressure environments over extended durations. Therefore, ensuring that the blade temperature remains within the design limits is very important. In this study, measurement errors in turbine blade temperatures were analyzed, taking into account detector lens contamination, the reflection of environmental energy from the target surface, the effects of the combustion gas, and the emissivity of the blade surface. In this paper, each of the above sources of measurement error is discussed, and an iterative computing method for calculating blade temperature is proposed.

  17. Undesirable effects of covariance matrix techniques for error analysis

    International Nuclear Information System (INIS)

    Seibert, D.

    1994-01-01

    Regression with χ² constructed from covariance matrices should not be used for some combinations of covariance matrices and fitting functions. Using the technique for unsuitable combinations can amplify systematic errors. This amplification is uncontrolled, and can produce arbitrarily inaccurate results that might not be ruled out by a χ² test. In addition, this technique can give incorrect (artificially small) errors for fit parameters. I give a test for this instability and a more robust (but computationally more intensive) method for fitting correlated data
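
    The instability alluded to can be demonstrated with a generalized least-squares fit in a few lines. In the classic two-point example below (a Peelle-type case, with numbers chosen for illustration), a fully correlated normalization error drives the χ²-minimizing average below both measurements.

```python
import numpy as np

def gls_fit(A, y, C):
    """Generalized least squares: minimize chi2(p) = (y - A p)' C^{-1} (y - A p)."""
    Ci = np.linalg.inv(C)
    cov_p = np.linalg.inv(A.T @ Ci @ A)        # parameter covariance
    p = cov_p @ A.T @ Ci @ y
    chi2 = (y - A @ p) @ Ci @ (y - A @ p)
    return p, cov_p, chi2

# Two measurements of the same constant, 10% statistical errors, plus a
# fully correlated 20% normalization (systematic) error.
y = np.array([1.5, 1.0])
stat = np.diag([0.15**2, 0.10**2])
common = 0.2**2 * np.outer(y, y)               # fully correlated relative error
A = np.ones((2, 1))
p, cov_p, chi2 = gls_fit(A, y, stat + common)
print(f"GLS mean = {p[0]:.3f} (below both inputs), chi2 = {chi2:.2f}")
```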

  18. Review of U.S. Army Unmanned Aerial Systems Accident Reports: Analysis of Human Error Contributions

    Science.gov (United States)

    2018-03-20

    within report documents. The information presented was obtained through a request to use the U.S. Army Combat Readiness Center's Risk Management ... controlled flight into terrain (13 accidents), fueling errors by improper techniques (7 accidents), and a variety of maintenance errors (10 accidents). The ... and 9 of the 10 maintenance accidents. (Table 4 of the report lists frequencies based on the source of human error.)

  19. Reliability-Based Marginal Cost Pricing Problem Case with Both Demand Uncertainty and Travelers’ Perception Errors

    Directory of Open Access Journals (Sweden)

    Shaopeng Zhong

    2013-01-01

    Full Text Available Focusing on the first-best marginal cost pricing (MCP) in a stochastic network with both travel demand uncertainty and stochastic perception errors within the travelers' route choice decision processes, this paper develops a perceived risk-based stochastic network marginal cost pricing (PRSN-MCP) model. Numerical examples based on an integrated method combining the moment analysis approach, the fitting distribution method, and the reliability measures are also provided to demonstrate the importance and properties of the proposed model. The main finding is that ignoring the effect of travel time reliability and travelers' perception errors may significantly reduce the performance of the first-best MCP tolls, especially under high travelers' confidence and network congestion levels. The analysis result could also enhance our understanding of (1) the effect of stochastic perception error (SPE) on the perceived travel time distribution and the components of the road toll; (2) the effect of the road toll on the actual travel time distribution and its reliability measures; (3) the effect of the road toll on the total network travel time distribution and its statistics; and (4) the effect of the travel demand level and the value of reliability (VoR) level on the components of the road toll.

  20. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    textabstractBackground: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence

  1. Human error in strabismus surgery : Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  2. Linguistic Error Analysis on Students' Thesis Proposals

    Science.gov (United States)

    Pescante-Malimas, Mary Ann; Samson, Sonrisa C.

    2017-01-01

    This study identified and analyzed the common linguistic errors encountered by Linguistics, Literature, and Advertising Arts majors in their Thesis Proposal classes in the First Semester 2016-2017. The data were the drafts of the thesis proposals of the students from the three different programs. A total of 32 manuscripts were analyzed which was…

  3. Reading and Spelling Error Analysis of Native Arabic Dyslexic Readers

    Science.gov (United States)

    Abu-rabia, Salim; Taha, Haitham

    2004-01-01

    This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated…

  4. Analysis of Students' Error in Learning of Quadratic Equations

    Science.gov (United States)

    Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

    2010-01-01

    The purpose of the study was to determine the students' error in learning quadratic equation. The samples were 30 form three students from a secondary school in Jambi, Indonesia. Diagnostic test was used as the instrument of this study that included three components: factorization, completing the square and quadratic formula. Diagnostic interview…

  5. Catching errors with patient-specific pretreatment machine log file analysis.

    Science.gov (United States)

    Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa

    2013-01-01

    A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements, with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Since the clinical introduction in June 2009, 912 machine log file analysis QA checks were performed by the end of 2010. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; their origins are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The possibility of detecting these errors is low using point and planar dosimetric measurements. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.

  6. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    Science.gov (United States)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  7. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    Directory of Open Access Journals (Sweden)

    C. Knote

    2018-02-01

    Full Text Available The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.
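
    BEATBOX's ensemble sensitivity option is not reproduced here, but the underlying ensemble Kalman update is compact enough to sketch on a toy chemical state; the species, ensemble size, and observation are invented, and this is not the BEATBOX API.

```python
import numpy as np

rng = np.random.default_rng(6)
n_ens, n_state = 40, 3                       # ensemble size; state e.g. [O3, NO, NO2]
ensemble = rng.normal([60., 2., 8.], [8., .5, 2.], (n_ens, n_state))  # prior (ppb)

H = np.array([[1., 0., 0.]])                 # observe the first species only
obs, obs_err = 52.0, 2.0                     # observation and its error (ppb)

# Stochastic EnKF update: K = P H' (H P H' + R)^(-1)
P = np.cov(ensemble.T)                       # sample covariance from the ensemble
S = H @ P @ H.T + obs_err**2
K = P @ H.T / S                              # (n_state, 1) Kalman gain
perturbed_obs = obs + rng.normal(0, obs_err, n_ens)
innovations = perturbed_obs - ensemble @ H[0]
analysis = ensemble + innovations[:, None] * K.T
print("prior mean   :", ensemble.mean(0).round(2))
print("analysis mean:", analysis.mean(0).round(2))
```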

  8. An adjoint-based scheme for eigenvalue error improvement

    International Nuclear Information System (INIS)

    Merton, S.R.; Smedley-Stevenson, R.P.; Pain, C.C.; El-Sheikh, A.H.; Buchan, A.G.

    2011-01-01

    A scheme for improving the accuracy and reducing the error in eigenvalue calculations is presented. Using a first-order Taylor series expansion of both the eigenvalue solution and the residual of the governing equation, an approximation to the error in the eigenvalue is derived. This is done using a convolution of the equation residual and the adjoint solution, which is calculated in-line with the primal solution. A defect correction on the solution is then performed, in which the approximation to the error is used to apply a correction to the eigenvalue. The method is shown to dramatically improve convergence of the eigenvalue. The equation for the eigenvalue is shown to simplify when certain normalizations are applied to the eigenvector. Two such normalizations are considered; the first of these is a fission-source type of normalisation and the second is an eigenvector normalisation. Results are demonstrated on a number of demanding elliptic problems using continuous Galerkin weighted finite elements. Moreover, the correction scheme may also be applied to hyperbolic problems and arbitrary discretizations. This is not limited to spatial corrections and may be used throughout the phase space of the discrete equation. The applied correction not only improves the fidelity of the calculation, it allows an assessment of the reliability of numerical schemes to be made and could be used to guide mesh adaption algorithms or to automate mesh generation schemes. (author)
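
    The residual-adjoint convolution idea can be demonstrated on a dense toy eigenproblem: with an approximate eigenpair and the corresponding adjoint (left) eigenvector, the correction δλ ≈ ψᵀr / ψᵀφ recovers the eigenvalue error to first order. The matrix construction below is synthetic and stands in for the record's finite element setting.

```python
import numpy as np

# Toy nonsymmetric eigenproblem A phi = lambda phi with a known real spectrum:
# A = T D T^{-1}, so the exact dominant eigenvalue is 1.0 by construction.
rng = np.random.default_rng(7)
n = 40
T = rng.normal(size=(n, n)) + 3.0 * np.eye(n)
D = np.diag(np.linspace(0.5, 1.0, n))
A = T @ D @ np.linalg.inv(T)
lam_exact = 1.0

# Deliberately crude primal solve: a few power iterations + Rayleigh quotient.
phi = rng.random(n)
for _ in range(5):
    phi = A @ phi; phi /= np.linalg.norm(phi)
lam = phi @ A @ phi / (phi @ phi)

# Adjoint (left) eigenvector of the dominant eigenvalue: A' psi = lambda psi.
wa, Va = np.linalg.eig(A.T)
psi = Va[:, np.argmax(wa.real)].real

# Correction from the residual r = A phi - lam phi:
#   delta_lam ~= (psi' r) / (psi' phi)
r = A @ phi - lam * phi
delta = (psi @ r) / (psi @ phi)
print(f"|error| before: {abs(lam - lam_exact):.2e}")
print(f"|error| after : {abs(lam + delta - lam_exact):.2e}")
```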

  9. Bounding quantum gate error rate based on reported average fidelity

    International Nuclear Information System (INIS)

    Sanders, Yuval R; Wallman, Joel J; Sanders, Barry C

    2016-01-01

    Remarkable experimental advances in quantum computing are exemplified by recent announcements of impressive average gate fidelities exceeding 99.9% for single-qubit gates and 99% for two-qubit gates. Although these high numbers engender optimism that fault-tolerant quantum computing is within reach, the connection of average gate fidelity with fault-tolerance requirements is not direct. Here we use reported average gate fidelity to determine an upper bound on the quantum-gate error rate, which is the appropriate metric for assessing progress towards fault-tolerant quantum computation, and we demonstrate that this bound is asymptotically tight for general noise. Although this bound is unlikely to be saturated by experimental noise, we demonstrate using explicit examples that the bound indicates a realistic deviation between the true error rate and the reported average fidelity. We introduce the Pauli distance as a measure of this deviation, and we show that knowledge of the Pauli distance enables tighter estimates of the error rate of quantum gates. (fast track communication)
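
    As a minimal sketch of why the connection is not direct, the snippet below converts the quoted average fidelities into average (not worst-case) error rates, using only the standard relation F_avg = (d·F_pro + 1)/(d + 1) between average gate fidelity and process fidelity; the paper's point is that the worst-case rate relevant to fault tolerance can be far larger than these numbers.

      def average_infidelity_to_error_rate(f_avg: float, d: int) -> float:
          """Average process infidelity r = 1 - F_pro implied by a reported
          average gate fidelity f_avg on a d-dimensional system, via the
          standard relation F_avg = (d * F_pro + 1) / (d + 1)."""
          return (d + 1) * (1.0 - f_avg) / d

      # Fidelities quoted in the abstract: 99.9% single-qubit, 99% two-qubit
      print(average_infidelity_to_error_rate(0.999, d=2))  # 0.0015
      print(average_infidelity_to_error_rate(0.990, d=4))  # 0.0125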

  10. Fractional Order Differentiation by Integration and Error Analysis in Noisy Environment

    KAUST Repository

    Liu, Dayan

    2015-03-31

    The integer order differentiation by integration method based on the Jacobi orthogonal polynomials for noisy signals was originally introduced by Mboup, Join and Fliess. We propose to extend this method from the integer order to the fractional order to estimate the fractional order derivatives of noisy signals. Firstly, two fractional order differentiators are deduced from the Jacobi orthogonal polynomial filter, using the Riemann-Liouville and the Caputo fractional order derivative definitions respectively. Exact and simple formulae for these differentiators are given by integral expressions. Hence, they can be used for both continuous-time and discrete-time models in on-line or off-line applications. Secondly, some error bounds are provided for the corresponding estimation errors. These bounds make it possible to study the influence of the design parameters. The noise error contribution due to a large class of stochastic processes is studied in the discrete case. The latter shows that the differentiator based on the Caputo fractional order derivative can cope with a class of noises whose mean value and variance functions are polynomial time-varying. Thanks to the design parameter analysis, the proposed fractional order differentiators are significantly improved by admitting a time-delay. Thirdly, in order to reduce the calculation time for on-line applications, a recursive algorithm is proposed. Finally, the proposed differentiator based on the Riemann-Liouville fractional order derivative is used to estimate the state of a fractional order system, and numerical simulations illustrate the accuracy and the robustness with respect to corrupting noises.
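
    For orientation only, a classical Grünwald-Letnikov finite-difference estimator (a different and simpler technique than the paper's Jacobi-polynomial differentiators, and not robust to noise) shows what estimating a fractional derivative from samples looks like:

      import math

      def gl_fractional_derivative(f, t, alpha, h=1e-3):
          """Grunwald-Letnikov estimate of the order-alpha derivative of f
          at t: D^a f(t) ~ h^(-a) * sum_k w_k f(t - k*h), with w_0 = 1 and
          w_k = w_{k-1} * (1 - (alpha + 1) / k)."""
          n = int(t / h)
          w, acc = 1.0, f(t)
          for k in range(1, n + 1):
              w *= 1.0 - (alpha + 1.0) / k
              acc += w * f(t - k * h)
          return acc / h ** alpha

      # Half-derivative of f(t) = t at t = 1; exact value is 2*sqrt(t/pi)
      print(gl_fractional_derivative(lambda x: x, 1.0, 0.5))
      print(2.0 * math.sqrt(1.0 / math.pi))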

  11. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In fringe-illumination deflectometry based on the reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. With the aberration weights for the various geometrical errors, computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of a subnanometer is achieved.

  12. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA GMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  13. DOI resolution measurement and error analysis with LYSO and APDs

    International Nuclear Information System (INIS)

    Lee, Chae-hun; Cho, Gyuseong

    2008-01-01

    Spatial resolution degradation in PET occurs at the edge of the Field Of View (FOV) due to parallax error. To improve spatial resolution at the edge of the FOV, Depth-Of-Interaction (DOI) PET has been investigated and several methods for DOI positioning have been proposed. In this paper, a DOI-PET detector module using two 8x4 array avalanche photodiodes (APDs) (Hamamatsu, S8550) and a 2 cm long LYSO scintillation crystal was proposed and its DOI characteristics were investigated experimentally. In order to measure DOI positions, the signals from the two APDs were compared. Energy resolution was obtained from the sum of the two APDs' signals, and the DOI positioning error was calculated. Finally, an optimum DOI step size for the 2 cm long LYSO crystal was suggested to help design a DOI-PET.

  14. A human error probability estimate methodology based on fuzzy inference and expert judgment on nuclear plants

    International Nuclear Information System (INIS)

    Nascimento, C.S. do; Mesquita, R.N. de

    2009-01-01

    Recent studies point to human error as an important factor in many industrial and nuclear accidents: Three Mile Island (1979), Bhopal (1984), Chernobyl and Challenger (1986) are classical examples. The human contribution to these accidents may be better understood and analyzed by using Human Reliability Analysis (HRA), which has been taken as an essential part of Probabilistic Safety Analysis (PSA) of nuclear plants. Both HRA and PSA depend on Human Error Probability (HEP) for a quantitative analysis. These probabilities are strongly affected by Performance Shaping Factors (PSFs), which have a direct effect on human behavior and thus shape HEPs according to the specific environmental conditions and individual characteristics behind the actions. This PSF dependence raises a serious data availability problem, as it renders the few existing databases either too generic or too specific. Besides this, most nuclear plants do not keep historical records of human error occurrences. Therefore, in order to overcome this data shortage, a methodology based on Fuzzy Inference and expert judgment was employed in this paper to determine human error occurrence probabilities and to evaluate the PSFs affecting actions performed by operators in a nuclear power plant (the IEA-R1 nuclear reactor). The obtained HEP values were compared with reference data tabled in the current literature in order to show the coherence and validity of the method. This comparison leads to the conclusion that the results of this work can be employed in both HRA and PSA, enabling efficient identification of potential improvements in plant safety conditions, operational procedures, and local working conditions. (author)
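
    A toy sketch of the kind of inference involved (a single "PSF quality" input, triangular memberships, three Mamdani-style rules and centroid defuzzification are all illustrative assumptions, not the paper's model):

      import numpy as np

      def tri(x, a, b, c):
          """Triangular membership function peaking at b on support [a, c]."""
          return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

      def fuzzy_hep(psf_quality):
          """Map an illustrative PSF quality (0 = poor, 10 = good) to an HEP."""
          y = np.linspace(-5.0, 0.0, 501)                    # log10(HEP) axis
          act_poor = np.clip((5.0 - psf_quality) / 5.0, 0.0, 1.0)
          act_avg = tri(psf_quality, 0.0, 5.0, 10.0)
          act_good = np.clip((psf_quality - 5.0) / 5.0, 0.0, 1.0)
          agg = np.maximum.reduce([                          # clipped consequents
              np.minimum(act_poor, tri(y, -2.0, -1.0, 0.0)),   # poor -> high HEP
              np.minimum(act_avg, tri(y, -3.5, -2.5, -1.5)),
              np.minimum(act_good, tri(y, -5.0, -4.0, -3.0)),  # good -> low HEP
          ])
          return 10.0 ** float((agg * y).sum() / agg.sum())  # centroid defuzzify

      print(fuzzy_hep(2.0))   # degraded PSFs -> higher HEP
      print(fuzzy_hep(8.0))   # favourable PSFs -> lower HEP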

  15. Time Error Analysis of SOE System Using Network Time Protocol

    International Nuclear Information System (INIS)

    Keum, Jong Yong; Park, Geun Ok; Park, Heui Youn

    2005-01-01

    To determine the accuracy of time in a fully digitalized SOE (Sequence of Events) system, we used a formal specification of the Network Time Protocol (NTP) Version 3, which is used to synchronize timekeeping among a set of distributed computers. By constructing a simple experimental environment and experimenting with Internet time synchronization, we analyzed the time errors of the local clocks of an SOE system synchronized with a time server via computer networks.
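
    For reference, the clock offset and round-trip delay that NTP derives from its four timestamps can be computed as follows (timestamp values are illustrative):

      def ntp_offset_delay(t0, t1, t2, t3):
          """Standard NTP estimates from client send (t0), server receive (t1),
          server send (t2) and client receive (t3) timestamps."""
          offset = ((t1 - t0) + (t2 - t3)) / 2.0   # server clock minus client clock
          delay = (t3 - t0) - (t2 - t1)            # round-trip network delay
          return offset, delay

      # Server clock 0.200 s ahead, 0.050 s symmetric round-trip delay
      print(ntp_offset_delay(10.000, 10.225, 10.230, 10.055))  # (0.2, 0.05)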

  16. Error analysis of pupils in calculating with fractions

    OpenAIRE

    Uranič, Petra

    2016-01-01

    In this thesis I examine the correlation between the frequency of errors that seventh grade pupils make in their calculations with fractions and their level of understanding of fractions. Fractions are a relevant and demanding theme in the mathematics curriculum. Although we use fractions on a daily basis, pupils find learning fractions to be very difficult. They generally do not struggle with the concept of fractions itself, but they frequently have problems with mathematical operations ...

  17. Magnetic error analysis of recycler pbar injection transfer line

    Energy Technology Data Exchange (ETDEWEB)

    Yang, M.J.; /Fermilab

    2007-06-01

    Detailed study of the Fermilab Recycler Ring anti-proton injection line became feasible with its BPM system upgrade, though the beamline has existed and been operational since the year 2000. Previous attempts were not fruitful due to limitations in the BPM system. The objectives include the assessment of the beamline optics and of the presence of error fields. In particular, the field region of the permanent Lambertson magnets at both ends of the R22 transfer line will be scrutinized.

  18. Analysis of Periodic Errors for Synthesized-Reference-Wave Holography

    Directory of Open Access Journals (Sweden)

    V. Schejbal

    2009-12-01

    Full Text Available Synthesized-reference-wave holographic techniques offer relatively simple and cost-effective measurement of antenna radiation characteristics and reconstruction of complex aperture fields using near-field intensity-pattern measurement. These methods make it possible to utilize the advantages of probe-compensation methods for amplitude and phase near-field measurements for planar and cylindrical scanning, including accuracy analyses. The paper analyzes periodic errors, which can be created during scanning, using both theoretical results and numerical simulations.

  19. Apology in cases of medical error disclosure: Thoughts based on a preliminary study.

    Science.gov (United States)

    Dahan, Sonia; Ducard, Dominique; Caeymaex, Laurence

    2017-01-01

    Disclosing medical errors is considered necessary by patients, ethicists, and health care professionals. The literature insists on the framing of this disclosure and describes the apology as appropriate and necessary. However, this policy seems difficult to put into practice. Few works have explored the function and meaning of the apology. The aim of this study was to explore the role ascribed to apology in communication between healthcare professionals and patients when disclosing a medical error, and to discuss the findings from a linguistic and philosophical perspective. It was a qualitative exploratory study, based on face-to-face semi-structured interviews with seven physicians in a neonatal unit in France, analyzed by discourse analysis. Four themes emerged: the difference between apology in everyday life and in the medical encounter; the place of the apology in the disclosure process, together with explanations, regrets, empathy, and ways to avoid repeating the error; the effects of the apology, which were to allow the patient-physician relationship undermined by the error to be maintained, responsibility to be accepted, the first steps towards forgiveness to be taken, and a less hierarchical doctor-patient relationship to be created; and the ways of expressing apology ("I am sorry"), which reflected regrets and empathy more than an explicit apology. This study highlights how the act of apology can be seen as a "language act" as described by the philosophers Austin and Searle, and how it functions as a technique for making amends following a wrongdoing and as an action undertaken so that neither party should lose face, thus echoing the sociologist Goffman's interaction theory. This interpretation also accords with the views of Lazare, for whom the function of apology is the restoration of dignity after the humiliation of the error. This approach to the apology illustrates how the meaning and impact of real-life language acts can be clarified by philosophical and sociological ideas.

  20. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    OpenAIRE

    He, Wei; Wang, Yueke; Xing, Kefei; Yang, Jianwei

    2016-01-01

    Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main parameters for raw soft error vulnerability of the module and coupling factors. Results indicate that the proposed method is feasible.

  1. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    Science.gov (United States)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner whose IQ is between 71 and 89 will have difficulties in solving mathematics problems, which often lead to errors. The errors can be analyzed for where they occur and their type. This research is a qualitative descriptive study which aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class in solving fraction problems. The subject of this research is one slow-learning seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semi-structured interviews. The collected data was analyzed by Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, such as concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.

  2. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong; Sun, Shuyu; Xie, Xiaoping

    2015-01-01

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  3. Residual-based a posteriori error estimation for multipoint flux mixed finite element methods

    KAUST Repository

    Du, Shaohong

    2015-10-26

    A novel residual-type a posteriori error analysis technique is developed for multipoint flux mixed finite element methods for flow in porous media in two or three space dimensions. The derived a posteriori error estimator for the velocity and pressure error in L-norm consists of discretization and quadrature indicators, and is shown to be reliable and efficient. The main tools of analysis are a locally postprocessed approximation to the pressure solution of an auxiliary problem and a quadrature error estimate. Numerical experiments are presented to illustrate the competitive behavior of the estimator.

  4. Quantum secret sharing based on quantum error-correcting codes

    International Nuclear Information System (INIS)

    Zhang Zu-Rong; Liu Wei-Tao; Li Cheng-Zu

    2011-01-01

    Quantum secret sharing (QSS) is a procedure for sharing classical information or quantum information by using quantum states. This paper presents how to use a [2k − 1, 1, k] quantum error-correcting code (QECC) to implement a quantum (k, 2k − 1) threshold scheme. It also takes advantage of classical enhancement of the [2k − 1, 1, k] QECC to establish a QSS scheme which can share classical information and quantum information simultaneously. Because information is encoded into QECC, these schemes can prevent intercept-resend attacks and be implemented on some noisy channels. (general)

  5. Residual-based Methods for Controlling Discretization Error in CFD

    Science.gov (United States)

    2015-08-24

    [Abstract unrecoverable from the source extraction: the text contains only fragments of a quadrature formula involving the determinant of the coordinate-transformation Jacobian (equation 25 of the report) and partial references on defect-correction methods for the incompressible Navier-Stokes equations and on estimating truncation errors of convective terms.]

  6. Timing analysis for embedded systems using non-preemptive EDF scheduling under bounded error arrivals

    Directory of Open Access Journals (Sweden)

    Michael Short

    2017-07-01

    Full Text Available Embedded systems consist of one or more processing units which are completely encapsulated by the devices under their control, and they often have stringent timing constraints associated with their functional specification. Previous research has considered the performance of different types of task scheduling algorithms and developed associated timing analysis techniques for such systems. Although preemptive scheduling techniques have traditionally been favored, rapid increases in processor speeds combined with improved insights into the behavior of non-preemptive scheduling techniques have led to increased interest in their use for real-time applications such as multimedia, automation, and control. However, when non-preemptive scheduling techniques are employed, there is a potential lack of error confinement should any timing errors occur in individual software tasks. In this paper, the focus is upon adding fault tolerance in systems using non-preemptive deadline-driven scheduling. Schedulability conditions are derived for fault-tolerant periodic and sporadic task sets experiencing bounded error arrivals under non-preemptive deadline scheduling. A timing analysis algorithm is presented based upon these conditions and its run-time properties are studied. Computational experiments show it to be highly efficient in terms of run-time complexity and competitive ratio when compared to previous approaches.
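
    As a sketch of the kind of condition involved, the classical Jeffay-Stanat-Martel test for non-preemptive EDF is shown below for sporadic tasks with deadlines equal to periods and integer parameters; the paper's contribution extends this style of test with bounded error arrivals, which this sketch does not model.

      from math import floor

      def np_edf_schedulable(tasks):
          """Jeffay-Stanat-Martel feasibility test for non-preemptive EDF.
          `tasks` is a list of (C, T) pairs: integer execution time and period."""
          tasks = sorted(tasks, key=lambda ct: ct[1])      # sort by period
          if sum(c / t for c, t in tasks) > 1.0:           # utilization <= 1
              return False
          for i, (ci, ti) in enumerate(tasks):             # demand condition:
              for L in range(tasks[0][1] + 1, ti):         # for all T1 < L < Ti
                  demand = ci + sum(floor((L - 1) / tj) * cj
                                    for cj, tj in tasks[:i])
                  if L < demand:
                      return False
          return True

      print(np_edf_schedulable([(1, 4), (2, 6), (3, 12)]))  # True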

  7. The Influence of Guided Error-Based Learning on Motor Skills Self-Efficacy and Achievement.

    Science.gov (United States)

    Chien, Kuei-Pin; Chen, Sufen

    2018-01-01

    The authors investigated the role of errors in motor skills teaching, specifically the influence of errors on skills self-efficacy and achievement. The participants were 75 undergraduate students enrolled in pétanque courses. The experimental group (guided error-based learning, n = 37) received a 6-week period of instruction based on the students' errors, whereas the control group (correct motion instruction, n = 38) received a 6-week period of instruction emphasizing correct motor skills. The experimental group had significantly higher scores in motor skills self-efficacy and outcomes than did the control group. Novices' errors reflect their schema in motor skills learning, which provides a basis for instructors to implement student-centered instruction and to facilitate the learning process. Guided error-based learning can effectively enhance beginners' skills self-efficacy and achievement in precision sports such as pétanque.

  8. Comprehensive Anti-error Study on Power Grid Dispatching Based on Regional Regulation and Integration

    Science.gov (United States)

    Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang

    2018-01-01

    With the growing capacity of power systems and the trend towards larger units and higher voltages, scheduling operations are becoming more frequent and complicated, and the probability of operation errors increases. To address the lack of anti-error functions, the limited scheduling functions, and the low working efficiency of technical support systems under regional regulation and integration, this paper proposes an integrated architecture for cloud-computing-based power grid dispatching anti-error systems. An integrated error-prevention system combining the Energy Management System (EMS) and the Operation Management System (OMS) has also been constructed. The system architecture has good scalability and adaptability, which can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the capability of regional regulation and anti-error checking, with broad development prospects.

  9. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    Full Text Available In the process of form grinding, the gear setting error is the main factor influencing the grinding accuracy; we propose an effective method to improve form grinding accuracy that corrects this error by controlling the machine operations. Based on establishing the geometry model of form grinding and representing the gear setting errors as homogeneous coordinates, a tooth mathematical model under the gear setting error was obtained and simplified. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationship between the gear setting errors and the tooth profile deviation, helix deviation, and cumulative pitch deviation, respectively, was investigated under the conditions of gear eccentricity error, gear inclination error, and gear resultant error. An error compensation method was proposed based on solving the sensitivity coefficient matrix of the setting errors in a five-axis CNC form grinding machine; simulation and experimental results demonstrated that the method can effectively correct the gear setting error and further improve the form grinding accuracy.
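
    A minimal sketch of the compensation step (the sensitivity matrix S below is a hypothetical stand-in for the one the paper derives for a five-axis machine; values are illustrative):

      import numpy as np

      def compensate_setting_error(S, deviations):
          """Least-squares setting correction: with deviations ~ S @ setting_error,
          the pseudoinverse gives the adjustment that best cancels them."""
          return -np.linalg.pinv(np.asarray(S, float)) @ np.asarray(deviations, float)

      S = [[0.8, 0.1],               # 3 measured deviations driven by
           [0.2, 0.9],               # 2 setting-error components
           [0.5, 0.4]]
      measured = [12.0, -5.0, 4.0]   # micrometres, illustrative only
      print(compensate_setting_error(S, measured))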

  10. Error Ellipsoid Analysis for the Diameter Measurement of Cylindroid Components Using a Laser Radar Measurement System

    Directory of Open Access Journals (Sweden)

    Zhengchun Du

    2016-05-01

    Full Text Available The use of three-dimensional (3D) data in the industrial measurement field is becoming increasingly popular because of the rapid development of laser scanning techniques based on the time-of-flight principle. However, the accuracy and uncertainty of these types of measurement methods are seldom investigated. In this study, a mathematical uncertainty evaluation model for the diameter measurement of standard cylindroid components has been proposed and applied to a 3D laser radar measurement system (LRMS). First, a single-point error ellipsoid analysis for the LRMS was established. An error ellipsoid model and algorithm for diameter measurement of cylindroid components was then proposed based on the single-point error ellipsoid. Finally, four experiments were conducted using the LRMS to measure the diameter of a standard cylinder in the laboratory. The experimental results of the uncertainty evaluation consistently matched well with the predictions. The proposed uncertainty evaluation model for cylindrical diameters can provide a reliable method for actual measurements and support further accuracy improvement of the LRMS.
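
    A minimal sketch of a single-point error ellipsoid, assuming a given 3-D measurement covariance (the matrix below is illustrative, not the LRMS's actual error model):

      import numpy as np

      def error_ellipsoid(cov, k=1.0):
          """k-sigma error ellipsoid of a 3-D point: semi-axis lengths are
          k * sqrt(eigenvalue), directions are the matching eigenvectors."""
          vals, vecs = np.linalg.eigh(np.asarray(cov, float))
          return k * np.sqrt(vals), vecs

      cov = [[0.04, 0.01, 0.00],     # illustrative covariance in mm^2
             [0.01, 0.09, 0.02],
             [0.00, 0.02, 0.16]]
      axes, directions = error_ellipsoid(cov, k=2.0)
      print(axes)                    # 2-sigma semi-axes in mm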

  11. Human error probability evaluation as part of reliability analysis of digital protection system of advanced pressurized water reactor - APR 1400

    International Nuclear Information System (INIS)

    Varde, P. V.; Lee, D. Y.; Han, J. B.

    2003-03-01

    A case study on human reliability analysis has been performed as part of the reliability analysis of the digital protection system of the APR-1400, which automatically actuates the shutdown system of the reactor when demanded. However, the safety analysis takes credit for operator action as a diverse means of tripping the reactor in an ATWS scenario, though one of low probability. Based on the available information, two cases, viz., human error in tripping the reactor and calibration error for the instrumentation in the protection system, have been analyzed. Wherever applicable, a parametric study has also been performed.

  12. Output Error Analysis of Planar 2-DOF Five-bar Mechanism

    Science.gov (United States)

    Niu, Kejia; Wang, Jun; Ting, Kwun-Lon; Tao, Fen; Cheng, Qunchao; Wang, Quan; Zhang, Kaiyang

    2018-03-01

    Aiming at the mechanism error caused by joint clearance in the planar 2-DOF five-bar mechanism, a method is applied that treats the joint clearance of a kinematic pair as an equivalent virtual link. The structural error model of revolute joint clearance is established based on the N-bar rotation laws and the concept of joint rotation space. The influence of the clearance of the moving pair on the output error of the mechanism is studied, and the calculation method and basis of the maximum error are given. The error rotation space of the mechanism under the influence of joint clearance is obtained. The results show that this method can accurately calculate the error rotation space caused by joint clearance, which provides a new way to analyze planar parallel mechanism errors caused by joint clearance.

  13. Modeling and Experimental Study of Soft Error Propagation Based on Cellular Automaton

    Directory of Open Access Journals (Sweden)

    Wei He

    2016-01-01

    Full Text Available Aiming to estimate SEE soft error performance of complex electronic systems, a soft error propagation model based on cellular automaton is proposed and an estimation methodology based on circuit partitioning and error propagation is presented. Simulations indicate that different fault grade jamming and different coupling factors between cells are the main parameters influencing the vulnerability of the system. Accelerated radiation experiments have been developed to determine the main parameters for raw soft error vulnerability of the module and coupling factors. Results indicate that the proposed method is feasible.

  14. Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.

    2014-01-01

    We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.

  15. Analysis of Student Errors on Division of Fractions

    Science.gov (United States)

    Maelasari, E.; Jupri, A.

    2017-02-01

    This study aims to describe the types of errors that typically occur when students complete division operations on fractions, and to describe the causes of students’ mistakes. This research used a descriptive qualitative method and involved 22 fifth-grade students at one particular elementary school in Kuningan, Indonesia. The results of this study showed that students’ erroneous answers were caused by students changing their way of thinking so as to solve multiplication and division operations with the same procedures, by confusion when converting mixed fractions to common fractions, and by carelessness in calculation. From the students’ written work in solving the fraction problems, we found that the teaching method used influences student responses, and some student responses were beyond the researchers’ predictions. We conclude that the teaching method is not the only important thing that must be prepared; the teacher should also prepare predictions of students’ answers to the problems that will be given in the learning process. This could be a reflection for teachers to improve and to achieve the expected learning goals.

  16. Hardware-efficient bosonic quantum error-correcting codes based on symmetry operators

    Science.gov (United States)

    Niu, Murphy Yuezhen; Chuang, Isaac L.; Shapiro, Jeffrey H.

    2018-03-01

    We establish a symmetry-operator framework for designing quantum error-correcting (QEC) codes based on fundamental properties of the underlying system dynamics. Based on this framework, we propose three hardware-efficient bosonic QEC codes that are suitable for χ(2)-interaction based quantum computation in multimode Fock bases: the χ(2) parity-check code, the χ(2) embedded error-correcting code, and the χ(2) binomial code. All of these QEC codes detect photon-loss or photon-gain errors by means of photon-number parity measurements, and then correct them via χ(2) Hamiltonian evolutions and linear-optics transformations. Our symmetry-operator framework provides a systematic procedure for finding QEC codes that are not stabilizer codes, and it enables convenient extension of a given encoding to higher-dimensional qudit bases. The χ(2) binomial code is of special interest because, with m ≤ N identified from channel monitoring, it can correct m-photon-loss errors, or m-photon-gain errors, or (m−1)th-order dephasing errors using logical qudits that are encoded in O(N) photons. In comparison, other bosonic QEC codes require O(N^2) photons to correct the same degree of bosonic errors. Such improved photon efficiency underscores the additional error-correction power that can be provided by channel monitoring. We develop quantum Hamming bounds for photon-loss errors in the code subspaces associated with the χ(2) parity-check code and the χ(2) embedded error-correcting code, and we prove that these codes saturate their respective bounds. Our χ(2) QEC codes exhibit hardware efficiency in that they address the principal error mechanisms and exploit the available physical interactions of the underlying hardware, thus reducing the physical resources required for implementing their encoding, decoding, and error-correction operations, and their universal encoded-basis gate sets.

  17. Patient safety in the clinical laboratory: a longitudinal analysis of specimen identification errors.

    Science.gov (United States)

    Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A

    2006-11-01

    Patient safety is an increasingly visible and important mission for clinical laboratories. Attention is being paid by accreditation and regulatory organizations to improving processes related to patient identification and specimen labeling, because errors in these areas that jeopardize patient safety are common and avoidable through improvement in the total testing process. The objective was to assess patient identification and specimen labeling improvement after multiple implementation projects, using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months), over a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. Student t test showed a significant decrease in the most serious error, mislabeled specimens, across the 3 patient safety projects. Trend analysis demonstrated decreases in all 3 error types over 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, therefore improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.

  18. New method of classifying human errors at nuclear power plants and the analysis results of applying this method to maintenance errors at domestic plants

    International Nuclear Information System (INIS)

    Takagawa, Kenichi; Miyazaki, Takamasa; Gofuku, Akio; Iida, Hiroyasu

    2007-01-01

    Since many of the adverse events that have occurred in nuclear power plants in Japan and abroad have been related to maintenance or operation, it is necessary to plan preventive measures based on detailed analyses of human errors made by maintenance workers or operators. Therefore, before planning preventive measures, we developed a new method of analyzing human errors. Since each human error is an unsafe action caused by some misjudgement made by a person, we decided to classify them into six categories according to the stage in the judgment process in which the error was made. By further classifying each error as either omission-type or commission-type, we produced 12 categories of errors. Then, we divided them into the two categories of basic error tendencies and individual error tendencies, and categorized background factors into four categories: imperfect planning; imperfect facilities or tools; imperfect environment; and imperfect instructions or communication. We then defined the factors in each category to make it easy to identify the factors that caused each error. Using this method, we studied the characteristics of human errors involving maintenance workers and planners, since many maintenance errors have occurred. Among the human errors made by workers (worker errors) during the implementation stage, the following three types accounted for approximately 80%: commission-type 'projection errors', omission-type 'comprehension errors', and commission-type 'action errors'. The most common individual factor in worker errors was 'repetition or habit' (schema), based on the assumption of a typical situation, and half of the 'repetition or habit' (schema) cases were not influenced by any background factors. The most common background factor contributing to the individual factors was 'imperfect work environment', followed by 'insufficient knowledge'. Approximately 80% of the individual factors were 'repetition or habit' or

  19. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  20. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    Science.gov (United States)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. Normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. Then a relation database between surface figure and electrical performance is built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axisymmetric reflector with an axial-mode helical antenna as feed is further conducted to verify the correctness of the proposed method. Finally, the influence rules of surface error distribution on electromagnetic performance are summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new concept for reflector shape adjustment in the manufacturing process.
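
    A minimal sketch of building a deformation surface from a few low-order Zernike terms (plain, unnormalised polynomials; coefficients and grid are illustrative):

      import numpy as np

      def zernike_surface(rho, theta, coeffs):
          """Surface error as a sum of low-order Zernike terms over the unit disk."""
          terms = {
              "piston": np.ones_like(rho),
              "tilt_x": rho * np.cos(theta),
              "tilt_y": rho * np.sin(theta),
              "defocus": 2.0 * rho**2 - 1.0,
              "astig": rho**2 * np.cos(2.0 * theta),
          }
          return sum(coeffs.get(k, 0.0) * v for k, v in terms.items())

      rho, theta = np.meshgrid(np.linspace(0.0, 1.0, 64),
                               np.linspace(0.0, 2.0 * np.pi, 64))
      err = zernike_surface(rho, theta, {"defocus": 0.5e-3, "astig": 0.2e-3})
      print(float(np.sqrt((err**2).mean())))  # grid-sampled (unweighted) RMS error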

  1. Enhanced Pedestrian Navigation Based on Course Angle Error Estimation Using Cascaded Kalman Filters.

    Science.gov (United States)

    Song, Jin Woo; Park, Chan Gook

    2018-04-21

    An enhanced pedestrian dead reckoning (PDR) based navigation algorithm, which uses two cascaded Kalman filters (TCKF) for the estimation of course angle and navigation errors, is proposed. The proposed algorithm uses a foot-mounted inertial measurement unit (IMU), waist-mounted magnetic sensors, and a zero velocity update (ZUPT) based inertial navigation technique with TCKF. The first stage filter estimates the course angle error of a human, which is closely related to the heading error of the IMU. In order to obtain the course measurements, the filter uses magnetic sensors and a position-trace based course angle. For preventing magnetic disturbance from contaminating the estimation, the magnetic sensors are attached to the waistband. Because the course angle error is mainly due to the heading error of the IMU, and the characteristic error of the heading angle is highly dependent on that of the course angle, the estimated course angle error is used as a measurement for estimating the heading error in the second stage filter. At the second stage, an inertial navigation system-extended Kalman filter-ZUPT (INS-EKF-ZUPT) method is adopted. As the heading error is estimated directly by using course-angle error measurements, the estimation accuracy for the heading and yaw gyro bias can be enhanced, compared with the ZUPT-only case, which eventually enhances the position accuracy more efficiently. The performance enhancements are verified via experiments, and the way-point position error for the proposed method is compared with those for the ZUPT-only case and with other cases that use ZUPT and various types of magnetic heading measurements. The results show that the position errors are reduced by a maximum of 90% compared with the conventional ZUPT based PDR algorithms.

  2. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates for realizing future satellite communications and space observations, since they are lightweight and low-cost and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations, and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (Root Mean Square) of the shape error is a random quantity with an exponential probability distribution and features great dispersion; with the increase of F/D and D, both the mean value and the standard deviation of the shape errors increase; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect, with a much higher weight than the others; pressure variation ranks second; errors in the thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
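
    A minimal sketch of the sampling-plus-correlation workflow (the response below is a made-up linear model standing in for the manufacture simulation, with the boundary-deviation column deliberately dominant to mimic the qualitative finding):

      import numpy as np
      from scipy.stats import qmc

      rng = np.random.default_rng(0)
      x = qmc.LatinHypercube(d=4, seed=0).random(n=500)   # 4 inputs in [0, 1)

      # Stand-in response: boundary deviation (column 2) dominates the RMS error
      rms = (0.1 * x[:, 0] + 0.1 * x[:, 1] + 1.0 * x[:, 2] + 0.4 * x[:, 3]
             + 0.05 * rng.standard_normal(500))

      for name, col in zip(["thickness", "modulus", "boundary", "pressure"], x.T):
          r = np.corrcoef(col, rms)[0, 1]
          print(f"{name:9s} correlation with RMS shape error: {r:+.2f}")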

  3. Contribution of Error Analysis to Foreign Language Teaching

    Directory of Open Access Journals (Sweden)

    Vacide ERDOĞAN

    2014-01-01

    Full Text Available It is inevitable that learners make mistakes in the process of foreign language learning. However, what is questioned by language teachers is why students go on making the same mistakes even when such mistakes have been repeatedly pointed out to them. Yet not all mistakes are the same; sometimes they seem to be deeply ingrained, but at other times students correct themselves with ease. Thus, researchers and teachers of foreign language came to realize that the mistakes a person made in the process of constructing a new system of language need to be analyzed carefully, for they possibly hold in them some of the keys to the understanding of second language acquisition. In this respect, the aim of this study is to point out the significance of learners’ errors, for they provide evidence of how language is learned and what strategies or procedures the learners are employing in the discovery of language.

  4. Study on analysis from sources of error for Airborne LIDAR

    Science.gov (United States)

    Ren, H. C.; Yan, Q.; Liu, Z. J.; Zuo, Z. Q.; Xu, Q. Q.; Li, F. F.; Song, C.

    2016-11-01

    With the advancement of aerial photogrammetry, Airborne LIDAR measurement techniques provide a new technical means to obtain geo-spatial information of high spatial and temporal resolution, with unique advantages and broad application prospects. Airborne LIDAR is increasingly becoming a new kind of earth observation technology: mounted on an aviation platform, it receives laser pulses to obtain high-precision, high-density three-dimensional point cloud coordinates and intensity information. In this paper, we briefly describe airborne laser radar systems and analyze in detail some error sources in Airborne LIDAR data, and corresponding methods are put forward to avoid or eliminate them. Taking into account practical engineering applications, some recommendations are developed for these designs, which have crucial theoretical and practical significance in the field of Airborne LIDAR data processing.

  5. Hebbian errors in learning: an analysis using the Oja model.

    Science.gov (United States)

    Rădulescu, Anca; Cox, Kingsley; Adams, Paul

    2009-06-21

    Recent work on long term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution. We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation (analytically and computationally). We find that inaccuracy increases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n. We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
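
    A minimal sketch of the modified rule (assuming, as one plausible reading, that the crosstalk matrix E multiplies the Hebbian increment; dimensions, learning rate and iteration count are illustrative):

      import numpy as np

      rng = np.random.default_rng(1)
      n, Q, eta = 8, 0.8, 0.01
      A = rng.standard_normal((n, n))
      C = A @ A.T / n                                   # input covariance
      E = np.full((n, n), (1.0 - Q) / (n - 1))          # crosstalk matrix:
      np.fill_diagonal(E, Q)                            # quality Q on the diagonal

      L = np.linalg.cholesky(C + 1e-9 * np.eye(n))
      w = rng.standard_normal(n)
      w /= np.linalg.norm(w)
      for _ in range(100_000):
          x = L @ rng.standard_normal(n)                # sample with covariance C
          y = w @ x
          w += eta * (E @ (y * x) - (y ** 2) * w)       # Oja update with crosstalk

      vals, vecs = np.linalg.eig(E @ C)                 # leading eigenvector of EC
      lead = np.real(vecs[:, np.argmax(np.real(vals))])
      print(abs(w @ lead) / (np.linalg.norm(w) * np.linalg.norm(lead)))  # ~1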

  6. Yaw Angle Error Compensation for Airborne 3-D SAR Based on Wavenumber-domain Subblock

    Directory of Open Access Journals (Sweden)

    Ding Zhen-yu

    2015-08-01

    Full Text Available Airborne array antenna SAR is used to obtain three-dimensional imaging; however, it is impaired by motion errors. In particular, rotation error changes the relative positions among the different antenna units and strongly affects image quality. Unfortunately, presently available algorithms cannot compensate for the rotation error. In this study, an airborne array antenna SAR three-dimensional imaging model is discussed along with the effect of rotation errors, and more specifically, the yaw angle error. The analysis reveals that along- and cross-track wavenumbers can be obtained from the echo phase, and when used to calculate the range error, these wavenumbers lead to a result independent of target position, which eliminates the error's spatial variance. Therefore, a wavenumber-domain subblock compensation method is proposed that computes the range error in subblocks of the along- and cross-track 2-D wavenumber domain and precisely compensates for the error in the space domain. Simulations show that the algorithm can compensate for the effect of yaw angle error.

  7. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  8. Human reliability analysis of errors of commission: a review of methods and applications

    International Nuclear Information System (INIS)

    Reer, B.

    2007-06-01

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, however should pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  9. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    International Nuclear Information System (INIS)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam; Park, Hoon Hee

    2013-01-01

    This study is aimed to evaluate the effect of T1/2 upon count rates in the analysis of dynamic scans using a NaI(Tl) scintillation camera, and to suggest a new quality control method based on these effects. We produced a point source with 99mTcO4- of 18.5 to 185 MBq in 2 mL syringes, and acquired 30 frames of dynamic images with 10 to 60 seconds each using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source by 5 gamma cameras (Infinia 2, Forte 2, Argus 1). There were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame with 10-second intervals in the first experiment (p>0.05), but there were significantly low average count rates with the sources over 111 MBq activity at 60 seconds/frame (p<0.01). According to the second analysis of linear regression on the count rates of the 5 gamma cameras acquired during 90 minutes, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were high at 0.0042 and 0.229, respectively. We could not find abnormal fluctuation in the χ2 test of the count rates (p>0.02), and we found homogeneity of variance in Levene's F-test among the gamma cameras (p>0.05). In the correlation analysis, there was only a significant negative correlation between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T1/2 error from a change of gradient of -0.25% to +0.25%, the error increases when T1/2 is relatively long or the gradient is high. When estimating the value for the 4th camera, which has the highest gradient, from the above-mentioned result, we could not see a T1/2 error within 60 minutes at that value. In conclusion, it is necessary for scintillation gamma cameras in the medical field to be managed rigorously for the quality of radiation measurement. Especially, we found a

  10. On the effects of systematic errors in analysis of nuclear scattering data

    International Nuclear Information System (INIS)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-01-01

    The effects of systematic errors in elastic scattering differential cross-section data upon the assessment of the quality of fits to those data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from 12C, from 350 MeV 16O-16O scattering, and from 288.6 MeV 12C-12C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross validation, a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to the data, upon allowance for angle variation, became overdetermined. A far simpler S-function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed-energy inverse scattering study to specify effective, local, Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs.
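
    For reference, the optimal-smoothness criterion alluded to is commonly implemented by minimising the generalized cross validation score; the standard Craven-Wahba form is sketched here (the paper's exact variant may differ). For a linear smoother ŷ = A(λ) y acting on n data points,

      \mathrm{GCV}(\lambda) \;=\; \frac{\tfrac{1}{n}\,\lVert\,(I - A(\lambda))\,y\,\rVert^{2}}{\bigl[\tfrac{1}{n}\,\operatorname{tr}\,(I - A(\lambda))\bigr]^{2}}

    and the smoothing parameter λ is chosen to minimise this score.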

  11. Review of advances in human reliability analysis of errors of commission-Part 2: EOC quantification

    International Nuclear Information System (INIS)

    Reer, Bernhard

    2008-01-01

    In close connection with examples relevant to contemporary probabilistic safety assessment (PSA), a review of advances in human reliability analysis (HRA) of post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions, has been carried out. The review comprises both EOC identification (part 1) and quantification (part 2); part 2 is presented in this article. Emerging HRA methods in this field are: ATHEANA, MERMOS, the EOC HRA method developed by Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), the MDTA method and CREAM. The essential advanced features are on the conceptual side, especially to envisage the modeling of multiple contexts for an EOC to be quantified (ATHEANA, MERMOS and MDTA), in order to explicitly address adverse conditions. There is promising progress in providing systematic guidance to better account for cognitive demands and tendencies (GRS, CREAM), and EOC recovery (MDTA). Problematic issues are associated with the implementation of multiple context modeling and the assessment of context-specific error probabilities. Approaches for task or error opportunity scaling (CREAM, GRS) and the concept of reference cases (ATHEANA outlook) provide promising orientations for achieving progress towards data-based quantification. Further development work is needed and should be carried out in close connection with large-scale applications of existing approaches

  12. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy ...

    Indian Academy of Sciences (India)

    …(2007), we find that 3.796% of the data are outliers beyond 2.6σ, based on the average total observational error of the distance modulus of SNIa, 0.31 mag. Obviously, the distance modulus error deviates seriously from a Gaussian distribution, and it is not suitable to calculate the systematic error σsys of SNIa by the χ2 test method.

  13. Evaluation of a Web-based Error Reporting Surveillance System in a Large Iranian Hospital.

    Science.gov (United States)

    Askarian, Mehrdad; Ghoreishi, Mahboobeh; Akbari Haghighinejad, Hourvash; Palenik, Charles John; Ghodsi, Maryam

    2017-08-01

    Proper reporting of medical errors helps healthcare providers learn from adverse incidents and improve patient safety. A well-designed and functioning confidential reporting system is an essential component of this process. There are many error reporting methods; however, web-based systems are often preferred because they can provide comprehensive and more easily analyzed information. This study addresses the use of a web-based error reporting system. This interventional study involved the application of an in-house designed "voluntary web-based medical error reporting system." The system has been used since July 2014 in Nemazee Hospital, Shiraz University of Medical Sciences. The rate and severity of errors reported during the year prior to and the year after the system launch were compared. The slope of the error report trend line was steep during the first 12 months (B = 105.727, P = 0.00). However, it slowed following the launch of the web-based reporting system and was no longer statistically significant (B = 15.27, P = 0.81) by the end of the second year. Most recorded errors were no-harm laboratory types and were due to inattention. Usually, they were reported by nurses and other permanent employees. Most reported errors occurred during morning shifts. Using a standardized web-based error reporting system can be beneficial. This study reports on the performance of an in-house designed reporting system, which appeared to properly detect and analyze medical errors. The system also generated follow-up reports in a timely and accurate manner. Detection of near-miss errors could play a significant role in identifying areas of system defects.

  14. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    Science.gov (United States)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument-rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on the FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human-in-the-Loop (HITL) studies of SATS HVO and baseline operations.

  15. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    Science.gov (United States)

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters have been set manually, which cannot guarantee the model's performance. In this paper, an SVM method based on an improved particle swarm optimization algorithm (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three algorithms are selected to optimize the SVM parameters: the particle swarm optimization algorithm (PSO), the improved PSO algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are used as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and is an effective method for predicting the dynamic measurement errors of sensors.
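
    As a rough illustration of the approach described above, the sketch below tunes SVR hyperparameters (C, gamma) with a plain PSO loop, minimizing validation RMSE. It omits the paper's natural-selection and simulated-annealing refinements, and all bounds and coefficients are illustrative assumptions, not values from the study.

      import numpy as np
      from sklearn.svm import SVR
      from sklearn.metrics import mean_squared_error

      def fitness(params, X_train, y_train, X_val, y_val):
          # Validation RMSE for one (C, gamma) particle.
          C, gamma = params
          model = SVR(C=C, gamma=gamma).fit(X_train, y_train)
          return np.sqrt(mean_squared_error(y_val, model.predict(X_val)))

      def pso_svr(X_train, y_train, X_val, y_val, n_particles=20, n_iter=30):
          rng = np.random.default_rng(0)
          lo, hi = np.array([0.1, 1e-3]), np.array([100.0, 1.0])   # assumed (C, gamma) bounds
          pos = rng.uniform(lo, hi, size=(n_particles, 2))
          vel = np.zeros_like(pos)
          pbest = pos.copy()
          pbest_f = np.array([fitness(p, X_train, y_train, X_val, y_val) for p in pos])
          gbest = pbest[pbest_f.argmin()].copy()
          for _ in range(n_iter):
              r1, r2 = rng.random((2, n_particles, 1))
              vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
              pos = np.clip(pos + vel, lo, hi)
              f = np.array([fitness(p, X_train, y_train, X_val, y_val) for p in pos])
              better = f < pbest_f
              pbest[better], pbest_f[better] = pos[better], f[better]
              gbest = pbest[pbest_f.argmin()].copy()
          return gbest   # best (C, gamma) found

      # Toy usage with synthetic data:
      rng = np.random.default_rng(42)
      X = rng.uniform(-1.0, 1.0, (120, 3))
      y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)
      print("best (C, gamma):", pso_svr(X[:80], y[:80], X[80:], y[80:]))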

  16. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of any human action that should have been performed but does not occur. Errors of commission are those related to human actions that should not be performed but which in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors) or are introduced into an operational task without being part of the normal sequence of this task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the performance of its operational parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of THERP error quantification tables. (author)

  17. Anisotropic mesh adaptation for solution of finite element problems using hierarchical edge-based error estimates

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantin [Los Alamos National Laboratory; Agouzal, Abdellatif [UNIV DE LYON; Vassilevski, Yuri [Los Alamos National Laboratory

    2009-01-01

    We present a new technology for generating meshes that minimize the interpolation and discretization errors or their gradients. The key element of this methodology is the construction of a space metric from edge-based error estimates. For a mesh with N_h triangles, the error is proportional to N_h^{-1} and the gradient of error is proportional to N_h^{-1/2}, which are the optimal asymptotics. The methodology is verified with numerical experiments.

  18. OOK power model based dynamic error testing for smart electricity meter

    International Nuclear Information System (INIS)

    Wang, Xuewei; Chen, Jingxia; Jia, Xiaolu; Zhu, Meng; Yuan, Ruiming; Jiang, Zhenyu

    2017-01-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Then two types of TDLE sequences and three modes of OOK testing dynamic power are proposed. In addition, a novel algorithm, which helps to solve the problem of the traceability of dynamic electric energy measurement, is derived for dynamic errors. Based on the above research, OOK TDLE sequence generation equipment was developed and a dynamic error testing system was constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode and the measurement uncertainty is 0.38%. (paper)

  19. OOK power model based dynamic error testing for smart electricity meter

    Science.gov (United States)

    Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu

    2017-02-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Then two types of TDLE sequences and three modes of OOK testing dynamic power are proposed. In addition, a novel algorithm, which helps to solve the problem of the traceability of dynamic electric energy measurement, is derived for dynamic errors. Based on the above research, OOK TDLE sequence generation equipment was developed and a dynamic error testing system was constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode and the measurement uncertainty is 0.38%.

  20. Analysis technique for controlling system wavefront error with active/adaptive optics

    Science.gov (United States)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
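
    A minimal numpy sketch of the core computation implied above: fitting actuator influence functions to a measured wavefront error by least squares and reporting the residual as the fit error. The matrix shapes and names are assumptions for illustration; SigFit's actual interface is not shown here.

      import numpy as np

      def actuator_commands(influence, wfe):
          """Least-squares actuator commands that best cancel a measured wavefront error.

          influence: (n_points, n_actuators) matrix; column j is the wavefront
                     response of actuator j (its influence function).
          wfe:       (n_points,) measured system wavefront error.
          """
          # Solve min_x || wfe + influence @ x ||_2, driving the residual toward zero.
          x, *_ = np.linalg.lstsq(influence, -wfe, rcond=None)
          residual = np.linalg.norm(wfe + influence @ x)   # estimate of uncorrectable WFE
          return x, residual

      # Toy usage: 3 actuators with random influence functions over 50 sample points.
      rng = np.random.default_rng(0)
      A = rng.normal(size=(50, 3))
      w = A @ np.array([0.5, -0.2, 0.1]) + 0.01 * rng.normal(size=50)
      commands, resid = actuator_commands(A, w)
      print(commands, resid)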

  1. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation

    Directory of Open Access Journals (Sweden)

    Tao Li

    2016-03-01

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.

  2. Particle Filter with Novel Nonlinear Error Model for Miniature Gyroscope-Based Measurement While Drilling Navigation.

    Science.gov (United States)

    Li, Tao; Yuan, Gannan; Li, Wang

    2016-03-15

    The derivation of a conventional error model for the miniature gyroscope-based measurement while drilling (MGWD) system is based on the assumption that the errors of attitude are small enough so that the direction cosine matrix (DCM) can be approximated or simplified by the errors of small-angle attitude. However, the simplification of the DCM would introduce errors to the navigation solutions of the MGWD system if the initial alignment cannot provide precise attitude, especially for the low-cost microelectromechanical system (MEMS) sensors operated in harsh multilateral horizontal downhole drilling environments. This paper proposes a novel nonlinear error model (NNEM) by the introduction of the error of DCM, and the NNEM can reduce the propagated errors under large-angle attitude error conditions. The zero velocity and zero position are the reference points and the innovations in the states estimation of particle filter (PF) and Kalman filter (KF). The experimental results illustrate that the performance of PF is better than KF and the PF with NNEM can effectively restrain the errors of system states, especially for the azimuth, velocity, and height in the quasi-stationary condition.
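
    For readers unfamiliar with the estimator being compared, here is a generic bootstrap particle filter on a toy 1D random-walk model, showing the predict-weight-resample loop. This is not the NNEM formulation from the paper; the model and noise levels are assumed purely for illustration.

      import numpy as np

      def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=0):
          """Bootstrap PF for x_k = x_{k-1} + N(0, q^2), z_k = x_k + N(0, r^2)."""
          rng = np.random.default_rng(seed)
          particles = rng.normal(0.0, 1.0, n_particles)                 # initial ensemble
          estimates = []
          for z in observations:
              particles = particles + rng.normal(0.0, q, n_particles)  # predict
              weights = np.exp(-0.5 * ((z - particles) / r) ** 2)      # weight by likelihood
              weights /= weights.sum()
              estimates.append(np.sum(weights * particles))            # posterior mean
              # Systematic resampling keeps the ensemble from degenerating.
              u = (rng.random() + np.arange(n_particles)) / n_particles
              idx = np.minimum(np.searchsorted(np.cumsum(weights), u), n_particles - 1)
              particles = particles[idx]
          return np.array(estimates)

      # Toy usage: track a noisy random walk.
      rng = np.random.default_rng(1)
      truth = np.cumsum(rng.normal(0.0, 0.1, 100))
      z = truth + rng.normal(0.0, 0.5, 100)
      print(np.abs(bootstrap_pf(z) - truth).mean())   # mean tracking error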

  3. Examining Teaching Based on Errors in Mathematics Amongst Pupils with Learning Disabilities

    Science.gov (United States)

    Magen-Nagar, Noga

    2016-01-01

    Teaching mathematics while learning from students' mistakes, errors and misconceptions is most important for meaningful learning. This study was based on intervention programs prepared by preservice teachers. It aimed to examine their knowledge of assessment of errors in mathematics amongst pupils with learning disabilities, and their use as a…

  4. Optimization of sample absorbance for quantitative analysis in the presence of pathlength error in the IR and NIR regions

    International Nuclear Information System (INIS)

    Hirschfeld, T.; Honigs, D.; Hieftje, G.

    1985-01-01

    Optimal absorbance levels for quantitative analysis in the presence of photometric error have been described in the past. In newer instrumentation, such as FT-IR and NIRA spectrometers, the photometric error is no longer limiting. In these instruments, pathlength error due to cell or sampling irreproducibility is often a major concern. One can derive the optimal absorbance by taking both pathlength and photometric errors into account. This paper analyzes the cases of pathlength error >> photometric error (trivial) and various cases in which the pathlength and photometric errors are of the same order: adjustable concentration (trivial until dilution errors are considered), constant relative pathlength error (trivial), and constant absolute pathlength error. The latter, in particular, is analyzed in detail to give the behavior of the error, the behavior of the optimal absorbance in its presence, and the total error levels attainable
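
    A sketch of the standard error-propagation setup behind such an analysis (assuming independent errors; the paper's treatment of the mixed cases is more detailed):

      % Beer-Lambert law: A = \epsilon l c, hence c = A / (\epsilon l).
      % Propagating independent absorbance and pathlength errors:
      \left(\frac{\sigma_c}{c}\right)^2
        = \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_l}{l}\right)^2
      % With a constant transmittance error \sigma_T and T = 10^{-A}, the
      % photometric term is \sigma_A = \sigma_T / (T \ln 10), and minimizing
      % \sigma_c/c over A alone recovers the classical photometric optimum
      % A = 1/\ln 10 \approx 0.434.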

  5. [Patient safety culture based on a non-punitive response to error and freedom of expression of healthcare professionals].

    Science.gov (United States)

    Mahjoub, Mohamed; Bouafia, Nabiha; Cheikh, Asma Ben; Ezzi, Olfa; Njah, Mansour

    2016-11-25

    This study provides an overview of healthcare professionals' perception of patient safety based on an analysis of the concept of freedom of expression and non-punitive response, in order to identify and correct errors in our health system. This concept is a cornerstone of the patient safety culture among healthcare professionals and plays a central role in the quality improvement strategy.

  6. Error analysis of dimensionless scaling experiments with multiple points using linear regression

    International Nuclear Information System (INIS)

    Guercan, Oe.D.; Vermare, L.; Hennequin, P.; Bourdelle, C.

    2010-01-01

    A general method of error estimation in the case of multiple point dimensionless scaling experiments, using linear regression and standard error propagation, is proposed. The method reduces to the previous result of Cordey (2009 Nucl. Fusion 49 052001) in the case of a two-point scan. On the other hand, if the points follow a linear trend, it explains how the estimated error decreases as more points are added to the scan. Based on the analytical expression that is derived, it is argued that for a low number of points, adding points to the ends of the scanned range, rather than the middle, results in a smaller error estimate. (letter)
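
    The core of such an error estimate can be reproduced with an ordinary least-squares fit and the standard error of the slope; the toy data below are purely illustrative (in a real scan, x and y would be logarithms of the dimensionless variables).

      import numpy as np
      from scipy.stats import linregress

      x = np.array([0.80, 1.00, 1.30, 1.70, 2.20, 2.90])                  # scanned variable
      y = 0.60 * x + np.random.default_rng(1).normal(0.0, 0.05, x.size)   # responses

      fit = linregress(x, y)
      # fit.stderr is the standard error of the estimated scaling exponent; adding
      # points at the ends of the scanned range reduces it faster than adding
      # points in the middle, as the letter argues.
      print(f"exponent = {fit.slope:.3f} +/- {fit.stderr:.3f}")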

  7. An error analysis in the early grades mathematics – a learning opportunity?

    Directory of Open Access Journals (Sweden)

    Roelien Herholdt

    2014-07-01

    Error analysis is the study of errors in learners’ work with a view to looking for possible explanations for these errors. It is a multifaceted activity involving analysis of correct, partially correct and incorrect processes and thinking about possible remediating strategies. This paper reports on such an analysis of learner tests. The tests were administered as part of the evaluation of an intervention project that aimed to teach mathematical problem solving skills to grade 1-4 learners. Quantitative error analysis was carried out using a coding sheet for each grade. A reliability coefficient was found for each test, as were item means and discrimination indexes for each item. The analysis provided some insight into the more common procedural and conceptual errors evidenced in the learners’ scripts. Findings showed similar difficulties across intervention and control schools and highlighted particular areas of difficulty. The authors argue that this analysis is an example of large-scale error analysis, but that the analysis method could be adopted by teachers of grades 1-4.

  8. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  9. An international collaborative family-based whole genome quantitative trait linkage scan for myopic refractive error

    DEFF Research Database (Denmark)

    Abbott, Diana; Li, Yi-Ju; Guggenheim, Jeremy A

    2012-01-01

    To investigate quantitative trait loci linked to refractive error, we performed a genome-wide quantitative trait linkage analysis using single nucleotide polymorphism markers and family data from five international sites....

  10. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of Human Errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.

  11. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability in the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 2 (i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined by the factual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  12. Bayesian analysis of data and model error in rainfall-runoff hydrological models

    Science.gov (United States)

    Kavetski, D.; Franks, S. W.; Kuczera, G.

    2004-12-01

    A major unresolved issue in the identification and use of conceptual hydrologic models is realistic description of uncertainty in the data and model structure. In particular, hydrologic parameters often cannot be measured directly and must be inferred (calibrated) from observed forcing/response data (typically, rainfall and runoff). However, rainfall varies significantly in space and time, yet is often estimated from sparse gauge networks. Recent work showed that current calibration methods (e.g., standard least squares, multi-objective calibration, generalized likelihood uncertainty estimation) ignore forcing uncertainty and assume that the rainfall is known exactly. Consequently, they can yield strongly biased and misleading parameter estimates. This deficiency confounds attempts to reliably test model hypotheses, to generalize results across catchments (the regionalization problem) and to quantify predictive uncertainty when the hydrologic model is extrapolated. This paper continues the development of a Bayesian total error analysis (BATEA) methodology for the calibration and identification of hydrologic models, which explicitly incorporates the uncertainty in both the forcing and response data, and allows systematic model comparison based on residual model errors and formal Bayesian hypothesis testing (e.g., using Bayes factors). BATEA is based on explicit stochastic models for both forcing and response uncertainty, whereas current techniques focus solely on response errors. Hence, unlike existing methods, the BATEA parameter equations directly reflect the modeler's confidence in all the data. We compare several approaches to approximating the parameter distributions: a) full Markov Chain Monte Carlo methods and b) simplified approaches based on linear approximations. Studies using synthetic and real data from the US and Australia show that BATEA systematically reduces the parameter bias, leads to more meaningful model fits and allows model comparison taking
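
    As a pointer to what "full Markov Chain Monte Carlo" involves here, below is a generic random-walk Metropolis sampler; in a BATEA-style application the supplied log-posterior would combine priors on rainfall error multipliers with the runoff residual likelihood. This is a textbook sketch, not the authors' implementation.

      import numpy as np

      def metropolis(log_post, theta0, n_samples=5000, step=0.1, seed=0):
          """Random-walk Metropolis sampling of a posterior given by log_post(theta)."""
          rng = np.random.default_rng(seed)
          theta = np.asarray(theta0, dtype=float)
          lp = log_post(theta)
          chain = np.empty((n_samples, theta.size))
          for i in range(n_samples):
              prop = theta + rng.normal(0.0, step, theta.size)   # symmetric proposal
              lp_prop = log_post(prop)
              if np.log(rng.random()) < lp_prop - lp:            # Metropolis accept test
                  theta, lp = prop, lp_prop
              chain[i] = theta
          return chain

      # Demo on a toy 2-D Gaussian "posterior":
      samples = metropolis(lambda t: -0.5 * float(np.sum(t ** 2)), [1.0, -1.0])
      print(samples[1000:].mean(axis=0))   # near [0, 0] after burn-in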

  13. Mixtures of Berkson and classical covariate measurement error in the linear mixed model: Bias analysis and application to a study on ultrafine particles.

    Science.gov (United States)

    Deffner, Veronika; Küchenhoff, Helmut; Breitner, Susanne; Schneider, Alexandra; Cyrys, Josef; Peters, Annette

    2018-03-13

    The ultrafine particle measurements in the Augsburger Umweltstudie, a panel study conducted in Augsburg, Germany, exhibit measurement error from various sources. Measurements from mobile devices show classical, possibly individual-specific, measurement error; Berkson-type error, which may also vary individually, occurs if measurements from fixed monitoring stations are used. The combination of fixed-site and individual exposure measurements results in a mixture of the two error types. We extended existing bias analysis approaches to linear mixed models with a complex error structure including individual-specific error components, autocorrelated errors, and a mixture of classical and Berkson error. Theoretical considerations and simulation results show that autocorrelation may severely change the attenuation of the effect estimates. Furthermore, unbalanced designs and the inclusion of confounding variables influence the degree of attenuation. Bias correction with the method of moments using data with mixture measurement error partially yielded better results compared with the use of incomplete data with classical error. Confidence intervals (CIs) based on the delta method achieved better coverage probabilities than those based on bootstrap samples. Moreover, we present the application of these new methods to heart rate measurements within the Augsburger Umweltstudie: the corrected effect estimates were slightly higher than their naive equivalents. The substantial measurement error of ultrafine particle measurements has little impact on the results. The developed methodology is generally applicable to longitudinal data with measurement error. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
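
    The two error types mix as follows (standard definitions, stated here for orientation; the paper's mixed-model extensions build on these):

      % Classical error: the measurement W scatters around the true exposure X,
      %   W = X + U, \quad U \perp X,
      % so a naive regression slope is attenuated by the reliability ratio
      %   \lambda = \sigma_X^2 / (\sigma_X^2 + \sigma_U^2).
      % Berkson error: the true exposure scatters around the assigned value,
      %   X = W + U, \quad U \perp W,
      % which leaves the slope unbiased in simple linear models but inflates
      % its variance.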

  14. Error and Uncertainty Analysis for Ecological Modeling and Simulation

    Science.gov (United States)

    2001-12-01


  15. Highly accurate fluorogenic DNA sequencing with information theory-based error correction.

    Science.gov (United States)

    Chen, Zitian; Zhou, Wenxiong; Qiao, Shuo; Kang, Li; Duan, Haifeng; Xie, X Sunney; Huang, Yanyi

    2017-12-01

    Eliminating errors in next-generation DNA sequencing has proved challenging. Here we present error-correction code (ECC) sequencing, a method to greatly improve sequencing accuracy by combining fluorogenic sequencing-by-synthesis (SBS) with an information theory-based error-correction algorithm. ECC embeds redundancy in sequencing reads by creating three orthogonal degenerate sequences, generated by alternate dual-base reactions. This is similar to encoding and decoding strategies that have proved effective in detecting and correcting errors in information communication and storage. We show that, when combined with a fluorogenic SBS chemistry with raw accuracy of 98.1%, ECC sequencing provides single-end, error-free sequences up to 200 bp. ECC approaches should enable accurate identification of extremely rare genomic variations in various applications in biology and medicine.

  16. Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative

    National Research Council Canada - National Science Library

    Harris, Daniel M; Westfall, John M; Fernald, Douglas H; Duclos, Christine W; West, David R; Niebauer, Linda; Marr, Linda; Quintela, Javan; Main, Deborah S

    2005-01-01

    … This paper presents a mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for data collection and analysis...

  17. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.; Rosu, D.; Loewenstern, D.; Buco, M. J.; Guo, S.; Lavrado, Rafael Coelho; Gupta, M.; De, P.; Madduri, V.; Singh, J. K.

    2010-01-01

    In this paper, we address the problem of reducing the occurrence of Human Errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21

  18. Ergodic Capacity Analysis of Free-Space Optical Links with Nonzero Boresight Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique; Alouini, Mohamed-Slim; Cheng, Julian

    2015-01-01

    A unified capacity analysis of a free-space optical (FSO) link that accounts for nonzero boresight pointing errors and both types of detection techniques (i.e. intensity modulation/ direct detection as well as heterodyne detection) is addressed

  19. Infant search and object permanence: a meta-analysis of the A-not-B error.

    Science.gov (United States)

    Wellman, H M; Cross, D; Bartsch, K

    1987-01-01

    Research on Piaget's stage 4 object concept has failed to reveal a clear or consistent pattern of results. Piaget found that 8-12-month-old infants would make perseverative errors; his explanation for this phenomenon was that the infant's concept of the object was contextually dependent on his or her actions. Some studies designed to test Piaget's explanation have replicated Piaget's basic finding, yet many have found no preference for the A location or the B location, or an actual preference for the B location. More recently, researchers have attempted to uncover the causes of these results concerning the A-not-B error. Again, however, different studies have yielded different results, and qualitative reviews have failed to yield a consistent explanation for the results of the individual studies. This state of affairs suggests that the phenomenon may simply be too complex to be captured by individual studies varying 1 factor at a time and by reviews based on similar qualitative considerations. Therefore, the current investigation undertook a meta-analysis, a synthesis capturing the quantitative information across the now sizable number of studies. We entered several important factors into the meta-analysis, including the effects of age, the number of A trials, the length of delay between hiding and search, the number of locations, the distances between locations, and the distinctive visual properties of the hiding arrays. Of these, the analysis consistently indicated that age, delay, and number of hiding locations strongly influence infants' search. The pattern of specific findings also yielded new information about infant search. A general characterization of the results is that, at every age, both above-chance and below-chance performance was observed. That is, at each age at least 1 combination of delay and number of locations yielded above-chance A-not-B errors or significant perseverative search. At the same time, at each age at least 1 alternative

  20. Model-free and model-based reward prediction errors in EEG.

    Science.gov (United States)

    Sambrook, Thomas D; Hardwick, Ben; Wills, Andy J; Goslin, Jeremy

    2018-05-24

    Learning theorists posit two reinforcement learning systems: model-free and model-based. Model-based learning incorporates knowledge about structure and contingencies in the world to assign candidate actions with an expected value. Model-free learning is ignorant of the world's structure; instead, actions hold a value based on prior reinforcement, with this value updated by expectancy violation in the form of a reward prediction error. Because they use such different learning mechanisms, it has been previously assumed that model-based and model-free learning are computationally dissociated in the brain. However, recent fMRI evidence suggests that the brain may compute reward prediction errors to both model-free and model-based estimates of value, signalling the possibility that these systems interact. Because of its poor temporal resolution, fMRI risks confounding reward prediction errors with other feedback-related neural activity. In the present study, EEG was used to show the presence of both model-based and model-free reward prediction errors and their place in a temporal sequence of events including state prediction errors and action value updates. This demonstration of model-based prediction errors questions a long-held assumption that model-free and model-based learning are dissociated in the brain. Copyright © 2018 Elsevier Inc. All rights reserved.
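
    For reference, a standard form of the model-free reward prediction error discussed above (the paper's EEG correlates need not assume exactly this estimator):

      % Temporal-difference (Q-learning) reward prediction error:
      %   \delta_t = r_t + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t),
      % with the action value updated as
      %   Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \delta_t.
      % A model-based system instead derives expected values from a learned
      % transition model, so its prediction errors reflect structure in the
      % environment rather than cached reinforcement alone.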

  1. Error evaluation method for material accountancy measurement. Evaluation of random and systematic errors based on material accountancy data

    International Nuclear Information System (INIS)

    Nidaira, Kazuo

    2008-01-01

    International Target Values (ITV) show random and systematic measurement uncertainty components as a reference for routinely achievable measurement quality in accountancy measurement. The measurement uncertainty, called error henceforth, needs to be periodically evaluated and checked against the ITV for consistency, as the error varies according to measurement methods, instruments, operators, certified reference samples, frequency of calibration, and so on. In this paper an error evaluation method was developed with focus on (1) specifying the error calculation model clearly, (2) always obtaining positive random and systematic error variances, (3) obtaining the probability density distribution of an error variance, and (4) confirming the evaluation method by simulation. In addition, the method was demonstrated by applying it to real data. (author)
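
    A common form of the error calculation model referred to above is sketched below; the paper's exact parameterization may differ.

      % Replicate j of item i, with a relative systematic error b shared within a
      % calibration period and independent random errors \varepsilon_{ij}:
      %   x_{ij} = \mu_i (1 + b + \varepsilon_{ij}),
      %   b \sim N(0, \sigma_s^2), \quad \varepsilon_{ij} \sim N(0, \sigma_r^2),
      % so that Var(x_{ij}/\mu_i) = \sigma_s^2 + \sigma_r^2, and the estimated
      % \sigma_s and \sigma_r are compared against the ITV components.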

  2. A posteriori error analysis of multiscale operator decomposition methods for multiphysics models

    International Nuclear Information System (INIS)

    Estep, D; Carey, V; Tavener, S; Ginting, V; Wildey, T

    2008-01-01

    Multiphysics, multiscale models present significant challenges both in computing accurate solutions and in estimating the error in information computed from numerical solutions. In this paper, we describe recent advances in extending the techniques of a posteriori error analysis to multiscale operator decomposition solution methods. While the particulars of the analysis vary considerably with the problem, several key ideas underlie a general approach being developed to treat operator decomposition multiscale methods. We explain these ideas in the context of three specific examples

  3. Application of grey incidence analysis to connection between human errors and root cause

    International Nuclear Information System (INIS)

    Ren Yinxiang; Yu Ren; Zhou Gang; Chen Dengke

    2008-01-01

    By introducing grey incidence analysis, the relative importance of root causes' impact on human errors was investigated. On the basis of WANO statistical data and grey incidence analysis, lack of alternate examination, poor basic operation, shortage of theoretical knowledge, lax organization and management, and deficient regulations are the root causes with the most important influence on human errors. Finally, approaches to reducing human errors were discussed. (authors)

  4. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  5. Error analysis of supersonic air-to-air ejector schlieren pictures

    Directory of Open Access Journals (Sweden)

    Kolář J.

    2013-04-01

    The scope of this article is a general analysis of the errors and uncertainties that can arise from matching CFD results to schlieren pictures. The analysis is based on classical analytical equations. These are first evaluated under the presumption of a constant density gradient along the ray course; in other words, the deflection of the light ray caused by the density gradient is negligible compared with the cross size of the constant-gradient area. It is the aim of this work to determine whether this presumption is applicable in the case of a supersonic air-to-air ejector. Colour and black-and-white schlieren pictures were acquired and compared with CFD results. Simulations covered various eddy viscosities. Computed pressure gradients are transformed into deflection angles and further into ray displacement. The resulting computed light-ray deflection is matched to the experimental results

  6. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.

  7. Manufacturing error sensitivity analysis and optimal design method of cable-network antenna structures

    Science.gov (United States)

    Zong, Yali; Hu, Naigang; Duan, Baoyan; Yang, Guigeng; Cao, Hongjun; Xu, Wanye

    2016-03-01

    Inevitable manufacturing errors and inconsistency between assumed and actual boundary conditions can affect the shape precision and cable tensions of a cable-network antenna, and even result in failure of the structure in service. In this paper, an analytical sensitivity analysis method for the shape precision and cable tensions with respect to the parameters carrying uncertainty was studied. Based on the sensitivity analysis, an optimal design procedure was proposed to alleviate the effects of the parameters that carry uncertainty. The validity of the calculated sensitivities is examined against those computed by a finite difference method. Comparison with a traditional design method shows that the presented design procedure can remarkably reduce the influence of the uncertainties on the antenna performance. Moreover, the results suggest that especially slender front net cables, thick tension ties, relatively slender boundary cables and a high tension level can improve the ability of cable-network antenna structures to resist the effects of the uncertainties on the antenna performance.

  8. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    Science.gov (United States)

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.

  9. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2015-01-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded to account for the cellular network geometry, they mostly

  10. Opportunistic error correction for OFDM-based DVB systems

    NARCIS (Netherlands)

    Shao, X.; Slump, Cornelis H.

    2013-01-01

    DVB-T2 (second-generation terrestrial digital video broadcasting) employs LDPC (Low Density Parity Check) codes combined with BCH (Bose-Chaudhuri-Hocquenghem) codes, which have better performance than the convolutional and Reed-Solomon codes used in other OFDM-based DVB systems. However,

  11. Implementation of an RFID-Based Sequencing-Error-Proofing System for Automotive Manufacturing Logistics

    Directory of Open Access Journals (Sweden)

    Yong-Shin Kang

    2018-01-01

    Serialized tracing provides the ability to track and trace the lifecycle of products and parts. Unlike barcodes, radio frequency identification (RFID), an important building block for the internet of things (IoT), does not require a line of sight and has the advantages of recognizing many objects simultaneously and rapidly and of storing more information than barcodes. RFID has therefore been used in a variety of application domains such as logistics, distribution, and manufacturing, significantly improving traceability and process efficiency. In this study, we applied RFID to improve the just-in-sequence operation of an automotive inbound logistics process. First, we implemented an RFID-based visibility system for real-time traceability and control of part supply from the production lines of suppliers to the assembly line of a car manufacturer. Second, we developed an RFID-based sequence-error-proofing system to avoid accidental line stops due to incorrect part sequencing. The whole system has been successfully installed in a rear-axle inbound logistics process of GM Korea. We achieved a significant amount of cost savings, especially due to the prevention of sequencing errors and part shortages and the reduction of manual operations. A thorough cost-benefit analysis demonstrates the clear economic feasibility of using RFID technologies for just-in-sequence inbound logistics in an automobile manufacturing environment.

  12. A Parallel Decoding Algorithm for Short Polar Codes Based on Error Checking and Correcting

    Science.gov (United States)

    Pan, Xiaofei; Pan, Kegang; Ye, Zhan; Gong, Chao

    2014-01-01

    We propose a parallel decoding algorithm based on error checking and correcting to improve the performance of the short polar codes. In order to enhance the error-correcting capacity of the decoding algorithm, we first derive the error-checking equations generated on the basis of the frozen nodes, and then we introduce the method to check the errors in the input nodes of the decoder by the solutions of these equations. In order to further correct those checked errors, we adopt the method of modifying the probability messages of the error nodes with constant values according to the maximization principle. Due to the existence of multiple solutions of the error-checking equations, we formulate a CRC-aided optimization problem of finding the optimal solution with three different target functions, so as to improve the accuracy of error checking. Besides, in order to increase the throughput of decoding, we use a parallel method based on the decoding tree to calculate probability messages of all the nodes in the decoder. Numerical results show that the proposed decoding algorithm achieves better performance than that of some existing decoding algorithms with the same code length. PMID:25540813

  13. Subdivision Error Analysis and Compensation for Photoelectric Angle Encoder in a Telescope Control System

    Directory of Open Access Journals (Sweden)

    Yanrui Su

    2015-01-01

    As the position sensor, the photoelectric angle encoder affects the accuracy and stability of a telescope control system (TCS). A TCS-based subdivision error compensation method for the encoder is proposed. Six types of subdivision error sources are first extracted through mathematical expressions of the subdivision signals. Then the period-length relationships between subdivision signals and subdivision errors are deduced, and an error compensation algorithm utilizing only the shaft position of the TCS is put forward, along with two control models: in Model I the algorithm applies only to the speed loop of the TCS, and in Model II it applies to both the speed loop and the position loop. In the context of an actual project, the elevation jittering phenomenon of the telescope is discussed to decide the necessity of DC-type subdivision error compensation. Low-speed elevation performance before and after error compensation is compared, leading to the choice of Model II. In contrast to the original performance, the maximum position error of the elevation with DC subdivision error compensation is reduced by approximately 47.9%, from 1.42″ to 0.74″, and the elevation jitters decrease greatly. This method can compensate the encoder subdivision errors effectively and improve the stability of the TCS.
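
    The role of the subdivision signals can be illustrated with a toy simulation: the fine angle within one signal period comes from atan2 of the quadrature pair, and a DC offset on either channel (one of the error sources mentioned above) produces a periodic subdivision error. The offset values below are assumptions for illustration, not values from the paper.

      import numpy as np

      theta = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
      sin_sig = np.sin(theta) + 0.02    # assumed 2% DC offset on the sine channel
      cos_sig = np.cos(theta) - 0.01    # assumed -1% DC offset on the cosine channel

      # Recover the fine angle from the (distorted) quadrature pair.
      theta_meas = np.arctan2(sin_sig, cos_sig) % (2.0 * np.pi)
      # Wrap the difference into [-pi, pi) to obtain the subdivision error.
      err = (theta_meas - theta + np.pi) % (2.0 * np.pi) - np.pi
      print(f"peak subdivision error: {np.degrees(np.abs(err).max()):.3f} deg per period")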

  14. Decimal Fraction Arithmetic: Logical Error Analysis and Its Validation.

    Science.gov (United States)

    Standiford, Sally N.; And Others

    This report illustrates procedures of item construction for addition and subtraction examples involving decimal fractions. Using a procedural network of skills required to solve such examples, an item characteristic matrix of skills analysis was developed to describe the characteristics of the content domain by projected student difficulties. Then…

  15. Errors in instrumental neutron activation analysis caused by matrix absorption

    International Nuclear Information System (INIS)

    Croudace, I.W.

    1979-01-01

    Instrumental neutron activation analysis of the geochemically important rare earth elements, together with Ta, Hf and U, involves energies below 150 keV, where absorption of radiation by the sample becomes increasingly important. Determinations of the total mass absorption coefficients have been made. (C.F.)

  16. Study of maintenance skill-work based on PSFs and error category

    International Nuclear Information System (INIS)

    Nagata, Manabu; Yukimachi, Takeo; Hasegawa, Toshio

    2001-01-01

    In this investigation, the skill types involved in skill-work are clarified according to human error data on maintenance work at nuclear power plants. First, the causal PSFs of the errors are extracted from the data, and some of the skill types are characterized by the results of factor analysis. The skill-work model is then reexamined on the basis of the contents of the human error data and the error category corresponding to the data. Furthermore, integrating the tendency of the causal PSFs and the actual error category for each skill type, an extended skill-work model was developed with a flow-chart representation as a tentative stage of the investigation. (author)

  17. Alternatives to accuracy and bias metrics based on percentage errors for radiation belt modeling applications

    Energy Technology Data Exchange (ETDEWEB)

    Morley, Steven Karl [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-07-01

    This report reviews existing literature describing forecast accuracy metrics, concentrating on those based on relative errors and percentage errors. We then review how the most common of these metrics, the mean absolute percentage error (MAPE), has been applied in recent radiation belt modeling literature. Finally, we describe metrics based on the ratios of predicted to observed values (the accuracy ratio) that address the drawbacks inherent in using MAPE. Specifically, we define and recommend the median log accuracy ratio as a measure of bias and the median symmetric accuracy as a measure of accuracy.
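
    The three metrics discussed can be written compactly as below (for strictly positive observed and predicted values; definitions as commonly stated in this line of work):

      import numpy as np

      def mape(obs, pred):
          # Mean absolute percentage error: undefined at obs == 0 and asymmetric
          # between over- and under-prediction, the drawbacks noted above.
          return 100.0 * np.mean(np.abs((pred - obs) / obs))

      def median_log_accuracy_ratio(obs, pred):
          # Median of log(pred/obs): a robust bias measure, 0 means unbiased.
          return np.median(np.log(pred / obs))

      def median_symmetric_accuracy(obs, pred):
          # 100 * (exp(median |log(pred/obs)|) - 1): symmetric, percentage-like.
          return 100.0 * (np.exp(np.median(np.abs(np.log(pred / obs)))) - 1.0)

      obs = np.array([1.0, 2.0, 5.0, 10.0])
      pred = np.array([1.2, 1.8, 6.0, 9.0])
      print(mape(obs, pred), median_log_accuracy_ratio(obs, pred),
            median_symmetric_accuracy(obs, pred))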

  18. The recovery factors analysis of the human errors for research reactors

    International Nuclear Information System (INIS)

    Farcasiu, M.; Nitoi, M.; Apostol, M.; Turcu, I.; Florescu, Ghe.

    2006-01-01

    The results of many Probabilistic Safety Assessment (PSA) studies show a very significant contribution of human errors to the unavailability of systems in nuclear installations. The treatment of human interactions is considered one of the major limitations in the context of PSA. To identify those human actions that can have an effect on system reliability or availability, applying Human Reliability Analysis (HRA) is necessary. The analysis of recovery factors for human actions is an important step in HRA. This paper presents how human error probabilities (HEP) can be reduced using those elements that have the capacity to recover a human error. Recovery factor modeling aims to identify error-likely situations or situations that lead toward development of an accident. This analysis is performed with the THERP method. The necessary information was obtained from the operating experience of the TRIGA research reactor of INR Pitesti. The required data were obtained from generic databases. (authors)

  19. Practical error analysis of the quasi-steady-state approximation ...

    African Journals Online (AJOL)

    It has become associated with singular perturbation theory [1], which provides a means of assessing the accuracy and validity of the QSSA, but this involves rather complicated mathematics. In contrast, it is shown here how the necessary safeguards against misuse can be based on a simpler intuitive approach to singular ...

  20. Heartbeat-based error diagnosis framework for distributed embedded systems

    Science.gov (United States)

    Mishra, Swagat; Khilar, Pabitra Mohan

    2012-01-01

    Distributed embedded systems have significant applications in the automobile industry as steer-by-wire, fly-by-wire and brake-by-wire systems. In this paper, we provide a general framework for fault detection in a distributed embedded real-time system. We use heartbeat monitoring, checkpointing and model-based redundancy to design a scalable framework that takes care of task scheduling, temperature control and diagnosis of faulty nodes in a distributed embedded system. This helps in diagnosing and shutting down faulty actuators before the system becomes unsafe. The framework is designed and tested using a new simulation model consisting of virtual nodes working on a message passing system.
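
    A minimal sketch of the heartbeat-monitoring idea, with hypothetical node names and timeout; the paper's framework additionally covers checkpointing, scheduling and model-based redundancy, none of which is shown here.

      import threading
      import time

      class HeartbeatMonitor:
          """Flags a node as faulty when its heartbeat is older than `timeout` seconds."""

          def __init__(self, nodes, timeout=0.5):
              self.timeout = timeout
              self.last_seen = {n: time.monotonic() for n in nodes}
              self.lock = threading.Lock()

          def beat(self, node):
              # Called whenever a heartbeat message arrives from a node.
              with self.lock:
                  self.last_seen[node] = time.monotonic()

          def faulty_nodes(self):
              now = time.monotonic()
              with self.lock:
                  return [n for n, t in self.last_seen.items() if now - t > self.timeout]

      monitor = HeartbeatMonitor(["brake_ecu", "steer_ecu"], timeout=0.5)
      time.sleep(0.3)
      monitor.beat("brake_ecu")         # the brake node keeps reporting
      time.sleep(0.3)
      print(monitor.faulty_nodes())     # ['steer_ecu']: 0.6 s without a heartbeat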

  1. Inversion, error analysis, and validation of GPS/MET occultation data

    Directory of Open Access Journals (Sweden)

    A. K. Steiner

    The global positioning system meteorology (GPS/MET) experiment was the first practical demonstration of global navigation satellite system (GNSS)-based active limb sounding employing the radio occultation technique. This method measures, as principal observable and with millimetric accuracy, the excess phase path (relative to propagation in vacuum) of GNSS-transmitted radio waves caused by refraction during passage through the Earth's neutral atmosphere and ionosphere in limb geometry. It shows great potential utility for weather and climate system studies in providing a unique combination of global coverage, high vertical resolution and accuracy, long-term stability, and all-weather capability. We first describe our GPS/MET data processing scheme from excess phases via bending angles to the neutral atmospheric parameters refractivity, density, pressure and temperature. Special emphasis is given to ionospheric correction methodology and the inversion of bending angles to refractivities, where we introduce a matrix inversion technique (instead of the usual integral inversion). The matrix technique is shown to lead to identical results as integral inversion but is more directly extendable to inversion by optimal estimation. The quality of GPS/MET-derived profiles is analyzed with an error estimation analysis employing a Monte Carlo technique. We consider statistical errors together with systematic errors due to upper-boundary initialization of the retrieval by a priori bending angles. Perfect initialization and properly smoothed statistical errors allow for better than 1 K temperature retrieval accuracy up to the stratopause. No initialization and statistical errors yield better than 1 K accuracy up to 30 km but less than 3 K accuracy above 40 km. Given imperfect initialization, biases >2 K propagate down to below 30 km height in unfavorable realistic cases. Furthermore, results of a statistical validation of GPS/MET profiles through comparison
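
    The usual integral inversion that the paper's matrix technique replaces is the Abel transform pair (a standard radio-occultation form, quoted here for context):

      % With impact parameter a and refractive index n(x), the bending angle is
      %   \alpha(a) = -2a \int_a^{\infty} \frac{d\ln n/dx}{\sqrt{x^2 - a^2}}\, dx,
      % which inverts to
      %   n(a) = \exp\!\left( \frac{1}{\pi} \int_a^{\infty}
      %          \frac{\alpha(x)}{\sqrt{x^2 - a^2}}\, dx \right).
      % Refractivity N = 10^6 (n - 1) then yields density, pressure and
      % temperature via the refractivity relation and hydrostatic balance.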

  2. A theoretical basis for the analysis of multiversion software subject to coincident errors

    Science.gov (United States)

    Eckhardt, D. E., Jr.; Lee, L. D.

    1985-01-01

    Fundamental to the development of redundant software techniques (known as fault-tolerant software) is an understanding of the impact of multiple joint occurrences of errors, referred to here as coincident errors. A theoretical basis for the study of redundant software is developed which: (1) provides a probabilistic framework for empirically evaluating the effectiveness of a general multiversion strategy when component versions are subject to coincident errors, and (2) permits an analytical study of the effects of these errors. An intensity function, called the intensity of coincident errors, has a central role in this analysis. This function describes the propensity of programmers to introduce design faults in such a way that software components fail together when executing in the application environment. A condition under which a multiversion system is a better strategy than relying on a single version is given.
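
    The role of the intensity function can be made concrete with a small simulation. In the Python sketch below, the intensity function theta(x) and the 2-out-of-3 majority vote are invented for illustration; versions fail independently conditional on the input, yet failures coincide overall because theta varies across the input space:

        import random

        def theta(x):
            # Hypothetical intensity of coincident errors: most inputs are easy,
            # but a small region of the input space is hard for every version.
            return 0.9 if x < 0.02 else 0.001

        def majority_fails(n_versions=3, trials=200_000):
            failures = 0
            for _ in range(trials):
                x = random.random()                     # input drawn from the usage profile
                fails = sum(random.random() < theta(x)  # versions fail independently given x
                            for _ in range(n_versions))
                failures += fails > n_versions // 2     # 2-out-of-3 vote is wrong
            return failures / trials

        single = 0.02 * 0.9 + 0.98 * 0.001              # failure prob. of one version
        print(f"single version: {single:.5f}, 3-version majority: {majority_fails():.5f}")

    With this particular theta, majority voting barely improves on a single version, illustrating the paper's point that coincident errors can erode the benefit of redundancy.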

  3. Phonological analysis of substitution errors of patients with apraxia of speech

    Directory of Open Access Journals (Sweden)

    Maysa Luchesi Cera

    Full Text Available Abstract The literature on apraxia of speech describes the types and characteristics of phonological errors in this disorder. In general, the phonemes affected by errors are described, but the distinctive features involved have not yet been investigated. Objective: To analyze the features involved in substitution errors produced by Brazilian-Portuguese speakers with apraxia of speech. Methods: 20 adults with apraxia of speech were assessed. Phonological analysis of the distinctive features involved in substitution-type errors was carried out using the protocol for the evaluation of verbal and non-verbal apraxia. Results: The most affected features were: voiced, continuant, high, anterior, coronal, posterior. Moreover, the mean number of substitutions from marked to unmarked features was statistically greater than that from unmarked to marked features. Conclusions: This study contributes toward a better characterization of the phonological errors found in apraxia of speech, thereby helping the diagnosis of communication disorders and the selection criteria of phonemes for rehabilitation in these patients.

  4. Analysis of errors during medical and computerized diagnostics of spherical lung neoplasms

    International Nuclear Information System (INIS)

    Pozmogov, A.I.; Petruk, D.A.

    1985-01-01

    Reasons for errors in the medical and computerized diagnostics of spherical lung neoplasms are studied based on 212 case records and clinico-roentgenological data; this should promote improvement of their diagnosis

  5. Adjoint-Based a Posteriori Error Estimation for Coupled Time-Dependent Systems

    KAUST Repository

    Asner, Liya; Tavener, Simon; Kay, David

    2012-01-01

    We consider time-dependent parabolic problems coupled across a common interface which we formulate using a Lagrange multiplier construction and solve by applying a monolithic solution technique. We derive an adjoint-based a posteriori error representation for a quantity of interest given by a linear functional of the solution. We establish the accuracy of our error representation formula through numerical experimentation and investigate the effect of error in the adjoint solution. Crucially, the error representation affords a distinction between temporal and spatial errors and can be used as a basis for a blockwise time-space refinement strategy. Numerical tests illustrate the efficacy of the refinement strategy by capturing the distinctive behavior of a localized traveling wave solution. The saddle point systems considered here are equivalent to those arising in the mortar finite element technique for parabolic problems. © 2012 Society for Industrial and Applied Mathematics.

  6. Improvement of the physically-based groundwater model simulations through complementary correction of its errors

    Directory of Open Access Journals (Sweden)

    Jorge Mauricio Reyes Alcalde

    2017-04-01

    Full Text Available Physically-based groundwater models (PBM), such as MODFLOW, are used as groundwater resource evaluation tools on the assumption that the produced differences (residuals or errors) are white noise. In fact, however, these numerical simulations usually show not only random errors but also systematic errors. In this work, a numerical procedure was developed to deal with PBM systematic errors by studying their structure in order to model their behavior and correct the results by external and complementary means, through a framework called the Complementary Correction Model (CCM). Applying the CCM to a PBM shows a decrease in local biases, a better distribution of errors, and reductions in their temporal and spatial correlations, with a 73% reduction in global RMSN over the original PBM. This methodology appears to be an interesting way to update a PBM while avoiding the work and costs of interfering with its internal structure.
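
    Reduced to its essentials, the CCM idea is to fit an external model to the PBM residuals and add the predicted systematic part back to the simulation. A minimal Python sketch under strong simplifying assumptions (synthetic data and a plain linear residual model; the actual CCM is more elaborate):

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(200.0)

        observed = 50.0 + rng.normal(0.0, 0.2, t.size)          # synthetic "measured" heads
        bias = 1.0 + 0.01 * t                                   # systematic PBM error
        simulated = observed - bias + rng.normal(0.0, 0.2, t.size)

        residuals = observed - simulated                        # structure to be modeled
        coeffs = np.polyfit(t, residuals, deg=1)                # complementary model (linear here)
        corrected = simulated + np.polyval(coeffs, t)           # CCM-style external correction

        rmse = lambda x: np.sqrt(np.mean((observed - x) ** 2))
        print(f"RMSE before: {rmse(simulated):.3f}, after: {rmse(corrected):.3f}")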

  7. A new anisotropic mesh adaptation method based upon hierarchical a posteriori error estimates

    Science.gov (United States)

    Huang, Weizhang; Kamenski, Lennard; Lang, Jens

    2010-03-01

    A new anisotropic mesh adaptation strategy for finite element solution of elliptic differential equations is presented. It generates anisotropic adaptive meshes as quasi-uniform ones in some metric space, with the metric tensor being computed based on hierarchical a posteriori error estimates. A global hierarchical error estimate is employed in this study to obtain reliable directional information of the solution. Instead of solving the global error problem exactly, which is costly in general, we solve it iteratively using the symmetric Gauß-Seidel method. Numerical results show that a few GS iterations are sufficient for obtaining a reasonably good approximation to the error for use in anisotropic mesh adaptation. The new method is compared with several strategies using local error estimators or recovered Hessians. Numerical results are presented for a selection of test examples and a mathematical model for heat conduction in a thermal battery with large orthotropic jumps in the material coefficients.
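
    The iterative step mentioned above, solving the global error problem approximately rather than exactly, can be sketched as a few symmetric Gauss-Seidel sweeps (a generic dense-matrix illustration in Python; real implementations operate on the sparse finite element system):

        import numpy as np

        def sym_gauss_seidel(A, b, x, sweeps=3):
            """A few symmetric Gauss-Seidel sweeps on A x = b (forward then backward)."""
            n = len(b)
            for _ in range(sweeps):
                for i in range(n):                      # forward sweep
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
                for i in reversed(range(n)):            # backward sweep
                    x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i+1:] @ x[i+1:]) / A[i, i]
            return x

        # 1D Poisson matrix as a stand-in for the global hierarchical error problem
        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        b = np.ones(n)
        x = sym_gauss_seidel(A, b, np.zeros(n))
        print(f"residual norm after 3 sweeps: {np.linalg.norm(b - A @ x):.3f}")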

  8. Disasters of endoscopic surgery and how to avoid them: error analysis.

    Science.gov (United States)

    Troidl, H

    1999-08-01

    For every innovation there are two sides to consider. For endoscopic surgery the positive side is more comfort for the patient, and the negative side is new complications, even disasters, such as injuries to organs (e.g., the bowel), vessels, and the common bile duct. These disasters are rare and seldom reported in the scientific world, as at conferences, at symposiums, and in publications. Today there are many methods for testing an innovation (controlled clinical trials, consensus conferences, audits, and confidential inquiries). Reporting "complications," however, does not help to avoid them. We need real methods for avoiding negative failures. Failure analysis is the method of choice in industry. If an airplane crashes, error analysis starts immediately. Humans make errors, and making errors means punishment. Failure analysis means rigorously and objectively investigating a clinical situation to find clinically relevant information for avoiding these negative events in the future. Error analysis has four important steps: (1) What was the clinical situation? (2) What happened? (3) Most important: Why did it happen? (4) How do we avoid the negative event or disaster in the future? Error analysis has decisive advantages. It is easy to perform; it supplies clinically relevant information to help avoid such events; and there is no need for money. It can be done everywhere, and the information is available in a short time. The other side of the coin is that error analysis is of course retrospective, it may not be objective, and, most important, it will probably have legal consequences. To be more effective in medicine and surgery we must handle our errors using a different approach. According to Sir Karl Popper: "The situation is that we have to learn from our errors. To cover up failure is therefore the biggest intellectual sin."

  9. Use of error files in uncertainty analysis and data adjustment

    International Nuclear Information System (INIS)

    Chestnutt, M.M.; McCracken, A.K.

    1979-01-01

    Some results are given from uncertainty analyses on Pressurized Water Reactor (PWR) and Fast Reactor Theoretical Benchmarks. Upper limit estimates of calculated quantities are shown to be significantly reduced by the use of ENDF/B data covariance files and recently published few-group covariance matrices. Some problems in the analysis of single-material benchmark experiments are discussed with reference to the Winfrith iron benchmark experiment. Particular attention is given to the difficulty of making use of very extensive measurements which are likely to be a feature of this type of experiment. Preliminary results of an adjustment in iron are shown
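
    The core computation in covariance-file-based uncertainty analysis is the "sandwich rule", var = s^T C s, where s is the sensitivity vector of the calculated quantity and C the covariance matrix taken from the error files. A small Python sketch with invented numbers (not from the Winfrith benchmark):

        import numpy as np

        # Hypothetical sensitivities of a response to three group cross sections
        s = np.array([0.8, -0.3, 0.1])

        # Hypothetical relative covariance matrix from an evaluated data file
        C = np.array([[0.04, 0.01, 0.00],
                      [0.01, 0.09, 0.02],
                      [0.00, 0.02, 0.16]])

        variance = s @ C @ s          # sandwich rule: var = s^T C s
        print(f"relative std. dev. of the response: {np.sqrt(variance):.3f}")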

  10. Bayesian error analysis model for reconstructing transcriptional regulatory networks

    OpenAIRE

    Sun, Ning; Carroll, Raymond J.; Zhao, Hongyu

    2006-01-01

    Transcription regulation is a fundamental biological process, and extensive efforts have been made to dissect its mechanisms through direct biological experiments and regulation modeling based on physical–chemical principles and mathematical formulations. Despite these efforts, transcription regulation is not yet well understood because of its complexity and the limitations of biological experiments. Recent advances in high throughput technologies have provided substantial amounts and diverse typ...

  11. Mistakes, Errors and Foul-Ups: Practice-Based Evidence for Evidence Based Practice

    Directory of Open Access Journals (Sweden)

    Mark Turner

    2016-11-01

    Full Text Available In human medicine, the management of care to ensure safety for the service-user constitutes an important element of the patient 'journey.' The name given to this discipline is patient safety. It is founded upon those elements of good medical practice which help avoid or mitigate human error. Investigations in the U.S. first highlighted the alarming extent of medical error: Brennan et al. (1991) concluded that in the state of New York, the overall rate of adverse events was approximately 4% for hospitalised patients, which equated to over 13,000 deaths a year. Doctors looked to other safety-critical industries, and aviation in particular (Reason, 1995), to address this phenomenon: there is now a wealth of research on the impact of various safety initiatives on measurable rates of harm. The World Health Organisation's 'Safe Surgery Saves Lives' initiative - a campaign that advocates the use of a surgical checklist to standardise aspects of peri-operative care - is one example of aviation methodology successfully employed in a clinical setting (van Klei et al., 2012). The critical importance of effective communication, leadership and situational awareness has also been discussed at length in the human patient safety literature. Objectives: Veterinary patient safety is an analogous discipline, and researchers have attempted to understand more about the topic of veterinary medical error. However, the evidence base for veterinary patient safety is sparse. This presentation aims to summarise the evidence to date and highlight the benefits in practice of an emerging subject. Method: A search of the term 'veterinary patient safety' on the PubMed database from 1990 to 2016 was performed. Findings: 15 articles were identified as contributing to the veterinary patient safety literature. Outcome: The available literature has addressed a number of areas. The use of checklists in a clinical setting has been proven to reduce the incidence of specific undesirable

  12. Thermal error analysis and compensation for digital image/volume correlation

    Science.gov (United States)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and X-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effects or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermally induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high-accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, so originality is not claimed here. Instead, this paper aims to give a comprehensive overview of, and more insight into, our work on thermal error analysis and compensation for DIC/DVC measurements.
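
    The recommended reference sample compensation approach can be sketched in a few lines of Python: any apparent displacement measured on a nominally stationary reference sample is attributed to thermal drift of the imaging system and subtracted from the test sample's measurement (schematic only; the arrays below are invented):

        import numpy as np

        def compensate_thermal_error(u_sample, u_reference):
            """Subtract apparent motion of a stationary reference from DIC displacements.

            u_sample, u_reference: per-frame displacement arrays; any displacement
            seen on the reference sample is treated as thermal drift.
            """
            return u_sample - u_reference

        # Invented data: slow thermal drift superimposed on the true motion
        frames = np.arange(100.0)
        true_motion = 0.05 * frames
        drift = 0.01 * frames
        u_sample = true_motion + drift
        u_reference = drift.copy()

        print(np.allclose(compensate_thermal_error(u_sample, u_reference), true_motion))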

  13. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  14. ANALYSIS OF RELATIONSHIPS BETWEEN THE LEVEL OF ERRORS IN LEG AND MONOFIN MOVEMENT AND STROKE PARAMETERS IN MONOFIN SWIMMING

    Directory of Open Access Journals (Sweden)

    Marek Rejman

    2013-03-01

    Full Text Available The aim of this study was to analyze the error structure of propulsive movements with regard to its influence on monofin swimming speed. Randomly chosen cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method was employed to estimate the errors committed in the angular displacement of the feet and the monofin segments. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. An individual distribution of stroke parameters, consisting of optimally increasing stroke frequency to the highest possible level that still enables stabilization of stroke length, leads to the minimization of errors. Identification of key elements of the stroke structure, based on analysis of the errors committed, should aid in improving monofin swimming technique

  15. Soft error modeling and analysis of the Neutron Intercepting Silicon Chip (NISC)

    International Nuclear Information System (INIS)

    Celik, Cihangir; Unlue, Kenan; Narayanan, Vijaykrishnan; Irwin, Mary J.

    2011-01-01

    Soft errors are transient errors caused by excess charge carriers induced primarily by external radiation in semiconductor devices. The soft error phenomenon can be used to detect thermal neutrons with a neutron monitoring/detection system by enhancing soft error occurrences in memory devices. In this way, any semiconductor memory device can be converted into a neutron detection system. Such a device is being developed at The Pennsylvania State University and is named the Neutron Intercepting Silicon Chip (NISC). The NISC is envisioned as a miniature, power-efficient, active/passive-operation neutron sensor/detector system. NISC aims to achieve this goal by introducing 10B-enriched borophosphosilicate glass (BPSG) insulation layers in the semiconductor memories. In order to model and analyze the NISC, an analysis tool using Geant4 as the transport and tracking engine was developed for the simulation of charged particle interactions in the semiconductor memory model, named the NISC Soft Error Analysis Tool (NISCSAT). A simple model with a 10B-enriched layer on top of a lumped silicon region was developed to represent the semiconductor memory node. Soft error probability calculations were performed via NISCSAT with both single-node and array configurations to investigate device scaling by using different node dimensions in the model. Mono-energetic, mono-directional thermal and fast neutrons were used as the neutron sources. The soft error contribution due to the BPSG layer was also investigated with different 10B contents, and the results are presented in this paper.

  16. Development of safety analysis and constraint detection techniques for process interaction errors

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Tsai, Shang-Lin; Tseng, Wan-Hui

    2011-01-01

    Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, which may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation by a logic process. We call them 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors by checking conflicting pre-conditions and post-conditions among interacting processes. Thus, far-fetched, but highly risky, interaction scenarios involving interpretation errors can be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.

  17. Development of safety analysis and constraint detection techniques for process interaction errors

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Chin-Feng, E-mail: csfanc@saturn.yzu.edu.tw [Computer Science and Engineering Dept., Yuan-Ze University, Taiwan (China); Tsai, Shang-Lin; Tseng, Wan-Hui [Computer Science and Engineering Dept., Yuan-Ze University, Taiwan (China)

    2011-02-15

    Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, which may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation by a logic process. We call them 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors by checking conflicting pre-conditions and post-conditions among interacting processes. Thus, far-fetched, but highly risky, interaction scenarios involving interpretation errors can be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.

  18. Human error prediction and countermeasures based on CREAM in spent nuclear fuel (SNF) transportation

    International Nuclear Information System (INIS)

    Kim, Jae San

    2007-02-01

    Since the 1980s, in order to secure storage capacity for spent nuclear fuel (SNF) at NPPs, SNF assemblies have been transported on-site from one unit to a nearby unit. In the future, however, the amount of spent fuel will approach the capacity of the areas used, and some of these SNFs will have to be transported to an off-site spent fuel repository. Most SNF materials used at NPPs will be transported by general cargo ship from abroad, and these SNFs will be stored in an interim storage facility. In the process of transporting SNF, human interactions will involve inspecting and preparing the cask and spent fuel, loading the cask onto the vehicle or ship, transferring the cask, and storing or monitoring the cask. The transportation of SNF involves a number of activities that depend on reliable human performance. In the case of cask transport, human errors may include spent fuel bundle misidentification or cask transport accidents, among others. Reviews of accident events in transporting Radioactive Material (RAM) throughout the world indicate that human error is the major cause of more than 65% of significant events. For the safety of SNF transportation, it is very important to predict human errors and to find ways to minimize them. This study examines the effects of human factors on the safety of transporting spent nuclear fuel (SNF). It predicts and identifies the possible human errors in the SNF transport process (loading, transfer and storage of the SNF). After evaluating the human error modes in each transport process, countermeasures to minimize human error are deduced. The human errors in SNF transportation were analyzed using Hollnagel's Cognitive Reliability and Error Analysis Method (CREAM). After determining the important factors for each process, countermeasures to minimize human error are provided in three parts: system design, operational environment, and human ability

  19. A Human Reliability Analysis of Post- Accident Human Errors in the Low Power and Shutdown PSA of KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Daeil; Kim, J. H.; Jang, S. C

    2007-03-15

    Korea Atomic Energy Research Institute, using the ANS low power and shutdown (LPSD) probabilistic risk assessment (PRA) Standard, evaluated the LPSD PSA model of the KSNP, Yonggwang Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of post-accident human errors in the LPSD PSA model for the KSNP showed that 10 of the 19 items of supporting requirements in the ANS PRA Standard needed improvement. Thus, we newly carried out an HRA for post-accident human errors in the LPSD PSA model for the KSNP. The following tasks are the improvements over the previous HRA of post-accident human errors in the LPSD PSA model for the KSNP: interviews with operators and site visits for the interpretation of the procedures, the modeling of operator actions, and the quantification of human errors; application of a limiting value to the combined post-accident human errors; and documentation of all the input and bases for the detailed quantifications and the dependency analysis using the quantification sheets. The assessment of the new HRA results for post-accident human errors using the ANS LPSD PRA Standard shows that over 80% of the items of its supporting requirements for post-accident human errors were graded as Category II. The number of human errors re-estimated using the LPSD Korea Standard HRA method is 385. Among them, the number of individual post-accident human errors is 253, and the number of dependent post-accident human errors is 135. The quantification results of the LPSD PSA model for the KSNP with the new HEPs show that the core damage frequency (CDF) is increased by 5.1% compared with the previous baseline CDF. It is expected that these study results will be greatly helpful in improving the PSA quality of domestic nuclear power plants, because they have sufficient PSA quality to meet Category II of the supporting requirements for the post

  20. A Human Reliability Analysis of Post- Accident Human Errors in the Low Power and Shutdown PSA of KSNP

    International Nuclear Information System (INIS)

    Kang, Daeil; Kim, J. H.; Jang, S. C.

    2007-03-01

    Korea Atomic Energy Research Institute, using the ANS low power and shutdown (LPSD) probabilistic risk assessment (PRA) Standard, evaluated the LPSD PSA model of the KSNP, Yonggwang Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of post-accident human errors in the LPSD PSA model for the KSNP showed that 10 of the 19 items of supporting requirements in the ANS PRA Standard needed improvement. Thus, we newly carried out an HRA for post-accident human errors in the LPSD PSA model for the KSNP. The following tasks are the improvements over the previous HRA of post-accident human errors in the LPSD PSA model for the KSNP: interviews with operators and site visits for the interpretation of the procedures, the modeling of operator actions, and the quantification of human errors; application of a limiting value to the combined post-accident human errors; and documentation of all the input and bases for the detailed quantifications and the dependency analysis using the quantification sheets. The assessment of the new HRA results for post-accident human errors using the ANS LPSD PRA Standard shows that over 80% of the items of its supporting requirements for post-accident human errors were graded as Category II. The number of human errors re-estimated using the LPSD Korea Standard HRA method is 385. Among them, the number of individual post-accident human errors is 253, and the number of dependent post-accident human errors is 135. The quantification results of the LPSD PSA model for the KSNP with the new HEPs show that the core damage frequency (CDF) is increased by 5.1% compared with the previous baseline CDF. It is expected that these study results will be greatly helpful in improving the PSA quality of domestic nuclear power plants, because they have sufficient PSA quality to meet Category II of the supporting requirements for the post

  1. A method for the estimation of the residual error in the SALP approach for fault tree analysis

    International Nuclear Information System (INIS)

    Astolfi, M.; Contini, S.

    1980-01-01

    The aim of this report is the illustration of the algorithms implemented in the SALP-MP code for the estimation of the residual error. These algorithms are of more general use, and it would be possible to implement them in all previously developed codes of the SALP series, as well as, with minor modifications, in analysis procedures based on 'top-down' approaches. At the time of writing, combined 'top-down'-'bottom-up' procedures are being studied in order to take advantage of both approaches for further reduction of computer time and better estimation of the residual error, for which the developed algorithms are still applicable

  2. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    Science.gov (United States)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in the joining of large components, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system that can, in theory, realize dynamic measurement. In this paper we investigate the sources of dynamic error in depth and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, a simulation of the dynamic error is carried out. The dynamic error is quantified, and rules of volatility and periodicity are found. The characteristics of the dynamic error are shown in detail. These results lay the foundation for further accuracy improvement.

  3. Sleep quality, posttraumatic stress, depression, and human errors in train drivers: a population-based nationwide study in South Korea.

    Science.gov (United States)

    Jeon, Hong Jin; Kim, Ji-Hae; Kim, Bin-Na; Park, Seung Jin; Fava, Maurizio; Mischoulon, David; Kang, Eun-Ho; Roh, Sungwon; Lee, Dongsoo

    2014-12-01

    Human error is defined as an unintended error that is attributable to humans rather than machines, and one that is important to avoid in order to prevent accidents. We aimed to investigate the association between sleep quality and human errors among train drivers. Design: Cross-sectional. Setting: Population-based. Participants: A sample of 5,480 subjects who were actively working as train drivers were recruited in South Korea. The participants were 4,634 drivers who completed all questionnaires (response rate 84.6%). Interventions: None. Measurements: The Pittsburgh Sleep Quality Index (PSQI), the Center for Epidemiologic Studies Depression Scale (CES-D), the Impact of Event Scale-Revised (IES-R), the State-Trait Anxiety Inventory (STAI), and the Korean Occupational Stress Scale (KOSS). Results: Of the 4,634 train drivers, 349 (7.5%) reported more than one human error per 5 years. Human errors were associated with poor sleep quality, higher PSQI total scores, short sleep duration at night, and longer sleep latency. Among train drivers with poor sleep quality, those who had experienced severe posttraumatic stress showed a significantly higher number of human errors than those without. Multiple logistic regression analysis showed that human errors were significantly associated with poor sleep quality and posttraumatic stress, whereas there were no significant associations with depression, trait and state anxiety, or work stress after adjusting for age, sex, years of education, marital status, and career duration. Poor sleep quality was found to be associated with more human errors in train drivers, especially in those who had experienced severe posttraumatic stress. © 2014 Associated Professional Sleep Societies, LLC.

  4. Estimation of the human error probabilities in the human reliability analysis

    International Nuclear Information System (INIS)

    Liu Haibin; He Xuhong; Tong Jiejuan; Shen Shifei

    2006-01-01

    Human error data are an important issue in human reliability analysis (HRA). Bayesian parameter estimation, which can combine multiple sources of information, such as NPP operating history and expert judgment, to modify the human error data, can yield human error data that more truly reflect the real situation of the NPP. Using a numerical computation program developed by the authors, this paper presents some typical examples to illustrate the process of Bayesian parameter estimation in HRA and discusses the effect of different modification data on the estimation. (authors)
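
    A common concrete form of such Bayesian modification uses a conjugate Beta prior (e.g., from generic databases or expert judgment) updated with plant-specific counts of errors and demands. The Python sketch below shows the mechanics with invented numbers; it is not the authors' program:

        # Beta-binomial update of a human error probability (HEP).
        # Prior Beta(a, b) ~ generic/expert estimate; data: k errors in n demands.

        a_prior, b_prior = 0.5, 99.5        # hypothetical prior, mean HEP = 0.005
        k, n = 2, 150                       # hypothetical plant-specific experience

        a_post = a_prior + k
        b_post = b_prior + (n - k)

        prior_mean = a_prior / (a_prior + b_prior)
        post_mean = a_post / (a_post + b_post)
        print(f"prior HEP: {prior_mean:.4f}, posterior HEP: {post_mean:.4f}")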

  5. Human error analysis project (HEAP) - The fourth pilot study: verbal data for analysis of operator performance

    International Nuclear Information System (INIS)

    Braarud, Per Oeyvind; Droeyvoldsmo, Asgeir; Hollnagel, Erik

    1997-06-01

    This report is the second report from Pilot Study No. 4 within the Human Error Analysis Project (HEAP). The overall objective of HEAP is to provide a better understanding and explicit modelling of how and why 'cognitive errors' occur. This study investigated the contribution of different verbal data sources to the analysis of control room operators' performance. The operator's concurrent verbal report, retrospective verbal report, and a process expert's comments were compared for their contribution to an operator performance measure. The study looked into verbal protocols both for single operators and for teams. The main findings of the study were that all three verbal data sources could be used to study performance. There was a relatively high overlap between the data sources, but also a unique contribution from each source. There was a common pattern in the types of operator activities the data sources gave information about. The operator's concurrent protocol overall contained slightly more information on the operator's activities than the other two verbal sources. The study also showed that concurrent verbal protocol is feasible and useful for the analysis of a team's activities during a scenario. (author)

  6. Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data

    Science.gov (United States)

    Wilson, R. G.

    1975-01-01

    The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.

  7. Error of semiclassical eigenvalues in the semiclassical limit - an asymptotic analysis of the Sinai billiard

    Science.gov (United States)

    Dahlqvist, Per

    1999-10-01

    We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.

  8. Active and passive compensation of APPLE II-introduced multipole errors through beam-based measurement

    Energy Technology Data Exchange (ETDEWEB)

    Chung, Ting-Yi; Huang, Szu-Jung; Fu, Huang-Wen; Chang, Ho-Ping; Chang, Cheng-Hsiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Hwang, Ching-Shiang [National Synchrotron Radiation Research Center, Hsinchu Science Park, Hsinchu 30076, Taiwan (China); Department of Electrophysics, National Chiao Tung University, Hsinchu 30050, Taiwan (China)

    2016-08-01

    The effect of an APPLE II-type elliptically polarized undulator (EPU) on the beam dynamics was investigated using active and passive methods. To reduce the tune shift and improve the injection efficiency, dynamic multipole errors were compensated using L-shaped iron shims, which resulted in stable top-up operation at minimum gap. The skew quadrupole error was compensated using a multipole corrector located downstream of the EPU to minimize betatron coupling, ensuring enhancement of the synchrotron radiation brightness. The investigation methods, a numerical simulation algorithm, a multipole error correction method, and the beam-based measurement results are discussed.

  9. Error Concealment using Neural Networks for Block-Based Image Coding

    Directory of Open Access Journals (Sweden)

    M. Mokos

    2006-06-01

    Full Text Available In this paper, a novel adaptive error concealment (EC) algorithm, which lowers the requirements for channel coding, is proposed. It conceals errors in block-based image coding systems by using a neural network. In the proposed algorithm, only intra-frame information is used for the reconstruction of an image with separated damaged blocks. The information from the pixels surrounding a damaged block is used to recover the errors using neural network models. Computer simulation results show that the visual quality and the MSE evaluation of a reconstructed image are significantly improved using the proposed EC algorithm. We also propose a simple non-neural approach for comparison.

  10. Knowledge-Based Trajectory Error Pattern Method Applied to an Active Force Control Scheme

    Directory of Open Access Journals (Sweden)

    Endra Pitowarno, Musa Mailah, Hishamuddin Jamaluddin

    2012-08-01

    Full Text Available The active force control (AFC) method is known as a robust control scheme that dramatically enhances the performance of a robot arm, particularly in compensating for disturbance effects. The main task of the AFC method is to estimate the inertia matrix in the feedback loop to provide the correct (motor) torque required to cancel out these disturbances. Several intelligent control schemes have already been introduced to enhance the estimation of the inertia matrix, such as those using neural networks, iterative learning and fuzzy logic. In this paper, we propose an alternative scheme called the Knowledge-Based Trajectory Error Pattern Method (KBTEPM) to suppress the trajectory tracking error of the AFC scheme. The knowledge is developed from the trajectory tracking error characteristics based on previous experimental results of the crude approximation method. It produces a unique, new and desirable error pattern when a trajectory command is forced. An experimental study was performed using simulation work on the AFC scheme with KBTEPM applied to a two-planar manipulator, for which a set of rule-based algorithms was derived. A number of previous AFC schemes are also reviewed as benchmarks. The simulation results show that the AFC-KBTEPM scheme successfully reduces the trajectory tracking error significantly, even in the presence of the introduced disturbances. Key Words: active force control, estimated inertia matrix, robot arm, trajectory error pattern, knowledge-based.

  11. Error Propagation dynamics: from PIV-based pressure reconstruction to vorticity field calculation

    Science.gov (United States)

    Pan, Zhao; Whitehead, Jared; Richards, Geordie; Truscott, Tadd; USU Team; BYU Team

    2017-11-01

    Noninvasive data from velocimetry experiments (e.g., PIV) have been used to calculate vorticity and pressure fields. However, the noise, error, or uncertainties in the PIV measurements eventually propagate to the calculated pressure or vorticity field through the reconstruction schemes. Despite the vast applications of pressure and/or vorticity fields calculated from PIV measurements, studies on the error propagation from the velocity field to the reconstructed fields (PIV-pressure and PIV-vorticity) are few. In the current study, we break down the inherent connections between PIV-based pressure reconstruction and PIV-based vorticity calculation. Similar error propagation dynamics, which involve competition between physical properties of the flow and numerical errors from the reconstruction schemes, are found in both PIV-pressure and PIV-vorticity reconstructions.
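
    A quick way to observe this propagation is to differentiate a noisy synthetic velocity field: central differencing amplifies the velocity noise by a factor that grows as the grid spacing shrinks. A self-contained Python sketch (solid-body rotation with an invented noise level):

        import numpy as np

        rng = np.random.default_rng(1)
        n, L = 128, 2 * np.pi
        x = np.linspace(0, L, n)
        X, Y = np.meshgrid(x, x, indexing="ij")
        dx = x[1] - x[0]

        u_true, v_true = -Y, X                 # solid-body rotation, vorticity = 2 everywhere
        sigma = 0.05                           # hypothetical PIV noise level
        u = u_true + rng.normal(0, sigma, u_true.shape)
        v = v_true + rng.normal(0, sigma, v_true.shape)

        # vorticity = dv/dx - du/dy via central differences
        omega = np.gradient(v, dx, axis=0) - np.gradient(u, dx, axis=1)
        print(f"mean vorticity: {omega.mean():.3f} (exact 2), "
              f"std of vorticity error: {omega.std():.3f} vs velocity noise {sigma}")

    On this grid the vorticity error is far larger than the velocity noise itself, purely as a consequence of the differentiation scheme.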

  12. Analysis and reduction of 3D systematic and random setup errors during the simulation and treatment of lung cancer patients with CT-based external beam radiotherapy dose planning.

    NARCIS (Netherlands)

    Boer, H.D. de; Sornsen de Koste, J.R. van; Senan, S.; Visser, A.G.; Heijmen, B.J.M.

    2001-01-01

    PURPOSE: To determine the magnitude of the errors made in (a) the setup of patients with lung cancer on the simulator relative to their intended setup with respect to the planned treatment beams and (b) the setup of these patients on the treatment unit. To investigate how the systematic component

  13. Component Analysis of Errors on PERSIANN Precipitation Estimates over Urmia Lake Basin, IRAN

    Science.gov (United States)

    Ghajarnia, N.; Daneshkar Arasteh, P.; Liaghat, A. M.; Araghinejad, S.

    2016-12-01

    this product. Overall, we believe that the different error-component analyses performed in this study can substantially help further local studies on the post-calibration and bias reduction of PERSIANN estimates.

  14. Detecting medication errors in the New Zealand pharmacovigilance database: a retrospective analysis.

    Science.gov (United States)

    Kunac, Desireé L; Tatley, Michael V

    2011-01-01

    Despite the traditional focus being adverse drug reactions (ADRs), pharmacovigilance centres have recently been identified as a potentially rich and important source of medication error data. To identify medication errors in the New Zealand Pharmacovigilance database (Centre for Adverse Reactions Monitoring [CARM]), and to describe the frequency and characteristics of these events. A retrospective analysis of the CARM pharmacovigilance database operated by the New Zealand Pharmacovigilance Centre was undertaken for the year 1 January-31 December 2007. All reports, excluding those relating to vaccines, clinical trials and pharmaceutical company reports, underwent a preventability assessment using predetermined criteria. Those events deemed preventable were subsequently classified to identify the degree of patient harm, type of error, stage of medication use process where the error occurred and origin of the error. A total of 1412 reports met the inclusion criteria and were reviewed, of which 4.3% (61/1412) were deemed preventable. Not all errors resulted in patient harm: 29.5% (18/61) were 'no harm' errors but 65.5% (40/61) of errors were deemed to have been associated with some degree of patient harm (preventable adverse drug events [ADEs]). For 5.0% (3/61) of events, the degree of patient harm was unable to be determined as the patient outcome was unknown. The majority of preventable ADEs (62.5% [25/40]) occurred in adults aged 65 years and older. The medication classes most involved in preventable ADEs were antibacterials for systemic use and anti-inflammatory agents, with gastrointestinal and respiratory system disorders the most common adverse events reported. For both preventable ADEs and 'no harm' events, most errors were incorrect dose and drug therapy monitoring problems consisting of failures in detection of significant drug interactions, past allergies or lack of necessary clinical monitoring. Preventable events were mostly related to the prescribing and

  15. Analysis and compensation of synchronous measurement error for multi-channel laser interferometer

    International Nuclear Information System (INIS)

    Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong

    2017-01-01

    Dual-frequency laser interferometers have been widely used in precision motion systems as displacement sensors to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, which causes asynchronous measurement and in turn leads to a measurement error, the synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model between SME and motion velocity. Further, a real-time compensation method for SME is proposed. This method has been verified in a self-developed laser interferometer signal processing board (SPB). The experimental results showed that, using this compensation method, at a motion velocity of 0.89 m s⁻¹ the maximum SME between two measuring channels in the SPB is 1.1 nm. This method is more easily implemented and applied in engineering than directly testing for smaller signal delays. (paper)
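
    The velocity model described above amounts to position error = velocity × inter-channel delay, so a first-order real-time correction can look like the Python sketch below (the 1.24 ns delay is back-computed from the reported 1.1 nm at 0.89 m s⁻¹ and is an assumption, as are the other numbers):

        def compensate_sme(position_m, velocity_m_s, channel_delay_s):
            # A channel whose signal lags by channel_delay_s reports where the
            # stage was that many seconds ago; extrapolate with current velocity.
            return position_m + velocity_m_s * channel_delay_s

        # Hypothetical: ~1.24 ns inter-channel delay at 0.89 m/s -> ~1.1 nm error
        error = 0.89 * 1.24e-9
        print(f"uncompensated SME: {error * 1e9:.2f} nm")
        print(compensate_sme(0.100000000, 0.89, 1.24e-9))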

  16. Symbol and Bit Error Rates Analysis of Hybrid PIM-CDMA

    Directory of Open Access Journals (Sweden)

    Ghassemlooy Z

    2005-01-01

    Full Text Available A hybrid pulse interval modulation code-division multiple-access (hPIM-CDMA) scheme employing a strict optical orthogonal code (SOCC) with unity auto- and cross-correlation constraints for indoor optical wireless communications is proposed. In this paper, we analyse the symbol error rate (SER) and bit error rate (BER) of hPIM-CDMA. In the analysis, we consider multiple access interference (MAI), self-interference, and the hybrid nature of the hPIM-CDMA signal detection, which is based on the matched filter (MF). It is shown that the BER/SER performance can only be evaluated if the bit resolution conforms to the condition set by the number of consecutive false alarm pulses that might occur and be detected, so that one symbol being divided into two is unlikely to occur. Otherwise, the probability of SER and BER becomes extremely high and indeterminable. We show that for a large number of users, the BER improves when the code weight is increased. The results presented are compared with other modulation schemes.

  17. Analysis and compensation of synchronous measurement error for multi-channel laser interferometer

    Science.gov (United States)

    Du, Shengwu; Hu, Jinchun; Zhu, Yu; Hu, Chuxiong

    2017-05-01

    Dual-frequency laser interferometers have been widely used in precision motion systems as displacement sensors to achieve nanoscale positioning or synchronization accuracy. In a multi-channel laser interferometer synchronous measurement system, signal delays differ between channels, which causes asynchronous measurement and in turn leads to a measurement error, the synchronous measurement error (SME). Based on a signal delay analysis of the measurement system, this paper presents a multi-channel SME framework for synchronous measurement and establishes a model between SME and motion velocity. Further, a real-time compensation method for SME is proposed. This method has been verified in a self-developed laser interferometer signal processing board (SPB). The experimental results showed that, using this compensation method, at a motion velocity of 0.89 m s⁻¹ the maximum SME between two measuring channels in the SPB is 1.1 nm. This method is more easily implemented and applied in engineering than directly testing for smaller signal delays.

  18. Ergodic Capacity Analysis of Free-Space Optical Links with Nonzero Boresight Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-04-01

    A unified capacity analysis of a free-space optical (FSO) link that accounts for nonzero boresight pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moment expressions, we present approximate and simple closed-form results for the ergodic capacity in the high and low SNR regimes. All the presented results are verified via computer-based Monte Carlo simulations.

  19. Bit Error Rate Analysis for MC-CDMA Systems in Nakagami-m Fading Channels

    Directory of Open Access Journals (Sweden)

    Li Zexian

    2004-01-01

    Full Text Available Multicarrier code division multiple access (MC-CDMA) is a promising technique that combines orthogonal frequency division multiplexing (OFDM) with CDMA. In this paper, based on an alternative expression for the Q-function, the characteristic function and a Gaussian approximation, we present a new practical technique for determining the bit error rate (BER) of multiuser MC-CDMA systems in frequency-selective Nakagami-m fading channels. The results are applicable to systems employing coherent demodulation with maximal ratio combining (MRC) or equal gain combining (EGC). The analysis assumes that different subcarriers experience independent fading channels, which are not necessarily identically distributed. The final average BER is expressed in the form of a single finite-range integral and an integrand composed of tabulated functions which can be easily computed numerically. The accuracy of the proposed approach is demonstrated with computer simulations.

  20. Review of human error analysis methodologies and case study for accident management

    International Nuclear Information System (INIS)

    Jung, Won Dae; Kim, Jae Whan; Lee, Yong Hee; Ha, Jae Joo

    1998-03-01

    In this research, we tried to establish the requirements for the development of a new human error analysis (HEA) method. To achieve this goal, we performed a case study with the following steps: (1) review of the existing HEA methods; (2) selection of those methods considered appropriate for the analysis of operators' tasks in NPPs; (3) choice of tasks for the application. The methods selected for the case study were HRMS (Human Reliability Management System), PHECA (Potential Human Error Cause Analysis) and CREAM (Cognitive Reliability and Error Analysis Method), and the 'bleed and feed operation' and 'decision-making for reactor cavity flooding' tasks were chosen for the application. We measured the applicability of the selected methods to the NPP tasks and evaluated the advantages and disadvantages of each method. All three methods turned out to be applicable for the prediction of human error. We concluded that both CREAM and HRMS are sufficiently applicable to NPP tasks; comparing the two, however, CREAM is considered more appropriate than HRMS from the viewpoint of the overall requirements. The requirements for the new HEA method obtained from the study can be summarized as follows: firstly, it should deal with cognitive error analysis; secondly, it should have an adequate classification system for NPP tasks; thirdly, the description of error causes and error mechanisms should be explicit; fourthly, it should maintain consistency of results by minimizing ambiguity in each step of the analysis procedure; fifthly, it should be feasible with acceptable human resources. (author). 25 refs., 30 tabs., 4 figs

  1. ERROR DISTRIBUTION EVALUATION OF THE THIRD VANISHING POINT BASED ON RANDOM STATISTICAL SIMULATION

    Directory of Open Access Journals (Sweden)

    C. Li

    2012-07-01

    Full Text Available POS, integrated from GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have systematic error, but it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY); how to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated using an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.

  2. Error Distribution Evaluation of the Third Vanishing Point Based on Random Statistical Simulation

    Science.gov (United States)

    Li, C.

    2012-07-01

    POS, integrated from GPS/INS (Inertial Navigation Systems), has allowed rapid and accurate determination of the position and attitude of remote sensing equipment for MMS (Mobile Mapping Systems). However, not only does INS have systematic error, but it is also very expensive. Therefore, in this paper the error distributions of vanishing points are studied and tested in order to substitute for INS in MMS in some special land-based scenes, such as ground façades, where usually only two vanishing points can be detected. Thus, the traditional calibration approach based on three orthogonal vanishing points is being challenged. In this article, firstly, the line clusters, which are parallel to each other in object space and correspond to the vanishing points, are detected based on RANSAC (Random Sample Consensus) and a parallelism geometric constraint. Secondly, condition adjustment with parameters is utilized to estimate the nonlinear error equations of two vanishing points (VX, VY); how to set initial weights for the adjustment solution of single-image vanishing points is presented. The vanishing points are solved and their error distributions estimated using an iteration method with variable weights, the co-factor matrix and error ellipse theory. Thirdly, under the condition of known error ellipses of two vanishing points (VX, VY) and on the basis of the triangle geometric relationship of the three vanishing points, the error distribution of the third vanishing point (VZ) is calculated and evaluated by random statistical simulation, ignoring camera distortion. Moreover, the Monte Carlo methods utilized for random statistical estimation are presented. Finally, experimental results for the vanishing point coordinates and their error distributions are shown and analyzed.
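
    The triangle relationship used in the third step can be simulated directly: for orthogonal scene directions the principal point is the orthocenter of the triangle formed by the three vanishing points, so VZ follows from VX, VY and the principal point, and sampling VX and VY from their error ellipses yields an empirical error distribution for VZ. A compact Python sketch (coordinates and noise levels are invented; camera distortion is ignored, as in the paper):

        import numpy as np

        def third_vanishing_point(vx, vy, pp):
            """VZ from VX, VY and the principal point pp (orthocenter property)."""
            d = vy - vx
            n = np.array([-d[1], d[0]])                  # altitude direction through pp
            t = (vx - pp) @ (vy - pp) / ((vx - pp) @ n)  # from (VX-pp).(VY-VZ) = 0
            return pp + t * n

        rng = np.random.default_rng(0)
        pp = np.array([512.0, 384.0])                    # hypothetical principal point (px)
        vx0 = np.array([2500.0, 400.0])                  # hypothetical detected VX
        vy0 = np.array([-1800.0, 350.0])                 # hypothetical detected VY

        # Sample VX, VY from their (invented) error ellipses and propagate to VZ
        samples = np.array([
            third_vanishing_point(vx0 + rng.normal(0, 5.0, 2),
                                  vy0 + rng.normal(0, 8.0, 2), pp)
            for _ in range(10_000)
        ])
        print("VZ mean:", samples.mean(axis=0))
        print("VZ covariance:\n", np.cov(samples.T))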

  3. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    Science.gov (United States)

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
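
    The expert behavior models can be mimicked with simple probability mass functions over an ordinal scoring scale; the Python sketch below contrasts a 'confident' and an 'irresolute' expert (the five-point scale and the pmfs are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(42)
        scores = np.array([1, 2, 3, 4, 5])      # hypothetical ordinal likelihood scale

        pmfs = {
            "confident":  [0.02, 0.03, 0.05, 0.10, 0.80],  # mass piled on the elicited score
            "irresolute": [0.15, 0.20, 0.30, 0.20, 0.15],  # mass spread over the scale
        }

        for behavior, p in pmfs.items():
            draws = rng.choice(scores, size=100_000, p=p)
            print(f"{behavior:10s} mean={draws.mean():.2f} std={draws.std():.2f}")

    The spread of the simulated scores (their standard deviation) is exactly the robustness measure discussed in the abstract.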

  4. North error estimation based on solar elevation errors in the third step of sky-polarimetric Viking navigation.

    Science.gov (United States)

    Száz, Dénes; Farkas, Alexandra; Barta, András; Kretzer, Balázs; Egri, Ádám; Horváth, Gábor

    2016-07-01

    The theory of sky-polarimetric Viking navigation has been widely accepted for decades without any information about the accuracy of this method. Previously, we have measured the accuracy of the first and second steps of this navigation method in psychophysical laboratory and planetarium experiments. Now, we have tested the accuracy of the third step in a planetarium experiment, assuming that the first and second steps are errorless. Using the fists of their outstretched arms, 10 test persons had to estimate the elevation angles (measured in numbers of fists and fingers) of black dots (representing the position of the occluded Sun) projected onto the planetarium dome. The test persons performed 2400 elevation estimations, 48% of which were more accurate than ±1°. We selected three test persons with the (i) largest and (ii) smallest elevation errors and (iii) highest standard deviation of the elevation error. From the errors of these three persons, we calculated their error function, from which the North errors (the angles with which they deviated from the geographical North) were determined for summer solstice and spring equinox, two specific dates of the Viking sailing period. The range of possible North errors Δω_N was the lowest and highest at low and high solar elevations, respectively. At high elevations, the maximal Δω_N was 35.6° and 73.7° at summer solstice and 23.8° and 43.9° at spring equinox for the best and worst test person (navigator), respectively. Thus, the best navigator was twice as good as the worst one. At solstice and equinox, high elevations occur the most frequently during the day, thus high North errors could occur more frequently than expected before. According to our findings, the ideal periods for sky-polarimetric Viking navigation are immediately after sunrise and before sunset, because the North errors are the lowest at low solar elevations.

  5. Analysis of human error and organizational deficiency in events considering risk significance

    International Nuclear Information System (INIS)

    Lee, Yong Suk; Kim, Yoonik; Kim, Say Hyung; Kim, Chansoo; Chung, Chang Hyun; Jung, Won Dea

    2004-01-01

    In this study, we analyzed human and organizational deficiencies in the trip events of Korean nuclear power plants. K-HPES items were used in human error analysis, and the organizational factors by Jacobs and Haber were used for organizational deficiency analysis. We proposed the use of CCDP as a risk measure to consider risk information in prioritizing K-HPES items and organizational factors. Until now, the risk significance of events has not been considered in human error and organizational deficiency analysis. Considering the risk significance of events in the process of analysis is necessary for effective enhancement of nuclear power plant safety by focusing on causes of human error and organizational deficiencies that are associated with significant risk

  6. Facial motion parameter estimation and error criteria in model-based image coding

    Science.gov (United States)

    Liu, Yunhai; Yu, Lu; Yao, Qingdong

    2000-04-01

    Model-based image coding has been given extensive attention due to its high subjective image quality and low bit-rates. But the estimation of object motion parameters is still a difficult problem, and there are no proper error criteria for quality assessment that are consistent with visual properties. This paper presents an algorithm for facial motion parameter estimation based on feature point correspondence and gives error criteria for the motion parameters. The facial motion model comprises three parts. The first part is the global 3-D rigid motion of the head, the second part is non-rigid translation motion in the jaw area, and the third part consists of local non-rigid expression motion in the eye and mouth areas. The feature points are automatically selected by a function of edges, brightness and end-nodes outside the blocks of the eyes and mouth. The number of feature points is adjusted adaptively. The jaw translation motion is tracked by the changes in the feature point positions of the jaw. The areas of non-rigid expression motion can be rebuilt by using a block-pasting method. An approach to estimating the motion parameter error based on the quality of the reconstructed image is suggested, and an area error function and an error function of the contour transition-turn rate are used as quality criteria. These criteria properly reflect the image geometric distortion caused by errors in the estimated motion parameters.

  7. Defining Reported Errors on Web-based Reporting System Using ICPS From Nine Units in a Korean University Hospital

    Directory of Open Access Journals (Sweden)

    Chul-Hoon Kim, DDS, PhD

    2009-12-01

    Conclusion: The web-based error reporting system using ICPS proved to be an easy, feasible system for hospitals in Korea. This system will be helpful for inducing general agreement upon errors within clinical nursing practice and will bring more attention to any errors made or near misses. It may also ameliorate the punitive culture surrounding errors and transform error reporting into a habit for healthcare providers.

  8. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    Science.gov (United States)

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures was identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
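
    The scoring step of such an analysis can be sketched as follows. The failure modes and 1-10 scores below are hypothetical, and the risk priority number (severity × occurrence × detectability) is one common FMEA scoring convention, not necessarily the exact scheme this team used:

```python
# Hypothetical medication-handling failure modes with 1-10 scores for
# severity, occurrence and detectability; RPN = S * O * D.
failure_modes = [
    ("syringe left unlabeled",        8, 6, 4),
    ("wrong concentration drawn",     9, 3, 5),
    ("infusion not double-checked",   7, 5, 6),
]

# Rank failure modes by RPN to identify the highest-risk process steps
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
for name, s, o, d in ranked:
    print(f"RPN={s * o * d:4d}  {name}")
```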

  9. LEARNING FROM MISTAKES Error Analysis in the English Speech of Indonesian Tertiary Students

    Directory of Open Access Journals (Sweden)

    Imelda Gozali

    2017-12-01

    Full Text Available This study is part of a series of Classroom Action Research conducted with the aim of improving the English speech of students in one of the tertiary institutes in Indonesia. After some years of teaching English conversation, the writer noted that students made various types of errors in their speech, which can be classified generally into morphological, phonological, and lexical. While some of the errors are still generally acceptable, some others elicit laughter or inhibit comprehension altogether. Therefore, the writer is keen to analyze the more common errors made by the students, so as to be able to compile a teaching material that could be utilized to address those errors more effectively in future classes. This research used Error Analysis by Richards (1971) as the basis of classification. It was carried out in five classes with a total number of 80 students for a period of one semester (14 weeks). The results showed that most of the errors were phonological (errors in pronunciation), while others were morphological or grammatical in nature. This prompted the writer to design simple Phonics lessons for future classes.

  10. Demonstration Integrated Knowledge-Based System for Estimating Human Error Probabilities

    Energy Technology Data Exchange (ETDEWEB)

    Auflick, Jack L.

    1999-04-21

    Human Reliability Analysis (HRA) currently comprises at least 40 different methods that are used to analyze, predict, and evaluate human performance in probabilistic terms. Systematic HRAs allow analysts to examine human-machine relationships, identify error-likely situations, and provide estimates of relative frequencies for human errors on critical tasks, highlighting the most beneficial areas for system improvements. Unfortunately, each of HRA's methods has a different philosophical approach, thereby producing estimates of human error probabilities (HEPs) that are a better or worse match to the error-likely situation of interest. Poor selection of methodology, or the improper application of techniques, can produce invalid HEP estimates, and such erroneous estimates of potential human failure could have severe consequences in terms of the estimated occurrence of injury, death, and/or property damage.

  11. An Analysis of Students Error in Solving PISA 2012 and Its Scaffolding

    OpenAIRE

    Sari, Yurizka Melia; Valentino, Erik

    2016-01-01

    Based on the PISA survey in 2012, Indonesia placed only 64th out of 65 participating countries. The survey suggests that the students’ abilities in reasoning, spatial orientation, and problem solving are lower compared with other participating countries, especially in South East Asia. Nevertheless, the PISA results do not clearly reveal the students’ inability in solving PISA problems, such as the location and the types of student’s errors. Therefore, analyzing students’ error in solving PI...

  12. Phase correction and error estimation in InSAR time series analysis

    Science.gov (United States)

    Zhang, Y.; Fattahi, H.; Amelung, F.

    2017-12-01

    During the last decade several InSAR time series approaches have been developed in response to the non-ideal acquisition strategies of SAR satellites, such as large spatial and temporal baselines and irregular acquisitions. The small baseline tubes and regular acquisitions of new SAR satellites such as Sentinel-1 allow us to form fully connected networks of interferograms and simplify the time series analysis into a weighted least squares inversion of an over-determined system. Such robust inversion allows us to focus more on the understanding of different components in InSAR time series and their uncertainties. We present an open-source python-based package for InSAR time series analysis, called PySAR (https://yunjunz.github.io/PySAR/), with unique functionalities for obtaining unbiased ground displacement time series, geometrical and atmospheric correction of InSAR data and quantifying the InSAR uncertainty. Our implemented strategy contains several features including: 1) improved spatial coverage using a coherence-based network of interferograms, 2) unwrapping error correction using phase closure or bridging, 3) tropospheric delay correction using weather models and empirical approaches, 4) DEM error correction, 5) optimal selection of the reference date and automatic outlier detection, 6) InSAR uncertainty due to the residual tropospheric delay, decorrelation and residual DEM error, and 7) variance-covariance matrix of final products for geodetic inversion. We demonstrate the performance using SAR datasets acquired by Cosmo-Skymed and TerraSAR-X, Sentinel-1 and ALOS/ALOS-2, with application to the highly non-linear volcanic deformation in Japan and Ecuador (figure 1). Our result shows precursory deformation before the 2015 eruptions of Cotopaxi volcano, with a maximum uplift of 3.4 cm on the western flank (fig. 1b), with a standard deviation of 0.9 cm (fig. 1a), supporting the finding by Morales-Rivera et al. (2017, GRL); and a post-eruptive subsidence on the same
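
    The core inversion of a fully connected network reduces to weighted least squares on phase differences. A self-contained sketch follows; the toy network, phase values and coherence-style weights are assumptions for illustration, and PySAR's actual implementation is considerably more involved:

```python
import numpy as np

# Toy network: 4 acquisition dates, 5 interferograms (pairs of date indices).
# Unknowns: phase at dates 1..3 relative to date 0 (the reference).
pairs = [(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)]
n_dates = 4

# Design matrix: each interferogram observes phase[j] - phase[i]
A = np.zeros((len(pairs), n_dates - 1))
for k, (i, j) in enumerate(pairs):
    if j > 0: A[k, j - 1] += 1.0
    if i > 0: A[k, i - 1] -= 1.0

true_phase = np.array([0.0, 0.4, 1.1, 1.5])     # radians, illustrative
obs = np.array([true_phase[j] - true_phase[i] for i, j in pairs])
obs += np.random.default_rng(2).normal(0, 0.05, len(obs))   # observation noise

# Weighted least squares with coherence-like per-interferogram weights
W = np.diag([1.0, 0.8, 1.0, 0.6, 0.9])
phase_hat, *_ = np.linalg.lstsq(np.sqrt(W) @ A, np.sqrt(W) @ obs, rcond=None)
print("estimated phase history:", np.r_[0.0, phase_hat])
```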

  13. Analysis of operator splitting errors for near-limit flame simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Zhen; Zhou, Hua [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); Li, Shan [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); School of Aerospace Engineering, Tsinghua University, Beijing 100084 (China); Ren, Zhuyin, E-mail: zhuyinren@tsinghua.edu.cn [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); School of Aerospace Engineering, Tsinghua University, Beijing 100084 (China); Lu, Tianfeng [Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269-3139 (United States); Law, Chung K. [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544 (United States)

    2017-04-15

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction–diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step size. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts on the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and the deviation on the equilibrium temperature are observed for the Strang splitting. On the contrary, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory
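
    For readers unfamiliar with splitting schemes, the sketch below shows Strang splitting on a toy reaction-mixing ODE, alongside the fully coupled solution it approximates. The reaction and mixing terms are illustrative only, not the chemistry or PSR parameters used in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model PSR-like ODE: dy/dt = reaction(y) + mixing(y); toy functional forms.
def reaction(y): return 50.0 * y * (1.0 - y)       # stiff source term (toy)
def mixing(y):   return (0.05 - y) / 0.1           # linear relaxation to inflow

def strang_step(y, dt):
    """One Strang split step: half mixing, full reaction, half mixing."""
    y = solve_ivp(lambda t, u: mixing(u),   (0, dt / 2), [y]).y[0, -1]
    y = solve_ivp(lambda t, u: reaction(u), (0, dt),     [y]).y[0, -1]
    y = solve_ivp(lambda t, u: mixing(u),   (0, dt / 2), [y]).y[0, -1]
    return y

y0, dt, n = 0.9, 0.02, 100
y_split = y0
for _ in range(n):
    y_split = strang_step(y_split, dt)

# Reference: integrate the fully coupled system without splitting
coupled = solve_ivp(lambda t, u: reaction(u) + mixing(u), (0, n * dt), [y0]).y[0, -1]
print(f"Strang split: {y_split:.4f}   fully coupled: {coupled:.4f}")
```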

  14. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.
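
    A small sketch of the two estimators under the stated model (common known covariance, taken here as the identity, with equal priors; the dimension and sample sizes are illustrative, chosen so that p is comparable to n as in the Kolmogorov asymptotic regime):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
p, n = 10, 30                                   # dimension comparable to sample size
mu0, mu1 = np.zeros(p), np.full(p, 0.5)
X0 = rng.normal(mu0, 1.0, (n, p))
X1 = rng.normal(mu1, 1.0, (n, p))

# Known common covariance = I; the discriminant uses estimated means only.
m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
w, c = m1 - m0, (m0 + m1) / 2.0
def classify(X): return (X - c) @ w > 0         # True -> assigned to class 1

# Resubstitution estimator: error rate evaluated on the training data itself
resub = 0.5 * (classify(X0).mean() + (~classify(X1)).mean())

# Plug-in estimator: substitute estimated means into the exact Gaussian error
delta_hat = np.linalg.norm(m1 - m0)             # estimated Mahalanobis distance
plug_in = norm.cdf(-delta_hat / 2.0)
print(f"resubstitution: {resub:.3f}   plug-in: {plug_in:.3f}")
```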

  15. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  16. Writing Skill and Categorical Error Analysis: A Study of First Year Undergraduate University Students

    Directory of Open Access Journals (Sweden)

    Adnan Satariyan

    2014-09-01

    Full Text Available This study identifies and analyses the common errors in the writing skill of first year students of Azad University of South Tehran Branch in relation to their first language (L1), the type of high school they graduated from, and their exposure to media and technology in order to learn English. It also determines the categories in which the errors are committed (content, organisation/discourse, vocabulary, mechanics, or syntax) and whether or not there is a significant difference in the percentage of errors committed across these categories. Participants of this study are 190 first year students that are asked to write an essay. An error analysis model adapted from Brown (2001) and Gayeta (2002) is then used to evaluate the essay writings in terms of content, organisation, vocabulary, mechanics, and syntax or language use. The results of the study show that the students have greater difficulties in organisation, content, and vocabulary and fewer difficulties in mechanics and syntax.

  17. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin; Genton, Marc G.

    2013-01-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  18. Reduction of low frequency error for SED36 and APS based HYDRA star trackers

    Science.gov (United States)

    Ouaknine, Julien; Blarre, Ludovic; Oddos-Marcel, Lionel; Montel, Johan; Julio, Jean-Marc

    2017-11-01

    In the frame of the CNES Pleiades satellite, a reduction of the star tracker low frequency error, which is the most penalizing error for the satellite attitude control, was performed. For that purpose, the SED36 star tracker was developed, with a design based on the flight qualified SED16/26. In this paper, the SED36 main features will be first presented. Then, the reduction process of the low frequency error will be developed, particularly the optimization of the optical distortion calibration. The result is an attitude low frequency error of 1.1" at 3 sigma along transverse axes. The implementation of these improvements to HYDRA, the new multi-head APS star tracker developed by SODERN, will finally be presented.

  19. Learning about Expectation Violation from Prediction Error Paradigms – A Meta-Analysis on Brain Processes Following a Prediction Error

    Directory of Open Access Journals (Sweden)

    Lisa D’Astolfo

    2017-07-01

    Full Text Available Modifying patients’ expectations by exposing them to expectation violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism of therapeutic success for a variety of different mental disorders. However, clinical observations suggest that patients often maintain their expectations regardless of experiences contradicting them. It remains unclear which information processing mechanisms lead to modification or persistence of patients’ expectations. Insight into this processing could be provided by neuroimaging studies investigating the prediction error (PE), i.e., neuronal reactions to non-expected stimuli. Two methods are often used to investigate the PE: (1) paradigms in which participants passively observe PEs (“passive” paradigms) and (2) paradigms which encourage a behavioral adaptation following a PE (“active” paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation violation situation and (2) an enhanced confrontation in which the patient actively challenges his expectation. We used this similarity to gain insight into the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE and paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Due to the involvement of reward assessment and avoidance learning associated with the striatum and the insula, we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, therefore leading to an expectation change. While further research is needed

  20. Indexing quality and incidence of errors in the ECOSOC database

    Directory of Open Access Journals (Sweden)

    Extremeño, Ana

    1999-06-01

    Full Text Available Results of the analysis and evaluation of the ECOSOC database are presented. Quality indicators are used to evaluate the indexing process, and formal indicators are used in the analysis of the record structure and in the process of error detection.

    The results of the analysis and evaluation of the ECOSOC database, produced and distributed by CINDOC, are presented for the records on Political Science. Quality indicators are applied to evaluate the indexing process for the terms used in the subject description of the database, together with formal indicators intended to analyze the structure of the bibliographic records and to identify the rate of errors contained in each of the fields that make up those records.

  1. Compensation of position errors in passivity based teleoperation over packet switched communication networks

    NARCIS (Netherlands)

    Secchi, C; Stramigioli, Stefano; Fantuzzi, C.

    Because of the use of scattering based communication channels, passivity based telemanipulation systems can be subject to a steady state position error between master and slave robots. In this paper, we consider the case in which the passive master and slave sides communicate through a packet

  2. Model-based bootstrapping when correcting for measurement error with application to logistic regression.

    Science.gov (United States)

    Buonaccorsi, John P; Romeo, Giovanni; Thoresen, Magne

    2018-03-01

    When fitting regression models, measurement error in any of the predictors typically leads to biased coefficients and incorrect inferences. A plethora of methods have been proposed to correct for this. Obtaining standard errors and confidence intervals using the corrected estimators can be challenging and, in addition, there is concern about remaining bias in the corrected estimators. The bootstrap, which is one option to address these problems, has received limited attention in this context. It has usually been employed by simply resampling observations, which, while suitable in some situations, is not always formally justified. In addition, the simple bootstrap does not allow for estimating bias in non-linear models, including logistic regression. Model-based bootstrapping, which can potentially estimate bias in addition to being robust to the original sampling scheme and to whether or not the measurement error variance is constant, has received limited attention. However, it faces challenges that are not present in handling regression models with no measurement error. This article develops new methods for model-based bootstrapping when correcting for measurement error in logistic regression with replicate measures. The methodology is illustrated using two examples, and a series of simulations are carried out to assess and compare the simple and model-based bootstrap methods, as well as other standard methods. While not always perfect, the model-based approaches offer some distinct improvements over the other methods. © 2017, The International Biometric Society.
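
    To make the setting concrete, the sketch below applies the simple resampling bootstrap to a regression-calibration-corrected logistic fit with replicate measures. It illustrates the baseline the article improves upon, not the authors' model-based procedure, and all data, error variances and sample sizes are simulated assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
x = rng.normal(0, 1, n)                          # true covariate (unobserved)
w = x[:, None] + rng.normal(0, 0.6, (n, 2))      # two replicate measurements
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 1.0 * x))))

def rc_fit(w_rep, y_obs):
    """Regression-calibration fit: replace x by E[x | mean of replicates]."""
    wbar = w_rep.mean(axis=1)
    s2_u = np.mean(np.var(w_rep, axis=1, ddof=1)) / w_rep.shape[1]  # error var of wbar
    s2_x = max(wbar.var(ddof=1) - s2_u, 1e-8)                       # estimated var(x)
    x_hat = wbar.mean() + (s2_x / (s2_x + s2_u)) * (wbar - wbar.mean())
    return sm.Logit(y_obs, sm.add_constant(x_hat)).fit(disp=0).params[1]

beta_hat = rc_fit(w, y)

# Simple (resampling) bootstrap of the corrected estimator
boot = [rc_fit(w[idx], y[idx])
        for idx in (rng.integers(0, n, n) for _ in range(500))]
print(f"corrected beta: {beta_hat:.3f}   bootstrap SE: {np.std(boot):.3f}")
```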

  3. Positioning performance analysis of the time sum of arrival algorithm with error features

    Science.gov (United States)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed, and the distribution of the location ambiguity region is presented for four base stations. The location performance analysis then starts from the four-base-station case, with the RMSE and GDOP variations calculated. Subsequently, when the location parameters are changed in terms of the number of base stations, base station layout and so on, the changing patterns of the TSOA location algorithm's performance are shown, and the TSOA location characteristics and performance are thus revealed. The trends of the RMSE and GDOP demonstrate the anti-noise performance and robustness of the TSOA localization algorithm. The TSOA anti-noise performance can be used to reduce the blind zone and the false location rate of MLAT systems.
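
    A sketch of the GDOP computation under one plausible TSOA measurement model, m_i = |p - s_i| + |p - ref| (an assumption made here for illustration; the station coordinates are arbitrary). With range-noise standard deviation σ, the position RMSE scales roughly as σ · GDOP:

```python
import numpy as np

def unit(v):
    return v / np.linalg.norm(v)

def tsoa_gdop(p, stations, ref):
    """GDOP for assumed TSOA measurements m_i = |p - s_i| + |p - ref|.
    Each Jacobian row is the sum of the unit vectors from the two foci to p."""
    H = np.array([unit(p - s) + unit(p - ref) for s in stations])
    return np.sqrt(np.trace(np.linalg.inv(H.T @ H)))

# Four-station layout (illustrative coordinates, metres)
ref = np.array([0.0, 0.0])
stations = [np.array(s, float) for s in [(5000, 0), (0, 5000), (-5000, 0), (0, -5000)]]

for target in [(1000, 1000), (4000, 4000), (8000, 8000)]:
    g = tsoa_gdop(np.array(target, float), stations, ref)
    print(f"target {target}: GDOP = {g:.2f}")
```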

  4. Error modeling for surrogates of dynamical systems using machine learning: Machine-learning-based error model for surrogates of dynamical systems

    International Nuclear Information System (INIS)

    Trehan, Sumeet; Carlberg, Kevin T.; Durlofsky, Louis J.

    2017-01-01

    A machine learning–based framework for modeling the error introduced by surrogate models of parameterized dynamical systems is proposed. The framework entails the use of high-dimensional regression techniques (e.g., random forests and LASSO) to map a large set of inexpensively computed “error indicators” (i.e., features) produced by the surrogate model at a given time instance to a prediction of the surrogate-model error in a quantity of interest (QoI). This eliminates the need for the user to hand-select a small number of informative features. The methodology requires a training set of parameter instances at which the time-dependent surrogate-model error is computed by simulating both the high-fidelity and surrogate models. Using these training data, the method first determines regression-model locality (via classification or clustering) and subsequently constructs a “local” regression model to predict the time-instantaneous error within each identified region of feature space. We consider 2 uses for the resulting error model: (1) as a correction to the surrogate-model QoI prediction at each time instance and (2) as a way to statistically model arbitrary functions of the time-dependent surrogate-model error (e.g., time-integrated errors). We then apply the proposed framework to model errors in reduced-order models of nonlinear oil-water subsurface flow simulations, with time-varying well-control (bottom-hole pressure) parameters. The reduced-order models used in this work entail application of trajectory piecewise linearization in conjunction with proper orthogonal decomposition. Moreover, when the first use of the method is considered, numerical experiments demonstrate consistent improvement in accuracy in the time-instantaneous QoI prediction relative to the original surrogate model, across a large number of test cases. When the second use is considered, results show that the proposed method provides accurate statistical predictions of the time- and well
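
    The regression step of such a framework can be sketched as follows. The indicator features and "true" errors are synthetic, and the sign convention error = high-fidelity minus surrogate (so that correction means adding the predicted error) is an assumption for illustration:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)

# Toy training set: rows = (parameter instance, time step); columns = cheap
# "error indicators" produced by the surrogate (e.g. residual norms, dt);
# all names and values here are illustrative.
n = 2000
indicators = rng.normal(size=(n, 4))
true_error = (0.8 * np.abs(indicators[:, 0])     # synthetic ground-truth QoI error
              + 0.3 * indicators[:, 1] ** 2
              + 0.05 * rng.normal(size=n))

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(indicators, true_error)

# Use 1: correct the surrogate's QoI prediction at a new time instance,
# assuming error = high-fidelity - surrogate.
new_ind = rng.normal(size=(1, 4))
qoi_surrogate = 42.0                             # illustrative surrogate prediction
err_pred = model.predict(new_ind)[0]
print(f"predicted error {err_pred:.3f} -> corrected QoI {qoi_surrogate + err_pred:.3f}")
```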

  5. Practical Insights from Initial Studies Related to Human Error Analysis Project (HEAP)

    International Nuclear Information System (INIS)

    Follesoe, Knut; Kaarstad, Magnhild; Droeivoldsmo, Asgeir; Hollnagel, Erik; Kirwan; Barry

    1996-01-01

    This report presents practical insights made from an analysis of the three initial studies in the Human Error Analysis Project (HEAP), and the first study in the US NRC Staffing Project. These practical insights relate to our understanding of diagnosis in Nuclear Power Plant (NPP) emergency scenarios and, in particular, the factors that influence whether a diagnosis will succeed or fail. The insights reported here focus on three inter-related areas: (1) the diagnostic strategies and styles that have been observed in single operator and team-based studies; (2) the qualitative aspects of the key operator support systems, namely VDU interfaces, alarms, training and procedures, that have affected the outcome of diagnosis; and (3) the overall success rates of diagnosis and the error types that have been observed in the various studies. With respect to diagnosis, certain patterns have emerged from the various studies, depending on whether operators were alone or in teams, and on their familiarity with the process. Some aspects of the interface and alarm systems were found to contribute to diagnostic failures while others supported performance and recovery. Similar results were found for training and experience. Furthermore, the availability of procedures did not preclude the need for some diagnosis. With respect to HRA and PSA, it was possible to record the failure types seen in the studies, and in some cases to give crude estimates of the failure likelihood for certain scenarios. Although these insights are interim in nature, they do show the type of information that can be derived from these studies. More importantly, they clarify aspects of our understanding of diagnosis in NPP emergencies, including implications for risk assessment, operator support systems development, and for research into diagnosis in a broader range of fields than the nuclear power industry. (author)

  6. Analysis Of The Effect Of Fuel Enrichment Error On Neutronic Properties Of The RSG-GAS Core

    International Nuclear Information System (INIS)

    Saragih, Tukiran; Pinem, Surian

    2002-01-01

    The analysis of the effect of fuel enrichment error on neutronic properties has been carried out. The fuel enrichment could be incorrect because of wrong fabrication. Therefore it is necessary to analyze the effect of fuel enrichment error to determine the maximum fuel enrichment, in percent, that can be accepted in the core. The analysis was done by a simulation method. The RSG-GAS core was simulated with 5 standard fuel elements and 1 control element having the wrong enrichment when inserted into the core. Fuel enrichment errors of 20%, 25% and 30% were then simulated using the WIMSD/4 and Batan-2DIFF codes. The cross sections of the RSG-GAS core materials were generated by the WIMSD/4 code in 1-D, X-Y geometry and 10 neutron energy groups. Two-dimensional diffusion calculations based on the finite element method were done using the Batan-2DIFF code. Five fuel elements and one control element with changed enrichment were finally arranged as a new core of the RSG-GAS reactor. The neutronic properties can be seen from the eigenvalues (k_eff) as well as from the kinetic properties based on the moderator void reactivity coefficient. The calculated results showed that the errors are still acceptable, with k_eff of 1.097, even up to 25% fuel enrichment but not more than 25.5%

  7. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on the part of operators, designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis.

  8. The error analysis of the determination of the activity coefficients via the isopiestic method

    International Nuclear Information System (INIS)

    Zhou Jun; Chen Qiyuan; Fang Zheng; Liang Yizeng; Liu Shijun; Zhou Yong

    2005-01-01

    Error analysis is very important to experimental design. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when a regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and that it is preferable to keep the error of the measured osmotic coefficients constant in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it. It is necessary that isopiestic experiments be done on test solutions of lower than 0.1 mol·kg⁻¹; for most electrolyte solutions, it is usually preferable to require the lowest molality to be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should first be arranged by keeping the interval between the logarithms of the molalities nearly constant; second, more high molalities should be included, and we propose to arrange the experimental molalities greater than 1 mol·kg⁻¹ according to an arithmetic progression of the intervals between the molalities. After the experiments, the error of the calculated activity coefficients of the solutes can be calculated from the actual values of the errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values with the equations we have obtained
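
    One illustrative reading of this spacing recipe follows; the exact counts and interval sizes are design choices, not values prescribed by the paper:

```python
import numpy as np

# Molalities 0.05-1 mol/kg: constant interval between the logarithms
low = np.geomspace(0.05, 1.0, num=10)

# Above 1 mol/kg: intervals themselves in arithmetic progression
# (0.25, 0.50, 0.75, ... mol/kg), one possible reading of the proposal
steps = 0.25 * np.arange(1, 7)
high = 1.0 + np.cumsum(steps)

print("experimental molalities (mol/kg):")
print(np.round(np.concatenate([low, high]), 3))
```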

  9. Study on a new framework of Human Reliability Analysis to evaluate soft control execution error in advanced MCRs of NPPs

    International Nuclear Information System (INIS)

    Jang, Inseok; Kim, Ar Ryum; Jung, Wondea; Seong, Poong Hyun

    2016-01-01

    Highlights: • The operating environment of MCRs in NPPs has changed by adopting new HSIs. • Operating actions in NPP advanced MCRs are performed by soft control. • A new HRA framework should be considered in the HRA for advanced MCRs. • An HRA framework for evaluating soft control execution human error is suggested. • The suggested method will be helpful for analyzing human reliability in advanced MCRs. - Abstract: Since the Three Mile Island (TMI)-2 accident, human error has been recognized as one of the main causes of Nuclear Power Plant (NPP) accidents, and numerous studies related to Human Reliability Analysis (HRA) have been carried out. Most of these methods were developed for the conventional type of Main Control Rooms (MCRs). However, the operating environment of MCRs in NPPs has changed with the adoption of new Human-System Interfaces (HSIs) that are based on computer-based technologies. MCRs that include these digital technologies, such as large display panels, computerized procedures, and soft controls, are called advanced MCRs. Among the many features of advanced MCRs, soft controls are particularly important because operating actions in NPP advanced MCRs are performed through them. Due to the differences in interfaces between soft control and hardwired conventional control, different Human Error Probabilities (HEPs) and a new HRA framework should be considered in the HRA for advanced MCRs. To this end, a new HRA framework for evaluating soft control execution human error is suggested by performing a soft control task analysis and reviewing the literature on widely accepted human error taxonomies. Moreover, since most current HRA databases deal with operation in conventional MCRs and are not explicitly designed to deal with digital HSIs, an empirical analysis of human error and error recovery considering soft controls under an advanced MCR mockup was carried out to collect human error data, which is

  10. Dual-energy X-ray absorptiometry: analysis of pediatric fat estimate errors due to tissue hydration effects.

    Science.gov (United States)

    Testolin, C G; Gore, R; Rivkin, T; Horlick, M; Arbo, J; Wang, Z; Chiumello, G; Heymsfield, S B

    2000-12-01

    Dual-energy X-ray absorptiometry (DXA) percent (%) fat estimates may be inaccurate in young children, who typically have high tissue hydration levels. This study was designed to provide a comprehensive analysis of pediatric tissue hydration effects on DXA %fat estimates. Phase 1 was experimental and included three in vitro studies to establish the physical basis of DXA %fat-estimation models. Phase 2 extended phase 1 models and consisted of theoretical calculations to estimate the %fat errors emanating from previously reported pediatric hydration effects. Phase 1 experiments supported the two-compartment DXA soft tissue model and established that the pixel ratio of low to high energy (the R value) is a predictable function of tissue elemental content. In phase 2, modeling of reference body composition values from birth to age 120 mo revealed that %fat errors will arise if a "constant" adult lean soft tissue R value is applied to the pediatric population; the maximum %fat error, approximately 0.8%, would be present at birth. High tissue hydration, as observed in infants and young children, leads to errors in DXA %fat estimates. The magnitude of these errors, based on theoretical calculations, is small and may not be of clinical or research significance.

  11. An analysis of error patterns in children's backward digit recall in noise

    Directory of Open Access Journals (Sweden)

    Homira Osman

    2015-01-01

    Full Text Available The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for a decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item (if digits that were not presented in a list were repeated) and order (if correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing were included. All the children were aged between 7 years and 10 years. Repeated measures analysis of variance (RM-ANOVA) revealed main effects for error type and digit span length. In terms of the listening condition interaction, it was found that order errors occurred more frequently than item errors in the degraded listening condition compared to quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise.
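
    A minimal sketch of the item/order error coding described above; this is a simplified scoring rule, and the study's actual coding scheme may handle repetitions and omissions differently:

```python
def classify_errors(presented, recalled):
    """Split backward-digit-recall errors into item errors (digit not in the
    target list) and order errors (correct digit, wrong position).
    The target for backward recall is the presented list reversed."""
    target = list(reversed(presented))
    item = sum(1 for d in recalled if d not in target)
    order = sum(1 for pos, d in enumerate(recalled)
                if d in target and pos < len(target) and target[pos] != d)
    return item, order

# Example: presented 4-1-7-2, child answers 2-7-4-1 (correct answer is 2-7-1-4);
# the last two digits are correct items in the wrong positions.
print(classify_errors([4, 1, 7, 2], [2, 7, 4, 1]))   # -> (0, 2)
```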

  12. Calculating potential error in sodium MRI with respect to the analysis of small objects.

    Science.gov (United States)

    Stobbe, Robert W; Beaulieu, Christian

    2018-06-01

    To facilitate correct interpretation of sodium MRI measurements, calculation of error with respect to rapid signal decay is introduced and combined with that of spatially correlated noise to assess volume-of-interest (VOI) ²³Na signal measurement inaccuracies, particularly for small objects. Noise and signal decay-related error calculations were verified using twisted projection imaging and a specially designed phantom with different sized spheres of constant elevated sodium concentration. As a demonstration, lesion signal measurement variation (5 multiple sclerosis participants) was compared with that predicted from calculation. Both theory and phantom experiment showed that VOI signal measurement in a large 10-mL, 314-voxel sphere was 20% less than expected on account of point-spread-function smearing when the VOI was drawn to include the full sphere. Volume-of-interest contraction reduced this error but increased noise-related error. Errors were even greater for smaller spheres (40-60% less than expected for a 0.35-mL, 11-voxel sphere). Image-intensity VOI measurements varied and increased with multiple sclerosis lesion size in a manner similar to that predicted from theory. Correlation suggests large underestimation of ²³Na signal in small lesions. Acquisition-specific measurement error calculation aids ²³Na MRI data analysis and highlights the limitations of current low-resolution methodologies. Magn Reson Med 79:2968-2977, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  13. An analysis of error patterns in children's backward digit recall in noise

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R.

    2015-01-01

    The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for a decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and multitalker babble at -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item (if digits that were not presented in a list were repeated) and order (if correct digits were repeated but in an incorrect order). Fifty five children with normal hearing were included. All the children were aged between 7 years and 10 years. Repeated measures of analysis of variance (RM-ANOVA) revealed the main effects for error type and digit span length. In terms of listening condition interaction it was found that the order errors occurred more frequently than item errors in the degraded listening condition compared to quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. Decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors was related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise. PMID:26168949

  14. Classification based upon gene expression data: bias and precision of error rates.

    Science.gov (United States)

    Wood, Ian A; Visscher, Peter M; Mengersen, Kerrie L

    2007-06-01

    Gene expression data offer a large number of potentially useful predictors for the classification of tissue samples into classes, such as diseased and non-diseased. The predictive error rate of classifiers can be estimated using methods such as cross-validation. We have investigated issues of interpretation and potential bias in the reporting of error rate estimates. The issues considered here are optimization and selection biases, sampling effects, measures of misclassification rate, baseline error rates, two-level external cross-validation and a novel proposal for detection of bias using the permutation mean. Reporting an optimal estimated error rate incurs an optimization bias. Downward bias of 3-5% was found in an existing study of classification based on gene expression data and may be endemic in similar studies. Using a simulated non-informative dataset and two example datasets from existing studies, we show how bias can be detected through the use of label permutations and avoided using two-level external cross-validation. Some studies avoid optimization bias by using single-level cross-validation and a test set, but error rates can be more accurately estimated via two-level cross-validation. In addition to estimating the simple overall error rate, we recommend reporting class error rates plus where possible the conditional risk incorporating prior class probabilities and a misclassification cost matrix. We also describe baseline error rates derived from three trivial classifiers which ignore the predictors. R code which implements two-level external cross-validation with the PAMR package, experiment code, dataset details and additional figures are freely available for non-commercial use from http://www.maths.qut.edu.au/profiles/wood/permr.jsp
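
    The permutation check can be sketched as follows: on non-informative data with permuted labels, an unbiased cross-validation procedure should hover near chance accuracy, so a systematically higher permutation mean signals bias (e.g. feature selection performed outside the cross-validation loop). Classifier choice and data dimensions below are illustrative:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(60, 500))                # non-informative "expression" data
y = np.repeat([0, 1], 30)

clf = KNeighborsClassifier(n_neighbors=3)

# Permutation mean: average cross-validated accuracy over label permutations;
# values well above 0.5 on non-informative data indicate a biased procedure.
perm_scores = []
for _ in range(20):
    y_perm = rng.permutation(y)
    perm_scores.append(cross_val_score(clf, X, y_perm, cv=5).mean())

print(f"permutation-mean accuracy: {np.mean(perm_scores):.3f} (expect ~0.5)")
```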

  15. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight about the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors namely: (1) design of medication charts which complicates order processing and record keeping (2) lack of coordination mechanisms between participants which results in misalignment of local practices (3) reliance on restricted communication bandwidth channels mainly telephone and fax which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding

  16. Quantization error of CCD cameras and their influence on phase calculation in fringe pattern analysis.

    Science.gov (United States)

    Skydan, Oleksandr A; Lilley, Francis; Lalor, Michael J; Burton, David R

    2003-09-10

    We present an investigation into the phase errors that occur in fringe pattern analysis that are caused by quantization effects. When acquisition devices with a limited value of camera bit depth are used, there are a limited number of quantization levels available to record the signal. This may adversely affect the recorded signal and adds a potential source of instrumental error to the measurement system. Quantization effects also determine the accuracy that may be achieved by acquisition devices in a measurement system. We used the Fourier fringe analysis measurement technique. However, the principles can be applied equally well for other phase measuring techniques to yield a phase error distribution that is caused by the camera bit depth.
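
    A compact way to reproduce the effect numerically: quantize a synthetic fringe to a given bit depth and measure the phase recovered from its Fourier carrier peak. The pure 1-D cosine fringe and noise-free camera are simplifying assumptions:

```python
import numpy as np

def fourier_phase(signal):
    """Fourier fringe analysis: locate the carrier peak and return its phase."""
    spec = np.fft.rfft(signal)
    k = np.argmax(np.abs(spec[1:])) + 1          # carrier frequency bin
    return np.angle(spec[k])

N, cycles, true_phase = 512, 16, 0.7
x = np.arange(N)
fringe = 0.5 + 0.5 * np.cos(2 * np.pi * cycles * x / N + true_phase)

for bits in (12, 10, 8, 6, 4):
    levels = 2 ** bits - 1
    quantized = np.round(fringe * levels) / levels   # simulate camera bit depth
    err = fourier_phase(quantized) - true_phase
    print(f"{bits:2d}-bit camera: phase error = {err:+.2e} rad")
```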

  17. Error analysis and system improvements in phase-stepping methods for photoelasticity

    International Nuclear Information System (INIS)

    Wenyan Ji

    1997-11-01

    In the past, automated photoelasticity has been demonstrated to be one of the most efficient techniques for determining the complete state of stress in a 3-D component. However, the measurement accuracy, which depends on many aspects of both the theoretical foundations and the experimental procedures, has not been studied properly. The objective of this thesis is to reveal the intrinsic properties of the errors, provide methods for reducing them, and finally improve the system accuracy. A general formulation for a polariscope with all the optical elements in arbitrary orientations was deduced using the method of Mueller matrices. The deduction of this formulation indicates an inherent connectivity among the optical elements and gives knowledge of the errors. In addition, this formulation also shows a common foundation among the photoelastic techniques; consequently, these techniques share many common error sources. The phase-stepping system proposed by Patterson and Wang was used as an exemplar to analyse the errors and to propose improvements. This system can be divided into four parts according to function, namely the optical system, the light source, the image acquisition equipment and the image analysis software. All possible error sources were investigated separately, and methods for reducing the influence of the errors and improving the system accuracy are presented. To identify the contribution of each possible error to the final system output, a model was used to simulate the errors and analyse their consequences. The contribution to the results from different error sources can therefore be estimated quantitatively, and the accuracy of the system can finally be improved. For a conventional polariscope, the system accuracy can be as high as 99.23% for the fringe order, with an error of less than 5 degrees for the isoclinic angle. The PSIOS system is limited to low fringe orders. For a fringe order of less than 1.5, the accuracy is 94.60% for fringe
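
    The Mueller-matrix formulation mentioned above can be sketched for a plane polariscope with ideal elements (my own construction for illustration, not the thesis formulation). For crossed polarizers, the computed output should match the textbook dark-field intensity 0.5·sin²(2φ)·sin²(δ/2):

```python
import numpy as np

def rot(theta):
    """Mueller rotation matrix for a frame rotation by theta."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, -s, 0],
                     [0, s,  c, 0],
                     [0, 0,  0, 1]])

def retarder(delta):
    """Mueller matrix of a linear retarder, fast axis horizontal."""
    c, s = np.cos(delta), np.sin(delta)
    return np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, c, -s],
                     [0, 0, s,  c]])

POL = 0.5 * np.array([[1, 1, 0, 0],       # ideal horizontal linear polarizer
                      [1, 1, 0, 0],
                      [0, 0, 0, 0],
                      [0, 0, 0, 0]])

def element(M, theta):
    """Optical element M rotated to azimuth theta."""
    return rot(theta) @ M @ rot(-theta)

def intensity(delta, phi, alpha):
    """Unpolarized unit input -> polarizer -> specimen (retardance delta,
    isoclinic angle phi) -> analyzer at angle alpha; returns output intensity."""
    S_in = np.array([1.0, 0.0, 0.0, 0.0])
    S_out = element(POL, alpha) @ element(retarder(delta), phi) @ POL @ S_in
    return S_out[0]

# Dark-field plane polariscope (crossed analyzer); compare with the formula.
delta, phi = 1.2, np.pi / 8
print(intensity(delta, phi, np.pi / 2),
      0.5 * np.sin(2 * phi) ** 2 * np.sin(delta / 2) ** 2)
```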

  18. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    Science.gov (United States)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  19. Missing texture reconstruction method based on error reduction algorithm using Fourier transform magnitude estimation scheme.

    Science.gov (United States)

    Ogawa, Takahiro; Haseyama, Miki

    2013-03-01

    A missing texture reconstruction method based on an error reduction (ER) algorithm, including a novel estimation scheme for Fourier transform magnitudes, is presented in this brief. In our method, the Fourier transform magnitude is estimated for a target patch including missing areas, and the missing intensities are estimated by retrieving the phase based on the ER algorithm. Specifically, by monitoring the errors to which the ER algorithm converges, known patches whose Fourier transform magnitudes are similar to that of the target patch are selected from the target image. The Fourier transform magnitude of the target patch is then estimated from those of the selected known patches and their corresponding errors. Consequently, by using the ER algorithm, we can estimate both the Fourier transform magnitude and phase to reconstruct the missing areas.
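
    The phase-retrieval core of the method can be sketched with a classic error-reduction loop, assuming the Fourier magnitude of the target patch is already known (the paper's contribution is estimating it from similar known patches). The texture, patch size and iteration count below are toy assumptions:

```python
import numpy as np

def er_inpaint(patch, mask, magnitude, n_iter=200):
    """Error-reduction iteration: enforce the given Fourier magnitude, then
    restore the known pixels; missing pixels (mask == False) evolve freely."""
    est = np.where(mask, patch, patch[mask].mean())     # init missing with mean
    for _ in range(n_iter):
        spec = np.fft.fft2(est)
        spec = magnitude * np.exp(1j * np.angle(spec))  # magnitude constraint
        est = np.real(np.fft.ifft2(spec))               # back to image domain
        est[mask] = patch[mask]                         # object-domain constraint
    return est

# Toy striped texture patch with a missing block; assume the true magnitude
# is known here for demonstration purposes.
x = np.arange(16)
true = np.sin(2 * np.pi * 3 * x / 16)[None, :] * np.ones((16, 1))
mask = np.ones((16, 16), bool)
mask[5:9, 5:9] = False                                   # missing block

rec = er_inpaint(np.where(mask, true, 0.0), mask, np.abs(np.fft.fft2(true)))
print("max abs error on missing block:", np.abs(rec - true)[~mask].max())
```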

  20. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Full Text Available A method of evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze the fault diagnosis and mean time to failure (MTTF) of space instruments. A model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) under the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ (error/particle/cm²), while the MTTF is approximately 110.7 h.

  1. On the Relationship Between Anxiety and Error Monitoring: A meta-analysis and conceptual framework

    Directory of Open Access Journals (Sweden)

    Jason eMoser

    2013-08-01

    Full Text Available Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, small-to-medium relationship with enhanced ERN (r = -.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = -.35) than those utilizing other measures of anxiety (r = -.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur.

  2. Error rates in forensic DNA analysis: definition, numbers, impact and communication.

    Science.gov (United States)

    Kloosterman, Ate; Sjerps, Marjan; Quak, Astrid

    2014-09-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and published. The forensic domain is lagging behind concerning this transparency for various reasons. In this paper we provide definitions and observed frequencies for different types of errors at the Human Biological Traces Department of the Netherlands Forensic Institute (NFI) over the years 2008-2012. Furthermore, we assess their actual and potential impact and describe how the NFI deals with the communication of these numbers to the legal justice system. We conclude that the observed relative frequency of quality failures is comparable to studies from clinical laboratories and genetic testing centres. Furthermore, this frequency is constant over the five-year study period. The most common causes of failures related to the laboratory process were contamination and human error. Most human errors could be corrected, whereas gross contamination in crime samples often resulted in irreversible consequences. Hence this type of contamination is identified as the most significant source of error. Of the known contamination incidents, most were detected by the NFI quality control system before the report was issued to the authorities, and thus did not lead to flawed decisions like false convictions. However, in a very limited number of cases crucial errors were detected after the report was issued, sometimes with severe consequences. Many of these errors were made in the post-analytical phase. The error rates reported in this paper are useful for quality improvement and benchmarking, and contribute to an open research culture that promotes public trust. However, they are irrelevant in the context of a particular case; there, case-specific probabilities of undetected errors are needed.

  3. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. II. Error analysis and generalization

    Directory of Open Access Journals (Sweden)

    D. L. Fehl

    2010-12-01

    Full Text Available A five-channel, filtered-x-ray-detector (XRD) array has been used to measure time-dependent, soft-x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA). The preceding, companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120402 (2010)] describes an algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by this instrument. The unfolded spectrum S_{unfold}(E,t) is based on (N=5) first-order B-splines (histograms) in contiguous unfold bins j=1,…,N; the recovered x-ray flux F_{unfold}(t) is estimated as ∫S_{unfold}(E,t)dE, where E is x-ray energy and t is time. This paper adds two major improvements to the preceding unfold analysis: (a) Error analysis.—Both data noise and response-function uncertainties are propagated into S_{unfold}(E,t) and F_{unfold}(t). Noise factors ν are derived from simulations to quantify algorithm-induced changes in the noise-to-signal ratio (NSR) for S_{unfold} in each unfold bin j and for F_{unfold} (ν ≡ NSR_{output}/NSR_{input}): for S_{unfold}, 1 ≲ ν_{j} ≲ 30, an outcome that is strongly spectrally dependent; for F_{unfold}, 0.6 ≲ ν_{F} ≲ 1, a result that is less spectrally sensitive and corroborated independently. For nominal z-pinch experiments, the combined uncertainty (noise and calibrations) in F_{unfold}(t) at peak is estimated to be ∼15%. (b) Generalization of the unfold method.—Spectral sensitivities (called here passband functions) are constructed for S_{unfold} and F_{unfold}. Predicting how the unfold algorithm reconstructs arbitrary spectra is thereby reduced to quadratures. These tools allow one to understand and quantitatively predict algorithmic distortions (including negative artifacts), to identify potentially troublesome spectra, and to design more useful response functions.
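    The noise-factor idea (ν ≡ NSR_{output}/NSR_{input}) can be illustrated for a generic linear unfold, though the paper's algorithm is more elaborate. The sketch below propagates multiplicative channel noise through a toy 5x5 response matrix and reports a per-bin ν; the matrix, spectrum, and noise level are arbitrary assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        A = rng.uniform(0.1, 1.0, (5, 5))           # toy 5-channel response matrix
        s_true = np.array([1.0, 2.0, 3.0, 2.0, 1.0])
        y_clean = A @ s_true
        nsr_in = 0.03                               # 3% relative noise per channel

        S = np.empty((2000, 5))
        for i in range(2000):
            y = y_clean * (1.0 + rng.normal(0.0, nsr_in, 5))
            S[i] = np.linalg.solve(A, y)            # the "unfold" (here: exact inverse)
        nu = S.std(axis=0) / np.abs(s_true) / nsr_in
        print("per-bin noise factors:", np.round(nu, 2))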

  4. Utilizing measure-based feedback in control-mastery theory: A clinical error.

    Science.gov (United States)

    Snyder, John; Aafjes-van Doorn, Katie

    2016-09-01

    Clinical errors and ruptures are an inevitable part of clinical practice. Oftentimes, therapists are unaware that a clinical error or rupture has occurred, leaving no space for repair and potentially leading to patient dropout and/or less effective treatment. One way to overcome our blind spots is by frequently and systematically collecting measure-based feedback from the patient. Patient feedback measures that focus on the process of psychotherapy, such as the Patient's Experience of Attunement and Responsiveness scale (PEAR), can be used in conjunction with treatment outcome measures, such as the Outcome Questionnaire 45.2 (OQ-45.2), to monitor the patient's therapeutic experience and progress. The regular use of these types of measures can aid clinicians in the identification of clinical errors and the associated patient deterioration that might otherwise go unnoticed and unaddressed. The current case study describes an instance of clinical error that occurred during the 2-year treatment of a highly traumatized young woman. The clinical error was identified using measure-based feedback and subsequently understood and addressed from the theoretical standpoint of the control-mastery theory of psychotherapy. An alternative hypothetical response is also presented and explained using control-mastery theory. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  5. Corpus-Based Websites to Promote Learner Autonomy in Correcting Writing Collocation Errors

    Directory of Open Access Journals (Sweden)

    Pham Thuy Dung

    2016-12-01

    Full Text Available The recent yet powerful emergence of e-learning and of using online resources in learning EFL (English as a Foreign Language) has helped promote learner autonomy in language acquisition, including self-correcting one's mistakes. This pilot study, though conducted on a modest sample of 25 second-year students majoring in Business English at Hanoi Foreign Trade University, is an initial attempt to investigate the feasibility of using corpus-based websites to promote learner autonomy in correcting collocation errors in EFL writing. The data were collected using a pre-questionnaire and a post-interview aiming to find out the participants' change in belief and attitude toward learner autonomy in correcting collocation errors in writing, the extent of their success in using the corpus-based websites to self-correct the errors, and the change in their confidence in self-correcting the errors using the websites. The findings show that a significant majority of students shifted their belief and attitude toward a more autonomous mode of learning, enjoyed fair success in using the websites to self-correct the errors, and became more confident. The study also yields the implication that face-to-face training in how to use these online tools is vital to the later confidence and success of the learners.

  6. ERROR BOUNDS FOR SURFACE AREA ESTIMATORS BASED ON CROFTON’S FORMULA

    Directory of Open Access Journals (Sweden)

    Markus Kiderlen

    2011-05-01

    Full Text Available According to Crofton's formula, the surface area S(A) of a sufficiently regular compact set A in R^d is proportional to the mean of all total projections p_A(u) on a linear hyperplane with normal u, uniformly averaged over all unit vectors u. In applications, p_A(u) is only measured in k directions and the mean is approximated by a finite weighted sum Ŝ(A) of the total projections in these directions. The choice of the weights depends on the selected quadrature rule. We define an associated zonotope Z (depending only on the projection directions and the quadrature rule), and show that the relative error Ŝ(A)/S(A) is bounded from below by the inradius of Z and from above by the circumradius of Z. Applying a strengthened isoperimetric inequality due to Bonnesen, we show that the rectangular quadrature rule does not give the best possible error bounds for d = 2. In addition, we derive the asymptotic behavior of the error (with increasing k) in the planar case. The paper concludes with applications to surface area estimation in design-based digital stereology, where we show that the weights due to Bonnesen's inequality are better than the usual weights based on the rectangular rule and almost optimal in the sense that the relative error of the surface area estimator is very close to the minimal error.
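    In the plane, Crofton's formula reduces to Cauchy's formula (perimeter = π times mean width), which makes the estimator easy to demonstrate. The sketch below estimates the perimeter of the unit square (true value 4) from k equally spaced directions with rectangular-rule weights; it illustrates the estimator being analyzed, not the paper's improved Bonnesen weights.

        import numpy as np

        def width_unit_square(theta):
            # Total projection (width) of the unit square in direction theta
            return np.abs(np.cos(theta)) + np.abs(np.sin(theta))

        def perimeter_estimate(k):
            thetas = np.pi * np.arange(k) / k                # k directions in [0, pi)
            return np.pi * width_unit_square(thetas).mean()  # rectangular rule

        for k in (2, 3, 4, 8, 16):
            est = perimeter_estimate(k)
            print(f"k={k:2d}: estimate {est:.4f}, relative error {est / 4.0 - 1:+.2%}")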

  7. Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

    Science.gov (United States)

    Fan, Quan-Yong; Yang, Guang-Hong; Ye, Dan

    2018-04-01

    In this paper, the problem of adaptive actor-critic (AC) tracking control is investigated for a class of continuous-time nonlinear systems with unknown nonlinearities and quantized inputs. Different from the existing results based on reinforcement learning, the tracking error constraints are considered and new critic functions are constructed to improve the performance further. To ensure that the tracking errors keep within the predefined time-varying boundaries, a tracking error transformation technique is used to constitute an augmented error system. Specific critic functions, rather than the long-term cost function, are introduced to supervise the tracking performance and tune the weights of the AC neural networks (NNs). A novel adaptive controller with a special structure is designed to reduce the effect of the NN reconstruction errors, input quantization, and disturbances. Based on the Lyapunov stability theory, the boundedness of the closed-loop signals and the desired tracking performance can be guaranteed. Finally, simulations on two connected inverted pendulums are given to illustrate the effectiveness of the proposed method.
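    The error-transformation step can be illustrated in isolation. The sketch below uses a common prescribed-performance-style construction (an exponentially shrinking bound ρ(t) and a log-ratio mapping that blows up at the boundary); the paper's exact transformation and constants may differ, so treat this as an assumption-laden sketch rather than the authors' design.

        import numpy as np

        def rho(t, rho0=2.0, rho_inf=0.2, a=1.0):
            # Exponentially shrinking, time-varying error bound
            return (rho0 - rho_inf) * np.exp(-a * t) + rho_inf

        def transform(e, bound):
            # Map e in (-bound, bound) to an unconstrained variable; the
            # transformed error grows without limit as e approaches the boundary
            z = e / bound
            return 0.5 * np.log((1.0 + z) / (1.0 - z))

        t = np.linspace(0.0, 5.0, 6)
        e = np.full_like(t, 0.15)            # a constant 0.15 tracking error
        print(np.round(transform(e, rho(t)), 3))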

  8. Performance Analysis of Free-Space Optical Links Over Malaga (M) Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique; Yilmaz, Ferkan; Alouini, Mohamed-Slim

    2015-01-01

    In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e., intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system, all in terms of the Meijer G function except for the moments, which are in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where we present closed-form lower-bound results), all in terms of the Meijer G function except for the SI, which is in terms of simple elementary functions. Additionally, we derive asymptotic results for all the expressions derived earlier in terms of the Meijer G function in the high-SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer G function. We also derive new asymptotic expressions for the ergodic capacity in the low- as well as high-SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.
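    A Monte-Carlo check of one such metric can be sketched as follows, using the Gamma-Gamma special case of the Malaga model together with the usual Rayleigh-jitter/Gaussian-beam pointing-error model. All parameter values are illustrative assumptions, not the paper's, and the exact Malaga fading law is simplified here.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 1_000_000
        alpha, beta = 4.2, 1.4                     # Gamma-Gamma turbulence params
        h_a = rng.gamma(alpha, 1/alpha, n) * rng.gamma(beta, 1/beta, n)
        r = rng.rayleigh(0.3, n)                   # radial jitter (in beam widths)
        h_p = np.exp(-2.0 * r**2 / 2.5**2)         # Gaussian-beam pointing loss (A0=1)
        snr = 20.0 * (h_a * h_p) ** 2              # IM/DD: SNR proportional to h^2
        print("outage probability P(SNR < 1):", np.mean(snr < 1.0))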

  10. Error Ratio Analysis: Alternate Mathematics Assessment for General and Special Educators.

    Science.gov (United States)

    Miller, James H.; Carr, Sonya C.

    1997-01-01

    Eighty-seven elementary students in grades four, five, and six were administered a 30-item multiplication instrument to assess performance in computation across grade levels. An interpretation of student performance using error ratio analysis is provided, and the use of this method with groups of students for instructional decision making is…

  11. Diction and Expression in Error Analysis Can Enhance Academic Writing of L2 University Students

    Science.gov (United States)

    Sajid, Muhammad

    2016-01-01

    Without proper linguistic competence in the English language, academic writing is one of the most challenging tasks for L2 novice writers, especially in genre-specific disciplines. This paper examines the role of diction and expression, through error analysis, in the English-language academic writing of L2 novice writers in interdisciplinary texts…

  12. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy

    Indian Academy of Sciences (India)

    2016-01-27

    Jan 27, 2016 ... Error Analysis of Ia Supernova and Query on Cosmic Dark Energy. Qiuhe Peng, Yiming Hu, Kun ... https://www.ias.ac.in/article/fulltext/joaa/035/03/0253-0256

  13. Error analysis of some Galerkin - least squares methods for the elasticity equations

    International Nuclear Information System (INIS)

    Franca, L.P.; Stenberg, R.

    1989-05-01

    We consider the recent technique of stabilizing mixed finite element methods by augmenting the Galerkin formulation with least-squares terms calculated separately on each element. The error analysis is performed in a unified manner, yielding improved results for some methods introduced earlier. In addition, a new formulation is introduced and analyzed.

  14. Time-series analysis of Nigeria rice supply and demand: Error ...

    African Journals Online (AJOL)

    The study carried out a time-series analysis of Nigeria's rice supply and demand with a view to determining any long-run equilibrium between them using the Error Correction Model (ECM) approach. The data used for the study represent the annual series of 1960-2007 (47 years) for rice supply and demand in Nigeria, ...
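    A minimal two-step Engle-Granger ECM of the kind the abstract implies can be sketched on synthetic cointegrated series; the real study's annual Nigerian rice data are not reproduced here, so every series and coefficient below is illustrative.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 47                                         # annual data, 1960-2007
        demand = np.cumsum(rng.normal(0.5, 1.0, n))    # an I(1) series
        supply = 0.8 * demand + rng.normal(0, 0.5, n)  # cointegrated with demand

        # Step 1: long-run (levels) regression; residuals = disequilibrium
        long_run = sm.OLS(supply, sm.add_constant(demand)).fit()
        ect = long_run.resid

        # Step 2: short-run dynamics with the lagged error-correction term
        dy, dx = np.diff(supply), np.diff(demand)
        X = sm.add_constant(np.column_stack([dx, ect[:-1]]))
        ecm = sm.OLS(dy, X).fit()
        print("speed-of-adjustment coefficient:", round(ecm.params[2], 3))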

  15. Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder

    Science.gov (United States)

    Hall, Steven T.; Post, Christopher J.

    2009-01-01

    Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…

  16. Error Correction of Measured Unstructured Road Profiles Based on Accelerometer and Gyroscope Data

    Directory of Open Access Journals (Sweden)

    Jinhua Han

    2017-01-01

    Full Text Available This paper describes a noncontact acquisition system composed of several time-synchronized laser height sensors, accelerometers, a gyroscope, and so forth, used to collect road profiles while a vehicle rides on unstructured roads. A method of correcting the road profiles based on the accelerometer and gyroscope data is proposed to eliminate the adverse effects of vehicle vibration and attitude changes. Because the power spectral density (PSD) of the gyro attitudes concentrates in the low-frequency band, a method called frequency division is presented to divide the road profiles into two parts: a high-frequency part and a low-frequency part. The vibration error of the road profiles is corrected using displacement data obtained by double integration of the measured acceleration data. After building a mathematical model between the gyro attitudes and the road profiles, the gyro attitude signals are separated from the low-frequency road profile by a sliding-block-overlap method based on correlation analysis. The accuracy and limitations of the system are analyzed, and its validity is verified by implementing the system on wheeled equipment for measuring road profiles at a vehicle testing ground. The paper offers an accurate and practical approach to obtaining unstructured road profiles for road simulation tests.
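    The vibration-correction idea (subtract double-integrated acceleration from the raw laser profile, then filter by frequency band to control integration drift) can be sketched as follows. The sampling rate, vibration model, noise level, and filter cutoffs are illustrative assumptions rather than the paper's values.

        import numpy as np
        from scipy.signal import butter, filtfilt
        from scipy.integrate import cumulative_trapezoid

        fs = 500.0                                   # sample rate, Hz
        t = np.arange(0, 20, 1 / fs)
        rng = np.random.default_rng(0)
        b_lp, a_lp = butter(2, 5 / (fs / 2))
        road = filtfilt(b_lp, a_lp, rng.normal(0, 0.01, t.size))  # smooth "true" profile
        body_acc = 3.0 * np.sin(2 * np.pi * 1.5 * t)              # body vibration, m/s^2
        body_disp = cumulative_trapezoid(
            cumulative_trapezoid(body_acc, t, initial=0), t, initial=0)
        raw = road + body_disp                       # raw laser height measurement

        meas_acc = body_acc + rng.normal(0, 0.05, t.size)         # noisy accelerometer
        est_disp = cumulative_trapezoid(
            cumulative_trapezoid(meas_acc, t, initial=0), t, initial=0)
        corrected = raw - est_disp                   # remove vibration displacement
        b_hp, a_hp = butter(2, 0.2 / (fs / 2), btype="highpass")  # strip drift
        corrected = filtfilt(b_hp, a_hp, corrected)
        ref = filtfilt(b_hp, a_hp, road)
        print("rms error:", float(np.sqrt(np.mean((corrected - ref) ** 2))))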

  17. Analyse des erreurs dans les calculs sur ordinateurs / Error Analysis in Computing

    Directory of Open Access Journals (Sweden)

    Vignes J.

    2006-11-01

    Full Text Available This paper describes a new method for evaluating the error in the results of computation of an algorithm, error due to the limited-precision arithmetic of the machine. The basic idea underlying the method is that while in algebra a given algorithm provides a single result r, this same algorithm carried out on a computer provides a set R of numerical results that are all representative of the exact algebraic result r. The permutation-perturbation method described here can be used to obtain the elements of R. The perturbation acts on the data and results of each elementary operation, and the permutation acts on the order in which operations are carried out. A statistical analysis of the elements of R is performed to determine the error committed. In practice, 2 to 4 R elements are sufficient for determining the error.
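    A hedged sketch of the perturbation half of the method: each elementary operation's result is randomly perturbed in its last bit, the run is repeated a few times, and the scatter of the results estimates the number of significant digits. This is a simplified CESTAC-style illustration, not the authors' full permutation-perturbation implementation.

        import math, random

        random.seed(0)

        def perturb(x):
            # Randomly perturb the result of an elementary operation by one ulp
            return x + random.choice((-1.0, 1.0)) * math.ulp(x)

        def harmonic_sum(n=10_000):
            s = 0.0
            for k in range(1, n + 1):
                s = perturb(s + perturb(1.0 / k))
            return s

        runs = [harmonic_sum() for _ in range(3)]
        mean = sum(runs) / len(runs)
        std = (sum((x - mean) ** 2 for x in runs) / (len(runs) - 1)) ** 0.5
        digits = math.log10(abs(mean) / std) if std > 0 else 15.9
        print(f"sum = {mean:.15f}, about {digits:.1f} significant decimal digits")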

  18. Error Made in Conversation by Indonesian Learners Learning English Based on Syntax and Exchanging Information

    Directory of Open Access Journals (Sweden)

    Melania Wiannastiti

    2014-10-01

    Full Text Available In learning a second or foreign language (L2), learners should master several competences. Normally, L2 learners first master linguistic competence, which includes vocabulary, pronunciation, and grammar. This study aims to identify the syntax errors made by L2 learners in conversation, as well as their errors in exchanging information. The data were recorded conversations of first-semester Visual Communication Design students at Binus University taking the English Entrant course. Error analysis was used to analyze the data from two points of view: syntactic taxonomy and exchanging information. The taxonomy classifies errors into omission, addition, misformation, and misordering. The exchanging-information point of view examines errors in the finite element and mood. The results show that L2 learners made errors in grammar and in exchanging information because they are influenced by their L1: they tend to transfer from L1 to L2 rather than constructing the utterances in L2.

  19. Students' Errors in Solving the Permutation and Combination Problems Based on Problem Solving Steps of Polya

    Science.gov (United States)

    Sukoriyanto; Nusantara, Toto; Subanji; Chandra, Tjang Daniel

    2016-01-01

    This article was written based on the results of a study evaluating students' errors in solving permutation and combination problems in terms of Polya's problem-solving steps. Twenty-five students were asked to do four problems related to permutation and combination. The research results showed that the students still made mistakes in…

  20. Error Concealment for 3-D DWT Based Video Codec Using Iterative Thresholding

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Forchhammer, Søren; Codreanu, Marian

    2017-01-01

    Error concealment for video coding based on a 3-D discrete wavelet transform (DWT) is considered. We assume that the video sequence has a sparse representation in a known basis different from the DWT, e.g., in a 2-D discrete cosine transform basis. Then, we formulate the concealment problem as l1...
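    The abstract is cut off, but the stated l1 formulation in a sparsifying basis suggests recovery by iterative thresholding. The sketch below fills missing samples of a DCT-sparse 1-D signal by alternating a data-consistency step with soft thresholding of the DCT coefficients; it illustrates the general technique only, not the authors' 3-D DWT codec, and the signal, mask, and threshold are illustrative.

        import numpy as np
        from scipy.fft import dct, idct

        def inpaint(y, known, lam=0.2, n_iter=300):
            # Iterative thresholding: alternate a data-consistency step with
            # soft thresholding of the DCT coefficients (l1 proximal step).
            x = np.zeros_like(y)
            for _ in range(n_iter):
                x[known] = y[known]
                c = dct(x, norm="ortho")
                c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
                x = idct(c, norm="ortho")
            return x

        n = 256
        t = np.arange(n)
        sig = np.cos(2*np.pi*4*t/n) + 0.5*np.cos(2*np.pi*9*t/n)  # DCT-compressible
        known = np.random.default_rng(0).random(n) > 0.3          # ~30% samples missing
        rec = inpaint(np.where(known, sig, 0.0), known)
        print("max abs error on missing samples:", np.abs(rec - sig)[~known].max())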