WorldWideScience

Sample records for release error analysis

  1. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
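
    A minimal sketch of the idea, not the authors' code: the paper works in MATLAB with the INTLAB toolbox, so the tiny Interval class and the power-measurement example below are purely illustrative stand-ins for comparing interval bounds against first-order error propagation.

    ```python
    import math

    # Hand-rolled interval type (stand-in for a real interval library such as INTLAB).
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)

        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))

        def width(self):
            return self.hi - self.lo

    def measured(value, uncertainty):
        """Wrap a measurement x ± dx as the interval [x - dx, x + dx]."""
        return Interval(value - uncertainty, value + uncertainty)

    # Illustrative example: power P = I^2 * R with I = 2.00 ± 0.01 A, R = 100 ± 1 ohm.
    I = measured(2.00, 0.01)
    R = measured(100.0, 1.0)
    P = I * I * R
    half_width = P.width() / 2                 # worst-case (interval) error bound

    # First-order propagation for comparison: dP = sqrt((2IR*dI)^2 + (I^2*dR)^2).
    dP = math.sqrt((2 * 2.00 * 100.0 * 0.01) ** 2 + (2.00 ** 2 * 1.0) ** 2)
    print(f"interval result: [{P.lo:.1f}, {P.hi:.1f}] W  (±{half_width:.2f} W)")
    print(f"first-order propagation: ±{dP:.2f} W")
    ```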

  2. ATC operational error analysis.

    Science.gov (United States)

    1972-01-01

    The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...

  3. Skylab water balance error analysis

    Science.gov (United States)

    Leonard, J. I.

    1977-01-01

    Estimates of the precision of the net water balance were obtained for the entire Skylab preflight and inflight phases as well as for the first two weeks of flight. Quantitative estimates of both total sampling errors and instrumentation errors were obtained. It was shown that measurement error is minimal in comparison to biological variability and little can be gained from improvement in analytical accuracy. In addition, a propagation of error analysis demonstrated that total water balance error could be accounted for almost entirely by the errors associated with body mass changes. Errors due to interaction between terms in the water balance equation (covariances) represented less than 10% of the total error. Overall, the analysis provides evidence that daily measurements of body water changes obtained from the indirect balance technique are reasonable, precise, and reliable. The method is not biased toward net retention or loss.

  4. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    Naval Postgraduate School, Monterey, California. Master's thesis; approved for public release, distribution is unlimited. Title: Analysis of Error Propagation within Hierarchical Air Combat Models. Keywords: variance analysis, sampling methods, metamodeling, error propagation, Lanchester equations, agent-based simulation, design of experiments.

  5. Error analysis in laparoscopic surgery

    Science.gov (United States)

    Gantert, Walter A.; Tendick, Frank; Bhoyrul, Sunil; Tyrrell, Dana; Fujino, Yukio; Rangel, Shawn; Patti, Marco G.; Way, Lawrence W.

    1998-06-01

    Iatrogenic complications in laparoscopic surgery, as in any field, stem from human error. In recent years, cognitive psychologists have developed theories for understanding and analyzing human error, and the application of these principles has decreased error rates in the aviation and nuclear power industries. The purpose of this study was to apply error analysis to laparoscopic surgery and evaluate its potential for preventing complications. Our approach is based on James Reason's framework, using a classification of errors according to three performance levels: at the skill-based performance level, slips are caused by attention failures, and lapses result from memory failures. Rule-based mistakes constitute the second level. Knowledge-based mistakes occur at the highest performance level and are caused by shortcomings in conscious processing. These errors committed by the performer 'at the sharp end' occur in typical situations which are often brought about by already built-in latent system failures. We present a series of case studies in laparoscopic surgery in which errors are classified and the influence of intrinsic failures and extrinsic system flaws is evaluated. Most serious technical errors in laparoscopic surgery stem from a rule-based or knowledge-based mistake triggered by cognitive underspecification due to incomplete or illusory visual input information. Error analysis in laparoscopic surgery should be able to improve human performance, and it should detect and help eliminate system flaws. Complication rates in laparoscopic surgery due to technical errors can thus be considerably reduced.

  6. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2015-02-23

    This report presents a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and makes suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  7. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  8. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
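
    The abstract does not give the simulation itself; the sketch below only illustrates the generic Monte Carlo idea of turning assumed sensor error sources into a per-axis standard deviation and a high-percentile total bound. The error sources, magnitudes, and combination rule are assumptions for illustration, not the STS-1 model.

    ```python
    import numpy as np

    # Hypothetical Monte Carlo sketch of an alignment-error budget: each trial
    # perturbs two star-tracker sightings with noise and adds a residual mount
    # calibration bias, then records the per-axis misalignment.
    rng = np.random.default_rng(0)
    n_trials = 100_000
    tracker_noise = 40.0      # arcsec, 1-sigma per sighting (assumed)
    mount_bias_sigma = 30.0   # arcsec, 1-sigma residual mount calibration (assumed)

    # Per-axis error: average of two independent sightings plus a mount bias term.
    per_axis = (rng.normal(0, tracker_noise, (n_trials, 3)) / np.sqrt(2)
                + rng.normal(0, mount_bias_sigma, (n_trials, 3)))

    sigma_axis = per_axis.std(axis=0)              # ~1-sigma per axis
    total = np.linalg.norm(per_axis, axis=1)       # total misalignment magnitude
    print("per-axis 1-sigma [arcsec]:", np.round(sigma_axis, 1))
    print("99.7th percentile of total error [arcsec]:",
          round(np.percentile(total, 99.7), 1))
    ```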

  9. Having Fun with Error Analysis

    Science.gov (United States)

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…

  10. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…
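
    As a rough illustration of the mechanism the paper studies, the sketch below attenuates a standardized effect size by the square root of an assumed score reliability and recomputes power with statsmodels. The effect size, reliability, and sample size are made-up values, and equating error is not modeled here.

    ```python
    from statsmodels.stats.power import TTestIndPower

    # Sketch: measurement error attenuates the observable effect size, which
    # in turn lowers statistical power for a fixed sample size.
    true_effect = 0.5          # standardized effect in the error-free scores (assumed)
    reliability = 0.80         # assumed score reliability (1.0 = no measurement error)
    observed_effect = true_effect * reliability ** 0.5   # classical attenuation

    analysis = TTestIndPower()
    for d, label in [(true_effect, "no measurement error"),
                     (observed_effect, "reliability = 0.80")]:
        power = analysis.power(effect_size=d, nobs1=64, alpha=0.05, ratio=1.0)
        print(f"{label}: power = {power:.2f}")
    ```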

  11. Error Analysis of Band Matrix Method

    OpenAIRE

    Taniguchi, Takeo; Soga, Akira

    1984-01-01

    Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed by using the results of the above error analysis.

  12. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory]; Anderson, Mark C [Los Alamos National Laboratory]; Habib, Salman [Los Alamos National Laboratory]; Klein, Richard [Los Alamos National Laboratory]; Berliner, Mark [Ohio State Univ.]; Covey, Curt [LLNL]; Ghattas, Omar [Univ. of Texas]; Graziani, Carlo [Univ. of Chicago]; Seager, Mark [LLNL]; Sefcik, Joseph [LLNL]; Stark, Philip [UC Berkeley]; Stewart, James [SNL]

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  13. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    Science.gov (United States)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is described, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  14. Error Analysis and the EFL Classroom Teaching

    Science.gov (United States)

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in EFL (English as a Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis), and then comprehensively explores the various reasons that cause errors. The author proposes that teachers should employ…

  15. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  16. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    The main purpose of the present study is to identify and represent TFL learners' writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students' writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to linguistic and intralingual developmental errors.

  17. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis. Subtitle: Belgian (L1) and Danish (L1) learners' use of Chinese (L2) comparative sentences in written production. Authors: Xiaoli Wu, Chun Zhang. Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis… students (N=54 students from LU and N=33 students from AU) participated in the studies; among them 44 are 2nd-year students (n=28 from LU and n=16 from AU) and 43 are 3rd-year students (n=26 from LU and n=17 from AU). Students' writing samples were first collected and the errors on the use of comparative… of the grammatical errors with comparative sentences is developed, which includes comparative item-related errors, comparative result-related errors and blend errors. The results further indicate that these errors could be attributed to negative L1 transfer and overgeneralization of grammatical rules and structures…

  18. Error Analysis in the Teaching of English

    OpenAIRE

    Hasyim, Sunardi

    2002-01-01

    The main purpose of this article is to discuss the importance of error analysis in the teaching of English as a foreign language. Although errors are bad things in learning English as a foreign language, error analysis is advantageous for both learners and teachers. For learners, error analysis is needed to show them which aspects of grammar are difficult for them, whereas for teachers, it is required to evaluate themselves whether they are successful or not in teaching English...

  19. Error Analysis: Past, Present, and Future

    Science.gov (United States)

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  20. ERROR ANALYSIS in the TEACHING of ENGLISH

    Directory of Open Access Journals (Sweden)

    Sunardi Hasyim

    2002-01-01

    The main purpose of this article is to discuss the importance of error analysis in the teaching of English as a foreign language. Although errors are bad things in learning English as a foreign language, error analysis is advantageous for both learners and teachers. For learners, error analysis is needed to show them which aspects of grammar are difficult for them, whereas for teachers, it is required to evaluate themselves whether they are successful or not in teaching English. In this article, the writer presents some English sentences containing grammatical errors. These grammatical errors are analyzed based on the theories presented by linguists. The analysis aims at showing the students the causes and kinds of grammatical errors. In this way, the students are expected to increase their knowledge of English grammar. Keywords: errors, mistake, overt error, covert error, interference, overgeneralization, grammar, interlingual, intralingual, idiosyncrasies.

  1. Experimental research on English vowel errors analysis

    Directory of Open Access Journals (Sweden)

    Huang Qiuhua

    2016-01-01

    Our paper analyzed relevant acoustic parameters of people's speech samples and compared the results with standard English pronunciation, using methods of experimental phonetics together with phonetic analysis software and statistical analysis software. We then summarized the pronunciation errors of college students through the analysis of their English vowel pronunciation and found that college students readily make tongue-position and lip-shape errors when pronouncing vowels. Based on this analysis of pronunciation errors, we put forward targeted voice training for college students' English pronunciation, which eventually increased the students' interest in learning and improved the teaching of English phonetics.

  2. Experimental research on English vowel errors analysis

    OpenAIRE

    Huang Qiuhua

    2016-01-01

    Our paper analyzed relevant acoustic parameters of people's speech samples and compared the results with standard English pronunciation, using methods of experimental phonetics together with phonetic analysis software and statistical analysis software. We then summarized the pronunciation errors of college students through the analysis of their English vowel pronunciation and found that college students readily make tongue-position and lip-shape errors when pronouncing vow...

  3. Analysis of Position Error Headway Protection

    Science.gov (United States)

    1975-07-01

    An analysis is developed to determine safe headway on PRT systems that use point-follower control. Periodic measurements of the position error relative to a nominal trajectory provide warning against the hazards of overspeed and unexpected stop. A co...

  4. An Error Analysis on TFL Learners’ Writings

    OpenAIRE

    ÇERÇİ, Arif; DERMAN, Serdar; BARDAKÇI, Mehmet

    2016-01-01

    The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as foreign language with A1 (beginner) level and completed the process by taking C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choi...

  5. Numeracy, Literacy and Newman's Error Analysis

    Science.gov (United States)

    White, Allan Leslie

    2010-01-01

    Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…

  6. Error Propagation Analysis for Quantitative Intracellular Metabolomics

    Directory of Open Access Journals (Sweden)

    Jana Tillack

    2012-11-01

    Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of intracellular metabolites, including error propagation during metabolome sample processing. Focusing on one specific protocol, we comprehensively investigate all currently known and accessible factors that ultimately impact the accuracy of intracellular metabolite concentration data. All intermediate steps are modeled, and their uncertainty with respect to the final concentration data is rigorously quantified. Finally, on the basis of a comprehensive metabolome dataset of Corynebacterium glutamicum, an integrated error propagation analysis for all parts of the model is conducted, and the most critical steps for intracellular metabolite quantification are detected.
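
    A hedged sketch of the general approach, not the authors' framework: each step of a hypothetical processing chain contributes its own error, and a Monte Carlo run propagates them to the final concentration estimate. All step names and error magnitudes below are assumed for illustration.

    ```python
    import numpy as np

    # Monte Carlo propagation through an assumed sample-processing chain.
    rng = np.random.default_rng(1)
    n = 200_000

    signal   = rng.normal(1.00, 0.03, n)       # normalized peak area, 3% analytical error
    cal_fac  = rng.normal(2.50, 0.10, n)       # calibration factor [uM per area unit]
    dilution = rng.normal(10.0, 0.20, n)       # dilution during quenching/extraction
    biovol   = rng.normal(2.0e-3, 1.0e-4, n)   # aggregated cell volume per sample [L]

    conc = signal * cal_fac * dilution / biovol    # intracellular concentration estimate

    mean, sd = conc.mean(), conc.std()
    print(f"concentration = {mean:.0f} ± {sd:.0f}  (relative error {sd/mean:.1%})")
    ```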

  7. Error Analysis and Propagation in Metabolomics Data Analysis.

    Science.gov (United States)

    Moseley, Hunter N B

    2013-01-01

    Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.

  8. Error analysis of stochastic gradient descent ranking.

    Science.gov (United States)

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of the suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
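
    A simplified sketch of the flavor of algorithm described, under two loud assumptions: a linear scorer replaces the paper's kernel formulation, and the data, step size, and regularization constant are arbitrary. It takes stochastic gradient steps on a pairwise least-squares ranking loss and then checks pairwise agreement.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, d = 500, 5
    X = rng.normal(size=(n, d))
    true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0])
    y = X @ true_w + rng.normal(0, 0.1, n)          # relevance scores (synthetic)

    w = np.zeros(d)
    step, lam = 0.01, 1e-3
    for t in range(50_000):
        i, j = rng.integers(0, n, 2)
        margin = (X[i] - X[j]) @ w - (y[i] - y[j])  # least-squares ranking residual
        grad = margin * (X[i] - X[j]) + lam * w     # grad of 0.5*residual^2 + 0.5*lam*||w||^2
        w -= step * grad

    # Ranking quality: fraction of sampled pairs ordered consistently with y.
    pairs = rng.integers(0, n, (10_000, 2))
    pred_diff = (X[pairs[:, 0]] - X[pairs[:, 1]]) @ w
    true_diff = y[pairs[:, 0]] - y[pairs[:, 1]]
    print("pairwise agreement:", np.mean(np.sign(pred_diff) == np.sign(true_diff)))
    ```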

  9. Error analysis of aspheric surface with reference datum.

    Science.gov (United States)

    Peng, Yanglin; Dai, Yifan; Chen, Shanyong; Song, Ci; Shi, Feng

    2015-07-20

    Severe requirements of location tolerance provide new challenges for optical component measurement, evaluation, and manufacture. Form error, location error, and the relationship between form error and location error need to be analyzed together during error analysis of an aspheric surface with a reference datum. Based on the least-squares optimization method, we develop a least-squares local optimization method to evaluate the form error of an aspheric surface with a reference datum, and then calculate the location error. According to the error analysis of a machined aspheric surface, the relationship between form error and location error is revealed, and the influence on the machining process is stated. For different radii and apertures of the aspheric surface, the change laws are simulated by superimposing normally distributed random noise on an ideal surface. This establishes linkages between machining and error analysis, and provides an effective guideline for error correction.

  10. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors, and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
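
    The sketch below illustrates the Newton-Raphson (Gauss-Newton) positioning step named in the abstract: the receiver position is sought at the intersection of four range spheres. Satellite coordinates and ranges are synthetic, and the receiver clock bias handled in real GPS processing is deliberately left out for brevity.

    ```python
    import numpy as np

    sats = np.array([[15600e3,  7540e3, 20140e3],
                     [18760e3,  2750e3, 18610e3],
                     [17610e3, 14630e3, 13480e3],
                     [19170e3,  6100e3, 18390e3]])    # satellite positions [m] (synthetic)
    true_pos = np.array([1.0e6, 2.0e6, 3.0e6])
    ranges = np.linalg.norm(sats - true_pos, axis=1)  # noise-free pseudoranges

    x = np.zeros(3)                                   # initial guess near Earth's center
    for _ in range(10):
        dist = np.linalg.norm(sats - x, axis=1)
        residual = dist - ranges                      # range mismatch per satellite
        jac = (x - sats) / dist[:, None]              # Jacobian of dist w.r.t. position
        dx, *_ = np.linalg.lstsq(jac, -residual, rcond=None)
        x += dx
        if np.linalg.norm(dx) < 1e-6:
            break

    print("estimated position error [m]:", np.linalg.norm(x - true_pos))
    ```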

  11. 76 FR 42715 - Quarantine Release Errors in Blood Establishments; Public Workshop

    Science.gov (United States)

    2011-07-19

    ... rather than a detailed description and analysis of the problem. Thus, the root causes of QREs are not... release of units with incomplete or absent testing for transfusion-transmitted infectious diseases. On...

  12. Trends in MODIS Geolocation Error Analysis

    Science.gov (United States)

    Wolfe, R. E.; Nishihama, Masahiro

    2009-01-01

    Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within-orbit and yearly thermally induced cyclic variations in the pointing have been found, as well as a general long-term trend.

  13. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, general requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Rule 702 of the Federal Rules of Evidence mandates that judges consider factors such as peer review to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  14. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  15. Error Propagation Analysis for Quantitative Intracellular Metabolomics

    OpenAIRE

    Jana Tillack; Nicole Paczia; Katharina Nöh; Wolfgang Wiechert; Stephan Noack

    2012-01-01

    Model-based analyses have become an integral part of modern metabolic engineering and systems biology in order to gain knowledge about complex and not directly observable cellular processes. For quantitative analyses, not only experimental data, but also measurement errors, play a crucial role. The total measurement error of any analytical protocol is the result of an accumulation of single errors introduced by several processing steps. Here, we present a framework for the quantification of i...

  16. Error analysis for resonant thermonuclear reaction rates

    CERN Document Server

    Thompson, W J

    1999-01-01

    A detailed presentation is given of estimating uncertainties in thermonuclear reaction rates for stellar nucleosynthesis involving narrow resonances, starting from random errors in measured or calculated resonance and nuclear level properties. Special attention is given to statistical matters such as probability distributions, error propagation, and correlations between errors. Interpretation of resulting uncertainties in reaction rates and the distinction between symmetric and asymmetric errors are also discussed. Computing reaction rate uncertainties is described. We give examples from explosive nucleosynthesis by hydrogen burning on light nuclei.

  17. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces rather than a primary deficit in inhibition the preferred account for intrusion errors in free recall.

  18. A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others]

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  19. ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS

    OpenAIRE

    Rentauli Mariah Silalahi

    2015-01-01

    Students’ error analysis is very important for helping EFL teachers to develop their teaching materials, assessments and methods. However, it takes much time and effort from the teachers to do such an error analysis towards their students’ language. This study seeks to identify the common errors made by 1 class of 28 freshmen students studying English in their first semester in an IT university. The data is collected from their writing assignments for eight consecutive weeks. The errors found...

  20. Probabilistic error analysis of computer arithmetics

    Energy Technology Data Exchange (ETDEWEB)

    Bareiss, E.H.; Barlow, J.L.

    1978-12-01

    The problem of continuous and discrete error distribution for real computer arithmetics is discussed. The existing literature is surveyed. Several new and important theorems are proven. Results are illustrated with 9 figures and 14 tables.

  1. Coordinating sentence composition with error correction: A multilevel analysis

    Directory of Open Access Journals (Sweden)

    Van Waes, L.

    2011-01-01

    Error analysis involves detecting and correcting discrepancies between the 'text produced so far' (TPSF) and the writer's mental representation of what the text should be. While many factors determine the choice of strategy, cognitive effort is a major contributor to this choice. This research shows how cognitive effort during error analysis affects strategy choice and success as measured by a series of online text production measures. We hypothesize that error correction with speech recognition software differs from error correction with keyboard for two reasons: speech produces auditory commands and, consequently, different error types. The study reported on here measured the effects of (1) mode of presentation (auditory or visual-tactile), (2) error span, whether the error spans more or less than two characters, and (3) lexicality, whether the text error comprises an existing word. A multilevel analysis was conducted to take into account the hierarchical nature of these data. For each variable (interference reaction time, preparation time, production time, immediacy of error correction, and accuracy of error correction), multilevel regression models are presented. As such, we take into account possible disturbing person characteristics while testing the effect of the different conditions and error types at the sentence level. The results show that writers delay error correction more often when the TPSF is read out aloud first. The auditory property of speech seems to free resources for the primary task of writing, i.e. text production. Moreover, the results show that large errors in the TPSF require more cognitive effort, and are solved with a higher accuracy than small errors. The latter also holds for the correction of small errors that result in non-existing words.

  2. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicate the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...... to results for chlorine in freshwater from BCR certification analyses by highly competent analytical laboratories in the EC. Titration showed systematic errors of several percent, while radiochemical neutron activation analysis produced results without detectable bias....

  3. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors and the amplitudes of the radial and cross-track errors increase.

  4. ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS

    Directory of Open Access Journals (Sweden)

    Rentauli Mariah Silalahi

    2015-03-01

    Students' error analysis is very important for helping EFL teachers to develop their teaching materials, assessments, and methods. However, it takes much time and effort from the teachers to carry out such an error analysis of their students' language. This study seeks to identify the common errors made by one class of 28 freshman students studying English in their first semester at an IT university. The data were collected from their writing assignments over eight consecutive weeks. The errors found were classified into 24 types, and the ten most common errors committed by the students were article, preposition, spelling, word choice, subject-verb agreement, auxiliary verb, plural form, verb form, capital letter, and meaningless sentences. The findings about the students' frequency of committing errors were then contrasted with their midterm test results, and in order to find out the reasons behind the error recurrence, the students were given some questions to answer in a questionnaire format. Most of the students admitted that carelessness was the major reason for their errors, and lack of understanding came next. This study suggests that EFL teachers devote time to continuously checking the students' language by giving corrections so that the students can learn from their errors and stop committing the same errors.

  5. ERROR CONVERGENCE ANALYSIS FOR LOCAL HYPERTHERMIA APPLICATIONS

    Directory of Open Access Journals (Sweden)

    NEERU MALHOTRA

    2016-01-01

    The accuracy of the numerical solution of an electromagnetic problem is greatly influenced by the convergence of the solution obtained. In order to quantify the correctness of the numerical solution, the errors produced in solving the partial differential equations need to be analyzed. Mesh quality is another parameter that affects convergence. The various quality metrics are dependent on the type of solver used for numerical simulation. The paper focuses on comparing the performance of iterative solvers used in the COMSOL Multiphysics software. The modeling of a coaxial coupled waveguide applicator operating at 485 MHz has been done for local hyperthermia applications using the adaptive finite element method. The 3D heat distribution within the muscle phantom, depicting a spherical lesion and a localized heating pattern, confirms the proper selection of the solver. The convergence plots are obtained during simulation of the problem using GMRES (generalized minimal residual) and geometric multigrid linear iterative solvers. The best error convergence is achieved by using the nonlinear multigrid solver and further introducing adaptivity in the nonlinear solver.

  6. Implications of Error Analysis Studies for Academic Interventions

    Science.gov (United States)

    Mather, Nancy; Wendling, Barbara J.

    2017-01-01

    We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

  7. Lower extremity angle measurement with accelerometers - error and sensitivity analysis

    NARCIS (Netherlands)

    Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.

    1991-01-01

    The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found

  8. Error analysis of sensor measurements in a small UAV

    OpenAIRE

    Ackerman, James S.

    2005-01-01

    This thesis focuses on evaluating the measurement errors in the gimbal system of the SUAV autonomous aircraft developed at NPS. These measurements are used by the vision based target position estimation system developed at NPS. Analysis of the errors inherent in these measurements will help direct future investment in better sensors to improve the estimation system's performance.

  9. Controlling the Type I Error Rate in Stepwise Regression Analysis.

    Science.gov (United States)

    Pohlmann, John T.

    1979-01-01

    The type I error rate in stepwise regression analysis deserves serious consideration by researchers. The problem-wide error rate is the probability of selecting any variable when all variables have population regression weights of zero. Appropriate significance tests are presented and a Monte Carlo experiment is described. (Author/CTM)
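
    A small Monte Carlo experiment in the spirit of the one described: under a global null where every predictor is unrelated to the outcome, the first entry step of forward stepwise selection admits some variable far more often than the nominal per-test alpha. The sample size and number of candidate predictors below are illustrative.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)
    n_obs, n_predictors, n_sims, alpha = 50, 10, 2_000, 0.05

    false_entries = 0
    for _ in range(n_sims):
        X = rng.normal(size=(n_obs, n_predictors))
        y = rng.normal(size=n_obs)               # y is unrelated to every predictor
        r = np.array([stats.pearsonr(X[:, j], y)[0] for j in range(n_predictors)])
        t = r * np.sqrt((n_obs - 2) / (1 - r**2))
        p = 2 * stats.t.sf(np.abs(t), df=n_obs - 2)
        false_entries += p.min() < alpha         # stepwise would admit some variable

    print("nominal per-test alpha:", alpha)
    print("simulated problem-wide Type I rate:", false_entries / n_sims)
    print("approximate 1-(1-alpha)^k:", 1 - (1 - alpha) ** n_predictors)
    ```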

  10. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

    Directory of Open Access Journals (Sweden)

    Evelyn Porter

    2014-07-01

    Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and to identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared each variable to the medication error rate in univariate analyses and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7%-39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping more than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01-0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.

  11. Real-time analysis for Stochastic errors of MEMS gyro

    Science.gov (United States)

    Miao, Zhiyong; Shi, Hongyang; Zhang, Yi

    2017-10-01

    A good knowledge of MEMS gyro stochastic errors is important and critical to MEMS INS/GPS integration systems, so these stochastic errors should be accurately modeled and identified. The Allan variance method is the IEEE standard method in the field of analyzing the stochastic errors of gyros. This kind of method can fully characterize the random character of stochastic errors. However, it requires a large amount of data to be stored, resulting in a large offline computational burden. Moreover, it has a painful procedure of drawing slope lines for estimation. To overcome these barriers, a simple linear state-space model was established for the MEMS gyro. Then, a recursive EM algorithm was implemented to estimate the stochastic errors of the MEMS gyro in real time. The experimental results of an ADIS16405 IMU show that the real-time estimates of the proposed approach are well within the error limits of the Allan variance method. Moreover, the proposed method effectively avoids the storage of data.
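
    For context, the sketch below computes the classical (non-overlapping) Allan deviation that the paper uses as its offline reference; it is not the recursive EM estimator itself, and the simulated gyro signal (white noise plus a slow drift) and sampling settings are assumed for illustration.

    ```python
    import numpy as np

    def allan_deviation(rate, fs, taus):
        """Allan deviation of `rate` sampled at fs [Hz] for cluster times taus [s]."""
        adev = []
        for tau in taus:
            m = int(round(tau * fs))              # samples per cluster
            n_clusters = len(rate) // m
            if n_clusters < 3:
                adev.append(np.nan)
                continue
            means = rate[:n_clusters * m].reshape(n_clusters, m).mean(axis=1)
            avar = 0.5 * np.mean(np.diff(means) ** 2)
            adev.append(np.sqrt(avar))
        return np.array(adev)

    fs = 100.0                                    # sample rate [Hz] (assumed)
    t = np.arange(0, 3600, 1 / fs)                # one hour of data
    rng = np.random.default_rng(4)
    rate = 0.02 * rng.standard_normal(t.size) + 1e-5 * t   # white noise + drift (assumed)

    taus = np.logspace(-1, 2, 20)                 # cluster times from 0.1 s to 100 s
    for tau, ad in zip(taus, allan_deviation(rate, fs, taus)):
        print(f"tau = {tau:7.2f} s   Allan deviation = {ad:.5f} deg/s")
    ```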

  12. An error analysis perspective for patient alignment systems.

    Science.gov (United States)

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.

  13. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
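
    One plausible shape for the kind of regression model described, sketched on synthetic data: a Poisson GLM (fit by maximum likelihood) for error counts, with the number of files radiated as the exposure and workload and operational novelty as predictors. Variable names and coefficients are illustrative, not the JPL dataset.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(5)
    n_periods = 60
    files_radiated = rng.integers(20, 120, n_periods)        # exposure per period
    workload = rng.normal(0, 1, n_periods)                   # subjective workload score
    novelty = rng.normal(0, 1, n_periods)                    # operational novelty score
    true_rate = 0.01 * np.exp(0.4 * workload + 0.2 * novelty)  # errors per file (synthetic)
    errors = rng.poisson(true_rate * files_radiated)

    X = sm.add_constant(np.column_stack([workload, novelty]))
    model = sm.GLM(errors, X, family=sm.families.Poisson(),
                   offset=np.log(files_radiated)).fit()
    print(model.summary())
    baseline = np.exp(model.params[0])        # baseline errors per file radiated
    print(f"estimated baseline error rate: {baseline:.4f} errors per file")
    ```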

  14. Predictive error analysis for a water resource management model

    Science.gov (United States)

    Gallagher, Mark; Doherty, John

    2007-02-01

    In calibrating a model, a set of parameters is assigned to the model which will be employed for the making of all future predictions. If these parameters are estimated through solution of an inverse problem, formulated to be properly posed through either pre-calibration or mathematical regularisation, then solution of this inverse problem will, of necessity, lead to a simplified parameter set that omits the details of reality, while still fitting historical data acceptably well. Furthermore, estimates of parameters so obtained will be contaminated by measurement noise. Both of these phenomena will lead to errors in predictions made by the model, with the potential for error increasing with the hydraulic property detail on which the prediction depends. Integrity of model usage demands that model predictions be accompanied by some estimate of the possible errors associated with them. The present paper applies theory developed in a previous work to the analysis of predictive error associated with a real world, water resource management model. The analysis offers many challenges, including the fact that the model is a complex one that was partly calibrated by hand. Nevertheless, it is typical of models which are commonly employed as the basis for the making of important decisions, and for which such an analysis must be made. The potential errors associated with point-based and averaged water level and creek inflow predictions are examined, together with the dependence of these errors on the amount of averaging involved. Error variances associated with predictions made by the existing model are compared with "optimized error variances" that could have been obtained had calibration been undertaken in such a way as to minimize predictive error variance. The contributions by different parameter types to the overall error variance of selected predictions are also examined.

  15. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    Science.gov (United States)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  16. The use of error analysis to assess resident performance.

    Science.gov (United States)

    D'Angelo, Anne-Lise D; Law, Katherine E; Cohen, Elaine R; Greenberg, Jacob A; Kwan, Calvin; Greenberg, Caprice; Wiegmann, Douglas A; Pugh, Carla M

    2015-11-01

    The aim of this study was to assess validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure. Seven postgraduate year 4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on day 1 of a national, advanced laparoscopic course. Faculty provided immediate feedback on operative errors and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training regarding several advanced laparoscopic procedures during a lecture session and animate lab. On day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems. Residents committed 121 total errors on day 1 compared with 146 on day 2. One of 7 residents successfully completed the LVH repair on day 1 compared with all 7 residents on day 2 (P = .001). The majority of errors (85%) committed on day 2 were technical and occurred during the last 2 steps of the procedure. There were significant differences in error type (P ≤ .001) and level (P = .019) from day 1 to day 2. The proportion of omission errors decreased from day 1 (33%) to day 2 (14%). In addition, there were more technical and commission errors on day 2. The error assessment tool was successful in categorizing performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential in facilitating our understanding of operative readiness. Copyright © 2015 Elsevier Inc. All rights reserved.

  17. An Analysis of Medication Errors at the Military Medical Center: Implications for a Systems Approach for Error Reduction

    National Research Council Canada - National Science Library

    Scheirman, Katherine

    2001-01-01

    An analysis was accomplished of all inpatient medication errors at a military academic medical center during the year 2000, based on the causes of medication errors as described by current research in the field...

  18. Barrier and operational risk analysis of hydrocarbon releases (BORA-Release)

    Energy Technology Data Exchange (ETDEWEB)

    Sklet, Snorre [Department of Production and Quality Engineering, The Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim (Norway)]. E-mail: snorre.sklet@sintef.no; Vinnem, Jan Erik [University of Stavanger (UiS), NO-4036 Stavanger (Norway); Aven, Terje [University of Stavanger (UiS), NO-4036 Stavanger (Norway)

    2006-09-21

    This paper presents results from a case study carried out on an offshore oil and gas production platform with the purpose of applying and testing BORA-Release, a method for barrier and operational risk analysis of hydrocarbon releases. A description of the BORA-Release method is given in Part I of the paper. BORA-Release is applied to express the platform specific hydrocarbon release frequencies for three release scenarios for selected systems and activities on the platform. The case study demonstrated that the BORA-Release method is a useful tool for analysing the effect on the release frequency of safety barriers introduced to prevent hydrocarbon releases, and to study the effect on the barrier performance of platform specific conditions of technical, human, operational, and organisational risk influencing factors (RIFs). BORA-Release may also be used to analyse the effect on the release frequency of risk reducing measures.

  19. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee the correctness of the analysis because they utilize approximate real-number representations and pseudo-random numbers, and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order-logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties, and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  20. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  1. Error analysis of mechanical system and wavelength calibration of monochromator.

    Science.gov (United States)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
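    As a rough illustration of the relation this abstract relies on, the sketch below writes the grating equation for a sine-bar drive together with its first-order error budget. The included-angle factor and the symbols (d, m, φ, s, L) are assumptions introduced here for illustration, not the paper's exact model.

```latex
% Illustrative sine-bar error model (not necessarily the paper's exact formulation):
% grating equation for a sine-drive monochromator and its first-order error budget.
\begin{align}
  m\lambda &= 2 d \cos\varphi \, \sin\theta , \qquad \sin\theta = \frac{s}{L} \\
  \lambda  &= \frac{2 d \cos\varphi}{m}\,\frac{s}{L} \\
  \delta\lambda &\approx \frac{2 d \cos\varphi}{m}
      \left( \frac{\delta s}{L} - \frac{s\,\delta L}{L^{2}} \right)
\end{align}
% d: groove spacing, m: diffraction order, 2\varphi: included angle,
% s: screw/sine-bar displacement, L: sine-bar length.
```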

  2. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast, increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernible degradation of forecast skill in the tropics.

  3. The influence of observation errors on analysis error and forecast skill investigated with an observing system simulation experiment

    Science.gov (United States)

    Privé, N. C.; Errico, R. M.; Tai, K.-S.

    2013-06-01

    The National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASA/GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a 1 month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 h forecast, increased observation error only yields a slight decline in forecast skill in the extratropics and no discernible degradation of forecast skill in the tropics.

  4. Differential Dopamine Release Dynamics in the Nucleus Accumbens Core and Shell Reveal Complementary Signals for Error Prediction and Incentive Motivation.

    Science.gov (United States)

    Saddoris, Michael P; Cacciapaglia, Fabio; Wightman, R Mark; Carelli, Regina M

    2015-08-19

    Mesolimbic dopamine (DA) is phasically released during appetitive behaviors, though there is substantive disagreement about the specific purpose of these DA signals. For example, prediction error (PE) models suggest a role of learning, while incentive salience (IS) models argue that the DA signal imbues stimuli with value and thereby stimulates motivated behavior. However, within the nucleus accumbens (NAc) patterns of DA release can strikingly differ between subregions, and as such, it is possible that these patterns differentially contribute to aspects of PE and IS. To assess this, we measured DA release in subregions of the NAc during a behavioral task that spatiotemporally separated sequential goal-directed stimuli. Electrochemical methods were used to measure subsecond NAc dopamine release in the core and shell during a well learned instrumental chain schedule in which rats were trained to press one lever (seeking; SL) to gain access to a second lever (taking; TL) linked with food delivery, and again during extinction. In the core, phasic DA release was greatest following initial SL presentation, but minimal for the subsequent TL and reward events. In contrast, phasic shell DA showed robust release at all task events. Signaling decreased between the beginning and end of sessions in the shell, but not core. During extinction, peak DA release in the core showed a graded decrease for the SL and pauses in release during omitted expected rewards, whereas shell DA release decreased predominantly during the TL. These release dynamics suggest parallel DA signals capable of supporting distinct theories of appetitive behavior. Dopamine signaling in the brain is important for a variety of cognitive functions, such as learning and motivation. Typically, it is assumed that a single dopamine signal is sufficient to support these cognitive functions, though competing theories disagree on how dopamine contributes to reward-based behaviors. Here, we have found that real

  5. Error Grid Analysis for Arterial Pressure Method Comparison Studies.

    Science.gov (United States)

    Saugel, Bernd; Grothe, Oliver; Nicklas, Julia Y

    2017-12-11

    The measurement of arterial pressure (AP) is a key component of hemodynamic monitoring. A variety of different innovative AP monitoring technologies became recently available. The decision to use these technologies must be based on their measurement performance in validation studies. These studies are AP method comparison studies comparing a new method ("test method") with a reference method. In these studies, different comparative statistical tests are used including correlation analysis, Bland-Altman analysis, and trending analysis. These tests provide information about the statistical agreement without adequately providing information about the clinical relevance of differences between the measurement methods. To overcome this problem, we, in this study, propose an "error grid analysis" for AP method comparison studies that allows illustrating the clinical relevance of measurement differences. We constructed smoothed consensus error grids with calibrated risk zones derived from a survey among 25 specialists in anesthesiology and intensive care medicine. Differences between measurements of the test and the reference method are classified into 5 risk levels ranging from "no risk" to "dangerous risk"; the classification depends on both the differences between the measurements and on the measurements themselves. Based on worked examples and data from the Multiparameter Intelligent Monitoring in Intensive Care II database, we show that the proposed error grids give information about the clinical relevance of AP measurement differences that cannot be obtained from Bland-Altman analysis. Our approach also offers a framework on how to adapt the error grid analysis for different clinical settings and patient populations.
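    As a hedged illustration of the idea, the sketch below classifies differences between a test and a reference mean arterial pressure reading into five risk levels. The zone boundaries are hypothetical placeholders; the published grid uses smoothed consensus zones derived from the expert survey described above.

```python
def classify_risk(reference_map, test_map):
    """Toy error-grid classifier for mean arterial pressure (MAP) pairs.

    Returns an integer risk level 0 ("no risk") .. 4 ("dangerous risk").
    The zone boundaries below are hypothetical placeholders; the published
    error grid uses smoothed consensus zones from an expert survey.
    """
    diff = abs(test_map - reference_map)
    # Relative error matters more when the true pressure is critically low or high.
    rel = diff / reference_map
    if rel < 0.05:
        return 0          # no risk
    elif rel < 0.10:
        return 1          # low risk
    elif rel < 0.20:
        return 2          # moderate risk
    elif rel < 0.30:
        return 3          # significant risk
    return 4              # dangerous risk

# Example: a 15 mmHg error is riskier at MAP 50 than at MAP 100.
print(classify_risk(50, 65), classify_risk(100, 115))
```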

  6. A case of error disclosure: a communication privacy management analysis.

    Science.gov (United States)

    Petronio, Sandra; Helft, Paul R; Child, Jeffrey T

    2013-12-01

    To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by the clinicians in telling patients about the mistake has the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomenon, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices of revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information

  7. A Case of Error Disclosure: A Communication Privacy Management Analysis

    Science.gov (United States)

    Petronio, Sandra; Helft, Paul R.; Child, Jeffrey T.

    2013-01-01

    To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insights into the way choices are made by the clinicians in telling patients about the mistake has the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomenon, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices of revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information

  8. Unbiased bootstrap error estimation for linear discriminant analysis.

    Science.gov (United States)

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
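    A minimal sketch of the convex bootstrap estimator discussed above, written for LDA with the classical fixed 0.632 weight. The sample-size- and Bayes-error-dependent weights derived in the paper are not reproduced; scikit-learn's LinearDiscriminantAnalysis is used here purely for illustration, and X, y are assumed to be NumPy arrays.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def convex_bootstrap_error(X, y, weight=0.632, n_boot=100, rng=None):
    """Convex combination of resubstitution and zero-bootstrap error for LDA.

    weight=0.632 is the classical fixed choice; the cited paper derives
    sample-size- and Bayes-error-dependent weights that can differ substantially.
    """
    rng = np.random.default_rng(rng)
    n = len(y)

    # Resubstitution (training-set) error: optimistically biased.
    resub = 1.0 - LinearDiscriminantAnalysis().fit(X, y).score(X, y)

    # Zero bootstrap error: test only on points left out of each bootstrap sample.
    boot_errs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        oob = np.setdiff1d(np.arange(n), idx)
        if oob.size == 0 or len(np.unique(y[idx])) < 2:
            continue
        clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        boot_errs.append(1.0 - clf.score(X[oob], y[oob]))
    boot = float(np.mean(boot_errs))

    # Convex combination of the pessimistic bootstrap and optimistic resubstitution.
    return (1.0 - weight) * resub + weight * boot
```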

  9. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    Full Text Available An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed, respectively, diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated) intensity measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with
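    The covariance bookkeeping described above can be summarized by the standard first-order propagation relation, sketched below with a placeholder retrieval operator f (an assumption introduced here, not the paper's specific operators):

```latex
% Generic first-order (linearized) error propagation through one retrieval step,
% as used when tracking covariances from intensity to density/pressure/temperature.
% f is a placeholder for the step's retrieval operator.
\begin{align}
  y &= f(x), \qquad J_{ij} = \frac{\partial f_i}{\partial x_j}\bigg|_{\bar{x}} \\
  \bar{y} &\approx f(\bar{x}), \qquad
  C_y \approx J\, C_x\, J^{\mathsf{T}}
\end{align}
```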

  10. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

    Science.gov (United States)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (anova)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The anova-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
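    A minimal sketch of the ANOVA-based computation described in the note, assuming balanced data under the one-factor random effect model; variable names and the simulated example are illustrative.

```python
import numpy as np

def variance_components(setup_errors):
    """Estimate systematic and random setup-error components via one-way ANOVA.

    setup_errors: 2-D array, shape (n_patients, n_fractions), balanced design
    (one factor, random effects), as assumed in the technical note.
    Returns (population mean, systematic SD Sigma, random SD sigma).
    """
    x = np.asarray(setup_errors, dtype=float)
    p, f = x.shape
    patient_means = x.mean(axis=1)
    grand_mean = x.mean()

    # One-way ANOVA mean squares.
    ms_between = f * np.sum((patient_means - grand_mean) ** 2) / (p - 1)
    ms_within = np.sum((x - patient_means[:, None]) ** 2) / (p * (f - 1))

    sigma_random = np.sqrt(ms_within)
    # Method-of-moments estimate; clipped at zero if MS_between < MS_within.
    sigma_systematic = np.sqrt(max(ms_between - ms_within, 0.0) / f)
    return grand_mean, sigma_systematic, sigma_random

# Example: 10 patients x 5 fractions of simulated setup errors (mm).
rng = np.random.default_rng(0)
data = rng.normal(1.0, 2.0, size=(10, 1)) + rng.normal(0.0, 3.0, size=(10, 5))
print(variance_components(data))
```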

  11. Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...

    Indian Academy of Sciences (India)

    Error Analysis on Plane-to-Plane Linear Approximate Coordinate Transformation. Q. F. Zhang, Q. Y. Peng & J. H. Fan, Department of Computer Science, Jinan University, Guangzhou 510632, China. ... This work is partially supported by the National Natural Science Foundation.

  12. Error Analysis Of Clock Time (T), Declination (δ) And Latitude ...

    African Journals Online (AJOL)

    ), latitude (Φ), longitude (λ) and azimuth (A); which are aimed at establishing fixed positions and orientations of survey points and lines on the earth surface. The paper attempts the analysis of the individual and combined effects of error in time ...

  13. L'analyse des erreurs. Problemes et perspectives (Error Analysis. Problems and Perspectives)

    Science.gov (United States)

    Porquier, Remy

    1977-01-01

    Summarizes the usefulness and the disadvantage of error analysis, and discusses a reorientation of error analysis, specifically regarding grammar instruction and the significance of errors. (Text is in French.) (AM)

  14. Error analysis of a public domain pronunciation dictionary

    CSIR Research Space (South Africa)

    Martirosian, O

    2007-11-01

    Full Text Available Error analysis of a public... as a baseline when researching improvement techniques, one must keep in mind that the data used to build the system may contain errors. If these errors are not corrected in the baseline system but are found and corrected in the process of using...

  15. An Error Analysis of Structured Light Scanning of Biological Tissue

    DEFF Research Database (Denmark)

    Jensen, Sebastian Hoppe Nesgaard; Wilm, Jakob; Aanæs, Henrik

    2017-01-01

    This paper presents an error analysis and correction model for four structured light methods applied to three common types of biological tissue: skin, fat and muscle. Despite its many advantages, structured light is based on the assumption of direct reflection at the object surface only. This assumption is violated by most biological material, e.g. human skin, which exhibits subsurface scattering. In this study, we find that in general, structured light scans of biological tissue deviate significantly from the ground truth. We show that a large portion of this error can be predicted with a simple, statistical linear model based on the scan geometry. As such, scans can be corrected without introducing any specially designed pattern strategy or hardware. We can effectively reduce the error in a structured light scanner applied to biological tissue by as much as a factor of two or three.

  16. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Full Text Available Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know much about error analysis and the related theories of how an L1, L2, or foreign language is acquired. In addition, students often feel upset when they find a gap between themselves and their teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, supporting better English language teaching and learning, especially when teaching English to adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The results showed that syntax errors are more frequently found than morphology errors, especially verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it is an advantage for teachers to know which errors students frequently make in their learning, so that they can offer solutions that lead to better English language learning achievement.   DOI: https://doi.org/10.24071/llt.2015.180205

  17. Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission

    Science.gov (United States)

    Marr, G.

    2003-01-01

    Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec, sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16 orbit range from low Earth orbit. This analysis was performed assuming TTI was performed from a low Earth orbit with an inclination of 28.5 degrees and assuming support from a combination of three Deep Space Network (DSN) stations, Goldstone, Canberra, and Madrid, and four commercial Universal Space Network (USN) stations, Alaska, Hawaii, Perth, and Santiago. These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage for a range of TTI longitudes for a given nominal trajectory case was analyzed. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.

  18. Error Analysis of Remotely-Acquired Mossbauer Spectra

    Science.gov (United States)

    Schaefer, Martha W.; Dyar, M. Darby; Agresti, David G.; Schaefer, Bradley E.

    2005-01-01

    On the Mars Exploration Rovers, Mossbauer spectroscopy has recently been called upon to assist in the task of mineral identification, a job for which it is rarely used in terrestrial studies. For example, Mossbauer data were used to support the presence of olivine in Martian soil at Gusev and jarosite in the outcrop at Meridiani. The strength (and uniqueness) of these interpretations lies in the assumption that peak positions can be determined with high degrees of both accuracy and precision. We summarize here what we believe to be the major sources of error associated with peak positions in remotely-acquired spectra, and speculate on their magnitudes. Our discussion here is largely qualitative because necessary background information on MER calibration sources, geometries, etc., have not yet been released to the PDS; we anticipate that a more quantitative discussion can be presented by March 2005.

  19. Dispersion analysis and linear error analysis capabilities of the space vehicle dynamics simulation program

    Science.gov (United States)

    Snow, L. S.; Kuhn, A. E.

    1975-01-01

    Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.

  20. ORAN- ORBITAL AND GEODETIC PARAMETER ESTIMATION ERROR ANALYSIS

    Science.gov (United States)

    Putney, B.

    1994-01-01

    The Orbital and Geodetic Parameter Estimation Error Analysis program, ORAN, was developed as a Bayesian least squares simulation program for orbital trajectories. ORAN does not process data, but is intended to compute the accuracy of the results of a data reduction, if measurements of a given accuracy are available and are processed by a minimum variance data reduction program. Actual data may be used to provide the time when a given measurement was available and the estimated noise on that measurement. ORAN is designed to consider a data reduction process in which a number of satellite data periods are reduced simultaneously. If there is more than one satellite in a data period, satellite-to-satellite tracking may be analyzed. The least squares estimator in most orbital determination programs assumes that measurements can be modeled by a nonlinear regression equation containing a function of parameters to be estimated and parameters which are assumed to be constant. The partitioning of parameters into those to be estimated (adjusted) and those assumed to be known (unadjusted) is somewhat arbitrary. For any particular problem, the data will be insufficient to adjust all parameters subject to uncertainty, and some reasonable subset of these parameters is selected for estimation. The final errors in the adjusted parameters may be decomposed into a component due to measurement noise and a component due to errors in the assumed values of the unadjusted parameters. Error statistics associated with the first component are generally evaluated in an orbital determination program. ORAN is used to simulate the orbital determination processing and to compute error statistics associated with the second component. Satellite observations may be simulated with desired noise levels given in many forms including range and range rate, altimeter height, right ascension and declination, direction cosines, X and Y angles, azimuth and elevation, and satellite-to-satellite range and

  1. Accounting for Errors in Model Analysis Theory: A Numerical Approach

    Science.gov (United States)

    Sommer, Steven R.; Lindell, Rebecca S.

    2004-09-01

    By studying the patterns of a group of individuals' responses to a series of multiple-choice questions, researchers can utilize Model Analysis Theory to create a probability distribution of mental models for a student population. The eigenanalysis of this distribution yields information about what mental models the students possess, as well as how consistently they utilize said mental models. Although the theory considers the probabilistic distribution to be fundamental, there exist opportunities for random errors to occur. In this paper we will discuss a numerical approach for mathematically accounting for these random errors. As an example of this methodology, analysis of data obtained from the Lunar Phases Concept Inventory will be presented. Limitations and applicability of this numerical approach will be discussed.
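    A brief sketch of the standard Model Analysis construction referenced above: per-student state vectors built from response fractions, averaged into a class density matrix, then eigenanalyzed. The paper's numerical treatment of random errors on top of this construction is not reproduced; the example counts are synthetic.

```python
import numpy as np

def class_density_matrix(response_counts):
    """Model Analysis Theory: build and eigenanalyze the class density matrix.

    response_counts: array of shape (n_students, n_models); entry [k, m] is the
    number of questions student k answered consistently with mental model m.
    """
    counts = np.asarray(response_counts, dtype=float)
    # Per-student state vector: square root of the fraction of responses per model.
    probs = counts / counts.sum(axis=1, keepdims=True)
    states = np.sqrt(probs)                       # shape (n_students, n_models)
    # Class density matrix: average of outer products of the state vectors.
    density = np.einsum("km,kn->mn", states, states) / len(states)
    eigvals, eigvecs = np.linalg.eigh(density)
    return density, eigvals[::-1], eigvecs[:, ::-1]

# Example: 3 candidate mental models, 5 students, 10 questions each.
rng = np.random.default_rng(2)
counts = rng.multinomial(10, [0.6, 0.3, 0.1], size=5)
D, vals, vecs = class_density_matrix(counts)
print(np.round(vals, 3))   # dominant eigenvalue ~ how consistently one model is used
```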

  2. Acquisition of case in Lithuanian as L2: Error analysis

    Directory of Open Access Journals (Sweden)

    Laura Cubajevaite

    2009-05-01

    Full Text Available Although teaching Lithuanian as a foreign language is not a new subject, there has not been much research in this field. The paper presents a study based on an analysis of grammatical errors which was carried out at Vytautas Magnus University. The data was selected randomly by analysing written assignments of beginner to advanced level students. DOI: http://dx.doi.org/10.5128/ERYa5.04

  3. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    Science.gov (United States)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  4. Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.

    Science.gov (United States)

    Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing

    2013-09-20

    Considering that the wavefront error of KH2PO4 (KDP) crystal is difficult to control through the face fly cutting process because of surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. Then a three-axis servo technique is utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as the error in the straightness of guide ways, spindle rotation error, and error caused by ambient environment variance, three other errors, the in situ measurement error, position deviation error, and servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with the size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error does not become worse when the frequency of the cutting tool trajectory is controlled by use of a low-pass filter.

  5. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Virotta, Francesco

    2012-02-21

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
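    As a hedged illustration of autocorrelation-aware error estimation in this setting, the sketch below computes a windowed integrated autocorrelation time and inflates the naive error accordingly. It is a generic textbook-style estimator, not the tail-correction method proposed in the thesis for runs of only O(10) τ_exp.

```python
import numpy as np

def autocorr_error(series, window=None):
    """Error estimate for the mean of a correlated Monte Carlo series.

    Computes the integrated autocorrelation time tau_int with a fixed summation
    window and inflates the naive error by sqrt(2 * tau_int).
    """
    x = np.asarray(series, dtype=float)
    n = len(x)
    x = x - x.mean()
    var = x.var()
    window = window or min(n // 4, 200)

    # Normalized autocorrelation function up to the chosen window.
    rho = np.array([np.mean(x[:n - t] * x[t:]) / var for t in range(1, window)])
    tau_int = 0.5 + np.sum(rho)

    naive_err = np.sqrt(var / n)
    return naive_err * np.sqrt(2.0 * max(tau_int, 0.5))

# Example on an AR(1) chain with strong autocorrelation.
rng = np.random.default_rng(1)
chain = np.zeros(10_000)
for i in range(1, len(chain)):
    chain[i] = 0.95 * chain[i - 1] + rng.normal()
print(autocorr_error(chain))   # much larger than the naive sqrt(var/n) estimate
```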

  6. Analysis of Spelling Errors Among National University of Lesotho ...

    African Journals Online (AJOL)

    The spelling errors were identified and classified into errors of omission, insertion, substitution, and transposition; another category, “others”, was added to accommodate errors outside this classification. Findings indicate that the most frequently occurring error was the error of substitution. Most of the errors were due to the ...

  7. Understanding Drug Release Data through Thermodynamic Analysis

    Science.gov (United States)

    Freire, Marjorie Caroline Liberato Cavalcanti; Alexandrino, Francisco; Marcelino, Henrique Rodrigues; Picciani, Paulo Henrique de Souza; Silva, Kattya Gyselle de Holanda e; Genre, Julieta; de Oliveira, Anselmo Gomes; do Egito, Eryvaldo Sócrates Tabosa

    2017-01-01

    Understanding the factors that can modify the drug release profile of a drug from a Drug-Delivery-System (DDS) is a mandatory step to determine the effectiveness of new therapies. The aim of this study was to assess the Amphotericin-B (AmB) kinetic release profiles from polymeric systems with different compositions and geometries and to correlate these profiles with the thermodynamic parameters through mathematical modeling. Film casting and electrospinning techniques were used to compare behavior of films and fibers, respectively. Release profiles from the DDSs were performed, and the mathematical modeling of the data was carried out. Activation energy, enthalpy, entropy and Gibbs free energy of the drug release process were determined. AmB release profiles showed that the relationship to overcome the enthalpic barrier was PVA-fiber > PVA-film > PLA-fiber > PLA-film. Drug release kinetics from the fibers and the films were better fitted on the Peppas–Sahlin and Higuchi models, respectively. The thermodynamic parameters corroborate these findings, revealing that the AmB release from the evaluated systems was an endothermic and non-spontaneous process. Thermodynamic parameters can be used to explain the drug kinetic release profiles. Such an approach is of utmost importance for DDS containing insoluble compounds, such as AmB, which is associated with an erratic bioavailability. PMID:28773009

  8. Understanding Drug Release Data through Thermodynamic Analysis

    Directory of Open Access Journals (Sweden)

    Marjorie Caroline Liberato Cavalcanti Freire

    2017-06-01

    Full Text Available Understanding the factors that can modify the drug release profile of a drug from a Drug-Delivery-System (DDS) is a mandatory step to determine the effectiveness of new therapies. The aim of this study was to assess the Amphotericin-B (AmB) kinetic release profiles from polymeric systems with different compositions and geometries and to correlate these profiles with the thermodynamic parameters through mathematical modeling. Film casting and electrospinning techniques were used to compare behavior of films and fibers, respectively. Release profiles from the DDSs were performed, and the mathematical modeling of the data was carried out. Activation energy, enthalpy, entropy and Gibbs free energy of the drug release process were determined. AmB release profiles showed that the relationship to overcome the enthalpic barrier was PVA-fiber > PVA-film > PLA-fiber > PLA-film. Drug release kinetics from the fibers and the films were better fitted on the Peppas–Sahlin and Higuchi models, respectively. The thermodynamic parameters corroborate these findings, revealing that the AmB release from the evaluated systems was an endothermic and non-spontaneous process. Thermodynamic parameters can be used to explain the drug kinetic release profiles. Such an approach is of utmost importance for DDS containing insoluble compounds, such as AmB, which is associated with an erratic bioavailability.
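    For orientation, the relations typically used in this kind of analysis are sketched below (Higuchi, Peppas–Sahlin, Arrhenius, and Gibbs). They are standard forms given for illustration; the fitted parameters and exact model variants used in the cited study may differ.

```latex
% Standard relations commonly used in drug-release kinetic and thermodynamic
% analysis (illustrative forms, not the authors' fitted parameters).
\begin{align}
  \text{Higuchi:} \quad & \frac{M_t}{M_\infty} = k_H \sqrt{t} \\
  \text{Peppas--Sahlin:} \quad & \frac{M_t}{M_\infty} = k_1 t^{m} + k_2 t^{2m} \\
  \text{Arrhenius:} \quad & k = A \, e^{-E_a / RT} \\
  \text{Gibbs:} \quad & \Delta G = \Delta H - T\,\Delta S
\end{align}
```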

  9. Error analysis on assembly and alignment of laser optical unit

    Directory of Open Access Journals (Sweden)

    Zhao Xiong

    2015-07-01

    Full Text Available As one of the largest optical units used in a high-power laser inertial confinement fusion facility, the large-aperture transport mirror’s misalignment error can have a very negative impact on the targeting performance of laser beams. In this article, we have carried out a fundamental analysis of the mounting and misalignment errors of the transport mirror. An integrated simulated assembly station is proposed to align the mirror precisely, and the design of the transport mirror unit is optimized to satisfy the stringent specifications. Finally, methods that integrate theoretical modeling, numerical simulation, and field experiments are used to evaluate the mirror’s alignment, and the results indicate a more robust and precise alignment performance of the new design.

  10. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Stefan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Sommer, Rainer; Virotta, Francesco [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2010-09-15

    We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular for the squared topological charge we find it to be very severe with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)

  11. A comparative study between release analysis and column flotation

    Energy Technology Data Exchange (ETDEWEB)

    Jorge Pineres; Juan Barraza [Universidad del Valle, Cali (Colombia)

    2007-07-01

    This paper shows the results of a comparative study between release analysis and column flotation of three Colombian coals: Guachinte (South West), Cerrejon (North) and Nechi (Midlands). Release analysis was used in order to evaluate the coal cleaning potential in terms of both low froth ash and high organic recovery. Results from release analysis were compared with those from a column flotation and showed that the froth from Nechi coal had the highest recovery and the lowest ash, followed by Cerrejon and then by Guachinte. Results of release analysis were in agreement with those of the column flotation. 10 refs., 4 figs., 1 tab.

  12. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.

    2015-05-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model, based on the Equivalent-in-Distribution (EiD) approach, succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however, at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters, while providing a simple tractable analysis. In this paper, we extend this framework to study the effect of different interference management techniques in the downlink of cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.

  13. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  14. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-06

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Levy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. We present a novel bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound for a dissipative and pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable to and superior to the existing literature.
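    A hedged sketch of the generic Fourier pricing representation behind this approach is given below; the sign and damping conventions are assumptions introduced here, and the paper's error bound itself is not reproduced.

```latex
% Generic Fourier pricing representation for an exponential Levy model
% (sign and damping conventions are assumptions; the cited error bound
% is not reproduced here).
\begin{align}
  \hat{v}(\xi, T) &= e^{\,T\,\psi(\xi)} \, \hat{g}(\xi),
  \qquad \psi = \text{characteristic exponent of the Levy process} \\
  v(x, T) &= \frac{1}{2\pi} \int_{\mathbb{R}} e^{\,i \xi x} \,
             e^{\,T\,\psi(\xi)} \, \hat{g}(\xi)\, d\xi
\end{align}
% g is the (possibly damped) payoff and \hat{g} its Fourier transform; the
% integral is evaluated numerically by an inverse transform on a truncated grid.
```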

  15. ERROR ANALYSIS FOR THE AIRBORNE DIRECT GEOREFERINCING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    A. S. Elsharkawy

    2016-10-01

    Full Text Available Direct Georeferencing was shown to be an important alternative to standard indirect image orientation using classical or GPS-supported aerial triangulation. Since direct Georeferencing without ground control relies on an extrapolation process only, particular focus has to be laid on the overall system calibration procedure. The accuracy performance of integrated GPS/inertial systems for direct Georeferencing in airborne photogrammetric environments has been tested extensively in the last years. In this approach, the limiting factor is a correct overall system calibration including the GPS/inertial component as well as the imaging sensor itself. Therefore remaining errors in the system calibration will significantly decrease the quality of object point determination. This research paper presents an error analysis for the airborne direct Georeferencing technique, where integrated GPS/IMU positioning and navigation systems are used in conjunction with aerial cameras for airborne mapping, compared with GPS/INS-supported AT, through the implementation of a certain amount of error on the EOP and boresight parameters and a study of the effect of these errors on the final ground coordinates. The data set is a block of images consisting of 32 images distributed over six flight lines; the interior orientation parameters, IOP, are known through a careful camera calibration procedure, and 37 ground control points are known through a terrestrial surveying procedure. The exact location of the camera station at the time of exposure, the exterior orientation parameters, EOP, is known through the GPS/INS integration process. The preliminary results show that, firstly, DG and GPS-supported AT have similar accuracy and, compared with the conventional aerial photography method, the two technologies reduce the dependence on ground control (used only for quality control purposes). Secondly, in the DG, correcting the overall system calibration including the GPS/inertial component as well as the

  16. Error Analysis for the Airborne Direct Georeferincing Technique

    Science.gov (United States)

    Elsharkawy, Ahmed S.; Habib, Ayman F.

    2016-10-01

    Direct Georeferencing was shown to be an important alternative to standard indirect image orientation using classical or GPS-supported aerial triangulation. Since direct Georeferencing without ground control relies on an extrapolation process only, particular focus has to be laid on the overall system calibration procedure. The accuracy performance of integrated GPS/inertial systems for direct Georeferencing in airborne photogrammetric environments has been tested extensively in the last years. In this approach, the limiting factor is a correct overall system calibration including the GPS/inertial component as well as the imaging sensor itself. Therefore remaining errors in the system calibration will significantly decrease the quality of object point determination. This research paper presents an error analysis for the airborne direct Georeferencing technique, where integrated GPS/IMU positioning and navigation systems are used in conjunction with aerial cameras for airborne mapping, compared with GPS/INS-supported AT, through the implementation of a certain amount of error on the EOP and boresight parameters and a study of the effect of these errors on the final ground coordinates. The data set is a block of images consisting of 32 images distributed over six flight lines; the interior orientation parameters, IOP, are known through a careful camera calibration procedure, and 37 ground control points are known through a terrestrial surveying procedure. The exact location of the camera station at the time of exposure, the exterior orientation parameters, EOP, is known through the GPS/INS integration process. The preliminary results show that, firstly, DG and GPS-supported AT have similar accuracy and, compared with the conventional aerial photography method, the two technologies reduce the dependence on ground control (used only for quality control purposes). Secondly, in the DG, correcting the overall system calibration including the GPS/inertial component as well as the imaging sensor itself
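    For context, the sketch below gives a generic direct georeferencing relation through which errors in the exterior orientation and boresight parameters propagate to the ground coordinates; symbols and sign conventions are assumptions introduced here, not necessarily those of the paper.

```latex
% Generic direct georeferencing relation (illustrative conventions).
\begin{equation}
  \mathbf{r}^{m}_{P} \;=\; \mathbf{r}^{m}_{GPS}(t)
    \;+\; R^{m}_{b}(t)\,\Bigl( s \, R^{b}_{c}\, \mathbf{r}^{c}_{P} \;+\; \mathbf{a}^{b} \Bigr)
\end{equation}
% r^m_P: ground point in the mapping frame; r^m_GPS(t): GPS/INS position;
% R^m_b(t): body-to-mapping rotation from the IMU; R^b_c: boresight rotation
% between camera and body frames; a^b: lever-arm offset; s: image-point scale
% factor; r^c_P: image point in the camera frame.  Errors injected into the
% EOP and boresight terms propagate through this relation to ground coordinates.
```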

  17. Analysis of Random Segment Errors on Coronagraph Performance

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou

    2016-01-01

    At the 2015 SPIE O&P conference we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance." Key findings: contrast leakage for a 4th-order Sinc^2(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; fewer segments (i.e., 1 ring) or very many segments (> 16 rings) show less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt

  18. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    error-rate and Word Error Rate (WER) by application of the proposed method. Keywords: speech recognition ... after the application of pronunciation dictionary modification. ... misrecognitions (Mis recog) and total errors occurring for the given data are examined. From these values, word error ...

  19. Barrier and operational risk analysis of hydrocarbon releases (BORA-Release). Part I. Method description.

    Science.gov (United States)

    Aven, Terje; Sklet, Snorre; Vinnem, Jan Erik

    2006-09-21

    Investigations of major accidents show that technical, human, operational, as well as organisational factors influence the accident sequences. In spite of these facts, quantitative risk analyses of offshore oil and gas production platforms have focused on technical safety systems. This paper presents a method (called BORA-Release) for qualitative and quantitative risk analysis of the platform specific hydrocarbon release frequency. By using BORA-Release it is possible to analyse the effect of safety barriers introduced to prevent hydrocarbon releases, and how platform specific conditions of technical, human, operational, and organisational risk influencing factors influence the barrier performance. BORA-Release comprises the following main steps: (1) development of a basic risk model including release scenarios, (2) modelling the performance of safety barriers, (3) assignment of industry average probabilities/frequencies and risk quantification based on these probabilities/frequencies, (4) development of risk influence diagrams, (5) scoring of risk influencing factors, (6) weighting of risk influencing factors, (7) adjustment of industry average probabilities/frequencies, and (8) recalculation of the risk in order to determine the platform specific risk related to hydrocarbon release. The various steps in BORA-Release are presented and discussed. Part II of the paper presents results from a case study where BORA-Release is applied.
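    Steps (5)–(7) above score and weight the risk influencing factors and then adjust the industry-average frequencies. The sketch below shows one plausible, simplified adjustment scheme; the scoring scale, weights, and interpolation used here are assumptions for illustration, not the exact BORA-Release formulas.

```python
def adjust_frequency(industry_average, rif_scores, rif_weights, spread=10.0):
    """Adjust an industry-average release frequency with platform-specific RIFs.

    rif_scores  : dict of RIF -> score on an assumed 1 (best) .. 6 (worst) scale
    rif_weights : dict of RIF -> weight, summing to 1
    spread      : assumed maximum factor up/down relative to the industry average

    This linear-interpolation scheme is a simplification for illustration; the
    exact scoring scale, weights, and adjustment formulas are defined by the
    BORA-Release method itself.
    """
    assert abs(sum(rif_weights.values()) - 1.0) < 1e-9
    # Map the weighted average score (1..6) onto a multiplicative factor in
    # [1/spread, spread], with the industry average at the scale midpoint.
    avg_score = sum(rif_weights[r] * rif_scores[r] for r in rif_scores)
    midpoint = 3.5
    exponent = (avg_score - midpoint) / (6.0 - midpoint)
    return industry_average * spread ** exponent

# Example: worse-than-average technical condition, better-than-average competence.
scores = {"technical condition": 4, "competence": 2, "procedures": 3}
weights = {"technical condition": 0.5, "competence": 0.3, "procedures": 0.2}
print(adjust_frequency(1.0e-3, scores, weights))
```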

  20. Barrier and operational risk analysis of hydrocarbon releases (BORA-Release)

    Energy Technology Data Exchange (ETDEWEB)

    Aven, Terje [University of Stavanger (UiS), NO-4036 Stavanger (Norway); Sklet, Snorre [Department of Production and Quality Engineering, Norwegian University of Science and Technology (NTNU), NO-7491 Trondheim (Norway)]. E-mail: snorre.sklet@sintef.no; Vinnem, Jan Erik [University of Stavanger (UiS), NO-4036 Stavanger (Norway)

    2006-09-21

    Investigations of major accidents show that technical, human, operational, as well as organisational factors influence the accident sequences. In spite of these facts, quantitative risk analyses of offshore oil and gas production platforms have focused on technical safety systems. This paper presents a method (called BORA-Release) for qualitative and quantitative risk analysis of the platform specific hydrocarbon release frequency. By using BORA-Release it is possible to analyse the effect of safety barriers introduced to prevent hydrocarbon releases, and how platform specific conditions of technical, human, operational, and organisational risk influencing factors influence the barrier performance. BORA-Release comprises the following main steps: (1) development of a basic risk model including release scenarios, (2) modelling the performance of safety barriers, (3) assignment of industry average probabilities/frequencies and risk quantification based on these probabilities/frequencies, (4) development of risk influence diagrams, (5) scoring of risk influencing factors, (6) weighting of risk influencing factors, (7) adjustment of industry average probabilities/frequencies, and (8) recalculation of the risk in order to determine the platform specific risk related to hydrocarbon release. The various steps in BORA-Release are presented and discussed. Part II of the paper presents results from a case study where BORA-Release is applied.

  1. 14 CFR 417.227 - Toxic release hazard analysis.

    Science.gov (United States)

    2010-01-01

    ... members of the public on land and on any waterborne vessels, populated offshore structures, and aircraft... 14 Aeronautics and Space, 2010-01-01. DEPARTMENT OF TRANSPORTATION, LICENSING, LAUNCH SAFETY, Flight Safety Analysis, § 417.227 Toxic release hazard...

  2. Performance analysis of fault-tolerant quantum error correction against non-Clifford errors

    Science.gov (United States)

    Sugiyama, Takanori; Fujii, Keisuke; Nagata, Haruhisa; Tanaka, Fuyuhiko

    As error rates of quantum gates implemented in recent experiments approach a fault-tolerant threshold of a 2D planar surface code against a depolarizing noise model, it becomes more important to investigate the performance of quantum error correction codes against more general and realistic noise models. A brute-force simulation for the investigation on a classical computer requires an exponential amount of memory, and we need alternative methods for the purpose. The standard approach assuming depolarizing (or Clifford) error models, which is not realistic, can overestimate the performance, and it is not valid to apply the results to experiments. On the other hand, a rigorous approach with the diamond norm is applicable to realistic error models but greatly underestimates the performance and is not practical. Here we propose a new theoretical framework for evaluating performances of quantum error correction, which is practical and applicable to a wider class of error models. We apply the method to a quantum 1D repetition code, and numerically evaluate the performance. This work was supported by the JSPS Research Fellowships for Young Scientists (PD) (No.27-276) and the Grant-in-Aid for Young Scientists (B) (No. 24700273).

  3. SIRTF Focal Plane Survey: A Pre-flight Error Analysis

    Science.gov (United States)

    Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.

    2003-01-01

    This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet the focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]) and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames meet their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.

  4. Error Analysis of CM Data Products Sources of Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-01

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE’s Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision-making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  5. NanoOK: multi-reference alignment analysis of nanopore sequencing data, quality and error profiles.

    Science.gov (United States)

    Leggett, Richard M; Heavens, Darren; Caccamo, Mario; Clark, Matthew D; Davey, Robert P

    2016-01-01

    The Oxford Nanopore MinION sequencer, currently in pre-release testing through the MinION Access Programme (MAP), promises long reads in real-time from an inexpensive, compact, USB device. Tools have been released to extract FASTA/Q from the MinION base calling output and to provide basic yield statistics. However, no single tool yet exists to provide comprehensive alignment-based quality control and error profile analysis--something that is extremely important given the speed with which the platform is evolving. NanoOK generates detailed tabular and graphical output plus an in-depth multi-page PDF report including error profile, quality and yield data. NanoOK is multi-reference, enabling detailed analysis of metagenomic or multiplexed samples. Four popular Nanopore aligners are supported and it is easily extensible to include others. NanoOK is open-source software, implemented in Java with supporting R scripts. It has been tested on Linux and Mac OS X and can be downloaded from https://github.com/TGAC/NanoOK. A VirtualBox VM containing all dependencies and the DH10B read set used in this article is available from http://opendata.tgac.ac.uk/nanook/. A Docker image is also available from Docker Hub--see program documentation https://documentation.tgac.ac.uk/display/NANOOK. richard.leggett@tgac.ac.uk Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press.

  6. Alignment error analysis of detector array for spatial heterodyne spectrometer.

    Science.gov (United States)

    Jin, Wei; Chen, Di-Hu; Li, Zhi-Wei; Luo, Hai-Yan; Hong, Jin

    2017-12-10

    Spatial heterodyne spectroscopy (SHS) is a new spatial interference spectroscopy that can achieve high spectral resolution. Alignment errors of the detector array can significantly degrade the spectral resolution of an SHS system. Theoretical models for analyzing the alignment errors, which are divided into three kinds, are presented in this paper. Based on these models, the tolerance angle for each kind of error is given. Simulation results show that the alignment reaches an acceptable level when the slope error, tilt error, and rotation error are less than 1.21°, 1.21°, and 0.066°, respectively.

  7. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    Science.gov (United States)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  8. Boundary identification and error analysis of shocked material images

    Science.gov (United States)

    Hock, Margaret; Howard, Marylesa; Cooper, Leora; Meehan, Bernard; Nelson, Keith

    2017-06-01

    To compute quantities such as pressure and velocity from laser-induced shock waves propagating through materials, high-speed images are captured and analyzed. Shock images typically display high noise and spatially-varying intensities, causing conventional analysis techniques to have difficulty identifying boundaries in the images without making significant assumptions about the data. We present a novel machine learning algorithm that efficiently segments, or partitions, images with high noise and spatially-varying intensities, and provides error maps that describe a level of uncertainty in the partitioning. The user trains the algorithm by providing locations of known materials within the image but no assumptions are made on the geometries in the image. The error maps are used to provide lower and upper bounds on quantities of interest, such as velocity and pressure, once boundaries have been identified and propagated through equations of state. This algorithm will be demonstrated on images of shock waves with noise and aberrations to quantify properties of the wave as it progresses. DOE/NV/25946-3126 This work was done by National Security Technologies, LLC, under Contract No. DE- AC52-06NA25946 with the U.S. Department of Energy and supported by the SDRD Program.

  9. A Framework for Examining Mathematics Teacher Knowledge as Used in Error Analysis

    Science.gov (United States)

    Peng, Aihui; Luo, Zengru

    2009-01-01

    Error analysis is a basic and important task for mathematics teachers. Unfortunately, in the present literature there is a lack of detailed understanding about teacher knowledge as used in it. Based on a synthesis of the literature in error analysis, a framework for prescribing and assessing mathematics teacher knowledge in error analysis was…

  10. Error Analysis in Composition of Iranian Lower Intermediate Students

    Science.gov (United States)

    Taghavi, Mehdi

    2012-01-01

    Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower intermediate male students aged between 13 and 15. The topic given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

  11. Error treatment in students' written assignments in Discourse Analysis

    African Journals Online (AJOL)

    ... is generally no consensus on how lecturers should treat students' errors in written assignments, observations in this study enabled the researcher to provide certain strategies that lecturers can adopt. Key words: Error treatment; error handling; corrective feedback, positive cognitive feedback; negative cognitive feedback; ...

  12. The Impact of Text Genre on Iranian Intermediate EFL Students' Writing Errors: An Error Analysis Perspective

    Science.gov (United States)

    Moqimipour, Kourosh; Shahrokhi, Mohsen

    2015-01-01

    The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…

  13. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and we take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and Rayleigh fading channels. © 2011 IEEE.
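    The closed-form BER expressions in this record cannot be reproduced without the paper, but the scenario they describe can be sanity-checked with a generic Monte Carlo simulation. The sketch below estimates the BER of BPSK with L-branch maximum ratio combining over independent flat Rayleigh fading; it is a simplified stand-in (no relay, no decoding errors at the relay), and all parameter values are assumptions.

      import numpy as np

      def bpsk_mrc_ber(snr_db, n_branches, n_bits=200_000, rng=None):
          """Monte Carlo BER of BPSK with maximum ratio combining over
          independent flat Rayleigh fading branches (per-branch average SNR)."""
          rng = np.random.default_rng(rng)
          snr = 10 ** (snr_db / 10)
          bits = rng.integers(0, 2, n_bits)
          symbols = 1 - 2 * bits                      # BPSK mapping: 0 -> +1, 1 -> -1
          shape = (n_branches, n_bits)
          h = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
          noise = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) \
                  / np.sqrt(2 * snr)
          r = h * symbols + noise
          # MRC: weight each branch by its conjugate channel gain and sum.
          decision = np.real(np.sum(np.conj(h) * r, axis=0))
          detected = (decision < 0).astype(int)
          return np.mean(detected != bits)

      if __name__ == "__main__":
          for snr_db in (0, 5, 10):
              print(f"{snr_db:2d} dB, 2-branch MRC: BER ~ {bpsk_mrc_ber(snr_db, 2):.4g}")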

  14. Kitchen Physics: Lessons in Fluid Pressure and Error Analysis

    Science.gov (United States)

    Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano

    2017-02-01

    Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between pressure and temperature of an ideal gas in a constant-volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts and for investigating a variety of error analysis techniques that are frequently overlooked in introductory physics courses.

  15. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    Science.gov (United States)

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework : originally developed and tested within the U.S. military as a tool for investigating and analyzing the human : causes of aviation accidents. Based upon ...

  16. An analysis of tracking error in image-guided neurosurgery.

    Science.gov (United States)

    Gerard, Ian J; Collins, D Louis

    2015-10-01

    This study quantifies some of the technical and physical factors that contribute to error in image-guided interventions. Errors associated with tracking, tool calibration and registration between a physical object and its corresponding image were investigated and compared with theoretical descriptions of these errors. A precision-milled linear testing apparatus was constructed to perform the measurements. The tracking error was shown to increase in a linear fashion with distance normal to the camera, and the tracking error ranged between 0.15 and 0.6 mm. The tool calibration error increased as a function of distance from the camera and the reference tool (0.2-0.8 mm). The fiducial registration error was shown to improve when more points were used, up until a plateau value was reached which corresponded to the total fiducial localization error ([Formula: see text]0.8 mm). The target registration error distributions followed a [Formula: see text] distribution with the largest error and variation around fiducial points. To minimize errors, tools should be calibrated as close as possible to the reference tool and camera, and tools should be used as close to the front edge of the camera as possible throughout the intervention, with the camera pointed in the direction where accuracy is least needed during surgery.

  17. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions; they occur at high error rates for short durations and can impact services. To completely describe a burst error one has to know the bit pattern, which is impossible in practice on working systems. Therefore, under memoryless binary symmetric channel (MBSC) assumptions, performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of burst error is introduced using three different models; among them, the mathematical model is used in this study. The probability density function f(b) of a burst error of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within a CRC block. The simulation results show that the mean block burst error tends to approach the pattern of burst error that random bit errors generate.
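    The burst-error detection capability of CRC codes discussed in this record can be demonstrated with a short, generic sketch. It uses the standard bitwise CRC-16-CCITT, not the specific CRC-n codes or burst model of the study, and the frame contents and burst positions are arbitrary illustration values; any burst no longer than the CRC width is guaranteed to be detected.

      def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
          """Bitwise CRC-16-CCITT over a byte string."""
          crc = init
          for byte in data:
              crc ^= byte << 8
              for _ in range(8):
                  crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
          return crc

      def corrupt_with_burst(frame: bytes, start_bit: int, length: int) -> bytes:
          """Flip `length` consecutive bits starting at `start_bit`."""
          out = bytearray(frame)
          for b in range(start_bit, start_bit + length):
              out[b // 8] ^= 1 << (7 - b % 8)
          return bytes(out)

      if __name__ == "__main__":
          frame = b"example DS1 payload"
          reference = crc16_ccitt(frame)
          for burst_len in (3, 8, 16):
              corrupted = corrupt_with_burst(frame, start_bit=20, length=burst_len)
              print(f"burst of {burst_len:2d} bits detected:",
                    crc16_ccitt(corrupted) != reference)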

  18. Fixed-point error analysis of Winograd Fourier transform algorithms

    Science.gov (United States)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.
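    The data-quantization part of the error studied in this record can be illustrated numerically: quantize the input sequence to a given number of fixed-point bits, transform it with a floating-point FFT, and compare against the unquantized result. This sketch uses NumPy's FFT rather than the WFTA or GW algorithms, and it only captures input quantization, not coefficient quantization or internal fixed-point arithmetic, so it illustrates the measurement idea rather than the reported comparison.

      import numpy as np

      def quantize(x, bits):
          """Round a signal in [-1, 1) to a fixed-point grid with `bits` total bits."""
          step = 2.0 ** (1 - bits)                 # one sign bit, bits-1 fractional bits
          return np.clip(np.round(x / step) * step, -1.0, 1.0 - step)

      def data_quantization_snr(n=1024, bits=12, rng=None):
          """Output SNR (dB) of an FFT whose input was quantized to `bits` bits,
          relative to the FFT of the unquantized signal."""
          rng = np.random.default_rng(rng)
          x = rng.uniform(-1.0, 1.0, n)
          X_ref = np.fft.fft(x)
          err = np.fft.fft(quantize(x, bits)) - X_ref
          return 10 * np.log10(np.sum(np.abs(X_ref) ** 2) / np.sum(np.abs(err) ** 2))

      if __name__ == "__main__":
          for b in (8, 12, 16):
              print(f"{b:2d}-bit input: output SNR ~ {data_quantization_snr(bits=b):.1f} dB")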

  19. Analysis of error-correction constraints in an optical disk

    Science.gov (United States)

    Roberts, Jonathan D.; Ryley, Alan; Jones, David M.; Burke, David

    1996-07-01

    The compact disk read-only memory (CD-ROM) is a mature storage medium with complex error control. It comprises four levels of Reed Solomon codes allied to a sequence of sophisticated interleaving strategies and 8:14 modulation coding. New storage media are being developed and introduced that place still further demands on signal processing for error correction. It is therefore appropriate to explore thoroughly the limit of existing strategies to assess future requirements. We describe a simulation of all stages of the CD-ROM coding, modulation, and decoding. The results of decoding the burst error of a prescribed number of modulation bits are discussed in detail. Measures of residual uncorrected error within a sector are displayed by C1, C2, P, and Q error counts and by the status of the final cyclic redundancy check (CRC). Where each data sector is encoded separately, it is shown that error-correction performance against burst errors depends critically on the position of the burst within a sector. The C1 error measures the burst length, whereas C2 errors reflect the burst position. The performance of Reed Solomon product codes is shown by the P and Q statistics. It is shown that synchronization loss is critical near the limits of error correction. An example is given of miscorrection that is identified by the CRC check.

  20. Error Analysis of the IGS repro2 Station Position Time Series

    Science.gov (United States)

    Rebischung, P.; Ray, J.; Benoist, C.; Metivier, L.; Altamimi, Z.

    2015-12-01

    Eight Analysis Centers (ACs) of the International GNSS Service (IGS) have completed a second reanalysis campaign (repro2) of the GNSS data collected by the IGS global tracking network back to 1994, using the latest available models and methodology. The AC repro2 contributions include in particular daily terrestrial frame solutions, for the first time with sub-weekly resolution for the full IGS history. The AC solutions, comprising positions for 1848 stations with daily polar motion coordinates, were combined to form the IGS contribution to the next release of the International Terrestrial Reference Frame (ITRF2014). Inter-AC position consistency is excellent, about 1.5 mm horizontal and 4 mm vertical. The resulting daily combined frames were then stacked into a long-term cumulative frame assuming generally linear motions, which constitutes the GNSS input to the ITRF2014 inter-technique combination. A special challenge involved identifying the many position discontinuities, averaging about 1.8 per station. A stacked periodogram of the station position residual time series from this long-term solution reveals a number of unexpected spectral lines (harmonics of the GPS draconitic year, fortnightly tidal lines) on top of a white+flicker background noise and strong seasonal variations. In this study, we will present results from station- and AC-specific analyses of the noise and periodic errors present in the IGS repro2 station position time series. So as to better understand their sources, and in view of developing a spatio-temporal error model, we will focus in particular on the spatial distribution of the noise characteristics and of the periodic errors. By computing AC-specific long-term frames and analyzing the respective residual time series, we will additionally study how the characteristics of the noise and of the periodic errors depend on the adopted analysis strategy and reduction software.
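    The stacked periodogram described in this record can be sketched on synthetic data: daily position residuals containing an annual term, a draconitic-like harmonic and white noise. All amplitudes and the 351.4-day draconitic period are illustrative assumptions, and a real analysis would stack periodograms over many stations and handle data gaps and flicker noise.

      import numpy as np

      def periodogram(residuals, dt_days=1.0):
          """One-sided power spectrum of an evenly sampled residual series."""
          n = residuals.size
          spec = np.abs(np.fft.rfft(residuals - residuals.mean())) ** 2 / n
          freq = np.fft.rfftfreq(n, d=dt_days)         # cycles per day
          return freq, spec

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          t = np.arange(0, 20 * 365.25)                # ~20 years of daily positions (days)
          series = (2.0 * np.sin(2 * np.pi * t / 365.25)      # annual signal, mm
                    + 0.8 * np.sin(2 * np.pi * t / 351.4)     # draconitic-like line, mm
                    + rng.normal(0.0, 1.5, t.size))           # white noise, mm
          freq, spec = periodogram(series)
          peaks = np.argsort(spec[1:])[-3:] + 1        # three strongest non-zero frequencies
          for k in sorted(peaks):
              print(f"period ~ {1.0 / freq[k]:7.1f} days, power = {spec[k]:.1f}")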

  1. An Analysis Methodology for Stochastic Characteristic of Volumetric Error in Multiaxis CNC Machine Tool

    Directory of Open Access Journals (Sweden)

    Qiang Cheng

    2013-01-01

    Full Text Available Traditional approaches to error modeling and analysis of machine tools seldom consider the probability characteristics of the geometric error and volumetric error systematically. However, the individual geometric errors measured at different points are variable and stochastic, and therefore the resultant volumetric error is also stochastic and uncertain. In order to address the stochastic characteristics of the volumetric error for a multiaxis machine tool, a new probability analysis mathematical model of volumetric error is proposed in this paper. According to multibody system theory, a mean value analysis model for volumetric error is established with consideration of geometric errors. The probability characteristics of geometric errors are obtained by statistical analysis of the measured sample data. Based on probability statistics and stochastic process theory, the variance analysis model of volumetric error is established in matrix form, which avoids the complex mathematical operations of direct differentiation. A four-axis horizontal machining center is selected as an illustrative example. The analysis results reveal the stochastic characteristics of the volumetric error and are also helpful for making full use of the best workspace to reduce the random uncertainty of the volumetric error and improve the machining accuracy.
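    The variance-propagation idea in this record can be illustrated with a toy Monte Carlo model: draw a few independent geometric error components, combine them into a volumetric error, and report its mean and spread. The simple vector sum below replaces the paper's multibody-system error model, and all error statistics are invented for illustration.

      import numpy as np

      def volumetric_error_stats(n_samples=100_000, rng=None):
          """Toy Monte Carlo propagation of geometric error components (micrometres)
          into the magnitude of the volumetric error; returns mean and std."""
          rng = np.random.default_rng(rng)
          dx = rng.normal(1.0, 2.0, n_samples)     # X positioning error
          dy = rng.normal(-0.5, 1.5, n_samples)    # Y positioning error
          dz = rng.normal(0.2, 1.0, n_samples)     # Z positioning error
          sq_xy = rng.normal(0.0, 0.8, n_samples)  # squareness contribution in XY
          sq_yz = rng.normal(0.0, 0.6, n_samples)  # squareness contribution in YZ
          volumetric = np.sqrt((dx + sq_xy) ** 2 + (dy + sq_yz) ** 2 + dz ** 2)
          return volumetric.mean(), volumetric.std()

      if __name__ == "__main__":
          mean, std = volumetric_error_stats()
          print(f"volumetric error: mean ~ {mean:.2f} um, std ~ {std:.2f} um")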

  2. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy

    Indian Academy of Sciences (India)

    2016-01-27

    Some serious faults have been found in the error analysis of SNIa observations. Redoing the same error analysis of SNIa according to our approach, we find that the average total observational error of SNIa is clearly greater than 0.55, so it cannot be decided whether the expansion of the Universe is accelerating or not.

  3. Error Analysis in the Classroom. CAL-ERIC/CLL Series on Languages and Linguistics, No. 12.

    Science.gov (United States)

    Powell, Patricia B.

    This paper begins with a discussion of the meaning and importance of error analysis in language teaching and learning. The practical implications of what error analysis is for the classroom teacher are discussed, along with several possible systems for classifying learner errors. The need for the language teacher to establish certain priorities in…

  4. An error taxonomy system for analysis of haemodialysis incidents.

    Science.gov (United States)

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting an error taxonomy system that assumed no specific specialty to haemodialysis situations. It was applied to 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to the dialyser, circuit, medication and setting of dialysis conditions. Approximately 70% of errors took place immediately before and after the four hours of haemodialysis therapy. The error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified under staff human factors, communication, task and organisational factors were found in most dialysis incidents. Device/equipment/materials, medicine and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicine and documents, whereas dialysis technologists made more errors with device/equipment/materials. This error taxonomy system is able not only to investigate incidents and adverse events occurring in the dialysis setting but also to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  5. Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1999-01-01

    and slope errors in conjunction with a surface-parallel flow assumption. The most surprising result is that, assuming a stationary flow, the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow...

  6. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  7. English Majors' Errors in Translating Arabic Endophora: Analysis and Remedy

    Science.gov (United States)

    Abdellah, Antar Solhy

    2007-01-01

    Egyptian English majors in the faculty of Education, South Valley University tend to mistranslate the plural inanimate Arabic pronoun with the singular inanimate English pronoun. A diagnostic test was designed to analyze this error. Results showed that a large number of students (first year and fourth year students) make this error, that the error…

  8. Analysis of Students' Errors on Linear Programming at Secondary ...

    African Journals Online (AJOL)

    The purpose of this study was to identify secondary school students' errors on linear programming at 'O' level. It is based on the fact that students' errors inform teaching hence an essential tool for any serious mathematics teacher who intends to improve mathematics teaching. The study was guided by a descriptive survey ...

  9. THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS

    Directory of Open Access Journals (Sweden)

    Natalia Bakhova

    2011-03-01

    Full Text Available Abstract. The most important practical questions of reliably estimating finite element method errors are considered. Rules for defining the necessary calculation accuracy are developed. Methods and approaches are offered that allow the best final results to be obtained at an economical expenditure of computing work. Keywords: error, given accuracy, finite element method, Lagrangian and Hermitian elements.

  10. An Analysis of the Factors Responsible for Errors in Nigerian ...

    African Journals Online (AJOL)

    The paper presents an empirical study of factors responsible for errors in Nigerian construction documents and aims at identifying the significant factors that are responsible for errors in the Nigerian construction documents. Information was obtained from both consultants (the producers of construction documents) and ...

  11. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After discovering the origin of the errors, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module as a function of the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  12. Barrier and operational risk analysis of hydrocarbon releases (BORA-Release). Part II: Results from a case study.

    Science.gov (United States)

    Sklet, Snorre; Vinnem, Jan Erik; Aven, Terje

    2006-09-21

    This paper presents results from a case study carried out on an offshore oil and gas production platform with the purpose to apply and test BORA-Release, a method for barrier and operational risk analysis of hydrocarbon releases. A description of the BORA-Release method is given in Part I of the paper. BORA-Release is applied to express the platform specific hydrocarbon release frequencies for three release scenarios for selected systems and activities on the platform. The case study demonstrated that the BORA-Release method is a useful tool for analysing the effect on the release frequency of safety barriers introduced to prevent hydrocarbon releases, and to study the effect on the barrier performance of platform specific conditions of technical, human, operational, and organisational risk influencing factors (RIFs). BORA-Release may also be used to analyse the effect on the release frequency of risk reducing measures.

  13. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    Science.gov (United States)

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process complements shortages in existing methodologies by incorporating improvement efficiency, and it enhances the depth and broadness of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
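    The ranking step in this record can be sketched with classical (crisp) TOPSIS; the fuzzy variant used in the paper additionally works with fuzzy ratings and a defuzzified distance, which is omitted here. The criteria, weights, and scores below are hypothetical, not the study's data.

      import numpy as np

      def topsis(matrix, weights, benefit):
          """Rank alternatives with classical TOPSIS.
          matrix: alternatives x criteria scores; weights sum to 1;
          benefit[j] is True if a larger value is better for criterion j."""
          m = matrix / np.linalg.norm(matrix, axis=0)      # vector normalization
          v = m * weights                                  # weighted normalized matrix
          ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
          anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
          d_pos = np.linalg.norm(v - ideal, axis=1)
          d_neg = np.linalg.norm(v - anti, axis=1)
          return d_neg / (d_pos + d_neg)                   # closeness coefficient

      if __name__ == "__main__":
          # Hypothetical scores of three latent-error factors on four criteria.
          scores = np.array([[7.0, 6.0, 8.0, 5.0],    # adverse physiological states
                             [6.0, 7.0, 7.0, 6.0],    # physical/mental limitations
                             [8.0, 5.0, 6.0, 7.0]])   # coordination/communication/planning
          weights = np.array([0.3, 0.2, 0.3, 0.2])
          benefit = np.array([True, True, False, True])   # third criterion treated as a cost
          for name, c in zip(("physiological", "limitations", "coordination"),
                             topsis(scores, weights, benefit)):
              print(f"{name:14s} closeness = {c:.3f}")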

  14. Analysis of the Identification Principle of Yaw Error of Five-axis Machine Tool Rotary Table in the Virtue Error Sensitive Direction Based on the Machining Test

    Science.gov (United States)

    Zhang, Y.; Zhang, L.

    2017-12-01

    The principle of identifying the yaw error of a five-axis machine tool rotary table in the virtue error sensitive direction with a double ball bar (DBB) is analysed. According to the measurement principle of the DBB, the virtue error sensitive direction was adopted, and a machining test to identify the yaw error of the rotary table is designed and discussed. The functional relationship between the yaw error and the machining error in the corresponding sensitive direction was deduced, and the analysis shows that the yaw error can be separated from the machining test.

  15. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage human errors likely to occur in such operations. Methods: This study was performed at a demining site in war zones located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, job tasks related to clearing landmines were specified. Next, these tasks were studied using HTA and related possible errors were assessed using ATHEANA. Results: The de-mining task was composed of four main operations, including primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: walking on mines, leaving mines with no action, errors in the neutralization operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in de-mining operations can be attributed to various factors such as poor weather and operating conditions like outdoor work, inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in de-mining operations, the aforementioned factors should be managed properly.
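    For orientation only, the sketch below shows one common way task-level human error probabilities can be combined into an overall figure under an independence assumption. The per-task values are hypothetical, and this arithmetic is not how ATHEANA produced the reported 0.010.

      # Hypothetical per-task human error probabilities for the four de-mining
      # operations; values are illustrative only and not taken from the study.
      p_tasks = {
          "primary detection": 0.004,
          "technical identification": 0.003,
          "investigation": 0.002,
          "neutralization": 0.001,
      }

      p_no_error = 1.0
      for p in p_tasks.values():
          p_no_error *= (1.0 - p)

      print(f"probability of at least one error ~ {1.0 - p_no_error:.4f}")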

  16. Accidental iatrogenic intoxications by cytotoxic drugs: error analysis and practical preventive strategies.

    Science.gov (United States)

    Zernikow, B; Michel, E; Fleischhack, G; Bode, U

    1999-07-01

    Drug errors are quite common. Many of them become harmful only if they remain undetected, ultimately resulting in injury to the patient. Errors with cytotoxic drugs are especially dangerous because of the highly toxic potential of the drugs involved. For medico-legal reasons, only 1 case of accidental iatrogenic intoxication by cytotoxic drugs tends to be investigated at a time, because the focus is placed on individual responsibility rather than on system errors. The aim of our study was to investigate whether accidental iatrogenic intoxications by cytotoxic drugs are faults of either the individual or the system. The statistical analysis of distribution and quality of such errors, and the in-depth analysis of contributing factors delivered a rational basis for the development of practical preventive strategies. A total of 134 cases of accidental iatrogenic intoxication by a cytotoxic drug (from literature reports since 1966 identified by an electronic literature survey, as well as our own unpublished cases) underwent a systematic error analysis based on a 2-dimensional model of error generation. Incidents were classified by error characteristics and point in time of occurrence, and their distribution was statistically evaluated. The theories of error research, informatics, sensory physiology, cognitive psychology, occupational medicine and management have helped to classify and depict potential sources of error as well as reveal clues for error prevention. Monocausal errors were the exception. In the majority of cases, a confluence of unfavourable circumstances either brought about the error, or prevented its timely interception. Most cases with a fatal outcome involved erroneous drug administration. Object-inherent factors were the predominant causes. A lack of expert as well as general knowledge was a contributing element. In error detection and prevention of error sequelae, supervision and back-checking are essential. Improvement of both the individual

  17. Errors Analysis of Solving Linear Inequalities among the Preparatory Year Students at King Saud University

    Science.gov (United States)

    El-khateeb, Mahmoud M. A.

    2016-01-01

    This study aims to investigate the classes of errors made by preparatory year students at King Saud University, through analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios of common errors that occurred in solving inequalities. In the collection of the data,…

  18. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

    Science.gov (United States)

    Herzberg, Tina

    2010-01-01

    In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

  19. Error analysis for Winters' Additive Seasonal Forecasting System

    OpenAIRE

    McKenzie, Edward

    1984-01-01

    A procedure for deriving the variance of the forecast error for Winters' Additive Seasonal Forecasting system is given. Both point and cumulative T-step-ahead forecasts are dealt with. Closed-form expressions are given in the cases when the model is (i) trend-free and (ii) non-seasonal. The effects of renormalization of the seasonal factors are also discussed. The fact that the error variance for this system can be infinite is discussed, and the relationship of this property ...

  20. Analysis of Random Errors in Horizontal Sextant Angles

    Science.gov (United States)

    1980-09-01

    sea horizon, bringing the direct and reflected images into coincidence and reading the micrometer and vernier. This is repeated several times... Differences due to the direction of rotation of the micrometer drum were examined, as well as the variability in the determination of sextant index error. ...minutes of arc, respectively. In addition, systematic errors resulting from angular differences due to the direction of rotation of the micrometer drum

  1. Systematic error analysis for 3D nanoprofiler tracing normal vector

    Science.gov (United States)

    Kudo, Ryota; Tokuta, Yusuke; Nakano, Motohiro; Yamamura, Kazuya; Endo, Katsuyoshi

    2015-10-01

    In recent years, demand for optical elements with high-degree-of-freedom shapes has increased. High-precision aspherical shapes are required for X-ray focusing mirrors, and free-form surface optical elements are used in head-mounted displays. Measurement technology is essential for the fabrication of such optical devices. We have developed a high-precision 3D nanoprofiler. With the nanoprofiler, normal vector information of the sample surface is obtained on the basis of the linearity of light. Since the normal vector information is the differential of the shape, the shape can be determined by integration. Sub-nanometer repeatability has been achieved by the nanoprofiler. To pursue shape accuracy, the systematic errors are analyzed. The systematic errors are the figure error of the sample and the assembly errors of the device. The method utilizes information about the ideal shape of the sample, from which the measurement point coordinates and normal vectors are calculated. However, the measured figure differs from the ideal shape because of the systematic errors. Therefore, the measurement point coordinates and normal vectors are calculated again by feeding back the measured figure. Correction of errors has been attempted by this figure re-derivation. Its effectiveness was confirmed theoretically by simulation. The approach was also applied to an experiment, which confirmed the possibility of correcting the figure by about 4 nm PV in the employed sample.
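    The "shape from normal vectors by integration" step described above can be sketched in one dimension: the normal vector fixes the local slope, and cumulative trapezoidal integration of the slope recovers the height profile. The parabolic test profile and sampling below are assumptions; the real instrument works on 2D figures and must also handle the feedback correction of systematic errors described in the record.

      import numpy as np

      def integrate_slope(x, slope, z0=0.0):
          """Recover a 1D height profile from measured slopes dz/dx by
          cumulative trapezoidal integration."""
          dz = np.diff(x) * 0.5 * (slope[1:] + slope[:-1])
          return np.concatenate(([z0], z0 + np.cumsum(dz)))

      if __name__ == "__main__":
          x = np.linspace(-5e-3, 5e-3, 2001)            # lateral coordinate, m
          true_profile = x ** 2 / (2 * 0.1)             # parabola with 0.1 m vertex radius
          slope = np.gradient(true_profile, x)          # what a normal-vector probe senses
          recon = integrate_slope(x, slope, z0=true_profile[0])
          print("max reconstruction error:",
                np.max(np.abs(recon - true_profile)), "m")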

  2. Error analysis of the freshmen Criminology students’ grammar in the written English

    Directory of Open Access Journals (Sweden)

    Maico Demi Banate Aperocho

    2017-12-01

    Full Text Available This study identifies the various syntactical errors of fifty (50) freshmen B.S. Criminology students of the University of Mindanao in Davao City. Specifically, this study aims to answer the following: (1) What are the common errors present in the argumentative essays of the respondents? (2) What are the reasons for the existence of these errors? This study is descriptive-qualitative. It also uses error analysis to point out the syntactical errors present in the compositions of the participants. The fifty essays were subjected to error analysis. Errors were classified based on Chanquoy’s Classification of Writing Errors. Furthermore, Hourani’s Common Reasons of Grammatical Errors Checklist was also used to determine the common reasons for the identified syntactical errors. To create a meaningful interpretation of the data and to solicit further ideas from the participants, a focus group discussion was also done. Findings show that students’ most common errors are in the grammatical aspect. In the grammatical aspect, students more frequently committed errors in the verb aspect (tense, subject agreement, and auxiliary and linker choice) compared to the spelling and punctuation aspects. Moreover, there are three topmost reasons for committing errors in the paragraph: mother tongue interference, incomprehensibility of the grammar rules, and incomprehensibility of the writing mechanics. Despite the difficulty in learning English as a second language, students are still very motivated to master the concepts and applications of the language.

  3. An analysis of the grammatical errors of Igbo-Speaking graduates ...

    African Journals Online (AJOL)

    Based on these findings, recommendations were made and they include the restructuring of the English language teacher, education curriculum to integrate contrastive analysis and error analysis as well as the use of interactive strategies in teaching to enhance practice. Keywords: Analysis, Grammatical errors, Igbo ...

  4. Error Consistency in Acquired Apraxia of Speech With Aphasia: Effects of the Analysis Unit.

    Science.gov (United States)

    Haley, Katarina L; Cunningham, Kevin T; Eaton, Catherine Torrington; Jacks, Adam

    2018-02-15

    Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain discrepancies in error consistency results based on the unit of analysis (segment, syllable, or word) to help determine which diagnostic recommendation is most appropriate. We analyzed speech samples from 14 left-hemisphere stroke survivors with clinical diagnoses of AOS and aphasia. Each participant produced 3 multisyllabic words 5 times in succession. Broad phonetic transcriptions of these productions were coded for consistency of error location and type using the word and its constituent syllables and sound segments as units of analysis. Consistency of error type varied systematically with the unit of analysis, showing progressively greater consistency as the analysis unit changed from the word to the syllable and then to the sound segment. Consistency of error location varied considerably across participants and correlated positively with error frequency. Low to moderate consistency of error type at the word level confirms original diagnostic accounts of speech output and sound errors in AOS as variable in form. Moderate to high error type consistency at the syllable and sound levels indicate that phonetic error patterns are present. The results are complementary and logically compatible with each other and with the literature.

  5. BNL-built LHC magnet error impact analysis and compensation

    CERN Document Server

    Ptitsyn, V I; Wei, J

    1999-01-01

    Superconducting magnets built at the Brookhaven National Laboratory will be installed in the Insertion Regions IP2 and IP8 and in the RF region of the Large Hadron Collider (LHC). In particular, the field quality of these IR dipoles will become important during LHC heavy-ion operation when the beta* at IP2 is reduced to 0.5 meters. This paper studies the impact of the magnetic errors in BNL-built magnets on LHC performance at injection and collision, for both proton and heavy-ion operation. Methods and schemes for error compensation are considered, including optimization of magnet orientation and compensation using local IR correctors. (2 refs).

  6. Geometric Error Analysis in Applied Calculus Problem Solving

    Science.gov (United States)

    Usman, Ahmed Ibrahim

    2017-01-01

    The paper investigates geometric errors students made as they tried to use their basic geometric knowledge in the solution of the Applied Calculus Optimization Problem (ACOP). Inaccuracies related to the drawing of geometric diagrams (visualization skills) and those associated with the application of basic differentiation concepts into ACOP…

  7. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    Science.gov (United States)

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  8. Error-analysis Based Second Language Teaching Strategies ...

    African Journals Online (AJOL)

    This study examines errors in 60 essays written by 60 students. The participants are class three students who are studying at a secondary school in Owerri North; 27 male and 33 female. They have experienced approximately the same number of years of education through primary and secondary education in Imo State.

  9. Oral Definitions of Newly Learned Words: An Error Analysis

    Science.gov (United States)

    Steele, Sara C.

    2012-01-01

    This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…

  10. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    Background: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence

  11. Human error in strabismus surgery : Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  12. Analysis of Students' Error in Learning of Quadratic Equations

    Science.gov (United States)

    Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

    2010-01-01

    The purpose of the study was to determine the students' errors in learning quadratic equations. The samples were 30 form-three students from a secondary school in Jambi, Indonesia. A diagnostic test was used as the instrument of this study; it included three components: factorization, completing the square, and the quadratic formula. Diagnostic interview…

  13. Linguistic Error Analysis on Students' Thesis Proposals

    Science.gov (United States)

    Pescante-Malimas, Mary Ann; Samson, Sonrisa C.

    2017-01-01

    This study identified and analyzed the common linguistic errors encountered by Linguistics, Literature, and Advertising Arts majors in their Thesis Proposal classes in the First Semester 2016-2017. The data were the drafts of the thesis proposals of the students from the three different programs. A total of 32 manuscripts were analyzed which was…

  14. Inborn errors of metabolism revealed by organic acid profile analysis ...

    African Journals Online (AJOL)

    Objective: To determine the prevalence and types of inborn errors of amino acid or organic acid metabolism in a group of high risk Egyptian children with clinical signs and symptoms suggestive of inherited metabolic diseases. Subjects and Methods: 117 (79 males ═ 67.5 % and 38 females ═ 32.5 %) high risk patients with ...

  15. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    Telugu language is one of the most widely spoken south Indian languages. In the proposed Telugu speech recognition system, errors obtained from decoder are analysed to improve the performance of the speech recognition system. Static pronunciation dictionary plays a key role in the speech recognition accuracy.

  16. AN ANALYSIS OF SUBJECT AGREEMENT ERRORS IN ENGLISH ...

    African Journals Online (AJOL)

    however, continuing prevalence of a wide range of errors in students' writing. ... were written before. In English, as in many other languages, one of the grammar rules is that the subjects and the verbs must agree both in number and in person. .... The incorrect sentences which were picked were the ones which had types of.

  17. Preliminary analysis of public dose from CFETR gaseous tritium release

    Energy Technology Data Exchange (ETDEWEB)

    Nie, Baojie [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); University of Science and Technology of China, Hefei, Anhui 230027 (China); Ni, Muyi, E-mail: muyi.ni@fds.org.cn [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China); Lian, Chao; Jiang, Jieqiong [Key Laboratory of Neutronics and Radiation Safety, Institute of Nuclear Energy Safety Technology, Chinese Academy of Sciences, Hefei, Anhui 230031 (China)

    2015-02-15

    Highlights: • Presents the amounts and dose limits of tritium release to the environment for CFETR. • Performs a preliminary simulation of the radiation dose from gaseous tritium release. • Key parameters, including soil type, wind speed, stability class, effective release height and age, were subjected to sensitivity analysis. • The tritium release amount is recalculated consistently with the dose limit in Chinese regulation for CFETR. - Abstract: To demonstrate tritium self-sufficiency and other engineering issues, the scientific conception of the Chinese Fusion Engineering Test Reactor (CFETR) has been proposed in China in parallel with ITER and before the DEMO reactor. Tritium environmental safety for CFETR is an important issue and must be evaluated because of the huge amount of tritium cycling in the reactor. In this work, different tritium release scenarios of CFETR and dose limit regulations in China are introduced, and the public dose is preliminarily analyzed under normal and accidental events. Furthermore, after a sensitivity analysis of the key input parameters, the public dose is re-evaluated based on extreme parameters. Finally, the tritium release amount is recalculated consistently with the dose limit in Chinese regulation for CFETR, which provides a reference for the tritium system design of CFETR.

  18. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discretely distributed FBG sensor arrays and using reconstruction algorithms, in which error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is carried out for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.
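    The LMS identification step named in this record can be illustrated with a minimal adaptive FIR filter that learns the coefficients relating an excitation to a measured response. This generic sketch stands in for the paper's dynamic error-analysis model; the filter length, step size, and synthetic signals are assumptions.

      import numpy as np

      def lms_identify(x, d, n_taps=4, mu=0.01):
          """Identify FIR coefficients mapping input x to desired output d
          with the least mean square (LMS) algorithm."""
          w = np.zeros(n_taps)
          for n in range(n_taps - 1, len(x)):
              u = x[n - n_taps + 1:n + 1][::-1]   # [x[n], x[n-1], ..., x[n-n_taps+1]]
              e = d[n] - w @ u                    # instantaneous error
              w += mu * e * u                     # stochastic gradient update
          return w

      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          true_w = np.array([0.8, -0.3, 0.2, 0.05])
          x = rng.standard_normal(20_000)
          d = np.convolve(x, true_w)[:len(x)] + 0.01 * rng.standard_normal(len(x))
          print("estimated:", np.round(lms_identify(x, d), 3))
          print("true:     ", true_w)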

  19. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  20. Spectral analysis of forecast error investigated with an observing system simulation experiment

    Directory of Open Access Journals (Sweden)

    Nikki C. Privé

    2015-02-01

    Full Text Available The spectra of analysis and forecast error are examined using the observing system simulation experiment framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office. A global numerical weather prediction model, the Global Earth Observing System version 5 with Gridpoint Statistical Interpolation data assimilation, is cycled for 2 months with once-daily forecasts to 336 hours to generate a Control case. Verification of forecast errors using the nature run (NR as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mis-characterising the spatial scales at which the strongest growth occurs. The NR-verified error variances exhibit a complicated progression of growth, particularly for low wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realisation of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.
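    A minimal version of the spectral error diagnostic used in these two records: take the difference between a forecast and the truth along one latitude circle and accumulate its variance per zonal wavenumber. The fields below are synthetic and one-dimensional; the actual OSSE works with full model states and verifies against the nature run.

      import numpy as np

      def error_variance_spectrum(forecast, truth):
          """Error variance per zonal wavenumber for 1D periodic samples."""
          err = forecast - truth
          coeffs = np.fft.rfft(err) / err.size
          spec = 2.0 * np.abs(coeffs) ** 2          # one-sided variance contribution
          spec[0] /= 2.0                            # the mean (wavenumber 0) counts once
          return spec

      if __name__ == "__main__":
          rng = np.random.default_rng(2)
          n = 360                                    # one sample per degree of longitude
          lon = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
          truth = 10 * np.sin(3 * lon) + 5 * np.sin(8 * lon)
          forecast = truth + 1.5 * np.sin(8 * lon + 0.3) + rng.normal(0, 0.5, n)
          spec = error_variance_spectrum(forecast, truth)
          for k in sorted(np.argsort(spec)[-3:]):
              print(f"wavenumber {k:3d}: error variance {spec[k]:.3f}")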

  1. Global Quantitative Sensitivity Analysis and Compensation of Geometric Errors of CNC Machine Tool

    Directory of Open Access Journals (Sweden)

    Shijie Guo

    2016-01-01

    Full Text Available A quantitative analysis to identify the key geometric error elements and their coupling is the prerequisite and foundation for improving the precision of machine tools. The purpose of this paper is to identify key geometric error elements and compensate for geometric errors accordingly. The geometric error model of a three-axis machine tool is built on the basis of multibody system theory, and the quantitative global sensitivity analysis (GSA) model of the geometric error elements is constructed by using the extended Fourier amplitude sensitivity test method. The crucial geometric errors are identified, and the stochastic characteristics of the geometric errors are taken into consideration in formulating the compensation strategy. The validity of geometric error compensation based on sensitivity analysis is verified on a high-precision three-axis machine tool with an open CNC system. The experimental results show that the average compensation rates along the X, Y, and Z directions are 59.8%, 65.5%, and 73.5%, respectively. The methods of sensitivity analysis and geometric error compensation presented in this paper are suitable for identifying the key geometric errors and effectively improving the precision of CNC machine tools.
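    As a stand-in for the extended Fourier amplitude sensitivity test used in this record, the sketch below estimates first-order variance-based sensitivity indices by brute force: bin each input, then compare the variance of the conditional output means with the total output variance. The toy error model and input distributions are invented for illustration.

      import numpy as np

      def first_order_sensitivity(model, dists, n=20_000, n_bins=40, rng=None):
          """Brute-force first-order (Sobol-style) sensitivity indices:
          Var(E[Y | X_j]) / Var(Y), estimated by quantile binning of each input."""
          rng = np.random.default_rng(rng)
          samples = np.column_stack([draw(rng, n) for draw in dists])
          y = model(samples)
          var_y = y.var()
          indices = []
          for j in range(samples.shape[1]):
              edges = np.quantile(samples[:, j], np.linspace(0, 1, n_bins + 1))
              which = np.clip(np.digitize(samples[:, j], edges) - 1, 0, n_bins - 1)
              cond_means = np.array([y[which == b].mean() for b in range(n_bins)])
              indices.append(cond_means.var() / var_y)
          return np.array(indices)

      if __name__ == "__main__":
          # Toy volumetric error as a function of three geometric error elements (um).
          model = lambda s: 1.5 * s[:, 0] + 0.5 * s[:, 1] + 0.2 * s[:, 2] ** 2
          dists = [lambda r, n: r.normal(0.0, 1.0, n),
                   lambda r, n: r.normal(0.0, 2.0, n),
                   lambda r, n: r.uniform(-3.0, 3.0, n)]
          print(np.round(first_order_sensitivity(model, dists), 3))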

  2. Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.

    Science.gov (United States)

    Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro

    2010-01-01

    This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one modulus and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CP), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous, as it offers a more complete analysis of the positional accuracy: the angular error component, the uniformity of the vector distribution, and the error isotropy, in addition to the modular error component analyzed by linear statistics.
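    The following sketch shows the modular/angular decomposition of 3D error vectors and two simple spherical statistics (mean resultant length and mean direction). The check-point coordinates and error magnitudes are synthetic placeholders, and the code is not the R packages mentioned in the record.

```python
# Minimal sketch of modular + angular error analysis for 3D check-point errors.
import numpy as np

rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 10.0, size=(53, 3))                 # digitizer CP coordinates [m]
measured = reference + rng.normal(scale=0.002, size=(53, 3))     # TLS measurements with error

err = measured - reference                      # 3D error vectors
modulus = np.linalg.norm(err, axis=1)           # modular error component

# Spherical angles of each error vector (angular error components)
theta = np.arccos(np.clip(err[:, 2] / modulus, -1.0, 1.0))   # colatitude (0..pi)
phi = np.arctan2(err[:, 1], err[:, 0])                       # azimuth (-pi..pi)

# Spherical statistics: mean resultant length (~0 for isotropic errors) and mean direction
unit = err / modulus[:, None]
resultant = unit.sum(axis=0)
r_bar = np.linalg.norm(resultant) / len(unit)
mean_dir = resultant / np.linalg.norm(resultant)

print("mean modular error [m]:", round(modulus.mean(), 4))
print("mean resultant length:", round(r_bar, 3))
print("mean error direction:", np.round(mean_dir, 3))
```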

  3. Study on error analysis and accuracy improvement for aspheric profile measurement

    Science.gov (United States)

    Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou

    2017-06-01

    Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach to measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point is located, leading to significantly incorrect surface errors. This paper studies simulated results for an asphere with rotational angles around the X-axis and Y-axis, and with stylus tip shifts in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, while different values of rotational error around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the larger the peak-to-valley value of the profile errors. To identify the rotational angles about the X-axis and Y-axis, algorithms are applied to analyze each angle respectively. The actual profile errors are then calculated from multiple profile measurements around the X-axis according to the proposed analysis flow chart; the aim of the multiple-measurement strategy is to reach the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces while avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.

  4. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    Science.gov (United States)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner, whose IQ is between 71 and 89, will have difficulties in solving mathematics problems that often lead to errors. These errors can be analyzed for where they occur and their type. This research is descriptive qualitative and aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class when solving fraction problems. The subject of this research is one slow-learning seventh-grade student, selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who works with the slow learner students. Data collection methods used in this study are written tasks and semistructured interviews. The collected data were analyzed using Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers identify the causes of the errors made by the slow learner.

  5. English Language Error Analysis of the Written Texts Produced by Ukrainian Learners: Data Collection

    Directory of Open Access Journals (Sweden)

    Lessia Mykolayivna Kotsyuk

    2015-12-01

    Full Text Available Recently, studies of second language acquisition have tended to focus on learners' errors, as they help to predict the difficulties involved in acquiring a second language. Thus, teachers can be made aware of the difficult areas their students will encounter and can pay special attention and devote emphasis to them. The research goals of the article are to define what error analysis is and how important it is in the L2 teaching process, to state the significance of corpus studies in identifying different types of errors and mistakes, and to provide the results of an error analysis of the corpus of written texts produced by Ukrainian learners. In this article, the major types of errors made by Ukrainian students in English as a second language are described.

  6. Using Online Error Analysis Items to Support Preservice Teachers' Pedagogical Content Knowledge in Mathematics

    Science.gov (United States)

    McGuire, Patrick

    2013-01-01

    This article describes how a free, web-based intelligent tutoring system (ASSISTment) was used to create online error analysis items for preservice elementary and secondary mathematics teachers. The online error analysis items challenged preservice teachers to analyze, diagnose, and provide targeted instructional remediation intended to help…

  7. PROCESSING AND ANALYSIS OF THE MEASURED ALIGNMENT ERRORS FOR RHIC.

    Energy Technology Data Exchange (ETDEWEB)

    PILAT,F.; HEMMER,M.; PTITSIN,V.; TEPIKIAN,S.; TRBOJEVIC,D.

    1999-03-29

    All elements of the Relativistic Heavy Ion Collider (RHIC) have been installed in ideal survey locations, which are defined as the optimum locations of the fiducials with respect to the positions generated by the design. The alignment process included the presurvey of all elements which could affect the beams. During this procedure special attention was paid to the precise determination of the quadrupole centers as well as the roll angles of the quadrupoles and dipoles. After installation the machine was surveyed, and the resulting as-built measured positions of the fiducials were stored and structured in the survey database. We describe how the alignment errors, inferred by comparison of ideal and as-built data, have been processed and analyzed by including them in the RHIC modeling software. The RHIC model, which also includes individual measured errors for all magnets in the machine and is automatically generated from databases, allows the study of the impact of the measured alignment errors on the machine.

  8. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting.

    Science.gov (United States)

    Torres-Sospedra, Joaquín; Moreira, Adriano

    2017-11-27

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities.
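    A minimal sketch of the deterministic fingerprinting setting discussed above: k-nearest-neighbour matching in signal space over a synthetic radio map, with the resulting position error flagged when it is unusually large. The path-loss model, noise levels, and threshold are assumptions, not the paper's data or algorithm.

```python
# Hedged illustration of k-NN Wi-Fi fingerprinting on synthetic RSS data.
import numpy as np

rng = np.random.default_rng(2)
n_rp, n_ap = 200, 6
rp_xy = rng.uniform(0, 50, size=(n_rp, 2))          # reference point positions [m]
ap_xy = rng.uniform(0, 50, size=(n_ap, 2))          # access point positions [m]

def rss(points):
    """Simple log-distance path-loss model used to generate synthetic fingerprints."""
    d = np.linalg.norm(points[:, None, :] - ap_xy[None, :, :], axis=2)
    return -40.0 - 20.0 * np.log10(np.maximum(d, 1.0))

radio_map = rss(rp_xy) + rng.normal(0, 2.0, size=(n_rp, n_ap))   # training survey

true_xy = np.array([[20.0, 30.0]])
observed = rss(true_xy) + rng.normal(0, 2.0, size=(1, n_ap))     # online measurement

# k-NN in signal space; position estimate = centroid of the k closest fingerprints
k = 3
dist_sig = np.linalg.norm(radio_map - observed, axis=1)
nearest = np.argsort(dist_sig)[:k]
estimate = rp_xy[nearest].mean(axis=0)

error = np.linalg.norm(estimate - true_xy[0])
print("position error [m]:", round(error, 2))
print("flag as suspect for hybrid fusion:", bool(error > 10.0))   # crude large-error check
```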

  9. Probability analysis of position errors using uncooled IR stereo camera

    Science.gov (United States)

    Oh, Jun Ho; Lee, Sang Hwa; Lee, Boo Hwan; Park, Jong-Il

    2016-05-01

    This paper analyzes the random behaviour of 3D positions when tracking moving objects using an infrared (IR) stereo camera, and proposes a probability model of the 3D positions. The proposed probability model integrates two random error phenomena. One is the pixel quantization error, which is caused by the discrete sampling of pixels when estimating disparity values with a stereo camera. The other is the timing jitter, which results from the irregular acquisition timing of uncooled IR cameras. This paper derives a probability distribution function by combining the jitter model with the pixel quantization error. To verify the proposed probability function of 3D positions, experiments on tracking fast-moving objects are performed using an IR stereo camera system. The 3D depths of the moving object are estimated by stereo matching and compared with the ground truth obtained by a laser scanner system. According to the experiments, the 3D depths of the moving object are estimated within the statistically reliable range that is well described by the proposed probability distribution. It is expected that the proposed probability model of 3D positions can be applied to various IR stereo camera systems that deal with fast-moving objects.
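    The two error sources named in this record can be illustrated with a small Monte Carlo sketch: disparity quantization plus acquisition-timing jitter spread the estimated depth of a moving object. The camera parameters, object speed, and jitter magnitude below are assumptions, and this is not the paper's derived probability model.

```python
# Monte Carlo sketch of depth error from disparity quantization and timing jitter.
import numpy as np

rng = np.random.default_rng(3)
f_px = 800.0          # focal length [pixels]       (assumed)
baseline = 0.20       # stereo baseline [m]         (assumed)
true_depth = 10.0     # object depth [m]
speed = 5.0           # object speed along depth [m/s]
frame_jitter = 2e-3   # acquisition timing jitter std dev [s] (assumed)

n = 100_000
# Timing jitter: the object has moved slightly by the (uncertain) capture instant
depth_at_capture = true_depth + speed * rng.normal(0.0, frame_jitter, n)
true_disp = f_px * baseline / depth_at_capture

# Pixel quantization: the measured disparity is rounded to the nearest integer pixel
measured_disp = np.round(true_disp)

depth_est = f_px * baseline / measured_disp
err = depth_est - true_depth
print("mean depth error [m]:", round(err.mean(), 3))
print("std of depth error [m]:", round(err.std(), 3))
```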

  10. Simultaneous control of error rates in fMRI data analysis.

    Science.gov (United States)

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-12-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. Copyright © 2015 Elsevier Inc. All rights reserved.
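    A schematic example of the likelihood-paradigm idea described above, not the authors' pipeline: voxel-wise likelihood ratios compare a no-activation hypothesis against an assumed activation effect size, and both error rates are small when the evidence threshold is reasonably large. The effect size, noise model, and threshold are assumptions.

```python
# Voxel-wise likelihood ratios for synthetic Gaussian data (illustrative only).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_voxels, n_scans = 10_000, 40
active = rng.random(n_voxels) < 0.05           # 5% truly active voxels (assumed)
mu1 = 1.0                                      # assumed activation effect size
data = rng.normal(0.0, 1.0, size=(n_voxels, n_scans))
data[active] += mu1

xbar = data.mean(axis=1)
se = 1.0 / np.sqrt(n_scans)

# Likelihood ratio of H1 (mean mu1) versus H0 (mean 0), based on the voxel mean
lr = stats.norm.pdf(xbar, loc=mu1, scale=se) / stats.norm.pdf(xbar, loc=0.0, scale=se)

k = 8.0                                        # evidence threshold (assumed)
declared = lr > k
type1 = np.mean(declared[~active])             # empirical false positive rate
type2 = np.mean(~declared[active])             # empirical false negative rate
print(f"Type I rate: {type1:.4f}, Type II rate: {type2:.4f}")
```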

  11. Sensitivity analysis of geostatistical approach to recover pollution source release history in groundwater

    Science.gov (United States)

    Long, Y. Q.; Cui, T. T.; Li, W.; Yang, Z. P.; Gai, Y. W.

    2017-08-01

    The geostatistical approach has been studied for many years as a way to identify the pollution source release history in groundwater. In this paper we focus on the influence of observation errors and hydraulic parameters on the pollution source identification (PSI) result. A numerical experiment and sensitivity analysis are carried out to find the influence of the observation point configuration, observation error, and hydraulic parameters on the PSI result in a 1D homogeneous aquifer. It was found that if the concentration observations accurately describe the characteristics of the real concentration plume at the observed time points, a good identification of the pollution release process can be obtained. The calculated pollution discharge process showed good similarity with the real discharge process when the order of the observation error fell between 10^-6 and 10^-3.5, the dispersion coefficient varied within -10% to +5%, and the actual mean velocity varied within ±2%. The actual mean velocity is the most sensitive parameter of the geostatistical approach in this case.

  12. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    Science.gov (United States)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
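    To make the notion of round-off error concrete, the sketch below measures the error of a double-precision polynomial evaluation by re-evaluating the same expression in exact rational arithmetic and comparing. This is a dynamic measurement on sample inputs, not the formally verified static bounds that PRECiSA produces; the polynomial and inputs are arbitrary.

```python
# Measuring round-off error of a float expression against exact rational arithmetic.
from fractions import Fraction

def poly_float(x):
    """Horner evaluation of 2x^3 - 3x^2 + 0.5x - 1 in double precision."""
    return ((2.0 * x - 3.0) * x + 0.5) * x - 1.0

def poly_exact(x):
    """Same polynomial evaluated exactly on the rational value of the float input."""
    xf = Fraction(x)                      # exact value of the float argument
    return ((2 * xf - 3) * xf + Fraction(1, 2)) * xf - 1

for x in (0.1, 1e8, 1.0 + 2**-26):
    approx = poly_float(x)
    exact = poly_exact(x)
    roundoff = abs(Fraction(approx) - exact)
    print(f"x = {x!r:24}  round-off error ~ {float(roundoff):.3e}")
```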

  13. ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM

    Directory of Open Access Journals (Sweden)

    Vika Agustina

    2015-05-01

    Full Text Available This study was conducted to identify the kinds of errors in the surface strategy taxonomy and to determine the dominant type of errors made by fifth-semester students of the English Department of one State University in Malang, Indonesia, in producing their travel writing. This study is a document analysis, since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students, based on surface strategy taxonomy theory, consist of four types: (1) omission, (2) addition, (3) misformation and (4) misordering. The most frequent errors, occurring in misformation, are in the use of tense forms. The second most frequent errors are omissions of noun/verb inflections. Next, many clauses contain unnecessary added phrases.

  14. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  15. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Quality within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This presentation will provide a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  16. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps/incidents are attributed to human error. As a part of Safety within space exploration ground processing operations, the identification and/or classification of underlying contributors and causes of human error must be identified, in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS), as an analysis tool to identify contributing factors, their impact on human error events, and predict the Human Error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similar complex operations.

  17. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    Science.gov (United States)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  18. Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2

    Science.gov (United States)

    2016-06-01

    Architecture Fault Modeling and Analysis with the Error Model Annex, Version 2. Peter Feiler, John Hudak, Julien Delange, David P. Gluch. June 2016. [Front matter and table-of-contents fragment only; recoverable topics include virtual system integration and architecture fault modeling, language concepts in architecture fault models, property associations on error model elements, determining a property value, and user-defined error model properties.]

  19. Error-tolerant Finite State Recognition with Applications to Morphological Analysis and Spelling Correction

    OpenAIRE

    Oflazer, Kemal

    1995-01-01

    Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications: In the context of morphological analysis, error-tolerant recognition...
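    The idea of accepting strings that deviate mildly from a lexicon can be sketched with plain edit-distance matching; this is a simple stand-in, not Oflazer's finite-state algorithm.

```python
# Error-tolerant lexicon lookup: accept entries within Levenshtein distance t.
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert, delete, substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        cur = [i]
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def error_tolerant_lookup(word, lexicon, t=1):
    """Return all lexicon entries within edit distance t of the input string."""
    return [w for w in lexicon if levenshtein(word, w) <= t]

lexicon = {"recognize", "recognizer", "recognition", "cognition"}
print(error_tolerant_lookup("recognzie", lexicon, t=2))   # tolerate up to 2 errors
```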

  20. Phonological analysis of substitution errors of patients with apraxia of speech

    OpenAIRE

    Cera, Maysa Luchesi; Ortiz, Karin Zazo

    2010-01-01

    The literature on apraxia of speech describes the types and characteristics of phonological errors in this disorder. In general, the phonemes affected by errors are described, but the distinctive features involved have not yet been investigated. Objective: To analyze the features involved in substitution errors produced by Brazilian-Portuguese speakers with apraxia of speech. Methods: 20 adults with apraxia of speech were assessed. Phonological analysis of the distinctive features involv...

  1. Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors

    Science.gov (United States)

    2017-05-01

    Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors. Timothy M. Yarnall, David J. Geisler, Curt M. Schieler (Massachusetts Avenue, Cambridge, MA 02139, USA). [Title-page fragment; recoverable abstract text:] Free-space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than … these errors. Photonic lanterns provide a means for transitioning from the free-space regime to the single-mode fiber (SMF) regime.

  2. ANSYS workbench tutorial release 14 structural & thermal analysis using the ANSYS workbench release 14 environment

    CERN Document Server

    Lawrence, Kent L

    2012-01-01

    The exercises in ANSYS Workbench Tutorial Release 14 introduce you to effective engineering problem solving through the use of this powerful modeling, simulation and optimization software suite. Topics that are covered include solid modeling, stress analysis, conduction/convection heat transfer, thermal stress, vibration, elastic buckling and geometric/material nonlinearities. It is designed for practicing and student engineers alike and is suitable for use with an organized course of instruction or for self-study. The compact presentation includes just over 100 end-of-chapter problems covering all aspects of the tutorials.

  3. Analysis of Student Errors on Division of Fractions

    Science.gov (United States)

    Maelasari, E.; Jupri, A.

    2017-02-01

    This study aims to describe the types of student errors that typically occur when completing division operations on fractions, and to describe the causes of the students' mistakes. This research used a descriptive qualitative method and involved 22 fifth-grade students at one particular elementary school in Kuningan, Indonesia. The results of this study showed that the students' errors were caused by applying the same procedures to multiplication and division operations, by confusion when converting mixed fractions to common fractions, and by carelessness in calculation. From the students' written work on the fraction problems, we found that the teaching method influenced the students' responses, and some responses were beyond the researchers' predictions. We conclude that the teaching method is not the only important thing that must be prepared; the teacher should also prepare predictions of students' answers to the problems that will be given in the learning process. This could be a reflection for teachers to improve and to achieve the expected learning goals.

  4. Error Analysis of non-TLD HDR Brachytherapy Dosimetric Techniques

    Science.gov (United States)

    Amoush, Ahmad

    The American Association of Physicists in Medicine Task Group Report43 (AAPM-TG43) and its updated version TG-43U1 rely on the LiF TLD detector to determine the experimental absolute dose rate for brachytherapy. The recommended uncertainty estimates associated with TLD experimental dosimetry include 5% for statistical errors (Type A) and 7% for systematic errors (Type B). TG-43U1 protocol does not include recommendation for other experimental dosimetric techniques to calculate the absolute dose for brachytherapy. This research used two independent experimental methods and Monte Carlo simulations to investigate and analyze uncertainties and errors associated with absolute dosimetry of HDR brachytherapy for a Tandem applicator. An A16 MicroChamber* and one dose MOSFET detectors† were selected to meet the TG-43U1 recommendations for experimental dosimetry. Statistical and systematic uncertainty analyses associated with each experimental technique were analyzed quantitatively using MCNPX 2.6‡ to evaluate source positional error, Tandem positional error, the source spectrum, phantom size effect, reproducibility, temperature and pressure effects, volume averaging, stem and wall effects, and Tandem effect. Absolute dose calculations for clinical use are based on Treatment Planning System (TPS) with no corrections for the above uncertainties. Absolute dose and uncertainties along the transverse plane were predicted for the A16 microchamber. The generated overall uncertainties are 22%, 17%, 15%, 15%, 16%, 17%, and 19% at 1cm, 2cm, 3cm, 4cm, and 5cm, respectively. Predicting the dose beyond 5cm is complicated due to low signal-to-noise ratio, cable effect, and stem effect for the A16 microchamber. Since dose beyond 5cm adds no clinical information, it has been ignored in this study. The absolute dose was predicted for the MOSFET detector from 1cm to 7cm along the transverse plane. The generated overall uncertainties are 23%, 11%, 8%, 7%, 7%, 9%, and 8% at 1cm, 2cm, 3cm

  5. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates for future satellite communications and space observations since they are lightweight, low-cost and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved: errors in membrane thickness, errors in the elastic modulus of the membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation of, and the interaction between, error sources. Analyses are carried out parametrically with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (Root Mean Square) of the shape error is a random quantity with an exponential probability distribution and features great dispersion; with increasing F/D and D, both the mean value and the standard deviation of the shape errors increase; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect with a much higher weight than the others; pressure variation ranks second; errors in the thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors, and allowable values of the error sources are proposed from the perspective of reliability.
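    The sampling-plus-ranking workflow described in this record can be sketched with a small numpy-only example: Latin hypercube samples of the four error sources are pushed through an assumed surrogate shape-error model and ranked by correlation with the RMS error. The surrogate model and its coefficients are invented for illustration and are not the paper's manufacture simulation.

```python
# Latin hypercube sampling + correlation-based sensitivity ranking (illustrative).
import numpy as np

rng = np.random.default_rng(5)

def latin_hypercube(n_samples, n_dims):
    """Stratified samples in [0,1): one point per stratum per dimension, columns shuffled."""
    u = (rng.random((n_samples, n_dims)) + np.arange(n_samples)[:, None]) / n_samples
    for d in range(n_dims):
        u[:, d] = rng.permutation(u[:, d])
    return u

names = ["membrane thickness", "elastic modulus", "boundary deviation", "pressure variation"]
u = latin_hypercube(2000, len(names))
x = 2.0 * u - 1.0                      # scale to relative deviations in [-1, 1]

# Assumed surrogate for RMS shape error (boundary dominates, pressure second)
rms = np.abs(0.05 * x[:, 0] + 0.08 * x[:, 1] + 1.0 * x[:, 2] + 0.4 * x[:, 3]
             + 0.1 * x[:, 2] * x[:, 3]) + 0.02 * rng.random(len(x))

sens = {n: abs(np.corrcoef(x[:, i], rms)[0, 1]) for i, n in enumerate(names)}
for name, s in sorted(sens.items(), key=lambda kv: -kv[1]):
    print(f"{name:20s}  |corr| = {s:.2f}")
```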

  6. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, which is influenced by the satellite camera, the system calibration and the attitude registration. As a standard these days, the image orientation is available in the form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, such as those caused by a small base length, such an image orientation does not lead to the achievable accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, the Chinese stereo satellite ZiYuan-3 (ZY-3) in particular has a limited calibration accuracy and only a 4 Hz attitude recording, which may not be satisfactory. Zhang et al. 2015 tried to improve the attitude based on the colour sensor bands of ZY-3, but the colour images are not always available, nor is detailed satellite orientation information. There is a tendency towards systematic deformation in Pléiades tri-stereo combinations with a small base length; the small base length magnifies small systematic errors in object space. Systematic height model errors have also been detected in some other satellite stereo combinations. The largest influence is the unsatisfactory levelling of the height models, but low-frequency height deformations can also be seen. In theory a tilt of the DHM can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution are not optimal, not allowing a correct levelling of the height model. In addition, a model deformation at the GCP locations may prevent optimal DHM levelling. Supported by reference height models, better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS

  7. Contribution of Error Analysis to Foreign Language Teaching

    Directory of Open Access Journals (Sweden)

    Vacide ERDOĞAN

    2014-01-01

    Full Text Available It is inevitable that learners make mistakes in the process of foreign language learning. However, what is questioned by language teachers is why students go on making the same mistakes even when such mistakes have been repeatedly pointed out to them. Yet not all mistakes are the same; sometimes they seem to be deeply ingrained, but at other times students correct themselves with ease. Thus, researchers and teachers of foreign languages came to realize that the mistakes a person makes in the process of constructing a new system of language need to be analyzed carefully, for they possibly hold some of the keys to the understanding of second language acquisition. In this respect, the aim of this study is to point out the significance of learners' errors, for they provide evidence of how language is learned and what strategies or procedures the learners are employing in the discovery of language.

  8. Pseudorange error analysis for precise indoor positioning system

    Science.gov (United States)

    Pola, Marek; Bezoušek, Pavel

    2017-05-01

    A system for indoor localization of a transmitter, intended for fire fighters or members of rescue corps, is currently under development. In this system, the position of a transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy depends strongly on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the determination of the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
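    A toy version of the cross-correlation approach mentioned above is shown below: the time of arrival of a known reference burst is estimated from the correlation peak, and a strong delayed multipath copy plus noise can pull the estimate away from the true direct-path delay. The waveform, sample rate, and multipath parameters are assumptions, not the developed UWB-OFDM system.

```python
# Cross-correlation time-of-arrival estimation with one multipath echo (illustrative).
import numpy as np

rng = np.random.default_rng(6)
fs = 1e9                                      # sample rate [Hz] (assumed)
n = 4096
ref = rng.standard_normal(256)                # known wideband reference burst (assumed)

true_delay = 1000                             # direct-path delay [samples]
rx = np.zeros(n)
rx[true_delay:true_delay + ref.size] += ref                    # direct path
rx[true_delay + 60:true_delay + 60 + ref.size] += 0.8 * ref    # delayed multipath echo
rx += 0.3 * rng.standard_normal(n)                             # receiver noise

# TOA estimate: lag of the cross-correlation peak between received signal and reference
corr = np.correlate(rx, ref, mode="valid")
est_delay = int(np.argmax(np.abs(corr)))

print("true delay  [ns]:", true_delay / fs * 1e9)
print("estimated   [ns]:", est_delay / fs * 1e9)
print("TOA error   [ns]:", (est_delay - true_delay) / fs * 1e9)
```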

  9. Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.

    2014-01-01

    We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.

  10. Reliability Analysis Models for Differential Protection Considering Communication Delays and Errors

    Directory of Open Access Journals (Sweden)

    Yingjun Wu

    2015-03-01

    Full Text Available This paper proposes three probability models to assess the impact of communication delays and bit errors on differential protection. First, the mechanism of relay protection malfunction caused by communication delays and bit errors is introduced. In general, a channel’s consistent delay or bit error results in refuse-operations, while a channel’s inconsistent delay normally causes false trips. Based on the analysis of the probability distributions of communication delays and bit errors, probabilistic models of false trips and refuse-operations are proposed. Simulations using typical parameters are carried out to investigate the effect of the communication channel on the malfunction probability of differential protection.

  11. A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students

    Science.gov (United States)

    Tizazu, Yoseph

    2014-01-01

    This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using error analysis method. Both…

  12. Boundary error analysis and categorization in the TRECVID news story segmentation task

    NARCIS (Netherlands)

    Arlandis, J.; Over, P.; Kraaij, W.

    2005-01-01

    In this paper, an error analysis based on boundary error popularity (frequency), including semantic boundary categorization, is applied in the context of the news story segmentation task from TRECVID. Clusters of systems were defined based on the input resources they used, including video, audio and

  13. Error Analysis of Mathematical Word Problem Solving across Students with and without Learning Disabilities

    Science.gov (United States)

    Kingsdorf, Sheri; Krawec, Jennifer

    2014-01-01

    Solving word problems is a common area of struggle for students with learning disabilities (LD). In order for instruction to be effective, we first need to have a clear understanding of the specific errors exhibited by students with LD during problem solving. Error analysis has proven to be an effective tool in other areas of math but has had…

  14. Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students

    Science.gov (United States)

    Muzangwa, Jonatan; Chifamba, Peter

    2012-01-01

    This paper analyses errors and misconceptions in an undergraduate course in Calculus. The study is based on a group of 10 BEd. Mathematics students at Great Zimbabwe University. Data were gathered through two exercises on Calculus 1 and 2. The analysis of the results from the tests showed that a majority of the errors were due…

  15. ERROR ANALYSIS OF ENGLISH WRITTEN ESSAY OF HIGHER EFL LEARNERS: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Rina Husnaini Febriyanti

    2016-09-01

    Full Text Available The aim of the research is to identify grammatical errors and to investigate the most and least frequent grammatical errors occurring in the students’ English written essays. The research approach is qualitative descriptive with descriptive analysis. The samples were taken from essays written by 34 students in a writing class. The findings show that the most common error was subject-verb agreement, with a share of 28.25%. The second most frequent error concerned verb tense and form, at 24.66%. The third was spelling errors, at 17.94%. The fourth was errors in using auxiliaries, at 9.87%. The fifth was errors in word order, at 8.07%. The remaining errors concerned the passive voice (4.93%), articles (3.59%), prepositions (1.79%), and pronouns and run-on sentences, each at 0.45%. This may indicate that most students still made errors even in the use of basic grammar rules in their writing.

  16. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of a human action that should have been performed but does not occur. Errors of commission are related to human actions that should not be performed but in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that are introduced into an operational task without being part of the normal sequence of that task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the performance of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of the THERP tables for error quantification. (author)

  17. Error Analysis on the Use of “Be” in the Students’ Composition

    Directory of Open Access Journals (Sweden)

    Rochmat Budi Santosa

    2016-07-01

    Full Text Available This study aims to identify, analyze and describe errors in the use of 'be' in English sentences written by third-semester students of the English Department of STAIN Surakarta, and the aspects surrounding them. In this study, the researcher describes the erroneous use of 'be' both as a linking verb and as an auxiliary verb. This is a qualitative descriptive study. The data source is a set of documents, namely the writing assignments undertaken by the students taking the Writing course. The writing tasks are in narrative, descriptive, expository, and argumentative forms. To analyze the data, the researcher uses intralingual and extralingual methods. These methods are used to connect the linguistic elements in sentences, especially the elements acting either as a linking verb or as an auxiliary verb in the English sentences in the texts. Based on the analysis of errors regarding the use of 'be', it can be concluded that there are five types of errors made by the students: errors of absence (omission) of 'be', errors of addition of 'be', errors in the application of 'be', errors in the placement of 'be', and complex errors in the use of 'be'. These errors occur due to interlingual transfer, intralingual transfer and the learning context.

  18. Error analysis of the articulated flexible arm CMM based on geometric method

    Science.gov (United States)

    Wang, Xueying; Liu, Shugui; Zhang, Guoxiong; Wang, Bin

    2006-11-01

    In order to overcome the disadvantages of the traditional CMM (Coordinate Measuring Machine), a new type of CMM with rotational joints and flexible arms, named the articulated arm flexible CMM, is developed, in which linear measurements are replaced by angular ones. Firstly, a quasi-spherical coordinate system is put forward and the ideal mathematical model of the articulated arm flexible CMM is established. On the basis of a full analysis of the factors affecting the measurement accuracy, the ideal mathematical model is modified into an error model according to the structural parameters and geometric errors. A geometric method is proposed to verify the feasibility of the error model, and the results convincingly show its validity. Position errors caused by different types of error sources are analyzed, and a theoretical basis for introducing error compensation and improving the accuracy of the articulated arm flexible CMM is established.

  19. The comparative study of evaluating human error assessment and reduction technique and cognitive reliability and error analysis method techniques in the control room of the cement industry

    Directory of Open Access Journals (Sweden)

    Amin Babaei Pouya

    2015-01-01

    Full Text Available Aims: The present study aimed to evaluate human error assessment methods and compare their results, in order to identify the more precise method of human error assessment and to recognize the factors affecting the occurrence of these errors. Materials and Methods: This case study was carried out at three control-room workstations of a cement industry in 2014. After determining the responsibilities and critical jobs by hierarchical task analysis, the cognitive reliability and error analysis method (CREAM) and the human error assessment and reduction technique (HEART) were used to analyze the human errors. Results: The results showed that in the CREAM method the highest probability of error occurrence is related to monitoring and control (operator) with a probability of 0.207, and in the HEART method it is related to controlling warning signs (operator) with a probability of 0.416. The numbers of errors detected by the CREAM and HEART methods were 85 and 80, respectively. The time and cost of applying the CREAM method were 235 h and $1,175, while those of the HEART technique were 215 h and $1,075. Conclusion: We concluded that the highest probabilities of calculated errors relate to "monitoring and control (operator)", "controlling warning signs (operator)", and "cooperation in solving the problem (supervisor)" for both techniques. Considering the time and cost factors, HEART has the advantage, while CREAM is better due to its extensive evaluation and the larger number of detected errors.
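    The HEART quantification step used by studies like this one can be sketched in a few lines: the nominal human error probability for a generic task type is scaled by each error-producing condition (EPC), weighted by its assessed proportion of affect (APOA). The task, nominal probability, and EPC values below are placeholders, not the paper's data.

```python
# Minimal HEART human-error-probability (HEP) calculation with placeholder inputs.
def heart_hep(nominal_hep, epcs):
    """epcs: iterable of (max_effect, assessed_proportion_of_affect) pairs."""
    hep = nominal_hep
    for max_effect, apoa in epcs:
        hep *= (max_effect - 1.0) * apoa + 1.0   # standard HEART scaling per EPC
    return min(hep, 1.0)                          # a probability cannot exceed 1

# Hypothetical task "controlling warning signs", nominal HEP 0.02,
# with three assumed error-producing conditions.
epcs = [
    (11.0, 0.4),    # e.g. shortage of time
    (4.0, 0.5),     # e.g. poor feedback
    (1.8, 0.3),     # e.g. low morale
]
print("HEART HEP:", round(heart_hep(0.02, epcs), 3))
```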

  20. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant.

    Science.gov (United States)

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2016-03-01

    A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedure in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied for estimation of human error probability. The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in PTW system. Some suggestions to reduce the likelihood of errors, especially in the field of modifying the performance shaping factors and dependencies among tasks are provided.

  1. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of Human Errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human-error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.

  2. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

    Full Text Available Considering that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the human factor analysis and classification system (HFACS) model. With data on accidents that happened worldwide between 2008 and 2011, the correlations between human-error factors can be analyzed quantitatively using grey relational analysis. Research results show that the order of the main factors affecting pilot human-error factors is: preconditions for unsafe acts, unsafe supervision, organization and unsafe acts. The factor related most closely with the second-level indexes and pilot human-error factors is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and the relevancy between second-level indexes, can also be analyzed quantitatively.
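    Grey relational analysis itself is a small, well-defined computation: each factor sequence is compared against a reference sequence and ranked by its grey relational grade. The sketch below uses made-up factor scores, not the paper's 2008-2011 accident data.

```python
# Generic grey relational analysis (GRA) with hypothetical factor sequences.
import numpy as np

rho = 0.5                                      # distinguishing coefficient (common default)
reference = np.array([0.9, 0.8, 0.85, 0.95])   # e.g. pilot human-error index per year
factors = {                                    # hypothetical factor indices per year
    "unsafe acts":                    np.array([0.70, 0.60, 0.65, 0.80]),
    "preconditions for unsafe acts":  np.array([0.88, 0.79, 0.83, 0.92]),
    "unsafe supervision":             np.array([0.80, 0.72, 0.78, 0.90]),
    "organization":                   np.array([0.60, 0.50, 0.55, 0.70]),
}

def normalise(x):
    """Rescale a sequence to [0, 1] so magnitudes are comparable."""
    return (x - x.min()) / (x.max() - x.min())

ref_n = normalise(reference)
delta = {k: np.abs(ref_n - normalise(v)) for k, v in factors.items()}
d_min = min(d.min() for d in delta.values())
d_max = max(d.max() for d in delta.values())

# Grey relational coefficient per point, then grade = mean over the sequence
grades = {k: np.mean((d_min + rho * d_max) / (d + rho * d_max)) for k, d in delta.items()}
for name, g in sorted(grades.items(), key=lambda kv: -kv[1]):
    print(f"{name:32s} grade = {g:.3f}")
```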

  3. Radionuclides release possibility analysis of MSR at various accident conditions

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Choong Wie; Kim, Hee Reyoung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2015-10-15

    Some accidents, such as the Fukushima Daiichi nuclear disaster, go beyond our expectations and release large amounts of radionuclides to the environment, so more effort and research are devoted to preventing them. The MSR (Molten Salt Reactor) is one of the GEN-IV reactor types; its coolant and fuel are mixtures of molten salts. The MSR has different features from the solid-fuel reactor, but its most important and interesting feature is its many safety systems. For example, the MSR has a large negative void coefficient: even if the power increases, the reactor soon slows down. The radionuclide release possibility of the MSR was analyzed under various accident conditions, including those of Chernobyl and Fukushima. Under the Chernobyl disaster conditions, the MSR was understood to prevent a severe accident because of its negative reactivity coefficient and the absence of explosive material such as water. Under the Fukushima Daiichi disaster conditions of earthquake and tsunami, it was expected to contain the fuel salts in the reactor building and not to release radionuclides into the environment even if the primary system were ruptured or broken and fuel salts leaked. The MSR, which would not lead to a severe accident and therefore prevents fuel release to the environment in many expected scenarios, was thought to have an advantage with respect to accidents. Quantitative analysis and further research are needed to evaluate the possibility of radionuclide release to the environment under the various accident conditions, beyond this simple comparison of the safety features of the MSR and the solid-fuel reactor.

  4. Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative

    National Research Council Canada - National Science Library

    Harris, Daniel M; Westfall, John M; Fernald, Douglas H; Duclos, Christine W; West, David R; Niebauer, Linda; Marr, Linda; Quintela, Javan; Main, Deborah S

    2005-01-01

    .... This paper presents a mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for data collection and analysis...

  5. The Friedman Two-Way Analysis of Variance as a Test for Ranking Error

    Science.gov (United States)

    Wagner, Edwin E.

    1976-01-01

    The problem of bias in rankings due to the initial position of entities when presented to judges is discussed. A modification of the Friedman Two-Way Analysis of Variance to test "ranking error" is presented. (JKS)

  6. Error analysis for RADAR neighbor matching localization in linear logarithmic strength varying Wi-Fi environment

    National Research Council Canada - National Science Library

    Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

    2014-01-01

    ...) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs...

  7. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    National Research Council Canada - National Science Library

    Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

    2014-01-01

    ...) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs...

  8. Confirmation of standard error analysis techniques applied to EXAFS using simulations

    Energy Technology Data Exchange (ETDEWEB)

    Booth, Corwin H; Hu, Yung-Jin

    2009-12-14

    Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter errors in typical fitting routines. Despite major advances in EXAFS analysis and in understanding all potential uncertainties, these methods are not routinely applied by all EXAFS users. Consequently, reported parameter errors are not reliable in many EXAFS studies in the literature. This situation has made many EXAFS practitioners leery of conventional error analysis applied to EXAFS data. However, conventional error analysis, if properly applied, can teach us more about our data, and even about the power and limitations of the EXAFS technique. Here, we describe the proper application of conventional error analysis to r-space fitting to EXAFS data. Using simulations, we demonstrate the veracity of this analysis by, for instance, showing that the number of independent data points from Stern's rule is balanced by the degrees of freedom obtained from a χ² statistical analysis. By applying such analysis to real data, we determine the quantitative effect of systematic errors. In short, this study is intended to remind the EXAFS community about the role of fundamental noise distributions in interpreting our final results.
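    A back-of-the-envelope sketch of the bookkeeping discussed in this record: the number of independent data points from Stern's rule versus the number of floated parameters, and a reduced chi-square formed from residuals and an assumed measurement uncertainty. The fit ranges, parameter count, and residuals are all invented inputs.

```python
# Stern's-rule bookkeeping and a reduced chi-square for a hypothetical EXAFS fit.
import numpy as np

k_min, k_max = 3.0, 12.0        # fit range in k [1/Angstrom]   (assumed)
r_min, r_max = 1.0, 3.0         # fit range in r [Angstrom]     (assumed)
n_params = 7                    # floated parameters in the fit (assumed)

# Stern's rule for the number of independent points in an EXAFS fit
n_idp = 2.0 * (k_max - k_min) * (r_max - r_min) / np.pi + 2.0
dof = n_idp - n_params

rng = np.random.default_rng(7)
residuals = rng.normal(0.0, 0.01, 40)     # fit residuals (synthetic)
sigma = 0.01                              # estimated measurement uncertainty (assumed)

# Rescale the sum of squared residuals by N_idp / N_points so chi-square
# reflects the independent information content rather than the raw grid size.
chi2 = np.sum((residuals / sigma) ** 2) * (n_idp / residuals.size)
print(f"N_idp = {n_idp:.1f}, dof = {dof:.1f}, reduced chi^2 = {chi2 / dof:.2f}")
```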

  9. Error Analysis for Discontinuous Galerkin Method for Parabolic Problems

    Science.gov (United States)

    Kaneko, Hideaki

    2004-01-01

    In the proposal, the following three objectives are stated: (1) A p-version of the discontinuous Galerkin method for a one-dimensional parabolic problem will be established. It should be recalled that the h-version in space was used for the discontinuous Galerkin method. An a priori error estimate as well as an a posteriori estimate of this p-finite element discontinuous Galerkin method will be given. (2) The parameter alpha that describes the behavior of ‖u_t(t)‖_2 was computed exactly. This was made feasible because of the explicitly specified initial condition. For practical heat transfer problems, the initial condition may have to be approximated. Also, if the parabolic problem is posed on a multi-dimensional region, the parameter alpha, in most cases, would be difficult to compute exactly even when the initial condition is known exactly. The second objective of this proposed research is to establish a method to estimate this parameter. This will be done by computing two discontinuous Galerkin approximate solutions at two different time steps starting from the initial time and using them to derive alpha. (3) The third objective is to consider the heat transfer problem over a two-dimensional thin plate. The technique developed by Vogelius and Babuska will be used to establish a discontinuous Galerkin method in which the p-element will be used for through-thickness approximation. This h-p finite element approach, which results in a dimensional reduction method, was used for elliptic problems, but the application appears new for the parabolic problem. The dimension reduction method will be discussed together with the time discretization method.

  10. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes feature in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationary and moment conditions, we establish a consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.

  11. From first-release to ex post fiscal data: exploring the sources of revision errors in the EU

    NARCIS (Netherlands)

    Beetsma, R.; Bluhm, B.; Giuliodori, M.; Wierts, P.

    2012-01-01

    This paper explores the determinants of deviations of ex post budget outcomes from first-release outcomes published towards the end of the year of budget implementation. The predictive content of the first-release outcomes is important, because these figures are an input for the next budget and the

  12. Error Patterns Analysis of Hearing Aid and Cochlear Implant Users as a Function of Noise.

    Science.gov (United States)

    Chun, Hyungi; Ma, Sunmi; Han, Woojae; Chun, Youngmyoung

    2015-12-01

    Not all impaired listeners may have the same speech perception ability, even though they have similar pure-tone thresholds and configurations. For this reason, the present study analyzes error patterns of hearing-impaired listeners compared to normal hearing (NH) listeners as a function of signal-to-noise ratio (SNR). Forty-four adults participated: 10 listeners with NH, 20 hearing aid (HA) users, and 14 cochlear implant (CI) users. Korean standardized monosyllables were presented as stimuli in quiet and at three different SNRs. Total error patterns were classified into types of substitution, omission, addition, fail, and no response, using stacked bar plots. The total error percentage for the three groups increased significantly as the SNRs decreased. In the error pattern analysis, the NH group showed predominantly substitution errors regardless of SNR compared to the other groups. In both the HA and CI groups, substitution errors declined while no-response errors appeared as the SNRs increased. The CI group was characterized by lower substitution and higher fail errors than the HA group. Substitutions of initial and final phonemes in the HA and CI groups mainly involved place-of-articulation errors. However, the HA group missed consonant place cues, such as formant transitions and stop consonant bursts, whereas the CI group usually had limited confusions of nasal consonants with low-frequency characteristics. Interestingly, all three groups showed /k/ addition in the final phoneme, a trend that magnified as noise increased. The HA and CI groups had their own unique error patterns even though the aided thresholds of the two groups were similar. We expect that the results of this study will help focus auditory training of hearing-impaired listeners on these frequent error patterns, thereby reducing those errors and improving speech perception ability.

  13. Analysis of Employee's Survey for Preventing Human-Errors

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun [KHNP Central Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    Human errors in nuclear power plants can cause large and small events or incidents. These events or incidents are one of the main contributors to reactor trips and might threaten the safety of nuclear plants. To prevent human errors, KHNP (Korea Hydro & Nuclear Power) introduced 'human-error prevention techniques' and has applied them to main areas such as plant operation, operation support, and maintenance and engineering. This paper proposes methods to prevent and reduce human errors in nuclear power plants by analyzing survey results covering the utilization of the human-error prevention techniques and the employees' awareness of preventing human errors. With regard to human-error prevention, the survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human errors. Employees' understanding and utilization of the techniques were generally high, and the level of employee training and its effect on actual work were in good condition. Also, employees answered that the root causes of human error lay in the working environment, including tight processes, manpower shortages, and excessive workload, rather than in personal negligence or lack of personal knowledge. Consideration of the working environment is certainly needed. At the present time, based on this survey analysis, the best methods of preventing human error are adequate personal equipment, substantial training and education, a personal mental-health check before starting work, prohibition of performing multiple tasks at once, compliance with procedures, and enhancement of job-site review. However, the most important and basic things for preventing human error are the interest of workers and an organizational atmosphere with good communication between managers and workers, and between employees and bosses.

  14. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    Science.gov (United States)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of the initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in the random error cannot significantly change the dynamic features of a chaotic system, and therefore the random error has minimal effect on the prediction. When the ratio is in the range of 10^-1 to 10^2 (i.e., the random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined from the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted when m is small because of the incomplete structure of the attractor. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  15. Statistical analysis of astrometric errors for the most productive asteroid surveys

    Science.gov (United States)

    Vereš, Peter; Farnocchia, Davide; Chesley, Steven R.; Chamberlin, Alan B.

    2017-11-01

    We performed a statistical analysis of the astrometric errors for the major asteroid surveys. We analyzed the astrometric residuals as a function of observation epoch, observed brightness and rate of motion, finding that astrometric errors are larger for faint observations and some stations improved their astrometric quality over time. Based on this statistical analysis we develop a new weighting scheme to be used when performing asteroid orbit determination. The proposed weights result in ephemeris predictions that can be conservative by a factor as large as 1.5. However, the new scheme is robust with respect to outliers and handles the larger errors for faint detections.

  16. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  17. Release criteria and pathway analysis for radiological remediation

    Energy Technology Data Exchange (ETDEWEB)

    Subbaraman, G.; Tuttle, R.J.; Oliver, B.M. (Rockwell International Corp., Canoga Park, CA (United States). Rocketdyne Div.); Devgun, J.S. (Argonne National Lab., IL (United States))

    1991-01-01

    Site-specific activity concentrations were derived for soils contaminated with mixed fission products (MFP) or uranium-processing residues, using the Department of Energy (DOE) pathway analysis computer code RESRAD at four different sites. The concentrations and other radiological parameters, such as limits on the background-subtracted gamma exposure rate, were used as the basis to arrive at release criteria for two of the sites. Valid statistical parameters, calculated for the distribution of radiological data obtained from site surveys, were then compared with the criteria to determine releasability or the need for further decontamination. For the other two sites, RESRAD has been used as a preremediation planning tool to derive residual material guidelines for uranium. 11 refs., 4 figs., 3 tabs.

  18. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
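
    The core mechanics of the exposure model can be illustrated with a small simulation: subject-specific OLS slopes are computed from noisy repeated measurements, the sampling variance of the slopes follows from the additive error model, and regression calibration shrinks the observed slopes by the reliability ratio. This hedged sketch covers only the calibration of the rate-of-change exposure (with an assumed, known error SD); the subsequent Cox regression and the SIMEX comparison are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj = 500
times = np.array([0.0, 3.0, 6.0, 9.0])     # assumed visit times (years)
sigma_e = 2.0                               # assumed additive measurement-error SD

# True subject-specific intercepts and slopes (rates of change)
a = rng.normal(120.0, 10.0, n_subj)
b = rng.normal(0.5, 0.3, n_subj)

# Per-subject OLS slopes from error-prone repeated measurements
slopes_hat = np.empty(n_subj)
for i in range(n_subj):
    w = a[i] + b[i] * times + rng.normal(0.0, sigma_e, times.size)
    slopes_hat[i] = np.polyfit(times, w, 1)[0]

# Sampling variance of an OLS slope under the additive error model
var_slope_err = sigma_e**2 / np.sum((times - times.mean())**2)

# Regression-calibration (reliability) factor and calibrated exposure
lam = (slopes_hat.var(ddof=1) - var_slope_err) / slopes_hat.var(ddof=1)
slopes_rc = slopes_hat.mean() + lam * (slopes_hat - slopes_hat.mean())

print(f"reliability factor lambda = {lam:.2f}")
print(f"SD of true slopes {b.std(ddof=1):.3f}, naive {slopes_hat.std(ddof=1):.3f}, "
      f"calibrated {slopes_rc.std(ddof=1):.3f}")
```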

  19. ANSYS tutorial release 14 structural & thermal analysis using the ANSYS mechanical APDL release 14 environment

    CERN Document Server

    Lawrence, Kent L

    2012-01-01

    The eight lessons in this book introduce the reader to effective finite element problem solving by demonstrating the use of the comprehensive ANSYS FEM Release 14 software in a series of step-by-step tutorials. The tutorials are suitable for either professional or student use. The lessons discuss linear static response for problems involving truss, plane stress, plane strain, axisymmetric, solid, beam, and plate structural elements. Example problems in heat transfer, thermal stress, mesh creation and transferring models from CAD solid modelers to ANSYS are also included. The tutorials progress from simple to complex. Each lesson can be mastered in a short period of time, and Lessons 1 through 7 should all be completed to obtain a thorough understanding of basic ANSYS structural analysis. The concise treatment includes examples of truss, beam and shell elements completely for use with ANSYS APDL 14.

  20. Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Wenjie Tian

    2014-01-01

    Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.

  1. Error analysis for pesticide detection performed on paper-based microfluidic chip devices

    Science.gov (United States)

    Yang, Ning; Shen, Kai; Guo, Jianjiang; Tao, Xinyi; Xu, Peifeng; Mao, Hanping

    2017-07-01

    The paper chip is an efficient and inexpensive device for pesticide residue detection. However, the sources of detection error are not well understood, which is the main obstacle to the development of pesticide residue detection. This paper focuses on error analysis for pesticide detection performed on paper-based microfluidic chip devices, testing every possible factor to build mathematical models of the detection error. As a result, the double-channel structure is selected as the optimal chip structure for effectively reducing detection error. The wavelength of 599.753 nm is chosen since it is the detection wavelength most sensitive to variation in pesticide concentration. Finally, mathematical models of the detection error with respect to detection temperature and preparation time are derived. This research lays a theoretical foundation for accurate pesticide residue detection based on paper-based microfluidic chip devices.

  2. Phonological analysis of substitution errors of patients with apraxia of speech

    Directory of Open Access Journals (Sweden)

    Maysa Luchesi Cera

    Full Text Available Abstract The literature on apraxia of speech describes the types and characteristics of phonological errors in this disorder. In general, phonemes affected by errors are described, but the distinctive features involved have not yet been investigated. Objective: To analyze the features involved in substitution errors produced by Brazilian-Portuguese speakers with apraxia of speech. Methods: 20 adults with apraxia of speech were assessed. Phonological analysis of the distinctive features involved in substitution type errors was carried out using the protocol for the evaluation of verbal and non-verbal apraxia. Results: The most affected features were: voiced, continuant, high, anterior, coronal, posterior. Moreover, the mean of the substitutions of marked to markedness features was statistically greater than the markedness to marked features. Conclusions: This study contributes toward a better characterization of the phonological errors found in apraxia of speech, thereby helping to diagnose communication disorders and the selection criteria of phonemes for rehabilitation in these patients.

  3. Phonological analysis of substitution errors of patients with apraxia of speech.

    Science.gov (United States)

    Cera, Maysa Luchesi; Ortiz, Karin Zazo

    2010-01-01

    The literature on apraxia of speech describes the types and characteristics of phonological errors in this disorder. In general, phonemes affected by errors are described, but the distinctive features involved have not yet been investigated. To analyze the features involved in substitution errors produced by Brazilian-Portuguese speakers with apraxia of speech. 20 adults with apraxia of speech were assessed. Phonological analysis of the distinctive features involved in substitution type errors was carried out using the protocol for the evaluation of verbal and non-verbal apraxia. The most affected features were: voiced, continuant, high, anterior, coronal, posterior. Moreover, the mean of the substitutions of marked to markedness features was statistically greater than the markedness to marked features. This study contributes toward a better characterization of the phonological errors found in apraxia of speech, thereby helping to diagnose communication disorders and the selection criteria of phonemes for rehabilitation in these patients.

  4. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    Science.gov (United States)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast-turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run, in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small-body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
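
    The idea of propagating the statistics of the design rather than individual trajectories reduces, in its simplest linear form, to propagating a covariance matrix through the system dynamics. The toy sketch below (a 4-state constant-velocity model, not the actual 120-plus-state G-CAT formulation) shows the principle.

```python
import numpy as np

def propagate_covariance(P0, F, Q, steps):
    """Linear covariance propagation: P_{k+1} = F P_k F^T + Q (highly simplified sketch)."""
    P = P0.copy()
    history = [P0]
    for _ in range(steps):
        P = F @ P @ F.T + Q
        history.append(P)
    return history

# Toy 4-state example: planar position/velocity with process noise on velocity
dt = 1.0
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = np.diag([0.0, 0.0, 1e-4, 1e-4])
P0 = np.diag([1.0, 1.0, 0.01, 0.01])

P_end = propagate_covariance(P0, F, Q, steps=100)[-1]
# 1-sigma semi-axes of the position error ellipse from the 2x2 position block
eigvals, _ = np.linalg.eigh(P_end[:2, :2])
print("1-sigma position error ellipse semi-axes:", np.sqrt(eigvals))
```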

  5. Dealing with Uncertainties A Guide to Error Analysis

    CERN Document Server

    Drosg, Manfred

    2007-01-01

    Dealing with Uncertainties proposes and explains a new approach for the analysis of uncertainties. Firstly, it is shown that uncertainties are the consequence of modern science rather than of measurements. Secondly, it stresses the importance of the deductive approach to uncertainties. This perspective has the potential of dealing with the uncertainty of a single data point and of data of a set having differing weights. Neither case can be dealt with by the inductive approach, which is usually taken. This innovative monograph also fully covers both uncorrelated and correlated uncertainties. The weakness of using statistical weights in regression analysis is discussed. Abundant examples are given for correlation in and between data sets and for the feedback of uncertainties on experiment design.

  6. The study of error for analysis in dynamic image from the error of count rates in Nal (Tl) scintillation camera

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Joo Young; Kang, Chun Goo; Kim, Jung Yul; Oh, Ki Baek; Kim, Jae Sam [Dept. of Nuclear Medicine, Severance Hospital, Yonsei University, Seoul (Korea, Republic of); Park, Hoon Hee [Dept. of Radiological Technology, Shingu college, Sungnam (Korea, Republic of)

    2013-12-15

    This study aimed to evaluate the effect of T{sub 1/2} on count rates in the analysis of dynamic scans using a NaI (Tl) scintillation camera, and to suggest a new quality control method based on this effect. We produced a point source of {sup 99m}TcO{sub 4}{sup -} with 18.5 to 185 MBq in 2 mL syringes, and acquired 30-frame dynamic images at 10 to 60 seconds per frame using an Infinia gamma camera (GE, USA). In the second experiment, 90 frames of dynamic images were acquired from a 74 MBq point source with 5 gamma cameras (two Infinia, two Forte, one Argus). In the first experiment, there were no significant differences in the average count rates of the sources with 18.5 to 92.5 MBq in the analysis of 10 to 60 seconds/frame at 10-second intervals (p>0.05), but the average count rates were significantly lower for sources above 111 MBq at 60 seconds/frame (p<0.01). According to the linear regression analysis of the count rates of the 5 gamma cameras acquired over 90 minutes, the counting efficiency of the fourth gamma camera was the lowest at 0.0064%, and its gradient and coefficient of variation were the highest at 0.0042 and 0.229, respectively. We did not find abnormal fluctuation in the χ{sup 2} test of count rates (p>0.02), and Levene's F-test showed homogeneity of variance among the gamma cameras (p>0.05). In the correlation analysis, the only significant correlation was a negative one between counting efficiency and gradient (r=-0.90, p<0.05). Lastly, according to the calculation of the T{sub 1/2} error for gradient changes from -0.25% to +0.25%, the error increases when T{sub 1/2} is relatively long or the gradient is high. When estimating this for the fourth camera, which had the highest gradient, no T{sub 1/2} error was observed within 60 minutes. In conclusion, it is necessary for scintillation gamma cameras in the medical field to be managed rigorously for the quality of radiation

  7. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, the speed-loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used in incremental-encoder-based speed measurement. However, the inherent encoder optical grating error and A/D conversion error make it hard to achieve the theoretical speed measurement accuracy. In this paper, hardware-caused speed measurement errors are analyzed and modeled in detail; a Single-Phase Self-adaptive M/T method is proposed to ideally suppress speed measurement error. In the end, simulation...

  8. Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.

    Science.gov (United States)

    Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M

    2012-09-13

    Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following committed errors in reaction-time tasks as low-frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy controls completed a forced-choice task. Single-trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency of the time-frequency ERN and Pe between the PTSD and control groups was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single-trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
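
    The single-trial measure described above can be approximated in a few lines of code: convolve the response-locked EEG epoch with complex Morlet wavelets spanning 4-8 Hz, average the coefficients to obtain a theta waveform, and read off amplitude extrema in post-response windows. The sketch below is illustrative only; the sampling rate, windows, and the synthetic "error trial" are assumptions, not the study's parameters.

```python
import numpy as np

def theta_waveform(eeg, fs, freqs=np.arange(4.0, 8.5, 0.5), n_cycles=5):
    """Theta-band (4-8 Hz) waveform of a single trial: convolve with complex Morlet
    wavelets and average the real part of the coefficients across theta frequencies."""
    out = np.zeros((len(freqs), eeg.size))
    for k, f in enumerate(freqs):
        sigma_t = n_cycles / (2.0 * np.pi * f)
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
        wav = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2.0 * sigma_t**2))
        wav /= np.sqrt(np.sum(np.abs(wav) ** 2))          # energy normalisation
        out[k] = np.convolve(eeg, wav, mode="same").real
    return out.mean(axis=0)

# Hypothetical response-locked error trial: 2 s epoch at 256 Hz, response at t = 0.5 s
fs = 256
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(3)
eeg = rng.normal(0.0, 5.0, t.size) - 8.0 * np.exp(-((t - 0.55) ** 2) / 0.002)  # fake ERN dip

theta = theta_waveform(eeg, fs)
resp = int(0.5 * fs)
tf_ern = theta[resp:resp + int(0.15 * fs)].min()                  # most negative, 0-150 ms
tf_pe = theta[resp + int(0.15 * fs):resp + int(0.40 * fs)].max()  # most positive, 150-400 ms
print(f"time-frequency ERN: {tf_ern:.2f} uV, Pe: {tf_pe:.2f} uV")
```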

  9. Thermal error analysis and compensation for digital image/volume correlation

    Science.gov (United States)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.

  10. Verb retrieval in brain-damaged subjects: 2. Analysis of errors.

    Science.gov (United States)

    Kemmerer, D; Tranel, D

    2000-07-01

    Verb retrieval for action naming was assessed in 53 brain-damaged subjects by administering a standardized test with 100 items. In a companion paper (Kemmerer & Tranel, 2000), it was shown that impaired and unimpaired subjects did not differ as groups in their sensitivity to a variety of stimulus, lexical, and conceptual factors relevant to the test. For this reason, the main goal of the present study was to determine whether the two groups of subjects manifested theoretically interesting differences in the kinds of errors that they made. All of the subjects' errors were classified according to an error coding system that contains 27 distinct types of errors belonging to five broad categories-verbs, phrases, nouns, adpositional words, and "other" responses. Errors involving the production of verbs that are semantically related to the target were especially prevalent for the unimpaired group, which is similar to the performance of normal control subjects. By contrast, the impaired group had a significantly smaller proportion of errors in the verb category and a significantly larger proportion of errors in each of the nonverb categories. This relationship between error rate and error type is consistent with previous research on both object and action naming errors, and it suggests that subjects with only mild damage to putative lexical systems retain an appreciation of most of the semantic, phonological, and grammatical category features of words, whereas subjects with more severe damage retain a much smaller set of features. At the level of individual subjects, a wide range of "predominant error types" were found, especially among the impaired subjects, which may reflect either different action naming strategies or perhaps different patterns of preservation and impairment of various lexical components. Overall, this study provides a novel addition to the existing literature on the analysis of naming errors made by brain-damaged subjects. Not only does the study

  11. Gait Analysis of Transfemoral Amputees: Errors in Inverse Dynamics Are Substantial and Depend on Prosthetic Design.

    Science.gov (United States)

    Dumas, Raphael; Branemark, Rickard; Frossard, Laurent

    2017-06-01

    Quantitative assessments of prostheses performances rely more and more frequently on gait analysis focusing on prosthetic knee joint forces and moments computed by inverse dynamics. However, this method is prone to errors, as demonstrated in comparison with direct measurements of these forces and moments. The magnitude of errors reported in the literature seems to vary depending on prosthetic components. Therefore, the purposes of this study were (A) to quantify and compare the magnitude of errors in knee joint forces and moments obtained with inverse dynamics and direct measurements on ten participants with transfemoral amputation during walking and (B) to investigate if these errors can be characterised for different prosthetic knees. Knee joint forces and moments computed by inverse dynamics presented substantial errors, especially during the swing phase of gait. Indeed, the median errors in percentage of the moment magnitude were 4% and 26% in extension/flexion, 6% and 19% in adduction/abduction as well as 14% and 27% in internal/external rotation during stance and swing phase, respectively. Moreover, errors varied depending on the prosthetic limb fitted with mechanical or microprocessor-controlled knees. This study confirmed that inverse dynamics should be used cautiously while performing gait analysis of amputees. Alternatively, direct measurements of joint forces and moments could be relevant for mechanical characterising of components and alignments of prosthetic limbs.

  12. Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis

    Science.gov (United States)

    Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.

    2014-01-01

    The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.

  13. Dealing with Uncertainties A Guide to Error Analysis

    CERN Document Server

    Drosg, Manfred

    2009-01-01

    Dealing with Uncertainties is an innovative monograph that lays special emphasis on the deductive approach to uncertainties and on the shape of uncertainty distributions. This perspective has the potential for dealing with the uncertainty of a single data point and with sets of data that have different weights. It is shown that the inductive approach that is commonly used to estimate uncertainties is in fact not suitable for these two cases. The approach that is used to understand the nature of uncertainties is novel in that it is completely decoupled from measurements. Uncertainties which are the consequence of modern science provide a measure of confidence both in scientific data and in information in everyday life. Uncorrelated uncertainties and correlated uncertainties are fully covered and the weakness of using statistical weights in regression analysis is discussed. The text is abundantly illustrated with examples and includes more than 150 problems to help the reader master the subject.

  14. Analysis of transmission error effects on the transfer of real-time simulation data

    Science.gov (United States)

    Credeur, L.

    1977-01-01

    An analysis was made to determine the effect of transmission errors on the quality of data transferred from the Terminal Area Air Traffic Model to a remote site. Data formatting schemes feasible within the operational constraints of the data link were proposed, and their susceptibility to both random bit errors and noise bursts was investigated. It was shown that satisfactory reliability is achieved by a scheme formatting the simulation output into three data blocks, with the priority data triply redundant in the first block and with retransmission priority on that first block when it is received in error.

  15. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design the class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroids classifier. As demonstrated by the experimental results, our method is fast and can achieve a high classification accuracy rate, with an added benefit of robustness in tackling noise.

  16. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    Science.gov (United States)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of error can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  17. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on the part of operators, designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis and for termination of the search for 'causes'. In addition, the concept of human error is analysed and its intimate relation with human adaptation and learning is discussed. It is concluded that identification of errors as a separate class of behaviour is becoming increasingly difficult in modern work environments...

  18. Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data

    Science.gov (United States)

    Wilson, R. G.

    1975-01-01

    The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with ones determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.

  19. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    Science.gov (United States)

    Yang, F.; Dong, Z. H.

    2017-10-01

    Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, comprising nearly a hundred kinds of software, and provides various services such as email, short messages, drawing and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during the setup, configuration, and operation stages, which seriously affects usage. Because these errors are varied and may occur in different operation phases, stages, TCP/IP communication protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technological support for the running and communication of IIDS.

  20. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    Science.gov (United States)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  1. An Analysis of Nurses’ Cognitive Work: A New Perspective for Understanding Medical Errors

    Science.gov (United States)

    2005-05-01

    Potter, Patricia; Wolf, Laurie; Boxerman, Stuart (Washington University School of Medicine, St. Louis)

  2. Meta-analysis of small RNA-sequencing errors reveals ubiquitous post-transcriptional RNA modifications

    OpenAIRE

    Ebhardt, H. Alexander; Tsang, Herbert H.; Dai, Denny C.; Liu, Yifeng; Bostan, Babak; Fahlman, Richard P.

    2009-01-01

    Recent advances in DNA-sequencing technology have made it possible to obtain large datasets of small RNA sequences. Here we demonstrate that not all non-perfectly matched small RNA sequences are simple technological sequencing errors, but many hold valuable biological information. Analysis of three small RNA datasets originating from Oryza sativa and Arabidopsis thaliana small RNA-sequencing projects demonstrates that many single nucleotide substitution errors overlap when aligning homologous...

  3. Maternal Recall Error in Retrospectively Reported Time-to-Pregnancy: an Assessment and Bias Analysis.

    Science.gov (United States)

    Radin, Rose G; Rothman, Kenneth J; Hatch, Elizabeth E; Mikkelsen, Ellen M; Sorensen, Henrik T; Riis, Anders H; Fox, Matthew P; Wise, Lauren A

    2015-11-01

    Epidemiologic studies of fecundability often use retrospectively measured time-to-pregnancy (TTP), thereby introducing potential for recall error. Little is known about how recall error affects the bias and precision of the fecundability odds ratio (FOR) in such studies. Using data from the Danish Snart-Gravid Study (2007-12), we quantified error for TTP recalled in the first trimester of pregnancy relative to prospectively measured TTP among 421 women who enrolled at the start of their pregnancy attempt and became pregnant within 12 months. We defined recall error as retrospectively measured TTP minus prospectively measured TTP. Using linear regression, we assessed mean differences in recall error by maternal characteristics. We evaluated the resulting bias in the FOR and 95% confidence interval (CI) using simulation analyses that compared corrected and uncorrected retrospectively measured TTP values. Recall error (mean = -0.11 months, 95% CI -0.25, 0.04) was not appreciably associated with maternal age, gravidity, or recent oral contraceptive use. Women with TTP > 2 months were more likely to underestimate their TTP than women with TTP ≤ 2 months (unadjusted mean difference in error: -0.40 months, 95% CI -0.71, -0.09). FORs of recent oral contraceptive use calculated from prospectively measured, retrospectively measured, and corrected TTPs were 0.82 (95% CI 0.67, 0.99), 0.74 (95% CI 0.61, 0.90), and 0.77 (95% CI 0.62, 0.96), respectively. Recall error was small on average among pregnancy planners who became pregnant within 12 months. Recall error biased the FOR of recent oral contraceptive use away from the null by 10%. Quantitative bias analysis of the FOR can help researchers quantify the bias from recall error. © 2015 John Wiley & Sons Ltd.

  4. Detecting medication errors in the New Zealand pharmacovigilance database: a retrospective analysis.

    Science.gov (United States)

    Kunac, Desireé L; Tatley, Michael V

    2011-01-01

    Despite the traditional focus being adverse drug reactions (ADRs), pharmacovigilance centres have recently been identified as a potentially rich and important source of medication error data. To identify medication errors in the New Zealand Pharmacovigilance database (Centre for Adverse Reactions Monitoring [CARM]), and to describe the frequency and characteristics of these events. A retrospective analysis of the CARM pharmacovigilance database operated by the New Zealand Pharmacovigilance Centre was undertaken for the year 1 January-31 December 2007. All reports, excluding those relating to vaccines, clinical trials and pharmaceutical company reports, underwent a preventability assessment using predetermined criteria. Those events deemed preventable were subsequently classified to identify the degree of patient harm, type of error, stage of medication use process where the error occurred and origin of the error. A total of 1412 reports met the inclusion criteria and were reviewed, of which 4.3% (61/1412) were deemed preventable. Not all errors resulted in patient harm: 29.5% (18/61) were 'no harm' errors but 65.5% (40/61) of errors were deemed to have been associated with some degree of patient harm (preventable adverse drug events [ADEs]). For 5.0% (3/61) of events, the degree of patient harm was unable to be determined as the patient outcome was unknown. The majority of preventable ADEs (62.5% [25/40]) occurred in adults aged 65 years and older. The medication classes most involved in preventable ADEs were antibacterials for systemic use and anti-inflammatory agents, with gastrointestinal and respiratory system disorders the most common adverse events reported. For both preventable ADEs and 'no harm' events, most errors were incorrect dose and drug therapy monitoring problems consisting of failures in detection of significant drug interactions, past allergies or lack of necessary clinical monitoring. Preventable events were mostly related to the prescribing and

  5. Review of human error analysis methodologies and case study for accident management

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Won Dae; Kim, Jae Whan; Lee, Yong Hee; Ha, Jae Joo

    1998-03-01

    In this research, we tried to establish the requirements for the development of a new human error analysis method. To achieve this goal, we performed a case study with the following steps: 1. review of the existing HEA methods; 2. selection of those methods considered appropriate for the analysis of operators' tasks in NPPs; 3. choice of tasks for the application. The methods selected for the case study were HRMS (Human Reliability Management System), PHECA (Potential Human Error Cause Analysis), and CREAM (Cognitive Reliability and Error Analysis Method), and the tasks chosen for the application were the 'bleed and feed operation' and 'decision-making for the reactor cavity flooding' tasks. We measured the applicability of the selected methods to the NPP tasks and evaluated the advantages and disadvantages of each method. The three methods turned out to be applicable for the prediction of human error. We concluded that both CREAM and HRMS are sufficiently applicable to the NPP tasks; however, comparing the two methods, CREAM is thought to be more appropriate than HRMS from the viewpoint of the overall requirements. The requirements for the new HEA method obtained from the study can be summarized as follows: firstly, it should deal with cognitive error analysis; secondly, it should have an adequate classification system for the NPP tasks; thirdly, the description of the error causes and error mechanisms should be explicit; fourthly, it should maintain the consistency of the result by minimizing the ambiguity in each step of the analysis procedure; fifthly, it should be feasible with acceptable human resources. (author). 25 refs., 30 tabs., 4 figs.

  6. De-noising of GPS structural monitoring observation error using wavelet analysis

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2016-03-01

    Full Text Available In the process of continuous monitoring of a structure's state properties, such as static and dynamic responses, using the Global Positioning System (GPS), there are unavoidable errors in the observation data. These GPS errors and measurement noises are a drawback in precise monitoring applications because they mask the signals of interest. The current study applies three methods that are widely used to mitigate sensor observation errors, all based on wavelet analysis: the principal component analysis method, the wavelet compression method, and the wavelet de-noising method. These methods are used to de-noise the GPS observation errors, and their performance is demonstrated using GPS measurements collected by the short-term monitoring system designed for the Mansoura Railway Bridge in Egypt. The results show that GPS errors can effectively be removed, while the full-movement components of the structure can be extracted from the original signals, using wavelet analysis.
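
    Of the three methods, wavelet de-noising is the most compact to illustrate: decompose the GPS displacement series, soft-threshold the detail coefficients, and reconstruct. The sketch below uses PyWavelets with a universal threshold on a synthetic series; the wavelet family, decomposition level, and data are assumptions rather than the settings of the bridge study.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    """Wavelet de-noising of a displacement series: decompose, soft-threshold the
    detail coefficients with the universal threshold, and reconstruct."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745          # noise SD from finest scale
    thresh = sigma * np.sqrt(2.0 * np.log(len(x)))          # universal threshold
    coeffs[1:] = [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

# Hypothetical 1 Hz GPS record: slow structural response plus observation noise
t = np.arange(0, 600, 1.0)
truth = 2.0 * np.sin(2 * np.pi * t / 120.0)                 # mm, quasi-static movement
noisy = truth + np.random.default_rng(7).normal(0, 1.0, t.size)
clean = wavelet_denoise(noisy)
print("RMS error before: %.2f mm, after: %.2f mm"
      % (np.sqrt(np.mean((noisy - truth) ** 2)), np.sqrt(np.mean((clean - truth) ** 2))))
```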

  7. A human error analysis methodology, AGAPE-ET, for emergency tasks in nuclear power plants and its application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    This report presents a procedurised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. AGAPE-ET is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified, considering the performance characteristics of that cognitive function and the mechanisms by which performance influencing factors (PIFs) affect it. Error analysis items have then been determined from the identified error causes or error-likely situations to cue or guide the analysts through the overall human error analysis, and a human error analysis procedure based on these error analysis items is organised. The basic scheme for the quantification of HEP consists of multiplying the basic HEP (BHEP) assigned to the error analysis item by the weight from the influencing factors decision tree (IFDT) constructed for each cognitive function. The method is characterised by the structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need only work through the relevant cognitive functions. The report also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results. 42 refs., 7 figs., 36 tabs. (Author)
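
    The quantification scheme reduces to a single multiplication, as in this minimal illustration (the BHEP value and IFDT weight below are hypothetical, not taken from the methodology's tables):

```python
def quantify_hep(bhep, ifdt_weight):
    """HEP = basic HEP of the error analysis item x weight from the IFDT, capped at 1."""
    return min(bhep * ifdt_weight, 1.0)

# Hypothetical example: BHEP of 0.003 and an IFDT weight of 3 for adverse PIFs
print(quantify_hep(0.003, 3.0))   # -> 0.009
```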

  8. Error analysis and algorithm implementation for an improved optical-electric tracking device based on MEMS

    Science.gov (United States)

    Sun, Hong; Wu, Qian-zhong

    2013-09-01

    In order to improve the precision of optical-electric tracking devices, an improved MEMS-based optical-electric tracking device is proposed to address the tracking error and random drift of the gyroscope sensor. According to the principles of time-series analysis of random sequences, an AR model of the gyro random error is established within a Kalman filter framework, and the gyro output signals are repeatedly filtered with the Kalman filter. Using an ARM microcontroller, the servo motor is controlled by a fuzzy PID full closed-loop control algorithm, and lead-correction and feed-forward links are added to reduce the response lag to the angle input; the feed-forward term allows the output to follow the input closely. The function of the lead-compensation link is to shorten the response to input signals, so as to reduce errors. A wireless video monitoring module and remote monitoring software (Visual Basic 6.0) are used to monitor the servo motor state in real time: the video module gathers video signals and sends them wirelessly to the host computer, which displays the motor running state in the Visual Basic 6.0 window. At the same time, a detailed analysis of the main error sources is carried out. The quantitative analysis of the errors from the bandwidth and the gyro sensor makes the proportion of each error in the total error more intuitive and consequently helps decrease the system error. Simulation and experimental results show that the system has good tracking characteristics and is valuable for engineering applications.
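
    The gyro-drift part of the scheme can be sketched as follows: model the random drift as an AR(1) process identified from static data and estimate it with a scalar Kalman filter so that it can be subtracted from the gyro output. The AR order, coefficients, and noise levels below are assumptions for illustration; the fuzzy-PID servo loop and the monitoring software are not modeled.

```python
import numpy as np

def kalman_ar1(z, phi, q, r):
    """Scalar Kalman filter for a gyro drift state modelled as an AR(1) process
    d_k = phi*d_{k-1} + w_k, observed through noisy gyro output z_k = d_k + v_k."""
    x, p = 0.0, 1.0
    est = np.empty_like(z)
    for k, zk in enumerate(z):
        x = phi * x                      # predict
        p = phi * p * phi + q
        kgain = p / (p + r)              # update
        x = x + kgain * (zk - x)
        p = (1.0 - kgain) * p
        est[k] = x
    return est

# Hypothetical static test: true rate is zero, output is AR(1) drift plus white noise
rng = np.random.default_rng(5)
n, phi, q, r = 2000, 0.995, 1e-4, 1e-2
drift = np.zeros(n)
for k in range(1, n):
    drift[k] = phi * drift[k - 1] + rng.normal(0.0, np.sqrt(q))
z = drift + rng.normal(0.0, np.sqrt(r), n)

drift_hat = kalman_ar1(z, phi, q, r)
compensated = z - drift_hat              # drift-compensated gyro signal
print("RMS before: %.4f, after: %.4f" % (np.sqrt(np.mean(z**2)),
                                         np.sqrt(np.mean(compensated**2))))
```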

  9. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied on the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  10. A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students

    Directory of Open Access Journals (Sweden)

    Yoseph Tizazu

    2014-06-01

    Full Text Available This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using the error analysis method. Both quantitative and qualitative analyses showed that almost all components of the English language (such as orthography, morphology, syntax, mechanics, and semantics) in learners’ compositions have been affected by the errors. On the basis of surface structures affected by the errors, the following kinds of errors have been identified: addition of an auxiliary (*I was read by gass light), omission of a verb (*Sex before marriage ^ many disadvantages), misformation in word class (*riskable for risky) and misordering of major constituents in utterances (*I joined in 2003 Arba minch university). The study also identified two causes which triggered learners’ errors: intralingual and interlingual. The majority of the errors, however, were attributed to intralingual causes, which mainly resulted from the lack of full mastery of the basics of the English language.

  11. LEARNING FROM MISTAKES Error Analysis in the English Speech of Indonesian Tertiary Students

    Directory of Open Access Journals (Sweden)

    Imelda Gozali

    2017-12-01

    Full Text Available This study is part of a series of Classroom Action Research conducted with the aim of improving the English speech of students in one of the tertiary institutes in Indonesia. After some years of teaching English conversation, the writer noted that students made various types of errors in their speech, which can be classified generally into morphological, phonological, and lexical. While some of the errors are still generally acceptable, some others elicit laughter or inhibit comprehension altogether. Therefore, the writer is keen to analyze the more common errors made by the students, so as to be able to compile a teaching material that could be utilized to address those errors more effectively in future classes. This research used Error Analysis by Richards (1971) as the basis of classification. It was carried out in five classes with a total number of 80 students for a period of one semester (14 weeks). The results showed that most of the errors were phonological (errors in pronunciation), while others were morphological or grammatical in nature. This prompted the writer to design simple Phonics lessons for future classes.

  12. Error analysis and optimization of a 3-degree of freedom translational Parallel Kinematic Machine

    Science.gov (United States)

    Shankar Ganesh, S.; Koteswara Rao, A. B.

    2014-06-01

    In this paper, error modeling and analysis of a typical 3-degree of freedom translational Parallel Kinematic Machine is presented. This mechanism provides translational motion along the Cartesian X-, Y- and Z-axes. It consists of three limbs each having an arm and forearm with prismatic-revolute-revolute-revolute joints. The moving or tool platform maintains same orientation in the entire workspace due to its joint arrangement. From inverse kinematics, the joint angles for a given position of tool platform necessary for the error modeling and analysis are obtained. Error modeling is done based on the differentiation of the inverse kinematic equations. Variation of pose errors along X, Y and Z directions for a set of dimensions of the parallel kinematic machine is presented. A non-dimensional performance index, namely, global error transformation index is used to study the influence of dimensions and its corresponding global maximum pose error is reported. An attempt is made to find the optimal dimensions of the Parallel Kinematic Machine using Genetic Algorithms in MATLAB. The methodology presented and the results obtained are useful for predicting the performance capability of the Parallel Kinematic Machine under study.
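
    The differentiation step can be illustrated generically: a finite-difference Jacobian of a kinematic map propagates small joint-level source errors to first-order pose errors of the tool platform. The sketch below differentiates a stand-in forward-kinematics function (the paper differentiates the inverse kinematic equations of the actual machine, which are not reproduced here); all lengths and error magnitudes are assumed.

```python
import numpy as np

def numerical_jacobian(fk, q, eps=1e-6):
    """Finite-difference Jacobian of a forward-kinematics map fk: joint space -> tool pose."""
    q = np.asarray(q, dtype=float)
    x0 = np.asarray(fk(q), dtype=float)
    J = np.zeros((x0.size, q.size))
    for j in range(q.size):
        dq = np.zeros_like(q)
        dq[j] = eps
        J[:, j] = (np.asarray(fk(q + dq)) - x0) / eps
    return J

# Hypothetical forward kinematics of a simple translational mechanism (stand-in only)
def fk_demo(q):
    l1, l2 = 0.4, 0.6                    # arm and forearm lengths, metres (assumed)
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1]),
                     0.2 + 0.3 * q[2]])

q_nom = np.array([0.4, 0.9, 0.5])        # nominal joint variables
dq = np.array([1e-4, 1e-4, 1e-4])        # assumed joint-level source errors

J = numerical_jacobian(fk_demo, q_nom)
dx = J @ dq                              # first-order pose error of the tool platform
print("pose error [m]:", dx, " magnitude:", np.linalg.norm(dx))
```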

  13. Linear and nonlinear magnetic error measurements using action and phase jump analysis

    Directory of Open Access Journals (Sweden)

    Javier F. Cardona

    2009-01-01

    Full Text Available “Action and phase jump” analysis is presented—a beam based method that uses amplitude and phase knowledge of a particle trajectory to locate and measure magnetic errors in an accelerator lattice. The expected performance of the method is first tested using single-particle simulations in the optical lattice of the Relativistic Heavy Ion Collider (RHIC). Such simulations predict that under ideal conditions typical quadrupole errors can be estimated within an uncertainty of 0.04%. Other simulations suggest that sextupole errors can be estimated within a 3% uncertainty. Then the action and phase jump analysis is applied to real RHIC orbits with known quadrupole errors, and to real Super Proton Synchrotron (SPS) orbits with known sextupole errors. It is possible to estimate the strength of a skew quadrupole error from measured RHIC orbits within a 1.2% uncertainty, and to estimate the strength of a strong sextupole component from the measured SPS orbits within a 7% uncertainty.

  14. Error Analysis of the K-Rb-21Ne Comagnetometer Space-Stable Inertial Navigation System.

    Science.gov (United States)

    Cai, Qingzhong; Yang, Gongliu; Quan, Wei; Song, Ningfang; Tu, Yongqiang; Liu, Yiliang

    2018-02-24

    According to the application characteristics of the K-Rb-²¹Ne comagnetometer, a space-stable navigation mechanization is designed and the requirements of the comagnetometer prototype are presented. By analysing the error propagation rule of the space-stable Inertial Navigation System (INS), the three biases, the scale factor of the z-axis, and the misalignment of the x- and y-axes (non-orthogonality with the z-axis) are confirmed to be the main error sources. A numerical simulation of the mathematical model for each single error verified the theoretical analysis of the system's error propagation rule. Furthermore, a numerical simulation based on semi-physical data proves the feasibility of the navigation scheme proposed in this paper.
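
    The single-error simulation idea can be illustrated with a much simpler example than the paper's space-stable mechanization: the sketch below double-integrates an assumed constant accelerometer bias and compares the resulting position error with the classic 0.5·b·t² growth. The bias value and time span are arbitrary.

```python
# Minimal sketch: propagate a single constant accelerometer bias through
# double integration and compare with the analytic 0.5*b*t**2 growth.
# Illustration of single-error simulation only, not the paper's INS model.
import numpy as np

bias = 1e-4 * 9.81      # assumed accelerometer bias: 100 micro-g [m/s^2]
dt, T = 0.01, 600.0     # time step [s], simulation length [s]
t = np.arange(0.0, T, dt)

vel_err = np.cumsum(bias * np.ones_like(t)) * dt   # integrate acceleration error
pos_err = np.cumsum(vel_err) * dt                  # integrate velocity error

analytic = 0.5 * bias * t**2
print("numeric position error after 10 min: %.1f m" % pos_err[-1])
print("analytic 0.5*b*t^2:                  %.1f m" % analytic[-1])
```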

  15. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    Science.gov (United States)

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable to or even larger than the sample size.

  16. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable to or even larger than the sample size.

  17. A Posteriori Error Analysis of Stochastic Differential Equations Using Polynomial Chaos Expansions

    KAUST Repository

    Butler, T.

    2011-01-01

    We develop computable a posteriori error estimates for linear functionals of a solution to a general nonlinear stochastic differential equation with random model/source parameters. These error estimates are based on a variational analysis applied to stochastic Galerkin methods for forward and adjoint problems. The result is a representation for the error estimate as a polynomial in the random model/source parameter. The advantage of this method is that we use polynomial chaos representations for the forward and adjoint systems to cheaply produce error estimates by simple evaluation of a polynomial. By comparison, the typical method of producing such estimates requires repeated forward/adjoint solves for each new choice of random parameter. We present numerical examples showing that there is excellent agreement between these methods. © 2011 Society for Industrial and Applied Mathematics.
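
    The central trick, evaluating a polynomial surrogate in the random parameter instead of re-running the solver, can be sketched generically as below. The `expensive_solve` function is a stand-in for a forward/adjoint solve and is purely hypothetical; the basis is the probabilists' Hermite family for a standard-normal parameter.

```python
# Minimal sketch of the key idea: build a polynomial-chaos surrogate in a
# single standard-normal parameter xi, then evaluate it cheaply for new xi.
# "expensive_solve" stands in for a forward/adjoint solve; everything here is
# illustrative, not the paper's actual error estimator.
import numpy as np
from numpy.polynomial import hermite_e as He

def expensive_solve(xi):
    # hypothetical quantity of interest depending on the random parameter
    return np.exp(0.3 * xi) + 0.1 * xi**2

deg = 4
nodes = np.linspace(-3, 3, 50)                  # training samples of xi
V = He.hermevander(nodes, deg)                  # probabilists' Hermite basis
coeffs, *_ = np.linalg.lstsq(V, expensive_solve(nodes), rcond=None)

xi_new = 1.7                                    # a new parameter value
surrogate = He.hermevander(np.array([xi_new]), deg) @ coeffs
print("surrogate: %.4f   direct: %.4f" % (surrogate[0], expensive_solve(xi_new)))
```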

  18. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  19. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan, E-mail: liushuhuan@mail.xjtu.edu.cn; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-21

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate the system-on-chip's reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed based on the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters that are used to evaluate the system's reliability and safety were calculated using Isograph Reliability Workbench 11.0, such as the failure rate, unavailability and mean time to failure (MTTF). According to the fault tree analysis for the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.
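
    A minimal sketch of the fault-tree bookkeeping described above is shown below, with entirely hypothetical failure rates (not the measured Zynq-7010 values): basic events follow an exponential failure model, OR/AND gates combine their probabilities, and the series MTTF is 1/Σλ.

```python
# Minimal sketch of fault-tree style bookkeeping with hypothetical failure
# rates (not the measured Zynq-7010 values): exponential failure model,
# OR-gate and AND-gate combination of basic-event probabilities.
import numpy as np

failure_rates = {         # assumed soft-error rates per hour for three blocks
    "BRAM": 2.0e-6,
    "config_memory": 5.0e-6,
    "processor": 1.0e-6,
}
t = 1000.0                # mission time [h]

# Probability each basic event has occurred by time t (exponential model)
p = {k: 1.0 - np.exp(-lam * t) for k, lam in failure_rates.items()}

# OR gate: system fails if any block fails; AND gate: a redundant pair must both fail
p_or = 1.0 - np.prod([1.0 - v for v in p.values()])
p_and = p["BRAM"] * p["config_memory"]

lam_total = sum(failure_rates.values())
print("P(top event, OR): %.4f   P(AND example): %.2e" % (p_or, p_and))
print("system MTTF (series): %.0f h" % (1.0 / lam_total))
```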

  20. Skeletal mechanism generation for surrogate fuels using directed relation graph with error propagation and sensitivity analysis

    CERN Document Server

    Niemeyer, Kyle E; Raju, Mandhapati P

    2016-01-01

    A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with examples for three hydrocarbon components, n-heptane, iso-octane, and n-decane, relevant to surrogate fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal. Skeletal mechanisms for n-heptane and iso-octane ...
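
    The error-propagation step of DRGEP can be sketched as a max-product graph search: each species' overall interaction coefficient is the maximum over paths from the target of the product of direct interaction coefficients along the path, and species below a threshold are removed. The coefficients below are invented for illustration and are not from the surrogate-fuel mechanisms.

```python
# Minimal sketch of the error-propagation step in DRGEP: the overall
# interaction coefficient of each species is the maximum over graph paths of
# the product of direct interaction coefficients along the path. The direct
# coefficients below are made up for illustration.
import heapq

def drgep_coefficients(direct, target):
    """Max-product path search (Dijkstra-like) starting from the target species."""
    R = {target: 1.0}
    heap = [(-1.0, target)]
    while heap:
        negval, sp = heapq.heappop(heap)
        val = -negval
        if val < R.get(sp, 0.0):
            continue
        for nbr, r in direct.get(sp, {}).items():
            cand = val * r
            if cand > R.get(nbr, 0.0):
                R[nbr] = cand
                heapq.heappush(heap, (-cand, nbr))
    return R

# hypothetical direct interaction coefficients r_AB in [0, 1]
direct = {
    "fuel": {"O2": 0.9, "OH": 0.6},
    "OH":   {"HO2": 0.4, "H2O": 0.8},
    "O2":   {"HO2": 0.7},
}
R = drgep_coefficients(direct, "fuel")
threshold = 0.3
keep = {s for s, v in R.items() if v >= threshold}
print(R, "-> retained species:", keep)
```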

  1. Analysis on optical heterodyne frequency error of full-field heterodyne interferometer

    Science.gov (United States)

    Li, Yang; Zhang, Wenxi; Wu, Zhou; Lv, Xiaoyu; Kong, Xinxin; Guo, Xiaoli

    2017-06-01

    The full-field heterodyne interferometric measurement technique is being more widely applied by employing low-frequency heterodyne acousto-optical modulators instead of complex electro-mechanical scanning devices. The optical element surface can be directly acquired by synchronously detecting the received signal phases of each pixel, because standard matrix detectors such as CCD and CMOS cameras can be used in the heterodyne interferometer. Instead of the traditional four-step phase-shifting calculation, a Fourier spectral analysis method is used for phase extraction, which brings lower sensitivity to sources of uncertainty and higher measurement accuracy. In this paper, two types of full-field heterodyne interferometer are described and their advantages and disadvantages are specified. A heterodyne interferometer has to combine two beams of different frequencies to produce interference, which brings a variety of optical heterodyne frequency errors. Frequency mixing error and beat frequency error are two inescapable kinds of heterodyne frequency error. In this paper, the effects of frequency mixing error on surface measurement are derived, and the relationship between the phase extraction accuracy and the errors is calculated. The tolerance of the extinction ratio of the polarization splitting prism and the signal-to-noise ratio of stray light is given. The error of phase extraction by Fourier analysis that is caused by beat frequency shifting is derived and calculated. We also propose an improved phase extraction method based on spectrum correction. An amplitude ratio spectrum correction algorithm using a Hanning window is used to correct the heterodyne signal phase extraction. The simulation results show that this method can effectively suppress the degradation of phase extraction caused by beat frequency error and reduce the measurement uncertainty of the full-field heterodyne interferometer.
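
    A minimal sketch of Fourier-based phase extraction for a single pixel's beat signal is given below; it simply reads the phase of the FFT bin nearest the beat frequency. All signal parameters are made up, and the paper's spectrum-correction refinement is not reproduced.

```python
# Minimal sketch: recover the phase of a heterodyne beat signal from the FFT
# bin nearest the beat frequency. Numbers are made up; the paper's corrected
# algorithm additionally applies amplitude-ratio spectrum correction.
import numpy as np

fs, f_beat, n = 1000.0, 50.0, 400        # sample rate [Hz], beat freq [Hz], samples
true_phase = 0.8                          # phase to recover [rad]
t = np.arange(n) / fs
signal = np.cos(2 * np.pi * f_beat * t + true_phase) + 0.05 * np.random.randn(n)

spectrum = np.fft.rfft(signal * np.hanning(n))
freqs = np.fft.rfftfreq(n, 1 / fs)
k = np.argmin(np.abs(freqs - f_beat))     # bin nearest the beat frequency
est_phase = np.angle(spectrum[k])
print("estimated phase: %.3f rad (true %.3f rad)" % (est_phase, true_phase))
```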

  2. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    Science.gov (United States)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely utilized in the field of high-speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures included in the robots as a single link. As the established error model fails to reflect the error features of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the error model is undermined. An error modeling methodology is proposed to establish an error model of parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus it captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in the statistical sense, a sensitivity analysis is carried out. Accordingly, some atlases are depicted to express each geometric error's influence on the moving platform's pose errors. From these atlases, the geometric errors that have a greater impact on the accuracy of the moving platform are identified, and some sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also figured out. By taking into account the error factors that are generally neglected in existing modeling methods, the proposed modeling method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  3. Learning about Expectation Violation from Prediction Error Paradigms - A Meta-Analysis on Brain Processes Following a Prediction Error.

    Science.gov (United States)

    D'Astolfo, Lisa; Rief, Winfried

    2017-01-01

    Modifying patients' expectations by exposing them to expectation violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism for therapeutic success for a variety of different mental disorders. However, clinical observations suggest that patients often maintain their expectations regardless of experiences contradicting their expectations. It remains unclear which information processing mechanisms lead to modification or persistence of patients' expectations. Insight into this processing could be provided by neuroimaging studies investigating prediction error (PE, i.e., neuronal reactions to non-expected stimuli). Two methods are often used to investigate the PE: (1) paradigms, in which participants passively observe PEs ("passive" paradigms) and (2) paradigms, which encourage a behavioral adaptation following a PE ("active" paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation violation situation and (2) an enhanced confrontation in which the patient actively challenges his expectation. We used this similarity to gain insight into the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE and paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Due to the involvement of reward assessment and avoidance learning associated with the striatum and the insula, we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, therefore leading to an expectation change. While further research is needed to directly assess

  4. Learning about Expectation Violation from Prediction Error Paradigms – A Meta-Analysis on Brain Processes Following a Prediction Error

    Directory of Open Access Journals (Sweden)

    Lisa D’Astolfo

    2017-07-01

    Full Text Available Modifying patients’ expectations by exposing them to expectation violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism for therapeutic success for a variety of different mental disorders. However, clinical observations suggest that patients often maintain their expectations regardless of experiences contradicting their expectations. It remains unclear which information processing mechanisms lead to modification or persistence of patients’ expectations. Insight into this processing could be provided by neuroimaging studies investigating the prediction error (PE, i.e., neuronal reactions to non-expected stimuli). Two methods are often used to investigate the PE: (1) paradigms in which participants passively observe PEs (“passive” paradigms) and (2) paradigms which encourage a behavioral adaptation following a PE (“active” paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation violation situation and (2) an enhanced confrontation in which the patient actively challenges his expectation. We used this similarity to gain insight into the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE and paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Due to the involvement of reward assessment and avoidance learning associated with the striatum and the insula, we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, therefore leading to an expectation change. While further research is needed

  5. A Cognitive Human Error Analysis with CREAM in Control Room of Petrochemical Industry

    Directory of Open Access Journals (Sweden)

    Sana Shokria

    2016-10-01

    Full Text Available Background The cognitive human error analysis technique is one of the second-generation techniques used to evaluate human reliability; it has a strong, detailed theoretical background that focuses on the important cognitive features of human behavior. Objectives The aim of this study was to identify critical tasks and jobs using cognitive human error analysis with CREAM. Finally, based on the results, the major causes of error were detected. Methods This cross-sectional study was conducted on 53 people working in the control room of an olefin unit, one of the most important control rooms located in a special economic zone in the Assaluyeh petrochemical industry. In this study, first a job analysis was conducted and the sub-tasks and conditions affecting the performance of the staff were determined. Then, the control mode coefficient and control mode type, as well as the probability of total error, were determined. Finally, the cognitive functions and the type of cognitive error related to each sub-task were identified. Results Among the six evaluated occupational tasks, the tasks performed by the board-man and the site-man had the highest values of total human error in terms of the transitory overall error coefficient (0.056 and 0.031, respectively). In addition, the following results were obtained on the basis of the extended CREAM: execution failure (31.72%), interpretation failure (29.20%), planning failure (14.63%), and observation failure (24.39%). Conclusions Common Performance Conditions (CPCs), empowerment, and the time available for work were among the most important factors that reduced occupational performance. To optimize the communication system, it is necessary to arrange the priority of tasks, hold joint meetings, inform the staff about the termination of work permits, hold training sessions, and measure the pollutants.

  6. Diction and Expression in Error Analysis Can Enhance Academic Writing of L2 University Students

    Directory of Open Access Journals (Sweden)

    Muhammad Sajid

    2016-06-01

    Full Text Available Without proper linguistic competence in the English language, academic writing is one of the most challenging tasks, especially in various genre-specific disciplines, for L2 novice writers. This paper examines the role of diction and expression, through error analysis of the English language of L2 novice writers’ academic writing, in interdisciplinary texts of IT & Computer sciences and Business & Management sciences. Though the importance of vocabulary in L2 academic discourse is widely recognized, there has been little research focusing on diction and expression at the higher education level. A corpus of 40 introductions of published research articles authored by L2 novice writers, downloaded from journals (20 from IT & Computer sciences and 20 from Business & Management sciences), was analyzed to determine lexico-grammatical errors in the texts by applying the Markin4 method of error analysis. ‘Rewrites’ in italics are an attempt to demonstrate the English language’s flexibility and its infinite vastness and richness in diction and expression, comparing it with the excerpts taken from the corpus. Keywords: diction & expression, academic writing, error analysis, lexico-grammatical errors

  7. Monte Carlo analysis: error of extrapolated thermal conductivity from molecular dynamics simulations

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xiang-Yang [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Andersson, Anders David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-11-07

    In this short report, we give an analysis of the extrapolated thermal conductivity of UO2 from earlier molecular dynamics (MD) simulations [1]. Because almost all material properties are functions of temperature, e.g. fission gas release, the fuel thermal conductivity is the most important parameter from a model sensitivity perspective [2]. Thus, it is useful to perform such analysis.
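
    A generic version of such a Monte Carlo analysis can be sketched as follows: perturb finite-size conductivities within assumed error bars, refit the 1/k versus 1/L extrapolation each time, and take the spread of the extrapolated bulk value as its uncertainty. The numbers below are illustrative, not the UO2 results of Ref. [1].

```python
# Minimal sketch of a Monte Carlo estimate of extrapolation error: perturb
# hypothetical finite-size conductivity data within their error bars, refit
# the 1/k vs 1/L line each time, and collect the spread of the extrapolated
# (1/L -> 0) value. Numbers are illustrative, not the UO2 MD results.
import numpy as np

rng = np.random.default_rng(0)
L = np.array([4.0, 6.0, 8.0, 12.0])        # supercell sizes [nm]
k = np.array([3.1, 3.8, 4.2, 4.7])         # MD thermal conductivities [W/m/K]
k_err = np.array([0.2, 0.2, 0.25, 0.3])    # assumed 1-sigma uncertainties

k_bulk_samples = []
for _ in range(5000):
    k_sample = rng.normal(k, k_err)
    slope, intercept = np.polyfit(1.0 / L, 1.0 / k_sample, 1)
    k_bulk_samples.append(1.0 / intercept)  # bulk value = 1 / (1/k at 1/L = 0)

k_bulk_samples = np.array(k_bulk_samples)
print("extrapolated k_bulk = %.2f +/- %.2f W/m/K"
      % (k_bulk_samples.mean(), k_bulk_samples.std()))
```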

  8. Calculating potential error in sodium MRI with respect to the analysis of small objects.

    Science.gov (United States)

    Stobbe, Robert W; Beaulieu, Christian

    2017-10-11

    To facilitate correct interpretation of sodium MRI measurements, calculation of error with respect to rapid signal decay is introduced and combined with that of spatially correlated noise to assess volume-of-interest (VOI) ²³Na signal measurement inaccuracies, particularly for small objects. Noise and signal decay-related error calculations were verified using twisted projection imaging and a specially designed phantom with different sized spheres of constant elevated sodium concentration. As a demonstration, lesion signal measurement variation (5 multiple sclerosis participants) was compared with that predicted from calculation. Both theory and phantom experiment showed that VOI signal measurement in a large 10-mL, 314-voxel sphere was 20% less than expected on account of point-spread-function smearing when the VOI was drawn to include the full sphere. Volume-of-interest contraction reduced this error but increased noise-related error. Errors were even greater for smaller spheres (40-60% less than expected for a 0.35-mL, 11-voxel sphere). Image-intensity VOI measurements varied and increased with multiple sclerosis lesion size in a manner similar to that predicted from theory. Correlation suggests large underestimation of ²³Na signal in small lesions. Acquisition-specific measurement error calculation aids ²³Na MRI data analysis and highlights the limitations of current low-resolution methodologies. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
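
    The point-spread-function effect described above can be illustrated with a one-dimensional toy model: a small object of unit signal blurred by a Gaussian PSF yields a VOI mean (with the VOI drawn at the full object size) that falls increasingly short of the true signal as the object shrinks. The dimensions below are arbitrary and do not reproduce the authors' ²³Na acquisition.

```python
# Minimal 1-D sketch of point-spread-function smearing: a small object of unit
# signal is blurred by a Gaussian PSF, and the mean over a VOI drawn at the
# full object size underestimates the true signal more as the object shrinks.
import numpy as np

def voi_mean(object_width_vox, psf_fwhm_vox=2.0, n=512):
    x = np.arange(n) - n // 2
    obj = (np.abs(x) <= object_width_vox / 2).astype(float)   # true signal = 1 inside
    sigma = psf_fwhm_vox / 2.355
    psf = np.exp(-0.5 * (x / sigma) ** 2)
    psf /= psf.sum()
    blurred = np.convolve(obj, psf, mode="same")
    voi = np.abs(x) <= object_width_vox / 2                    # VOI at full object size
    return blurred[voi].mean()

for w in (3, 6, 12, 24):
    print("object %2d voxels wide: measured/true signal = %.2f" % (w, voi_mean(w)))
```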

  9. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    Science.gov (United States)

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. An analysis of error patterns in children's backward digit recall in noise

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R.

    2015-01-01

    The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for a decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item (if digits that were not presented in a list were repeated) and order (if correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing, aged between 7 and 10 years, were included. A repeated-measures analysis of variance (RM-ANOVA) revealed main effects of error type and digit span length, and an interaction with listening condition: order errors occurred more frequently than item errors in the degraded listening condition compared to quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise. PMID:26168949

  11. Practical Implementation and Error Analysis of PSCPWM-Based Switching Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Christensen, Frank Schwartz; Frederiksen, Thomas Mansachs; Andersen, Michael Andreas E.

    1999-01-01

    The paper presents an in-depth analysis of practical results for a Parallel Phase-Shifted Carrier Pulse-Width Modulation (PSCPWM) amplifier. Spectral analyses of the error sources involved in PSCPWM are presented. The analysis is performed both by numerical means in MATLAB and by simulation in PSPICE, followed by practical verification on a prototype. A toolbox for MATLAB has been developed to ease the complex analysis.

  12. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. Existing literature offers limited insight into the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of data primarily focused on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) a lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding

  13. [Two Data Inversion Algorithms of Aerosol Horizontal Distribution Detected by MPL and Error Analysis].

    Science.gov (United States)

    Lü, Li-hui; Liu, Wen-qing; Zhang, Tian-shu; Lu, Yi-huai; Dong, Yun-sheng; Chen, Zhen-yi; Fan, Guang-qiang; Qi, Shao-shuai

    2015-07-01

    Atmospheric aerosols have important impacts on human health, the environment and the climate system. The Micro Pulse Lidar (MPL) is a new and effective tool for detecting the horizontal distribution of atmospheric aerosol, and the extinction coefficient inversion and its error analysis are important aspects of the data processing. In order to detect the horizontal distribution of atmospheric aerosol near the ground, the slope and Fernald algorithms were both used to invert horizontal MPL data and the results were compared. The error analysis showed that the error of the slope algorithm comes mainly from the theoretical model, while that of the Fernald algorithm comes mainly from its assumptions. Though some problems still exist in these two horizontal extinction coefficient inversions, they can present the spatial and temporal distribution of aerosol particles accurately, and the correlations with a forward-scattering visibility sensor are both high, with values of 95%. Relatively speaking, the Fernald algorithm is more suitable for the inversion of the horizontal extinction coefficient.
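
    For the slope algorithm mentioned above, a minimal sketch on synthetic horizontal data is shown below: for a homogeneous path the range-corrected log signal ln(r²P(r)) is linear in range with slope −2α. The system constant, lidar ratio, and noise level are assumed values; the Fernald inversion (which needs a boundary value and a lidar ratio) is not shown.

```python
# Minimal sketch of the slope method on synthetic horizontal MPL data: for a
# homogeneous path, S(r) = ln(r^2 * P(r)) is linear in r with slope -2*alpha,
# so the extinction coefficient follows from a straight-line fit.
import numpy as np

alpha_true = 0.15e-3                      # extinction coefficient [1/m]
r = np.arange(200.0, 3000.0, 15.0)        # range bins [m]
C, beta = 1e10, alpha_true / 30.0         # system constant, backscatter (assumed lidar ratio 30 sr)
P = C * beta * np.exp(-2 * alpha_true * r) / r**2
P *= 1 + 0.01 * np.random.randn(r.size)   # add 1% noise

S = np.log(r**2 * P)                      # range-corrected log signal
slope, _ = np.polyfit(r, S, 1)
alpha_est = -0.5 * slope
print("retrieved alpha = %.3e 1/m (true %.3e)" % (alpha_est, alpha_true))
```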

  14. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over the general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  15. Evaluation of parametric models by the prediction error in colorectal cancer survival analysis.

    Science.gov (United States)

    Baghestani, Ahmad Reza; Gohari, Mahmood Reza; Orooji, Arezoo; Pourhoseingholi, Mohamad Amin; Zali, Mohammad Reza

    2015-01-01

    The aim of this study was to determine the factors influencing the predicted survival time of patients with colorectal cancer (CRC) using parametric models and to select the best model by the prediction error technique. Survival models are statistical techniques to estimate or predict the overall time up to specific events. Prediction is important in medical science, and the accuracy of prediction is determined by a measurement, generally based on loss functions, called the prediction error. A total of 600 colorectal cancer patients who were admitted to the Cancer Registry Center of the Gastroenterology and Liver Disease Research Center, Taleghani Hospital, Tehran, were followed for at least 5 years and had complete information for this study. Body mass index (BMI), sex, family history of CRC, tumor site, stage of disease and histology of the tumor were included in the analysis. Survival times were compared by the log-rank test, and multivariate analysis was carried out using parametric models including log-normal, Weibull and log-logistic regression. For selecting the best model, the prediction error by apparent loss was used. The log-rank test showed better survival for females, BMI more than 25, patients with early stage at diagnosis and patients with a colon tumor site. The prediction error by apparent loss was estimated and indicated that the Weibull model was the best one for multivariate analysis. BMI and stage were independent prognostic factors according to the Weibull model. In this study, according to the prediction error, Weibull regression showed the best fit.

  16. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    Science.gov (United States)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis from the radar system to topographic height errors for bi-static single-pass SAR interferometry for a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally due to orbital dynamics, the height accuracy of the system is modeled as a function of the spacecraft position and ground location. Vector sensitivity equations of height and the planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, and slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a 514 km altitude, 97.4 degree inclination tandem satellite mission with a 300 m baseline separation and X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.

  17. On the relationship between anxiety and error monitoring: a meta-analysis and conceptual framework.

    Science.gov (United States)

    Moser, Jason S; Moran, Tim P; Schroder, Hans S; Donnellan, M Brent; Yeung, Nick

    2013-01-01

    Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, "small-to-medium" relationship with enhanced ERN (r = -0.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = -0.35) than those utilizing other measures of anxiety (r = -0.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur.

  18. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost, light-weight, has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has an excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the positions of the current-carrying conductor, including un-centeredness and un-perpendicularity, have not been analyzed in detail until now. In this paper, for the purpose of achieving minimum measurement error, a theoretical analysis is proposed based on the vector inner and exterior products. In the presented mathematical model of the relative error, the un-centered offset distance, the un-perpendicular angle, the radius of the circle, and the number of magnetic sensors are expressed in one equation. A comparison of the relative error caused by the position of the current-carrying conductor between four and eight sensors is conducted. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of a circular array of magnetic sensors for current measurement in practical situations.
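
    The kind of position-error estimate discussed above can be reproduced numerically: model the conductor as an infinite straight wire (B = μ0I/2πd), sum the tangential field components at N sensors on a circle, and compare the off-center average with the centered one. The geometry below is an arbitrary example, not the paper's closed-form vector-product model.

```python
# Minimal sketch: relative measurement error of a circular magnetic-sensor
# array when the conductor is displaced from the array centre. Each sensor
# measures the tangential field component of an infinite straight conductor
# (B = mu0*I / (2*pi*d)); the averaged reading is compared with the centred
# case. Geometry values are arbitrary examples.
import numpy as np

mu0, I = 4e-7 * np.pi, 100.0              # permeability, current [A]
R, N = 0.05, 8                            # array radius [m], number of sensors
offset = np.array([0.010, 0.005])         # conductor displacement [m]

angles = 2 * np.pi * np.arange(N) / N
sensors = R * np.column_stack([np.cos(angles), np.sin(angles)])
tangential = np.column_stack([-np.sin(angles), np.cos(angles)])  # sensing axes

def mean_tangential_field(conductor_xy):
    d = sensors - conductor_xy                     # sensor positions relative to wire
    dist = np.linalg.norm(d, axis=1)
    # field of an infinite wire is azimuthal around the wire: B ~ z_hat x d_hat
    b_dir = np.column_stack([-d[:, 1], d[:, 0]]) / dist[:, None]
    B = (mu0 * I / (2 * np.pi * dist))[:, None] * b_dir
    return np.mean(np.sum(B * tangential, axis=1))

B_centered = mean_tangential_field(np.zeros(2))
B_offset = mean_tangential_field(offset)
print("relative error: %.4f %%" % (100 * (B_offset - B_centered) / B_centered))
```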

  19. A Monte Carlo error analysis program for near-Mars, finite-burn, orbital transfer maneuvers

    Science.gov (United States)

    Green, R. N.; Hoffman, L. H.; Young, G. R.

    1972-01-01

    A computer program was developed which performs an error analysis of a minimum-fuel, finite-thrust, transfer maneuver between two Keplerian orbits in the vicinity of Mars. The method of analysis is the Monte Carlo approach where each off-nominal initial orbit is targeted to the desired final orbit. The errors in the initial orbit are described by two covariance matrices of state deviations and tracking errors. The function of the program is to relate these errors to the resulting errors in the final orbit. The equations of motion for the transfer trajectory are those of a spacecraft maneuvering with constant thrust and mass-flow rate in the neighborhood of a single body. The thrust vector is allowed to rotate in a plane with a constant pitch rate. The transfer trajectory is characterized by six control parameters and the final orbit is defined, or partially defined, by the desired target parameters. The program is applicable to the deboost maneuver (hyperbola to ellipse), orbital trim maneuver (ellipse to ellipse), fly-by maneuver (hyperbola to hyperbola), escape maneuvers (ellipse to hyperbola), and deorbit maneuver.

  20. On the Relationship Between Anxiety and Error Monitoring: A meta-analysis and conceptual framework

    Directory of Open Access Journals (Sweden)

    Jason eMoser

    2013-08-01

    Full Text Available Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, small-to-medium relationship with enhanced ERN (r = -.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = -.35) than those utilizing other measures of anxiety (r = -.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur.

  1. Mechanistic model and analysis of doxorubicin release from liposomal formulations.

    Science.gov (United States)

    Fugit, Kyle D; Xiang, Tian-Xiang; Choi, Du H; Kangarlou, Sogol; Csuhai, Eva; Bummer, Paul M; Anderson, Bradley D

    2015-11-10

    Reliable and predictive models of drug release kinetics in vitro and in vivo are still lacking for liposomal formulations. Developing robust, predictive release models requires systematic, quantitative characterization of these complex drug delivery systems with respect to the physicochemical properties governing the driving force for release. These models must also incorporate changes in release due to the dissolution media and methods employed to monitor release. This paper demonstrates the successful development and application of a mathematical mechanistic model capable of predicting doxorubicin (DXR) release kinetics from liposomal formulations resembling the FDA-approved nanoformulation DOXIL® using dynamic dialysis. The model accounts for DXR equilibria (e.g. self-association, precipitation, ionization), the change in intravesicular pH due to ammonia release, and dialysis membrane transport of DXR. The model was tested using a Box-Behnken experimental design in which release conditions including extravesicular pH, ammonia concentration in the release medium, and the dilution of the formulation (i.e. suspension concentration) were varied. Mechanistic model predictions agreed with observed DXR release up to 19h. The predictions were similar to a computer fit of the release data using an empirical model often employed for analyzing data generated from this type of experimental design. Unlike the empirical model, the mechanistic model was also able to provide reasonable predictions of release outside the tested design space. These results illustrate the usefulness of mechanistic modeling to predict drug release from liposomal formulations in vitro and its potential for future development of in vitro - in vivo correlations for complex nanoformulations. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    Science.gov (United States)

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki

    2013-01-01

    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  3. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    Science.gov (United States)

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  4. Time-series analysis of Nigeria rice supply and demand: Error ...

    African Journals Online (AJOL)

    The study examined a time-series analysis of Nigeria rice supply and demand with a view to determining any long-run equilibrium between them using the Error Correction Model approach (ECM). The data used for the study represents the annual series of 1960-2007 (47 years) for rice supply and demand in Nigeria, ...
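
    A two-step Engle-Granger error correction model of the kind referred to in the title can be sketched on synthetic data as below (the study's actual 1960-2007 rice series are not reproduced); the coefficient on the lagged residual is the speed-of-adjustment term, expected to be negative when a long-run equilibrium exists.

```python
# Minimal sketch of a two-step Engle-Granger error correction model on
# synthetic data (the study's actual rice series are not reproduced here).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 48
supply = np.cumsum(rng.normal(0.5, 1.0, n))            # integrated (trending) series
demand = 0.8 * supply + rng.normal(0.0, 0.5, n)        # cointegrated with supply

# Step 1: long-run (cointegrating) regression in levels
longrun = sm.OLS(demand, sm.add_constant(supply)).fit()
ect = longrun.resid                                    # error-correction term

# Step 2: short-run dynamics with the lagged error-correction term
d_demand = np.diff(demand)
X = sm.add_constant(np.column_stack([np.diff(supply), ect[:-1]]))
ecm = sm.OLS(d_demand, X).fit()
print(ecm.params)   # last coefficient: speed of adjustment (expected negative)
```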

  5. Utility of KTEA-3 Error Analysis for the Diagnosis of Specific Learning Disabilities

    Science.gov (United States)

    Flanagan, Dawn P.; Mascolo, Jennifer T.; Alfonso, Vincent C.

    2017-01-01

    Through the use of excerpts from one of our own case studies, this commentary applied concepts inherent in, but not limited to, the neuropsychological literature to the interpretation of performance on the Kaufman Tests of Educational Achievement-Third Edition (KTEA-3), particularly at the level of error analysis. The approach to KTEA-3 test…

  6. An Error Analysis in Division Problems in Fractions Posed by Pre-Service Elementary Mathematics Teachers

    Science.gov (United States)

    Isik, Cemalettin; Kar, Tugrul

    2012-01-01

    The present study aimed to make an error analysis in the problems posed by pre-service elementary mathematics teachers about fractional division operation. It was carried out with 64 pre-service teachers studying in their final year in the Department of Mathematics Teaching in an eastern university during the spring semester of academic year…

  7. Combining Reading Quizzes and Error Analysis to Motivate Students to Grow

    Science.gov (United States)

    Wang, Jiawen; Selby, Karen L.

    2017-01-01

    In the spirit of scholarship in teaching and learning at the college level, we suggested and experimented with reading quizzes in combination with error analysis as one way not only to get students better prepared for class but also to provide opportunities for reflection under frameworks of mastery learning and mind growth. Our mixed-method…

  8. Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder

    Science.gov (United States)

    Hall, Steven T.; Post, Christopher J.

    2009-01-01

    Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…

  9. Formulation and error analysis for a generalized image point correspondence algorithm

    Science.gov (United States)

    Shapiro, Linda (Editor); Rosenfeld, Azriel (Editor); Fotedar, Sunil; Defigueiredo, Rui J. P.; Krishen, Kumar

    1992-01-01

    A Generalized Image Point Correspondence (GIPC) algorithm, which enables the determination of 3-D motion parameters of an object in a configuration where both the object and the camera are moving, is discussed. A detailed error analysis of this algorithm has been carried out. Furthermore, the algorithm was tested on both simulated and video-acquired data, and its accuracy was determined.

  10. An Error Analysis of Hot Electron Temperatures Collected from Cross Sectioned GaN HEMTs (Preprint)

    Science.gov (United States)

    2017-07-03

    The parameter uncertainties are computed from the square roots of the diagonal terms of the inverse Hessian, H⁻¹ ≈ diag(σ_A², σ_T²) (Eq. 4). An initial analysis of the gathered spectra was... error analyses such as that presented here become essential for the assessment of those models.

  11. Diction and Expression in Error Analysis Can Enhance Academic Writing of L2 University Students

    Science.gov (United States)

    Sajid, Muhammad

    2016-01-01

    Without proper linguistic competence in English language, academic writing is one of the most challenging tasks, especially, in various genre specific disciplines by L2 novice writers. This paper examines the role of diction and expression through error analysis in English language of L2 novice writers' academic writing in interdisciplinary texts…

  12. Execution-Error Modeling and Analysis of the GRAIL Spacecraft Pair

    Science.gov (United States)

    Goodson, Troy D.

    2013-01-01

    The GRAIL spacecraft, Ebb and Flow (aka GRAIL-A and GRAIL-B), completed their prime mission in June and extended mission in December 2012. The excellent performance of the propulsion and attitude control subsystems contributed significantly to the mission's success. In order to better understand this performance, the Navigation Team has analyzed and refined the execution-error models for delta-v maneuvers. There were enough maneuvers in the prime mission to form the basis of a model update that was used in the extended mission. This paper documents the evolution of the execution-error models along with the analysis and software used.

  13. Error analysis for momentum conservation in Atomic-Continuum Coupled Model

    Science.gov (United States)

    Yang, Yantao; Cui, Junzhi; Han, Tiansi

    2016-08-01

    Atomic-Continuum Coupled Model (ACCM) is a multiscale computation model proposed by Xiang et al. (in IOP conference series materials science and engineering, 2010), which is used to study and simulate dynamics and thermal-mechanical coupling behavior of crystal materials, especially metallic crystals. In this paper, we construct a set of interpolation basis functions for the common BCC and FCC lattices, respectively, implementing the computation of ACCM. Based on this interpolation approximation, we give a rigorous mathematical analysis of the error of momentum conservation equation introduced by ACCM, and derive a sequence of inequalities that bound the error. Numerical experiment is carried out to verify our result.

  14. Human errors identification using the human factors analysis and classification system technique (HFACS

    Directory of Open Access Journals (Sweden)

    G. A. Shirali

    2013-12-01

    Results: In this study, 158 accident reports from the Ahvaz steel industry were analyzed with the HFACS technique. This analysis showed that most of the human errors were related, at the first level, to skill-based errors; at the second level, to the physical environment; at the third level, to inadequate supervision; and at the fourth level, to resource management. Conclusions: Studying and analyzing past events using the HFACS technique can identify the major and root causes of accidents and can be effective in preventing the repetition of such mishaps. It can also be used as a basis for developing strategies to prevent future events in steel industries.

  15. Optimal alpha reduces error rates in gene expression studies: a meta-analysis approach.

    Science.gov (United States)

    Mudge, J F; Martyniuk, C J; Houlahan, J E

    2017-06-21

    Transcriptomic approaches (microarray and RNA-seq) have been a tremendous advance for molecular science in all disciplines, but they have made interpretation of hypothesis testing more difficult because of the large number of comparisons that are done within an experiment. The result has been a proliferation of techniques aimed at solving the multiple comparisons problem, techniques that have focused primarily on minimizing Type I error with little or no concern about concomitant increases in Type II errors. We have previously proposed a novel approach for setting statistical thresholds with applications for high throughput omics-data, optimal α, which minimizes the probability of making either error (i.e. Type I or II) and eliminates the need for post-hoc adjustments. A meta-analysis of 242 microarray studies extracted from the peer-reviewed literature found that current practices for setting statistical thresholds led to very high Type II error rates. Further, we demonstrate that applying the optimal α approach results in error rates as low or lower than error rates obtained when using (i) no post-hoc adjustment, (ii) a Bonferroni adjustment and (iii) a false discovery rate (FDR) adjustment which is widely used in transcriptome studies. We conclude that optimal α can reduce error rates associated with transcripts in both microarray and RNA-seq experiments, but point out that improved statistical techniques alone cannot solve the problems associated with high throughput datasets - these approaches need to be coupled with improved experimental design that considers larger sample sizes and/or greater study replication.
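
    The optimal-α idea can be sketched for a single two-sample t-test: scan candidate significance levels and keep the one minimizing the combined (here equally weighted) Type I plus Type II error probability for an assumed effect size and sample size. This is only an illustration of the concept; the authors' implementation may weight the two error types differently.

```python
# Minimal sketch of the "optimal alpha" idea: choose the significance level
# that minimizes the combined probability of Type I and Type II errors for a
# given effect size and sample size (two-sample t-test, equal weighting).
import numpy as np
from statsmodels.stats.power import TTestIndPower

effect_size, n_per_group = 0.8, 12     # assumed Cohen's d and group size
alphas = np.linspace(1e-4, 0.3, 600)
power = np.array([TTestIndPower().power(effect_size=effect_size,
                                        nobs1=n_per_group, alpha=a)
                  for a in alphas])
combined_error = alphas + (1.0 - power)        # Type I + Type II
opt = alphas[np.argmin(combined_error)]
print("optimal alpha = %.4f (vs conventional 0.05)" % opt)
```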

  16. Dosing errors in prescribed antibiotics for older persons with CKD: a retrospective time series analysis.

    Science.gov (United States)

    Farag, Alexandra; Garg, Amit X; Li, Lihua; Jain, Arsh K

    2014-03-01

    Prescribing excessive doses of oral antibiotics is common in chronic kidney disease (CKD) and in this population is implicated in more than one-third of preventable adverse drug events. To improve the care of patients with CKD, many ambulatory laboratories now report estimated glomerular filtration rate (eGFR). We sought to describe the rate of ambulatory antibiotic dosing errors in CKD and examine the impact of eGFR reporting on these errors. Population-based retrospective time series analysis. Southwestern Ontario, Canada, from January 2003 to April 2010. Participants were ambulatory patients 66 years or older with CKD stages 4 or 5 (eGFR < 30 mL/min/1.73 m²); the outcome of interest was antibiotic dosing errors. Using linked health care databases, we assessed the monthly rate of excess dosing of orally prescribed antibiotics that require dose adjustment in CKD. We compared this rate before and after implementation of eGFR reporting. 1,464 prescriptions were filled for study antibiotics throughout the study period. Prior to eGFR reporting, the average rate of antibiotic prescriptions dosed in excess of guidelines was 64 per 100 antibiotic prescriptions. The introduction of eGFR reporting had no impact on this rate (68 per 100 antibiotic prescriptions; P = 0.9). Nitrofurantoin, which is contraindicated in patients with CKD, was prescribed 169 times throughout the study period. Although we attribute the dosing errors to poor awareness of dosing guidelines, we did not assess physician knowledge to confirm this. Dosing errors lead to adverse drug events; however, the latter could not be assessed reliably in our data sources. Ambulatory antibiotic dosing errors are exceedingly common in CKD care. Strategies other than eGFR reporting are needed to prevent this medical error. Copyright © 2014 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.

  17. Doctors' duty to disclose error: a deontological or Kantian ethical analysis.

    Science.gov (United States)

    Bernstein, Mark; Brown, Barry

    2004-05-01

    Medical (surgical) error is being talked about more openly and, besides being the subject of retrospective reviews, is now the subject of prospective research. Disclosure of error has been a difficult issue because of fear of embarrassment for doctors in the eyes of their peers, and fear of punitive action by patients, consisting of medicolegal action and/or complaints to doctors' governing bodies. This paper examines physicians' and surgeons' duty to disclose error from an ethical standpoint, specifically by applying the moral philosophical theory espoused by Immanuel Kant (i.e., deontology). The purpose of this discourse is to apply moral philosophical analysis to a delicate but important issue which all physicians and surgeons will have to confront, probably numerous times, in their professional careers.

  18. Types of Student Errors in Solving Geometry Problems Based on Newman's Error Analysis (NEA)

    Directory of Open Access Journals (Sweden)

    Anita Dewi Utami

    2016-03-01

    Full Text Available The students' ability to solve mathematical problems is affected directly or indirectly by the patterns of problem solving they acquired while attending primary and secondary school. Observation shows that there are students who cannot answer proving problems and take no action at all, even though it is only at the step of understanding the problem. NEA is a framework with simple diagnostic procedures, which include (1) decoding, (2) comprehension, (3) transformation, (4) process skills, and (5) encoding. Newman's diagnostic method is used to identify the error categories in descriptive test answers. Therefore, the types of students' errors in solving proving problems in the Geometry 1 subject based on Newman's Error Analysis (NEA), and the causes of the students' mistakes in solving those proving problems, are discussed in this article.

  19. Theoretical analysis and estimation of decorrelation phase error in digital holographic interferometry

    Science.gov (United States)

    Zhang, Tao; Yan, Yining; Mo, Qingkai

    2016-10-01

    To theoretically analyze and estimate the decorrelation phase error in digital holographic interferometry, this paper introduces the principle of the digital holographic imaging system, derives the general point spread function (PSF) of the system, and obtains its approximate form. Based on the characteristics of digital holographic imaging and the laws of statistical optics, the expressions for the complex-amplitude standard deviations σA, σB and σC in each region of the double exposure, their relationship to the degree of decorrelation, and the expression for the decorrelation phase error are derived. The model is simulated in MATLAB; the simulation results indicate that the statistical properties of the decorrelation phase error obtained from the theoretical analysis are consistent with the observed decorrelation phenomenon. A measurement condition for digital holographic interferometry is also derived: the degrees of decorrelation between the holograms of each double exposure should satisfy ρx + ρy < 0.1.

  20. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...
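
    As a concrete illustration of the encoding and decoding themes covered in Parts II and IV, the sketch below implements a generic Hamming(7,4) encoder and syndrome decoder in Python. It is not taken from the book; the particular generator and parity-check matrices are one conventional choice, and single-bit errors are corrected by matching the syndrome against the columns of H.

```python
import numpy as np

# Generator and parity-check matrices of a (7,4) Hamming code (one conventional choice).
G = np.array([[1, 0, 0, 0, 0, 1, 1],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1, 0],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[0, 1, 1, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

def encode(bits4):
    return (np.array(bits4) @ G) % 2

def decode(word7):
    syndrome = (H @ word7) % 2
    if syndrome.any():
        # For a single-bit error, the syndrome equals the column of H at the error position.
        err_pos = np.where((H.T == syndrome).all(axis=1))[0][0]
        word7 = word7.copy()
        word7[err_pos] ^= 1
    return word7[:4]            # G is systematic: the message occupies the first four bits

msg = [1, 0, 1, 1]
cw = encode(msg)
cw[2] ^= 1                      # inject a single bit error
print(decode(cw))               # -> [1 0 1 1]
```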

  1. On the BER and capacity analysis of MIMO MRC systems with channel estimation error

    KAUST Repository

    Yang, Liang

    2011-10-01

    In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of multiple-input multiple-output (MIMO) systems employing transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. Numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.
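
    The closed-form capacity and BER expressions are specific to the paper, but the qualitative effect of imperfect channel state information can be reproduced with a short Monte Carlo experiment. The sketch below simulates a simplified 1×N receive-MRC link with BPSK over Rayleigh fading, where the receiver combines with a noisy channel estimate; the error-variance values and array size are illustrative assumptions, not the paper's system model.

```python
import numpy as np

rng = np.random.default_rng(0)

def ber_mrc(n_rx=4, snr_db=10.0, est_err_var=0.1, n_bits=100_000):
    """Monte Carlo BER of BPSK with receive MRC and an imperfect channel estimate
    h_hat = h + e, where e is complex Gaussian with variance est_err_var."""
    snr = 10 ** (snr_db / 10)
    bits = rng.integers(0, 2, n_bits)
    s = 2 * bits - 1                                          # BPSK symbols (+1/-1)
    h = (rng.standard_normal((n_rx, n_bits)) +
         1j * rng.standard_normal((n_rx, n_bits))) / np.sqrt(2)
    noise = (rng.standard_normal((n_rx, n_bits)) +
             1j * rng.standard_normal((n_rx, n_bits))) / np.sqrt(2 * snr)
    r = h * s + noise
    e = (rng.standard_normal((n_rx, n_bits)) +
         1j * rng.standard_normal((n_rx, n_bits))) * np.sqrt(est_err_var / 2)
    y = np.sum(np.conj(h + e) * r, axis=0)                    # MRC with the noisy estimate
    return np.mean((y.real < 0) != (bits == 0))

for v in (0.0, 0.05, 0.2):
    print(f"estimation error variance {v:4.2f}: BER = {ber_mrc(est_err_var=v):.4f}")
```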

  2. A Posteriori Error Analysis for the Optimal Control of Magneto-Static Fields

    OpenAIRE

    Pauly, Dirk; Yousept, Irwin

    2016-01-01

    This paper is concerned with the analysis and numerical analysis for the optimal control of first-order magneto-static equations. Necessary and sufficient optimality conditions are established through a rigorous Hilbert space approach. Then, on the basis of the optimality system, we prove functional a posteriori error estimators for the optimal control, the optimal state, and the adjoint state. 3D numerical results illustrating the theoretical findings are presented.

  3. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    incidents. The objective of this study is to review RCA reports (RCAR) for characteristics of verbal communication errors between hospital staff in an organisational perspective. Method Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions...... and characteristics of verbal communication errors such as handover errors and error during teamwork. Results Raters found description of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13...... (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...

  4. Qualitative descriptions of error recovery patterns across reading level and sentence type: an eye movement analysis.

    Science.gov (United States)

    Fletcher, J

    1991-11-01

    Purposes of the present study included describing a variety of error recovery patterns based on eye movement (EM) measures of sentence parsing across reading level and error type. A qualitative pattern analysis of EM mappings was completed for students with reading disabilities (n = 10) and nondisabled students (n = 10) who were parsing control and erred sentences. Independent variables included error type (syntactically ambiguous, semantically anomalous, and control sentences) and reading proficiency level. Dependent variables consisted of seven eye movement measures. Chi-square analyses were performed to examine group differences across frequencies per pattern. Results suggest that the error recovery strategies deployed by both groups were similar in pattern and frequency; patterns were largely organized, strategic, and efficient, as predicted. Evidence for seven newly defined strategies was found, with indications of multiple strategies within sentences by both groups. Strategies tended to be error "reanalysis" (vs. "recovery") heuristics, in that readers from both groups used regressions to reanalyze regions of inconsistency rather than regions of disambiguation. Earlier conclusions regarding disorganized processing and individual differences among adolescents with reading disabilities are discussed.

  5. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  7. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    Full Text Available This paper studies the statistical errors of fingerprint-based RADAR neighbor matching localization with linearly calibrated reference points (RPs) in a logarithmic received signal strength (RSS)-varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve efficient and reliable location-based services (LBSs) as well as ubiquitous context-awareness in Wi-Fi environments, much attention has to be paid to highly accurate and cost-efficient localization systems. To this end, the statistical errors of the widely used neighbor matching localization are discussed in detail in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs, using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions for the statistical errors of RADAR neighbor matching localization can serve as an effective tool for exploring alternative deployments of fingerprint-based neighbor matching localization systems in the future.
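
    A minimal fingerprinting experiment helps make the error quantities concrete. The sketch below places hypothetical reference points on a line, generates RSS fingerprints from a single access point with a log-distance path-loss model, and estimates position by k-nearest-neighbour matching in signal space; all coordinates, the path-loss exponent and the noise level are assumptions for illustration, not the paper's deployment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical layout: one access point at the origin, reference points (RPs) on a line.
ap = np.array([0.0, 0.0])
rps = np.stack([np.linspace(1.0, 20.0, 20), np.zeros(20)], axis=1)

def rss(pos, noise_db=0.0):
    """Log-distance path-loss model: RSS = P0 - 10*n*log10(d) + noise (dB)."""
    d = np.linalg.norm(pos - ap, axis=-1)
    return -30.0 - 10.0 * 3.0 * np.log10(d) + noise_db

radio_map = rss(rps)                                   # offline fingerprints (noise-free)

def locate(true_pos, k=3, sigma=2.0):
    """RADAR-style k-nearest-neighbour matching in signal space."""
    observed = rss(true_pos, noise_db=sigma * rng.standard_normal())
    nearest = np.argsort(np.abs(radio_map - observed))[:k]
    return rps[nearest].mean(axis=0)                   # centroid of the k best-matching RPs

errors = []
for _ in range(2000):
    p = np.array([rng.uniform(1.0, 20.0), 0.0])
    errors.append(np.linalg.norm(locate(p) - p))
print(f"mean localization error: {np.mean(errors):.2f} m")
```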

  8. Sources of errors in the quantitative analysis of food carotenoids by HPLC.

    Science.gov (United States)

    Kimura, M; Rodriguez-Amaya, D B

    1999-09-01

    Several factors render carotenoid determination inherently difficult. Thus, in spite of advances in analytical instrumentation, discrepancies in quantitative results on carotenoids can be encountered in the international literature. A good part of the errors comes from the pre-chromatographic steps such as: sampling scheme that does not yield samples representative of the food lots under investigation; sample preparation which does not maintain representativity and guarantee homogeneity of the analytical sample; incomplete extraction; physical losses of carotenoids during the various steps, especially during partition or washing and by adsorption to glass walls of containers; isomerization and oxidation of carotenoids during analysis. On the other hand, although currently considered the method of choice for carotenoids, high performance liquid chromatography (HPLC) is subject to various sources of errors, such as: incompatibility of the injection solvent and the mobile phase, resulting in distorted or split peaks; erroneous identification; unavailability, impurity and instability of carotenoid standards; quantification of highly overlapping peaks; low recovery from the HPLC column; errors in the preparation of standard solutions and in the calibration procedure; calculation errors. Illustrations of the possible errors in the quantification of carotenoids by HPLC are presented.

  9. A neighbourhood analysis based technique for real-time error concealment in H.264 intra pictures

    Science.gov (United States)

    Beesley, Steven T. C.; Grecos, Christos; Edirisinghe, Eran

    2007-02-01

    H.264's extensive use of context-based adaptive binary arithmetic or variable-length coding makes streams highly susceptible to channel errors, a common occurrence over networks such as those used by mobile devices. Even a single bit error will cause a decoder to discard all stream data up to the next fixed-length resynchronisation point; in the worst case an entire slice is lost. In cases where retransmission and forward error concealment are not possible, a decoder should conceal any erroneous data in order to minimise the impact on the viewer. Stream errors can often be spotted early in the decode cycle of a macroblock; if decoding is aborted, the unused processor cycles can instead be used to conceal errors at minimal cost, even as part of a real-time system. This paper demonstrates a technique that utilises Sobel convolution kernels to quickly analyse the neighbourhood surrounding erroneous macroblocks before performing a weighted multi-directional interpolation. This generates significantly improved statistical (PSNR) and visual (IEEE structural similarity) results when compared to the commonly used weighted pixel value averaging. Furthermore, it is computationally scalable, both during analysis and concealment, achieving maximum performance from the spare processing power available.
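
    The following sketch shows the general idea (Sobel analysis of the neighbourhood feeding a weighted interpolation) in simplified form; it analyses only the rows above the lost macroblock and blends one vertical and one horizontal interpolation, whereas the paper uses a weighted multi-directional interpolation. The block size, kernels and weighting rule are illustrative assumptions.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def conv3(patch, kernel):
    """3x3 convolution response at the centre of a 3x3 patch."""
    return float(np.sum(patch * kernel))

def conceal_block(frame, x, y, size=16):
    """Replace a lost size x size block (top-left corner at column x, row y) with a
    weighted interpolation of its four neighbouring borders. The weights follow Sobel
    edge analysis of the rows just above the block; the block is assumed to lie in the
    frame interior so that all neighbours exist."""
    top = frame[y - 1, x:x + size].astype(float)
    bottom = frame[y + size, x:x + size].astype(float)
    left = frame[y:y + size, x - 1].astype(float)
    right = frame[y:y + size, x + size].astype(float)

    gx = gy = 1e-9
    for i in range(1, size - 1):
        patch = frame[y - 3:y, x + i - 1:x + i + 2].astype(float)
        gx += abs(conv3(patch, SOBEL_X))   # horizontal gradient -> vertical edges
        gy += abs(conv3(patch, SOBEL_Y))   # vertical gradient  -> horizontal edges
    w_vert = gx / (gx + gy)                # vertical edges favour top-bottom interpolation
    w_horz = 1.0 - w_vert

    block = np.empty((size, size))
    for r in range(size):
        for c in range(size):
            a = (r + 1) / (size + 1)
            b = (c + 1) / (size + 1)
            vert = (1 - a) * top[c] + a * bottom[c]
            horz = (1 - b) * left[r] + b * right[r]
            block[r, c] = w_vert * vert + w_horz * horz
    frame[y:y + size, x:x + size] = block
    return frame
```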

  10. Spatio-Temporal Error Sources Analysis and Accuracy Improvement in Landsat 8 Image Ground Displacement Measurements

    Directory of Open Access Journals (Sweden)

    Chao Ding

    2016-11-01

    Full Text Available Because of the advantages of low cost, large coverage and short revisit cycle, Landsat 8 images have been widely applied to monitor earth surface movements. However, there are few systematic studies considering the error source characteristics or the improvement of the deformation field accuracy obtained from Landsat 8 imagery. In this study, we use the 2013 Mw 7.7 Balochistan, Pakistan earthquake to analyze the spatio-temporal characteristics of the error sources and to elaborate how to mitigate them in the deformation field extracted from multi-temporal Landsat 8 images. We found that stripe artifacts and topographic shadowing artifacts are the two major error components in the deformation field, and that they currently lack an overall understanding and an effective mitigation strategy. For the stripe artifacts, we propose a small spatial baseline (<200 m) method to avoid their effect on the deformation field. We also propose a small radiometric baseline method to reduce the topographic shadowing artifacts and radiometric decorrelation noise. The performance and accuracy evaluations show that these two methods are effective in improving the precision of the deformation field. This study makes it possible to detect, with higher precision, subtle ground movements caused by earthquakes, melting glaciers, landslides, etc., using Landsat 8 images. It is also a good reference for error source analysis and correction in deformation fields extracted from other optical satellite images.

  11. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    Science.gov (United States)

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.

  12. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator

    Directory of Open Access Journals (Sweden)

    Joaquin Ballesteros

    2016-11-01

    Full Text Available Gait analysis can provide valuable information on a person’s condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  13. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.

    Science.gov (United States)

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H

    2016-11-10

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force-based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force-based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  14. Preanalytical errors in medical laboratories: a review of the available methodologies of data collection and analysis.

    Science.gov (United States)

    West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael

    2017-01-01

    Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is a wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group review the various data collection methods available. Our recommendation is the use of the laboratory information management systems as a recording mechanism for preanalytical errors as this provides the easiest and most standardized mechanism of data capture.

  15. A Meta-Analysis for Association of Maternal Smoking with Childhood Refractive Error and Amblyopia

    Directory of Open Access Journals (Sweden)

    Li Li

    2016-01-01

    Full Text Available Background. We aimed to evaluate the association between maternal smoking and the occurrence of childhood refractive error and amblyopia. Methods. Relevant articles were identified from PubMed and EMBASE up to May 2015. The combined odds ratio (OR) with its corresponding 95% confidence interval (CI) was calculated to evaluate the influence of maternal smoking on childhood refractive error and amblyopia. Heterogeneity was evaluated with the Chi-square-based Q statistic and the I2 test. Potential publication bias was examined by Egger's test. Results. A total of 9 articles were included in this meta-analysis. The pooled OR showed that there was no significant association between maternal smoking and childhood refractive error. However, children whose mother smoked during pregnancy were 1.47 (95% CI: 1.12–1.93) times and 1.43 (95% CI: 1.23–1.66) times more likely to suffer from amblyopia and hyperopia, respectively, compared with children whose mother did not smoke, and the difference was significant. Significant heterogeneity was only found among studies involving the influence of maternal smoking on children's refractive error (P<0.05; I2=69.9%). No potential publication bias was detected by Egger's test. Conclusion. The meta-analysis suggests that maternal smoking is a risk factor for childhood hyperopia and amblyopia.
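
    For readers unfamiliar with how such pooled odds ratios are produced, the sketch below shows a standard DerSimonian-Laird random-effects pooling of per-study odds ratios reported with 95% confidence intervals. The input numbers are made up for illustration and are not the studies included in this review.

```python
import numpy as np

def pooled_or(or_values, ci_lower, ci_upper):
    """DerSimonian-Laird random-effects pooling of odds ratios reported with 95% CIs."""
    y = np.log(np.asarray(or_values, dtype=float))           # log odds ratios
    se = (np.log(ci_upper) - np.log(ci_lower)) / (2 * 1.96)  # standard errors from the CIs
    w = 1 / se ** 2                                          # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)                       # Cochran's Q
    df = len(y) - 1
    tau2 = max(0.0, (q - df) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    w_re = 1 / (se ** 2 + tau2)                              # random-effects weights
    y_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1 / np.sum(w_re))
    ci = np.exp([y_re - 1.96 * se_re, y_re + 1.96 * se_re])
    return np.exp(y_re), ci, i2

# Hypothetical per-study odds ratios for amblyopia (not the studies in the review).
or_hat, ci, i2 = pooled_or([1.3, 1.8, 1.2, 1.6], [0.9, 1.1, 0.8, 1.0], [1.9, 2.9, 1.8, 2.6])
print(f"pooled OR = {or_hat:.2f}, 95% CI = {ci[0]:.2f}-{ci[1]:.2f}, I2 = {i2:.0f}%")
```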

  16. Meta-analysis of small RNA-sequencing errors reveals ubiquitous post-transcriptional RNA modifications.

    Science.gov (United States)

    Ebhardt, H Alexander; Tsang, Herbert H; Dai, Denny C; Liu, Yifeng; Bostan, Babak; Fahlman, Richard P

    2009-05-01

    Recent advances in DNA-sequencing technology have made it possible to obtain large datasets of small RNA sequences. Here we demonstrate that not all non-perfectly matched small RNA sequences are simple technological sequencing errors, but many hold valuable biological information. Analysis of three small RNA datasets originating from Oryza sativa and Arabidopsis thaliana small RNA-sequencing projects demonstrates that many single nucleotide substitution errors overlap when aligning homologous non-identical small RNA sequences. Investigating the sites and identities of substitution errors reveal that many potentially originate as a result of post-transcriptional modifications or RNA editing. Modifications include N1-methyl modified purine nucleotides in tRNA, potential deamination or base substitutions in micro RNAs, 3' micro RNA uridine extensions and 5' micro RNA deletions. Additionally, further analysis of large sequencing datasets reveal that the combined effects of 5' deletions and 3' uridine extensions can alter the specificity by which micro RNAs associate with different Argonaute proteins. Hence, we demonstrate that not all sequencing errors in small RNA datasets are technical artifacts, but that these actually often reveal valuable biological insights to the sites of post-transcriptional RNA modifications.

  17. Computational modeling and analysis of iron release from macrophages.

    Directory of Open Access Journals (Sweden)

    Alka A Potdar

    2014-07-01

    Full Text Available A major process of iron homeostasis in whole-body iron metabolism is the release of iron from the macrophages of the reticuloendothelial system. Macrophages recognize and phagocytose senescent or damaged erythrocytes. Then, they process the heme iron, which is returned to the circulation for reutilization by red blood cell precursors during erythropoiesis. The amount of iron released, compared to the amount shunted for storage as ferritin, is greater during iron deficiency. A currently accepted model of iron release assumes a passive gradient with free diffusion of intracellular labile iron (Fe2+) through ferroportin (FPN), the transporter on the plasma membrane. Outside the cell, a multi-copper ferroxidase, ceruloplasmin (Cp), oxidizes ferrous to ferric ion. Apo-transferrin (Tf), the primary carrier of soluble iron in the plasma, binds ferric ion to form mono-ferric and di-ferric transferrin. According to the passive-gradient model, the removal of ferrous ion from the site of release sustains the gradient that maintains the iron release. Subcellular localization of FPN, however, indicates that the role of FPN may be more complex. By experiments and mathematical modeling, we have investigated the detailed mechanism of iron release from macrophages focusing on the roles of the Cp, FPN and apo-Tf. The passive-gradient model is quantitatively analyzed using a mathematical model for the first time. A comparison of experimental data with model simulations shows that the passive-gradient model cannot explain macrophage iron release. However, a facilitated-transport model associated with FPN can explain the iron release mechanism. According to the facilitated-transport model, intracellular FPN carries labile iron to the macrophage membrane. Extracellular Cp accelerates the oxidation of ferrous ion bound to FPN. Apo-Tf in the extracellular environment binds to the oxidized ferrous ion, completing the release process. Facilitated-transport model can

  18. 14 CFR Appendix I to Part 417 - Methodologies for Toxic Release Hazard Analysis and Operational Procedures

    Science.gov (United States)

    2010-01-01

    ... either the methodology provided in the Risk Management Plan (RMP) Offsite Consequence Analysis Guidance..., App. I Appendix I to Part 417—Methodologies for Toxic Release Hazard Analysis and Operational Procedures I417.1General This appendix provides methodologies for performing toxic release hazard analysis...

  19. Chronology of prescribing error during the hospital stay and prediction of pharmacist's alerts overriding: a prospective analysis.

    Science.gov (United States)

    Caruba, Thibaut; Colombet, Isabelle; Gillaizeau, Florence; Bruni, Vanida; Korb, Virginie; Prognon, Patrice; Bégué, Dominique; Durieux, Pierre; Sabatier, Brigitte

    2010-01-12

    Drug prescribing errors are frequent in the hospital setting and pharmacists play an important role in detection of these errors. The objectives of this study are (1) to describe the drug prescribing error rate during the patient's stay, and (2) to find which characteristics of a prescribing error are the most predictive of its reproduction the next day despite the pharmacist's alert (i.e., the alert is overridden). We prospectively collected all medication order lines and prescribing errors during 18 days in 7 medical wards using computerized physician order entry. We described and modelled the error rate according to the chronology of the hospital stay. We performed a classification and regression tree analysis to find which characteristics of alerts were predictive of their overriding (i.e., the prescribing error was repeated). 12,533 order lines were reviewed, 117 errors (error rate 0.9%) were observed and 51% of these errors occurred on the first day of the hospital stay. The risk of a prescribing error decreased over time. 52% of the alerts were overridden (i.e., the error was left uncorrected by prescribers on the following day). Drug omissions were the most frequently taken into account by prescribers. The classification and regression tree analysis showed that overriding pharmacist's alerts is related first to the ward of the prescriber and then to either the Anatomical Therapeutic Chemical class of the drug or the type of error. Since 51% of prescribing errors occurred on the first day of stay, pharmacists should concentrate their analysis of drug prescriptions on this day. The difference in overriding behavior between wards and according to drug Anatomical Therapeutic Chemical class or type of error could also guide the validation tasks and programming of electronic alerts.
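
    The classification and regression tree step can be illustrated with a small, entirely synthetic example: the sketch below fits a CART model (scikit-learn's DecisionTreeClassifier) to fake alert records in which the override probability depends mainly on the ward, mirroring the qualitative finding of the paper. The feature coding and effect sizes are assumptions, not the study data.

```python
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(42)

# Synthetic alert records (illustrative only; not the study data).
n = 500
alerts = pd.DataFrame({
    "ward": rng.integers(0, 7, n),            # 7 medical wards, coded 0-6
    "atc_class": rng.integers(0, 5, n),       # coarse ATC drug class code
    "error_type": rng.integers(0, 3, n),      # 0 = omission, 1 = overdose, 2 = other
})
# Fake outcome with a ward effect, mimicking "override depends first on the ward".
p_override = 0.3 + 0.08 * alerts["ward"] - 0.05 * (alerts["error_type"] == 0)
overridden = rng.uniform(size=n) < p_override

tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=25)
tree.fit(alerts, overridden)
print(export_text(tree, feature_names=list(alerts.columns)))
```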

  20. Error modeling based on geostatistics for uncertainty analysis in crop mapping using Gaofen-1 multispectral imagery

    Science.gov (United States)

    You, Jiong; Pei, Zhiyuan

    2015-01-01

    With the development of remote sensing technology, its applications in agriculture monitoring systems, crop mapping accuracy, and spatial distribution are more and more being explored by administrators and users. Uncertainty in crop mapping is profoundly affected by the spatial pattern of spectral reflectance values obtained from the applied remote sensing data. Errors in remotely sensed crop cover information and their propagation into derivative products need to be quantified and handled correctly. Therefore, this study discusses methods of error modeling for uncertainty characterization in crop mapping using GF-1 multispectral imagery. An error modeling framework based on geostatistics is proposed, which introduces the sequential Gaussian simulation algorithm to explore the relationship between classification errors and the spectral signature of the remote sensing data source. On this basis, a misclassification probability model is developed to produce a spatially explicit classification error probability surface for the map of a crop, which realizes the uncertainty characterization for crop mapping. In this process, trend surface analysis was carried out to generate a spatially varying mean response and the corresponding residual response with spatial variation for the spectral bands of GF-1 multispectral imagery. Variogram models were employed to measure the spatial dependence in the spectral bands and the derived misclassification probability surfaces. Simulated spectral data and classification results were quantitatively analyzed. Through experiments using data sets from a region in the low rolling country located in the Yangtze River valley, it was found that GF-1 multispectral imagery can be used for crop mapping with a good overall performance, that the proposed error modeling framework can be used to quantify the uncertainty in crop mapping, and that the misclassification probability model can summarize the spatial variation in map accuracy and is helpful for
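
    One building block of the proposed framework, the variogram used to measure spatial dependence, can be sketched compactly. The code below computes a classical (Matheron) empirical semivariogram for scattered values; the coordinates and values are synthetic, and the sequential Gaussian simulation itself is not included.

```python
import numpy as np

def empirical_variogram(coords, values, lags, tol):
    """Matheron estimator: gamma(h) = 1/(2 N(h)) * sum (z_i - z_j)^2 over pairs with
    |d_ij - h| < tol."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    dz2 = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    d, dz2 = d[iu], dz2[iu]
    gamma = []
    for h in lags:
        mask = np.abs(d - h) < tol
        gamma.append(dz2[mask].mean() / 2 if mask.any() else np.nan)
    return np.array(gamma)

# Illustrative use on synthetic reflectance-like values at random pixel locations.
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(300, 2))
values = np.sin(coords[:, 0] / 20) + 0.1 * rng.standard_normal(300)
print(empirical_variogram(coords, values, lags=np.arange(5, 50, 5), tol=2.5))
```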

  1. Analysis and Compensation Method Research on the Channel Leakage Error for Three-baseline MMWInSAR

    Directory of Open Access Journals (Sweden)

    Qiao Ming

    2013-03-01

    Full Text Available In this paper, the modeling of the channel leakage error of a three-baseline MMWInSAR (MilliMeter Wave Interferometric Synthetic Aperture Radar) is analyzed, and the mathematical expressions for the error’s parameters and the resulting interferometric phase error are deduced. Furthermore, using quantitative analysis, the paper investigates the impact of the channel leakage on the interferometric phase error and elevation error. Finally, a compensation method for the channel leakage error is presented. The results of simulation experiments verified the effectiveness of the compensation method.

  2. Development of an improved HRA method: A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, J.H.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [John Wreathall & Co., Dublin, OH (United States)] [and others

    1996-03-01

    Probabilistic risk assessment (PRA) has become an increasingly important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. The NRC recently published a final policy statement, SECY-95-126, encouraging the use of PRA in regulatory activities. Human reliability analysis (HRA), while a critical element of PRA, has limitations in the analysis of human actions in PRAs that have long been recognized as a constraint when using PRA. In fact, better integration of HRA into the PRA process has long been an NRC issue. Of particular concern has been the omission of errors of commission - those errors that are associated with inappropriate interventions by operators with operating systems. To address these concerns, the NRC identified the need to develop an improved HRA method, so that human reliability can be better represented and integrated into PRA modeling and quantification.

  3. Measurements and their uncertainties a practical guide to modern error analysis

    CERN Document Server

    Hughes, Ifan G

    2010-01-01

    This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation is presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data....
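
    In the spirit of the book's rules-of-thumb (but not an excerpt from it), the snippet below propagates independent uncertainties in quadrature for a pendulum determination of g, where g = 4π²L/T²; the measured values and their uncertainties are invented for illustration.

```python
from math import pi, sqrt

# Pendulum example: g = 4*pi^2 * L / T^2, with independent uncertainties in L and T
# combined in quadrature (the usual rule-of-thumb for products and powers).
L, dL = 1.000, 0.002      # length and its uncertainty, m (made-up values)
T, dT = 2.006, 0.004      # period and its uncertainty, s

g = 4 * pi ** 2 * L / T ** 2
dg = g * sqrt((dL / L) ** 2 + (2 * dT / T) ** 2)
print(f"g = {g:.3f} +/- {dg:.3f} m/s^2")
```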

  4. An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR

    Science.gov (United States)

    Yu, Guanying; Liu, Xufeng; Liu, Songlin

    2016-10-01

    The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. The ripple and error fields induced by the RAFM steel in the WCCB blanket are evaluated with the method of static magnetic analysis in the ANSYS code. A significant additional magnetic field is produced by the blanket, and it leads to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which is higher than the acceptable design value of 0.5%. When one blanket module is taken out for heating purposes, the resulting error field is calculated to be seriously in violation of the requirement. supported by National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004)

  5. Analysis of hazardous substances released during CFRP laser processing

    Science.gov (United States)

    Hustedt, Michael; Walter, Juergen; Bluemel, Sven; Jaeschke, Peter; Kaierle, Stefan

    2017-02-01

    Due to their outstanding mechanical properties, in particular their high specific strength parallel to the carbon fibers, carbon fiber reinforced plastics (CFRP) have a high potential regarding resource-efficient lightweight construction. Consequently, these composite materials are increasingly finding application in important industrial branches such as the aircraft, automotive and wind energy industries. However, the processing of these materials is highly demanding. On the one hand, mechanical processing methods such as milling or drilling are sometimes rather slow, and they are connected with notable tool wear. On the other hand, thermal processing methods are critical as the two components, matrix and reinforcement, have widely differing thermophysical properties, possibly leading to damage of the composite structure in terms of pores or delamination. An emerging innovative method for processing of CFRP materials is laser technology. As a principally thermal method, laser processing is connected with the release of potentially hazardous, gaseous and particulate substances. Detailed knowledge of these process emissions is the basis for ensuring the protection of man and the environment, according to the existing legal regulations. This knowledge will help to realize adequate protective measures and thus strengthen the development of CFRP laser processing. In this work, selected measurement methods and results of the analysis of the exhaust air and the air at the workplace during different laser processes with CFRP materials are presented. The investigations have been performed in the course of different cooperative projects, funded by the German Federal Ministry of Education and Research (BMBF) in the course of the funding initiative "Photonic Processes and Tools for Resource-Efficient Lightweight Structures".

  6. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    Introduction Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety...... incidents. The objective of this study is to review RCA reports (RCAR) for characteristics of verbal communication errors between hospital staff in an organisational perspective. Method Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions...... and characteristics of verbal communication errors such as handover errors and error during teamwork. Results Raters found description of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13...

  7. Proactive error analysis of ultrasound-guided axillary brachial plexus block performance.

    LENUS (Irish Health Repository)

    O'Sullivan, Owen

    2012-07-13

    Detailed description of the tasks anesthetists undertake during the performance of a complex procedure, such as ultrasound-guided peripheral nerve blockade, allows elements that are vulnerable to human error to be identified. We have applied 3 task analysis tools to one such procedure, namely, ultrasound-guided axillary brachial plexus blockade, with the intention that the results may form a basis to enhance training and performance of the procedure.

  8. Error Analysis of Explicit Partitioned Runge–Kutta Schemes for Conservation Laws

    KAUST Repository

    Hundsdorfer, Willem

    2014-08-27

    An error analysis is presented for explicit partitioned Runge–Kutta methods and multirate methods applied to conservation laws. The interfaces, across which different methods or time steps are used, lead to order reduction of the schemes. Along with cell-based decompositions, also flux-based decompositions are studied. In the latter case mass conservation is guaranteed, but it will be seen that the accuracy may deteriorate.

  9. Error analysis of the finite element and finite volume methods for some viscoelastic fluids

    Czech Academy of Sciences Publication Activity Database

    Lukáčová-Medviďová, M.; Mizerová, H.; She, B.; Stebel, Jan

    2016-01-01

    Roč. 24, č. 2 (2016), s. 105-123 ISSN 1570-2820 R&D Projects: GA ČR(CZ) GAP201/11/1304 Institutional support: RVO:67985840 Keywords: error analysis * Oldroyd-B type models * viscoelastic fluids Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2016 http://www.degruyter.com/view/j/jnma.2016.24.issue-2/jnma-2014-0057/jnma-2014-0057.xml

  10. Post-Error Slowing in Patients With ADHD: A Meta-Analysis.

    Science.gov (United States)

    Balogh, Lívia; Czobor, Pál

    2016-12-01

    Post-error slowing (PES) is a cognitive mechanism for adaptive responses to reduce the probability of error in subsequent trials after error. To date, no meta-analytic summary of individual studies has been conducted to assess whether ADHD patients differ from controls in PES. We identified 15 relevant publications, reporting 26 pairs of comparisons (ADHD, n = 1,053; healthy control, n = 614). Random-effect meta-analysis was used to determine the statistical effect size (ES) for PES. PES was diminished in the ADHD group as compared with controls, with an ES in the medium range (Cohen's d = 0.42). Significant group difference was observed in relation to the inter-stimulus interval (ISI): While healthy participants slowed down after an error during long (3,500 ms) compared with short ISIs (1,500 ms), ADHD participants sustained or even increased their speed. The pronounced group difference suggests that PES may be considered as a behavioral indicator for differentiating ADHD patients from healthy participants. © The Author(s) 2014.

  11. The linear Fresnel lens - Solar optical analysis of tracking error effects

    Science.gov (United States)

    Cosby, R. M.

    1977-01-01

    Real sun-tracking solar concentrators imperfectly follow the solar disk, operationally sustaining both transverse and axial misalignments. This paper describes an analysis of the solar concentration performance of a line-focusing flat-base Fresnel lens in the presence of small transverse tracking errors. Simple optics and ray-tracing techniques are used to evaluate the lens solar transmittance and focal-plane imaging characteristics. Computer-generated example data for an f/1.0 lens indicate that less than a 1% transmittance degradation occurs for transverse errors up to 2.5 deg. In this range, solar-image profiles shift laterally in the focal plane, the peak concentration ratio drops, and profile asymmetry increases with tracking error. With profile shift as the primary factor, the ninety-percent target-intercept width increases rapidly for small misalignments, e.g., almost threefold for a 1-deg error. The analytical model and computational results provide a design base for tracking and absorber systems for the linear-Fresnel-lens solar concentrator.
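
    A toy thin-lens picture (not the author's Fresnel-facet ray trace) already shows why the profile shift dominates: a transverse tracking error θ displaces the solar image laterally by roughly f·tan(θ), which quickly becomes large compared with the ideal image width set by the solar half-angle. The focal length and error values below are illustrative assumptions.

```python
from math import tan, radians

f = 1.0                      # focal length, m (illustrative)
alpha_sun = radians(0.267)   # solar half-angle, about 4.65 mrad

for err_deg in (0.0, 0.5, 1.0, 2.5):
    theta = radians(err_deg)
    shift = f * tan(theta)                                   # lateral shift of the image centre
    lo, hi = f * tan(theta - alpha_sun), f * tan(theta + alpha_sun)
    print(f"tracking error {err_deg:3.1f} deg: image shift {1e3 * shift:5.1f} mm, "
          f"image spans [{1e3 * lo:6.1f}, {1e3 * hi:6.1f}] mm in the focal plane")
```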

  12. Inversion, error analysis, and validation of GPS/MET occultation data

    Energy Technology Data Exchange (ETDEWEB)

    Steiner, A.K.; Kirchengast, G. [Graz Univ. (Austria). Inst. fuer Meteorologie und Geophysik; Ladreiter, H.P.

    1999-01-01

    The global positioning system meteorology (GPS/MET) experiment was the first practical demonstration of global navigation satellite system (GNSS)-based active limb sounding employing the radio occultation technique. This method measures, as principal observable and with millimetric accuracy, the excess phase path (relative to propagation in vacuum) of GNSS-transmitted radio waves caused by refraction during passage through the Earth's neutral atmosphere and ionosphere in limb geometry. It shows great potential utility for weather and climate system studies in providing a unique combination of global coverage, high vertical resolution and accuracy, long-term stability, and all-weather capability. We first describe our GPS/MET data processing scheme from excess phases via bending angles to the neutral atmospheric parameters refractivity, density, pressure and temperature. Special emphasis is given to ionospheric correction methodology and the inversion of bending angles to refractivities, where we introduce a matrix inversion technique (instead of the usual integral inversion). The matrix technique is shown to lead to identical results as integral inversion but is more directly extendable to inversion by optimal estimation. The quality of GPS/MET-derived profiles is analyzed with an error estimation analysis employing a Monte Carlo technique. We consider statistical errors together with systematic errors due to upper-boundary initialization of the retrieval by a priori bending angles. Perfect initialization and properly smoothed statistical errors allow for better than 1 K temperature retrieval accuracy up to the stratopause. 28 refs.

  13. Error Analysis of Some Demand Simplifications in Hydraulic Models of Water Supply Networks

    Directory of Open Access Journals (Sweden)

    Joaquín Izquierdo

    2013-01-01

    Full Text Available Mathematical modeling of water distribution networks makes use of simplifications aimed to optimize the development and use of the mathematical models involved. Simplified models are used systematically by water utilities, frequently with no awareness of the implications of the assumptions used. Some simplifications are derived from the various levels of granularity at which a network can be considered. This is the case of some demand simplifications, specifically, when consumptions associated with a line are equally allocated to the ends of the line. In this paper, we present examples of situations where this kind of simplification produces models that are very unrealistic. We also identify the main variables responsible for the errors. By performing some error analysis, we assess to what extent such a simplification is valid. Using this information, guidelines are provided that enable the user to establish if a given simplification is acceptable or, on the contrary, supplies information that differs substantially from reality. We also develop easy to implement formulae that enable the allocation of inner line demand to the line ends with minimal error; finally, we assess the errors associated with the simplification and locate the points of a line where maximum discrepancies occur.
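
    The effect of lumping a line's distributed demand at its end nodes can be checked with a one-pipe toy model (an illustrative sketch, not the paper's formulation): a pipe carries a through-flow Q plus a uniformly distributed demand D, and its head loss h = K * integral of q(x)^n dx is compared with the head loss obtained after moving half of D to each end node. As the sketch shows, the simplification is worst when the through-flow is small relative to the line demand.

```python
# Toy single-pipe check of the end-node lumping error (illustrative, not the paper's model).
n = 1.852          # Hazen-Williams-like flow exponent
K = 1.0            # resistance coefficient (arbitrary units), pipe length normalized to 1

def headloss_exact(Q, D):
    # q(x) = Q + D*(1 - x) along the pipe; the integral of q^n has this closed form.
    return K * ((Q + D) ** (n + 1) - Q ** (n + 1)) / (D * (n + 1))

def headloss_lumped(Q, D):
    # Half of the line demand allocated to each end: the pipe carries Q + D/2 throughout.
    return K * (Q + D / 2.0) ** n

for ratio in (0.0, 0.5, 2.0, 10.0):         # through-flow expressed as Q = ratio * D
    D, Q = 1.0, ratio * 1.0
    he, hl = headloss_exact(Q, D), headloss_lumped(Q, D)
    print(f"Q/D = {ratio:4.1f}: relative head-loss error of the simplification = {(hl - he) / he:+.2%}")
```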

  14. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals.

    Science.gov (United States)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Ostergaard, Doris; Bjørn, Brian; Lilja, Beth; Mogensen, Torben

    2011-11-01

    Poor teamwork and communication between healthcare staff are correlated to patient safety incidents. However, the organisational factors responsible for these issues are unexplored. Root cause analyses (RCA) use human factors thinking to analyse the systems behind severe patient safety incidents. The objective of this study is to review RCA reports (RCAR) for characteristics of verbal communication errors between hospital staff in an organisational perspective. Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions and characteristics of verbal communication errors such as handover errors and error during teamwork. Raters found description of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13 (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between units and consults from other specialties, were particularly vulnerable processes. With the risk of bias in mind, it is concluded that more than half of the RCARs described erroneous verbal communication between staff members as root causes of or contributing factors of severe patient safety incidents. The RCARs rich descriptions of the incidents revealed the organisational factors and needs related to these errors.

  15. Error rates in bite mark analysis in an in vivo animal model.

    Science.gov (United States)

    Avon, S L; Victor, C; Mayhall, J T; Wood, R E

    2010-09-10

    Recent judicial decisions have specified that one foundation of reliability of comparative forensic disciplines is description of both scientific approach used and calculation of error rates in determining the reliability of an expert opinion. Thirty volunteers were recruited for the analysis of dermal bite marks made using a previously established in vivo porcine-skin model. Ten participants were recruited from three separate groups: dentists with no experience in forensics, dentists with an interest in forensic odontology, and board-certified diplomates of the American Board of Forensic Odontology (ABFO). Examiner demographics and measures of experience in bite mark analysis were collected for each volunteer. Each participant received 18 completely documented, simulated in vivo porcine bite mark cases and three paired sets of human dental models. The paired maxillary and mandibular models were identified as suspect A, suspect B, and suspect C. Examiners were tasked to determine, using an analytic method of their own choosing, whether each bite mark of the 18 bite mark cases provided was attributable to any of the suspect dentitions provided. Their findings were recorded on a standardized recording form. The results of the study demonstrated that the group of inexperienced examiners often performed as well as the board-certified group, and both inexperienced and board-certified groups performed better than those with an interest in forensic odontology that had not yet received board certification. Incorrect suspect attributions (possible false inculpation) were most common among this intermediate group. Error rates were calculated for each of the three observer groups for each of the three suspect dentitions. This study demonstrates that error rates can be calculated using an animal model for human dermal bite marks, and although clinical experience is useful, other factors may be responsible for accuracy in bite mark analysis. Further, this study demonstrates

  16. Analysis of metal ion release from biomedical implants

    Directory of Open Access Journals (Sweden)

    Ivana Dimić

    2013-06-01

    Full Text Available Metallic biomaterials are commonly used for fixation or replacement of damaged bones in the human body due to their good combination of mechanical properties. The disadvantage of metals as implant materials is their susceptibility to corrosion and metal ion release, which can cause serious health problems. In certain concentrations metals and metal ions are toxic and their presence can cause diverse inflammatory reactions, genetic mutations or even cancer. In this paper, different approaches to metal ion release examination, from biometallic materials sample preparation to research results interpretation, will be presented. An overview of the analytical techniques, used for determination of the type and concentration of released ions from implants in simulated biofluids, is also given in the paper.

  17. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    Science.gov (United States)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future ZEUS sites) to simulate arrival time data between source and ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
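
    The Monte Carlo part of such an error analysis can be sketched with a toy two-dimensional arrival-time-difference solver: hypothetical receiver sites, straight-line propagation at a single speed, Gaussian timing noise with a 20 microsecond standard deviation, and a brute-force grid search in place of the Iterative Oblate retrieval. Everything below (site coordinates, grid extent, source location) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
C = 3.0e8                                     # propagation speed, m/s

# Hypothetical receiver sites on a flat plane (metres); not the real ZEUS network.
sites = np.array([[0.0, 0.0], [800e3, 100e3], [300e3, 900e3], [-500e3, 600e3]])

def atd(source, timing_sigma=20e-6):
    """Arrival-time differences relative to site 0, with Gaussian timing noise."""
    t = np.linalg.norm(sites - source, axis=1) / C
    t = t + timing_sigma * rng.standard_normal(len(sites))
    return t[1:] - t[0]

def locate(obs, half_span=800e3, step=4e3):
    """Brute-force grid search minimising the sum of squared ATD residuals
    (a stand-in for the Iterative Oblate retrieval used by ZEAL)."""
    xs = np.arange(-half_span, half_span + step, step)
    gx, gy = np.meshgrid(xs, xs)
    grid = np.stack([gx.ravel(), gy.ravel()], axis=1)
    d = np.linalg.norm(grid[:, None, :] - sites[None, :, :], axis=2) / C
    cost = np.sum((d[:, 1:] - d[:, :1] - obs) ** 2, axis=1)
    return grid[np.argmin(cost)]

truth = np.array([250e3, 400e3])
errors = [np.linalg.norm(locate(atd(truth)) - truth) for _ in range(20)]
print(f"median location error: {np.median(errors) / 1e3:.1f} km")
```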

  18. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    Science.gov (United States)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in the literature, which include the advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to the existing algebraic and statistical analysis already used for image encryption applications, we propose an application of the root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference between original data and processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes.
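
    The root mean square error step itself is simple to reproduce. The sketch below substitutes every byte of a synthetic 8-bit image through an S-box and reports the RMSE between the plain and substituted images; a random byte permutation stands in for the AES, APA, Gray and other S-boxes discussed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in S-box: a random byte permutation (any of the paper's S-boxes could be substituted).
sbox = rng.permutation(256).astype(np.uint8)

ramp = np.arange(64, dtype=np.uint16) * 2
image = (ramp[:, None] + ramp[None, :]).astype(np.uint8)   # smooth synthetic 8-bit test image
encrypted = sbox[image]                                    # byte-wise S-box substitution

rmse = np.sqrt(np.mean((image.astype(float) - encrypted.astype(float)) ** 2))
print(f"RMSE between plain and substituted image: {rmse:.1f} (maximum possible is 255)")
```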

  19. A Technique for the Retrospective and Predictive Analysis of Cognitive Errors for the Oil and Gas Industry (TRACEr-OGI)

    Directory of Open Access Journals (Sweden)

    Stephen C. Theophilus

    2017-09-01

    Full Text Available Human error remains a major cause of several accidents in the oil and gas (O&G) industry. While human error has been analysed in several industries and has been at the centre of many debates and commentaries, a detailed, systematic and comprehensive analysis of human error in the O&G industry has not yet been conducted. Hence, this report aims to use the Technique for Retrospective and Predictive Analysis of Cognitive Errors (TRACEr) to analyse historical accidents in the O&G industry. The study has reviewed 163 major and/or fatal O&G industry accidents that occurred between 2000 and 2014. The results obtained have shown that the predominant context for errors was internal communication, mostly influenced by factors of perception. Major accident events were crane accidents and falling objects, relating to the most dominant accident type: ‘Struck by’. The main actors in these events were drillers and operators. Generally, TRACEr proved very useful in identifying major task errors. However, the taxonomy was less useful in identifying both equipment errors and errors due to failures in safety critical control barriers and recovery measures. Therefore, a modified version of the tool named Technique for the Retrospective and Predictive Analysis of Cognitive Errors for the Oil and Gas Industry (TRACEr-OGI) was proposed and used. This modified analytical tool was consequently found to be more effective for accident analysis in the O&G industry.

  20. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    Science.gov (United States)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8% with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
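
    The error estimate described above is obtained by extrapolating solutions on successively refined grids toward an infinite-resolution value. The sketch below shows generic Richardson extrapolation of a force coefficient from three grid levels; the numbers are made up and the formula is a textbook version, not necessarily the authors' exact procedure.

      import math

      def richardson_estimate(f_coarse, f_medium, f_fine, r):
          """Estimate the grid-converged value and the relative error of the
          fine-grid solution from three solutions with refinement ratio r."""
          p = math.log(abs(f_coarse - f_medium) / abs(f_medium - f_fine)) / math.log(r)
          f_exact = f_fine + (f_fine - f_medium) / (r ** p - 1.0)   # extrapolated value
          rel_error = abs(f_fine - f_exact) / abs(f_exact)
          return f_exact, p, rel_error

      # Illustrative normal-force coefficients on three grids (made-up numbers).
      f_ext, order, err = richardson_estimate(1.92, 1.97, 1.99, r=2.0)
      print(f"extrapolated CN = {f_ext:.4f}, observed order = {order:.2f}, "
            f"fine-grid relative error = {100 * err:.2f}%")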

  1. Optimization of isotherm models for pesticide sorption on biopolymer-nanoclay composite by error analysis.

    Science.gov (United States)

    Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M

    2017-04-01

    A carboxy methyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt % dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction spectroscopy (XRD) and scanning electron microscopy (SEM). The composite was evaluated for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to the Langmuir and Freundlich isotherms using linear and nonlinear methods. The linear regression method suggested that the sorption data fitted best to the Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the nonlinear regression method. The nonlinear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity Q0 (μg/g) was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination from linear regression alone cannot be used to compare the fit of the Langmuir and Freundlich models, and nonlinear error analysis is needed to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
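
    A minimal sketch of the nonlinear fitting and error analysis described above is given below, using scipy and generic sorption data rather than the paper's measurements; only two of the seven error functions are shown, and the parameter starting values are illustrative.

      import numpy as np
      from scipy.optimize import curve_fit

      def langmuir(Ce, Q0, b):
          """Langmuir isotherm: qe = Q0*b*Ce / (1 + b*Ce)."""
          return Q0 * b * Ce / (1.0 + b * Ce)

      def freundlich(Ce, Kf, n):
          """Freundlich isotherm: qe = Kf * Ce**(1/n)."""
          return Kf * Ce ** (1.0 / n)

      # Illustrative equilibrium data (Ce in ug/mL, qe in ug/g); not the paper's data.
      Ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
      qe = np.array([310.0, 540.0, 860.0, 1180.0, 1450.0, 1620.0])

      def error_metrics(model, params):
          """Two commonly used nonlinear error functions."""
          resid = qe - model(Ce, *params)
          sse = float(np.sum(resid ** 2))                  # sum of squared errors
          are = float(np.mean(np.abs(resid) / qe) * 100)   # average relative error (%)
          return sse, are

      for model, p0, name in [(langmuir, (1800.0, 0.2), "Langmuir"),
                              (freundlich, (500.0, 2.0), "Freundlich")]:
          params, _ = curve_fit(model, Ce, qe, p0=p0, maxfev=10_000)
          sse, are = error_metrics(model, params)
          print(f"{name}: params={np.round(params, 3)}, SSE={sse:.1f}, ARE={are:.2f}%")

    Comparing such error functions across models, rather than relying on the linearized coefficient of determination alone, is the point the abstract makes.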

  2. The development and error analysis of a kinematic parameters based spatial positioning method for an orthopedic navigation robot system.

    Science.gov (United States)

    Pei, Baoqing; Zhu, Gang; Wang, Yu; Qiao, Huiting; Chen, Xiangqian; Wang, Binbin; Li, Xiaoyun; Zhang, Weijun; Liu, Wenyong; Fan, Yubo

    2017-09-01

    Spatial positioning is the key function of a surgical navigation robot system, and accuracy is the most important performance index of such a system. The kinematic parameters of a six degrees of freedom (DOF) robot arm were used to form the transformation from intraoperative fluoroscopy images to a robot's coordinate system without C-arm calibration and to solve the redundant DOF problem. The influences of three typical error sources and their combination on the final navigation error were investigated through Monte Carlo simulation. The navigation error of the proposed method is less than 0.6 mm, and the feasibility was verified through cadaver experiments. Error analysis suggests that the robot kinematic error has a linear relationship with final navigation error, while the image error and gauge error have nonlinear influences. This kinematic parameters based method can provide accurate and convenient navigation for orthopedic surgeries. The result of error analysis will help error design and assignment for surgical robots. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Error analysis of pronouns by normal and language-impaired children.

    Science.gov (United States)

    Moore, M E

    1995-03-01

    Recent research has identified areas of extraordinary weakness, beyond grammatical morphemes, in the development of specifically language-impaired (SLI) children. A problem with pronoun case marking was reported to be more prevalent in SLI children than in normally developing children matched by mean length of utterance. However, results from the present study do not support that finding. Spontaneous utterances from 3 conversational contexts were generated by 3 groups of normal and SLI children and were analyzed for accuracy of pronoun usage. Third person singular pronouns were judged according to case, gender, number, person and cohesion based on their linguistic and nonlinguistic contexts. Results indicated that SLI children exhibited more total errors than their chronological peers, but not more than their language level peers. An analysis of error types indicated a similar pattern in pronoun case marking.

  4. Accidental hypoglycaemia caused by an arterial flush drug error: a case report and contributory causes analysis.

    Science.gov (United States)

    Gupta, K J; Cook, T M

    2013-11-01

    In 2008, the National Patient Safety Agency (NPSA) issued a Rapid Response Report concerning problems with infusions and sampling from arterial lines. The risk of blood sample contamination from glucose-containing arterial line infusions was highlighted and changes in arterial line management were recommended. Despite this guidance, errors with arterial line infusions remain common. We report a case of severe hypoglycaemia and neuroglycopenia caused by glucose contamination of arterial line blood samples. This case occurred despite the implementation of the practice changes recommended in the 2008 NPSA alert. We report an analysis of the factors contributing to this incident using the Yorkshire Contributory Factors Framework. We discuss the nature of the errors that occurred and list the consequent changes in practice implemented on our unit to prevent recurrence of this incident, which go well beyond those recommended by the NPSA in 2008. © 2013 The Association of Anaesthetists of Great Britain and Ireland.

  5. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    Science.gov (United States)

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller size problem and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from existing results in the literature. PMID:27110063

  6. Residents' surgical performance during the laboratory years: an analysis of rule-based errors.

    Science.gov (United States)

    Nathwani, Jay N; Wise, Brett J; Garren, Margaret E; Mohamadipanah, Hossein; Van Beek, Nicole; DiMarco, Shannon M; Pugh, Carla M

    2017-11-01

    Nearly one-third of surgical residents will enter into academic development during their surgical residency by dedicating time to a research fellowship for 1-3 y. Major interest lies in understanding how laboratory residents' surgical skills are affected by minimal clinical exposure during academic development. A widely held concern is that the time away from clinical exposure results in surgical skills decay. This study examines the impact of the academic development years on residents' operative performance. We hypothesize that the use of repeated, annual assessments may result in learning even without individual feedback on participants' simulated performance. Surgical performance data were collected from laboratory residents (postgraduate years 2-5) during the summers of 2014, 2015, and 2016. Residents had 15 min to complete a shortened, simulated laparoscopic ventral hernia repair procedure. Final hernia repair skins from all participants were scored using a previously validated checklist. An analysis of variance test compared the mean performance scores of repeat participants to those of first time participants. Twenty-seven (37% female) laboratory residents provided 2-year assessment data over the 3-year span of the study. Second time performance revealed improvement from a mean score of 14 (standard error = 1.0) in the first year to 17.2 (SD = 0.9) in the second year (F[1, 52] = 5.6, P = 0.022). Detailed analysis demonstrated improvement in performance for 3 grading criteria that were considered to be rule-based errors. There was no improvement in operative strategy errors. Analysis of longitudinal performance of laboratory residents shows higher scores for repeat participants in the category of rule-based errors. These findings suggest that laboratory residents can learn from rule-based mistakes when provided with annual performance-based assessments. This benefit was not seen with operative strategy errors and has important implications for

  7. Chemical analysis of substrates with controlled release fertilizer

    NARCIS (Netherlands)

    Kreij, de C.

    2004-01-01

    Water-soluble fertilizer added to media containing controlled release fertilizer cannot be analysed with the 1:1.5 volume water extract, because the latter increases the element content in the extract. During storage and stirring or mixing the substrate with the extractant, part of the controlled

  8. 2015 TRI National Analysis: Toxics Release Inventory Releases at Various Summary Levels

    Data.gov (United States)

    U.S. Environmental Protection Agency — The TRI National Analysis is EPA's annual interpretation of TRI data at various summary levels. It highlights how toxic chemical wastes were managed, where toxic...

  9. A Cross-sectional Analysis Investigating Organizational Factors That Influence Near-Miss Error Reporting Among Hospital Pharmacists.

    Science.gov (United States)

    Patterson, Mark E; Pace, Heather A

    2016-06-01

    Underreporting near-miss errors undermines hospitals' ability to improve patient safety. The objective of this analysis was to determine the extent to which punitive work climate, inadequate error feedback to staff, or insufficient preventative procedures are associated with decreased frequency of near-miss error reporting among hospital pharmacists. Survey data were obtained from the Agency of Healthcare Research and Quality 2010 Hospital Survey on Patient Safety Culture. Near-miss error reporting was defined using a Likert scale response to the question, "When a mistake is made, but is caught and corrected before affecting the patient, how often is this reported?" Work climate, error feedback to staff, and preventative procedures were defined similarly using responses to survey questions. Multivariate ordinal regressions estimated the likelihood of agreeing that near-miss errors were rarely reported, conditional upon perceived levels of punitive work climate, error feedback, or preventative procedures. Pharmacists disagreeing that procedures were sufficient and that feedback on errors was adequate were more likely to report that near-miss errors were rarely reported (odds ratio [OR], 2.5; 95% confidence interval [CI], 1.7-3.8; OR, 3.5; 95% CI, 2.5-5.1). Those agreeing that mistakes were held against them were equally likely as those disagreeing to report that errors were rarely reported (OR, 0.84; 95% CI, 0.61-1.1). Inadequate error feedback to staff and insufficient preventative procedures increase the likelihood that near-miss errors will be underreported. Hospitals seeking to improve near-miss error reporting should improve error-reporting infrastructures to enable feedback, which, in turn, would create a more preventative system that improves patient safety.
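
    The odds ratios reported above come from multivariate ordinal regressions, which are not reproduced here. As a much simpler illustration of how an unadjusted association of this kind is quantified, the sketch below computes an odds ratio and a Wald 95% confidence interval from a hypothetical 2x2 table of survey responses; the counts are invented.

      import math

      # Hypothetical 2x2 table: rows = error feedback judged inadequate (yes/no),
      # columns = near-miss errors rarely reported (yes/no).
      a, b = 120, 80    # feedback inadequate: rarely reported / not rarely reported
      c, d = 150, 350   # feedback adequate:   rarely reported / not rarely reported

      odds_ratio = (a * d) / (b * c)
      se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # standard error of log(OR)
      lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
      hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
      print(f"OR = {odds_ratio:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")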

  10. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of medication errors that actually take place in a hospital. Hence, interpretation of results must be tentative. 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  11. Analysis of gastrin-releasing peptide gene and gastrin-releasing peptide receptor gene in patients with agoraphobia.

    Science.gov (United States)

    Zimmermann, Katrin; Görgens, Heike; Bräuer, David; Einsle, Franziska; Noack, Barbara; von Kannen, Stephanie; Grossmann, Maria; Hoyer, Jürgen; Strobel, Alexander; Köllner, Volker; Weidner, Kerstin; Ziegler, Andreas; Hemmelmann, Claudia; Schackert, Hans K

    2014-10-01

    A gastrin-releasing peptide receptor (GRPR) knock-out mouse model provided evidence that the gastrin-releasing peptide (GRP) and its neural circuitry operate as a negative feedback-loop regulating fear, suggesting a novel candidate mechanism contributing to individual differences in fear-conditioning and associated psychiatric disorders such as agoraphobia with/without panic disorder. Studies in humans, however, provided inconclusive evidence on the association of GRP and GRPR variations in agoraphobia with/without panic disorder. Based on these findings, we investigated whether GRP and GRPR variants are associated with agoraphobia. Mental disorders were assessed via the Munich-Composite International Diagnostic Interview (M-CIDI) in 95 patients with agoraphobia with/without panic disorder and 119 controls without any mental disorders. A complete sequence analysis of GRP and GRPR was performed in all participants. We found no association of 16 GRP and 7 GRPR variants with agoraphobia with/without panic disorder.

  12. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    Science.gov (United States)

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations and then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates variations in the two stage computation and in the formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
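
    As an illustration of the two stage (coarse-then-fine) structure that the paper's analysis targets, a bare-bones Parareal iteration for a scalar ODE is sketched below; the choice of forward-Euler propagators and the test problem are placeholders, and the adjoint-based a posteriori error estimates themselves are not implemented.

      import numpy as np

      def fine(u0, t0, t1, f, substeps=100):
          """Fine propagator F: many small forward-Euler steps (stand-in for an accurate solver)."""
          u, dt = u0, (t1 - t0) / substeps
          for k in range(substeps):
              u = u + dt * f(t0 + k * dt, u)
          return u

      def coarse(u0, t0, t1, f):
          """Coarse propagator G: a single forward-Euler step."""
          return u0 + (t1 - t0) * f(t0, u0)

      def parareal(f, u0, t_grid, iterations=5):
          """Parareal update: U_{n+1} <- G(U_n, new) + F(U_n, old) - G(U_n, old)."""
          N = len(t_grid) - 1
          U = np.zeros(N + 1)
          U[0] = u0
          for n in range(N):                      # initial coarse sweep
              U[n + 1] = coarse(U[n], t_grid[n], t_grid[n + 1], f)
          for _ in range(iterations):
              F_old = [fine(U[n], t_grid[n], t_grid[n + 1], f) for n in range(N)]
              G_old = [coarse(U[n], t_grid[n], t_grid[n + 1], f) for n in range(N)]
              for n in range(N):                  # serial correction sweep
                  U[n + 1] = coarse(U[n], t_grid[n], t_grid[n + 1], f) + F_old[n] - G_old[n]
          return U

      f = lambda t, u: -2.0 * u                   # test problem u' = -2u, u(0) = 1
      U = parareal(f, 1.0, np.linspace(0.0, 1.0, 11))
      print(U[-1], np.exp(-2.0))                  # Parareal result vs exact solution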

  13. Review of advances in human reliability analysis of errors of commission-Part 2: EOC quantification

    Energy Technology Data Exchange (ETDEWEB)

    Reer, Bernhard [Paul Scherrer Institute, 5232 Villigen PSI (Switzerland)], E-mail: bernhard.reer@psi.ch

    2008-08-15

    In close connection with examples relevant to contemporary probabilistic safety assessment (PSA), a review of advances in human reliability analysis (HRA) of post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions, has been carried out. The review comprises both EOC identification (part 1) and quantification (part 2); part 2 is presented in this article. Emerging HRA methods in this field are: ATHEANA, MERMOS, the EOC HRA method developed by Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), the MDTA method and CREAM. The essential advanced features are on the conceptual side, especially to envisage the modeling of multiple contexts for an EOC to be quantified (ATHEANA, MERMOS and MDTA), in order to explicitly address adverse conditions. There is promising progress in providing systematic guidance to better account for cognitive demands and tendencies (GRS, CREAM), and EOC recovery (MDTA). Problematic issues are associated with the implementation of multiple context modeling and the assessment of context-specific error probabilities. Approaches for task or error opportunity scaling (CREAM, GRS) and the concept of reference cases (ATHEANA outlook) provide promising orientations for achieving progress towards data-based quantification. Further development work is needed and should be carried out in close connection with large-scale applications of existing approaches.

  14. On the effects of systematic errors in analysis of nuclear scattering data.

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-07-05

    The effects of systematic errors on elastic scattering differential cross-section data upon the assessment of quality fits to that data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from ¹²C, of 350 MeV ¹⁶O-¹⁶O scattering and of 288.6 MeV ¹²C-¹²C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross validation; a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to data, upon allowance for angle variation, became overdetermined. A far simpler S function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed energy inverse scattering study to specify effective, local, Schrödinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs.

  15. Analysis of the Real-time Compensation for Thermal Error at CNC Milling Machine

    Directory of Open Access Journals (Sweden)

    Chen Tsung-Chia

    2016-01-01

    Full Text Available This paper focuses on analyzing and discussing thermal errors in CNC milling machines. Thermal effects deform the machine tool structure and are the main source of accuracy error, accounting for more than 65% of it. Effectively improving or controlling thermal errors therefore helps machine accuracy. The key point of this paper is the position of the tool center point. First, 14 temperature sensors are used to map the actual temperature field, and the four sensors with the best linearity are then chosen for the real measurements. A test bar and five non-contact sensors are used to capture the displacement of the tool center point and spindle head during the process. Based on the theory of MRA (Multiple Regression Analysis), the external zero point is shifted to build the mathematical model. The resulting data are loaded into the control board, and the PLC performs real-time compensation during machining. Finally, two workpieces (one compensated and one non-compensated) are machined and tested; the compensated one shows better precision.
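
    A minimal sketch of the multiple regression (MRA) step described above is shown below, with synthetic readings standing in for the four selected temperature sensors and for the measured tool-center-point drift; the coefficients and noise level are invented for illustration.

      import numpy as np

      rng = np.random.default_rng(1)

      # Synthetic readings from the four selected temperature sensors (deg C).
      T = 20.0 + rng.uniform(0.0, 15.0, size=(200, 4))
      # Synthetic thermal drift of the tool center point (um): hidden linear law plus noise.
      true_coeffs = np.array([1.8, -0.6, 2.4, 0.9])
      z_drift = T @ true_coeffs - 80.0 + rng.normal(0.0, 1.5, size=200)

      # Multiple regression: z = b0 + b1*T1 + ... + b4*T4, solved by least squares.
      X = np.column_stack([np.ones(len(T)), T])
      beta, *_ = np.linalg.lstsq(X, z_drift, rcond=None)

      def compensate(temps):
          """Predicted thermal error, to be subtracted from the commanded position."""
          return beta[0] + temps @ beta[1:]

      residual = z_drift - compensate(T)
      print(f"model coefficients: {np.round(beta, 3)}")
      print(f"residual drift after compensation (RMS): {residual.std():.2f} um")

    In the setup described above, the fitted coefficients would be loaded into the controller so that the PLC can subtract the predicted drift in real time.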

  16. Fractional Order Differentiation by Integration and Error Analysis in Noisy Environment

    KAUST Repository

    Liu, Dayan

    2015-03-31

    The integer order differentiation by integration method based on the Jacobi orthogonal polynomials for noisy signals was originally introduced by Mboup, Join and Fliess. We propose to extend this method from the integer order to the fractional order to estimate the fractional order derivatives of noisy signals. Firstly, two fractional order differentiators are deduced from the Jacobi orthogonal polynomial filter, using the Riemann-Liouville and the Caputo fractional order derivative definitions respectively. Exact and simple formulae for these differentiators are given by integral expressions. Hence, they can be used for both continuous-time and discrete-time models in on-line or off-line applications. Secondly, some error bounds are provided for the corresponding estimation errors. These bounds make it possible to study the influence of the design parameters. The noise error contribution due to a large class of stochastic processes is studied in the discrete case. The latter shows that the differentiator based on the Caputo fractional order derivative can cope with a class of noises, whose mean value and variance functions are polynomial time-varying. Thanks to this design parameter analysis, the proposed fractional order differentiators are significantly improved by admitting a time-delay. Thirdly, in order to reduce the calculation time for on-line applications, a recursive algorithm is proposed. Finally, the proposed differentiator based on the Riemann-Liouville fractional order derivative is used to estimate the state of a fractional order system and numerical simulations illustrate the accuracy and the robustness with respect to corrupting noises.

  17. ENERGY EFFICIENCY ANALYSIS OF ERROR CORRECTION TECHNIQUES IN UNDERWATER WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    M. NORDIN B. ZAKARIA

    2011-02-01

    Full Text Available Research in underwater acoustic networks has developed rapidly to support a large variety of applications such as mining equipment and environmental monitoring. As in terrestrial sensor networks, reliable data transport is demanded in underwater sensor networks. The energy efficiency of the error correction technique should be considered because of the severe energy constraints of underwater wireless sensor networks. Forward error correction (FEC) and automatic repeat request (ARQ) are the two main error correction techniques in underwater networks. In this paper, a mathematical energy efficiency analysis for FEC and ARQ techniques in the underwater environment has been done based on communication distance and packet size. The effects of wind speed and shipping factor are studied. A comparison between FEC and ARQ in terms of energy efficiency is performed; it is found that the energy efficiency of both techniques increases with increasing packet size at short distances, but decreases at longer distances. There is also a cut-off distance below which ARQ is more energy efficient than FEC, and beyond which FEC is more energy efficient than ARQ. This cut-off distance decreases with increasing wind speed. Wind speed has a great effect on energy efficiency, whereas the shipping factor has a negligible effect on energy efficiency for both techniques.
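
    A toy version of the ARQ-versus-FEC comparison can make the cut-off behaviour concrete. The sketch below uses strongly simplified assumptions: an invented distance-dependent bit-error rate, ideal stop-and-wait ARQ, and a rate-1/2 code correcting up to t bit errors per packet; the paper's acoustic propagation, wind and shipping-noise models are not reproduced.

      import math

      def arq_energy_per_packet(ber, bits, e_bit):
          """Stop-and-wait ARQ: expected transmissions = 1 / P(packet received error-free)."""
          p_ok = (1.0 - ber) ** bits
          return e_bit * bits / max(p_ok, 1e-12)

      def fec_energy_per_packet(ber, bits, e_bit, rate=0.5, t=8):
          """Rate-1/2 FEC correcting up to t bit errors in a coded packet (single shot)."""
          coded_bits = int(bits / rate)
          p_ok = sum(math.comb(coded_bits, k) * ber**k * (1.0 - ber)**(coded_bits - k)
                     for k in range(t + 1))
          return e_bit * coded_bits / max(p_ok, 1e-12)

      E_BIT = 1.0    # energy per transmitted bit (arbitrary units)
      BITS = 512     # payload size in bits
      for dist_km in (0.5, 1.0, 2.0, 3.0):
          ber = 1e-5 * math.exp(2.0 * dist_km)    # placeholder distance-dependent BER
          arq = arq_energy_per_packet(ber, BITS, E_BIT)
          fec = fec_energy_per_packet(ber, BITS, E_BIT)
          print(f"{dist_km:3.1f} km: ARQ {arq:9.1f}  FEC {fec:9.1f}  -> "
                f"{'ARQ' if arq < fec else 'FEC'} cheaper")

    Even this crude model shows ARQ winning at short ranges, where retransmissions are rare and coding overhead is wasted, and FEC winning once the error rate climbs, which is the qualitative cut-off behaviour the abstract describes.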

  18. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) The CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because this scheme provides a formalized way for identifying relatively important scenarios with EOC opportunities; (2) an EOC identification guidance like CESA, which is strongly based on the procedural guidance and important measures of systems or components affected by inappropriate actions, should however pay some attention to EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions. (3) Orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  19. Error Analysis of p-Version Discontinuous Galerkin Method for Heat Transfer in Built-up Structures

    Science.gov (United States)

    Kaneko, Hideaki; Bey, Kim S.

    2004-01-01

    The purpose of this paper is to provide an error analysis for the p-version of the discontinuous Galerkin finite element method for heat transfer in built-up structures. As a special case of the results in this paper, a theoretical error estimate for the numerical experiments recently conducted by James Tomey is obtained.

  20. Error analysis of large-eddy simulation of the turbulent non-premixed Sydney bluff-body flame

    NARCIS (Netherlands)

    Kempf, A.M.; Geurts, Bernardus J.; Oefelein, J.C.

    2011-01-01

    A computational error analysis is applied to the large-eddy simulation of the turbulent non-premixed Sydney bluff-body flame, where the error is defined with respect to experimental data. The error-landscape approach is extended to heterogeneous compressible turbulence, which is coupled to combustion

  1. [Medication errors related to computerized physician order entry at the hospital: Record and analysis over a period of 4 years].

    Science.gov (United States)

    Hellot-Guersing, M; Jarre, C; Molina, C; Leromain, A-S; Derharoutunian, C; Gadot, A; Roubille, R

    2016-01-01

    Computerized physician order entry (CPOE) can generate medication errors. It is necessary to identify them and analyse their causes in order to secure the medication use system. Errors were recorded during the pharmaceutical analysis of prescriptions over a period of 4 years on 425 beds. A code frame was provided. Errors were classified according to type, causes and time of detection. The most often drug implicated and the error correction rate were studied. Deep causes were determined and contributing factors were listed. Among 99,536 prescriptions analyzed, 2636 errors were detected (2.65 errors per 100 orders analyzed). The most common error was omission (31.49%). The most represented cause was redundancy requirement (11.34%). Antibacterials were most commonly involved (224 errors). Exactly 65.9% of the prescriptions were modified by physicians. Three root causes were identified: (1) configuration issues; (2) misuse; (3) design problem. Three types of contributing factors have also been detailed: economic, human and technical factors. Identifying root causes has targeted three types of improvement actions: (1) software settings; (2) training of users; (3) requests for improvements. Contributing factors have to be identified to control the generated risk. Some errors related to CPOE may lead to serious side effects for the patient. That is why it is necessary to identify these errors and analyze them in order to implement improvement actions and prevention to secure the prescription. Copyright © 2015 Académie Nationale de Pharmacie. Published by Elsevier Masson SAS. All rights reserved.

  2. Landslide risk assessment with multi pass DInSAR analysis and error suppressing approach

    Science.gov (United States)

    yun, H.; Kim, J.; Lin, S.; Choi, Y.

    2013-12-01

    Landslides are among the most destructive natural hazards and a prime source of fatal damage in many countries. In spite of various attempts to measure landslide susceptibility with remote sensing methods, including Differential Interferometric SAR (DInSAR) analysis, the construction of reliable forecasting systems remains unsolved. Thus, we tackled the problem of DInSAR analysis for monitoring landslide risk over mountainous areas, where InSAR observations are usually contaminated by orographic effects and other error sources. In order to measure the true surface deformation, which might be a prelude to a landslide, time-series analysis and atmospheric correction of DInSAR interferograms were conducted and cross-validated. The target area of this experiment is the eastern part of the Korean peninsula, centered on Uljin. There, landslides driven by geomorphic factors such as steep topography and localized torrential downpours are a critical issue. Landslide cases frequently occur on the cut slopes of mountainous areas created by anthropogenic construction activities. Although high-precision DInSAR measurements for monitoring landslide risks are essential in such circumstances, it is difficult to attain sufficient accuracy because of external factors that induce errors in electromagnetic wave propagation. For instance, local climate characteristics such as the orographic effect and proximity to the seashore can produce significant anomalies in the water vapor distribution and consequently introduce errors into the InSAR phase measurements. Moreover, the high-altitude parts of the target area introduce stratified tropospheric delay errors into the DInSAR measurements. Improved DInSAR approaches that cope with all of the above obstacles are therefore highly necessary. Thus we employed two approaches, i.e. StaMPS/MTI (Stanford Method for Persistent Scatterers/Multi-Temporal InSAR, Hooper et al., 2007

  3. Task and error analysis balancing benefits over business of electronic medical records.

    Science.gov (United States)

    Carstens, Deborah Sater; Rodriguez, Walter; Wood, Michael B

    2014-01-01

    Task and error analysis research was performed to identify: a) the process for healthcare organisations in managing healthcare for patients with mental illness or substance abuse; b) how the process can be enhanced; and c) whether electronic medical records (EMRs) have a role in this process from a business and safety perspective. The research question is whether EMRs have a role in enhancing the healthcare for patients with mental illness or substance abuse. A discussion on the business of EMRs is addressed to understand the balancing act between the safety and business aspects of an EMR.

  4. Review of advances in human reliability analysis of errors of commission, Part 1: EOC identification

    Energy Technology Data Exchange (ETDEWEB)

    Reer, Bernhard [Paul Scherrer Institute (PSI), 5232 Villigen PSI (Switzerland)], E-mail: bernhard.reer@hsk.ch

    2008-08-15

    In close connection with examples relevant to contemporary probabilistic safety assessment (PSA), a review of advances in human reliability analysis (HRA) of post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions, has been carried out. The review comprises both EOC identification (part 1) and quantification (part 2); part 1 is presented in this article. Emerging HRA methods addressing the problem of EOC identification are: A Technique for Human Event Analysis (ATHEANA), the EOC HRA method developed by Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), the Misdiagnosis Tree Analysis (MDTA) method, and the Commission Errors Search and Assessment (CESA) method. Most of the EOCs referred to in predictive studies comprise the stop of running or the inhibition of anticipated functions; a few comprise the start of a function. The CESA search scheme-which proceeds from possible operator actions to the affected systems to scenarios and uses procedures and importance measures as key sources of input information-provides a formalized way for identifying relatively important scenarios with EOC opportunities. In the implementation however, attention should be paid regarding EOCs associated with familiar but non-procedural actions and EOCs leading to failures of manually initiated safety functions.

  5. Cost-Minimization Analysis of Open and Endoscopic Carpal Tunnel Release.

    Science.gov (United States)

    Zhang, Steven; Vora, Molly; Harris, Alex H S; Baker, Laurence; Curtin, Catherine; Kamal, Robin N

    2016-12-07

    Carpal tunnel release is the most common upper-limb surgical procedure performed annually in the U.S. There are 2 surgical methods of carpal tunnel release: open or endoscopic. Currently, there is no clear clinical or economic evidence supporting the use of one procedure over the other. We completed a cost-minimization analysis of open and endoscopic carpal tunnel release, testing the null hypothesis that there is no difference between the procedures in terms of cost. We conducted a retrospective review using a private-payer and Medicare Advantage database composed of 16 million patient records from 2007 to 2014. The cohort consisted of records with an ICD-9 (International Classification of Diseases, Ninth Revision) diagnosis of carpal tunnel syndrome and a CPT (Current Procedural Terminology) code for carpal tunnel release. Payer fees were used to define cost. We also assessed other associated costs of care, including those of electrodiagnostic studies and occupational therapy. Bivariate comparisons were performed using the chi-square test and the Student t test. Data showed that 86% of the patients underwent open carpal tunnel release. Reimbursement fees for endoscopic release were significantly higher than for open release. Facility fees were responsible for most of the difference between the procedures in reimbursement: facility fees averaged $1,884 for endoscopic release compared with $1,080 for open release. Occupational therapy fees associated with endoscopic release were less than those associated with open release (an average of $237 per session compared with $272; p = 0.07). The total average annual reimbursement per patient for endoscopic release (facility, surgeon, and occupational therapy fees) was significantly higher than for open release ($2,602 compared with $1,751), favoring open carpal tunnel release on cost grounds.

  6. Pediatric medication errors in the postanesthesia care unit: analysis of MEDMARX data.

    Science.gov (United States)

    Payne, Christopher H; Smith, Christopher R; Newkirk, Laura E; Hicks, Rodney W

    2007-04-01

    Medication errors involving pediatric patients in the postanesthesia care unit may occur as frequently as one in every 20 medication orders and are more likely to cause harm when compared to medication errors in the overall population. Researchers examined six years of records from the MEDMARX database and used consecutive nonprobability sampling and descriptive statistics to compare medication errors in the pediatric data set to those occurring in the total population data set. Nineteen different causes of error involving 28 different products were identified. The results of the study indicate that an organization can focus on causes of errors and products involved in errors to mitigate future error occurrence.

  7. Flight Technical Error Analysis of the SATS Higher Volume Operations Simulation and Flight Experiments

    Science.gov (United States)

    Williams, Daniel M.; Consiglio, Maria C.; Murdoch, Jennifer L.; Adams, Catherine H.

    2005-01-01

    This paper provides an analysis of Flight Technical Error (FTE) from recent SATS experiments, called the Higher Volume Operations (HVO) Simulation and Flight experiments, which NASA conducted to determine pilot acceptability of the HVO concept for normal operating conditions. Reported are FTE results from simulation and flight experiment data indicating the SATS HVO concept is viable and acceptable to low-time instrument rated pilots when compared with today's system (baseline). Described is the comparative FTE analysis of lateral, vertical, and airspeed deviations from the baseline and SATS HVO experimental flight procedures. Based on FTE analysis, all evaluation subjects, low-time instrument-rated pilots, flew the HVO procedures safely and proficiently in comparison to today's system. In all cases, the results of the flight experiment validated the results of the simulation experiment and confirm the utility of the simulation platform for comparative Human in the Loop (HITL) studies of SATS HVO and Baseline operations.

  8. Secondary data analysis of large data sets in urology: successes and errors to avoid.

    Science.gov (United States)

    Schlomer, Bruce J; Copp, Hillary L

    2014-03-01

    Secondary data analysis is the use of data collected for research by someone other than the investigator. In the last several years there has been a dramatic increase in the number of these studies being published in urological journals and presented at urological meetings, especially involving secondary data analysis of large administrative data sets. Along with this expansion, skepticism for secondary data analysis studies has increased for many urologists. In this narrative review we discuss the types of large data sets that are commonly used for secondary data analysis in urology, and discuss the advantages and disadvantages of secondary data analysis. A literature search was performed to identify urological secondary data analysis studies published since 2008 using commonly used large data sets, and examples of high quality studies published in high impact journals are given. We outline an approach for performing a successful hypothesis or goal driven secondary data analysis study and highlight common errors to avoid. More than 350 secondary data analysis studies using large data sets have been published on urological topics since 2008 with likely many more studies presented at meetings but never published. Studies that were not hypothesis or goal driven have likely constituted some of these studies and have probably contributed to the increased skepticism of this type of research. However, many high quality, hypothesis driven studies addressing research questions that would have been difficult to conduct with other methods have been performed in the last few years. Secondary data analysis is a powerful tool that can address questions which could not be adequately studied by another method. Knowledge of the limitations of secondary data analysis and of the data sets used is critical for a successful study. There are also important errors to avoid when planning and performing a secondary data analysis study. Investigators and the urological community need to strive to use

  9. The influence of cognitive load on transfer with error prevention training methods: a meta-analysis.

    Science.gov (United States)

    Hutchins, Shaun D; Wickens, Christopher D; Carolan, Thomas F; Cumming, John M

    2013-08-01

    The objective was to conduct a research synthesis for the U.S. Army on the effectiveness of two error prevention training strategies (training wheels and scaffolding) on the transfer of training. Motivated as part of an ongoing program of research on training effectiveness, the current work presents some of the program's research into the effects on transfer of error prevention strategies during training from a cognitive load perspective. Based on cognitive load theory, two training strategies were hypothesized to reduce intrinsic load by supporting learners early in acquisition during schema development. A transfer ratio and Hedges' g were used in the two meta-analyses conducted on transfer studies employing the two training strategies. Moderators relevant to cognitive load theory and specific to the implemented strategies were examined. The transfer ratio was the ratio of treatment transfer performance to control transfer. Hedges' g was used in comparing treatment and control group standardized mean differences. Both effect sizes were analyzed with versions of sample weighted fixed effect models. Analysis of the training wheels strategy suggests a transfer benefit. The observed benefit was strongest when the training wheels were a worked example coupled with a principle-based prompt. Analysis of the scaffolding data also suggests a transfer benefit for the strategy. Both training wheels and scaffolding demonstrated positive transfer as training strategies. As error prevention techniques, both support the intrinsic load-reducing implications of cognitive load theory. The findings are applicable to the development of instructional design guidelines in professional skill-based organizations such as the military.
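
    A brief sketch of the effect-size computation named above: Hedges' g for a single study (a standardized mean difference with a small-sample correction) and a sample-weighted fixed-effect mean across studies. The study values are placeholders, not the meta-analysis data.

      import math

      def hedges_g(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
          """Hedges' g: standardized mean difference with the small-sample correction J."""
          sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
          d = (mean_t - mean_c) / sd_pooled
          J = 1.0 - 3.0 / (4.0 * (n_t + n_c) - 9.0)   # small-sample correction factor
          return J * d

      def sample_weighted_mean(effects_and_ns):
          """Sample-size-weighted fixed-effect mean of per-study effect sizes."""
          return sum(g * n for g, n in effects_and_ns) / sum(n for _, n in effects_and_ns)

      # Placeholder studies: (treatment mean, sd, n, control mean, sd, n).
      studies = [(78.0, 10.0, 30, 70.0, 11.0, 30),
                 (64.0,  9.0, 25, 61.0,  8.5, 25),
                 (82.0, 12.0, 40, 73.0, 12.5, 42)]
      per_study = [(hedges_g(*s), s[2] + s[5]) for s in studies]
      print(f"sample-weighted mean g = {sample_weighted_mean(per_study):.3f}")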

  10. Results of a nuclear power plant application of A New Technique for Human Error Analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Whitehead, D.W.; Forester, J.A. [Sandia National Labs., Albuquerque, NM (United States); Bley, D.C. [Buttonwood Consulting, Inc. (United States)] [and others]

    1998-03-01

    A new method to analyze human errors has been demonstrated at a pressurized water reactor (PWR) nuclear power plant. This was the first application of the new method referred to as A Technique for Human Error Analysis (ATHEANA). The main goals of the demonstration were to test the ATHEANA process as described in the frame-of-reference manual and the implementation guideline, test a training package developed for the method, test the hypothesis that plant operators and trainers have significant insight into the error-forcing-contexts (EFCs) that can make unsafe actions (UAs) more likely, and to identify ways to improve the method and its documentation. A set of criteria to evaluate the success of the ATHEANA method as used in the demonstration was identified. A human reliability analysis (HRA) team was formed that consisted of an expert in probabilistic risk assessment (PRA) with some background in HRA (not ATHEANA) and four personnel from the nuclear power plant. Personnel from the plant included two individuals from their PRA staff and two individuals from their training staff. Both individuals from training are currently licensed operators and one of them was a senior reactor operator on shift until a few months before the demonstration. The demonstration was conducted over a 5-month period and was observed by members of the Nuclear Regulatory Commission's ATHEANA development team, who also served as consultants to the HRA team when necessary. Example results of the demonstration to date, including identified human failure events (HFEs), UAs, and EFCs are discussed. Also addressed is how simulator exercises are used in the ATHEANA demonstration project.

  11. Linkage analysis of quantitative refraction and refractive errors in the Beaver Dam Eye Study.

    Science.gov (United States)

    Klein, Alison P; Duggal, Priya; Lee, Kristine E; Cheng, Ching-Yu; Klein, Ronald; Bailey-Wilson, Joan E; Klein, Barbara E K

    2011-07-13

    Refraction, as measured by spherical equivalent, is the need for an external lens to focus images on the retina. While genetic factors play an important role in the development of refractive errors, few susceptibility genes have been identified. However, several regions of linkage have been reported for myopia (2q, 4q, 7q, 12q, 17q, 18p, 22q, and Xq) and for quantitative refraction (1p, 3q, 4q, 7p, 8p, and 11p). To replicate previously identified linkage peaks and to identify novel loci that influence quantitative refraction and refractive errors, linkage analysis of spherical equivalent, myopia, and hyperopia in the Beaver Dam Eye Study was performed. Nonparametric, sibling-pair, genome-wide linkage analyses of refraction (spherical equivalent adjusted for age, education, and nuclear sclerosis), myopia and hyperopia in 834 sibling pairs within 486 extended pedigrees were performed. Suggestive evidence of linkage was found for hyperopia on chromosome 3, region q26 (empiric P = 5.34 × 10⁻⁴), a region that had shown significant genome-wide evidence of linkage to refraction and some evidence of linkage to hyperopia. In addition, the analysis replicated previously reported genome-wide significant linkages to 22q11 of adjusted refraction and myopia (empiric P = 4.43 × 10⁻³ and 1.48 × 10⁻³, respectively) and to 7p15 of refraction (empiric P = 9.43 × 10⁻⁴). Evidence was also found of linkage to refraction on 7q36 (empiric P = 2.32 × 10⁻³), a region previously linked to high myopia. The findings provide further evidence that genes controlling refractive errors are located on 3q26, 7p15, 7q36, and 22q11.

  12. Inversion, error analysis, and validation of GPS/MET occultation data

    Directory of Open Access Journals (Sweden)

    A. K. Steiner

    Full Text Available The global positioning system meteorology (GPS/MET) experiment was the first practical demonstration of global navigation satellite system (GNSS)-based active limb sounding employing the radio occultation technique. This method measures, as principal observable and with millimetric accuracy, the excess phase path (relative to propagation in vacuum) of GNSS-transmitted radio waves caused by refraction during passage through the Earth's neutral atmosphere and ionosphere in limb geometry. It shows great potential utility for weather and climate system studies in providing a unique combination of global coverage, high vertical resolution and accuracy, long-term stability, and all-weather capability. We first describe our GPS/MET data processing scheme from excess phases via bending angles to the neutral atmospheric parameters refractivity, density, pressure and temperature. Special emphasis is given to ionospheric correction methodology and the inversion of bending angles to refractivities, where we introduce a matrix inversion technique (instead of the usual integral inversion). The matrix technique is shown to lead to identical results as integral inversion but is more directly extendable to inversion by optimal estimation. The quality of GPS/MET-derived profiles is analyzed with an error estimation analysis employing a Monte Carlo technique. We consider statistical errors together with systematic errors due to upper-boundary initialization of the retrieval by a priori bending angles. Perfect initialization and properly smoothed statistical errors allow for better than 1 K temperature retrieval accuracy up to the stratopause. No initialization and statistical errors yield better than 1 K accuracy up to 30 km but less than 3 K accuracy above 40 km. Given imperfect initialization, biases >2 K propagate down to below 30 km height in unfavorable realistic cases. Furthermore, results of a statistical validation of GPS/MET profiles through comparison

  13. Data reduction and analysis of graphite fiber release experiments

    Science.gov (United States)

    Lieberman, P.; Chovit, A. R.; Sussholz, B.; Korman, H. F.

    1979-01-01

    The burn and burn/explode effects on aircraft structures were examined in a series of fifteen outdoor tests conducted to verify the results obtained in previous burn and explode tests of carbon/graphite composite samples conducted in a closed chamber, and to simulate aircraft accident scenarios in which carbon/graphite fibers would be released. The primary effects that were to be investigated in these tests were the amount and size distribution of the conductive fibers released from the composite structures, and how these various sizes of fibers were transported downwind. The structures included plates, barrels, aircraft spoilers and a cockpit. The heat sources included a propane gas burner and 20 ft by 20 ft and 40 ft by 60 ft JP-5 pool fires. The larger pool fire was selected to simulate an aircraft accident incident. The passive instrumentation included sticky paper and sticky bridal veil over an area 6000 ft downwind and 3000 ft crosswind. The active instrumentation included instrumented meteorological towers, movies, infrared imaging cameras, LADAR, high voltage ball gages, light emitting diode gages, microwave gages and flame velocimeter.

  14. Manual Surface Feature Classification and Error Analysis for NASA's OSIRIS-REx Asteroid Sample Return Mission Using QGIS

    Science.gov (United States)

    Westermann, M. M.

    2017-06-01

    Error mitigation for manual detection and classification of hazardous surface features for NASA's OSIRIS-REx asteroid sample return mission can be accomplished through the use of open-source GIS and standard land-cover change analysis methods.

  15. A Human Error Analysis of General Aviation Controlled Flight Into Terrain Accidents Occurring Between 1990-1998

    National Research Council Canada - National Science Library

    Shappell, Scott

    2003-01-01

    .... While the study represented the work and opinions of several experts in the FAA and industry, the findings might have benefited from a more detailed human error analysis involving a larger number of accidents...

  16. A functional approach to movement analysis and error identification in sports and physical education

    Directory of Open Access Journals (Sweden)

    Ernst-Joachim Hossner

    2015-09-01

    Full Text Available In a hypothesis-and-theory paper, a functional approach to movement analysis in sports is introduced. In this approach, contrary to classical concepts, it is no longer the ideal movement of elite athletes that is taken as a template for the movements produced by learners. Instead, movements are understood as the means to solve given tasks that, in turn, are defined by to-be-achieved task goals. A functional analysis comprises the steps of (1) recognising constraints that define the functional structure, (2) identifying sub-actions that subserve the achievement of structure-dependent goals, (3) explicating modalities as specifics of the movement execution, and (4) assigning functions to actions, sub-actions and modalities. Regarding motor-control theory, a functional approach can be linked to a dynamical-system framework of behavioural shaping, to cognitive models of modular effect-related motor control as well as to explicit concepts of goal setting and goal achievement. Finally, it is shown that a functional approach is of particular help for sports practice in the context of structuring part practice, recognising functionally equivalent task solutions, finding innovative technique alternatives, distinguishing errors from style, and identifying root causes of movement errors.

  17. Error analysis of leaf area estimates made from allometric regression models

    Science.gov (United States)

    Feiveson, A. H.; Chhikara, R. S.

    1986-01-01

    Biological net productivity, measured in terms of the change in biomass with time, affects global productivity and the quality of life through biochemical and hydrological cycles and by its effect on the overall energy balance. Estimating leaf area for large ecosystems is one of the more important means of monitoring this productivity. For a particular forest plot, the leaf area is often estimated by a two-stage process. In the first stage, known as dimension analysis, a small number of trees are felled so that their areas can be measured as accurately as possible. These leaf areas are then related to non-destructive, easily-measured features such as bole diameter and tree height, by using a regression model. In the second stage, the non-destructive features are measured for all or for a sample of trees in the plots and then used as input into the regression model to estimate the total leaf area. Because both stages of the estimation process are subject to error, it is difficult to evaluate the accuracy of the final plot leaf area estimates. This paper illustrates how a complete error analysis can be made, using an example from a study made on aspen trees in northern Minnesota. The study was a joint effort by NASA and the University of California at Santa Barbara known as COVER (Characterization of Vegetation with Remote Sensing).
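
    A simplified sketch of the two stage idea above: fit an allometric regression on a few destructively sampled trees, apply it to the non-destructive measurements for the whole plot, and propagate the first-stage regression uncertainty to the plot total by parametric resampling. The data are invented, and the real analysis (including the COVER study) involves considerably more, such as the second-stage sampling error.

      import numpy as np

      rng = np.random.default_rng(3)

      # Stage 1: destructively sampled trees (DBH in cm, leaf area in m^2); made-up data.
      dbh_felled = np.array([8.0, 11.0, 14.0, 17.0, 21.0, 26.0])
      leaf_felled = np.array([5.2, 9.8, 15.5, 23.0, 35.0, 52.0])

      # Log-log allometric model: ln(A) = b0 + b1*ln(DBH) + eps
      X = np.column_stack([np.ones_like(dbh_felled), np.log(dbh_felled)])
      y = np.log(leaf_felled)
      beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
      sigma2 = res[0] / (len(y) - 2)                 # residual variance of the fit
      cov_beta = sigma2 * np.linalg.inv(X.T @ X)     # covariance of the coefficients

      # Stage 2: non-destructive DBH measurements for the whole plot.
      dbh_plot = rng.uniform(7.0, 28.0, size=150)

      # Propagate regression uncertainty to the plot total by resampling the coefficients.
      totals = []
      for _ in range(2000):
          b = rng.multivariate_normal(beta, cov_beta)
          # 0.5*sigma2 is the usual lognormal back-transformation correction.
          totals.append(np.exp(b[0] + b[1] * np.log(dbh_plot) + 0.5 * sigma2).sum())
      totals = np.array(totals)
      print(f"plot leaf area: {totals.mean():.0f} m^2 "
            f"(sd {totals.std():.0f} m^2 from the stage-1 regression error)")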

  18. Error analysis on squareness of multi-sensor integrated CMM for the multistep registration method

    Science.gov (United States)

    Zhao, Yan; Wang, Yiwen; Ye, Xiuling; Wang, Zhong; Fu, Luhua

    2018-01-01

    The multistep registration (MSR) method in [1] registers two different classes of sensors deployed on the z-arm of a CMM (coordinate measuring machine): a video camera and a tactile probe sensor. In general, it is difficult to obtain a very precise registration result with a single common standard; instead, this method measures two different standards, fixed on a steel plate, with a constant distance between them. Although many factors have been considered, such as the measuring ability of the sensors, the uncertainty of the machine and the number of data pairs, there has been no exact analysis of the squareness between the x-axis and the y-axis on the xy plane. To this end, an error analysis on the squareness of the multi-sensor integrated CMM for the multistep registration method is made to examine the validity of the MSR method. Synthetic experiments on the squareness on the xy plane for the simplified MSR with an inclination rotation are simulated, which lead to a regular result. Experiments have been carried out with the multi-standard device also designed in [1]; meanwhile, inspections with the help of a laser interferometer on the xy plane have been carried out. The final results conform to the simulations, and the squareness errors of the MSR method are also similar to the results of the interferometer. In other words, the MSR method can also be used to verify the squareness of a CMM.
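
    The sketch below (Python) is a rough illustration, not the paper's procedure: it applies an assumed squareness (shear) error between the x- and y-axes and shows how the apparent distance between the two fixed standards changes with the orientation of the steel plate on the xy plane.

      import numpy as np

      def apparent_point(p_true, squareness_arcsec):
          """Map a true (x, y) point through axes that are off-square by the given angle."""
          eps = np.deg2rad(squareness_arcsec / 3600.0)   # arcseconds -> radians
          x, y = p_true
          # y-axis tilted toward the x-axis by eps: the reported x picks up a y*eps term
          return np.array([x + y * eps, y])

      d_true = 100.0                                     # nominal spacing of the standards, mm
      for angle_deg in [0.0, 30.0, 60.0, 90.0]:          # plate orientation on the xy plane
          a = np.deg2rad(angle_deg)
          p1 = np.array([10.0, 10.0])
          p2 = p1 + d_true * np.array([np.cos(a), np.sin(a)])
          q1 = apparent_point(p1, 20.0)                  # assume a 20 arcsec squareness error
          q2 = apparent_point(p2, 20.0)
          err = np.linalg.norm(q2 - q1) - d_true
          print(f"orientation {angle_deg:5.1f} deg: apparent distance error {err:+.4f} mm")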

  19. Instanton-based techniques for analysis and reduction of error floor of LDPC codes

    Energy Technology Data Exchange (ETDEWEB)

    Chertkov, Michael [Los Alamos National Laboratory; Chilappagari, Shashi K [Los Alamos National Laboratory; Stepanov, Mikhail G [Los Alamos National Laboratory; Vasic, Bane [SENIOR MEMBER, IEEE

    2008-01-01

    We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager-type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons of the same code, but different channels and decoders, are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.

  20. Grammatical Error Analysis in Recount Text Made by the Students of Cokroaminoto University of Palopo

    Directory of Open Access Journals (Sweden)

    Hermini Hermini

    2014-02-01

    Full Text Available This study aimed to find out (1) the grammatical errors in recount text made by English Department students of the second and the sixth semester of Cokroaminoto University of Palopo, (2) the frequent grammatical errors made by the second and the sixth semester students of the English department, and (3) the difference in grammatical errors made by the second and the sixth semester students. The sample of the study was 723 sentences made by 30 students of the second semester and 30 students of the sixth semester in academic year 2013/2014, taken by a cluster random sampling technique. The sentences comprised 337 (46.61%) simple sentences, 83 (11.48%) compound sentences, 218 (30.15%) complex sentences, and 85 (11.76%) compound-complex sentences. The data were collected by using two kinds of instruments, namely a writing test to find the students' grammatical errors and a questionnaire to find the solution to prevent or minimize errors. Data on the students' errors were analyzed by using descriptive statistics. The results of the study showed that the students made 832 errors classified into 13 types, which consisted of 140 (16.82%) errors in production of verb group, 110 (13.22%) errors in preposition, 106 (12.74%) errors in distribution of verb group, 98 (11.77%) miscellaneous errors, 82 (9.85%) errors in missing subject, 67 (8.05%) errors in part of speech, 61 (7.33%) errors in irregular verbs, 58 (6.97%) other errors in verb groups, 52 (6.25%) errors in the use of articles, 24 (2.88%) errors in gerund, 18 (2.16%) errors in infinitive, 11 (1.32%) errors in pronoun/case, and 5 (0.6%) errors in questions. The top six frequent grammatical errors made by the students were production of verb group, preposition, distribution of verb group, miscellaneous errors, missing subject, and part of speech. The difference between the two groups was the frequency of committing errors such as part of speech, irregular verb, infinitive verbs, and other errors in verb.
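
    The descriptive-statistics step amounts to tallying error types and reporting each as a share of all errors; a minimal sketch in Python follows, seeded with the frequencies quoted above (the real analysis would, of course, start from the coded sentences).

      from collections import Counter

      # Error-type frequencies quoted in the abstract (total 832)
      error_counts = Counter({
          "production of verb group": 140, "preposition": 110,
          "distribution of verb group": 106, "miscellaneous": 98,
          "missing subject": 82, "part of speech": 67, "irregular verbs": 61,
          "other errors in verb groups": 58, "articles": 52, "gerund": 24,
          "infinitive": 18, "pronoun/case": 11, "questions": 5,
      })

      total = sum(error_counts.values())
      for err_type, n in error_counts.most_common():
          print(f"{err_type:<28s} {n:4d}  ({100 * n / total:5.2f}%)")
      print(f"{'total':<28s} {total:4d}")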

  1. Analysis of Relationships between the Level of Errors in Leg and Monofin Movement and Stroke Parameters in Monofin Swimming.

    Science.gov (United States)

    Rejman, Marek

    2013-01-01

    The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors, arising at the distal part of the fin, ensure the progression of swimming speed. The individual stroke parameters distribution which consists of optimally increasing stroke frequency to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points: The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Proper foot movement and the avoidance of errors arising at the distal part of the fin provide for the progression of swimming speed. The key elements improving monofin swimming technique, based on the analysis of errors committed, were designated.

  2. Analysis of Relationships between the Level of Errors in Leg and Monofin Movement and Stroke Parameters in Monofin Swimming

    Science.gov (United States)

    Rejman, Marek

    2013-01-01

    The aim of this study was to analyze the error structure in propulsive movements with regard to its influence on monofin swimming speed. The random cycles performed by six swimmers were filmed during a progressive test (900m). An objective method to estimate errors committed in the area of angular displacement of the feet and monofin segments was employed. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors, arising at the distal part of the fin, ensure the progression of swimming speed. The individual stroke parameters distribution which consists of optimally increasing stroke frequency to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Identification of key elements in the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique. Key points The monofin swimming technique was evaluated through the prism of objectively defined errors committed by the swimmers. The dependences between the level of errors, stroke rate, stroke length and amplitude in relation to swimming velocity were analyzed. Optimally increasing stroke rate to the maximal possible level that enables the stabilization of stroke length leads to the minimization of errors. Propriety foot movement and the avoidance of errors arising at the distal part of fin, provide for the progression of swimming speed. The key elements improving monofin swimming technique, based on the analysis of errors committed, were designated. PMID:24149742

  3. Spartan Release Engagement Mechanism (REM) stress and fracture analysis

    Science.gov (United States)

    Marlowe, D. S.; West, E. J.

    1984-01-01

    The revised stress and fracture analysis of the Spartan REM hardware for current load conditions and mass properties is presented. The stress analysis was performed using a NASTRAN math model of the Spartan REM adapter, base, and payload. Appendix A contains the material properties, loads, and stress analysis of the hardware. The computer output and model description are in Appendix B. Factors of safety used in the stress analysis were 1.4 on tested items and 2.0 on all other items. Fracture analysis of the items considered fracture critical was accomplished using the MSFC Crack Growth Analysis code. Loads and stresses were obtained from the stress analysis. The fracture analysis notes are located in Appendix A and the computer output in Appendix B. All items analyzed met design and fracture criteria.

  4. Geolocation with error analysis using imagery from an experimental spotlight SAR

    Science.gov (United States)

    Wonnacott, William Mark

    This dissertation covers the development of a geometry-based sensor model for a specific monostatic spotlight synthetic aperture radar (SAR) system---referred to as the ExSAR (for experimental SAR). This sensor model facilitates single- and multiple-image geopositioning with error analysis. It allows for the use of known ground control points in refining the collection geometry parameters (a process called image resection) and for the subsequent geopositioning of other points using the resected image. Theoretically, the model also allows for the potential recovery of bias-like, persistent errors common across multiple images. The model also includes multi-image correspondence equations to aid in the cross-image identification of conjugate points. The sensor model development begins with a generic, theoretical approach to the modeling of spotlight SAR. A closed-form solution to the range and range-rate condition equations and the corresponding error propagation equation are presented. (The SAR condition equations have traditionally been solved iteratively.) The application of the closed-form solution in the image-to-ground and ground-to-image transformations is documented. The theoretical work also includes a preliminary error sensitivity analysis and a treatment of the spotlight SAR resection process. The ExSAR-specific model is established and assessed with an extensive set of images collected using the experimental radar over arrays of ground control points. Using this set, the imagery metadata elements are assessed, and the optimal element set for geopositioning is determined. The ExSAR imagery is shown to be transformed to the ground plane in only one dimension. The eventual ExSAR sensor model is used with known elevations and single-image geopositioning to show a horizontal accuracy of 8.23 m (rms). With resection using five ground-surveyed control points per image, the horizontal accuracy of reserved check points is 0.45 m (rms). Resections using the same

  5. Synthetic methods in phase equilibria: A new apparatus and error analysis of the method

    DEFF Research Database (Denmark)

    Fonseca, José; von Solms, Nicolas

    2014-01-01

    of the equipment was confirmed through several tests, including measurements along the three phase co-existence line for the system ethane + methanol, the study of the solubility of methane in water, and of carbon dioxide in water. An analysis regarding the application of the synthetic isothermal method...... in the study of gas solubilities was performed, in order to evaluate the influence of common assumptions and of various experimental aspects on the final solubility results. The analysis revealed that the largest influence on the precision of the solubility results is related to the ratio between the volumes...... of the two phases in equilibrium. Experiments with small volume of the vapour phase are less susceptible to the influence of other sources of errors, resulting in a higher precision of the final results. © 2013 Elsevier B.V....

  6. Fault Analysis of Wind Turbines Based on Error Messages and Work Orders

    DEFF Research Database (Denmark)

    Borchersen, Anders Bech; Larsen, Jesper Abildgaard; Stoustrup, Jakob

    2012-01-01

    In this paper, data describing the operation and maintenance of an offshore wind farm are presented and analysed. Two different sets of data are presented; the first is auto-generated error messages from the Supervisory Control and Data Acquisition (SCADA) system, the other is the work orders...... describing the service performed at the individual turbines. The auto-generated alarms are analysed by applying a cleaning procedure to identify the alarms related to components. A severity, occurrence, and detection analysis is performed on the work orders. The outcomes of the two analyses are then compared...... to identify common fault types and areas where further data analysis would be beneficial for improving the operation and maintenance of wind turbines in the future....
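
    One common way to combine severity, occurrence and detection scores is the FMEA risk priority number (RPN = S x O x D); the short Python sketch below illustrates that combination on invented work-order scores and is not drawn from the paper's data.

      # Invented severity/occurrence/detection scores (1-10 scale) per component
      work_orders = [
          {"component": "gearbox",      "severity": 9, "occurrence": 4, "detection": 6},
          {"component": "pitch system", "severity": 7, "occurrence": 6, "detection": 4},
          {"component": "generator",    "severity": 8, "occurrence": 3, "detection": 5},
          {"component": "yaw drive",    "severity": 5, "occurrence": 5, "detection": 3},
      ]

      # Risk priority number combines the three scores multiplicatively
      for wo in work_orders:
          wo["rpn"] = wo["severity"] * wo["occurrence"] * wo["detection"]

      # Rank components by RPN to prioritise maintenance attention
      for wo in sorted(work_orders, key=lambda w: w["rpn"], reverse=True):
          print(f'{wo["component"]:<13s} RPN = {wo["rpn"]:3d}')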

  7. Error analysis of supersonic air-to-air ejector schlieren pictures

    Directory of Open Access Journals (Sweden)

    Kolář J.

    2013-04-01

    Full Text Available This article focuses on a general analysis of the errors and uncertainties that can arise when matching CFD results to schlieren pictures. The analysis is based on classic analytical equations. These are first evaluated under the presumption of a constant density gradient along the ray course; in other words, the deflection of the light ray caused by the density gradient is negligible in comparison to the cross size of the constant-gradient area. The aim of this work is to determine whether this presumption is applicable in the case of a supersonic air-to-air ejector. Colour and black-and-white schlieren pictures are acquired and compared to CFD results. Simulations covered various eddy viscosities. Computed pressure gradients are transformed into deflection angles and further into ray displacement. The resulting computed light-ray deflection is matched to experimental results.
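
    Under the constant-gradient presumption discussed above, the deflection angle and ray displacement can be estimated in a few lines; the Python sketch below uses the Gladstone-Dale relation with illustrative values for the density gradient, path length and screen distance (none taken from the paper) and checks the in-section ray displacement against an assumed width of the constant-gradient region.

      import numpy as np

      K = 2.26e-4          # Gladstone-Dale constant for air, m^3/kg
      drho_dy = 5.0        # density gradient across the shear layer, kg/m^3 per m (assumed)
      L = 0.05             # optical path length through the test section, m (assumed)
      d_screen = 1.0       # distance from test section to imaging plane, m (assumed)
      n0 = 1.0 + K * 1.2   # refractive index at an ambient density of ~1.2 kg/m^3

      dn_dy = K * drho_dy                  # refractive-index gradient
      eps = (L / n0) * dn_dy               # deflection angle, rad (constant-gradient form)
      shift_inside = 0.5 * eps * L         # ray displacement accumulated inside the flow
      shift_screen = eps * d_screen        # displacement at the screen (small angles)

      cross_size = 2e-3                    # assumed width of the constant-gradient region, m
      print(f"deflection angle: {eps * 1e6:.1f} microrad")
      print(f"displacement at screen: {shift_screen * 1e3:.3f} mm")
      print(f"in-section displacement / gradient-region width: {shift_inside / cross_size:.1%}")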

  8. Analysis of family-wise error rates in statistical parametric mapping using random field theory.

    Science.gov (United States)

    Flandin, Guillaume; Friston, Karl J

    2017-11-01

    This technical report revisits the analysis of family-wise error rates in statistical parametric mapping, using random field theory, reported in (Eklund et al., arXiv:1511.01863). Contrary to the understandable spin that these sorts of analyses attract, a review of their results suggests that they endorse the use of parametric assumptions, and random field theory, in the analysis of functional neuroimaging data. We briefly rehearse the advantages parametric analyses offer over nonparametric alternatives and then unpack the implications of (Eklund et al., arXiv:1511.01863) for parametric procedures. Hum Brain Mapp, 2017. © 2017 The Authors. Human Brain Mapping published by Wiley Periodicals, Inc.

  9. Error performance analysis in K-tier uplink cellular networks using a stochastic geometric approach

    KAUST Repository

    Afify, Laila H.

    2015-09-14

    In this work, we develop an analytical paradigm to analyze the average symbol error probability (ASEP) performance of uplink traffic in a multi-tier cellular network. The analysis is based on the recently developed Equivalent-in-Distribution approach that utilizes stochastic geometric tools to account for the network geometry in the performance characterization. Different from the other stochastic geometry models adopted in the literature, the developed analysis accounts for important communication system parameters and goes beyond signal-to-interference-plus-noise ratio characterization. That is, the presented model accounts for the modulation scheme, constellation type, and signal recovery techniques to model the ASEP. To this end, we derive single integral expressions for the ASEP for different modulation schemes due to aggregate network interference. Finally, all theoretical findings of the paper are verified via Monte Carlo simulations.
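
    The last step, checking closed-form ASEP expressions against simulation, can be illustrated with a bare-bones Monte Carlo; the Python sketch below does this for BPSK on a plain AWGN link against Q(sqrt(2*SNR)). It only illustrates the verification idea, not the paper's multi-tier, interference-limited uplink model.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(1)
      n_symbols = 200_000

      for snr_db in [0, 2, 4, 6, 8]:
          snr = 10 ** (snr_db / 10)
          bits = rng.integers(0, 2, n_symbols)
          symbols = 2 * bits - 1                              # BPSK: {0,1} -> {-1,+1}
          noise = rng.normal(scale=np.sqrt(1 / (2 * snr)), size=n_symbols)
          decisions = (symbols + noise) > 0                   # threshold detector
          sep_mc = np.mean(decisions != bits.astype(bool))    # empirical symbol error rate
          sep_theory = norm.sf(np.sqrt(2 * snr))              # Q(sqrt(2*SNR))
          print(f"SNR {snr_db:2d} dB: Monte Carlo {sep_mc:.4f}  closed form {sep_theory:.4f}")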

  10. A Bayesian Analysis of the Radioactive Releases of Fukushima

    DEFF Research Database (Denmark)

    Tomioka, Ryota; Mørup, Morten

    2012-01-01

    the types of nuclides and their levels of concentration from the recorded mixture of radiations to take necessary measures. We presently formulate a Bayesian generative model for the data available on radioactive releases from the Fukushima Daiichi disaster across Japan. From the sparsely sampled...... the Fukushima Daiichi plant we establish that the model is able to account for the data. We further demonstrate how the model extends to include all the available measurements recorded throughout Japan. The model can be considered a first attempt to apply Bayesian learning unsupervised in order to give a more......The Fukushima Daiichi disaster 11 March, 2011 is considered the largest nuclear accident since the 1986 Chernobyl disaster and has been rated at level 7 on the International Nuclear Event Scale. As different radioactive materials have different effects to human body, it is important to know...

  11. Predictive Analysis of Controllers’ Cognitive Errors Using the TRACEr Technique: A Case Study in an Airport Control Tower

    Directory of Open Access Journals (Sweden)

    Shirali

    2016-03-01

    Full Text Available Background: In complex socio-technical systems like aviation, human error is said to be the main cause of air transport incidents, accounting for about 75 percent of these incidents and events. Air traffic management (ATM) is considered a highly reliable industry; however, there is a persistent need to identify safety vulnerabilities and reduce them or their effects, as ATM is very human-centered and will remain so, at least in the mid-term (e.g., until 2025). Objectives: The current study aimed to conduct a predictive analysis of controllers' cognitive errors using the TRACEr technique in an airport control tower. Materials and Methods: This work was conducted as a qualitative case study to identify controllers' errors in an airport control tower. First, the controllers' tasks were described by means of interviews and observation, and then the most critical tasks, which were more likely to contain errors, were chosen to be examined. In the next step, the tasks were broken down into sub-tasks using the hierarchical task analysis method and presented as HTA charts. Finally, for all the sub-tasks, different error modes and the mechanisms of their occurrence were identified and the results were recorded on TRACEr worksheets. Results: The analysis of the TRACEr worksheets showed that, of a total of 315 detected errors, perception and memory errors are the most important errors in tower controllers' tasks, and perceptual and spatial confusion is the most important psychological factor related to their occurrence. Conclusions: The results of this study led to the identification of many of the errors and conditions that affect the performance of controllers, providing the ability to define safety and ergonomic interventions to reduce the risk of human error. Therefore, the results of this study can serve as a basis for ATM planning to prioritize prevention programs and safety enhancement.

  12. The Challenges of Releasing Human Data for Analysis

    Science.gov (United States)

    Fitts, Mary; Van Baalen, Mary; Johnson-Throop, Kathy; Lee, Lesley; Havelka, Jacque; Wear, Mary; Thomas, Diedre M.

    2011-01-01

    The NASA Johnson Space Center's (NASA JSC) Committee for the Protection of Human Subjects (CPHS) recently approved the formation of two human data repositories: the Lifetime Surveillance of Astronaut Health Repository (LSAH-R) for clinical data and the Life Sciences Data Archive Repository (LSDA-R) for research data. The establishment of these repositories forms the foundation for the release of data and information beyond the scope for which the data was originally collected. The release of clinical and research data and information is primarily managed by two NASA groups: the Evidence Base Working Group (EBWG), consisting of members of both repositories, and the LSAH Policy Board. The goal of unifying these repositories and their processes is to provide a mutually supportive approach to handling medical and research data, to enhance the use of medical and research data to reduce risk, and to promote the understanding of space physiology, countermeasures and other mitigation strategies. Over the past year, both repositories have received over 100 data and information requests from a wide variety of requesters. The disposition of these requests has highlighted the challenges faced when attempting to make data collected on a unique set of subjects available beyond the original intent for which the data were collected. As the EBWG works through each request, many considerations must be taken into account when deciding what data can be shared and how - from the Privacy Act of 1974 and the Health Insurance Portability and Accountability Act (HIPAA), to NASA's Health Information Management System (10HIMS) and Human Experimental and Research Data Records (10HERD) access requirements. Additional considerations include the presence of the data in the repositories and vetting requesters for legitimacy of their use of the data. Additionally, fair access must be ensured for intramural, as well as extramural investigators. All of this must be considered in the formulation

  13. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. II. Error analysis and generalization

    Directory of Open Access Journals (Sweden)

    D. L. Fehl

    2010-12-01

    Full Text Available A five-channel, filtered-x-ray-detector (XRD) array has been used to measure time-dependent, soft-x-ray flux emitted by z-pinch plasmas at the Z pulsed-power accelerator (Sandia National Laboratories, Albuquerque, New Mexico, USA). The preceding, companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120402 (2010)] describes an algorithm for spectral reconstructions (unfolds) and spectrally integrated flux estimates from data obtained by this instrument. The unfolded spectrum S_{unfold}(E,t) is based on (N=5) first-order B-splines (histograms) in contiguous unfold bins j=1,…,N; the recovered x-ray flux F_{unfold}(t) is estimated as ∫S_{unfold}(E,t)dE, where E is x-ray energy and t is time. This paper adds two major improvements to the preceding unfold analysis: (a) Error analysis.—Both data noise and response-function uncertainties are propagated into S_{unfold}(E,t) and F_{unfold}(t). Noise factors ν are derived from simulations to quantify algorithm-induced changes in the noise-to-signal ratio (NSR) for S_{unfold} in each unfold bin j and for F_{unfold} (ν≡NSR_{output}/NSR_{input}): for S_{unfold}, 1≲ν_{j}≲30, an outcome that is strongly spectrally dependent; for F_{unfold}, 0.6≲ν_{F}≲1, a result that is less spectrally sensitive and corroborated independently. For nominal z-pinch experiments, the combined uncertainty (noise and calibrations) in F_{unfold}(t) at peak is estimated to be ∼15%. (b) Generalization of the unfold method.—Spectral sensitivities (called here passband functions) are constructed for S_{unfold} and F_{unfold}. Predicting how the unfold algorithm reconstructs arbitrary spectra is thereby reduced to quadratures. These tools allow one to understand and quantitatively predict algorithmic distortions (including negative artifacts), to identify potentially troublesome spectra, and to design more useful response functions.

  14. Nickel and cobalt release from metal alloys of tools--a current analysis in Germany.

    Science.gov (United States)

    Kickinger-Lörsch, Anja; Bruckner, Thomas; Mahler, Vera

    2015-11-01

    The former 'EU Nickel Directive' and, since 2009, the REACH Regulation (item 27 of Annex XVII) do not include all metallic objects. The nickel content of tools is not regulated by the REACH Regulation, even if they may come into prolonged contact with the skin. Tools might be possible sources of nickel and cobalt sensitization, and may contribute to elicitation and maintenance of hand eczema. The aim was to perform a current analysis of the frequency of nickel or cobalt release from new handheld tools purchased in Germany. Six hundred unused handheld tools from the German market were investigated with the dimethylglyoxime test for nickel release and with disodium-1-nitroso-2-naphthol-3,6-disulfonate solution for cobalt release. Nickel release was detected in 195 of 600 (32.5%) items, and cobalt in only six (1%) of them. Positive nickel results were nearly twice as frequent in tools 'made in Germany' as in tools without a mark of origin. Tools made in other European countries did not release nickel. Cobalt release was only found in pliers and a saw. A correlation was found between price level and nickel release. Among toolkits, 34.2% were inhomogeneous concerning nickel release. The German market currently provides a large number of handheld tools that release nickel, especially tools 'made in Germany'. For consumer protection, it seems appropriate to include handheld tools in the REACH Regulation on nickel. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Error propagation models to examine the effects of geocoding quality on spatial analysis of individual-level datasets.

    Science.gov (United States)

    Zandbergen, P A; Hart, T C; Lenzer, K E; Camponovo, M E

    2012-04-01

    The quality of geocoding has received substantial attention in recent years. A synthesis of published studies shows that the positional errors of street geocoding are somewhat unique relative to those of other types of spatial data: (1) the magnitude of error varies strongly across urban-rural gradients; (2) the direction of error is not uniform, but strongly associated with the properties of local street segments; (3) the distribution of errors does not follow a normal distribution, but is highly skewed and characterized by a substantial number of very large error values; and (4) the magnitude of error is spatially autocorrelated and is related to properties of the reference data. This makes it difficult to employ analytic approaches or Monte Carlo simulations for error propagation modeling because these rely on generalized statistical characteristics. The current paper describes an alternative empirical approach to error propagation modeling for geocoded data and illustrates its implementation using three different case-studies of geocoded individual-level datasets. The first case-study consists of determining the land cover categories associated with geocoded addresses using a point-in-raster overlay. The second case-study consists of a local hotspot characterization using kernel density analysis of geocoded addresses. The third case-study consists of a spatial data aggregation using enumeration areas of varying spatial resolution. For each case-study a high quality reference scenario based on address points forms the basis for the analysis, which is then compared to the result of various street geocoding techniques. Results show that the unique nature of the positional error of street geocoding introduces substantial noise in the result of spatial analysis, including a substantial amount of bias for some analysis scenarios. This confirms findings from earlier studies, but expands these to a wider range of analytical techniques. Copyright © 2012 Elsevier Ltd
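
    The flavour of the empirical approach can be sketched in a few lines of Python: draw positional error vectors (with replacement) from an observed error sample, perturb the geocoded points, and re-run the point-in-raster land-cover lookup to see how often the assigned class changes. The raster, points and error sample below are synthetic placeholders, not data from the case-studies.

      import numpy as np

      rng = np.random.default_rng(7)

      # Synthetic 100 x 100 land-cover raster (cell size 30 m, classes 0..4)
      cell = 30.0
      raster = rng.integers(0, 5, size=(100, 100))

      def land_cover(xy):
          col = np.clip((xy[:, 0] // cell).astype(int), 0, 99)
          row = np.clip((xy[:, 1] // cell).astype(int), 0, 99)
          return raster[row, col]

      # "True" address-point locations and a skewed, heavy-tailed positional error sample (m)
      points = rng.uniform(0, 3000, size=(500, 2))
      error_sample = np.column_stack([
          rng.gamma(shape=1.5, scale=60, size=400) * rng.choice([-1, 1], 400),
          rng.gamma(shape=1.5, scale=60, size=400) * rng.choice([-1, 1], 400),
      ])

      baseline = land_cover(points)
      n_runs, changed = 200, np.zeros(len(points))
      for _ in range(n_runs):
          errs = error_sample[rng.integers(0, len(error_sample), len(points))]
          changed += land_cover(points + errs) != baseline

      print(f"mean per-point probability of a different land-cover class: "
            f"{changed.mean() / n_runs:.2f}")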

  16. Improving Error Resilience Analysis Methodology of Iterative Workloads for Approximate Computing

    NARCIS (Netherlands)

    Gillani, G.A.; Kokkeler, Andre B.J.

    2017-01-01

    Assessing error resilience inherent to the digital processing workloads provides application-specific insights towards approximate computing strategies for improving power efficiency and/or performance. With the case study of radio astronomy calibration, our contributions for improving the error

  17. Allowing for Genotyping Error in Analysis of Unmatched Case-Control Studies

    National Research Council Canada - National Science Library

    Rice K.M; Holmans P

    2003-01-01

    ... (an unmatched case-control study). A drawback of such a study is that it is impossible to detect genotyping errors, and few methods have been developed to allow for the presence of undetected genotyping errors...

  18. Measurement error as a source of QT dispersion: a computerised analysis

    NARCIS (Netherlands)

    J.A. Kors (Jan); G. van Herpen (Gerard)

    1998-01-01

    OBJECTIVE: To establish a general method to estimate the measuring error in QT dispersion (QTD) determination, and to assess this error using a computer program for automated measurement of QTD. SUBJECTS: Measurements were done on 1220 standard simultaneous

  19. Error Analysis in the Joint Event Location/Seismic Calibration Inverse Problem

    National Research Council Canada - National Science Library

    Rodi, William L

    2006-01-01

    This project is developing new mathematical and computational techniques for analyzing the uncertainty in seismic event locations, as induced by observational errors and errors in travel-time models...

  20. Strato-mesospheric ClO observations by SMILES: error analysis and diurnal variation

    Directory of Open Access Journals (Sweden)

    T. O. Sato

    2012-11-01

    Full Text Available Chlorine monoxide (ClO) is the key species for anthropogenic ozone losses in the middle atmosphere. We observed ClO diurnal variations using the Superconducting Submillimeter-Wave Limb-Emission Sounder (SMILES) on the International Space Station, which has a non-sun-synchronous orbit. This includes the first global observations of the ClO diurnal variation from the stratosphere up to the mesosphere. The observation of mesospheric ClO was possible due to a 10–20 times better signal-to-noise (S/N) ratio of the spectra than those of past or ongoing microwave/submillimeter-wave limb-emission sounders. We performed a quantitative error analysis for the strato- and mesospheric ClO from the Level-2 research (L2r) product version 2.1.5, taking into account all possible contributions of errors, i.e. errors due to spectrum noise, smoothing, and uncertainties in the radiative transfer model and instrument functions. The SMILES L2r v2.1.5 ClO data are useful over the range from 0.01 to 100 hPa with a total error estimate of 10–30 pptv (about 10%) with averaging of 100 profiles. The SMILES ClO vertical resolution is 3–5 km and 5–8 km for the stratosphere and mesosphere, respectively. The SMILES observations reproduced the diurnal variation of stratospheric ClO, with peak values at midday, observed previously by the Microwave Limb Sounder on the Upper Atmosphere Research Satellite (UARS/MLS). Mesospheric ClO demonstrated an opposite diurnal behavior, with nighttime values being larger than daytime values. A ClO enhancement of about 100 pptv was observed at 0.02 to 0.01 hPa (about 70–80 km) for 50° N–65° N from January–February 2010. The performance of SMILES ClO observations opens up new opportunities to investigate ClO up to the mesopause.

  1. Synchrotron radiation measurement of multiphase fluid saturations in porous media: Experimental technique and error analysis

    Science.gov (United States)

    Tuck, David M.; Bierck, Barnes R.; Jaffé, Peter R.

    1998-06-01

    Multiphase flow in porous media is an important research topic. In situ, nondestructive experimental methods for studying multiphase flow are important for improving our understanding and the theory. Rapid changes in fluid saturation, characteristic of immiscible displacement, are difficult to measure accurately using gamma rays due to practical restrictions on source strength. Our objective is to describe a synchrotron radiation technique for rapid, nondestructive saturation measurements of multiple fluids in porous media, and to present a precision and accuracy analysis of the technique. Synchrotron radiation provides a high intensity, inherently collimated photon beam of tunable energy which can yield accurate measurements of fluid saturation in just one second. Measurements were obtained with precision of ±0.01 or better for tetrachloroethylene (PCE) in a 2.5 cm thick glass-bead porous medium using a counting time of 1 s. The normal distribution was shown to provide acceptable confidence limits for PCE saturation changes. Sources of error include heat load on the monochromator, periodic movement of the source beam, and errors in stepping-motor positioning system. Hypodermic needles pushed into the medium to inject PCE changed porosity in a region approximately ±1 mm of the injection point. Improved mass balance between the known and measured PCE injection volumes was obtained when appropriate corrections were applied to calibration values near the injection point.

  2. Development of Items for a Pedagogical Content Knowledge Test Based on Empirical Analysis of Pupils' Errors

    Science.gov (United States)

    Jüttner, Melanie; Neuhaus, Birgit J.

    2012-05-01

    In view of the lack of instruments for measuring biology teachers' pedagogical content knowledge (PCK), this article reports on a study about the development of PCK items for measuring teachers' knowledge of pupils' errors and ways for dealing with them. This study investigated 9th and 10th grade German pupils' (n = 461) drawings in an achievement test about the knee-jerk in biology, which were analysed using inductive qualitative content analysis. The empirical data were used for the development of the items in the PCK test. The validity of the items was assessed with think-aloud interviews of German secondary school teachers (n = 5). Once the items were finalized, their reliability was tested using the results of German secondary school biology teachers (n = 65) who took the PCK test. The results indicated that these items are satisfactorily reliable (Cronbach's alpha values ranged from 0.60 to 0.65). We suggest that a larger sample size and American biology teachers be used in further studies. The findings of this study about teachers' professional knowledge from the PCK test could provide new information about the influence of teachers' knowledge on their pupils' understanding of biology and their possible errors in learning biology.

  3. Bayesian linear regression with skew-symmetric error distributions with applications to survival analysis

    KAUST Repository

    Rubio, Francisco J.

    2016-02-09

    We study Bayesian linear regression models with skew-symmetric scale mixtures of normal error distributions. These kinds of models can be used to capture departures from the usual assumption of normality of the errors in terms of heavy tails and asymmetry. We propose a general noninformative prior structure for these regression models and show that the corresponding posterior distribution is proper under mild conditions. We extend these propriety results to cases where the response variables are censored. The latter scenario is of interest in the context of accelerated failure time models, which are relevant in survival analysis. We present a simulation study that demonstrates good frequentist properties of the posterior credible intervals associated with the proposed priors. This study also sheds some light on the trade-off between increased model flexibility and the risk of over-fitting. We illustrate the performance of the proposed models with real data. Although we focus on models with univariate response variables, we also present some extensions to the multivariate case in the Supporting Information.
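
    As a toy illustration of fitting such a model, the Python sketch below runs a random-walk Metropolis sampler for a linear regression with skew-normal errors (one member of the skew-symmetric family) under a flat prior; it does not implement the paper's noninformative prior structure or its censoring extension.

      import numpy as np
      from scipy.stats import skewnorm

      rng = np.random.default_rng(3)

      # Synthetic data: y = 1 + 2x + skew-normal noise (shape a=4, scale 0.8)
      n = 200
      x = rng.uniform(-2, 2, n)
      y = 1.0 + 2.0 * x + skewnorm.rvs(a=4, scale=0.8, size=n, random_state=rng)

      def log_post(theta):
          """Log-posterior under a flat prior: just the skew-normal log-likelihood."""
          b0, b1, log_scale, shape = theta
          resid = y - b0 - b1 * x
          return skewnorm.logpdf(resid, a=shape, scale=np.exp(log_scale)).sum()

      theta = np.zeros(4)
      lp = log_post(theta)
      samples = []
      for it in range(20_000):
          prop = theta + rng.normal(scale=[0.05, 0.05, 0.05, 0.2])
          lp_prop = log_post(prop)
          if np.log(rng.uniform()) < lp_prop - lp:     # Metropolis accept/reject
              theta, lp = prop, lp_prop
          if it >= 5_000:                              # discard burn-in
              samples.append(theta.copy())

      post = np.array(samples)
      # Note: the intercept absorbs the nonzero mean of the skew-normal error term.
      print("posterior means [b0, b1, log_scale, shape]:", post.mean(axis=0).round(2))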

  4. Gravity field error analysis: Applications of GPS receivers and gradiometers on low orbiting platforms

    Science.gov (United States)

    Schrama, E.

    1990-01-01

    The concept of a Global Positioning System (GPS) receiver as a tracking facility and a gradiometer as a separate instrument on a low orbiting platform offers a unique tool to map the Earth's gravitational field with unprecedented accuracies. The former technique allows determination of the spacecraft's ephemeris at any epoch to within 3 to 10 cm; the latter permits the measurement of the tensor of second order derivatives of the gravity field to within 0.01 to 0.0001 Eotvos units depending on the type of gradiometer. First, a variety of error sources in gradiometry is described, with emphasis placed on the rotational problem, pursuing both a static and a dynamic approach. Next, an analytical technique is described and applied for an error analysis of gravity field parameters from gradiometer and GPS observation types. Results are discussed for various configurations proposed on Topex/Poseidon, Gravity Probe-B, and Aristoteles, indicating that GPS-only solutions may be computed up to degree and order 35, 55, and 85, respectively, whereas a combined GPS/gradiometer experiment on Aristoteles may result in an acceptable solution up to degree and order 240.

  5. Gravity field error analysis - Applications of Global Positioning System receivers and gradiometers on low orbiting platforms

    Science.gov (United States)

    Schrama, Ernst J. O.

    1991-11-01

    The concept of a Global Positioning System (GPS) receiver as a tracking facility and a gradiometer as a separate instrument on a low-orbiting platform offers a unique tool to map the earth's gravitational field with unprecedented accuracies. The former technique allows determination of the spacecraft's ephemeris at any epoch to within 3-10 cm; the latter permits the measurement of the tensor of second order derivatives of the gravity field to within 0.01 to 0.0001 Eotvos units depending on the type of gradiometer. First, a variety of error sources in gradiometry is described, with emphasis placed on the rotational problem, pursuing both a static and a dynamic approach. Next, an analytical technique is described and applied for an error analysis of gravity field parameters from gradiometer and GPS observation types. Results are discussed for various configurations proposed on Topex/Poseidon, Gravity Probe-B, and Aristoteles, indicating that GPS-only solutions may be computed up to degree and order 35, 55, and 85, respectively, whereas a combined GPS/gradiometer experiment on Aristoteles may result in an acceptable solution up to degree and order 240.

  6. Advanced Laboratory at Texas State University: Error Analysis, Experimental Design, and Research Experience for Undergraduates

    Science.gov (United States)

    Ventrice, Carl

    2009-04-01

    Physics is an experimental science. In other words, all physical laws are based on experimentally observable phenomena. Therefore, it is important that all physics students have an understanding of the limitations of certain experimental techniques and the errors associated with a particular measurement. The students in the Advanced Laboratory class at Texas State perform three detailed laboratory experiments during the semester and give an oral presentation at the end of the semester on a scientific topic of their choosing. The laboratory reports are written in the format of a "Physical Review" journal article. The experiments are chosen to give the students a detailed background in error analysis and experimental design. For instance, the first experiment performed in the spring 2009 semester is entitled Measurement of the local acceleration due to gravity in the RFM Technology and Physics Building. The goal of this experiment is to design and construct an instrument that is to be used to measure the local gravitational field in the Physics Building to an accuracy of ±0.005 m/s^2. In addition, at least one of the experiments chosen each semester involves the use of the research facilities within the physics department (e.g., microfabrication clean room, surface science lab, thin films lab, etc.), which gives the students experience working in a research environment.

  7. Measurement errors related to contact angle analysis of hydrogel and silicone hydrogel contact lenses.

    Science.gov (United States)

    Read, Michael L; Morgan, Philip B; Maldonado-Codina, Carole

    2009-11-01

    This work sought to undertake a comprehensive investigation of the measurement errors associated with contact angle assessment of curved hydrogel contact lens surfaces. The contact angle coefficient of repeatability (COR) associated with three measurement conditions (image analysis COR, intralens COR, and interlens COR) was determined by measuring the contact angles (using both sessile drop and captive bubble methods) for three silicone hydrogel lenses (senofilcon A, balafilcon A, lotrafilcon A) and one conventional hydrogel lens (etafilcon A). Image analysis COR values were about 2 degrees, whereas intralens COR values (95% confidence intervals) ranged from 4.0 degrees (3.3 degrees, 4.7 degrees) (lotrafilcon A, captive bubble) to 10.2 degrees (8.4 degrees, 12.1 degrees) (senofilcon A, sessile drop). Interlens COR values ranged from 4.5 degrees (3.7 degrees, 5.2 degrees) (lotrafilcon A, captive bubble) to 16.5 degrees (13.6 degrees, 19.4 degrees) (senofilcon A, sessile drop). Measurement error associated with image analysis was shown to be small as an absolute measure, although proportionally more significant for lenses with low contact angle. Sessile drop contact angles were typically less repeatable than captive bubble contact angles. For sessile drop measures, repeatability was poorer with the silicone hydrogel lenses when compared with the conventional hydrogel lens; this phenomenon was not observed for the captive bubble method, suggesting that methodological factors related to the sessile drop technique (such as surface dehydration and blotting) may play a role in the increased variability of contact angle measurements observed with silicone hydrogel contact lenses.

  8. Analysis of liquid medication dose errors made by patients and caregivers using alternative measuring devices.

    Science.gov (United States)

    Ryu, Gyeong Suk; Lee, Yu Jeung

    2012-01-01

    Patients use several types of devices to measure liquid medication. Using a criterion ranging from a 10% to 40% variation from a target 5 mL for a teaspoon dose, previous studies have found that a considerable proportion of patients or caregivers make errors when dosing liquid medication with measuring devices. To determine the rate and magnitude of liquid medication dose errors that occur with patient/caregiver use of various measuring devices in a community pharmacy. Liquid medication measurements by patients or caregivers were observed in a convenience sample of community pharmacy patrons in Korea during a 2-week period in March 2011. Participants included all patients or caregivers (N = 300) who came to the pharmacy to buy over-the-counter liquid medication or to have a liquid medication prescription filled during the study period. The participants were instructed by an investigator who was also a pharmacist to select their preferred measuring devices from 6 alternatives (etched-calibration dosing cup, printed-calibration dosing cup, dosing spoon, syringe, dispensing bottle, or spoon with a bottle adapter) and measure a 5 mL dose of Coben (chlorpheniramine maleate/phenylephrine HCl, Daewoo Pharm. Co., Ltd) syrup using the device of their choice. The investigator used an ISOLAB graduated cylinder (Germany, blue grad, 10 mL) to measure the amount of syrup dispensed by the study participants. Participant characteristics were recorded including gender, age, education level, and relationship to the person for whom the medication was intended. Of the 300 participants, 257 (85.7%) were female; 286 (95.3%) had at least a high school education; and 282 (94.0%) were caregivers (parent or grandparent) for the patient. The mean (SD) measured dose was 4.949 (0.378) mL for the 300 participants. In analysis of variance of the 6 measuring devices, the greatest difference from the 5 mL target was a mean 5.552 mL for 17 subjects who used the regular (etched) dosing cup and 4

  9. Analysis of operator splitting errors for near-limit flame simulations

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Zhen; Zhou, Hua [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); Li, Shan [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); School of Aerospace Engineering, Tsinghua University, Beijing 100084 (China); Ren, Zhuyin, E-mail: zhuyinren@tsinghua.edu.cn [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); School of Aerospace Engineering, Tsinghua University, Beijing 100084 (China); Lu, Tianfeng [Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269-3139 (United States); Law, Chung K. [Center for Combustion Energy, Tsinghua University, Beijing 100084 (China); Department of Mechanical and Aerospace Engineering, Princeton University, Princeton, NJ 08544 (United States)

    2017-04-15

    High-fidelity simulations of ignition, extinction and oscillatory combustion processes are of practical interest in a broad range of combustion applications. Splitting schemes, widely employed in reactive flow simulations, could fail for stiff reaction–diffusion systems exhibiting near-limit flame phenomena. The present work first employs a model perfectly stirred reactor (PSR) problem with an Arrhenius reaction term and a linear mixing term to study the effects of splitting errors on the near-limit combustion phenomena. Analysis shows that the errors induced by decoupling of the fractional steps may result in unphysical extinction or ignition. The analysis is then extended to the prediction of ignition, extinction and oscillatory combustion in unsteady PSRs of various fuel/air mixtures with a 9-species detailed mechanism for hydrogen oxidation and an 88-species skeletal mechanism for n-heptane oxidation, together with a Jacobian-based analysis for the time scales. The tested schemes include the Strang splitting, the balanced splitting, and a newly developed semi-implicit midpoint method. Results show that the semi-implicit midpoint method can accurately reproduce the dynamics of the near-limit flame phenomena and it is second-order accurate over a wide range of time step sizes. For the extinction and ignition processes, both the balanced splitting and midpoint method can yield accurate predictions, whereas the Strang splitting can lead to significant shifts in the ignition/extinction processes or even unphysical results. With an enriched H radical source in the inflow stream, a delay of the ignition process and a deviation in the equilibrium temperature are observed for the Strang splitting. In contrast, the midpoint method that solves reaction and diffusion together matches the fully implicit accurate solution. The balanced splitting predicts the temperature rise correctly but with an over-predicted peak. For the sustainable and decaying oscillatory
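
    The model PSR problem lends itself to a compact numerical illustration: one scalar with an Arrhenius-type source and a linear mixing term, integrated fully coupled and by Strang splitting, so the splitting error can be examined as the step size grows. The Python sketch below uses invented constants, not the paper's parameters, and a stiff sub-integrator for each fractional step.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Illustrative constants (not from the paper): pre-exponential factor, activation
      # temperature, burnt and inflow temperatures, mixing time scale.
      A, Ta, T_b, T_in, tau = 1.5e7, 1.5e4, 2200.0, 300.0, 1.0e-3

      reaction = lambda t, T: A * (T_b - T) * np.exp(-Ta / T)   # Arrhenius-type source
      mixing = lambda t, T: (T_in - T) / tau                    # linear mixing term
      coupled = lambda t, T: reaction(t, T) + mixing(t, T)

      def strang_step(T, dt):
          """Strang splitting: half step of mixing, full step of reaction, half step of mixing."""
          T = solve_ivp(mixing, (0.0, dt / 2), [T], method="BDF", rtol=1e-8).y[0, -1]
          T = solve_ivp(reaction, (0.0, dt), [T], method="BDF", rtol=1e-8).y[0, -1]
          T = solve_ivp(mixing, (0.0, dt / 2), [T], method="BDF", rtol=1e-8).y[0, -1]
          return T

      t_end, T0 = 5.0e-3, 2000.0
      ref = solve_ivp(coupled, (0.0, t_end), [T0], method="BDF",
                      rtol=1e-10, atol=1e-10).y[0, -1]          # fully coupled reference

      for dt in [1e-5, 5e-5, 1e-4]:
          T = T0
          for _ in range(int(round(t_end / dt))):
              T = strang_step(T, dt)
          print(f"dt = {dt:.0e}: Strang T = {T:8.2f} K, coupled T = {ref:8.2f} K, "
                f"difference = {T - ref:+.3f} K")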

  10. An analysis of subject agreement errors in English: the case of third ...

    African Journals Online (AJOL)

    For the incorrect sentences they were to underline the error and give the correct answer. The main findings of the study were: subject-verb agreement errors are prominent in simple sentence constructions and in complex linguistic environments. The study also found that performance errors appear frequently in simple ...

  11. An Analysis of Errors in Written English Sentences: A Case Study of Thai EFL Students

    Science.gov (United States)

    Sermsook, Kanyakorn; Liamnimit, Jiraporn; Pochakorn, Rattaneekorn

    2017-01-01

    The purposes of the present study were to examine the language errors in the writing of English major students at a Thai university and to explore the sources of the errors. This study focused mainly on sentences because the researcher found that errors in Thai EFL students' sentence construction may lead to miscommunication. 104 pieces of writing…

  12. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant

    Directory of Open Access Journals (Sweden)

    Mehdi Jahangiri

    2016-03-01

    Conclusion: The SPAR-H method applied in this study could analyze and quantify the potential human errors and extract the required measures for reducing the error probabilities in the PTW system. Some suggestions to reduce the likelihood of errors, especially regarding modification of the performance shaping factors and dependencies among tasks, are provided.

  13. Computational Fluid Dynamics Analysis on Radiation Error of Surface Air Temperature Measurement

    Science.gov (United States)

    Yang, Jie; Liu, Qing-Quan; Ding, Ren-Hui

    2017-01-01

    Due to the solar radiation effect, current air temperature sensors inside a naturally ventilated radiation shield may produce a measurement error of 0.8 K or higher. To improve air temperature observation accuracy and to correct historical temperature records of weather stations, a radiation error correction method is proposed. The correction method is based on a computational fluid dynamics (CFD) method and a genetic algorithm (GA) method. The CFD method is implemented to obtain the radiation error of the naturally ventilated radiation shield under various environmental conditions. Then, a radiation error correction equation is obtained by fitting the CFD results using the GA method. To verify the performance of the correction equation, the naturally ventilated radiation shield and an aspirated temperature measurement platform are characterized in the same environment to conduct the intercomparison. The aspirated temperature measurement platform serves as an air temperature reference. The mean radiation error given by the intercomparison experiments is 0.23 K, and the mean radiation error given by the correction equation is 0.2 K. This radiation error correction method allows the radiation error to be reduced by approximately 87%. The mean absolute error and the root mean square error between the radiation errors given by the correction equation and the radiation errors given by the experiments are 0.036 K and 0.045 K, respectively.
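
    The fitting step can be sketched as a small global-optimization problem; in the Python example below, scipy's differential_evolution (an evolutionary optimizer standing in for the paper's unspecified genetic algorithm) fits an assumed correction form dT = c0*S**c1/(1 + c2*u) to synthetic irradiance/wind-speed/error data. Both the functional form and the data are placeholders.

      import numpy as np
      from scipy.optimize import differential_evolution

      rng = np.random.default_rng(5)

      # Synthetic stand-in for the CFD data set: solar irradiance S (W/m^2),
      # wind speed u (m/s), and the resulting radiation error dT (K) of the shield.
      S = rng.uniform(100, 1000, 200)
      u = rng.uniform(0.5, 8.0, 200)
      dT = 0.004 * S**0.9 / (1 + 0.6 * u) + rng.normal(scale=0.02, size=200)

      def sse(c):
          """Sum of squared residuals of the assumed correction equation."""
          c0, c1, c2 = c
          pred = c0 * S**c1 / (1 + c2 * u)
          return np.sum((pred - dT) ** 2)

      result = differential_evolution(sse, bounds=[(0, 0.1), (0.5, 1.5), (0, 2)], seed=5)
      rms = float(np.sqrt(result.fun / len(dT)))
      print("fitted coefficients:", result.x.round(4), " RMS residual:", round(rms, 4), "K")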

  14. An Analysis of Lexical Errors in Written English | Abaya | Annals of ...

    African Journals Online (AJOL)

    The national reports on K.C.P.E results indicate that most pupils make errors in composition writing, thus contributing to the poor performance in English language. Thus, our objective was to examine the lexical errors in the pupils' written English. We found that pupils make three categories of lexical errors due to first ...

  15. An Analysis of Lexical Errors of Korean Language Learners: Some American College Learners' Case

    Science.gov (United States)

    Kang, Manjin

    2014-01-01

    There has been a huge amount of research on errors of language learners. However, most of them have focused on syntactic errors and those about lexical errors are not found easily despite the importance of lexical learning for the language learners. The case is even rarer for Korean language. In line with this background, this study was designed…

  16. Analysis and interpretation of Viking labeled release experimental results

    Science.gov (United States)

    Levin, G. V.

    1979-01-01

    The Viking Labeled Release (LR) life detection experiment on the surface of Mars produced data consistent with a biological interpretation. In considering the plausibility of this interpretation, terrestrial life forms were identified which could serve as models for Martian microbial life. Prominent among these models are lichens which are known to survive for years in a state of cryptobiosis, to grow in hostile polar environments, to exist on atmospheric nitrogen as sole nitrogen source, and to survive without liquid water by absorbing water directly from the atmosphere. Another model is derived from the endolithic bacteria found in the dry Antarctic valleys; preliminary experiments conducted with samples of these bacteria indicate that they produce positive LR responses approximating the Mars results. However, because of the hostility of the Martian environment to life, and the failure to find organics on the surface of Mars, a number of nonbiological explanations were advanced to account for the Viking LR data. A reaction of the LR nutrient with putative surface hydrogen peroxide is the leading candidate. Other possibilities raised include reactions caused by or with ultraviolet irradiation, gamma-Fe2O3, metalloperoxides or superoxides.

  17. Lexis in Chinese-English Translation of Drug Package Inserts: Corpus-based Error Analysis and Its Translation Strategies.

    Science.gov (United States)

    Ying, Lin; Yumei, Zhou

    2010-12-01

    Error analysis (EA) has been broadly applied to research on writing, speaking, second language acquisition (SLA) and translation. This study was carried out based on Carl James' error taxonomy to investigate the distribution of lexical errors in Chinese-English (C-E) translation of drug package inserts (DPIs), explore the underlying causes and propose some translation strategies for correction and reduction of lexical errors in DPIs. A translation corpus consisting of 25 DPIs translated from Chinese into English was established. Lexical errors in the corpus and the error causes were analyzed qualitatively and quantitatively. Some examples were used to analyze the lexical errors and their causes, and some strategies for translating vocabulary in DPIs were proposed according to Eugene Nida's translation theory. This study will not only help translators and medical workers reduce errors in C-E translation of vocabulary in DPIs and other types of medical texts but also shed light on the learning and teaching of C-E translation of medical texts.

  18. Sources of Error and the Statistical Formulation of M_S:m_b Seismic Event Screening Analysis

    Science.gov (United States)

    Anderson, D. N.; Patton, H. J.; Taylor, S. R.; Bonner, J. L.; Selby, N. D.

    2014-03-01

    The Comprehensive Nuclear-Test-Ban Treaty (CTBT), a global ban on nuclear explosions, is currently in a ratification phase. Under the CTBT, an International Monitoring System (IMS) of seismic, hydroacoustic, infrasonic and radionuclide sensors is operational, and the data from the IMS are analysed by the International Data Centre (IDC). The IDC provides CTBT signatories basic seismic event parameters and a screening analysis indicating whether an event exhibits explosion characteristics (for example, shallow depth). An important component of the screening analysis is a statistical test of the null hypothesis H_0: explosion characteristics using empirical measurements of seismic energy (magnitudes). The established magnitude used for event size is the body-wave magnitude (denoted m_b) computed from the initial segment of a seismic waveform. IDC screening analysis is applied to events with m_b greater than 3.5. The Rayleigh wave magnitude (denoted M_S) is a measure of later-arriving surface wave energy. Magnitudes are measurements of seismic energy that include adjustments (physical correction model) for path and distance effects between event and station. Relative to m_b, earthquakes generally have a larger M_S magnitude than explosions. This article proposes a hypothesis test (screening analysis) using M_S and m_b that expressly accounts for physical correction model inadequacy in the standard error of the test statistic. With this hypothesis test formulation, the announced 2009 Democratic People's Republic of Korea nuclear weapon test fails to reject the null hypothesis H_0: explosion characteristics.
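
    A schematic version of such a test is sketched below in Python: the statistic is built from M_S - m_b, and its standard error is inflated by a model-inadequacy variance term in addition to the measurement terms. The offset, variances and event values are placeholders for illustration, not IDC screening parameters.

      from math import sqrt
      from scipy.stats import norm

      def screen(ms, mb, se_ms, se_mb, sigma_model, offset=1.25, alpha=0.05):
          """One-sided test of H0: explosion characteristics (M_S - m_b <= -offset)."""
          stat = ms - mb + offset
          # Standard error combines magnitude measurement errors with a
          # physical-correction-model inadequacy term.
          se = sqrt(se_ms**2 + se_mb**2 + sigma_model**2)
          p_value = norm.sf(stat / se)        # large M_S - m_b favours the earthquake alternative
          return stat / se, p_value

      z, p = screen(ms=3.6, mb=4.5, se_ms=0.12, se_mb=0.10, sigma_model=0.15)
      verdict = "fails to reject H0 (explosion-like)" if p > 0.05 else "rejects H0 (earthquake-like)"
      print(f"z = {z:.2f}, p = {p:.3f} -> {verdict}")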

  19. Fringe-print-through error analysis and correction in snapshot phase-shifting interference microscope.

    Science.gov (United States)

    Zhang, Yu; Tian, Xiaobo; Liang, Rongguang

    2017-10-30

    To reduce environmental errors, a snapshot phase-shifting interference microscope (SPSIM) has been developed for surface roughness measurement. However, fringe-print-through (FPT) error is widespread in phase-shifting interferometry (PSI). To ensure the measurement accuracy, we analyze the sources that introduce the FPT error in the SPSIM. We also develop an FPT error correction algorithm that can be used under different intensity distribution conditions. Simulation and experiment verify the correctness and feasibility of the FPT error correction algorithm.

  20. [Error analysis of the land surface temperature retrieval using HJ-1B thermal infrared remote sensing data].

    Science.gov (United States)

    Zhao, Li-Min; Yu, Tao; Tian, Qing-Jiu; Gu, Xing-Fa; Li, Jia-Guo; Wan, Wei

    2010-12-01

    Error analysis plays an important role in the application of remote sensing data and models. A theoretical analysis of error sensitivities in land surface temperature (LST) retrieval using a radiative transfer (RT) model is introduced and applied to the new thermal infrared remote sensing data of the HJ-1B satellite (IRS4). The modification of the RT model with MODTRAN 4 for IRS4 data is described. Error sensitivities of the model are exhibited by analyzing the derivatives with respect to its parameters. It is shown that the greater the water vapor content and the smaller the emissivity and temperature, the greater the LST retrieval error. The main error sources are equivalent noise and the uncertainties in water vapor content and emissivity, which lead to errors of 0.7, 0.6 and 0.5 K on LST under typical conditions, respectively. Hence, a total error of about 1 K for LST is found. It is concluded that the LST retrieved from HJ-1B data is not credible when the application requires an accuracy better than 1 K, unless more accurate in situ measurements of atmospheric parameters and emissivity are applied.

  1. Ergodic Capacity Analysis of Free-Space Optical Links with Nonzero Boresight Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-04-01

    A unified capacity analysis of a free-space optical (FSO) link that accounts for nonzero boresight pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection as well as heterodyne detection) is addressed in this work. More specifically, an exact closed-form expression for the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system is presented in terms of well-known elementary functions. Capitalizing on these new moments expressions, we present approximate and simple closed-form results for the ergodic capacity at high and low SNR regimes. All the presented results are verified via computer-based Monte-Carlo simulations.
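
    As a rough companion to the Monte-Carlo verification mentioned above, the sketch below estimates ergodic capacity E[log2(1 + SNR)] by simulation. The channel model (lognormal turbulence combined with a Gaussian-beam pointing loss with a boresight offset) and all parameter values are simplified assumptions, not the exact statistical model analyzed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        N = 200_000

        # Illustrative channel: lognormal turbulence times a pointing-error loss
        # from a Gaussian beam with a nonzero boresight displacement (assumptions).
        sigma_x = 0.25                                   # log-amplitude std
        turbulence = np.exp(rng.normal(-2 * sigma_x**2, 2 * sigma_x, N))

        w_z = 2.5                                        # beam width at the receiver
        s = rng.rayleigh(0.3, N) + 0.4                   # radial offset incl. boresight error
        pointing = np.exp(-2 * s**2 / w_z**2)

        gain = turbulence * pointing
        avg_snr = 10**(20.0 / 10)                        # 20 dB average electrical SNR
        snr = avg_snr * gain**2 / np.mean(gain**2)       # IM/DD: SNR scales with gain^2

        print("ergodic capacity ~ %.2f bit/s/Hz" % np.mean(np.log2(1.0 + snr)))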

  2. An Analysis of Dissertation Abstracts In Terms Of Translation Errors and Academic Discourse

    Directory of Open Access Journals (Sweden)

    Canan TERZI

    2014-12-01

    Full Text Available This study aimed at evaluating the English abstracts of MA and PhD dissertations published in Turkish and at identifying translation errors and problems concerning academic style and discourse. A random selection of MA and PhD dissertation abstracts, drawn both from dissertations by Turkish-speaking researchers and from those by English-speaking researchers, was used. The corpus consists of 90 abstracts of MA and PhD dissertations. The abstracts were analyzed in terms of problems stemming from translation issues and from academic discourse and style. The findings indicated that Turkish-speaking researchers rely on their translation skills while writing their abstracts in English. Contrary to initial expectations, the analysis of rhetorical moves did not reveal great differences in move structure, from which we concluded that there might be some universally accepted rhetorical structure in dissertation abstracts.

  3. Design and Error Analysis of a Vehicular AR System with Auto-Harmonization.

    Science.gov (United States)

    Foxlin, Eric; Calloway, Thomas; Zhang, Hongsheng

    2015-12-01

    This paper describes the design, development and testing of an AR system that was developed for aerospace and ground vehicles to meet stringent accuracy and robustness requirements. The system uses an optical see-through HMD, and thus requires extremely low latency, high tracking accuracy and precision alignment and calibration of all subsystems in order to avoid mis-registration and "swim". The paper focuses on the optical/inertial hybrid tracking system and describes novel solutions to the challenges with the optics, algorithms, synchronization, and alignment with the vehicle and HMD systems. Tracker accuracy is presented with simulation results to predict the registration accuracy. A car test is used to create a through-the-eyepiece video demonstrating well-registered augmentations of the road and nearby structures while driving. Finally, a detailed covariance analysis of AR registration error is derived.

  4. Optogenetic Analysis of Depolarization-Dependent Glucagonlike Peptide-1 Release.

    Science.gov (United States)

    Chimerel, Catalin; Riccio, Cristian; Murison, Keir; Gribble, Fiona M; Reimann, Frank

    2017-10-01

    Incretin hormones play an important role in the regulation of food intake and glucose homeostasis. Glucagonlike peptide-1 (GLP-1)-secreting cells have been demonstrated to be electrically excitable and to fire action potentials (APs) with increased frequency in response to nutrient exposure. However, nutrients can also be metabolized or activate G-protein-coupled receptors, thus potentially stimulating GLP-1 secretion independent of their effects on the plasma membrane potential. Here we used channelrhodopsins to manipulate the membrane potential of GLUTag cells, a well-established model of GLP-1-secreting enteroendocrine L cells. Using channelrhodopsins with fast or slow on/off kinetics (CheTA and SSFO, respectively), we found that trains of light pulses could trigger APs and calcium elevation in GLUTag cells stably expressing either CheTA or SSFO. Tetrodotoxin reduced light-triggered AP frequency but did not impair calcium responses, whereas further addition of the calcium-channel blockers nifedipine and ω-conotoxin GVIA abolished both APs and calcium transients. Light pulse trains did not trigger GLP-1 secretion from CheTA-expressing cells under basal conditions but were an effective stimulus when cyclic adenosine monophosphate (cAMP) concentrations were elevated by forskolin plus 3-isobutyl 1-methylxanthine. In SSFO-expressing cells, light-stimulated GLP-1 release was observed at resting and elevated cAMP concentrations and was blocked by nifedipine plus ω-conotoxin GVIA but not tetrodotoxin. We conclude that cAMP elevation or cumulative membrane depolarization triggered by SSFO enhances the efficiency of light-triggered action potential firing, voltage-gated calcium entry, and GLP-1 secretion.

  5. Aroma release and retronasal perception during and after consumption of flavored whey protein gels with different textures. 1. in vivo release analysis.

    Science.gov (United States)

    Mestres, Montserrat; Moran, Noelia; Jordan, Alfons; Buettner, Andrea

    2005-01-26

    The influence of gel texture on retronasal aroma release during mastication was followed by means of real-time proton-transfer reaction mass spectrometry and compared to sensory perception of overall aroma intensity. A clear correlation was found between individual-specific consumption patterns and the respective physicochemical release patterns in vivo. A modified data analysis approach was used to monitor the aroma changes during the mastication process. It was found that the temporal resolution of the release profile played an important role in adequate description of the release processes. On the basis of this observation, a hypothesis is presented for the observed differences in intensity rating.

  6. Error estimation and adaptive mesh refinement for parallel analysis of shell structures

    Science.gov (United States)

    Keating, Scott C.; Felippa, Carlos A.; Park, K. C.

    1994-01-01

    The formulation and application of element-level, element-independent error indicators is investigated. This research culminates in the development of an error indicator formulation which is derived based on the projection of element deformation onto the intrinsic element displacement modes. The qualifier 'element-level' means that no information from adjacent elements is used for error estimation. This property is ideally suited for obtaining error values and driving adaptive mesh refinements on parallel computers, where access to neighboring elements residing on different processors may incur significant overhead. In addition, such estimators are insensitive to the presence of physical interfaces and junctures. An error indicator qualifies as 'element-independent' when only visible quantities such as element stiffness and nodal displacements are used to quantify error. Error evaluation at the element level and element independence are highly desired properties for computing error in production-level finite element codes. Four element-level error indicators have been constructed. Two of the indicators are based on a variational formulation of the element stiffness and are element-dependent; their derivations are retained for developmental purposes. The other two indicators match and exceed the first two in performance but require no special formulation of the element stiffness, and they are used to drive adaptive mesh refinement, which we demonstrate for two-dimensional plane stress problems. The parallelization of substructures and adaptive mesh refinement is discussed, and the final error indicator is demonstrated using two-dimensional plane-stress and three-dimensional shell problems.

  7. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors, and each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report in both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not by itself affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  8. Artificial intelligence environment for the analysis and classification of errors in discrete sequential processes

    Energy Technology Data Exchange (ETDEWEB)

    Ahuja, S.B.

    1985-01-01

    The study evolved over two phases. First, an existing artificial intelligence technique, heuristic state space search, was used to successfully address and resolve significant issues that have prevented automated error classification in the past. A general method was devised for constructing heuristic functions to guide the search process, which successfully avoided the combinatorial explosion normally associated with search paradigms. A prototype error classifier, SLIPS/I, was tested and evaluated using both real-world data from a databank of speech errors and artificially generated random errors. It showed that heuristic state space search is a viable paradigm for conducting domain-independent error classification within practical limits of memory space and processing time. The second phase considered sequential error classification as a diagnostic process in which a set of disorders (elementary errors) is said to be a classification of an observed set of manifestations (local differences between an intended sequence and the errorful sequence) if it provides a regular cover for them. Using a model of abductive logic based on set covering theory, this new perspective of error classification as a diagnostic process models human diagnostic reasoning in classifying complex errors. A high-level, non-procedural error specification language (ESL) was also designed.

  9. Analysis and Compensation for Gear Accuracy with Setting Error in Form Grinding

    Directory of Open Access Journals (Sweden)

    Chenggang Fang

    2015-01-01

    Full Text Available In the process of form grinding, the gear setting error is the main factor that influences form grinding accuracy; we propose an effective method to improve form grinding accuracy that corrects the error by controlling the machine operations. Based on establishing the geometric model of form grinding and representing the gear setting errors in homogeneous coordinates, the tooth mathematical model under the gear setting error was obtained and simplified. Then, according to the gear standards ISO 1328-1:1997 and ANSI/AGMA 2015-1-A01:2002, the relationships between the gear setting errors and the tooth profile deviation, helix deviation, and cumulative pitch deviation were investigated under the conditions of gear eccentricity error, gear inclination error, and gear resultant error, respectively. An error compensation method was proposed based on solving the sensitivity coefficient matrix of the setting error in a five-axis CNC form grinding machine; simulation and experimental results demonstrate that the method can effectively correct the gear setting error and further improve the form grinding accuracy.
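
    A minimal sketch of the core idea, representing setting errors as homogeneous transformations and composing them, is given below. The eccentricity and inclination values, the composition order, and the sample flank point are illustrative assumptions, not the paper's actual machine parameters.

        import numpy as np

        def rot_x(angle):
            """Homogeneous rotation about the x axis (e.g. an inclination error)."""
            c, s = np.cos(angle), np.sin(angle)
            return np.array([[1, 0, 0, 0],
                             [0, c, -s, 0],
                             [0, s, c, 0],
                             [0, 0, 0, 1.0]])

        def trans(dx, dy, dz):
            """Homogeneous translation (e.g. an eccentricity error)."""
            T = np.eye(4)
            T[:3, 3] = [dx, dy, dz]
            return T

        # Illustrative setting errors: 10 um radial eccentricity, 0.005 deg inclination
        T_err = rot_x(np.deg2rad(0.005)) @ trans(0.010, 0.0, 0.0)   # units: mm

        # Map an ideal tooth-flank point into the error-affected frame
        p_ideal = np.array([40.0, 5.0, 10.0, 1.0])                  # homogeneous, mm
        p_actual = T_err @ p_ideal
        print("flank-point deviation (mm):", p_actual[:3] - p_ideal[:3])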

  10. VR-based training and assessment in ultrasound-guided regional anesthesia: from error analysis to system design.

    LENUS (Irish Health Repository)

    2011-01-01

    If VR-based medical training and assessment is to improve patient care and safety (i.e. a genuine health gain), it has to be based on clinically relevant measurement of performance. Metrics on errors are particularly useful for capturing and correcting undesired behaviors before they occur in the operating room. However, translating clinically relevant metrics and errors into meaningful system design is a challenging process. This paper discusses how an existing task and error analysis was translated into the system design of a VR-based training and assessment environment for Ultrasound Guided Regional Anesthesia (UGRA).

  11. An Error Analysis on Using Simple Past Tense by Eleventh Year Students of the Ark School Sidikalang

    OpenAIRE

    Tambunan, Wandi

    2015-01-01

    This thesis is entitled "An Error Analysis of Using Simple Past Tense by the Eleventh Year Students of The Ark School Sidikalang". The thesis describes the errors made by the eleventh year students in using the past tense when writing recount texts. The purpose of writing this thesis is to identify the types of errors and their causes. This research uses a qualitative method. The population is the eleventh year students of The Ark School Sidikalang, which consists of four classes, and totalling the popu...

  12. Error budget analysis of SCIAMACHY limb ozone profile retrievals using the SCIATRAN model

    Directory of Open Access Journals (Sweden)

    N. Rahpoe

    2013-10-01

    Full Text Available A comprehensive error characterization of SCIAMACHY (Scanning Imaging Absorption Spectrometer for Atmospheric CHartographY limb ozone profiles has been established based upon SCIATRAN transfer model simulations. The study was carried out in order to evaluate the possible impact of parameter uncertainties, e.g. in albedo, stratospheric aerosol optical extinction, temperature, pressure, pointing, and ozone absorption cross section on the limb ozone retrieval. Together with the a posteriori covariance matrix available from the retrieval, total random and systematic errors are defined for SCIAMACHY ozone profiles. Main error sources are the pointing errors, errors in the knowledge of stratospheric aerosol parameters, and cloud interference. Systematic errors are of the order of 7%, while the random error amounts to 10–15% for most of the stratosphere. These numbers can be used for the interpretation of instrument intercomparison and validation of the SCIAMACHY V 2.5 limb ozone profiles in a rigorous manner.

  13. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving machine tool accuracy by increasing the precision of key components in the production process, this work adopts a method that combines the SNR quality loss function with correlation analysis of machine tool geometric errors to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method makes it reasonable and appropriate to relax the ranges of tolerance values, so as to reduce the manufacturing cost of machine tools.

  14. Error intraobservador en el análisis paleohistológico de superficies craneofaciales / Intra-observer error in paleohistological analysis of craniofacial surfaces

    Directory of Open Access Journals (Sweden)

    Natalia Brachetta Aporta

    2015-12-01

    Full Text Available In the histological analysis of craniofacial bone surfaces, microstructural features produced by bone modeling activity are recorded, as well as other features not related to normal growth (taphonomic alterations). The identification of areas produced by cellular activity, as well as the determination of their distribution and total extent, may be subject to several sources of error. In this regard, the aim of the present work is to evaluate intra-observer error in the recording of microstructures corresponding to formation and resorption on craniofacial bone surfaces associated with bone modeling. To this end, an observational design of randomized complete blocks with repeated measures was carried out using high-resolution replicas of the glabella, the malar and the maxilla observed under incident light microscopy. The results revealed trends in the observations over time, as well as differences in recognition according to the type of bone surface and the region analyzed. In general, agreement increased across repetitions in the observation of the type of activity and in the quantification of the extent of formation and resorption areas. Likewise, features associated with resorption activity, as well as regions with abrupt topography such as the maxillary region, proved more difficult to analyze. These results provide a frame of reference for evaluating the reliability of observations in future paleohistological studies. KEY WORDS: observation error; experimental design; bone modeling surfaces

  15. Toxic release consequence analysis tool (TORCAT) for inherently safer design plant.

    Science.gov (United States)

    Shariff, Azmi Mohd; Zaini, Dzulkarnain

    2010-10-15

    Many major accidents due to toxic release in the past have caused many fatalities, such as the tragedy of the MIC release in Bhopal, India (1984). One approach is to use the inherently safer design technique, which utilizes inherent safety principles to eliminate or minimize accidents rather than to control the hazard. This technique is best implemented in the preliminary design stage, where the consequence of a toxic release can be evaluated and necessary design improvements can be implemented to eliminate or minimize accidents to as low as reasonably practicable (ALARP) without resorting to costly protective systems. However, currently there is no commercial tool available that has such capability. This paper reports the preliminary findings on the development of a prototype tool for consequence analysis and design improvement via inherent safety principles, built by integrating a process design simulator with a toxic release consequence analysis model. Consequence analyses based on worst-case scenarios during the process flowsheeting stage were conducted as case studies. The preliminary findings show that the toxic release consequence analysis tool (TORCAT) has the capability to eliminate or minimize potential toxic release accidents by adopting the inherent safety principle early in the preliminary design stage. 2010 Elsevier B.V. All rights reserved.

  16. Analysis of fiber optic gyroscope vibration error based on improved local mean decomposition and kernel principal component analysis.

    Science.gov (United States)

    Song, Rui; Chen, Xiyuan

    2017-03-10

    The fiber optic gyroscope (FOG), one version of an all solid-state rotation sensor, has been widely used in navigation and positioning applications. However, the elasto-optic effect of the fiber introduces a non-negligible error into the FOG output in vibration and shock environments. To overcome the limitations of mechanical structure improvement methods and traditional nonlinear analysis approaches, a hybrid algorithm combining optimized local mean decomposition with kernel principal component analysis (OLMD-KPCA) is proposed in this paper. The higher-frequency components of the vibration signal are analyzed by OLMD and their energy is calculated to form the input vector of KPCA. In addition, the output data of the three-axis gyroscopes in an inertial measurement unit (IMU) under a vibration experiment are used to validate the effectiveness and generalization ability of the proposed approach. When compared to the wavelet transform (WT), experimental results demonstrate that the OLMD-KPCA method greatly reduces the vibration noise in the FOG output. Besides, the Allan variance analysis results indicate the error coefficients could be decreased by one order of magnitude, and the algorithmic stability of OLMD-KPCA is proven by another two sets of data under different vibration conditions.
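
    Allan variance is the standard tool used above to quantify the gyro error coefficients before and after denoising. The sketch below computes a simple non-overlapping Allan deviation for a synthetic gyro rate series; the sampling rate, noise levels and cluster times are arbitrary illustrative values.

        import numpy as np

        def allan_deviation(rate, fs, taus):
            """Non-overlapping Allan deviation of a rate signal sampled at fs (Hz)."""
            devs = []
            for tau in taus:
                m = int(round(tau * fs))                 # samples per cluster
                k = len(rate) // m                       # number of clusters
                if m < 1 or k < 2:
                    devs.append(np.nan)
                    continue
                clusters = rate[:k * m].reshape(k, m).mean(axis=1)
                devs.append(np.sqrt(0.5 * np.mean(np.diff(clusters) ** 2)))
            return np.array(devs)

        # Synthetic gyro output: constant bias plus white angle-random-walk noise
        fs = 100.0                                        # Hz
        n = int(3600 * fs)                                # one hour of data
        rng = np.random.default_rng(1)
        rate = 0.5 + 0.02 * np.sqrt(fs) * rng.standard_normal(n)   # deg/h units assumed

        taus = np.logspace(-1, 2, 10)                     # 0.1 s .. 100 s
        for tau, dev in zip(taus, allan_deviation(rate, fs, taus)):
            print(f"tau = {tau:7.2f} s   Allan deviation = {dev:.4f}")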

  17. Considerations for analysis of time-to-event outcomes measured with error: Bias and correction with SIMEX.

    Science.gov (United States)

    Oh, Eric J; Shepherd, Bryan E; Lumley, Thomas; Shaw, Pamela A

    2017-11-29

    For time-to-event outcomes, a rich literature exists on the bias introduced by covariate measurement error in regression models, such as the Cox model, and methods of analysis to address this bias. By comparison, less attention has been given to understanding the impact or addressing errors in the failure time outcome. For many diseases, the timing of an event of interest (such as progression-free survival or time to AIDS progression) can be difficult to assess or reliant on self-report and therefore prone to measurement error. For linear models, it is well known that random errors in the outcome variable do not bias regression estimates. With nonlinear models, however, even random error or misclassification can introduce bias into estimated parameters. We compare the performance of 2 common regression models, the Cox and Weibull models, in the setting of measurement error in the failure time outcome. We introduce an extension of the SIMEX method to correct for bias in hazard ratio estimates from the Cox model and discuss other analysis options to address measurement error in the response. A formula to estimate the bias induced into the hazard ratio by classical measurement error in the event time for a log-linear survival model is presented. Detailed numerical studies are presented to examine the performance of the proposed SIMEX method under varying levels and parametric forms of the error in the outcome. We further illustrate the method with observational data on HIV outcomes from the Vanderbilt Comprehensive Care Clinic. Copyright © 2017 John Wiley & Sons, Ltd.
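
    The SIMEX loop itself is generic: refit the model after adding extra error at increasing multiples of the assumed error variance, then extrapolate the estimates back to zero error. The sketch below illustrates that loop on a deliberately simple, biased estimator with additive error on a log-scale outcome; it is a stand-in for, not a reproduction of, the Cox-model correction described in the abstract, and all data and parameter values are synthetic.

        import numpy as np

        rng = np.random.default_rng(2)

        # Synthetic log event times observed with classical measurement error
        n, sigma_u = 2000, 0.5                       # sigma_u assumed known
        x = rng.standard_normal(n)
        log_t = 1.0 - 0.7 * x + 0.3 * rng.standard_normal(n)
        log_t_obs = log_t + sigma_u * rng.standard_normal(n)

        def naive_fit(y_log, x):
            # A nonlinear target (slope of x against exp-scale times), so random
            # error in the outcome biases it, unlike in a purely linear model.
            return np.polyfit(x, np.exp(y_log), 1)[0]

        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        B = 50                                       # remeasurements per lambda
        means = []
        for lam in lambdas:
            fits = [naive_fit(log_t_obs + np.sqrt(lam) * sigma_u *
                              rng.standard_normal(n), x) for _ in range(B)]
            means.append(np.mean(fits))

        # Quadratic extrapolation of the estimate back to lambda = -1 (no error)
        simex_estimate = np.polyval(np.polyfit(lambdas, means, 2), -1.0)
        print("naive:", naive_fit(log_t_obs, x), " SIMEX-corrected:", simex_estimate)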

  18. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on the mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images of patients with lung cancer using a finite element method (FEM). The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-spline based (Elastix) registrations were performed from the reference to the FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from the R-DVF, and the magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using in-house-developed FEM software. A nonlinear regression model was used based on imaging voxel data, and the analysis considered clustered voxel data within images. Results: A regression model analysis showed that UE was significantly correlated with registration error, DVF, and the product of registration error and DVF, respectively, with R^2 = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of the UE values and R-DVF*R-ERR was established. The mean registration error (N = 8) was 0.9 mm, and 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in Medicine.

  19. Analysis of Institutional Press Releases and its Visibility in the Press

    Directory of Open Access Journals (Sweden)

    José Antonio Alcoceba-Hernando, Ph.D.

    2010-01-01

    Full Text Available The relationships between institutional communication and media communication influence the shaping of social representations of public issues. This research article analyses these relationships through a case study of the external communication of a public institution: the press releases issued by Spain's Youth Institute (Instituto de la Juventud, aka Injuve) over three years and their repercussion in the press during the same period. The results allow conclusions to be drawn on the types of communication produced by the institution and on the news treatment given to this material by the printed and digital media. The press releases and the news items were studied using quantitative media content analysis, which focused especially on referential issues such as information treatment, thematic analysis and youth representations in the case of the releases, and on the visibility of the press releases in the resulting news coverage.

  20. Analysis of the “naming game” with learning errors in communications

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-01

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctly increase the memory required by each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
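
    A minimal sketch of such a simulation is given below. It implements the standard minimal naming-game update rules on a small-world-like network (a ring with random shortcuts) and models a learning error as the hearer storing a corrupted, brand-new word with a fixed probability; the network construction, error mechanism and parameter values are simplified stand-ins for the NGLE model, not its exact specification.

        import random

        def naming_game_with_errors(n=100, shortcuts=100, error_rate=0.01,
                                    max_steps=200_000, seed=0):
            rng = random.Random(seed)
            # Ring lattice plus random shortcuts (guarantees a connected network)
            nbrs = {i: {(i - 1) % n, (i + 1) % n} for i in range(n)}
            for _ in range(shortcuts):
                a, b = rng.randrange(n), rng.randrange(n)
                if a != b:
                    nbrs[a].add(b)
                    nbrs[b].add(a)
            inventories = [set() for _ in range(n)]
            next_word = 0
            for step in range(max_steps):
                speaker = rng.randrange(n)
                hearer = rng.choice(sorted(nbrs[speaker]))
                if not inventories[speaker]:                 # speaker invents a word
                    inventories[speaker].add(next_word)
                    next_word += 1
                word = rng.choice(sorted(inventories[speaker]))
                if rng.random() < error_rate:                # learning error:
                    inventories[hearer].add(next_word)       # hearer stores a new word
                    next_word += 1
                elif word in inventories[hearer]:            # success: both collapse
                    inventories[speaker] = {word}
                    inventories[hearer] = {word}
                else:                                        # failure: hearer learns it
                    inventories[hearer].add(word)
                if all(len(inv) == 1 for inv in inventories) and \
                   len(set.union(*inventories)) == 1:
                    return step                              # global consensus reached
            return None                                      # no consensus (e.g. high error rate)

        print("interactions to consensus:", naming_game_with_errors())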

  1. An Analysis on the Students' Grammatical Error in Writing Skill of Recount Text at the Tenth Grade of SMA Muhammadiyah Rambah

    OpenAIRE

    Suwarni

    2016-01-01

    The article presents an analysis of the students' grammatical errors in writing recount texts at the tenth grade of SMA Muhammadiyah Rambah. The purpose of this study was to identify the students' grammatical errors in parts of speech and tense in recount text writing. The indicators are: (a) error in sentence pattern, (b) error in tense, (c) error in pronoun, (d) error in preposition, (e) error in punctuation. From these indicators, the researcher would know the kinds of err...

  2. Medical errors in neurosurgery

    OpenAIRE

    Rolston, John D.; Zygourakis, Corinna C.; Han, Seunggu J.; Lau, Catherine Y.; Berger, Mitchel S.; Parsa, Andrew T

    2014-01-01

    Background: Medical errors cause nearly 100,000 deaths per year and cost billions of dollars annually. In order to rationally develop and institute programs to mitigate errors, the relative frequency and costs of different errors must be documented. This analysis will permit the judicious allocation of scarce healthcare resources to address the most costly errors as they are identified. Methods: Here, we provide a systematic review of the neurosurgical literature describing medical errors...

  3. Assessment of blood glucose predictors: the prediction-error grid analysis.

    Science.gov (United States)

    Sivananthan, Sampath; Naumova, Valeriya; Man, Chiara Dalla; Facchinetti, Andrea; Renard, Eric; Cobelli, Claudio; Pereverzyev, Sergei V

    2011-08-01

    Prediction of the future blood glucose (BG) evolution from continuous glucose monitoring (CGM) data is a promising direction in diabetes therapy management, and several glucose predictors have recently been proposed. This raises the problem of their assessment. There have been attempts to use for such assessment the continuous glucose-error grid analysis (CG-EGA), originally developed for CGM devices. However, in the CG-EGA the BG rate of change is estimated from past BG readings, whereas predictors provide BG estimation ahead of time. Therefore, the original CG-EGA should be modified to assess predictors. Here we propose a new version of the CG-EGA, the Prediction-Error Grid Analysis (PRED-EGA). The analysis is based both on simulated data and on data from clinical trials performed in the European FP7 project "DIAdvisor." Simulated data are used to test the ability of the analyzed CG-EGA modifications to capture erroneous predictions in a controlled situation. Real data are used to show the impact of the different CG-EGA versions in the evaluation of a predictor. Using the data of 10 virtual and 10 real subjects and analyzing two different predictors, we demonstrate that the straightforward application of the CG-EGA does not adequately classify the prediction performance. For example, we observed that up to 70% of 20-min-ahead predictions in the hyperglycemia region that are classified by this application as erroneous are, in fact, accurate. Moreover, for predictions during hypoglycemia the assessments produced by the straightforward application of the CG-EGA are not only too pessimistic (in up to 60% of cases), but this version is not able to detect real erroneous predictions. In contrast, the proposed modification of the CG-EGA, where the rate of change is estimated on the predicted BG profile, is an adequate metric for the assessment of predictions. We propose a new CG-EGA, the PRED-EGA, for the assessment of glucose predictors. The presented analysis shows that

  4. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(dn-1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.

  5. On the Error State Selection for Stationary SINS Alignment and Calibration Kalman Filters-Part II: Observability/Estimability Analysis.

    Science.gov (United States)

    Silva, Felipe O; Hemerly, Elder M; Leite Filho, Waldemar C

    2017-02-23

    This paper presents the second part of a study aiming at the error state selection in Kalman filters applied to the stationary self-alignment and calibration (SSAC) problem of strapdown inertial navigation systems (SINS). The observability properties of the system are systematically investigated, and the number of unobservable modes is established. Through the analytical manipulation of the full SINS error model, the unobservable modes of the system are determined, and the SSAC error states (except the velocity errors) are proven to be individually unobservable. The estimability of the system is determined through the examination of the major diagonal terms of the covariance matrix and their eigenvalues/eigenvectors. Filter order reduction based on observability analysis is shown to be inadequate, and several misconceptions regarding SSAC observability and estimability deficiencies are removed. As the main contributions of this paper, we demonstrate that, except for the position errors, all error states can be minimally estimated in the SSAC problem and, hence, should not be removed from the filter. Corroborating the conclusions of the first part of this study, a 12-state Kalman filter is found to be the optimal error state selection for SSAC purposes. Results from simulated and experimental tests support the outlined conclusions.
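
    The observability structure discussed above can be checked numerically from the rank and null space of the observability matrix of the linearized error model. The sketch below does this for a toy three-state, single-channel stationary error model (velocity error, attitude error, accelerometer bias) rather than the paper's full SINS error model; the system matrices are simplified placeholders.

        import numpy as np

        def observability_matrix(F, H):
            """Stack H, HF, HF^2, ... for the linear error model x_dot = F x, y = H x."""
            blocks = [H]
            for _ in range(F.shape[0] - 1):
                blocks.append(blocks[-1] @ F)
            return np.vstack(blocks)

        # Toy single-channel stationary model (placeholder, not the full SINS model):
        # velocity-error rate driven by attitude error (through gravity) and accel bias
        g = 9.81
        F = np.array([[0.0, -g, 1.0],
                      [0.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0]])
        H = np.array([[1.0, 0.0, 0.0]])     # only the velocity error is measured

        O = observability_matrix(F, H)
        rank = np.linalg.matrix_rank(O)
        print(f"observable subspace dimension: {rank} of {F.shape[0]}")

        # The unobservable directions span the null space of the observability matrix
        _, _, vt = np.linalg.svd(O)
        print("unobservable direction(s):")
        print(vt[rank:])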

  6. LOW FREQUENCY ERROR ANALYSIS AND CALIBRATION FOR HIGH-RESOLUTION OPTICAL SATELLITE’S UNCONTROLLED GEOMETRIC POSITIONING

    Directory of Open Access Journals (Sweden)

    M. Wang

    2016-06-01

    Full Text Available The low frequency error is a key factor affecting the uncontrolled geometric positioning accuracy of high-resolution optical imagery. To guarantee the geometric quality of the imagery, this paper presents an on-orbit calibration method for the low frequency error based on a geometric calibration field. Firstly, we introduce the overall flow of low frequency error on-orbit analysis and calibration, which includes optical axis angle variation detection for the star sensors, relative calibration among star sensors, multi-star-sensor information fusion, and low frequency error model construction and verification. Secondly, we use the optical axis angle change detection method to analyze the law of low frequency error variation. Thirdly, we use relative calibration and information fusion among star sensors to realize datum unification and high precision attitude output. Finally, we construct the low frequency error model and obtain optimal estimates of the model parameters based on the DEM/DOM of the geometric calibration field. To evaluate the performance of the proposed calibration method, real data from a certain type of satellite are used. Test results demonstrate that the calibration model in this paper can well describe the law of low frequency error variation. The uncontrolled geometric positioning accuracy of the high-resolution optical image in the WGS-84 coordinate system is obviously improved after the step-wise calibration.

  7. Improved Atmospheric Soundings and Error Estimates from Analysis of AIRS/AMSU Data

    Science.gov (United States)

    Susskind, Joel

    2007-01-01

    The AIRS Science Team Version 5.0 retrieval algorithm became operational at the Goddard DAAC in July 2007, generating near real-time products from analysis of AIRS/AMSU sounding data. This algorithm contains many significant theoretical advances over the AIRS Science Team Version 4.0 retrieval algorithm used previously. Three very significant developments of Version 5 are: 1) the development and implementation of an improved Radiative Transfer Algorithm (RTA) which allows for accurate treatment of non-Local Thermodynamic Equilibrium (non-LTE) effects on shortwave sounding channels; 2) the development of methodology to obtain very accurate case-by-case product error estimates which are in turn used for quality control; and 3) development of an accurate AIRS-only cloud clearing and retrieval system. These theoretical improvements taken together enabled a new methodology to be developed which further improves soundings in partially cloudy conditions, without the need for microwave observations in the cloud clearing step as has been done previously. In this methodology, longwave CO2 channel observations in the spectral region 700 cm-1 to 750 cm-1 are used exclusively for cloud clearing purposes, while shortwave CO2 channels in the spectral region 2195 cm-1 to 2395 cm-1 are used for temperature sounding purposes. The new methodology for improved error estimates and their use in quality control is described briefly and results are shown indicative of their accuracy. Results are also shown of forecast impact experiments assimilating AIRS Version 5.0 retrieval products in the Goddard GEOS-5 Data Assimilation System using different quality control thresholds.

  8. Performance Analysis of Free-Space Optical Links Over Malaga (M) Turbulence Channels with Pointing Errors

    KAUST Repository

    Ansari, Imran Shafique

    2015-08-12

    In this work, we present a unified performance analysis of a free-space optical (FSO) link that accounts for pointing errors and both types of detection techniques (i.e. intensity modulation/direct detection (IM/DD) as well as heterodyne detection). More specifically, we present unified exact closed-form expressions for the cumulative distribution function, the probability density function, the moment generating function, and the moments of the end-to-end signal-to-noise ratio (SNR) of a single-link FSO transmission system, all in terms of the Meijer's G function except for the moments, which are in terms of simple elementary functions. We then capitalize on these unified results to offer unified exact closed-form expressions for various performance metrics of FSO link transmission systems, such as the outage probability, the scintillation index (SI), the average error rate for binary and M-ary modulation schemes, and the ergodic capacity (except for the IM/DD technique, where we present closed-form lower bound results), all in terms of Meijer's G functions except for the SI, which is in terms of simple elementary functions. Additionally, we derive asymptotic results for all the expressions derived earlier in terms of the Meijer's G function in the high SNR regime in terms of simple elementary functions via an asymptotic expansion of the Meijer's G function. We also derive new asymptotic expressions for the ergodic capacity in the low as well as high SNR regimes in terms of simple elementary functions by utilizing moments. All the presented results are verified via computer-based Monte-Carlo simulations.

  9. Analysis and elimination of bias error in a fiber-optic current sensor.

    Science.gov (United States)

    Wang, Xiaxiao; Zhao, Zijie; Li, Chuansheng; Yu, Jia; Wang, Zhenjie

    2017-11-10

    Bias error, along with the scale factor, is a key factor that affects the measurement accuracy of the fiber-optic current sensor. Because of polarization crosstalk, the coherence of parasitic interference signals can be rebuilt and form an output independent of the current to be measured, i.e., the bias error. The bias error is a function of the birefringence optical path difference. Hence, when the temperature changes, the bias error shows a quasi-periodical tendency whose envelope curve reflects the coherence function of the light source. By identifying the key factors of the bias error and setting the propagation directions at the super-luminescent diode, polarization-maintaining coupler, and polarizer to the fast axis, it is possible to eliminate the coherence of the parasitic interference signals. Experiments show that the maximum bias error decreases by one order of magnitude at temperatures between -40°C and 60°C.

  10. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    and repeatability of dimensional and geometrical measurements. The aim of this paper is to discuss different methods for the correction of scaling errors and to quantify their influence on dimensional measurements. Scaling errors occur first and foremost in CT systems with no built-in compensation of positioning...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball...... geometry and is made of brass, which makes its measurements with CT challenging. It is shown that each scaling error correction method results in different deviations between CT measurements and reference measurements from a CMM. Measurement uncertainties were estimated for each method, taking...

  11. Hanford Site Composite Analysis Technical Approach Description: Waste Form Release.

    Energy Technology Data Exchange (ETDEWEB)

    Hardie, S. [CH2M HILL Plateau Remediation Company, Richland, WA (United States); Paris, B. [CH2M HILL Plateau Remediation Company, Richland, WA (United States); Apted, M. [CH2M HILL Plateau Remediation Company, Richland, WA (United States)

    2017-09-14

    The U.S. Department of Energy (DOE) in DOE O 435.1 Chg. 1, Radioactive Waste Management, requires the preparation and maintenance of a composite analysis (CA). The primary purpose of the CA is to provide a reasonable expectation that the primary public dose limit is not likely to be exceeded by multiple source terms that may significantly interact with plumes originating at a low-level waste disposal facility. The CA is used to facilitate planning and land use decisions that help assure disposal facility authorization will not result in long-term compliance problems; or, to determine management alternatives, corrective actions or assessment needs, if potential problems are identified.

  12. Performance and Error Analysis of Knill's Postselection Scheme in a Two-Dimensional Architecture

    OpenAIRE

    Lai, Ching-Yi; Paz, Gerardo; Suchara, Martin; Brun, Todd A.

    2013-01-01

    Knill demonstrated a fault-tolerant quantum computation scheme based on concatenated error-detecting codes and postselection with a simulated error threshold of 3% over the depolarizing channel. We show how to use Knill's postselection scheme in a practical two-dimensional quantum architecture that we designed with the goal to optimize the error correction properties, while ...

  13. Water flux in animals: analysis of potential errors in the tritiated water method

    Energy Technology Data Exchange (ETDEWEB)

    Nagy, K.A.; Costa, D.

    1979-03-01

    Laboratory studies indicate that tritiated water measurements of water flux are accurate to within -7 to +4% in mammals, but errors are larger in some reptiles. However, under conditions that can occur in field studies, errors may be much greater. Influx of environmental water vapor via lungs and skin can cause errors exceeding ±50% in some circumstances. If water flux rates in an animal vary through time, errors approach ±15% in extreme situations, but are near ±3% in more typical circumstances. Errors due to fractional evaporation of tritiated water may approach -9%. This error probably varies between species. Use of an inappropriate equation for calculating water flux from isotope data can cause errors exceeding ±100%. The following sources of error are either negligible or avoidable: use of isotope dilution space as a measure of body water volume, loss of nonaqueous tritium bound to excreta, binding of tritium with nonaqueous substances in the body, radiation toxicity effects, and small analytical errors in isotope measurements. Water flux rates measured with tritiated water should be within ±10% of actual flux rates in most situations.

  14. Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    incidents. The objective of this study is to review RCA reports (RCAR) for characteristics of verbal communication errors between hospital staff in an organisational perspective. Method Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions...... and characteristics of verbal communication errors such as handover errors and error during teamwork. Results Raters found description of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13...... (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44–0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...
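
    The inter-rater agreement reported above (kappa 0.44–0.78) can be reproduced in principle with Cohen's kappa. The sketch below computes kappa for two raters' binary codings of a set of reports; the ratings themselves are hypothetical, made-up values for illustration only.

        import numpy as np

        def cohens_kappa(rater_a, rater_b):
            """Cohen's kappa for two raters' categorical codings of the same items."""
            a, b = np.asarray(rater_a), np.asarray(rater_b)
            categories = np.union1d(a, b)
            p_observed = np.mean(a == b)
            p_chance = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
            return (p_observed - p_chance) / (1.0 - p_chance)

        # Hypothetical codings: did each of 20 RCA reports describe a handover error?
        rater_1 = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1]
        rater_2 = [1, 0, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 0, 1]
        print(f"kappa = {cohens_kappa(rater_1, rater_2):.2f}")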

  15. Effects of measurement errors on psychometric measurements in ergonomics studies: Implications for correlations, ANOVA, linear regression, factor analysis, and linear discriminant analysis.

    Science.gov (United States)

    Liu, Yan; Salvendy, Gavriel

    2009-05-01

    This paper aims to demonstrate the effects of measurement errors on psychometric measurements in ergonomics studies. A variety of sources can cause random measurement errors in ergonomics studies, and these errors can distort virtually every statistic computed and lead investigators to erroneous conclusions. The effects of measurement errors on the five most widely used statistical analysis tools have been discussed and illustrated: correlation; ANOVA; linear regression; factor analysis; linear discriminant analysis. It has been shown that measurement errors can greatly attenuate correlations between variables, reduce the statistical power of ANOVA, distort (overestimate, underestimate or even change the sign of) regression coefficients, underrate the explanatory contributions of the most important factors in factor analysis, and depreciate the significance of the discriminant function and the discrimination ability of individual variables in discriminant analysis. The discussion is restricted to subjective scales and survey methods and their reliability estimates. Other methods applied in ergonomics research, such as physical and electrophysiological measurements and chemical and biomedical analysis methods, also have issues of measurement errors, but they are beyond the scope of this paper. As there has been increasing interest in the development and testing of theories in ergonomics research, it has become very important for ergonomics researchers to understand the effects of measurement errors on their experimental results, which the authors believe is critical to research progress in theory development and cumulative knowledge in the ergonomics field.
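
    The attenuation effect on correlations is the classical Spearman result, and it is easy to reproduce by simulation; the true correlation and reliabilities below are arbitrary illustrative values.

        import numpy as np

        rng = np.random.default_rng(3)
        n, rho_true = 50_000, 0.6
        rel_x, rel_y = 0.7, 0.8                      # assumed scale reliabilities

        # True scores with correlation rho_true
        x_true = rng.standard_normal(n)
        y_true = rho_true * x_true + np.sqrt(1 - rho_true**2) * rng.standard_normal(n)

        # Observed score = true score + random measurement error
        x_obs = x_true + np.sqrt(1 / rel_x - 1) * rng.standard_normal(n)
        y_obs = y_true + np.sqrt(1 / rel_y - 1) * rng.standard_normal(n)

        r_obs = np.corrcoef(x_obs, y_obs)[0, 1]
        print("observed r        :", round(r_obs, 3))
        print("Spearman formula  :", round(rho_true * np.sqrt(rel_x * rel_y), 3))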

  16. Gas chromatographic-mass spectrometric urinary metabolome analysis to study mutations of inborn errors of metabolism.

    Science.gov (United States)

    Kuhara, Tomiko

    2005-01-01

    Urine contains numerous metabolites and can provide evidence for the screening or molecular diagnosis of many inborn errors of metabolism (IEMs). The metabolomic analysis of urine by the combined use of urease pretreatment, stable-isotope dilution, and capillary gas chromatography/mass spectrometry offers reliable and quantitative data for the simultaneous screening or molecular diagnosis of more than 130 IEMs. These IEMs include hyperammonemias and lactic acidemias, and the IEMs of amino acids, pyrimidines, purines, carbohydrates, and others including primary hyperoxalurias, hereditary fructose intolerance, propionic acidemia, and methylmalonic acidemia. Metabolite analysis is comprehensive with respect to mutant genotypes. Enzyme dysfunction, whether caused by an abnormal structure of the enzyme/apoenzyme, a reduced quantity of a normal enzyme/apoenzyme, or the lack of a coenzyme, is covered. Enzyme dysfunction caused by an abnormal regulatory gene, abnormal sub-cellular localization, or abnormal post-transcriptional or post-translational modification is also included. Mutations, whether known or unknown, common or uncommon, are involved. If the urine metabolome approach can accurately detect quantitative abnormalities for hundreds of metabolites, reflecting 100 different disease-causing reactions in a body, then it is possible to simultaneously detect far more than tens of thousands of different mutant genotypes. (c) 2004 Wiley Periodicals, Inc., Mass Spec Rev 24:814-827, 2005.

  17. LABOUR MARKET IN UKRAINE: AN EMPIRICAL DYNAMIC ANALYSIS USING ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    I. Lukyanenko

    2014-06-01

    Full Text Available The labour market in Ukraine has not only economic but also significant social importance, and its regulation is therefore an important element of social and economic policy. The effectiveness of state socio-economic regulation mechanisms requires profound analysis, modeling and forecasting of labour market processes by means of modern, flexible econometric tools that take into account the short-term dynamics of economic processes and the features characteristic of the unstable economic development of our country. As a result of empirical research on the relationships between the macroeconomic indicators of the labour market in Ukraine, we developed a set of dynamic econometric models using an error-correction mechanism, which take into account the long-run equilibrium relationships and also provide an opportunity to model the short-term effects of several factors, such as the rate of change of wages, the size of the labour force, employment and unemployment. The developed models are used to predict future trends of the labour market, as well as to describe the dynamics of its operation under various alternative scenarios of economic development. The application of the developed specifications in the structure of an integral macroeconometric model of Ukraine will allow us to carry out a comprehensive analysis of economic processes in the national economy and its prospects both in the short term and in the long run.
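
    The error-correction specification can be sketched with the standard two-step Engle-Granger procedure; the series below are synthetic stand-ins for the wage and employment indicators, not the paper's data, and statsmodels is assumed to be available.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(4)
        T = 300

        # Synthetic cointegrated pair: x is a random walk, y tracks x in the long run
        x = np.cumsum(rng.standard_normal(T))
        y = 2.0 + 0.5 * x + 0.5 * rng.standard_normal(T)

        # Step 1: long-run (cointegrating) regression; its residual is the ECM term
        longrun = sm.OLS(y, sm.add_constant(x)).fit()
        ecm_term = longrun.resid

        # Step 2: short-run dynamics of dy on dx and the lagged equilibrium error
        dy, dx = np.diff(y), np.diff(x)
        X = sm.add_constant(np.column_stack([dx, ecm_term[:-1]]))
        ecm = sm.OLS(dy, X).fit()
        print(ecm.params)   # constant, short-run effect of dx, error-correction speed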

  18. Software Tool for Analysis of Breathing-Related Errors in Transthoracic Electrical Bioimpedance Spectroscopy Measurements

    Science.gov (United States)

    Abtahi, F.; Gyllensten, I. C.; Lindecrantz, K.; Seoane, F.

    2012-12-01

    During the last decades, Electrical Bioimpedance Spectroscopy (EBIS) has been applied in a range of different applications, mainly using the frequency-sweep technique. Traditionally the tissue under study is considered to be time-invariant, and dynamic changes of tissue activity are ignored and instead treated as a noise source. This assumption has not been adequately tested and could have a negative impact and limit the accuracy of impedance monitoring systems. In order to successfully use frequency-sweep EBIS for monitoring time-variant systems, it is paramount to study the effect of the frequency-sweep delay on Cole-model-based analysis. In this work, we present a software tool that can be used to simulate the influence of respiratory activity in frequency-sweep EBIS measurements of the human thorax and to analyse the effects of the different error sources. Preliminary results indicate that the deviation in the EBIS measurement might be significant at any frequency, and especially in the impedance plane. Therefore the impact on Cole-model analysis might differ depending on the method applied for Cole parameter estimation.
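
    The underlying effect is easy to sketch: evaluate a Cole impedance whose low-frequency resistance R0 is modulated by a breathing signal while the frequency sweep progresses, and compare it with a snapshot spectrum taken at a fixed R0. The Cole parameters, sweep duration and breathing modulation below are illustrative values, not the tool's actual settings.

        import numpy as np

        def cole(freq, r_inf, r0, tau, alpha):
            """Cole impedance Z(f) = R_inf + (R0 - R_inf) / (1 + (j*2*pi*f*tau)^alpha)."""
            jw = 1j * 2.0 * np.pi * freq
            return r_inf + (r0 - r_inf) / (1.0 + (jw * tau) ** alpha)

        freqs = np.logspace(3, 6, 50)                   # 1 kHz .. 1 MHz sweep
        sweep_time = 0.5                                # full-sweep duration in seconds
        t = np.linspace(0.0, sweep_time, freqs.size)    # acquisition time per frequency

        r_inf, tau, alpha = 25.0, 3e-6, 0.85            # illustrative thorax-like values
        r0_t = 50.0 + 2.0 * np.sin(2 * np.pi * 0.3 * t) # breathing modulates R0 (~18/min)

        z_swept = cole(freqs, r_inf, r0_t, tau, alpha)  # tissue changes during the sweep
        z_static = cole(freqs, r_inf, 50.0, tau, alpha) # ideal instantaneous snapshot

        print("max |deviation| over the sweep: %.3f ohm" % np.abs(z_swept - z_static).max())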

  19. Relationship Between Patients' Perceptions of Care Quality and Health Care Errors in 11 Countries: A Secondary Data Analysis.

    Science.gov (United States)

    Hincapie, Ana L; Slack, Marion; Malone, Daniel C; MacKinnon, Neil J; Warholak, Terri L

    2016-01-01

    Patients may be the most reliable reporters of some aspects of the health care process; their perspectives should be considered when pursuing changes to improve patient safety. The authors evaluated the association between patients' perceived health care quality and self-reported medical, medication, and laboratory errors in a multinational sample. The analysis was conducted using the 2010 Commonwealth Fund International Health Policy Survey, a multinational consumer survey conducted in 11 countries. Quality of care was measured by a multifaceted construct developed using Rasch techniques. After adjusting for potentially important confounding variables, an increase in respondents' perceptions of care coordination decreased the odds of self-reporting medical errors, medication errors, and laboratory errors (P < .001). As health care stakeholders continue to search for initiatives that improve care experiences and outcomes, this study's results emphasize the importance of guaranteeing integrated care.

  20. Analysis of influence on back-EMF based sensorless control of PMSM due to parameter variations and measurement errors

    DEFF Research Database (Denmark)

    Wang, Z.; Lu, K.; Ye, Y.

    2011-01-01

    To achieve better performance of sensorless control of a PMSM, a precise and stable estimation of rotor position and speed is required. Several parameter uncertainties and measurement errors may lead to estimation error, such as resistance and inductance variations due to temperature and flux saturation, current and voltage errors due to measurement uncertainties, and signal delay caused by hardware. This paper reveals some inherent principles governing the performance of the back-EMF based sensorless algorithm embedded in a surface-mounted PMSM system adopting a vector control strategy, gives mathematical analysis and experimental results to support those principles, and quantifies the effects of each error source. It may serve as guidance for designers seeking to minimize the estimation error and make proper on-line parameter estimations.

  1. Clock Synchronization in Wireless Sensor Networks: Analysis and Design of Error Precision Based on Lossy Networked Control Perspective

    Directory of Open Access Journals (Sweden)

    Wang Ting

    2015-01-01

    Full Text Available Clock synchronization is of central importance in wireless sensor networks (WSNs); because of packet loss, the synchronization error variance is a random variable and may exceed the designed bound on the synchronization variance. Based on a clock synchronization state-space model, this paper formulates the synchronization error variance analysis and design problems. In the analysis problem, assuming sensor nodes exchange clock information over a network with packet loss, we find the minimum clock-information packet arrival rate that guarantees the synchronization precision at the synchronization node. In the design problem, assuming a sensor node can freely schedule whether to send clock information, we look for an optimal clock-information exchange rate between the synchronization node and the reference node that offers the best tradeoff between energy consumption and synchronization precision at the synchronization node. Finally, simulations further verify the validity of the clock synchronization analysis and design from the perspective of the synchronization error variance.
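    A toy illustration (not the authors' state-space formulation) of why packet loss makes the synchronization error variance random: the offset error variance grows between successful clock-information exchanges and is reduced only when a packet actually arrives. All numerical values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)

q = 1e-3      # variance added per slot by clock drift (illustrative)
r = 1e-4      # residual variance after a successful synchronization (illustrative)
p = 0.6       # probability that a clock-information packet arrives in a slot
bound = 5e-3  # designed variance bound (illustrative)
slots = 10_000

var = r
history = np.empty(slots)
for k in range(slots):
    var += q                      # drift accumulates every slot
    if rng.random() < p:          # packet arrives, correction applied (idealized reset)
        var = r
    history[k] = var

print(f"mean error variance: {history.mean():.4e}")
print(f"fraction of slots exceeding the bound: {(history > bound).mean():.3f}")
```

    Sweeping the arrival rate p in such a simulation shows the tradeoff the record describes: a higher exchange rate keeps the variance below the bound more often, at the cost of additional energy spent on clock-information packets.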

  2. Medical error analysis in dermatology according to the reports of the North Rhine Medical Association from 2004-2013.

    Science.gov (United States)

    Lehmann, Lion; Wesselmann, Ulrich; Weber, Beate; Smentkowski, Ulrich

    2015-09-01

    Patient safety is a central issue of health care provision. There are various approaches geared towards improving health care provision and patient safety. By conducting a systematic retrospective error analysis, the present article aims to identify the most common complaints brought forth within the field of dermatology over a period of ten years. The reports of the Expert Committee for Medical Malpractice Claims of the North Rhine Medical Association (from 2004 to 2013) on dermatological procedures were analyzed (n = 247 reports in the field of dermatology). Expert medical assessments in the field of dermatology are most frequently commissioned for nonsurgical therapies (e.g. laser therapy, phototherapy). While suspected diagnostic errors constitute the second most common reason for complaints, presumed dermatosurgery-related errors represent the least common reason for commissioning expert medical assessments. The most common and easily avoidable sources of medical errors include failure to take a biopsy despite suspicious clinical findings, or incorrect clinicopathological correlations resulting in deleterious effects for the patient. Furthermore, given the potential for incorrect indications and the inadequate selection of devices to be used as well as their parameter settings, laser and phototherapies harbor an increased risk in the treatment of dermatological patients. The fourth major source of error leading to complaints relates to incorrect indications as well as incorrect dosage and administration of drugs. Analysis of expert medical assessment reports on treatment errors in dermatology as well as other medical specialties is helpful and provides an opportunity to identify common sources of error and error-prone structures. © 2015 Deutsche Dermatologische Gesellschaft (DDG). Published by John Wiley & Sons Ltd.

  3. SBUV version 8.6 Retrieval Algorithm: Error Analysis and Validation Technique

    Science.gov (United States)

    Kramarova, N. A.; Bhartia, P. K.; Frith, P. K.; McPeters, S. M.; Labow, R. D.; Taylor, G.; Fisher, S.; DeLand, M.

    2012-01-01

    SBUV version 8.6 algorithm was used to reprocess data from the Backscattered Ultraviolet (BUV), the Solar Backscattered Ultraviolet (SBUV) and a number of SBUV/2 instruments, which span a 41-year period from 1970 to 2011 (except for a 5-year gap in the 1970s) [see Bhartia et al., 2012]. In the new version the Daumont et al. [1992] ozone cross sections were used, and new ozone [McPeters et al., 2007] and cloud [Joiner and Bhartia, 1995] climatologies were implemented. The algorithm uses the Optimal Estimation technique [Rodgers, 2000] to retrieve ozone profiles as layer amounts (partial columns, DU) on 21 pressure layers. The corresponding total ozone values are calculated by summing the ozone columns of the individual layers. The algorithm is optimized to accurately retrieve monthly zonal mean (mzm) profiles rather than individual profiles, since it uses a monthly zonal mean ozone climatology as the a priori. Thus, the SBUV version 8.6 ozone dataset is better suited for long-term trend analysis and monitoring of ozone changes than for studying short-term ozone variability. Here we discuss some characteristics of the SBUV algorithm and sources of error in the SBUV profile and total ozone retrievals. For the first time the averaging kernels, smoothing errors and weighting functions (or Jacobians) are included in the SBUV metadata. The averaging kernels (AK) represent the sensitivity of the retrieved profile to the true state and contain valuable information about the retrieval algorithm, such as vertical resolution, degrees of freedom for signal (DFS) and retrieval efficiency [Rodgers, 2000]. Analysis of the AK for mzm ozone profiles shows that the total DFS for ozone profiles varies from 4.4 to 5.5 out of the 6-9 wavelengths used for retrieval. The number of wavelengths in turn depends on the solar zenith angle. Between 25 and 0.5 hPa, where the SBUV vertical resolution is highest, the DFS for individual layers is about 0.5.
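    Following Rodgers (2000), the total degrees of freedom for signal quoted above are simply the trace of the averaging-kernel matrix, and the per-layer values are its diagonal; a minimal sketch with a made-up kernel (not actual SBUV averaging kernels) is shown below.

```python
import numpy as np

# Hypothetical averaging-kernel matrix A (n_layers x n_layers); in a real retrieval it is
# a by-product of the Optimal Estimation inversion (Rodgers, 2000), here it is invented.
layers = np.arange(21)
diag_sens = 0.5 * np.exp(-((layers - 10) / 6.0) ** 2)   # peak sensitivity mid-profile
A = (np.diag(diag_sens)
     + 0.1 * np.diag(diag_sens[:-1], k=1)
     + 0.1 * np.diag(diag_sens[:-1], k=-1))

dfs_total = np.trace(A)        # total degrees of freedom for signal
dfs_per_layer = np.diag(A)     # per-layer contribution to the DFS

print(f"total DFS: {dfs_total:.2f}")                      # roughly 5 for this toy kernel
print(f"DFS of the middle layer: {dfs_per_layer[10]:.2f}")
```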

  4. An analysis of the potential for Glen Canyon Dam releases to inundate archaeological sites in the Grand Canyon, Arizona

    Science.gov (United States)

    Sondossi, Hoda A.; Fairley, Helen C.

    2014-01-01

    The development of a one-dimensional flow-routing model for the Colorado River between Lees Ferry and Diamond Creek, Arizona in 2008 provided a potentially useful tool for assessing the degree to which varying discharges from Glen Canyon Dam may inundate terrestrial environments and potentially affect resources located within the zone of inundation. Using outputs from the model, a geographic information system analysis was completed to evaluate the degree to which flows from Glen Canyon Dam might inundate archaeological sites located along the Colorado River in the Grand Canyon. The analysis indicates that between 4 and 19 sites could be partially inundated by flows released from Glen Canyon Dam under current (2014) operating guidelines, and as many as 82 archaeological sites may have been inundated to varying degrees by uncontrolled high flows released in June 1983. Additionally, the analysis indicates that more of the sites currently (2014) proposed for active management by the National Park Service are located at low elevations and, therefore, tend to be more susceptible to potential inundation effects than sites not currently (2014) targeted for management actions, although the potential for inundation occurs in both groups of sites. Because of several potential sources of error and uncertainty associated with the model and with limitations of the archaeological data used in this analysis, the results are not unequivocal. These caveats, along with the fact that dam-related impacts can involve more than surface-inundation effects, suggest that the results of this analysis should be used with caution to infer potential effects of Glen Canyon Dam on archaeological sites in the Grand Canyon.

  5. Descriptive vector, relative error matrix, and interaction analysis of multivariable plants

    NARCIS (Netherlands)

    Monshizadeh-Naini, Nima; Fatehi, Alireza; Kahki-Sedigh, Ali

    In this paper, we introduce a vector which is able to describe the Niederlinski Index (NI), Relative Gain Array (RGA), and the characteristic equation of the relative error matrix. The spectral radius and the structured singular value of the relative error matrix are investigated. The cases where
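    For reference, the Relative Gain Array and Niederlinski Index named in this record are computed from a square steady-state gain matrix G as RGA = G ∘ (G^-1)^T (elementwise product) and NI = det(G) / Π_i g_ii; a minimal sketch with an illustrative 2×2 plant follows.

```python
import numpy as np

def rga(G):
    """Relative Gain Array: elementwise product of G and the transpose of its inverse."""
    return G * np.linalg.inv(G).T

def niederlinski_index(G):
    """NI = det(G) / product of the diagonal elements; a negative NI rules out the pairing."""
    return np.linalg.det(G) / np.prod(np.diag(G))

G = np.array([[2.0, 1.5],
              [1.0, 4.0]])   # example steady-state gain matrix (illustrative)

print("RGA =\n", rga(G))
print("NI  =", niederlinski_index(G))
```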

  6. Estimation of Error Components in Cohort Studies: A Cross-Cohort Analysis of Dutch Mathematics Achievement

    Science.gov (United States)

    Keuning, Jos; Hemker, Bas

    2014-01-01

    The data collection of a cohort study requires making many decisions. Each decision may introduce error in the statistical analyses conducted later on. In the present study, a procedure was developed for estimation of the error made due to the composition of the sample, the item selection procedure, and the test equating process. The math results…

  7. Analysis on the dynamic error for optoelectronic scanning coordinate measurement network

    Science.gov (United States)

    Shi, Shendong; Yang, Linghui; Lin, Jiarui; Guo, Siyang; Ren, Yongjie

    2018-01-01

    Large-scale dynamic three-dimensional coordinate measurement techniques are in strong demand in equipment manufacturing. Noted for its advantages of high accuracy, scale expandability and multitask parallel measurement, the optoelectronic scanning measurement network has received close attention. It is widely used in large-component jointing, spacecraft rendezvous and docking simulation, digital shipbuilding and automated guided vehicle navigation. At present, most research on optoelectronic scanning measurement networks is focused on static measurement capability, and research on dynamic accuracy is insufficient. Limited by the measurement principle, the dynamic error is non-negligible and restricts applications. The workshop measurement and positioning system is a representative system which can, in theory, realize dynamic measurement. In this paper we investigate the sources of dynamic error and divide them into two parts: phase error and synchronization error. A dynamic error model is constructed. Based on this model, simulations of the dynamic error are carried out. The dynamic error is quantified, and its volatility and periodicity are identified. The dynamic error characteristics are shown in detail. The results lay a foundation for further accuracy improvement.

  8. Error Analysis of Present Simple Tense in the Interlanguage of Adult Arab English Language Learners

    Science.gov (United States)

    Muftah, Muneera; Rafik-Galea, Shameem

    2013-01-01

    The present study analyses errors in the present simple tense among adult Arab English language learners. It focuses on errors involving the third person singular present tense agreement morpheme "-s" (3sg "-s"). The learners are undergraduate adult Arabic speakers learning English as a foreign language. The study gathered data from…

  9. Analysis and Experiment of Encoding Errors for MOEMS Micro Mirror Spectrometer

    Science.gov (United States)

    Xiang-Xia, Mo; Zhi-Yu, Wen; Zhi-Hai, Zhang; Yuanjun, Guo

    Micro mirror arrays, used in the novel spectrometer to achieve Hadamard transform (HT) modulation and spectrum detection with a single detector, can also be considered as a blazed grating. During modulation, the spectrum cannot be completely reflected by the micro mirror arrays in the "on" state, while in the "off" state some light still reaches the detector, since the mirror arrays modulate light by diffraction rather than reflection. This causes encoding errors. To diminish these encoding errors, a blazed grating model for the mirror arrays is proposed. In this paper, both the encoding error and its compensation are analyzed for the HT algorithm. First, a theoretical model of micro mirror array modulation is established, and the light field distribution is calculated based on Fraunhofer diffraction theory. Then MathCAD and Matlab software are used to simulate and correct the HT encoding errors. Finally, experimental tests were performed on the micro mirror spectrometer system. The "on" state errors caused by the micro mirrors can be eliminated by dividing out the background or rectifying the HT mask encoding; however, the "off" state errors can only be eliminated by constructing a compensation HT matrix. Both the "on" and "off" state errors coexist in real situations. The experiments not only reduce the encoding errors, but also increase the signal-to-noise ratio.
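    An idealized sketch of the Hadamard-transform encoding the spectrometer relies on (ignoring the diffraction effects analyzed in the record): an S-matrix derived from a Sylvester Hadamard matrix encodes the spectrum into single-detector measurements, which are then decoded by inverting the mask. The spectrum and noise level are toy values.

```python
import numpy as np
from scipy.linalg import hadamard

# Build a 7x7 S-matrix from the order-8 Sylvester Hadamard matrix:
# drop the first row/column and map +1 -> 0 (mirror "off"), -1 -> 1 (mirror "on").
H = hadamard(8)
S = ((1 - H[1:, 1:]) // 2).astype(float)

rng = np.random.default_rng(0)
spectrum = rng.uniform(0.0, 1.0, size=7)          # "true" spectral intensities (toy data)

measurements = S @ spectrum                        # single-detector readings, one per mask
measurements += rng.normal(scale=0.01, size=7)     # detector noise

decoded = np.linalg.solve(S, measurements)         # ideal decoding of the HT encoding
print("rms reconstruction error:", np.sqrt(np.mean((decoded - spectrum) ** 2)))
```

    In this ideal setting decoding is exact up to detector noise; the encoding errors studied in the record arise precisely because the physical "on"/"off" mask entries are not the ideal 1 and 0 assumed here.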

  10. Analysis of error type and frequency in apraxia of speech among Portuguese speakers

    Directory of Open Access Journals (Sweden)

    Maysa Luchesi Cera

    Full Text Available Abstract Most studies characterizing errors in the speech of patients with apraxia involve the English language. Objectives: To analyze the types and frequency of errors produced by patients with apraxia of speech whose mother tongue was Brazilian Portuguese. Methods: 20 adults with apraxia of speech caused by stroke were assessed. The types of error committed by patients were analyzed both quantitatively and qualitatively, and their frequencies compared. Results: We observed the presence of substitution, omission, trial-and-error, repetition, self-correction, anticipation, addition, reiteration and metathesis, in descending order of frequency. Omission errors were among the most commonly occurring, whereas addition errors were infrequent. These findings differed from those reported for English-speaking patients, probably owing to differences in the methodologies used for classifying error types; the inclusion of speakers with apraxia secondary to aphasia; and differences between the structure of Portuguese and English in terms of syllable onset complexity and its effect on motor control. Conclusions: The frequencies of omission and addition errors observed differed from those reported for speakers of English.

  11. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Directory of Open Access Journals (Sweden)

    Taynná Vernalha Rocha Almeida

    2016-04-01

    Full Text Available Abstract Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the head phantom ART-210 was used.

  12. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Taynna Vernalha Rocha [Faculdades Pequeno Principe (FPP), Curitiba, PR (Brazil); Cordova Junior, Arno Lotar; Almeida, Cristiane Maria; Piedade, Pedro Argolo; Silva, Cintia Mara da, E-mail: taynnavra@gmail.com [Centro de Radioterapia Sao Sebastiao, Florianopolis, SC (Brazil); Brincas, Gabriela R. Baseggio [Centro de Diagnostico Medico Imagem, Florianopolis, SC (Brazil); Marins, Priscila; Soboll, Danyel Scheidegger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2016-03-15

    Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the head phantom ART-210 was used. (author)

  13. Root cause analysis of transfusion error: identifying causes to implement changes.

    Science.gov (United States)

    Elhence, Priti; Veena, S; Sharma, Raj Kumar; Chaudhary, R K

    2010-12-01

    As part of ongoing efforts to improve transfusion safety, an error reporting system was implemented in our hospital-based transfusion medicine unit at a tertiary care medical institute. This system is based on the Medical Event Reporting System-Transfusion Medicine (MERS-TM) and collects data on all near misses, no-harm events, and misadventures related to the transfusion process. Root cause analysis of one such innocuous-appearing error demonstrates how weaknesses in the system can be identified so that the changes necessary to achieve transfusion safety can be made. The reported error was investigated, classified, coded, and analyzed using a MERS-TM prototype, modified and adopted for our institute. The consequent error was a "mistransfusion" but a "no-harm event", as the transfused unit was of the same blood group as the patient. It was a high event severity level error (level 1). Multiple errors preceded the final error at various functional locations in the transfusion process. Human, organizational, and patient-related factors were identified as root causes, and corrective actions were initiated to prevent future occurrences. This case illustrates the usefulness of having an error reporting system in hospitals to highlight human and system failures associated with transfusion that may otherwise go unnoticed. Areas can be identified where resources need to be targeted to improve patient safety. © 2010 American Association of Blood Banks.

  14. Analysis of Errors Committed by Physics Students in Secondary Schools in Ilorin Metropolis, Nigeria

    Science.gov (United States)

    Omosewo, Esther Ore; Akanbi, Abdulrasaq Oladimeji

    2013-01-01

    The study attempts to find out the types of errors committed and the influence of gender on the types of errors committed by senior secondary school physics students in Ilorin metropolis. Six (6) schools were purposively chosen for the study. One hundred and fifty-five students' scripts were randomly sampled for the study. Joint Mock physics essay questions…

  15. Error rates in forensic DNA analysis: Definition, numbers, impact and communication

    NARCIS (Netherlands)

    Kloosterman, A.; Sjerps, M.; Quak, A.

    2014-01-01

    Forensic DNA casework is currently regarded as one of the most important types of forensic evidence, and important decisions in intelligence and justice are based on it. However, errors occasionally occur and may have very serious consequences. In other domains, error rates have been defined and

  16. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom*

    Science.gov (United States)

    Almeida, Taynná Vernalha Rocha; Cordova Junior, Arno Lotar; Piedade, Pedro Argolo; da Silva, Cintia Mara; Marins, Priscila; Almeida, Cristiane Maria; Brincas, Gabriela R. Baseggio; Soboll, Danyel Scheidegger

    2016-01-01

    Objective To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the head phantom ART-210 was used. PMID:27141132

  17. Cost-benefit analysis of the detection of prescribing errors by hospital pharmacy staff

    NARCIS (Netherlands)

    Van Den Bemt, Patricia M. L. A.; Postma, Maarten J.; Van Roon, Eric N.; Chow, Man-Chie C.; Fijn, Roel; Brouwers, Jacobus R. B. J.

    2002-01-01

    Objective: Prescribing errors are a major cause of iatrogenic patient morbidity and therefore interventions aimed at preventing the adverse outcomes of these errors are likely to result in cost reduction. However, it is unclear whether the costs associated with these preventive measures are

  18. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy ...

    Indian Academy of Sciences (India)

    deviates seriously from a Gaussian distribution, and it is not suitable to calculate the systematic error σsys of SNIa by the χ2 test method. In our view, the real intrinsic error of an SNIa compilation should be based on the statistical distribution diagram of the number of SNIa over their absolute bolometric magnitudes (see Fig. 1).

  19. A Human Reliability Analysis of Post- Accident Human Errors in the Low Power and Shutdown PSA of KSNP

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Daeil; Kim, J. H.; Jang, S. C

    2007-03-15

    Korea Atomic Energy Research Institute, using the ANS low power and shutdown (LPSD) probabilistic risk assessment (PRA) Standard, evaluated the LPSD PSA model of the KSNP, Yonggwang Units 5 and 6, and identified the items to be improved. The evaluation of the human reliability analysis (HRA) of post-accident human errors in the LPSD PSA model for the KSNP showed that 10 of the 19 supporting requirements for post-accident human errors in the ANS PRA Standard needed improvement. Thus, we carried out a new HRA for post-accident human errors in the LPSD PSA model for the KSNP. Compared with the previous analysis, the improvements in the HRA of post-accident human errors include: a site visit and interviews with operators on the interpretation of procedures, the modeling of operator actions, and the quantification of human errors; application of a limiting value to the combined post-accident human errors; and documentation of all inputs and bases for the detailed quantifications and the dependency analysis using quantification sheets. The assessment of the new HRA results for post-accident human errors against the ANS LPSD PRA Standard shows that more than 80% of its supporting requirements for post-accident human errors were graded as Category II. The number of human errors re-estimated using the LPSD Korea Standard HRA method is 385. Among them, the number of individual post-accident human errors is 253, and the number of dependent post-accident human errors is 135. The quantification results of the LPSD PSA model for the KSNP with the new HEPs show that the core damage frequency (CDF) is increased by 5.1% compared with the previous baseline CDF. It is expected that these study results will be greatly helpful in improving the PSA quality for domestic nuclear power plants because they have sufficient PSA quality to meet Category II of the supporting requirements for the post

  20. Spatio‐temporal analysis and modeling of short‐term wind power forecast errors

    DEFF Research Database (Denmark)

    Tastu, Julija; Pinson, Pierre; Kotwa, Ewelina

    2011-01-01

    Forecasts of wind power production are increasingly being used in various management tasks. So far, such forecasts and related uncertainty information have usually been generated individually for a given site of interest (either a wind farm or a group of wind farms), without properly accounting for the spatio‐temporal dependencies observed in the wind generation field. However, it is intuitively expected that, owing to the inertia of meteorological forecasting systems, a forecast error made at a given point in space and time will be related to forecast errors at other points in space in the following … forecast errors are proposed, and their ability to mimic this structure is discussed. The best performing model is shown to explain 54% of the variations of the forecast errors observed for the individual forecasts used today. Even though focus is on 1‐h‐ahead forecast errors and on western Denmark only…

  1. Analysis of False Positive Errors of an Acute Respiratory Infection Text Classifier due to Contextual Features.

    Science.gov (United States)

    South, Brett R; Shen, Shuying; Chapman, Wendy W; Delisle, Sylvain; Samore, Matthew H; Gundlapalli, Adi V

    2010-03-01

    Text classifiers have been used for biosurveillance tasks to identify patients with diseases or conditions of interest. When compared to a clinical reference standard of 280 cases of Acute Respiratory Infection (ARI), a text classifier consisting of simple rules and NegEx plus string matching for specific concepts of interest produced 569 (4%) false positive (FP) cases. Using instance-level manual annotation, we estimate the prevalence of contextual attributes and error types leading to FP cases. Errors were due to (1) deletion errors from abbreviations, spelling mistakes and missing synonyms (57%); (2) insertion errors from templated document structures such as check boxes and lists of signs and symptoms (36%); and (3) substitution errors from irrelevant concepts and alternate meanings of the same word (6%). We demonstrate that specific concept attributes contribute to false positive cases. These results will inform modifications and adaptations to improve text classifier performance.
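    A toy, regex-based illustration of the rule-plus-negation matching such a classifier uses; the concept list, negation cues and context window are placeholders, not the authors' actual NegEx configuration.

```python
import re

# Hypothetical ARI concepts and negation cues (illustrative only).
CONCEPTS = ["cough", "fever", "pneumonia", "shortness of breath"]
NEGATION_CUES = r"\b(no|denies|without|negative for)\b"

def classify_ari(text: str) -> bool:
    """Flag a note as ARI-positive if any concept appears without a nearby negation cue."""
    text = text.lower()
    for concept in CONCEPTS:
        for match in re.finditer(re.escape(concept), text):
            window = text[max(0, match.start() - 40):match.start()]
            if not re.search(NEGATION_CUES, window):
                return True
    return False

print(classify_ari("Patient denies cough or fever."))             # False
print(classify_ari("Presents with fever and productive cough."))  # True
```

    The error types listed in the record map directly onto such rules: missing synonyms cause deletion errors, templated check-box text causes insertion errors, and ambiguous words cause substitution errors.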

  2. Analysis and classification of errors made by teams during neonatal resuscitation.

    Science.gov (United States)

    Yamada, Nicole K; Yaeger, Kimberly A; Halamek, Louis P

    2015-11-01

    The Neonatal Resuscitation Program (NRP) algorithm serves as a guide to healthcare professionals caring for neonates transitioning to extrauterine life. Despite this, adherence to the algorithm is challenging, and errors are frequent. Information-dense, high-risk fields such as air traffic control have proven that formal classification of errors facilitates recognition and remediation. This study was performed to determine and characterize common deviations from the NRP algorithm during neonatal resuscitation. Audiovisual recordings of 250 real neonatal resuscitations were obtained between April 2003 and May 2004. Of these, 23 complex resuscitations were analyzed for adherence to the contemporaneous NRP algorithm and scored using a novel classification tool based on the validated NRP Megacode Checklist. Seven hundred eighty algorithm-driven tasks were observed. One hundred ninety-four tasks were completed incorrectly, for an average error rate of 23%. Forty-two were errors of omission (28% of all errors) and 107 were errors of commission (72% of all errors). Many errors were repetitive and potentially clinically significant: failure to assess heart rate and/or breath sounds, improper rate of positive pressure ventilation, inadequate peak inspiratory and end expiratory pressures during ventilation, improper chest compression technique, and asynchronous PPV and CC. Errors of commission, especially when performing advanced life support interventions such as positive pressure ventilation, intubation, and chest compressions, are common during neonatal resuscitation and are sources of potential harm. The adoption of error reduction strategies capable of decreasing cognitive and technical load and standardizing communication - strategies common in other industries - should be considered in healthcare. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  3. ANALYSIS OF RELATIONSHIPS BETWEEN THE LEVEL OF ERRORS IN LEG AND MONOFIN MOVEMENT AND STROKE PARAMETERS IN MONOFIN SWIMMING

    Directory of Open Access Journals (Sweden)

    Marek Rejman

    2013-03-01

    Full Text Available The aim of this study was to analyze the structure of errors in propulsive movements with regard to their influence on monofin swimming speed. Random cycles performed by six swimmers were filmed during a progressive test (900 m). An objective method was employed to estimate the errors committed in the angular displacement of the feet and monofin segments. The parameters were compared with a previously described model. Mutual dependences between the level of errors, stroke frequency, stroke length and amplitude in relation to swimming velocity were analyzed. The results showed that proper foot movements and the avoidance of errors arising at the distal part of the fin ensure the progression of swimming speed. An individual distribution of stroke parameters, in which stroke frequency is increased optimally to the maximal level that still allows stroke length to be stabilized, leads to the minimization of errors. Identification of key elements of the stroke structure based on the analysis of errors committed should aid in improving monofin swimming technique.

  4. Preliminary Analysis of Effect of Random Segment Errors on Coronagraph Performance

    Science.gov (United States)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-01-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 10^10 of the host star's light with 10^-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion than an aperture with fewer or more segments.

  5. The involvement of Pharmacovigilance Centres in medication errors detection: a questionnaire-based analysis.

    Science.gov (United States)

    Benabdallah, Ghita; Benkirane, Raja; Khattabi, Asmae; Edwards, I Ralph; Bencheikh, Rachida Soulaymani

    2011-01-01

    This study assesses the ability of Pharmacovigilance Centres (PVCs) to detect medication errors (ME) and to contribute to building Patient Safety (PS) via their information networks, and underlines the limits of this challenge. This was an exploratory study conducted among PVCs that are members of the World Health Organization International Drug Monitoring network. A questionnaire specifically designed for the needs of the study was sent to the network via a confidential email system. The questionnaire asked for information on the progress and improvements made by PVCs in PS and ME. Among the 88 countries, 21 answered. Reporting of Adverse Drug Reactions (ADRs) by health care professionals (HCP) is mandatory for 42% of PVCs. 100% of countries receive reports from HCP, 66% from patients and 24% from PCCs. ADR reports are received through all means of communication. There is heterogeneity among countries regarding PVC and PS activities. Among them, 4 PVCs have PS organization as their prime activity. PVCs are able to detect and analyze ME. There is a need to coordinate efforts between countries to optimize ME detection and analysis. Bridges need to be built linking PVCs, PCCs and PS organizations in order to avoid duplication of workload.

  6. Multidisciplinary framework for human reliability analysis with an application to errors of commission and dependencies

    Energy Technology Data Exchange (ETDEWEB)

    Barriere, M.T.; Luckas, W.J. [Brookhaven National Lab., Upton, NY (United States); Wreathall, J. [Wreathall (John) and Co., Dublin, OH (United States); Cooper, S.E. [Science Applications International Corp., Reston, VA (United States); Bley, D.C. [PLG, Inc., Newport Beach, CA (United States); Ramey-Smith, A. [Nuclear Regulatory Commission, Washington, DC (United States). Div. of Systems Technology

    1995-08-01

    Since the early 1970s, human reliability analysis (HRA) has been considered to be an integral part of probabilistic risk assessments (PRAs). Nuclear power plant (NPP) events, from Three Mile Island through the mid-1980s, showed the importance of human performance to NPP risk. Recent events demonstrate that human performance continues to be a dominant source of risk. In light of these observations, the current limitations of existing HRA approaches become apparent when the role of humans is examined explicitly in the context of real NPP events. The development of new or improved HRA methodologies to more realistically represent human performance is recognized by the Nuclear Regulatory Commission (NRC) as a necessary means to increase the utility of PRAs. To accomplish this objective, an Improved HRA Project, sponsored by the NRC's Office of Nuclear Regulatory Research (RES), was initiated in late February, 1992, at Brookhaven National Laboratory (BNL) to develop an improved method for HRA that more realistically assesses the human contribution to plant risk and can be fully integrated with PRA. This report describes the research efforts, including the development of a multidisciplinary HRA framework, the characterization and representation of errors of commission, and an approach for addressing human dependencies. The implications of the research and necessary requirements for further development also are discussed.

  7. Error Analysis and Evaluation of the Latest GSMap and IMERG Precipitation Products over Eastern China

    Directory of Open Access Journals (Sweden)

    Shaowei Ning

    2017-01-01

    Full Text Available The present study comprehensively analyzes the error characteristics and performance of the two latest GPM-era satellite precipitation products over eastern China from April 2014 to March 2016. The results indicate that the two products have quite different spatial distributions of total bias. Many of the underestimations of the GSMap-gauged product could be traced to significant hit bias, with a secondary contribution from missed precipitation. For IMERG, total bias indicates significant overestimation over most of eastern China, except the upper reaches of the Yangtze and Yellow River basins. GSMap-gauged tends to overestimate light precipitation (<16 mm/day) and underestimate precipitation with rain rates larger than 16 mm/day; however, IMERG underestimates precipitation at rain rates between 8 and 64 mm/day and overestimates precipitation at rain rates above 64 mm/day. IMERG overestimates extreme precipitation indices (RR99P and R20TOT), with relative bias values of 17.9% and 11.5%, respectively, while GSMap-gauged shows significant underestimation of these indices. In addition, both products performed well in the Huaihe, Liaohe, and Yangtze River basins for extreme precipitation detection. In basin-scale comparisons, the GSMap-gauged data has relatively higher accuracy than IMERG, especially in the Haihe, Huaihe, Liaohe, and Yellow River basins.
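    A minimal sketch of the bias statistics referenced above, using the conventional definitions of relative bias and an approximate hit/miss/false decomposition; the daily series and the rain/no-rain threshold are synthetic placeholders, not the GSMap or IMERG data.

```python
import numpy as np

rng = np.random.default_rng(2)
gauge = rng.gamma(shape=0.4, scale=5.0, size=365)        # daily gauge rainfall, mm (toy)
sat = gauge * rng.uniform(0.7, 1.3, size=365)            # satellite estimate (toy)
sat[rng.random(365) < 0.05] = 0.0                        # a few missed events

thresh = 0.1  # mm/day rain / no-rain threshold (assumed)
hit = (gauge >= thresh) & (sat >= thresh)
miss = (gauge >= thresh) & (sat < thresh)
false = (gauge < thresh) & (sat >= thresh)

total_bias = np.sum(sat - gauge)
hit_bias = np.sum(sat[hit] - gauge[hit])
missed_precip = -np.sum(gauge[miss])    # negative contribution: rain the satellite missed
false_precip = np.sum(sat[false])       # positive contribution: rain reported but not observed

rel_bias_pct = 100.0 * total_bias / np.sum(gauge)
print(f"relative bias: {rel_bias_pct:.1f}%")
print(f"approx. decomposition: hit {hit_bias:.1f}, missed {missed_precip:.1f}, false {false_precip:.1f}")
```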

  8. Quantifying the predictive consequences of model error with linear subspace analysis

    Science.gov (United States)

    White, Jeremy T.; Doherty, John E.; Hughes, Joseph D.

    2014-01-01

    All computer models are simplified and imperfect simulators of complex natural systems. The discrepancy arising from simplification induces bias in model predictions, which may be amplified by the process of model calibration. This paper presents a new method to identify and quantify the predictive consequences of calibrating a simplified computer model. The method is based on linear theory, and it scales efficiently to the large numbers of parameters and observations characteristic of groundwater and petroleum reservoir models. The method is applied to a range of predictions made with a synthetic integrated surface-water/groundwater model with thousands of parameters. Several different observation processing strategies and parameterization/regularization approaches are examined in detail, including use of the Karhunen-Loève parameter transformation. Predictive bias arising from model error is shown to be prediction specific and often invisible to the modeler. The amount of calibration-induced bias is influenced by several factors, including how expert knowledge is applied in the design of parameterization schemes, the number of parameters adjusted during calibration, how observations and model-generated counterparts are processed, and the level of fit with observations achieved through calibration. Failure to properly implement any of these factors in a prediction-specific manner may increase the potential for predictive bias in ways that are not visible to the calibration and uncertainty analysis process.

  9. High accuracy position method based on computer vision and error analysis

    Science.gov (United States)

    Chen, Shihao; Shi, Zhongke

    2003-09-01

    The study of high-accuracy positioning systems is becoming a hotspot in the field of automatic control, and positioning is one of the most researched tasks in vision systems. We therefore address object locating using image processing methods. This paper describes a new method of high-accuracy positioning through a vision system. In the proposed method, an edge-detection filter is designed for a certain running condition. The filter contains two main parts: one is the image-processing module, which implements edge detection and consists of multi-level threshold self-adapting segmentation, edge detection and edge filtering; the other is the object-locating module, which determines the location of each object with high accuracy and is made up of median filtering and curve fitting. This paper gives an error analysis for the method to prove the feasibility of vision-based position detection. Finally, to verify the applicability of the method, an example of a positioning worktable using the proposed method is given at the end of the paper. Results show that the method can accurately detect the position of the measured object and identify the object attitude.

  10. Preliminary analysis of effect of random segment errors on coronagraph performance

    Science.gov (United States)

    Stahl, Mark T.; Shaklan, Stuart B.; Stahl, H. Philip

    2015-09-01

    "Are we alone in the Universe?" is probably the most compelling science question of our generation. To answer it requires a large aperture telescope with extreme wavefront stability. To image and characterize Earth-like planets requires the ability to block 1010 of the host star's light with a 10-11 stability. For an internal coronagraph, this requires correcting wavefront errors and keeping that correction stable to a few picometers rms for the duration of the science observation. This requirement places severe specifications upon the performance of the observatory, telescope and primary mirror. A key task of the AMTD project (initiated in FY12) is to define telescope level specifications traceable to science requirements and flow those specifications to the primary mirror. From a systems perspective, probably the most important question is: What is the telescope wavefront stability specification? Previously, we suggested this specification should be 10 picometers per 10 minutes; considered issues of how this specification relates to architecture, i.e. monolithic or segmented primary mirror; and asked whether it was better to have few or many segments. This paper reviews the 10 picometers per 10 minutes specification; provides analysis related to the application of this specification to segmented apertures; and suggests that a 3 or 4 ring segmented aperture is more sensitive to segment rigid body motion that an aperture with fewer or more segments.

  11. Statistical model to perform error analysis of curve fits of wind tunnel test data using the techniques of analysis of variance and regression analysis

    Science.gov (United States)

    Alston, D. W.

    1981-01-01

    The objective of this research was to design a statistical model that could perform an error analysis of curve fits of wind tunnel test data using analysis of variance and regression analysis techniques. Four related subproblems were defined, and by solving each of these a solution to the general research problem was obtained. The capabilities of the resulting statistical model are considered. The least squares fit is used to determine the nature of the force, moment, and pressure data. The order of the curve fit is increased in order to remove the quadratic effect in the residuals. The analysis of variance is used to determine the magnitude and effect of the error factor associated with the experimental data.
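    A minimal sketch of the regression side of such a model: fit least-squares curves of increasing order to synthetic "wind tunnel" data and inspect how the residual variance changes once the quadratic effect is captured. The data are made up, and the full ANOVA partitioning is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = np.linspace(-5, 15, 40)   # angle of attack, deg (toy data)
cl = 0.1 * alpha + 0.002 * alpha**2 + rng.normal(scale=0.05, size=alpha.size)  # toy coefficient data

for order in (1, 2, 3):
    coeffs = np.polyfit(alpha, cl, order)            # least-squares polynomial fit
    resid = cl - np.polyval(coeffs, alpha)           # residuals of the fit
    dof = alpha.size - (order + 1)                   # residual degrees of freedom
    print(f"order {order}: residual variance = {np.sum(resid**2) / dof:.5f}")
```

    The residual variance drops sharply between order 1 and order 2 and then levels off, which is the pattern the record describes when the quadratic effect is removed from the residuals.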

  12. Impact of Non-Gaussian Error Volumes on Conjunction Assessment Risk Analysis

    Science.gov (United States)

    Ghrist, Richard W.; Plakalovic, Dragan

    2012-01-01

    An understanding of how an initially Gaussian error volume becomes non-Gaussian over time is an important consideration for space-vehicle conjunction assessment. Traditional assumptions applied to the error volume artificially suppress the true non-Gaussian nature of the space-vehicle position uncertainties. For typical conjunction assessment objects, representation of the error volume by a state error covariance matrix in a Cartesian reference frame is a more significant limitation than the assumption of linearized dynamics for propagating the error volume. In this study, the impact of each assumption is examined and isolated for each point in the volume. Limitations arising from representing the error volume in a Cartesian reference frame are corrected by employing a Monte Carlo approach to the probability of collision (Pc), using equinoctial samples from the Cartesian position covariance at the time of closest approach (TCA) between the pair of space objects. A set of actual, higher risk (Pc >= 10^-4) conjunction events in various low-Earth orbits is analyzed using Monte Carlo methods. The impact of non-Gaussian error volumes on Pc for these cases is minimal, even when the deviation from a Gaussian distribution is significant.
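    A sketch of the Monte Carlo probability-of-collision calculation described above, reduced to its essentials: sample relative positions at TCA from a (here Gaussian, purely illustrative) covariance and count the fraction of samples falling inside the combined hard-body radius. The mean, covariance and radius are toy values chosen so the estimate is non-trivial; the record's method additionally samples in equinoctial elements rather than directly in Cartesian coordinates.

```python
import numpy as np

rng = np.random.default_rng(4)

rel_pos_mean = np.array([50.0, -30.0, 10.0])           # relative position at TCA, metres (toy)
rel_pos_cov = np.diag([100.0**2, 80.0**2, 40.0**2])    # combined position covariance (toy)
hard_body_radius = 20.0                                 # combined object radius, metres (assumed)

n = 1_000_000
samples = rng.multivariate_normal(rel_pos_mean, rel_pos_cov, size=n)
miss_dist = np.linalg.norm(samples, axis=1)             # miss distance of each sample
pc = np.mean(miss_dist < hard_body_radius)              # fraction of samples that "collide"

print(f"Monte Carlo Pc estimate: {pc:.2e}")
```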

  13. Measurement-based analysis of error latency. [in computer operating system

    Science.gov (United States)

    Chillarege, Ram; Iyer, Ravishankar K.

    1987-01-01

    This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.

  14. Methylphenidate improves diminished error and feedback sensitivity in ADHD: An evoked heart rate analysis.

    Science.gov (United States)

    Groen, Yvonne; Mulder, Lambertus J M; Wijers, Albertus A; Minderaa, Ruud B; Althaus, Monika

    2009-09-01

    Attention Deficit Hyperactivity Disorder (ADHD) is a developmental disorder that has previously been related to a decreased sensitivity to errors and feedback. Supplementary to the traditional performance measures, this study uses autonomic measures to study this decreased sensitivity in ADHD and the modulating effects of medication. Children with ADHD, on and off Methylphenidate (Mph), and typically developing (TD) children performed a selective attention task with three feedback conditions: reward, punishment and no feedback. Evoked Heart Rate (EHR) responses were computed for correct and error trials. All groups performed more efficiently with performance feedback than without. EHR analyses, however, showed that enhanced EHR decelerations on error trials seen in TD children, were absent in the medication-free ADHD group for all feedback conditions. The Mph-treated ADHD group showed 'normalised' EHR decelerations to errors and error feedback, depending on the feedback condition. This study provides further evidence for a decreased physiological responsiveness to errors and error feedback in children with ADHD and for a modulating effect of Mph.

  15. Error analysis of cine phase contrast MRI velocity measurements used for strain calculation.

    Science.gov (United States)

    Jensen, Elisabeth R; Morrow, Duane A; Felmlee, Joel P; Odegard, Gregory M; Kaufman, Kenton R

    2015-01-02

    Cine Phase Contrast (CPC) MRI offers unique insight into localized skeletal muscle behavior by providing the ability to quantify muscle strain distribution during cyclic motion. Muscle strain is obtained by temporally integrating and spatially differentiating CPC-encoded velocity. The aim of this study was to quantify CPC measurement accuracy and precision and to describe error propagation into displacement and strain. Using an MRI-compatible jig to move a B-gel phantom within a 1.5 T MRI bore, CPC-encoded velocities were collected. The three orthogonal encoding gradients (through plane, frequency, and phase) were evaluated independently in post-processing. Two systematic error types were corrected: eddy current-induced bias and calibration-type error. Measurement accuracy and precision were quantified before and after removal of systematic error. Through plane- and frequency-encoded data accuracy were within 0.4 mm/s after removal of systematic error - a 70% improvement over the raw data. Corrected phase-encoded data accuracy was within 1.3 mm/s. Measured random error was between 1 to 1.4 mm/s, which followed the theoretical prediction. Propagation of random measurement error into displacement and strain was found to depend on the number of tracked time segments, time segment duration, mesh size, and dimensional order. To verify this, theoretical predictions were compared to experimentally calculated displacement and strain error. For the parameters tested, experimental and theoretical results aligned well. Random strain error approximately halved with a two-fold mesh size increase, as predicted. Displacement and strain accuracy were within 2.6 mm and 3.3%, respectively. These results can be used to predict the accuracy and precision of displacement and strain in user-specific applications. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Maneuver Performance Assessment of the Cassini Spacecraft Through Execution-Error Modeling and Analysis

    Science.gov (United States)

    Wagner, Sean

    2014-01-01

    The Cassini spacecraft has executed nearly 300 maneuvers since 1997, providing ample data for execution-error model updates. With maneuvers through 2017, opportunities remain to improve on the models and remove biases identified in maneuver executions. This manuscript focuses on how execution-error models can be used to judge maneuver performance, while providing a means for detecting performance degradation. Additionally, this paper describes Cassini's execution-error model updates in August 2012. An assessment of Cassini's maneuver performance through OTM-368 on January 5, 2014 is also presented.

  17. Model structural uncertainty quantification and hydrologic parameter and prediction error analysis using airborne electromagnetic data

    DEFF Research Database (Denmark)

    Minsley, B. J.; Christensen, Nikolaj Kruse; Christensen, Steen

    is never perfectly known, however, and incorrect assumptions can be a significant source of error when making model predictions. We describe a systematic approach for quantifying model structural uncertainty that is based on the integration of sparse borehole observations and large-scale airborne electromagnetic data. … indicator simulation, we produce many realizations of model structure that are consistent with observed datasets and prior knowledge. Given estimates of model structural uncertainty, we incorporate hydrologic observations to evaluate the hydrologic parameter and prediction errors that occur when

  18. Analysis of Omni-directivity Error of Electromagnetic Field Probe using Isotropic Antenna

    Directory of Open Access Journals (Sweden)

    Hartansky Rene

    2016-12-01

    Full Text Available This manuscript analyzes the omni-directivity error of an electromagnetic field (EM) probe and its dependence on frequency. The global directional characteristic of a whole EM probe consists of three independent directional characteristics of EM sensors - one for each coordinate. The shape of each particular directional characteristic is frequency dependent, and so is the shape of the whole EM probe's global directional characteristic. This results in a systematic error in the measurement of EM fields. The manuscript also contains a quantitative formulation of such errors caused by the change in shape of the directional characteristics for different types of sensors, depending on frequency and their mutual arrangement.

  19. Medical errors in neurosurgery.

    Science.gov (United States)

    Rolston, John D; Zygourakis, Corinna C; Han, Seunggu J; Lau, Catherine Y; Berger, Mitchel S; Parsa, Andrew T

    2014-01-01

    Medical errors cause nearly 100,000 deaths per year and cost billions of dollars annually. In order to rationally develop and institute programs to mitigate errors, the relative frequency and costs of different errors must be documented. This analysis will permit the judicious allocation of scarce healthcare resources to address the most costly errors as they are identified. Here, we provide a systematic review of the neurosurgical literature describing medical errors at the departmental level. Eligible articles were identified from the PubMed database, and restricted to reports of recognizable errors across neurosurgical practices. We limited this analysis to cross-sectional studies of errors in order to better match systems-level concerns, rather than reviewing the literature for individually selected errors like wrong-sided or wrong-level surgery. Only a small number of articles met these criteria, highlighting the paucity of data on this topic. From these studies, errors were documented in anywhere from 12% to 88.7% of cases. These errors had many sources, of which only 23.7-27.8% were technical, related to the execution of the surgery itself, highlighting the importance of systems-level approaches to protecting patients and reducing errors. Overall, the magnitude of medical errors in neurosurgery and the lack of focused research emphasize the need for prospective categorization of morbidity with judicious attribution. Ultimately, we must raise awareness of the impact of medical errors in neurosurgery, reduce the occurrence of medical errors, and mitigate their detrimental effects.

  20. Impact of habitat-specific GPS positional error on detection of movement scales by first-passage time analysis.

    Directory of Open Access Journals (Sweden)

    David M Williams

    Full Text Available Advances in animal tracking technologies have reduced but not eliminated positional error. While aware of such inherent error, scientists often proceed with analyses that assume exact locations. The results of such analyses then represent one realization in a distribution of possible outcomes. Evaluating results within the context of that distribution can strengthen or weaken our confidence in conclusions drawn from the analysis in question. We evaluated the habitat-specific positional error of stationary GPS collars placed under a range of vegetation conditions that produced a gradient of canopy cover. We explored how variation of positional error in different vegetation cover types affects a researcher's ability to discern scales of movement in analyses of first-passage time for white-tailed deer (Odocoileus virginianus). We placed 11 GPS collars in 4 different vegetative canopy cover types classified as the proportion of cover above the collar (0-25%, 26-50%, 51-75%, and 76-100%). We simulated the effect of positional error on individual movement paths using cover-specific error distributions at each location. The different cover classes did not introduce any directional bias in positional observations (1 m ≤ mean ≤ 6.51 m, 0.24 ≤ p ≤ 0.47), but the standard deviation of positional error of fixes increased significantly with increasing canopy cover class for the 0-25%, 26-50%, and 51-75% classes (SD = 2.18 m, 3.07 m, and 4.61 m, respectively) and then leveled off in the 76-100% cover class (SD = 4.43 m). We then added cover-specific positional errors to individual deer movement paths and conducted first-passage time analyses on the noisy and original paths. First-passage time analyses were robust to habitat-specific error in a forest-agriculture landscape. For deer in a fragmented forest-agriculture environment, and species that move across similar geographic extents, we suggest that first-passage time analysis is robust with regard to