WorldWideScience

Sample records for tomography error analysis

  1. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in four a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  2. Error analysis of Helmholtz-based MR-electrical properties tomography.

    Science.gov (United States)

    Mandija, Stefano; Sbrizzi, Alessandro; Katscher, Ulrich; Luijten, Peter R; van den Berg, Cornelis A T

    2017-11-16

    MR electrical properties tomography (MR-EPT) aims to measure tissue electrical properties by computing spatial derivatives of measured B1+ data. This computation is very sensitive to spatial fluctuations caused, for example, by noise and Gibbs ringing. In this work, the error arising from the computation of spatial derivatives using finite difference kernels (FD error) has been investigated. In relation to this FD error, it has also been investigated whether mitigation strategies such as Gibbs ringing correction and Gaussian apodization can be beneficial for conductivity reconstructions. Conductivity reconstructions were performed on a phantom (by means of simulations and MR measurements at 3T) and on a human brain model. The accuracy was evaluated as a function of image resolution, FD kernel size, k-space windowing, and signal-to-noise ratio. The impact of mitigation strategies was also investigated. The adopted small FD kernel is highly sensitive to spatial fluctuations, whereas the large FD kernel is more noise-robust. However, large FD kernels lead to extended numerical boundary error propagation, which severely hampers the MR-EPT reconstruction accuracy for highly spatially convoluted tissue structures such as the human brain. Mitigation strategies slightly improve the accuracy of conductivity reconstructions. For the adopted derivative kernels and the investigated scenario, MR-EPT conductivity reconstructions show low accuracy: less than 37% of the voxels have a relative error lower than 30%. The numerical error introduced by the computation of spatial derivatives using FD kernels is one of the major causes of limited accuracy in Helmholtz-based MR-EPT reconstructions. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
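
    The noise sensitivity of finite-difference derivative computation described above can be illustrated with a small, self-contained sketch (not the authors' code): applying a central-difference second derivative to a smooth synthetic profile, with and without a tiny amount of added noise, shows how strongly the operator amplifies small fluctuations. The profile, grid spacing, and noise level are arbitrary illustrative choices.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    dx = 1e-3                                   # grid spacing [m] (illustrative)
    x = np.arange(0.0, 0.2, dx)
    field = np.exp(-((x - 0.1) / 0.05) ** 2)    # smooth synthetic "B1-like" profile

    def second_derivative(f, dx):
        """Three-point central-difference second derivative (small FD kernel)."""
        d2 = np.zeros_like(f)
        d2[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dx**2
        return d2

    noisy = field + rng.normal(0.0, 1e-3, size=field.shape)   # 0.1% additive noise

    clean_d2 = second_derivative(field, dx)
    noisy_d2 = second_derivative(noisy, dx)
    rel_err = np.linalg.norm(noisy_d2 - clean_d2) / np.linalg.norm(clean_d2)
    print(f"relative error in the second derivative caused by 0.1% noise: {rel_err:.1%}")
    ```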

  3. Calibration and error analysis of metal-oxide-semiconductor field-effect transistor dosimeters for computed tomography radiation dosimetry.

    Science.gov (United States)

    Trattner, Sigal; Prinsen, Peter; Wiegert, Jens; Gerland, Elazar-Lars; Shefer, Efrat; Morton, Tom; Thompson, Carla M; Yagil, Yoad; Cheng, Bin; Jambawalikar, Sachin; Al-Senan, Rani; Amurao, Maxwell; Halliburton, Sandra S; Einstein, Andrew J

    2017-12-01

    Metal-oxide-semiconductor field-effect transistors (MOSFETs) serve as a helpful tool for organ radiation dosimetry and their use has grown in computed tomography (CT). While different approaches have been used for MOSFET calibration, those using the commonly available 100 mm pencil ionization chamber have not incorporated measurements performed throughout its length, and moreover, no previous work has rigorously evaluated the multiple sources of error involved in MOSFET calibration. In this paper, we propose a new MOSFET calibration approach to translate MOSFET voltage measurements into absorbed dose from CT, based on serial measurements performed throughout the length of a 100-mm ionization chamber, and perform an analysis of the errors of MOSFET voltage measurements and four sources of error in calibration. MOSFET calibration was performed at two sites, to determine single calibration factors for tube potentials of 80, 100, and 120 kVp, using a 100-mm-long pencil ion chamber and a cylindrical computed tomography dose index (CTDI) phantom of 32 cm diameter. The dose profile along the 100-mm ion chamber axis was sampled in 5 mm intervals by nine MOSFETs in the nine holes of the CTDI phantom. Variance of the absorbed dose was modeled as a sum of the MOSFET voltage measurement variance and the calibration factor variance, the latter being comprised of three main subcomponents: ionization chamber reading variance, MOSFET-to-MOSFET variation and a contribution related to the fact that the average calibration factor of a few MOSFETs was used as an estimate for the average value of all MOSFETs. MOSFET voltage measurement error was estimated based on sets of repeated measurements. The calibration factor overall voltage measurement error was calculated from the above analysis. Calibration factors determined were close to those reported in the literature and by the manufacturer (~3 mV/mGy), ranging from 2.87 to 3.13 mV/mGy. The error σ V of a MOSFET voltage
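
    As a rough illustration of the calibration logic described above (not the authors' implementation), the sketch below derives a single calibration factor from paired MOSFET voltage and ion-chamber dose samples along the chamber length, and propagates the main uncertainty components for a dose estimate using standard ratio propagation. All numbers are invented.

    ```python
    import numpy as np

    # Hypothetical paired samples along the 100-mm chamber (5-mm steps):
    # MOSFET voltage change [mV] and ion-chamber dose [mGy].  Values are invented.
    voltage_mv = np.array([30.1, 30.8, 29.5, 31.2, 30.4, 29.9, 30.6, 30.2, 30.9])
    dose_mgy   = np.array([10.0, 10.2,  9.8, 10.4, 10.1,  9.9, 10.2, 10.0, 10.3])

    k = voltage_mv.mean() / dose_mgy.mean()          # calibration factor [mV/mGy]

    # Toy variance budget in the spirit of the abstract: the uncertainty of a dose
    # estimate D = V / k combines the voltage-measurement variance with the
    # calibration-factor variance (standard propagation for a ratio).
    sigma_V = 0.20                                   # mV, repeated-reading spread (assumed)
    sigma_k = (voltage_mv / dose_mgy).std(ddof=1) / np.sqrt(len(voltage_mv))  # mV/mGy

    V = 30.5                                         # a new MOSFET reading [mV] (invented)
    D = V / k
    sigma_D = D * np.sqrt((sigma_V / V) ** 2 + (sigma_k / k) ** 2)
    print(f"k = {k:.2f} mV/mGy, dose = {D:.2f} +/- {sigma_D:.2f} mGy")
    ```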

  4. Quantitative analysis of scaling error compensation methods in dimensional X-ray computed tomography

    DEFF Research Database (Denmark)

    Müller, P.; Hiller, Jochen; Dai, Y.

    2015-01-01

    ...... errors of the manipulator system (magnification axis). This article also introduces a new compensation method for scaling errors using a database of reference scaling factors and discusses its advantages and disadvantages. In total, three methods for the correction of scaling errors – using the CT ball plate, using calibrated features measured by CMM and using a database of reference values – are presented and applied within a case study. The investigation was performed on a dose engine component of an insulin pen, for which several dimensional measurands were defined. The component has a complex......

  5. MUB tomography performance under influence of systematic errors

    Science.gov (United States)

    Sainz, Isabel; García, Andrés; Klimov, Andrei B.

    2018-01-01

    We propose a method for accounting for the simplest type of systematic errors in mutually unbiased bases (MUB) tomography, emerging due to an imperfect (non-orthogonal) preparation of the measurement bases. The present approach makes it possible to analyze analytically the performance of MUB tomography in finite systems of an arbitrary (prime) dimension. We compare the estimation error appearing in such an imperfect MUB-based tomography with that intrinsically present in the framework of the symmetric informationally complete positive operator-valued measure (SIC-POVM) reconstruction scheme and find that MUB tomography outperforms perfect SIC-POVM tomography, even in the case of strong errors.

  6. Systematic Errors in Dimensional X-ray Computed Tomography

    DEFF Research Database (Denmark)

    In dimensional X-ray computed tomography (CT), many physical quantities influence the final result. However, it is important to know which factors in CT measurements potentially lead to systematic errors. In this talk, typical error sources in dimensional X-ray CT are discussed ...... that it is possible to compensate them.

  7. Automatic Error Analysis Using Intervals

    Science.gov (United States)

    Rothwell, E. J.; Cloud, M. J.

    2012-01-01

    A technique for automatic error analysis using interval mathematics is introduced. A comparison to standard error propagation methods shows that in cases involving complicated formulas, the interval approach gives comparable error estimates with much less effort. Several examples are considered, and numerical errors are computed using the INTLAB…
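
    INTLAB is a MATLAB toolbox; as a language-agnostic illustration of the idea (not the paper's code), the sketch below implements a minimal interval type with addition and multiplication and evaluates a small formula, so that the width of the resulting interval bounds the propagated error without any hand-derived partial derivatives. Note that a production interval library would also round endpoints outward, which this sketch omits.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float

        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)

        def __mul__(self, other):
            p = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
            return Interval(min(p), max(p))

        @property
        def width(self):
            return self.hi - self.lo

    # Measured quantities with +/- uncertainties expressed as intervals (made-up values).
    a = Interval(2.0 - 0.01, 2.0 + 0.01)
    b = Interval(3.0 - 0.02, 3.0 + 0.02)

    # Evaluate f(a, b) = a*b + a; the interval width is an automatic error bound.
    result = a * b + a
    print(f"f in [{result.lo:.4f}, {result.hi:.4f}], half-width ~ {result.width / 2:.4f}")
    ```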

  8. ATC operational error analysis.

    Science.gov (United States)

    1972-01-01

    The primary causes of operational errors are discussed and the effects of these errors on an ATC system's performance are described. No attempt is made to specify possible error models for the spectrum of blunders that can occur although previous res...

  9. Analysis of inter-fraction setup errors and organ motion by daily kilovoltage cone beam computed tomography in intensity modulated radiotherapy of prostate cancer

    International Nuclear Information System (INIS)

    Palombarini, Marcella; Mengoli, Stefano; Fantazzini, Paola; Cadioli, Cecilia; Degli Esposti, Claudio; Frezza, Giovanni Piero

    2012-01-01

    Intensity-modulated radiotherapy (IMRT) enables better conformality to the target while sparing the surrounding normal tissues and potentially allows the dose to the target to be increased, provided the target is precisely and accurately determined. The goal of this work is to determine inter-fraction setup errors and prostate motion in IMRT for localized prostate cancer, guided by daily kilovoltage cone beam computed tomography (kVCBCT). Systematic and random components of the shifts were retrospectively evaluated by comparing two matching modalities (automatic bone and manual soft-tissue) between each of the 641 daily kVCBCTs (18 patients) and the planning kVCT. A simulated Adaptive Radiation Therapy (ART) protocol using the average of the first 5 kVCBCTs was tested by a non-parametric bootstrapping procedure. Shifts were < 1 mm in the left-right (LR) and supero-inferior (SI) directions. In the antero-posterior (AP) direction, systematic prostate motion (2.7 ± 0.7 mm) gave the major contribution to the variability of the results; the averages of the absolute total shifts were significantly larger in the anterior (6.3 ± 0.2 mm) than in the posterior (3.9 ± 0.2 mm) direction. The ART protocol would reduce margins in the LR, SI, and anterior directions but not in the posterior direction. Online soft-tissue correction based on daily kVCBCT during IMRT of prostate cancer is fast and efficient. The large random movements of the prostate with respect to bony anatomy, especially in the AP direction, where anisotropic margins are needed, suggest that daily kVCBCT is at present preferable for high-dose, high-gradient IMRT prostate treatments
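
    The systematic and random components mentioned above are conventionally computed from per-patient shift statistics; the sketch below shows one common way to do this (the van Herk convention) on made-up daily shift data. It is an illustrative sketch, not the authors' analysis, and the margin recipe at the end is a textbook formula rather than anything reported in the abstract.

    ```python
    import numpy as np

    # shifts[p] = array of daily AP shifts (mm) for patient p; values are invented.
    rng = np.random.default_rng(0)
    shifts = [rng.normal(loc=m, scale=2.0, size=35) for m in rng.normal(0.0, 2.7, size=18)]

    patient_means = np.array([s.mean() for s in shifts])
    patient_sds   = np.array([s.std(ddof=1) for s in shifts])

    M     = patient_means.mean()                       # group mean shift
    Sigma = patient_means.std(ddof=1)                  # systematic component
    sigma = np.sqrt((patient_sds ** 2).mean())         # random component (RMS of SDs)

    # A common PTV-margin recipe (van Herk): 2.5*Sigma + 0.7*sigma.
    print(f"M = {M:.1f} mm, Sigma = {Sigma:.1f} mm, sigma = {sigma:.1f} mm")
    print(f"margin ~ {2.5 * Sigma + 0.7 * sigma:.1f} mm")
    ```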

  10. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    A systematic study is presented of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, with suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  11. Human Error: A Concept Analysis

    Science.gov (United States)

    Hansen, Frederick D.

    2007-01-01

    Human error is the subject of research in almost every industry and profession of our times. This term is part of our daily language and is intuitively understood by most people; however, it would be premature to assume that everyone's understanding of human error is the same. For example, human error is used to describe the outcome or consequence of human action, the causal factor of an accident, deliberate violations, and the actual action taken by a human being. As a result, researchers rarely agree on either a specific definition of human error or how to prevent it. The purpose of this article is to explore the specific concept of human error using Concept Analysis as described by Walker and Avant (1995). The concept of human error is examined as currently used in the literature of a variety of industries and professions. Defining attributes and examples of model, borderline, and contrary cases are described. The antecedents and consequences of human error are also discussed and a definition of human error is offered.

  12. Detection of Procedural Errors during Root Canal Instrumentation using Cone Beam Computed Tomography.

    Science.gov (United States)

    Guedes, Orlando Aguirre; da Costa, Marcus Vinícius Corrêa; Dorilêo, Maura Cristiane Gonçales Orçati; de Oliveira, Helder Fernandes; Pedro, Fábio Luis Miranda; Bandeca, Matheus Coelho; Borges, Álvaro Henrique

    2015-03-01

    This study investigated procedural errors made during root canal preparation with nickel-titanium (NiTi) instruments, using cone beam computed tomography (CBCT) imaging. A total of 100 human mandibular molars were divided into five groups (n = 20) according to the NiTi system used for root canal preparation: Group 1 - BioRaCe, Group 2 - K3, Group 3 - ProTaper, Group 4 - Mtwo and Group 5 - Hero Shaper. CBCT images were obtained to detect procedural errors made during root canal preparation. Two examiners evaluated the presence or absence of fractured instruments, perforations, and canal transportations. The chi-square test was used for statistical analyses. The significance level was set at α = 5%. In a total of 300 prepared root canals, 43 (14.33%) procedural errors were detected. Perforation was the most commonly observed procedural error (58.14%). Most of the procedural errors were observed in the mesiobuccal root canal (48.84%). In the analysis of procedural errors, there was a significant difference (P < 0.05) among the systems in the frequency of procedural errors. CBCT permitted the detection of procedural errors during root canal preparation. The frequency of procedural errors was low when root canal preparation was accomplished with the BioRaCe system.
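
    A chi-square test of the kind mentioned above can be run on an error-count contingency table as in the sketch below; the counts are invented (not the study's data) and scipy is assumed to be available.

    ```python
    import numpy as np
    from scipy.stats import chi2_contingency

    # Hypothetical contingency table: rows = NiTi systems, columns = canals
    # with / without a procedural error (invented counts).
    observed = np.array([
        [ 3, 57],   # BioRaCe
        [ 8, 52],   # K3
        [12, 48],   # ProTaper
        [10, 50],   # Mtwo
        [10, 50],   # Hero Shaper
    ])

    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
    print("significant at alpha = 0.05" if p < 0.05 else "not significant")
    ```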

  13. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  14. Orbit IMU alignment: Error analysis

    Science.gov (United States)

    Corson, R. W.

    1980-01-01

    A comprehensive accuracy analysis of orbit inertial measurement unit (IMU) alignments using the shuttle star trackers was completed and the results are presented. Monte Carlo techniques were used in a computer simulation of the IMU alignment hardware and software systems to: (1) determine the expected Space Transportation System 1 Flight (STS-1) manual mode IMU alignment accuracy; (2) investigate the accuracy of alignments in later shuttle flights when the automatic mode of star acquisition may be used; and (3) verify that an analytical model previously used for estimating the alignment error is a valid model. The analysis results do not differ significantly from expectations. The standard deviation in the IMU alignment error for STS-1 alignments was determined to be 68 arc seconds per axis. This corresponds to a 99.7% probability that the magnitude of the total alignment error is less than 258 arc seconds.
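
    The kind of Monte Carlo check described above can be reproduced in miniature: sample independent per-axis alignment errors with the reported 68 arc-second standard deviation and look at the distribution of the total error magnitude. This is an illustrative sketch, not the original simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma_axis = 68.0                       # arc seconds per axis (from the abstract)
    n_trials = 100_000

    # Independent zero-mean alignment errors on the three IMU axes.
    errors = rng.normal(0.0, sigma_axis, size=(n_trials, 3))
    magnitude = np.linalg.norm(errors, axis=1)

    p997 = np.percentile(magnitude, 99.7)
    print(f"99.7th percentile of total alignment error: {p997:.0f} arc seconds")
    # With sigma = 68 arcsec per axis this lands near the ~258 arcsec bound quoted above.
    ```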

  15. Having Fun with Error Analysis

    Science.gov (United States)

    Siegel, Peter

    2007-01-01

    We present a fun activity that can be used to introduce students to error analysis: the M&M game. Students are told to estimate the number of individual candies plus uncertainty in a bag of M&M's. The winner is the group whose estimate brackets the actual number with the smallest uncertainty. The exercise produces enthusiastic discussions and…
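
    The winning rule described above is easy to state programmatically: among the groups whose interval [estimate - uncertainty, estimate + uncertainty] contains the true count, pick the one with the smallest uncertainty. A minimal sketch with invented entries:

    ```python
    # (estimate, uncertainty) submitted by each group, plus the actual count -- all invented.
    entries = {
        "group A": (55, 10),
        "group B": (60, 4),
        "group C": (48, 2),    # smallest uncertainty, but it fails to bracket the answer
    }
    actual = 57

    def brackets(entry, actual):
        estimate, uncertainty = entry
        return estimate - uncertainty <= actual <= estimate + uncertainty

    qualifying = {name: e for name, e in entries.items() if brackets(e, actual)}
    winner = min(qualifying, key=lambda name: qualifying[name][1]) if qualifying else None
    print("winner:", winner)   # group B: brackets 57 with the smallest uncertainty
    ```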

  16. Improved characterisation and modelling of measurement errors in electrical resistivity tomography (ERT) surveys

    Science.gov (United States)

    Tso, Chak-Hau Michael; Kuras, Oliver; Wilkinson, Paul B.; Uhlemann, Sebastian; Chambers, Jonathan E.; Meldrum, Philip I.; Graham, James; Sherlock, Emma F.; Binley, Andrew

    2017-11-01

    Measurement errors can play a pivotal role in geophysical inversion. Most inverse models require users to prescribe or assume a statistical model of data errors before inversion. Wrongly prescribed errors can lead to over- or under-fitting of data; however, the derivation of models of data errors is often neglected. With the heightening interest in uncertainty estimation within hydrogeophysics, better characterisation and treatment of measurement errors is needed to provide improved image appraisal. Here we focus on the role of measurement errors in electrical resistivity tomography (ERT). We have analysed two time-lapse ERT datasets: one contains 96 sets of direct and reciprocal data collected from a surface ERT line within a 24 h timeframe; the other is a two-year-long cross-borehole survey at a UK nuclear site with 246 sets of over 50,000 measurements. Our study includes the characterisation of the spatial and temporal behaviour of measurement errors using autocorrelation and correlation coefficient analysis. We find that, in addition to well-known proportionality effects, ERT measurements can also be sensitive to the combination of electrodes used, i.e. errors may not be uncorrelated as often assumed. Based on these findings, we develop a new error model that allows grouping based on electrode number in addition to fitting a linear model to transfer resistance. The new model explains the observed measurement errors better and shows superior inversion results and uncertainty estimates in synthetic examples. It is robust, because it groups errors together based on the electrodes used to make the measurements. The new model can be readily applied to the diagonal data weighting matrix widely used in common inversion methods, as well as to the data covariance matrix in a Bayesian inversion framework. We demonstrate its application using extensive ERT monitoring datasets from the two aforementioned sites.
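
    A minimal version of the error-model idea described above (grouping measurement errors and fitting a linear model to transfer resistance) can be sketched as follows. The data are synthetic, the grouping is by a single electrode index, and the fit is ordinary least squares on absolute errors, which simplifies the published approach.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Synthetic reciprocal-error data: for each measurement we have a transfer
    # resistance R [ohm], the electrode group it belongs to, and the observed
    # normal-reciprocal discrepancy e (a proxy for measurement error).
    n = 2000
    group = rng.integers(0, 8, size=n)                 # electrode-based grouping
    R = 10 ** rng.uniform(-1, 2, size=n)               # 0.1 .. 100 ohm
    group_scale = np.array([0.5, 1.0, 1.0, 2.0, 1.5, 0.8, 1.2, 3.0])
    e = np.abs(rng.normal(0.0, group_scale[group] * (0.002 + 0.01 * R)))

    # Per-group linear error model |e| = a_g + b_g * R, fitted by least squares.
    for g in range(8):
        mask = group == g
        A = np.column_stack([np.ones(mask.sum()), R[mask]])
        (a_g, b_g), *_ = np.linalg.lstsq(A, e[mask], rcond=None)
        print(f"group {g}: |e| ~ {a_g:.4f} + {b_g:.4f} * R")
    ```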

  17. Measurement Error and Equating Error in Power Analysis

    Science.gov (United States)

    Phillips, Gary W.; Jiang, Tao

    2016-01-01

    Power analysis is a fundamental prerequisite for conducting scientific research. Without power analysis the researcher has no way of knowing whether the sample size is large enough to detect the effect he or she is looking for. This paper demonstrates how psychometric factors such as measurement error and equating error affect the power of…

  18. Error Analysis of Band Matrix Method

    OpenAIRE

    Taniguchi, Takeo; Soga, Akira

    1984-01-01

    Numerical error in the solution of the band matrix method based on the elimination method in single precision is investigated theoretically and experimentally, and the behaviour of the truncation error and the roundoff error is clarified. Some important suggestions for the useful application of the band solver are proposed by using the results of the above error analysis.

  19. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  20. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis. Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production. Xiaoli Wu, Chun Zhang. Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production...... of the grammatical errors in using comparative sentences is developed, which includes comparative item-related errors, comparative result-related errors and blend errors. The results further indicate that these errors could be attributed to negative L1 transfer and overgeneralization of grammatical rules and structures......

  1. Error Analysis and the EFL Classroom Teaching

    Science.gov (United States)

    Xie, Fang; Jiang, Xue-mei

    2007-01-01

    This paper makes a study of error analysis and its implementation in the EFL (English as Foreign Language) classroom teaching. It starts by giving a systematic review of the concepts and theories concerning EA (Error Analysis); the various reasons causing errors are then comprehensively explored. The author proposes that teachers should employ…

  2. Randomized benchmarking and process tomography for gate errors in a solid-state qubit.

    Science.gov (United States)

    Chow, J M; Gambetta, J M; Tornberg, L; Koch, Jens; Bishop, Lev S; Houck, A A; Johnson, B R; Frunzio, L; Girvin, S M; Schoelkopf, R J

    2009-03-06

    We present measurements of single-qubit gate errors for a superconducting qubit. Results from quantum process tomography and randomized benchmarking are compared with gate errors obtained from a double pi pulse experiment. Randomized benchmarking reveals a minimum average gate error of 1.1+/-0.3% and a simple exponential dependence of fidelity on the number of gates. It shows that the limits on gate fidelity are primarily imposed by qubit decoherence, in agreement with theory.

  3. Estimation of analysis and forecast error variances

    Directory of Open Access Journals (Sweden)

    Malaquias Peña

    2014-11-01

    Accurate estimates of error variances in numerical analyses and forecasts (i.e. the difference between analysis or forecast fields and nature on the resolved scales) are critical for the evaluation of forecasting systems, the tuning of data assimilation (DA) systems and the proper initialisation of ensemble forecasts. Errors in observations and the difficulty in their estimation, the fact that estimates of analysis errors derived via DA schemes are influenced by the same assumptions as those used to create the analysis fields themselves, and the presumed but unknown correlation between analysis and forecast errors make the problem difficult. In this paper, an approach is introduced for the unbiased estimation of analysis and forecast errors. The method is independent of any assumption or tuning parameter used in DA schemes. The method combines information from differences between forecast and analysis fields (‘perceived forecast errors’) with prior knowledge regarding the time evolution of (1) forecast error variance and (2) correlation between errors in analyses and forecasts. The quality of the error estimates, given the validity of the prior relationships, depends on the sample size of independent measurements of perceived errors. In a simulated forecast environment, the method is demonstrated to reproduce the true analysis and forecast error within predicted error bounds. The method is then applied to forecasts from four leading numerical weather prediction centres to assess the performance of their corresponding DA and modelling systems. Error variance estimates are qualitatively consistent with earlier studies regarding the performance of the forecast systems compared. The estimated correlation between forecast and analysis errors is found to be a useful diagnostic of the performance of observing and DA systems. In case of significant model-related errors, a methodology to decompose initial value and model-related forecast errors is also
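
    The central identity behind such estimates is that the perceived forecast error (forecast minus analysis) mixes true forecast error and true analysis error; the toy simulation below (not the paper's method) checks how the perceived-error variance relates to the two underlying variances when their correlation is known or assumed. All numbers are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200_000

    # Toy truth: analysis error a and forecast error f with a prescribed correlation rho.
    sigma_a, sigma_f, rho = 0.6, 1.0, 0.4
    cov = [[sigma_a**2, rho * sigma_a * sigma_f],
           [rho * sigma_a * sigma_f, sigma_f**2]]
    a, f = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

    # Perceived forecast error d = forecast - analysis = f - a is the only quantity
    # that is directly observable.  Its variance obeys
    #   var(d) = var(f) + var(a) - 2 cov(f, a),
    # so knowing (or assuming) rho lets one separate the individual variances.
    d = f - a
    lhs = d.var()
    rhs = sigma_f**2 + sigma_a**2 - 2 * rho * sigma_a * sigma_f
    print(f"var(d) sampled = {lhs:.3f}, predicted = {rhs:.3f}")
    ```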

  4. Error Analysis in Mathematics. Technical Report #1012

    Science.gov (United States)

    Lai, Cheng-Fei

    2012-01-01

    Error analysis is a method commonly used to identify the cause of student errors when they make consistent mistakes. It is a process of reviewing a student's work and then looking for patterns of misunderstanding. Errors in mathematics can be factual, procedural, or conceptual, and may occur for a number of reasons. Reasons why students make…

  5. An Error Analysis on TFL Learners’ Writings

    Directory of Open Access Journals (Sweden)

    Arif ÇERÇİ

    2016-12-01

    The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as a foreign language at the A1 (beginner) level and completed the process by taking the C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choice errors. The ratio and categorical distributions of the identified errors were analyzed through error analysis. The data were analyzed through statistical procedures in an effort to determine whether error types differ according to the levels of the students. The errors in this study are limited to the linguistic and intralingual developmental errors

  6. Analysis and classification of human error

    Science.gov (United States)

    Rouse, W. B.; Rouse, S. H.

    1983-01-01

    The literature on human error is reviewed with emphasis on theories of error and classification schemes. A methodology for analysis and classification of human error is then proposed which includes a general approach to classification. Identification of possible causes and factors that contribute to the occurrence of errors is also considered. An application of the methodology to the use of checklists in the aviation domain is presented for illustrative purposes.

  7. Synthetic aperture interferometry: error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Biswas, Amiya; Coupland, Jeremy

    2010-07-10

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003), doi:10.1364/AO.42.000701]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008), doi:10.1364/AO.47.001705]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  8. Synthetic aperture interferometry: error analysis

    International Nuclear Information System (INIS)

    Biswas, Amiya; Coupland, Jeremy

    2010-01-01

    Synthetic aperture interferometry (SAI) is a novel way of testing aspherics and has a potential for in-process measurement of aspherics [Appl. Opt. 42, 701 (2003), doi:10.1364/AO.42.000701]. A method to measure steep aspherics using the SAI technique has been previously reported [Appl. Opt. 47, 1705 (2008), doi:10.1364/AO.47.001705]. Here we investigate the computation of surface form using the SAI technique in different configurations and discuss the computational errors. A two-pass measurement strategy is proposed to reduce the computational errors, and a detailed investigation is carried out to determine the effect of alignment errors on the measurement process.

  9. Error Analysis in the Teaching of English

    OpenAIRE

    Hasyim, Sunardi

    2002-01-01

    The main purpose of this article is to discuss the importance of error analysis in the teaching of English as a foreign language. Although errors are bad things in learning English as a foreign language, error analysis is advantageous for both learners and teachers. For learners, error analysis is needed to show them which aspects of grammar are difficult for them, whereas for teachers, it is required to evaluate themselves on whether they are successful or not in teaching English...

  10. Error Analysis: Past, Present, and Future

    Science.gov (United States)

    McCloskey, George

    2017-01-01

    This commentary will take an historical perspective on the Kaufman Test of Educational Achievement (KTEA) error analysis, discussing where it started, where it is today, and where it may be headed in the future. In addition, the commentary will compare and contrast the KTEA error analysis procedures that are rooted in psychometric methodology and…

  11. Assessment of systematic measurement errors for acoustic travel-time tomography of the atmosphere.

    Science.gov (United States)

    Vecherin, Sergey N; Ostashev, Vladimir E; Wilson, D Keith

    2013-09-01

    Two algorithms are described for assessing systematic errors in acoustic travel-time tomography of the atmosphere, the goal of which is to reconstruct the temperature and wind velocity fields given the transducers' locations and the measured travel times of sound propagating between each speaker-microphone pair. The first algorithm aims at assessing the errors simultaneously with the mean field reconstruction. The second algorithm uses the results of the first algorithm to identify the ray paths corrupted by the systematic errors and then estimates these errors more accurately. Numerical simulations show that the first algorithm can improve the reconstruction when relatively small systematic errors are present in all paths. The second algorithm significantly improves the reconstruction when systematic errors are present in a few, but not all, ray paths. The developed algorithms were applied to experimental data obtained at the Boulder Atmospheric Observatory.
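
    The forward model underlying such reconstructions is simple enough to state in a few lines: along a straight ray between a speaker and a microphone, the travel time depends on the sound speed (a function of temperature) and the wind component along the ray. The sketch below evaluates this straight-ray approximation with made-up values; it is not the authors' reconstruction code.

    ```python
    import numpy as np

    def travel_time(src, mic, temperature_c, wind):
        """Straight-ray travel time between a speaker and a microphone.

        temperature_c : air temperature [deg C] (assumed uniform here)
        wind          : 2-D wind vector [m/s] (assumed uniform here)
        """
        c = 331.3 * np.sqrt(1.0 + temperature_c / 273.15)    # sound speed [m/s]
        ray = np.asarray(mic, float) - np.asarray(src, float)
        L = np.linalg.norm(ray)
        u_along = np.dot(wind, ray / L)                      # wind component along the ray
        return L / (c + u_along)

    src = (0.0, 0.0)
    mic = (80.0, 60.0)                                       # 100 m apart
    t_calm  = travel_time(src, mic, temperature_c=20.0, wind=(0.0, 0.0))
    t_windy = travel_time(src, mic, temperature_c=20.0, wind=(3.0, 0.0))
    print(f"calm: {t_calm*1e3:.2f} ms, with 3 m/s wind: {t_windy*1e3:.2f} ms")
    # Even a ~0.1 ms systematic timing error maps into noticeable temperature/wind
    # biases, which is why the error-assessment algorithms above matter.
    ```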

  12. ERROR ANALYSIS in the TEACHING of ENGLISH

    Directory of Open Access Journals (Sweden)

    Sunardi Hasyim

    2002-01-01

    The main purpose of this article is to discuss the importance of error analysis in the teaching of English as a foreign language. Although errors are bad things in learning English as a foreign language, error analysis is advantageous for both learners and teachers. For learners, error analysis is needed to show them which aspects of grammar are difficult for them, whereas for teachers, it is required to evaluate themselves on whether they are successful or not in teaching English. In this article, the writer presented some English sentences containing grammatical errors. These grammatical errors were analyzed based on the theories presented by the linguists. This analysis aimed at showing the students the causes and kinds of the grammatical errors. In this way, the students are expected to increase their knowledge of English grammar. Keywords: errors, mistake, overt error, covert error, interference, overgeneralization, grammar, interlingual, intralingual, idiosyncrasies.

  13. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their role in industrial systems are mentioned. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and "criteria for analyzability" which must be met by industrial systems so that a systematic analysis can be performed are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  14. Experimental research on English vowel errors analysis

    Directory of Open Access Journals (Sweden)

    Huang Qiuhua

    2016-01-01

    Our paper analyzed relevant acoustic parameters of speech samples and compared the results with standard English pronunciation, using the methods of experimental phonetics together with phonetic analysis software and statistical analysis software. We then summarized the pronunciation errors of college students through an analysis of their English vowels, and found that college students are prone to tongue-position and lip-shape errors when pronouncing vowels. Based on this analysis of pronunciation errors, we put forward targeted voice training for college students' English pronunciation, which increased the students' interest in learning and improved the teaching of English phonetics.

  15. Quantitative analysis of error mode, error effect and criticality

    International Nuclear Information System (INIS)

    Li Pengcheng; Zhang Li; Xiao Dongsheng; Chen Guohua

    2009-01-01

    The quantitative method of human error mode, effect and criticality analysis is developed in order to reach the ultimate goal of Probabilistic Safety Assessment. The criticality identification matrix of human error mode and task is built to identify the critical human error modes and tasks and the critical organizational root causes, on the basis of the identification of human error probability, error effect probability and the criticality index of error effect. This makes it possible to take targeted measures to reduce and prevent the occurrence of critical human error modes and tasks. Finally, the application of the technique is explained through an application example. (authors)
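
    A minimal numerical reading of the criticality index described above is sketched below; the specific formula (a product of error probability, effect probability and a severity weight) and all values are our assumptions for illustration, not necessarily the authors' definitions.

    ```python
    # Hypothetical error modes with assumed probabilities and severity weights (1-10).
    modes = [
        # (name, P(error), P(effect | error), severity)
        ("skip a checklist step",  0.020, 0.30, 7),
        ("misread an indicator",   0.010, 0.60, 8),
        ("enter a wrong setpoint", 0.005, 0.80, 9),
    ]

    # One simple criticality index: the product of the three factors, used to rank modes.
    ranked = sorted(modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
    for name, p_err, p_eff, sev in ranked:
        print(f"{name:24s} criticality = {p_err * p_eff * sev:.4f}")
    ```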

  16. A Comparative Study on Error Analysis

    DEFF Research Database (Denmark)

    Wu, Xiaoli; Zhang, Chun

    2015-01-01

    Title: A Comparative Study on Error Analysis Subtitle: - Belgian (L1) and Danish (L1) learners’ use of Chinese (L2) comparative sentences in written production Xiaoli Wu, Chun Zhang Abstract: Making errors is an inevitable and necessary part of learning. The collection, classification and analysis...... of errors in the written and spoken production of L2 learners has a long tradition in L2 pedagogy. Yet, in teaching and learning Chinese as a foreign language (CFL), only a handful of studies have been made either to define the ‘error’ in a pedagogically insightful way or to empirically investigate...... the occurrence of errors either in linguistic or pedagogical terms. The purpose of the current study is to demonstrate the theoretical and practical relevance of the error analysis approach in CFL by investigating two cases - (1) Belgian (L1) learners’ use of Chinese (L2) comparative sentences in written production...

  17. Experimental research on English vowel errors analysis

    OpenAIRE

    Huang Qiuhua

    2016-01-01

    Our paper analyzed relevant acoustic parameters of speech samples and compared the results with standard English pronunciation, using the methods of experimental phonetics together with phonetic analysis software and statistical analysis software. We then summarized the pronunciation errors of college students through an analysis of their English vowels, and found that college students are prone to tongue-position and lip-shape errors when pronouncing vow...

  18. Analysis of Position Error Headway Protection

    Science.gov (United States)

    1975-07-01

    An analysis is developed to determine safe headway on PRT systems that use point-follower control. Periodic measurements of the position error relative to a nominal trajectory provide warning against the hazards of overspeed and unexpected stop. A co...

  19. Errors in neuroretinal rim measurement by Cirrus high-definition optical coherence tomography in myopic eyes.

    Science.gov (United States)

    Hwang, Young Hoon; Kim, Yong Yeon; Jin, Sunyoung; Na, Jung Hwa; Kim, Hwang Ki; Sohn, Yong Ho

    2012-11-01

    To investigate the prevalence of, and factors associated with, errors in neuroretinal rim measurement by Cirrus high-definition (HD) spectral-domain optical coherence tomography (OCT) in myopic eyes. Neuroretinal rim thicknesses of 255 myopic eyes were measured by Cirrus HD-OCT. The prevalence of, and factors associated with, optic disc margin detection error and cup margin detection error were assessed by analysing 72 cross-sectional optic nerve head (ONH) images obtained at 5° intervals for each eye. Among the 255 eyes, 45 (17.6%) had neuroretinal rim measurement errors; 29 (11.4%) had optic disc margin detection errors at the temporal (16 eyes), superior (11 eyes), and inferior (2 eyes) quadrants; 19 (7.5%) showed cup margin detection errors at the nasal (17 eyes) and temporal (2 eyes) quadrants; and 3 (1.2%) had both disc and cup margin detection errors. Errors in detection of the temporal optic disc margin were associated with the presence of parapapillary atrophy (PPA), higher myopia, and greater axial length (AL), whereas cup margin detection errors were associated with vitreous opacities attached to the ONH surface or acute cup slope angles. Errors in neuroretinal rim measurement by Cirrus HD-OCT were found in myopic eyes, especially in eyes with PPA, higher myopia, greater AL, vitreous opacity or acute cup slope angle. These findings should be considered when interpreting neuroretinal rim thickness measured by Cirrus HD-OCT.

  20. Error estimation in plant growth analysis

    Directory of Open Access Journals (Sweden)

    Andrzej Gregorczyk

    2014-01-01

    The scheme is presented for calculation of errors of dry matter values which occur during approximation of data with growth curves, determined by the analytical method (logistic function) and by the numerical method (Richards function). Further formulae are shown, which describe absolute errors of growth characteristics: Growth rate (GR), Relative growth rate (RGR), Unit leaf rate (ULR) and Leaf area ratio (LAR). Calculation examples concerning the growth course of oats and maize plants are given. A critical analysis of the estimation of the obtained results has been done. The purposefulness of the joint application of statistical methods and error calculus in plant growth analysis has been ascertained.
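
    To make the error-propagation idea concrete, the sketch below computes the relative growth rate from two dry-matter values and propagates their absolute errors with the standard first-order (total differential) formula; the numbers are invented and this is not the paper's calculation.

    ```python
    import math

    # Dry matter at two times (invented values) with absolute errors from curve fitting.
    W1, dW1 = 2.0, 0.10      # g
    W2, dW2 = 5.0, 0.15      # g
    t1, t2 = 10.0, 20.0      # days

    # Relative growth rate: RGR = (ln W2 - ln W1) / (t2 - t1)
    rgr = (math.log(W2) - math.log(W1)) / (t2 - t1)

    # First-order error propagation: |dRGR| <= (|dW2|/W2 + |dW1|/W1) / (t2 - t1)
    d_rgr = (dW2 / W2 + dW1 / W1) / (t2 - t1)
    print(f"RGR = {rgr:.4f} +/- {d_rgr:.4f} 1/day")
    ```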

  1. Orthonormal polynomials in wavefront analysis: error analysis.

    Science.gov (United States)

    Dai, Guang-Ming; Mahajan, Virendra N

    2008-07-01

    Zernike circle polynomials are in widespread use for wavefront analysis because of their orthogonality over a circular pupil and their representation of balanced classical aberrations. However, they are not appropriate for noncircular pupils, such as annular, hexagonal, elliptical, rectangular, and square pupils, due to their lack of orthogonality over such pupils. We emphasize the use of orthonormal polynomials for such pupils, but we show how to obtain the Zernike coefficients correctly. We illustrate that the wavefront fitting with a set of orthonormal polynomials is identical to the fitting with a corresponding set of Zernike polynomials. This is a consequence of the fact that each orthonormal polynomial is a linear combination of the Zernike polynomials. However, since the Zernike polynomials do not represent balanced aberrations for a noncircular pupil, the Zernike coefficients lack the physical significance that the orthonormal coefficients provide. We also analyze the error that arises if Zernike polynomials are used for noncircular pupils by treating them as circular pupils and illustrate it with numerical examples.
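
    The equivalence the authors describe (fitting with orthonormal polynomials vs. fitting with Zernike polynomials over the same pupil) can be illustrated with a generic least-squares fit: any two bases that span the same subspace yield the identical fitted wavefront, even though the coefficients differ. The polynomials below are simple 1-D stand-ins, not actual Zernike or annular polynomials.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    x = np.linspace(-1.0, 1.0, 200)
    wavefront = 0.3 * x + 0.5 * (2 * x**2 - 1) + 0.02 * rng.normal(size=x.size)

    # Basis A: monomials 1, x, x^2.  Basis B: Legendre-like combinations of the same
    # monomials (orthogonal on [-1, 1]).  They span the same subspace.
    A = np.column_stack([np.ones_like(x), x, x**2])
    B = np.column_stack([np.ones_like(x), x, (3 * x**2 - 1) / 2])

    fit_A = A @ np.linalg.lstsq(A, wavefront, rcond=None)[0]
    fit_B = B @ np.linalg.lstsq(B, wavefront, rcond=None)[0]

    # Identical fitted wavefronts, different coefficients -- the same relationship
    # the abstract describes between orthonormal and Zernike coefficients.
    print("max difference between fits:", np.max(np.abs(fit_A - fit_B)))
    ```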

  2. An Error Analysis on TFL Learners’ Writings

    OpenAIRE

    ÇERÇİ, Arif; DERMAN, Serdar; BARDAKÇI, Mehmet

    2016-01-01

    The main purpose of the present study is to identify and represent TFL learners’ writing errors through error analysis. All the learners started learning Turkish as foreign language with A1 (beginner) level and completed the process by taking C1 (advanced) certificate in TÖMER at Gaziantep University. The data of the present study were collected from 14 students’ writings in proficiency exams for each level. The data were grouped as grammatical, syntactic, spelling, punctuation, and word choi...

  3. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm

  4. Numeracy, Literacy and Newman's Error Analysis

    Science.gov (United States)

    White, Allan Leslie

    2010-01-01

    Newman (1977, 1983) defined five specific literacy and numeracy skills as crucial to performance on mathematical word problems: reading, comprehension, transformation, process skills, and encoding. Newman's Error Analysis (NEA) provided a framework for considering the reasons that underlay the difficulties students experienced with mathematical…

  5. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the space scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  6. Macroscopic analysis of human errors at nuclear power plant

    International Nuclear Information System (INIS)

    Jeong, Y. S.; Gee, M. G.; Kim, J. T.

    2003-01-01

    A decision tree for the analysis of human errors is developed. The nodes and edges show human error patterns and their occurrence. Since the nodes are related to manageable resources, human errors could be reduced by allocation of these resources and by controlling human error barriers. Microscopic analysis of human errors is also performed by adding additional information from the graph.

  7. Error Analysis and Propagation in Metabolomics Data Analysis.

    Science.gov (United States)

    Moseley, Hunter N B

    2013-01-01

    Error analysis plays a fundamental role in describing the uncertainty in experimental results. It has several fundamental uses in metabolomics including experimental design, quality control of experiments, the selection of appropriate statistical methods, and the determination of uncertainty in results. Furthermore, the importance of error analysis has grown with the increasing number, complexity, and heterogeneity of measurements characteristic of 'omics research. The increase in data complexity is particularly problematic for metabolomics, which has more heterogeneity than other omics technologies due to the much wider range of molecular entities detected and measured. This review introduces the fundamental concepts of error analysis as they apply to a wide range of metabolomics experimental designs and it discusses current methodologies for determining the propagation of uncertainty in appropriate metabolomics data analysis. These methodologies include analytical derivation and approximation techniques, Monte Carlo error analysis, and error analysis in metabolic inverse problems. Current limitations of each methodology with respect to metabolomics data analysis are also discussed.

  8. Error analysis of stochastic gradient descent ranking.

    Science.gov (United States)

    Chen, Hong; Tang, Yi; Li, Luoqing; Yuan, Yuan; Li, Xuelong; Tang, Yuanyan

    2013-06-01

    Ranking is always an important task in machine learning and information retrieval, e.g., collaborative filtering, recommender systems, drug discovery, etc. A kernel-based stochastic gradient descent algorithm with the least squares loss is proposed for ranking in this paper. The implementation of this algorithm is simple, and an expression of the solution is derived via a sampling operator and an integral operator. An explicit convergence rate for learning a ranking function is given in terms of suitable choices of the step size and the regularization parameter. The analysis technique used here is capacity independent and is novel in error analysis of ranking learning. Experimental results on real-world data have shown the effectiveness of the proposed algorithm in ranking tasks, which verifies the theoretical analysis in ranking error.
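
    A stripped-down sketch of a kernel-based stochastic gradient descent ranker with a least-squares pairwise loss is given below: at each step one labelled pair is sampled and the kernel expansion is updated so that its predicted score difference moves toward the label difference. This is a generic illustration under assumed hyperparameters, not the algorithm analyzed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Toy data: items x with graded relevance y; the learned f should rank items by y.
    X = rng.uniform(-1, 1, size=(100, 2))
    y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=100)

    gamma, eta, lam = 2.0, 0.1, 0.01      # kernel width, step size, regularization (assumed)
    centers, alphas = [], []              # f is a kernel expansion over points seen so far

    def f(x):
        if not centers:
            return 0.0
        C = np.asarray(centers)
        k = np.exp(-gamma * np.sum((C - x) ** 2, axis=1))   # RBF kernel values
        return float(np.dot(alphas, k))

    for t in range(2000):
        i, j = rng.integers(0, len(X), size=2)
        # Least-squares pairwise loss: 0.5 * ((f(xi) - f(xj)) - (yi - yj))^2
        residual = (f(X[i]) - f(X[j])) - (y[i] - y[j])
        # Regularization shrinks existing coefficients; the gradient step adds
        # -eta*residual at xi and +eta*residual at xj.
        alphas = [a * (1.0 - eta * lam) for a in alphas]
        centers.extend([X[i], X[j]])
        alphas.extend([-eta * residual, eta * residual])

    # Evaluate ranking quality as the fraction of correctly ordered pairs.
    scores = np.array([f(x) for x in X])
    pairs = [(i, j) for i in range(100) for j in range(100) if y[i] != y[j]]
    acc = np.mean([(scores[i] > scores[j]) == (y[i] > y[j]) for i, j in pairs])
    print(f"pairwise ordering accuracy: {acc:.2f}")
    ```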

  9. Impact of errors in experimental parameters on reconstructed breast images using diffuse optical tomography.

    Science.gov (United States)

    Deng, Bin; Lundqvist, Mats; Fang, Qianqian; Carp, Stefan A

    2018-03-01

    Near-infrared diffuse optical tomography (NIR-DOT) is an emerging technology that offers hemoglobin based, functional imaging tumor biomarkers for breast cancer management. The most promising clinical translation opportunities are in the differential diagnosis of malignant vs. benign lesions, and in early response assessment and guidance for neoadjuvant chemotherapy. Accurate quantification of the tissue oxy- and deoxy-hemoglobin concentration across the field of view, as well as repeatability during longitudinal imaging in the context of therapy guidance, are essential for the successful translation of NIR-DOT to clinical practice. The ill-posed and ill-conditioned nature of the DOT inverse problem makes this technique particularly susceptible to model errors that may occur, for example, when the experimental conditions do not fully match the assumptions built into the image reconstruction process. To evaluate the susceptibility of DOT images to experimental errors that might be encountered in practice for a parallel-plate NIR-DOT system, we simulated 7 different types of errors, each with a range of magnitudes. We generated simulated data by using digital breast phantoms derived from five actual mammograms of healthy female volunteers, to which we added a 1-cm tumor. After applying each of the experimental error types and magnitudes to the simulated measurements, we reconstructed optical images with and without structural prior guidance and assessed the overall error in the total hemoglobin concentrations (HbT) and in the HbT contrast between the lesion and surrounding area vs. the best-case scenarios. It is found that slight in-plane probe misalignment and plate rotation did not result in large quantification errors. However, any out-of-plane probe tilting could result in significant deterioration in lesion contrast. Among the error types investigated in this work, optical images were the least likely to be impacted by breast shape inaccuracies but suffered the

  10. Error propagation analysis for a sensor system

    Energy Technology Data Exchange (ETDEWEB)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convoluted with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and by extensive repetitions reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system comprised of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm.

  11. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    measures, error-rate and Word Error Rate (WER), by application of the proposed method. Keywords: speech recognition; pronunciation dictionary modification method; error analysis; F-measure.

  12. Error analysis of aspheric surface with reference datum.

    Science.gov (United States)

    Peng, Yanglin; Dai, Yifan; Chen, Shanyong; Song, Ci; Shi, Feng

    2015-07-20

    Strict location tolerance requirements pose new challenges for optical component measurement, evaluation, and manufacture. Form error, location error, and the relationship between form error and location error need to be analyzed together during error analysis of an aspheric surface with a reference datum. Based on the least-squares optimization method, we develop a least-squares local optimization method to evaluate the form error of an aspheric surface with a reference datum, and then calculate the location error. From the error analysis of a machined aspheric surface, the relationship between form error and location error is revealed, and its influence on the machining process is stated. For aspheric surfaces of different radii and apertures, the change laws are simulated by superimposing normally distributed random noise on an ideal surface. This establishes linkages between machining and error analysis, and provides an effective guideline for error correction.

  13. Error analysis in solving mathematical problems

    Directory of Open Access Journals (Sweden)

    Geovana Luiza Kliemann

    2017-12-01

    This paper presents a survey carried out within the Centre for Education Programme in order to assist in improving the quality of the teaching and learning of Mathematics in Primary Education. From the study of the evaluative systems that constitute the scope of the research project, it was found that their focus is problem solving, and from this point several actions were developed with the purpose of assisting the students in the process of solving problems. One of these actions aimed to analyze the errors made by 5th-year students in the interpretation, understanding, and solving of problems. We describe three games developed in six schools, with questions drawn from the “Prova Brasil” administered in previous years, with the objective of diagnosing the main difficulties presented by the students in solving the problems and of helping them find ways to overcome such gaps. To reach the proposed objectives, a qualitative study was carried out in which the researchers were constantly involved during the process. After each meeting, the responses were analyzed in order to classify the errors into different categories. It was found that most of the participating students succeeded in solving the proposed problems, and that the main errors were related to difficulties of interpretation.

  14. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  15. Meteor radar signal processing and error analysis

    Science.gov (United States)

    Kang, Chunmei

    Meteor wind radar systems are a powerful tool for the study of the horizontal wind field in the mesosphere and lower thermosphere (MLT). While such systems have been operated for many years, virtually no literature has focused on radar system error analysis. Instrumental error may prevent scientists from drawing correct conclusions about geophysical variability. Radar system instrumental error comes from different sources, including hardware, software, and algorithms. Radar signal processing plays an important role in a radar system, and advanced signal processing algorithms may dramatically reduce radar system errors. In this dissertation, radar system error propagation is analyzed and several advanced signal processing algorithms are proposed to optimize the performance of the radar system without increasing the instrument costs. The first part of this dissertation is the development of a time-frequency waveform detector, which is invariant to noise level and stable over a wide range of decay rates. This detector is proposed to discriminate underdense meteor echoes from the background white Gaussian noise. The performance of this detector is examined using Monte Carlo simulations. The resulting probability of detection is shown to outperform the often-used power and energy detectors for the same probability of false alarm. Secondly, estimators to determine the Doppler shift, the decay rate and the direction of arrival (DOA) of meteors are proposed and evaluated. The performance of these estimators is compared with the analytically derived Cramer-Rao bound (CRB). The results show that the fast maximum likelihood (FML) estimator for determination of the Doppler shift and decay rate and the spatial spectral method for determination of the DOAs perform best among the estimators commonly used on other radar systems. For most cases, the mean square error (MSE) of the estimator meets the CRB above a 10 dB SNR. Thus meteor echoes with an estimated SNR below 10 dB are

  16. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
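    For illustration only, a minimal sketch of the sphere-intersection idea: a Gauss-Newton (Newton-Raphson type) iteration on four range equations recovering an unknown position. The satellite coordinates, ranges and starting guess below are made up, and the receiver clock-bias term of a full GPS solution is omitted; this is not the authors' implementation.

        import numpy as np

        # Hypothetical satellite positions (km) and noise-free ranges; illustrative only.
        sats = np.array([[15600., 7540., 20140.],
                         [18760., 2750., 18610.],
                         [17610., 14630., 13480.],
                         [19170., 610., 18390.]])
        truth = np.array([1000., 2000., 3000.])
        ranges = np.linalg.norm(sats - truth, axis=1)

        def gauss_newton(sats, ranges, x0, iters=10):
            """Find the point whose distances to the satellites match the measured ranges."""
            x = x0.astype(float)
            for _ in range(iters):
                diff = x - sats                        # vectors satellite -> estimate
                dist = np.linalg.norm(diff, axis=1)
                r = dist - ranges                      # residuals of the four sphere equations
                J = diff / dist[:, None]               # Jacobian d(dist)/dx
                dx, *_ = np.linalg.lstsq(J, -r, rcond=None)
                x += dx
            return x

        est = gauss_newton(sats, ranges, x0=np.zeros(3))
        print("estimated position:", est, " error:", np.linalg.norm(est - truth))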

  17. Trends in MODIS Geolocation Error Analysis

    Science.gov (United States)

    Wolfe, R. E.; Nishihama, Masahiro

    2009-01-01

    Data from the two MODIS instruments have been accurately geolocated (Earth located) to enable retrieval of global geophysical parameters. The authors describe the approach used to geolocate with sub-pixel accuracy over nine years of data from MODIS on NASA's EOS Terra spacecraft and seven years of data from MODIS on the Aqua spacecraft. The approach uses a geometric model of the MODIS instruments, accurate navigation (orbit and attitude) data and an accurate Earth terrain model to compute the location of each MODIS pixel. The error analysis approach automatically matches MODIS imagery with a global set of over 1,000 ground control points from the finer-resolution Landsat satellite to measure static biases and trends in the MODIS geometric model parameters. Both within orbit and yearly thermally induced cyclic variations in the pointing have been found as well as a general long-term trend.

  18. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Full Text Available Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by ISO/IEC 17025:2005 (general requirements for the competence of testing and calibration laboratories) during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Federal Rule of Evidence 702 mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As scientific principles and methods may not undergo professional review by specialists in a given field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher risk of unfair decision-making, they should receive more attention than false-negative errors.

  19. Breast patient setup error assessment: comparison of electronic portal image devices and cone-beam computed tomography matching results

    NARCIS (Netherlands)

    Topolnjak, Rajko; Sonke, Jan-Jakob; Nijkamp, Jasper; Rasch, Coen; Minkema, Danny; Remeijer, Peter; van Vliet-Vroegindeweij, Corine

    2010-01-01

    To quantify the differences in setup errors measured with cone-beam computed tomography (CBCT) and electronic portal image devices (EPID) in breast cancer patients. Repeat CBCT scans were acquired for routine offline setup verification in 20 breast cancer patients. During the CBCT imaging

  20. A geometrical error in some Computer Programs based on the Aki-Christofferson-Husebye (ACH) Method of Teleseismic Tomography

    Science.gov (United States)

    Julian, B.R.; Evans, J.R.; Pritchard, M.J.; Foulger, G.R.

    2000-01-01

    Some computer programs based on the Aki-Christofferson-Husebye (ACH) method of teleseismic tomography contain an error caused by identifying local grid directions with azimuths on the spherical Earth. This error, which is most severe at high latitudes, introduces systematic errors into computed ray paths and distorts inferred Earth models. It is best dealt with by explicitly correcting for the difference between true and grid directions. Methods for computing these directions are presented in this article and are likely to be useful in many other kinds of regional geophysical studies that use Cartesian coordinates and flat-earth approximations.
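    As a hedged illustration of the correction idea (not the authors' code): for a flat Cartesian grid whose north axis is aligned with true north along a central meridian lon0, the first-order meridian convergence is roughly gamma = (lon - lon0)*sin(lat), which is exactly what makes the error grow at high latitudes.

        import numpy as np

        def grid_convergence(lat_deg, lon_deg, lon0_deg):
            """First-order meridian convergence: angle from grid north (aligned with
            true north on the central meridian lon0) to true north at (lat, lon)."""
            dlon = np.radians(lon_deg - lon0_deg)
            return dlon * np.sin(np.radians(lat_deg))   # radians, positive east of lon0

        def true_to_grid_azimuth(az_true_deg, lat_deg, lon_deg, lon0_deg):
            gamma = np.degrees(grid_convergence(lat_deg, lon_deg, lon0_deg))
            return az_true_deg - gamma

        # At 65 N and 10 degrees east of the central meridian, ignoring the correction
        # misaligns ray azimuths by roughly 9 degrees.
        print(np.degrees(grid_convergence(65.0, 10.0, 0.0)))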

  1. Analysis of error-prone survival data under additive hazards models: measurement error effects and adjustments.

    Science.gov (United States)

    Yan, Ying; Yi, Grace Y

    2016-07-01

    Covariate measurement error occurs commonly in survival analysis. Under the proportional hazards model, measurement error effects have been well studied, and various inference methods have been developed to correct for error effects under such a model. In contrast, error-contaminated survival data under the additive hazards model have received relatively less attention. In this paper, we investigate this problem by exploring measurement error effects on parameter estimation and the change of the hazard function. New insights of measurement error effects are revealed, as opposed to well-documented results for the Cox proportional hazards model. We propose a class of bias correction estimators that embraces certain existing estimators as special cases. In addition, we exploit the regression calibration method to reduce measurement error effects. Theoretical results for the developed methods are established, and numerical assessments are conducted to illustrate the finite sample performance of our methods.

  2. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not so. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces rather than a primary deficit in inhibition the preferred account for intrusion errors in free recall.

  3. A technique for human error analysis (ATHEANA)

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W. [and others

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge- base was developed which describes the links between performance shaping factors and resulting unsafe actions.

  4. A technique for human error analysis (ATHEANA)

    International Nuclear Information System (INIS)

    Cooper, S.E.; Ramey-Smith, A.M.; Wreathall, J.; Parry, G.W.

    1996-05-01

    Probabilistic risk assessment (PRA) has become an important tool in the nuclear power industry, both for the Nuclear Regulatory Commission (NRC) and the operating utilities. Human reliability analysis (HRA) is a critical element of PRA; however, limitations in the analysis of human actions in PRAs have long been recognized as a constraint when using PRA. A multidisciplinary HRA framework has been developed with the objective of providing a structured approach for analyzing operating experience and understanding nuclear plant safety, human error, and the underlying factors that affect them. The concepts of the framework have matured into a rudimentary working HRA method. A trial application of the method has demonstrated that it is possible to identify potentially significant human failure events from actual operating experience which are not generally included in current PRAs, as well as to identify associated performance shaping factors and plant conditions that have an observable impact on the frequency of core damage. A general process was developed, albeit in preliminary form, that addresses the iterative steps of defining human failure events and estimating their probabilities using search schemes. Additionally, a knowledge- base was developed which describes the links between performance shaping factors and resulting unsafe actions

  5. Analysis of Disparity Error for Stereo Autofocus.

    Science.gov (United States)

    Yang, Cheng-Chieh; Huang, Shao-Kang; Shih, Kuang-Tsu; Chen, Homer H

    2018-04-01

    As more and more stereo cameras are installed on electronic devices, we are motivated to investigate how to leverage disparity information for autofocus. The main challenge is that stereo images captured for disparity estimation are subject to defocus blur unless the lenses of the stereo cameras are at the in-focus position. Therefore, it is important to investigate how the presence of defocus blur would affect stereo matching and, in turn, the performance of disparity estimation. In this paper, we give an analytical treatment of this fundamental issue of disparity-based autofocus by examining the relation between image sharpness and disparity error. A statistical approach that treats the disparity estimate as a random variable is developed. Our analysis provides a theoretical backbone for the empirical observation that, regardless of the initial lens position, disparity-based autofocus can bring the lens to the hill zone of the focus profile in one movement. The insight gained from the analysis is useful for the implementation of an autofocus system.

  6. ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS

    OpenAIRE

    Rentauli Mariah Silalahi

    2015-01-01

    Students’ error analysis is very important for helping EFL teachers to develop their teaching materials, assessments and methods. However, it takes much time and effort from the teachers to do such an error analysis towards their students’ language. This study seeks to identify the common errors made by 1 class of 28 freshmen students studying English in their first semester in an IT university. The data is collected from their writing assignments for eight consecutive weeks. The errors found...

  7. Evaluation of reconstruction errors and identification of artefacts for JET gamma and neutron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Craciunescu, Teddy, E-mail: teddy.craciunescu@jet.uk; Tiseanu, Ion; Zoita, Vasile [National Institute for Laser, Plasma and Radiation Physics, Magurele-Bucharest (Romania); Murari, Andrea [Consorzio RFX, Padova (Italy); Kiptily, Vasily; Sharapov, Sergei [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); Lupelli, Ivan [CCFE Culham Science Centre, Abingdon, Oxon OX14 3DB (United Kingdom); University of Rome “Tor Vergata,” Roma (Italy); Fernandes, Ana [Instituto de Plasmas e Fusão Nuclear, Instituto Superior Técnico, Universidade de Lisboa, Lisboa (Portugal); Collaboration: EUROfusion Consortium, JET, Culham Science Centre, Abingdon OX14 3DB (United Kingdom)

    2016-01-15

    The Joint European Torus (JET) neutron profile monitor ensures 2D coverage of the gamma and neutron emissive region, which enables tomographic reconstruction. Due to the availability of only two projection angles and to the coarse sampling, tomographic inversion is a limited data set problem. Several techniques have been developed for tomographic reconstruction of the 2-D gamma and neutron emissivity on JET, but the problem of evaluating the errors associated with the reconstructed emissivity profile is still open. The reconstruction technique based on the maximum likelihood principle, which has already proved to be a powerful tool for JET tomography, has been used to develop a method for the numerical evaluation of the statistical properties of the uncertainties in gamma and neutron emissivity reconstructions. The image covariance calculation takes into account the additional techniques introduced in the reconstruction process for tackling the limited data set (projection resampling, smoothness regularization depending on the magnetic field). The method has been validated by numerical simulations and applied to JET data. Different sources of artefacts that may significantly influence the quality of reconstructions and the accuracy of variance calculation have been identified.
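    For context, a generic ML-EM (maximum likelihood expectation-maximization) update for emission tomography is sketched below on a toy system matrix; the JET implementation adds projection resampling and magnetic-field-dependent smoothness regularization that are not reproduced here.

        import numpy as np

        def mlem(A, y, n_iter=50, eps=1e-12):
            """Generic ML-EM update for emission tomography: y ~ Poisson(A @ x)."""
            m, n = A.shape
            x = np.ones(n)                      # non-negative initial guess
            sens = A.sum(axis=0) + eps          # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ x + eps              # forward projection
                x *= (A.T @ (y / proj)) / sens  # multiplicative EM update
            return x

        # Toy problem: 3 emissive pixels seen by 4 lines of sight (illustrative only).
        A = np.array([[1., 1., 0.],
                      [0., 1., 1.],
                      [1., 0., 1.],
                      [1., 1., 1.]])
        x_true = np.array([2., 5., 1.])
        y = np.random.poisson(A @ x_true)
        print(mlem(A, y))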

  8. Fixturing error measurement and analysis using CMMs

    International Nuclear Information System (INIS)

    Wang, Y; Chen, X; Gindy, N

    2005-01-01

    Influence of the fixture on the errors of a machined surface can be very significant. The machined surface errors generated during machining can be measured with a coordinate measuring machine (CMM) through the displacements of three coordinate systems on a fixture-workpiece pair in relation to the deviation of the machined surface. The surface errors consist of component movement, component twist, and the deviation between the actual machined surface and the defined tool path. A turbine blade fixture for a grinding operation is used as a case study

  9. Solar Tracking Error Analysis of Fresnel Reflector

    Directory of Open Access Journals (Sweden)

    Jiantao Zheng

    2014-01-01

    Full Text Available Based on the rotational structure of the Fresnel reflector, the rotation angle of the mirror was deduced under the eccentric condition. By analysing how the main factors influence the sun-tracking rotation angle error, the pattern and extent of their influence were revealed. It is concluded that the tracking error caused by the misalignment between the rotation axis and the true north meridian is, under certain conditions, largest at noon and decreases gradually through the morning and afternoon. The tracking error caused by other deviations, such as rotation eccentricity, latitude, and solar altitude, is positive in the morning, negative in the afternoon, and zero at a certain moment around noon.

  10. Nanoparticles displacement analysis using optical coherence tomography

    Science.gov (United States)

    Strąkowski, Marcin R.; Kraszewski, Maciej; Strąkowska, Paulina

    2016-03-01

    Optical coherence tomography (OCT) is a versatile optical method for cross-sectional and 3D imaging of biological and non-biological objects. Here we present the application of a polarization-sensitive spectroscopic OCT system (PS-SOCT) for quantitative measurements of materials containing nanoparticles. The PS-SOCT combines polarization-sensitive analysis with time-frequency analysis. In this contribution the benefits of combining time-frequency and polarization-sensitive analysis are described. The usefulness of PS-SOCT for nanoparticle evaluation is tested on nanocomposite materials containing TiO2 nanoparticles. The OCT measurement results were compared with SEM examination of the PMMA matrix with nanoparticles. The experiment showed that polarization-sensitive and spectroscopic OCT can be used to evaluate nanoparticle dispersion and size.

  11. Bayesian tomography and integrated data analysis in fusion diagnostics

    Science.gov (United States)

    Li, Dong; Dong, Y. B.; Deng, Wei; Shi, Z. B.; Fu, B. Z.; Gao, J. M.; Wang, T. B.; Zhou, Yan; Liu, Yi; Yang, Q. W.; Duan, X. R.

    2016-11-01

    In this article, a Bayesian tomography method using a non-stationary Gaussian process prior is introduced. The Bayesian formalism allows quantities that bear uncertainty to be expressed in probabilistic form, so that the uncertainty of a final solution can be fully resolved from the confidence interval of the posterior probability. Moreover, a consistency check of that solution can be performed by checking whether the misfits between predicted and measured data are reasonably within the assumed data error. In particular, the accuracy of reconstructions is significantly improved by using the non-stationary Gaussian process, which can adapt to the varying smoothness of the emission distribution. The implementation of this method for the soft X-ray diagnostic on HL-2A has been used to explore relevant physics in equilibrium and MHD instability modes. This project is carried out within a larger inference framework, aiming at an integrated analysis of heterogeneous diagnostics.
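    A minimal sketch of the linear-Gaussian special case of such a Bayesian inversion, using a stationary squared-exponential prior for simplicity (the paper's point is precisely that a non-stationary kernel, with a position-dependent length scale, does better); the toy geometry, amplitudes and noise level are invented for the demo.

        import numpy as np

        def gp_tomography_posterior(A, y, xs, sigma_d, amp=1.0, length=0.2):
            """Linear-Gaussian Bayesian inversion with a stationary squared-exponential
            GP prior on the emissivity profile; a non-stationary kernel would let
            `length` vary with position."""
            C = amp**2 * np.exp(-0.5 * (xs[:, None] - xs[None, :])**2 / length**2)
            S = sigma_d**2 * np.eye(len(y))
            K = A @ C @ A.T + S
            gain = C @ A.T @ np.linalg.solve(K, np.eye(len(y)))
            mean = gain @ y
            cov = C - gain @ A @ C                       # posterior covariance -> error bars
            return mean, np.sqrt(np.maximum(np.diag(cov), 0.0))

        # Toy line-integral geometry on a 1-D "profile" (illustrative only).
        xs = np.linspace(0, 1, 50)
        A = np.vstack([np.exp(-0.5 * (xs - c)**2 / 0.05**2) for c in (0.3, 0.5, 0.7)])
        true = np.exp(-0.5 * (xs - 0.5)**2 / 0.1**2)
        y = A @ true + 0.01 * np.random.randn(3)
        mean, std = gp_tomography_posterior(A, y, xs, sigma_d=0.01)
        # Consistency check in the spirit of the abstract: A @ mean should agree
        # with y to within roughly sigma_d.
        print(np.abs(A @ mean - y))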

  12. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with 8 cm spread out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
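    A hedged 1-D sketch of the repeated-delivery Monte Carlo idea: Gaussian pencil-beam spots are delivered many times with random position and intensity errors, and the per-voxel rms deviation from the planned dose is reported. The spot spacing, beam width and error magnitudes below are assumptions, not the measured Loma Linda beam model.

        import numpy as np

        rng = np.random.default_rng(0)
        voxels = np.arange(0., 80., 2.5)              # 1-D target, 2.5 mm voxels
        spots = np.arange(0., 80., 5.0)               # planned spot positions (mm), assumed
        sigma_spot = 5.0                              # pencil-beam width (mm), assumed

        def deliver(pos_err_mm=0.5, intens_err=0.01):
            """One simulated delivery with random spot-position and intensity errors."""
            dose = np.zeros_like(voxels)
            for s in spots:
                p = s + rng.normal(0., pos_err_mm)
                w = 1.0 + rng.normal(0., intens_err)
                dose += w * np.exp(-0.5 * ((voxels - p) / sigma_spot) ** 2)
            return dose

        planned = sum(np.exp(-0.5 * ((voxels - s) / sigma_spot) ** 2) for s in spots)
        runs = np.array([deliver() for _ in range(200)])
        rms_pct = 100.0 * np.sqrt(((runs - planned) ** 2).mean(axis=0)) / planned.max()
        print("max per-voxel rms dose error: %.2f %% of the planned dose" % rms_pct.max())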

  13. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors, the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval declines, the along-track prediction errors, and the amplitudes of the radial and cross-track errors, increase.

  14. Procedural errors during root canal preparation using rotary NiTi instruments detected by periapical radiography and cone beam computed tomography.

    Science.gov (United States)

    de Alencar, Ana Helena Gonçalves; Dummer, Paul M H; Oliveira, Henrique César Marçal; Pécora, Jesus Djalma; Estrela, Carlos

    2010-01-01

    This study used two imaging methods to detect procedural errors created by rotary nickel-titanium (NiTi) instruments during root canal preparation. Forty extracted human maxillary and mandibular molars were divided randomly into two groups and treated by two endodontists (n=10) and two undergraduate dental students (n=10). The ProTaper Universal Rotary System was used to shape the canals, which were then filled using AH Plus sealer and gutta-percha. Periapical radiographs (PR) and cone beam computed tomography (CBCT) images were obtained and evaluated by two examiners to verify the occurrence of procedural errors (fractured instruments, perforations, and canal transportation). The Chi-square test at the 0.05 level of significance was used for statistical analyses. There were no significant differences (p>0.05) between the imaging methods. In the analysis of procedural errors, there was no significant difference (p>0.05) between the groups of operators (endodontists vs. students) nor between tooth groups (maxillary molars vs. mandibular molars). In view of the low incidence of procedural errors during root canal preparation performed by students, the introduction of rotary NiTi instruments has potential in undergraduate teaching. PR and CBCT permitted the detection of procedural errors, but the CBCT images offer more resources for diagnosis.
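    The statistical comparison reduces to a chi-square test on a contingency table; a sketch with purely hypothetical counts (the study's actual counts are not reproduced here):

        from scipy.stats import chi2_contingency

        # Hypothetical 2 x 2 table (canals with / without a detected procedural error).
        table = [[3, 37],   # periapical radiography
                 [5, 35]]   # cone beam computed tomography
        chi2, p, dof, expected = chi2_contingency(table)
        print("chi-square = %.2f, p = %.3f (alpha = 0.05)" % (chi2, p))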

  15. ERROR ANALYSIS ON INFORMATION AND TECHNOLOGY STUDENTS’ SENTENCE WRITING ASSIGNMENTS

    Directory of Open Access Journals (Sweden)

    Rentauli Mariah Silalahi

    2015-03-01

    Full Text Available Students’ error analysis is very important for helping EFL teachers to develop their teaching materials, assessments and methods. However, it takes much time and effort for teachers to carry out such an error analysis of their students’ language. This study seeks to identify the common errors made by one class of 28 freshman students studying English in their first semester at an IT university. The data were collected from their writing assignments over eight consecutive weeks. The errors found were classified into 24 types, and the ten most common errors committed by the students concerned articles, prepositions, spelling, word choice, subject-verb agreement, auxiliary verbs, plural forms, verb forms, capital letters, and meaningless sentences. The findings about the students’ frequency of committing errors were then contrasted with their midterm test results, and in order to find out the reasons behind the error recurrence, the students were given a set of questions in a questionnaire format. Most of the students admitted that carelessness was the major reason for their errors, with lack of understanding coming next. This study suggests that EFL teachers devote time to continuously checking their students’ language by giving corrections so that the students can learn from their errors and stop committing the same errors.

  16. Analysis of airways in computed tomography

    DEFF Research Database (Denmark)

    Petersen, Jens

    Chronic Obstructive Pulmonary Disease (COPD) is a major cause of death and disability world-wide. It affects lung function through destruction of lung tissue known as emphysema and inflammation of airways, leading to thickened airway walls and narrowed airway lumen. Computed Tomography (CT) imaging...... have become the standard with which to assess emphysema extent but airway abnormalities have so far been more challenging to quantify. Automated methods for analysis are indispensable as the visible airway tree in a CT scan can include several hundreds of individual branches. However, automation...... the Danish Lung Cancer Screening Trial. This includes methods for extracting airway surfaces from the images and ways of achieving comparable measurements in airway branches through matching and anatomical labelling. The methods were used to study effects of differences in inspiration level at the time

  17. Analysis of mesenteric thickening on computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Takano, Hideyuki; Sekiya, Tohru; Miyakawa, Kunihisa; Ozaki, Masatoki; Katsuyama, Naofumi; Nakano, Masao (University of the Ryukyu, Okinawa (Japan). School of Medicine)

    1990-12-01

    Computed tomography (CT) provides noninvasive information in the evaluation of abnormalities of the gastrointestinal tract by direct imaging of the bowel wall and adjacent mesentery. Several prior studies have discussed the variable CT appearances of mesenteric abnormalities, such as lymphoma, metastasis, inflammatory disease and edema. Although mesenteric thickening was mentioned in these studies, no study has provided a detailed analysis of the CT appearance of the thickened mesentery. Two characteristic types of mesenteric thickening were identified in 47 patients. Type I is 'intramesenteric thickening', which was noted in 25 patients with vascular obstruction, inflammatory disease and edema. Type II is 'mesenteric surface thickening', which was noted in 22 patients with peritonitis carcinomatosa, peritoneal mesothelioma, tuberculous peritonitis and pseudomyxoma peritonei. An understanding of these two types of mesenteric diseases is important in the identification of mesenteric pathology. (author).

  18. Analysis of mesenteric thickening on computed tomography

    International Nuclear Information System (INIS)

    Takano, Hideyuki; Sekiya, Tohru; Miyakawa, Kunihisa; Ozaki, Masatoki; Katsuyama, Naofumi; Nakano, Masao

    1990-01-01

    Computed tomography (CT) provides noninvasive information in the evaluation of abnormalities of the gastrointestinal tract by direct imaging of the bowel wall and adjacent mesentery. Several prior studies have discussed the variable CT appearances of mesenteric abnormalities, such as lymphoma, metastasis, inflammatory disease and edema. Although mesenteric thickening was mentioned in these studies, no study has provided a detailed analysis of the CT appearance of the thickened mesentery. Two characteristic types of mesenteric thickening were identified in 47 patients. Type I is 'intramesenteric thickening', which was noted in 25 patients with vascular obstruction, inflammatory disease and edema. Type II is 'mesenteric surface thickening', which was noted in 22 patients with peritonitis carcinomatosa, peritoneal mesothelioma, tuberculous peritonitis and pseudomyxoma peritonei. An understanding of these two types of mesenteric diseases is important in the identification of mesenteric pathology. (author)

  19. ERROR CONVERGENCE ANALYSIS FOR LOCAL HYPERTHERMIA APPLICATIONS

    Directory of Open Access Journals (Sweden)

    NEERU MALHOTRA

    2016-01-01

    Full Text Available The accuracy of the numerical solution for an electromagnetic problem is greatly influenced by the convergence of the solution obtained. In order to quantify the correctness of the numerical solution, the errors produced in solving the partial differential equations have to be analyzed. Mesh quality is another parameter that affects convergence. The various quality metrics depend on the type of solver used for numerical simulation. The paper focuses on comparing the performance of iterative solvers used in COMSOL Multiphysics software. The modeling of a coaxial-coupled waveguide applicator operating at 485 MHz has been done for local hyperthermia applications using an adaptive finite element method. The 3D heat distribution within the muscle phantom, depicting a spherical lesion and a localized heating pattern, confirms the proper selection of the solver. The convergence plots are obtained during simulation of the problem using GMRES (generalized minimal residual) and geometric multigrid linear iterative solvers. The best error convergence is achieved by using the nonlinear multigrid solver and further introducing adaptivity in the nonlinear solver.

  20. Analysis of Medication Errors in Simulated Pediatric Resuscitation by Residents

    Directory of Open Access Journals (Sweden)

    Evelyn Porter

    2014-07-01

    Full Text Available Introduction: The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and identify factors associated with these errors during the simulated resuscitation of a critically ill child. Methods: The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared univariate analysis of each variable to medication error rate and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. Results: We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7% - 39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01 - 0.64). Conclusion: Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.

  1. Analysis of medication errors in simulated pediatric resuscitation by residents.

    Science.gov (United States)

    Porter, Evelyn; Barcega, Besh; Kim, Tommy Y

    2014-07-01

    The objective of our study was to estimate the incidence of prescribing medication errors specifically made by a trainee and identify factors associated with these errors during the simulated resuscitation of a critically ill child. The results of the simulated resuscitation are described. We analyzed data from the simulated resuscitation for the occurrence of a prescribing medication error. We compared univariate analysis of each variable to medication error rate and performed a separate multiple logistic regression analysis on the significant univariate variables to assess the association between the selected variables. We reviewed 49 simulated resuscitations. The final medication error rate for the simulation was 26.5% (95% CI 13.7% - 39.3%). On univariate analysis, statistically significant findings for decreased prescribing medication error rates included senior residents in charge, presence of a pharmacist, sleeping greater than 8 hours prior to the simulation, and a visual analog scale score showing more confidence in caring for critically ill children. Multiple logistic regression analysis using the above significant variables showed only the presence of a pharmacist to remain significantly associated with decreased medication error, odds ratio of 0.09 (95% CI 0.01 - 0.64). Our results indicate that the presence of a clinical pharmacist during the resuscitation of a critically ill child reduces the medication errors made by resident physician trainees.
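    A sketch of how such an adjusted odds ratio is typically obtained with multiple logistic regression; the data below are synthetic and larger than the 49 real simulations, purely for numerical stability, and the coefficients are invented rather than taken from the study.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(1)
        n = 200
        pharmacist = rng.integers(0, 2, n)                 # pharmacist present (0/1)
        senior = rng.integers(0, 2, n)                     # senior resident in charge (0/1)
        # Synthetic outcome: an error is less likely when a pharmacist is present.
        logit = -0.5 - 2.0 * pharmacist - 0.5 * senior
        error = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

        X = sm.add_constant(np.column_stack([pharmacist, senior]))
        fit = sm.Logit(error, X).fit(disp=0)
        print("odds ratios:", np.exp(fit.params))          # index 1 = pharmacist presence
        print("95% CI:", np.exp(fit.conf_int()))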

  2. Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...

    Indian Academy of Sciences (India)

    Q. F. Zhang, Q. Y. Peng & J. H. Fan ... In astronomy, some tasks require performing the coordinate transformation between two tangent planes in ... Based on these parameters, we get maximum errors in ...

  3. Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...

    Indian Academy of Sciences (India)

    In this paper, the error analysis has been done for the linear approximate transformation between two tangent planes on the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, so the ...

  4. Error Analysis on Plane-to-Plane Linear Approximate Coordinate ...

    Indian Academy of Sciences (India)

    2016-01-27

    In this paper, the error analysis has been done for the linear approximate transformation between two tangent planes on the celestial sphere in a simple case. The results demonstrate that the error from the linear transformation does not meet the requirement of high-precision astrometry under some conditions, ...

  5. Implications of Error Analysis Studies for Academic Interventions

    Science.gov (United States)

    Mather, Nancy; Wendling, Barbara J.

    2017-01-01

    We reviewed 13 studies that focused on analyzing student errors on achievement tests from the Kaufman Test of Educational Achievement-Third edition (KTEA-3). The intent was to determine what instructional implications could be derived from in-depth error analysis. As we reviewed these studies, several themes emerged. We explain how a careful…

  6. Lower extremity angle measurement with accelerometers - error and sensitivity analysis

    NARCIS (Netherlands)

    Willemsen, A.T.M.; Willemsen, Antoon Th.M.; Frigo, Carlo; Boom, H.B.K.

    1991-01-01

    The use of accelerometers for angle assessment of the lower extremities is investigated. This method is evaluated by an error-and-sensitivity analysis using healthy subject data. Of three potential error sources (the reference system, the accelerometers, and the model assumptions) the last is found

  7. Real-time analysis for Stochastic errors of MEMS gyro

    Science.gov (United States)

    Miao, Zhiyong; Shi, Hongyang; Zhang, Yi

    2017-10-01

    A good knowledge of MEMS gyro stochastic errors is critical to a MEMS INS/GPS integration system; therefore, these errors should be accurately modeled and identified. The Allan variance method is the IEEE standard method for analysing the stochastic errors of a gyro. This kind of method can fully characterize the random character of stochastic errors. However, it requires a large amount of data to be stored, resulting in a large offline computational burden. Moreover, it involves a tedious procedure of fitting slope lines for estimation. To overcome these barriers, a simple linear state-space model was established for the MEMS gyro. Then, a recursive EM algorithm was implemented to estimate the stochastic errors of the MEMS gyro in real time. Experimental results with an ADIS16405 IMU show that the real-time estimates of the proposed approach are well within the error limits of the Allan variance method. Moreover, the proposed method effectively avoids storing large amounts of data.
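    For reference, the classical (non-overlapped) Allan variance against which the recursive EM estimator is benchmarked can be computed as below; the state-space/EM method itself is not reproduced here, and the synthetic gyro signal is an assumption for the demo.

        import numpy as np

        def allan_variance(rate, fs, m_list):
            """Non-overlapped Allan variance of a rate signal sampled at fs Hz,
            for a list of cluster sizes m (averaging times tau = m / fs)."""
            out = []
            for m in m_list:
                n = len(rate) // m
                bins = rate[:n * m].reshape(n, m).mean(axis=1)   # cluster averages
                avar = 0.5 * np.mean(np.diff(bins) ** 2)
                out.append((m / fs, avar))
            return np.array(out)

        # Synthetic gyro signal: white angle-random-walk noise plus a slow drift.
        fs = 100.0
        t = np.arange(0, 3600, 1 / fs)
        rate = 0.01 * np.random.randn(t.size) + 1e-5 * t
        taus_avars = allan_variance(rate, fs, m_list=[10, 100, 1000, 10000])
        print(taus_avars)   # the slope of log(avar) vs log(tau) identifies the noise terms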

  8. An error analysis perspective for patient alignment systems.

    Science.gov (United States)

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
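    A toy numerical illustration of the propagation idea: independent error components are combined in quadrature, with the tracking system's rotational error amplified by the lever arm between tracker and probe, as noted above. All magnitudes below are invented, not taken from the paper.

        import numpy as np

        # Hypothetical 1-sigma error components of a registration/tracking chain (mm).
        calibration = 0.5
        image_to_image = 0.8
        tracker_translation = 0.3
        tracker_rotation_deg = 0.2           # rotational error of the tracking system
        lever_arm_mm = 150.0                 # distance probe <-> tracked reference

        # A small rotation theta displaces a point at distance r by about r * theta.
        rotation_effect = lever_arm_mm * np.radians(tracker_rotation_deg)

        total = np.sqrt(calibration**2 + image_to_image**2 +
                        tracker_translation**2 + rotation_effect**2)
        print("rotation-induced component: %.2f mm, total 1-sigma error: %.2f mm"
              % (rotation_effect, total))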

  9. Error and Uncertainty Analysis for Ecological Modeling and Simulation

    National Research Council Canada - National Science Library

    Gertner, George

    1998-01-01

    The main objectives of this project are a) to develop a general methodology for conducting sensitivity and uncertainty analysis and building error budgets in simulation modeling over space and time; and b...

  10. Error analysis of large aperture static interference imaging spectrometer

    Science.gov (United States)

    Li, Fan; Zhang, Guo

    2015-12-01

    Large Aperture Static Interference Imaging Spectrometer is a new type of spectrometer with light structure, high spectral linearity, high luminous flux and wide spectral range, etc ,which overcomes the contradiction between high flux and high stability so that enables important values in science studies and applications. However, there're different error laws in imaging process of LASIS due to its different imaging style from traditional imaging spectrometers, correspondingly, its data processing is complicated. In order to improve accuracy of spectrum detection and serve for quantitative analysis and monitoring of topographical surface feature, the error law of LASIS imaging is supposed to be learned. In this paper, the LASIS errors are classified as interferogram error, radiometric correction error and spectral inversion error, and each type of error is analyzed and studied. Finally, a case study of Yaogan-14 is proposed, in which the interferogram error of LASIS by time and space combined modulation is mainly experimented and analyzed, as well as the errors from process of radiometric correction and spectral inversion.

  11. Analysis of Employee's Survey for Preventing Human-Errors

    International Nuclear Information System (INIS)

    Sung, Chanho; Kim, Younggab; Joung, Sanghoun

    2013-01-01

    Human errors in nuclear power plant can cause large and small events or incidents. These events or incidents are one of main contributors of reactor trip and might threaten the safety of nuclear plants. To prevent human-errors, KHNP(nuclear power plants) introduced 'Human-error prevention techniques' and have applied the techniques to main parts such as plant operation, operation support, and maintenance and engineering. This paper proposes the methods to prevent and reduce human-errors in nuclear power plants through analyzing survey results which includes the utilization of the human-error prevention techniques and the employees' awareness of preventing human-errors. With regard to human-error prevention, this survey analysis presented the status of the human-error prevention techniques and the employees' awareness of preventing human-errors. Employees' understanding and utilization of the techniques was generally high and training level of employee and training effect on actual works were in good condition. Also, employees answered that the root causes of human-error were due to working environment including tight process, manpower shortage, and excessive mission rather than personal negligence or lack of personal knowledge. Consideration of working environment is certainly needed. At the present time, based on analyzing this survey, the best methods of preventing human-error are personal equipment, training/education substantiality, private mental health check before starting work, prohibit of multiple task performing, compliance with procedures, and enhancement of job site review. However, the most important and basic things for preventing human-error are interests of workers and organizational atmosphere such as communication between managers and workers, and communication between employees and bosses

  12. Attitude Determination Error Analysis System (ADEAS) mathematical specifications document

    Science.gov (United States)

    Nicholson, Mark; Markley, F.; Seidewitz, E.

    1988-01-01

    The mathematical specifications of Release 4.0 of the Attitude Determination Error Analysis System (ADEAS), which provides a general-purpose linear error analysis capability for various spacecraft attitude geometries and determination processes, are presented. The analytical basis of the system is presented, and detailed equations are provided for both three-axis-stabilized and spin-stabilized attitude sensor models.

  13. Evaluation of in-vivo measurement errors associated with micro-computed tomography scans by means of the bone surface distance approach.

    Science.gov (United States)

    Lu, Yongtao; Boudiffa, Maya; Dall'Ara, Enrico; Bellantuono, Ilaria; Viceconti, Marco

    2015-11-01

    In vivo micro-computed tomography (µCT) scanning is an important tool for longitudinal monitoring of the bone adaptation process in animal models. However, the errors associated with the usage of in vivo µCT measurements for the evaluation of bone adaptations remain unclear. The aim of this study was to evaluate the measurement errors using the bone surface distance approach. The right tibiae of eight 14-week-old C57BL/6 J female mice were consecutively scanned four times in an in vivo µCT scanner using a nominal isotropic image voxel size (10.4 µm) and the tibiae were repositioned between each scan. The repeated scan image datasets were aligned to the corresponding baseline (first) scan image dataset using rigid registration and a region of interest was selected in the proximal tibia metaphysis for analysis. The bone surface distances between the repeated and the baseline scan datasets were evaluated. It was found that the average (±standard deviation) median and 95th percentile bone surface distances were 3.10 ± 0.76 µm and 9.58 ± 1.70 µm, respectively. This study indicated that there were inevitable errors associated with the in vivo µCT measurements of bone microarchitecture and these errors should be taken into account for a better interpretation of bone adaptations measured with in vivo µCT. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
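    A minimal sketch of the bone surface distance approach on toy data: after rigid registration, each vertex of the repeated-scan surface is assigned the distance to its nearest baseline-surface vertex, and the median and 95th percentile are reported. The spherical "surface" and noise level below are assumptions, not the study's data.

        import numpy as np
        from scipy.spatial import cKDTree

        def surface_distance_stats(surf_repeat, surf_baseline):
            """Median and 95th-percentile distance from each vertex of the repeated-scan
            surface to the nearest vertex of the baseline surface (after rigid
            registration of the two scans)."""
            d, _ = cKDTree(surf_baseline).query(surf_repeat)
            return np.median(d), np.percentile(d, 95)

        # Toy surfaces (micrometres): a sphere plus ~3 um of measurement noise.
        rng = np.random.default_rng(0)
        dirs = rng.normal(size=(5000, 3))
        dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
        baseline = 500.0 * dirs
        repeat = baseline + rng.normal(0.0, 3.0, baseline.shape)
        print(surface_distance_stats(repeat, baseline))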

  14. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors and the number of files radiated, including the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these variables. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as the critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
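    One reasonable, but hypothetical, realisation of such a model is a Poisson regression of error counts with the number of files radiated as exposure; the data and coefficients below are synthetic and do not reproduce the JPL dataset or the authors' chosen model form.

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 60                                          # hypothetical reporting periods
        files = rng.integers(20, 120, n)                # command files radiated (exposure)
        workload = rng.normal(0.0, 1.0, n)              # subjective workload (standardised)
        novelty = rng.normal(0.0, 1.0, n)               # operational novelty (standardised)
        rate = np.exp(-4.0 + 0.3 * workload + 0.5 * novelty)   # synthetic errors per file
        errors = rng.poisson(rate * files)

        X = sm.add_constant(np.column_stack([workload, novelty]))
        fit = sm.GLM(errors, X, family=sm.families.Poisson(), exposure=files).fit()
        print(fit.summary())                            # coefficients on the log-rate scale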

  15. Errors of DWPF frit analysis: Final report

    International Nuclear Information System (INIS)

    Schumacher, R.F.

    1993-01-01

    Glass frit will be a major raw material for the operation of the Defense Waste Processing Facility. The frit will be controlled by certificate of conformance and a confirmatory analysis from a commercial analytical laboratory. The following effort provides additional quantitative information on the variability of frit chemical analyses at two commercial laboratories. Identical samples of IDMS Frit 202 were chemically analyzed at two commercial laboratories and at three different times over a period of four months. The SRL-ADS analyses, after correction with the reference standard and normalization, provided confirmatory information, but did not detect the low silica level in one of the frit samples. A methodology utilizing elliptical limits for confirming the certificate of conformance or confirmatory analysis was introduced and recommended for use when the analysis values are close but not within the specification limits. It was also suggested that the lithia specification limits might be reduced as long as CELS is used to confirm the analysis

  16. Bayesian Total Error Analysis - An Error Sensitive Approach to Model Calibration

    Science.gov (United States)

    Franks, S. W.; Kavetski, D.; Kuczera, G.

    2002-12-01

    The majority of environmental models require calibration of their parameters before meaningful predictions of catchment behaviour can be made. Despite the importance of reliable parameter estimates, there are growing concerns about the ability of objective-based inference methods to adequately calibrate environmental models. The problem lies with the formulation of the objective or likelihood function, which is currently implemented using essentially ad-hoc methods. We outline limitations of current calibration methodologies and introduce a more systematic Bayesian Total Error Analysis (BATEA) framework for environmental model calibration and validation, which imposes a hitherto missing rigour in environmental modelling by requiring the specification of physically realistic model and data uncertainty models with explicit assumptions that can and must be tested against available evidence. The BATEA formalism enables inference of the hydrological parameters and also of any latent variables of the uncertainty models, e.g., precipitation depth errors. The latter could be useful for improving data sampling and measurement methodologies. In addition, distinguishing between the various sources of errors will reduce the current ambiguity about parameter and predictive uncertainty and enable rational testing of environmental models' hypotheses. Markov chain Monte Carlo methods are employed to manage the increased computational requirements of BATEA. A case study using synthetic data demonstrates that explicitly accounting for forcing errors leads to immediate advantages over traditional regression approaches (e.g., standard least squares calibration), which ignore rainfall history corruption, and over pseudo-likelihood methods (e.g., GLUE), which do not explicitly characterise data and model errors. It is precisely data and model errors that are responsible for the need for calibration in the first place; we expect that understanding these errors will force fundamental shifts in the model

  17. Error Consistency Analysis Scheme for Infrared Ultraspectral Sounding Retrieval Error Budget Estimation

    Science.gov (United States)

    Zhou, Daniel K.; Larar, Allen M.; Liu, Xu; Smith, William L.; Strow, Larry, L.

    2013-01-01

    Great effort has been devoted towards validating geophysical parameters retrieved from ultraspectral infrared radiances obtained from satellite remote sensors. An error consistency analysis scheme (ECAS), utilizing fast radiative transfer model (RTM) forward and inverse calculations, has been developed to estimate the error budget in terms of mean difference and standard deviation of error in both spectral radiance and retrieval domains. The retrieval error is assessed through ECAS without relying on other independent measurements such as radiosonde data. ECAS establishes a link between the accuracies of radiances and retrieved geophysical parameters. ECAS can be applied to measurements from any ultraspectral instrument and any retrieval scheme with its associated RTM. In this manuscript, ECAS is described and demonstrated with measurements from the MetOp-A satellite Infrared Atmospheric Sounding Interferometer (IASI). This scheme can be used together with other validation methodologies to give a more definitive characterization of the error and/or uncertainty of geophysical parameters retrieved from ultraspectral radiances observed from current and future satellite remote sensors such as IASI, the Atmospheric Infrared Sounder (AIRS), and the Cross-track Infrared Sounder (CrIS).

  18. Grinding Method and Error Analysis of Eccentric Shaft Parts

    Science.gov (United States)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and there is now a demand for precision grinding technology for such parts. In this paper, the X-C linkage model of eccentric shaft grinding is studied; by an inversion method, the contour curve of the wheel envelope is deduced, and the distance from the center of the eccentric circle is shown to be constant. Simulation software for eccentric shaft grinding was developed and the correctness of the model was verified. The influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process was analyzed, and a corresponding error calculation model was proposed. The simulation analysis provides the basis for contour error compensation.

  19. Compensation for geometric modeling errors by positioning of electrodes in electrical impedance tomography

    DEFF Research Database (Denmark)

    Hyvönen, N.; Majander, H.; Staboulis, Stratos

    2017-01-01

    Electrical impedance tomography aims at reconstructing the conductivity inside a physical body from boundary measurements of current and voltage at a finite number of contact electrodes. In many practical applications, the shape of the imaged object is subject to considerable uncertainties...

  20. The use of error analysis to assess resident performance.

    Science.gov (United States)

    D'Angelo, Anne-Lise D; Law, Katherine E; Cohen, Elaine R; Greenberg, Jacob A; Kwan, Calvin; Greenberg, Caprice; Wiegmann, Douglas A; Pugh, Carla M

    2015-11-01

    The aim of this study was to assess validity of a human factors error assessment method for evaluating resident performance during a simulated operative procedure. Seven postgraduate year 4-5 residents had 30 minutes to complete a simulated laparoscopic ventral hernia (LVH) repair on day 1 of a national, advanced laparoscopic course. Faculty provided immediate feedback on operative errors and residents participated in a final product analysis of their repairs. Residents then received didactic and hands-on training regarding several advanced laparoscopic procedures during a lecture session and animate lab. On day 2, residents performed a nonequivalent LVH repair using a simulator. Three investigators reviewed and coded videos of the repairs using previously developed human error classification systems. Residents committed 121 total errors on day 1 compared with 146 on day 2. One of 7 residents successfully completed the LVH repair on day 1 compared with all 7 residents on day 2 (P = .001). The majority of errors (85%) committed on day 2 were technical and occurred during the last 2 steps of the procedure. There were significant differences in error type (P ≤ .001) and level (P = .019) from day 1 to day 2. The proportion of omission errors decreased from day 1 (33%) to day 2 (14%). In addition, there were more technical and commission errors on day 2. The error assessment tool was successful in categorizing performance errors, supporting known-groups validity evidence. Evaluating resident performance through error classification has great potential in facilitating our understanding of operative readiness. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. An Analysis of Medication Errors at the Military Medical Center: Implications for a Systems Approach for Error Reduction

    National Research Council Canada - National Science Library

    Scheirman, Katherine

    2001-01-01

    An analysis was accomplished of all inpatient medication errors at a military academic medical center during the year 2000, based on the causes of medication errors as described by current research in the field...

  2. Sensitivity analysis of geometric errors in additive manufacturing medical models.

    Science.gov (United States)

    Pinto, Jose Miguel; Arrieta, Cristobal; Andia, Marcelo E; Uribe, Sergio; Ramos-Grez, Jorge; Vargas, Alex; Irarrazaval, Pablo; Tejos, Cristian

    2015-03-01

    Additive manufacturing (AM) models are used in medical applications for surgical planning, prosthesis design and teaching. For these applications, the accuracy of the AM models is essential. Unfortunately, this accuracy is compromised due to errors introduced by each of the building steps: image acquisition, segmentation, triangulation, printing and infiltration. However, the contribution of each step to the final error remains unclear. We performed a sensitivity analysis comparing errors obtained from a reference with those obtained modifying parameters of each building step. Our analysis considered global indexes to evaluate the overall error, and local indexes to show how this error is distributed along the surface of the AM models. Our results show that the standard building process tends to overestimate the AM models, i.e. models are larger than the original structures. They also show that the triangulation resolution and the segmentation threshold are critical factors, and that the errors are concentrated at regions with high curvatures. Errors could be reduced choosing better triangulation and printing resolutions, but there is an important need for modifying some of the standard building processes, particularly the segmentation algorithms. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  3. Multiobjective optimization framework for landmark measurement error correction in three-dimensional cephalometric tomography.

    Science.gov (United States)

    DeCesare, A; Secanell, M; Lagravère, M O; Carey, J

    2013-01-01

    The purpose of this study is to minimize errors that occur when using a four vs six landmark superimpositioning method in the cranial base to define the co-ordinate system. Cone beam CT volumetric data from ten patients were used for this study. Co-ordinate system transformations were performed. A co-ordinate system was constructed using two planes defined by four anatomical landmarks located by an orthodontist. A second co-ordinate system was constructed using four anatomical landmarks that are corrected using a numerical optimization algorithm for any landmark location operator error using information from six landmarks. The optimization algorithm minimizes the relative distance and angle between the known fixed points in the two images to find the correction. Measurement errors and co-ordinates in all axes were obtained for each co-ordinate system. Significant improvement is observed after using the landmark correction algorithm to position the final co-ordinate system. The errors found in a previous study are significantly reduced. Errors found were between 1 mm and 2 mm. When analysing real patient data, it was found that the 6-point correction algorithm reduced errors between images and increased intrapoint reliability. A novel method of optimizing the overlay of three-dimensional images using a 6-point correction algorithm was introduced and examined. This method demonstrated greater reliability and reproducibility than the previous 4-point correction algorithm.

  4. Error analysis of two methods for range-images registration

    Science.gov (United States)

    Liu, Xiaoli; Yin, Yongkai; Li, Ameng; He, Dong; Peng, Xiang

    2010-08-01

    With the improvements in range image registration techniques, this paper focuses on an error analysis of two registration methods generally applied in industrial metrology, covering algorithm comparison, matching error, computational complexity and different application areas. One method is the iterative closest point algorithm, which can achieve accurate matching results with little error; however, some limitations influence its application in automatic and fast metrology. The other method is based on landmarks. We also present an algorithm for registering multiple range images with non-coding landmarks, including landmark auto-identification and sub-pixel location, 3D rigid motion, point pattern matching, and global iterative optimization techniques. The registration results of the two methods are illustrated and a thorough error analysis is performed.
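
    For the first of the two methods, a bare-bones iterative closest point loop is sketched below; real implementations add correspondence rejection, weighting and convergence tests, and the toy point clouds here are assumptions for illustration only.

```python
# Bare-bones iterative closest point (ICP) sketch for two range images
# represented as point clouds. Only the match/fit/apply loop is shown.
import numpy as np
from scipy.spatial import cKDTree

def best_rigid(src, dst):
    sc, dc = src.mean(0), dst.mean(0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dc - R @ sc

def icp(source, target, iterations=30):
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)          # closest-point correspondences
        R, t = best_rigid(current, target[idx])
        current = current @ R.T + t           # apply the incremental motion
    return current

rng = np.random.default_rng(1)
target = rng.uniform(-1, 1, size=(2000, 3))
angle = np.deg2rad(5.0)                       # small misalignment between the two scans
Rz = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0, 0.0, 1.0]])
source = target @ Rz.T + np.array([0.05, -0.02, 0.01])
aligned = icp(source, target)
print("mean residual after ICP:", np.linalg.norm(aligned - target, axis=1).mean())
```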

  5. Human error identification and analysis implicitly determined from LERs

    International Nuclear Information System (INIS)

    Luckas, W.J. Jr.; Speaker, D.M.

    1983-01-01

    As part of an ongoing effort to quantify human error using modified task analysis on Licensee Event Report (LER) system data, the initial results have been presented and documented in NUREG/CR-1880 and -2416. These results indicate the relatively important need for in-depth analysis of LERs to obtain a more realistic assessment of human-error-caused events than those explicitly identified in the LERs themselves.

  6. Ionospheric error analysis in gps measurements

    Directory of Open Access Journals (Sweden)

    G. Pugliano

    2008-06-01

    The results of an experiment aimed at evaluating the effects of the ionosphere on GPS positioning applications are presented in this paper. Specifically, the study, based upon a differential approach, was conducted utilizing GPS measurements acquired by various receivers located at increasing inter-distances. The experimental research was developed upon the basis of two groups of baselines: the first group comprises "short" baselines (less than 10 km); the second group is characterized by greater distances (up to 90 km). The obtained results were compared either upon the basis of the geometric characteristics, for six different baseline lengths, using 24 hours of data, or upon temporal variations, by examining two periods of varying intensity in ionospheric activity, respectively coinciding with the maximum of the 23rd solar cycle and with conditions of low ionospheric activity. The analysis revealed variations in terms of inter-distance as well as different performances primarily owing to temporal modifications in the state of the ionosphere.

  7. Formal Analysis of Soft Errors using Theorem Proving

    Directory of Open Access Journals (Sweden)

    Sofiène Tahar

    2013-07-01

    Modeling and analysis of soft errors in electronic circuits has traditionally been done using computer simulations. Computer simulations cannot guarantee correctness of analysis because they utilize approximate real number representations and pseudo random numbers in the analysis and thus are not well suited for analyzing safety-critical applications. In this paper, we present a higher-order logic theorem proving based method for modeling and analysis of soft errors in electronic circuits. Our developed infrastructure includes formalized continuous random variable pairs, their Cumulative Distribution Function (CDF) properties and independent standard uniform and Gaussian random variables. We illustrate the usefulness of our approach by modeling and analyzing soft errors in commonly used dynamic random access memory sense amplifier circuits.

  8. Cloud retrieval using infrared sounder data - Error analysis

    Science.gov (United States)

    Wielicki, B. A.; Coakley, J. A., Jr.

    1981-01-01

    An error analysis is presented for cloud-top pressure and cloud-amount retrieval using infrared sounder data. Rms and bias errors are determined for instrument noise (typical of the HIRS-2 instrument on Tiros-N) and for uncertainties in the temperature profiles and water vapor profiles used to estimate clear-sky radiances. Errors are determined for a range of test cloud amounts (0.1-1.0) and cloud-top pressures (920-100 mb). Rms errors vary by an order of magnitude depending on the cloud height and cloud amount within the satellite's field of view. Large bias errors are found for low-altitude clouds. These bias errors are shown to result from physical constraints placed on retrieved cloud properties, i.e., cloud amounts between 0.0 and 1.0 and cloud-top pressures between the ground and tropopause levels. Middle-level and high-level clouds (above 3-4 km) are retrieved with low bias and rms errors.

  9. Application of human error analysis to aviation and space operations

    Energy Technology Data Exchange (ETDEWEB)

    Nelson, W.R.

    1998-03-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) the authors have been working to apply methods of human error analysis to the design of complex systems. They have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. They are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. The primary vehicle the authors have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. They are currently adapting their methods and tools of human error analysis to the domain of air traffic management (ATM) systems. Under the NASA-sponsored Advanced Air Traffic Technologies (AATT) program they are working to address issues of human reliability in the design of ATM systems to support the development of a free flight environment for commercial air traffic in the US. They are also currently testing the application of their human error analysis approach for space flight operations. They have developed a simplified model of the critical habitability functions for the space station Mir, and have used this model to assess the effects of system failures and human errors that have occurred in the wake of the collision incident last year. They are developing an approach so that lessons learned from Mir operations can be systematically applied to design and operation of long-term space missions such as the International Space Station (ISS) and the manned Mars mission.

  10. Bootstrap Standard Error Estimates in Dynamic Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Browne, Michael W.

    2010-01-01

    Dynamic factor analysis summarizes changes in scores on a battery of manifest variables over repeated measurements in terms of a time series in a substantially smaller number of latent factors. Algebraic formulae for standard errors of parameter estimates are more difficult to obtain than in the usual intersubject factor analysis because of the…

  11. Understanding Teamwork in Trauma Resuscitation through Analysis of Team Errors

    Science.gov (United States)

    Sarcevic, Aleksandra

    2009-01-01

    An analysis of human errors in complex work settings can lead to important insights into the workspace design. This type of analysis is particularly relevant to safety-critical, socio-technical systems that are highly dynamic, stressful and time-constrained, and where failures can result in catastrophic societal, economic or environmental…

  12. QUALITATIVE DATA AND ERROR MEASUREMENT IN INPUT-OUTPUT-ANALYSIS

    NARCIS (Netherlands)

    NIJKAMP, P; OOSTERHAVEN, J; OUWERSLOOT, H; RIETVELD, P

    1992-01-01

    This paper is a contribution to the rapidly emerging field of qualitative data analysis in economics. Ordinal data techniques and error measurement in input-output analysis are here combined in order to test the reliability of a low level of measurement and precision of data by means of a stochastic

  13. Error analysis of mechanical system and wavelength calibration of monochromator

    Science.gov (United States)

    Zhang, Fudong; Chen, Chen; Liu, Jie; Wang, Zhihong

    2018-02-01

    This study focuses on improving the accuracy of a grating monochromator on the basis of the grating diffraction equation in combination with an analysis of the mechanical transmission relationship between the grating, the sine bar, and the screw of the scanning mechanism. First, the relationship between the mechanical error in the monochromator with the sine drive and the wavelength error is analyzed. Second, a mathematical model of the wavelength error and mechanical error is developed, and an accurate wavelength calibration method based on the sine bar's length adjustment and error compensation is proposed. Based on the mathematical model and calibration method, experiments using a standard light source with known spectral lines and a pre-adjusted sine bar length are conducted. The model parameter equations are solved, and subsequent parameter optimization simulations are performed to determine the optimal length ratio. Lastly, the length of the sine bar is adjusted. The experimental results indicate that the wavelength accuracy is ±0.3 nm, which is better than the original accuracy of ±2.6 nm. The results confirm the validity of the error analysis of the mechanical system of the monochromator as well as the validity of the calibration method.
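
    The mechanical-to-wavelength error relationship described above can be illustrated with a much simplified model. The sketch below assumes a Littrow-like first-order grating relation lambda = d*sin(theta) and a sine drive with sin(theta) = s/L; this is not the paper's full transmission model, and all numerical values are placeholders.

```python
# Simplified sketch of how a sine-bar length error maps to wavelength error.
# Assumptions (not taken from the paper): lambda = d*sin(theta), sin(theta) = s/L,
# where s is the screw displacement and L the sine-bar length.
import numpy as np

d = 1.0e6 / 1200.0      # grating period in nm for a 1200 lines/mm grating (assumed)
L = 200.0               # nominal sine-bar length in mm (illustrative value)
dL = 0.05               # assumed sine-bar length error in mm

wavelengths = np.array([300.0, 500.0, 800.0])      # nm
s = wavelengths / d * L                            # screw positions giving these wavelengths
lam_with_error = d * s / (L + dL)                  # same screw position, wrong bar length

for lam, lam_err in zip(wavelengths, lam_with_error):
    print(f"lambda = {lam:5.1f} nm  ->  error = {lam_err - lam:+.3f} nm")
# The error scales as -lambda*dL/L, so calibrating (adjusting) L compensates for it.
```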

  15. Geometric error analysis for shuttle imaging spectrometer experiment

    Science.gov (United States)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  16. The Influence of Observation Errors on Analysis Error and Forecast Skill Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, N. C.; Errico, R. M.; Tai, K.-S.

    2013-01-01

    The Global Modeling and Assimilation Office (GMAO) observing system simulation experiment (OSSE) framework is used to explore the response of analysis error and forecast skill to observation quality. In an OSSE, synthetic observations may be created that have much smaller error than real observations, and precisely quantified error may be applied to these synthetic observations. Three experiments are performed in which synthetic observations with magnitudes of applied observation error that vary from zero to twice the estimated realistic error are ingested into the Goddard Earth Observing System Model (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation for a one-month period representing July. The analysis increment and observation innovation are strongly impacted by observation error, with much larger variances for increased observation error. The analysis quality is degraded by increased observation error, but the change in root-mean-square error of the analysis state is small relative to the total analysis error. Surprisingly, in the 120 hour forecast increased observation error only yields a slight decline in forecast skill in the extratropics, and no discernable degradation of forecast skill in the tropics.

  17. Computed Tomography Analysis of NASA BSTRA Balls

    Energy Technology Data Exchange (ETDEWEB)

    Perry, R L; Schneberk, D J; Thompson, R R

    2004-10-12

    Fifteen 1.25 inch BSTRA balls were scanned with the high energy computed tomography system at LLNL. This system has a resolution limit of approximately 210 microns. A threshold of 238 microns (two voxels) was used, and no anomalies at or greater than this were observed.

  18. Extensions of the space trajectories error analysis programs

    Science.gov (United States)

    Adams, G. L.; Bradt, A. J.; Peterson, F. M.

    1971-01-01

    A generalized covariance analysis technique which permits the study of the sensitivity of linear estimation algorithms to errors in a priori statistics has been developed and programmed. Several sample cases are presented to illustrate the use of this technique. Modifications to the Simulated Trajectories Error Analysis Program (STEAP) to enable targeting a multiprobe mission of the Planetary Explorer type are discussed. The logic for the mini-probe targeting is presented. Finally, the initial phases of the conversion of the Viking mission Lander Trajectory Reconstruction (LTR) program for use on Venus missions is discussed. An integrator instability problem is discussed and a solution proposed.

  19. Error Grid Analysis for Arterial Pressure Method Comparison Studies.

    Science.gov (United States)

    Saugel, Bernd; Grothe, Oliver; Nicklas, Julia Y

    2018-04-01

    The measurement of arterial pressure (AP) is a key component of hemodynamic monitoring. A variety of different innovative AP monitoring technologies became recently available. The decision to use these technologies must be based on their measurement performance in validation studies. These studies are AP method comparison studies comparing a new method ("test method") with a reference method. In these studies, different comparative statistical tests are used including correlation analysis, Bland-Altman analysis, and trending analysis. These tests provide information about the statistical agreement without adequately providing information about the clinical relevance of differences between the measurement methods. To overcome this problem, we, in this study, propose an "error grid analysis" for AP method comparison studies that allows illustrating the clinical relevance of measurement differences. We constructed smoothed consensus error grids with calibrated risk zones derived from a survey among 25 specialists in anesthesiology and intensive care medicine. Differences between measurements of the test and the reference method are classified into 5 risk levels ranging from "no risk" to "dangerous risk"; the classification depends on both the differences between the measurements and on the measurements themselves. Based on worked examples and data from the Multiparameter Intelligent Monitoring in Intensive Care II database, we show that the proposed error grids give information about the clinical relevance of AP measurement differences that cannot be obtained from Bland-Altman analysis. Our approach also offers a framework on how to adapt the error grid analysis for different clinical settings and patient populations.
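
    As a rough illustration of the error-grid idea, the sketch below assigns a risk level to each (reference, test) pressure pair based on both the difference and the reference value; the zone boundaries are made-up placeholders, not the calibrated zones derived from the specialists' survey described above.

```python
# Illustrative sketch of an error-grid style classification for arterial
# pressure method comparison. The thresholds are placeholders, NOT the
# calibrated risk zones from the specialist survey.
def risk_level(reference_map, test_map):
    """Assign one of five risk levels to a (reference, test) MAP pair in mmHg.
    The classification depends on the difference and on the reference value,
    with tighter tolerances in hypotension (illustrative thresholds)."""
    diff = abs(test_map - reference_map)
    tol = 5.0 if reference_map < 65 else 10.0       # assumed tolerance bands
    levels = ["no risk", "low risk", "moderate risk", "significant risk", "dangerous risk"]
    index = min(int(diff // tol), len(levels) - 1)
    return levels[index]

pairs = [(60, 63), (60, 75), (90, 92), (90, 120), (110, 70)]
for ref, test in pairs:
    print(f"reference {ref:3d} mmHg, test {test:3d} mmHg -> {risk_level(ref, test)}")
```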

  20. A case of error disclosure: a communication privacy management analysis.

    Science.gov (United States)

    Petronio, Sandra; Helft, Paul R; Child, Jeffrey T

    2013-12-01

    To better understand the process of disclosing medical errors to patients, this research offers a case analysis using Petronio's theoretical frame of Communication Privacy Management (CPM). Given the resistance clinicians often feel about error disclosure, insight into the way clinicians make choices in telling patients about the mistake has the potential to address reasons for resistance. Applying the evidence-based CPM theory, developed over the last 35 years and dedicated to studying disclosure phenomena, to disclosing medical mistakes potentially has the ability to reshape thinking about the error disclosure process. Using a composite case representing a surgical mistake, an analysis based on CPM theory is offered to gain insights into conversational routines and disclosure management choices when revealing a medical error. The results of this analysis show that an underlying assumption of health information ownership by the patient and family can be at odds with the way the clinician tends to control disclosure about the error. In addition, the case analysis illustrates that there are embedded patterns of disclosure that emerge out of conversations the clinician has with the patient and the patient's family members. These patterns unfold privacy management decisions on the part of the clinician that impact how the patient is told about the error and the way that patients interpret the meaning of the disclosure. These findings suggest the need for a better understanding of how patients manage their private health information in relationship to their expectations for the way they see the clinician caring for or controlling their health information about errors. Significance for public health: Much of the mission central to public health sits squarely on the ability to communicate effectively. This case analysis offers an in-depth assessment of how error disclosure is complicated by misunderstandings, assuming ownership and control over information

  1. Applications of human error analysis to aviation and space operations

    International Nuclear Information System (INIS)

    Nelson, W.R.

    1998-01-01

    For the past several years at the Idaho National Engineering and Environmental Laboratory (INEEL) we have been working to apply methods of human error analysis to the design of complex systems. We have focused on adapting human reliability analysis (HRA) methods that were developed for Probabilistic Safety Assessment (PSA) for application to system design. We are developing methods so that human errors can be systematically identified during system design, the potential consequences of each error can be assessed, and potential corrective actions (e.g. changes to system design or procedures) can be identified. These applications lead to different requirements when compared with HRAs performed as part of a PSA. For example, because the analysis will begin early during the design stage, the methods must be usable when only partial design information is available. In addition, the ability to perform numerous "what if" analyses to identify and compare multiple design alternatives is essential. Finally, since the goals of such human error analyses focus on proactive design changes rather than the estimate of failure probabilities for PRA, there is more emphasis on qualitative evaluations of error relationships and causal factors than on quantitative estimates of error frequency. The primary vehicle we have used to develop and apply these methods has been a series of projects sponsored by the National Aeronautics and Space Administration (NASA) to apply human error analysis to aviation operations. The first NASA-sponsored project had the goal to evaluate human errors caused by advanced cockpit automation. Our next aviation project focused on the development of methods and tools to apply human error analysis to the design of commercial aircraft. This project was performed by a consortium comprised of INEEL, NASA, and Boeing Commercial Airplane Group. The focus of the project was aircraft design and procedures that could lead to human errors during airplane maintenance

  2. Unbiased bootstrap error estimation for linear discriminant analysis.

    Science.gov (United States)

    Vu, Thang; Sima, Chao; Braga-Neto, Ulisses M; Dougherty, Edward R

    2014-12-01

    Convex bootstrap error estimation is a popular tool for classifier error estimation in gene expression studies. A basic question is how to determine the weight for the convex combination between the basic bootstrap estimator and the resubstitution estimator such that the resulting estimator is unbiased at finite sample sizes. The well-known 0.632 bootstrap error estimator uses asymptotic arguments to propose a fixed 0.632 weight, whereas the more recent 0.632+ bootstrap error estimator attempts to set the weight adaptively. In this paper, we study the finite sample problem in the case of linear discriminant analysis under Gaussian populations. We derive exact expressions for the weight that guarantee unbiasedness of the convex bootstrap error estimator in the univariate and multivariate cases, without making asymptotic simplifications. Using exact computation in the univariate case and an accurate approximation in the multivariate case, we obtain the required weight and show that it can deviate significantly from the constant 0.632 weight, depending on the sample size and Bayes error for the problem. The methodology is illustrated by application on data from a well-known cancer classification study.
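
    A sketch of the convex bootstrap combination itself is shown below, using the classical fixed 0.632 weight rather than the exact finite-sample weights derived in the paper; the scikit-learn LDA classifier and the synthetic Gaussian data are assumptions for illustration.

```python
# Sketch of a convex bootstrap error estimator for LDA with the classical
# fixed 0.632 weight (the paper derives exact finite-sample weights instead).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def convex_bootstrap_error(X, y, n_boot=100, weight=0.632, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)

    # Resubstitution error: train and test on the full sample.
    resub = 1.0 - LinearDiscriminantAnalysis().fit(X, y).score(X, y)

    # Zero bootstrap error: average error on points left out of each bootstrap sample.
    boot_errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)
        out = np.setdiff1d(np.arange(n), idx)
        if out.size == 0 or len(np.unique(y[idx])) < 2:
            continue
        clf = LinearDiscriminantAnalysis().fit(X[idx], y[idx])
        boot_errors.append(1.0 - clf.score(X[out], y[out]))
    boot = float(np.mean(boot_errors))

    return weight * boot + (1.0 - weight) * resub

# Two Gaussian classes with a moderate Bayes error (synthetic data)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(25, 5)), rng.normal(0.8, 1.0, size=(25, 5))])
y = np.repeat([0, 1], 25)
print("0.632 bootstrap error estimate:", round(convex_bootstrap_error(X, y), 3))
```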

  3. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed: respectively, diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated) intensity measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with
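
    The covariance propagation referred to above can be illustrated generically: if a retrieval step maps an input vector x to an output y = f(x), the output covariance is approximately J Sigma J^T with J the Jacobian of f. The sketch below shows this mechanics on a toy intensity-to-optical-depth step and is not the SMAS retrieval chain itself.

```python
# Generic linear error-propagation sketch: Sigma_y ~ J @ Sigma_x @ J.T, with J
# built by finite differences. The toy step and noise level are assumptions.
import numpy as np

def propagate_covariance(f, x0, sigma_x, eps=1e-6):
    """Numerically build the Jacobian of f at x0 and propagate the covariance."""
    y0 = f(x0)
    J = np.empty((y0.size, x0.size))
    for j in range(x0.size):
        dx = np.zeros_like(x0)
        dx[j] = eps
        J[:, j] = (f(x0 + dx) - y0) / eps
    return J @ sigma_x @ J.T

# Toy step: convert relative transmitted intensities to optical depths via a log
x0 = np.array([0.9, 0.7, 0.5])                  # transmitted / unattenuated intensity
sigma_x = np.diag((0.001 * x0) ** 2)            # 0.1% intensity noise (assumed)
to_optical_depth = lambda I: -np.log(I)
sigma_tau = propagate_covariance(to_optical_depth, x0, sigma_x)
print("optical-depth standard deviations:", np.sqrt(np.diag(sigma_tau)))
```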

  4. Analysis of human errors in operating heavy water production facilities

    International Nuclear Information System (INIS)

    Preda, Irina; Lazar, Roxana; Croitoru, Cornelia

    1997-01-01

    The heavy water plants are complex chemical installations in which high quantities of H2S, a corrosive, inflammable, explosive and highly toxic gas, are circulated. In addition, in the process it is maintained at high temperatures and pressures. According to the statistics, about 20-30% of the damage arising in such installations is due directly or indirectly to human errors. These are due mainly to incorrect actions, maintenance errors, incorrect recording of instrumental readings, etc. This study of human performance by probabilistic safety analysis gives the possibility of evaluating the human error contribution to the occurrence of event/accident sequences. This work presents the results obtained from the analysis of human errors at stage 1 of the heavy water production pilot, at INC-DTCI ICIS Rm.Valcea, using the dual temperature process in the H2O-H2S isotopic exchange. The case of loss of steam was considered. The results are interpreted with a view to making decisions for improving the activity, as well as the level of safety/reliability, in order to reduce the risk for the population/environment. For such an initiating event, the event tree has been developed based on fault trees. The human error probabilities were assessed as a function of action complexity, psychological stress level, the existence of written procedures and of a secondary control (the decision tree method). For a critical accident sequence, weight evaluations (RAW, RRW, F and V) were suggested to make evident the contribution of human errors to the risk level, together with methods to reduce these errors.

  5. Compensation for geometric modeling errors by positioning of electrodes in electrical impedance tomography

    International Nuclear Information System (INIS)

    Hyvönen, N; Majander, H; Staboulis, S

    2017-01-01

    Electrical impedance tomography aims at reconstructing the conductivity inside a physical body from boundary measurements of current and voltage at a finite number of contact electrodes. In many practical applications, the shape of the imaged object is subject to considerable uncertainties that render reconstructing the internal conductivity impossible if they are not taken into account. This work numerically demonstrates that one can compensate for inaccurate modeling of the object boundary in two spatial dimensions by finding compatible locations and sizes for the electrodes as a part of a reconstruction algorithm. The numerical studies, which are based on both simulated and experimental data, are complemented by proving that the employed complete electrode model is approximately conformally invariant, which suggests that the obtained reconstructions in mismodeled domains reflect conformal images of the true targets. The numerical experiments also confirm that a similar approach does not, in general, lead to a functional algorithm in three dimensions. (paper)

  6. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy

    International Nuclear Information System (INIS)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-01-01

    Purpose: The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Methods: Balanced data according to the one-factor random effect model were assumed. Results: Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Conclusions: Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.

  7. Technical Note: Introduction of variance component analysis to setup error analysis in radiotherapy.

    Science.gov (United States)

    Matsuo, Yukinori; Nakamura, Mitsuhiro; Mizowaki, Takashi; Hiraoka, Masahiro

    2016-09-01

    The purpose of this technical note is to introduce variance component analysis to the estimation of systematic and random components in setup error of radiotherapy. Balanced data according to the one-factor random effect model were assumed. Analysis-of-variance (ANOVA)-based computation was applied to estimate the values and their confidence intervals (CIs) for systematic and random errors and the population mean of setup errors. The conventional method overestimates systematic error, especially in hypofractionated settings. The CI for systematic error becomes much wider than that for random error. The ANOVA-based estimation can be extended to a multifactor model considering multiple causes of setup errors (e.g., interpatient, interfraction, and intrafraction). Variance component analysis may lead to novel applications to setup error analysis in radiotherapy.
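
    A minimal sketch of the balanced one-factor random-effects estimation is given below: patients are the random factor, fractions are replicates, and the systematic and random standard deviations follow from the between- and within-patient mean squares. The simulated setup data and numerical values are illustrative.

```python
# Minimal sketch of ANOVA-based variance component estimation for setup
# errors under a balanced one-factor random-effects model (illustrative data).
import numpy as np

def variance_components(setup):
    """setup: array of shape (patients, fractions) of setup errors in mm.
    Returns (systematic SD, random SD, population mean)."""
    p, f = setup.shape
    patient_means = setup.mean(axis=1)
    grand_mean = setup.mean()
    ms_between = f * np.sum((patient_means - grand_mean) ** 2) / (p - 1)
    ms_within = np.sum((setup - patient_means[:, None]) ** 2) / (p * (f - 1))
    sigma_random_sq = ms_within
    sigma_systematic_sq = max((ms_between - ms_within) / f, 0.0)  # truncate at zero
    return np.sqrt(sigma_systematic_sq), np.sqrt(sigma_random_sq), grand_mean

rng = np.random.default_rng(2)
true_systematic, true_random = 1.5, 2.0            # mm, assumed for the simulation
patients, fractions = 30, 5
offsets = rng.normal(0.0, true_systematic, size=(patients, 1))
data = offsets + rng.normal(0.0, true_random, size=(patients, fractions))

sys_sd, rand_sd, mean = variance_components(data)
print(f"systematic SD ~ {sys_sd:.2f} mm, random SD ~ {rand_sd:.2f} mm, mean ~ {mean:.2f} mm")
```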

  8. Analysis of the computed tomography in the acute abdomen

    International Nuclear Information System (INIS)

    Hochhegger, Bruno; Moraes, Everton; Haygert, Carlos Jesus Pereira; Antunes, Paulo Sergio Pase; Gazzoni, Fernando; Lopes, Luis Felipe Dias

    2007-01-01

    Introduction: This study aims to test the capacity of computed tomography to assist in the diagnosis and management of the acute abdomen. Material and method: This is a longitudinal and prospective study in which patients with a diagnosis of acute abdomen were analyzed. A total of 105 cases of acute abdomen were obtained, and after application of the exclusion criteria, 28 patients were included in the study. Results: Computed tomography changed the diagnostic hypothesis of the physicians in 50% of the cases (p 0.05); 78.57% of the patients had a surgical indication before computed tomography and 67.86% after computed tomography (p = 0.0546). An accurate diagnosis by computed tomography, when compared to the anatomopathologic examination and the final diagnosis, was observed in 82.14% of the cases (p = 0.013). When the analysis was done dividing the patients into surgical and nonsurgical groups, an accuracy of 89.28% was obtained (p 0.0001). A difference of 7.2 days of hospitalization (p = 0.003) was obtained compared with the mean for acute abdomen managed without computed tomography. Conclusion: Computed tomography correlates with the anatomopathology and has great accuracy for surgical indication; it increases the physicians' confidence, reduces hospitalization time, reduces the number of surgeries and is cost-effective. (author)

  9. Tomography

    International Nuclear Information System (INIS)

    1985-01-01

    Already widely accepted in medicine, tomography can also be useful in industry. The theory behind tomography and a demonstration of the technique to inspect a motorcycle carburetor is presented. To demonstrate the potential of computer assisted tomography (CAT) to accurately locate defects in three dimensions, a sectioned 5 cm gate valve with a shrink cavity made visible by the sectioning was tomographically imaged using a Co-60 source. The tomographic images revealed a larger cavity below the sectioned surface. The position of this cavity was located with an in-plane and axial precision of approximately ±1 mm. The volume of the cavity was estimated to be approximately 40 mm³

  10. Error Analysis Of Clock Time (T), Declination (*) And Latitude ...

    African Journals Online (AJOL)

    ), latitude (Φ), longitude (λ) and azimuth (A); which are aimed at establishing fixed positions and orientations of survey points and lines on the earth surface. The paper attempts the analysis of the individual and combined effects of error in time ...

  11. Measurement Error, Education Production and Data Envelopment Analysis

    Science.gov (United States)

    Ruggiero, John

    2006-01-01

    Data Envelopment Analysis has become a popular tool for evaluating the efficiency of decision making units. The nonparametric approach has been widely applied to educational production. The approach is, however, deterministic and leads to biased estimates of performance in the presence of measurement error. Numerous simulation studies confirm the…

  12. Error analysis and bounds in time delay estimation

    Czech Academy of Sciences Publication Activity Database

    Pánek, Petr

    2007-01-01

    Roč. 55, 7 Part I (2007), s. 3547-3549 ISSN 1053-587X Institutional research plan: CEZ:AV0Z20670512 Keywords : time measurement * delay estimation * error analysis Subject RIV: JA - Electronics ; Optoelectronics, Electrical Engineering Impact factor: 1.640, year: 2007

  13. Analysis of possible systematic errors in the Oslo method

    International Nuclear Information System (INIS)

    Larsen, A. C.; Guttormsen, M.; Buerger, A.; Goergen, A.; Nyhus, H. T.; Rekstad, J.; Siem, S.; Toft, H. K.; Tveten, G. M.; Wikan, K.; Krticka, M.; Betak, E.; Schiller, A.; Voinov, A. V.

    2011-01-01

    In this work, we have reviewed the Oslo method, which enables the simultaneous extraction of the level density and γ-ray transmission coefficient from a set of particle-γ coincidence data. Possible errors and uncertainties have been investigated. Typical data sets from various mass regions as well as simulated data have been tested against the assumptions behind the data analysis.

  14. Elemental analysis of hair using PIXE-tomography and INAA

    International Nuclear Information System (INIS)

    Beasley, D.; Gomez-Morilla, I.; Spyrou, N.

    2008-01-01

    3D quantitative elemental maps of a section of a strand of hair were produced using a combination of PIXE-Tomography and simultaneous On/Off Axis STIM-Tomography at the University of Surrey Ion Beam Centre. The distributions of S, K, Cl, Ca, Fe and Zn were determined using the PIXE-T reconstruction package DISRA. The results were compared with conventional bulk PIXE analysis of tomographic data as determined using Dan32. The overall concentrations determined by PIXE were compared with elemental concentrations held in the University of Surrey Hair Database. All the entries currently in the database were produced using INAA. The merits and possible contributions of tomographic PIXE analysis to analysis of hair are discussed. The conclusions drawn from the PIXE-Tomography analysis can be used to argue for more stringent procedures for hair analysis at the University of Surrey. (author)

  15. L'analyse des erreurs. Problemes et perspectives (Error Analysis. Problems and Perspectives)

    Science.gov (United States)

    Porquier, Remy

    1977-01-01

    Summarizes the usefulness and the disadvantage of error analysis, and discusses a reorientation of error analysis, specifically regarding grammar instruction and the significance of errors. (Text is in French.) (AM)

  16. Quantification of evaporation induced error in atom probe tomography using molecular dynamics simulation.

    Science.gov (United States)

    Chen, Shu Jian; Yao, Xupei; Zheng, Changxi; Duan, Wen Hui

    2017-11-01

    Non-equilibrium molecular dynamics was used to simulate the dynamics of atoms at the atom probe surface and five objective functions were used to quantify errors. The results suggested that before ionization, thermal vibration and collision caused the atoms to displace up to 1 Å and 25 Å respectively. The average atom displacements were found to vary between 0.2 and 0.5 Å. About 9 to 17% of the atoms were affected by collision. Due to the effects of collision and ion-ion repulsion, the back-calculated positions were on average 0.3-0.5 Å different from the pre-ionized positions of the atoms when the number of ions generated per pulse was minimal. This difference could increase up to 8-10 Å when 1.5 ions/nm² were evaporated per pulse. On the basis of the results, surface ion density was considered an important factor that needed to be controlled to minimize error in the evaporation process. Copyright © 2017. Published by Elsevier B.V.

  17. Students’ Written Production Error Analysis in the EFL Classroom Teaching: A Study of Adult English Learners Errors

    Directory of Open Access Journals (Sweden)

    Ranauli Sihombing

    2016-12-01

    Error analysis has become one of the most interesting issues in the study of Second Language Acquisition. It cannot be denied that some teachers do not know a lot about error analysis and the related theories of how an L1, L2 or foreign language is acquired. In addition, students often feel upset since they find a gap between themselves and the teachers regarding the errors the students make and the teachers' understanding of error correction. The present research aims to investigate what errors adult English learners make in the written production of English. The significance of the study is to identify the errors students make in writing so that teachers can find solutions to them, for better English language teaching and learning, especially in teaching English to adults. The study employed a qualitative method. The research was undertaken at an airline education center in Bandung. The result showed that syntax errors are more frequently found than morphology errors, especially in terms of verb phrase errors. It is recommended that teachers know the theory of second language acquisition in order to understand how students learn and produce their language. In addition, it will be advantageous for teachers to know what errors students frequently make in their learning, so that they can give solutions to the students for better English language learning achievement.   DOI: https://doi.org/10.24071/llt.2015.180205

  18. Predicting positional error of MLC using volumetric analysis

    International Nuclear Information System (INIS)

    Hareram, E.S.

    2008-01-01

    IMRT normally uses multiple beamlets (small beam widths) for a particular field, so it is imperative to maintain the positional accuracy of the MLC in order to deliver the integrated computed dose accurately. Different manufacturers have reported high precision for MLC devices, with leaf positional accuracy nearing 0.1 mm, but measuring and rectifying errors at this level of accuracy is very difficult. Various methods are used to check MLC position, and among these, volumetric analysis is one technique. A volumetric approach was adopted in our method using a Primus machine and a 0.6 cc chamber at 5 cm depth in Perspex. An MLC error of 1 mm introduces an error of 20%, making this approach more sensitive than other methods.

  19. An Error Analysis of Structured Light Scanning of Biological Tissue

    DEFF Research Database (Denmark)

    Jensen, Sebastian Hoppe Nesgaard; Wilm, Jakob; Aanæs, Henrik

    2017-01-01

    This paper presents an error analysis and correction model for four structured light methods applied to three common types of biological tissue: skin, fat and muscle. Despite its many advantages, structured light is based on the assumption of direct reflection at the object surface only. This assumption is violated by most biological material, e.g. human skin, which exhibits subsurface scattering. In this study, we find that in general, structured light scans of biological tissue deviate significantly from the ground truth. We show that a large portion of this error can be predicted with a simple, statistical linear model based on the scan geometry. As such, scans can be corrected without introducing any specially designed pattern strategy or hardware. We can effectively reduce the error in a structured light scanner applied to biological tissue by as much as a factor of two or three.
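
    The kind of simple statistical linear model mentioned above might look like the following sketch, which fits an ordinary least-squares model of depth bias against assumed scan-geometry features (angle of incidence and distance); the features and synthetic data are illustrative, not the paper's fitted model.

```python
# Sketch of a linear error-correction model: predict the depth bias of a
# structured light scan from simple scan-geometry features (assumed features).
import numpy as np

rng = np.random.default_rng(3)
n = 500
incidence_angle = rng.uniform(0, 60, n)      # degrees between surface normal and camera ray
distance = rng.uniform(300, 600, n)          # mm from scanner to surface

# Synthetic "measured" bias: grows with oblique viewing and distance, plus noise
bias = 0.02 * incidence_angle + 0.001 * (distance - 300) + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), incidence_angle, distance])
coef, *_ = np.linalg.lstsq(X, bias, rcond=None)
predicted = X @ coef

residual = bias - predicted
print("raw bias RMS  :", round(np.sqrt(np.mean(bias ** 2)), 3), "mm")
print("corrected RMS :", round(np.sqrt(np.mean(residual ** 2)), 3), "mm")
```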

  20. Potential Measurement Errors Due to Image Enlargement in Optical Coherence Tomography Imaging

    Science.gov (United States)

    Uji, Akihito; Murakami, Tomoaki; Muraoka, Yuki; Hosoda, Yoshikatsu; Yoshitake, Shin; Dodo, Yoko; Arichika, Shigeta; Yoshimura, Nagahisa

    2015-01-01

    The effect of interpolation and super-resolution (SR) algorithms on quantitative and qualitative assessments of enlarged optical coherence tomography (OCT) images was investigated in this report. Spectral-domain OCT images from 30 eyes in 30 consecutive patients with diabetic macular edema (DME) and 20 healthy eyes in 20 consecutive volunteers were analyzed. Original image (OR) resolution was reduced by a factor of four. Images were then magnified by a factor of four with and without application of one of the following algorithms: bilinear (BL), bicubic (BC), Lanczos3 (LA), and SR. Differences in peak signal-to-noise ratio (PSNR), retinal nerve fiber layer (RNFL) thickness, photoreceptor layer status, and parallelism (reflects the complexity of photoreceptor layer alterations) were analyzed in each image type. The order of PSNRs from highest to lowest was SR > LA > BC > BL > non-processed enlarged images (NONE). The PSNR was statistically different in all groups. The NONE, BC, and LA images resulted in significantly thicker RNFL measurements than the OR image. In eyes with DME, the photoreceptor layer, which was hardly identifiable in NONE images, became detectable with algorithm application. However, OCT photoreceptor parameters were still assessed as more undetectable than in OR images. Parallelism was not statistically different in OR and NONE images, but other image groups had significantly higher parallelism than OR images. Our results indicated that interpolation and SR algorithms increased OCT image resolution. However, qualitative and quantitative assessments were influenced by algorithm use. Additionally, each algorithm affected the assessments differently. PMID:26024236
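
    The PSNR comparison can be reproduced in outline as below: shrink an image by a factor of four, enlarge it back with bilinear (order 1) and bicubic (order 3) interpolation, and compare against the original. The synthetic test image and processing chain are assumptions, not the authors' OCT pipeline.

```python
# Sketch of the PSNR comparison: downsample by 4, enlarge back with bilinear
# and bicubic interpolation, and compute PSNR against the original image.
import numpy as np
from scipy import ndimage

def psnr(original, processed, peak=255.0):
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
# Synthetic stand-in for an OCT B-scan: smooth layered structure plus speckle
rows = np.linspace(0, 4 * np.pi, 256)
image = 128 + 80 * np.sin(rows)[:, None] * np.ones((256, 256))
image = np.clip(image + rng.normal(0, 10, image.shape), 0, 255)

small = ndimage.zoom(image, 0.25, order=1)          # resolution reduced by a factor of four
for order, name in [(1, "bilinear"), (3, "bicubic")]:
    enlarged = ndimage.zoom(small, 4.0, order=order)
    enlarged = enlarged[:image.shape[0], :image.shape[1]]   # guard against rounding
    print(f"{name:8s} PSNR: {psnr(image, enlarged):.2f} dB")
```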

  1. Atom probe tomography analysis of WC powder.

    Science.gov (United States)

    Weidow, Jonathan

    2013-09-01

    A tantalum doped tungsten carbide powder, (W,Ta)C, was prepared with the purpose to maximise the amount of Ta in the hexagonal mixed crystal carbide. Atom probe tomography (APT) was considered to be the best technique to quantitatively measure the amount of Ta within this carbide. As the carbide powder consisted of very small particles (<1 μm), a method to produce APT specimens of such a powder was developed: the powder was embedded in copper, a FIB-SEM workstation was used to make an in-situ lift-out from a selected powder particle, and the particle was deposited on a post made from a WC-Co based cemented carbide specimen. With the use of a laser assisted atom probe, it was shown that the method is working and the Ta content of the (W,Ta)C could be measured quantitatively. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. Quantification of abdominal aortic calcification: Inherent measurement errors in current computed tomography imaging.

    Science.gov (United States)

    Buijs, Ruben V C; Leemans, Eva L; Greuter, Marcel; Tielliu, Ignace F J; Zeebregts, Clark J; Willems, Tineke P

    2018-01-01

    Quantification software for coronary calcification is often used to measure abdominal aortic calcification on computed tomography (CT) images. However, there is no evidence substantiating the reliability and accuracy of these tools in this setting. Differences in coronary and abdominal CT acquisition and the presence of intravascular contrast may affect the results of these tools. Therefore, this study investigates the effects of CT acquisition parameters and iodine contrast on automated quantification of aortic calcium on CT. Calcium scores, provided as volume and mass, were assessed by automated calcium quantification software on CT scans. First, differences in calcium scores between the abdominal and coronary CT scanning protocols were assessed by imaging a thorax phantom containing calcifications of 9 metrical variations. Second, aortic calcification was quantified in 50 unenhanced and contrast-enhanced clinical abdominal CT scans at a calcification threshold of 299 Hounsfield Units (HU). Also, the lowest possible HU threshold for calcifications was calculated per individual patient and compared to a 130 HU threshold in contrast-enhanced and unenhanced CT images, respectively. No significant differences in volume and mass scores between the abdominal and the coronary CT protocol were found. However, volume and mass of all calcifications were overestimated compared to the physical volume and mass (volume range: 0-649%; mass range: 0-2619%). Comparing unenhanced versus contrast-enhanced CT images showed significant volume differences for both thresholds, as well as mass differences for the 130 HU vs patient-specific threshold (230 ± 22.6 HU). Calcification scoring on CT angiography tends to grossly overestimate volume and mass, suggesting low accuracy and reliability. These are reduced further by interference of intravascular contrast. Future studies applying calcium quantification tools on CT angiography imaging should acknowledge these issues and apply
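
    Threshold-based volume and mass scoring of the kind evaluated here can be sketched as below; the synthetic CT patch, voxel size and mass calibration factor are illustrative assumptions, whereas clinical tools use scanner-specific calibrations.

```python
# Sketch of threshold-based calcium volume and mass scoring on a CT volume.
# The calibration factor and the synthetic "calcification" are illustrative.
import numpy as np

def calcium_scores(volume_hu, voxel_volume_mm3, threshold_hu=130, calibration=0.0008):
    """Return (volume score in mm^3, mass score in mg) above threshold_hu."""
    mask = volume_hu >= threshold_hu
    volume_score = mask.sum() * voxel_volume_mm3
    # Mass score: summed HU of included voxels times voxel volume times a calibration factor
    mass_score = calibration * volume_hu[mask].sum() * voxel_volume_mm3
    return volume_score, mass_score

# Synthetic 3D patch: soft-tissue background (~40 HU) with a small dense focus
volume = np.full((40, 40, 40), 40.0)
volume[18:22, 18:22, 18:22] = 600.0                  # 4x4x4 voxel "calcification"
voxel_volume = 0.6 * 0.6 * 0.6                       # mm^3, an assumed voxel size

for thr in (130, 299):
    v, m = calcium_scores(volume, voxel_volume, threshold_hu=thr)
    print(f"threshold {thr:3d} HU -> volume {v:.1f} mm^3, mass {m:.2f} mg")
```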

  3. Atom probe tomography analysis of WC powder

    Energy Technology Data Exchange (ETDEWEB)

    Weidow, Jonathan, E-mail: jonathan.weidow@chalmers.se [Department of Applied Physics, Chalmers University of Technology, SE-412 96 Göteborg (Sweden); Institute of Chemical Technologies and Analytics, Vienna University of Technology, Getreidemarkt 9/164, A-1060 Wien (Austria)

    2013-09-15

    A tantalum doped tungsten carbide powder, (W,Ta)C, was prepared with the purpose to maximise the amount of Ta in the hexagonal mixed crystal carbide. Atom probe tomography (APT) was considered to be the best technique to quantitatively measure the amount of Ta within this carbide. As the carbide powder consisted in the form of very small particles (<1 μm), a method to produce APT specimens of such a powder was developed. The powder was at first embedded in copper and a FIB-SEM workstation was used to make an in-situ lift-out from a selected powder particle. The powder particle was then deposited on a post made from a WC-Co based cemented carbide specimen. With the use of a laser assisted atom probe, it was shown that the method is working and the Ta content of the (W,Ta)C could be measured quantitatively. - Highlights: ► Method for producing atom probe tomography specimens of powders was developed. ► Method was successfully implemented on (W,Ta)C powder. ► Method can possibly be implemented on completely other powders.

  4. Dispersion analysis and linear error analysis capabilities of the space vehicle dynamics simulation program

    Science.gov (United States)

    Snow, L. S.; Kuhn, A. E.

    1975-01-01

    Previous error analyses conducted by the Guidance and Dynamics Branch of NASA have used the Guidance Analysis Program (GAP) as the trajectory simulation tool. Plans are made to conduct all future error analyses using the Space Vehicle Dynamics Simulation (SVDS) program. A study was conducted to compare the inertial measurement unit (IMU) error simulations of the two programs. Results of the GAP/SVDS comparison are presented and problem areas encountered while attempting to simulate IMU errors, vehicle performance uncertainties and environmental uncertainties using SVDS are defined. An evaluation of the SVDS linear error analysis capability is also included.

  5. Acquisition of case in Lithuanian as L2: Error analysis

    Directory of Open Access Journals (Sweden)

    Laura Cubajevaite

    2009-05-01

    Although teaching Lithuanian as a foreign language is not a new subject, there has not been much research in this field. The paper presents a study based on an analysis of grammatical errors which was carried out at Vytautas Magnus University. The data was selected randomly by analysing written assignments of beginner to advanced level students. DOI: http://dx.doi.org/10.5128/ERYa5.04

  6. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said … to results for chlorine in freshwater from BCR certification analyses by highly competent analytical laboratories in the EC. Titration showed systematic errors of several percent, while radiochemical neutron activation analysis produced results without detectable bias.

  7. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    of the factors (variables), the other variables were fixed at their baseline levels. The red dots with the standard deviation error bars represent … conducted an analysis to determine if the means and variances of MOEs of interest were statistically different by experimental design (Pav, 2015). To do … summarized data. In the summarized data set, we summarize each Design Point (DP) by its mean and standard deviation, over the stochastic replications.

  8. Radiological error: analysis, standard setting, targeted instruction and teamworking

    International Nuclear Information System (INIS)

    FitzGerald, Richard

    2005-01-01

    Diagnostic radiology does not have objective benchmarks for acceptable levels of missed diagnoses [1]. Until now, data collection of radiological discrepancies has been very time consuming. The culture within the specialty did not encourage it. However, public concern about patient safety is increasing. There have been recent innovations in compiling radiological interpretive discrepancy rates which may facilitate radiological standard setting. However standard setting alone will not optimise radiologists' performance or patient safety. We must use these new techniques in radiological discrepancy detection to stimulate greater knowledge sharing, targeted instruction and teamworking among radiologists. Not all radiological discrepancies are errors. Radiological discrepancy programmes must not be abused as an instrument for discrediting individual radiologists. Discrepancy rates must not be distorted as a weapon in turf battles. Radiological errors may be due to many causes and are often multifactorial. A systems approach to radiological error is required. Meaningful analysis of radiological discrepancies and errors is challenging. Valid standard setting will take time. Meanwhile, we need to develop top-up training, mentoring and rehabilitation programmes. (orig.)

  9. Magnetospheric Multiscale (MMS) Mission Commissioning Phase Orbit Determination Error Analysis

    Science.gov (United States)

    Chung, Lauren R.; Novak, Stefan; Long, Anne; Gramling, Cheryl

    2009-01-01

    The Magnetospheric MultiScale (MMS) mission commissioning phase starts in a 185 km altitude x 12 Earth radii (RE) injection orbit and lasts until the Phase 1 mission orbits and orientation to the Earth-Sun line are achieved. During a limited time period in the early part of commissioning, five maneuvers are performed to raise the perigee radius to 1.2 RE, with a maneuver every other apogee. The current baseline is for the Goddard Space Flight Center Flight Dynamics Facility to provide MMS orbit determination support during the early commissioning phase using all available two-way range and Doppler tracking from both the Deep Space Network and Space Network. This paper summarizes the results from a linear covariance analysis to determine the type and amount of tracking data required to accurately estimate the spacecraft state, plan each perigee raising maneuver, and support thruster calibration during this phase. The primary focus of this study is the navigation accuracy required to plan the first and the final perigee raising maneuvers. Absolute and relative position and velocity error histories are generated for all cases and summarized in terms of the maximum root-sum-square consider and measurement noise error contributions over the definitive and predictive arcs and at discrete times including the maneuver planning and execution times. Details of the methodology, orbital characteristics, maneuver timeline, error models, and error sensitivities are provided.

  10. Error analysis of compensation cutting technique for wavefront error of KH2PO4 crystal.

    Science.gov (United States)

    Tie, Guipeng; Dai, Yifan; Guan, Chaoliang; Zhu, Dengchao; Song, Bing

    2013-09-20

    Considering that the wavefront error of a KH(2)PO(4) (KDP) crystal is difficult to control through the face fly cutting process because of surface shape deformation during vacuum suction, an error compensation technique based on a spiral turning method is put forward. An in situ measurement device is applied to measure the deformed surface shape after vacuum suction, and the initial surface figure error, which is obtained off-line, is added to the in situ surface shape to obtain the final surface figure to be compensated. Then a three-axis servo technique is utilized to cut the final surface shape. In traditional cutting processes, in addition to common error sources such as the error in the straightness of guide ways, spindle rotation error, and error caused by ambient environment variance, three other errors, the in situ measurement error, position deviation error, and servo-following error, are the main sources affecting compensation accuracy. This paper discusses the effect of these three errors on compensation accuracy and provides strategies to improve the final surface quality. Experimental verification was carried out on one piece of KDP crystal with the size of Φ270 mm×11 mm. After one compensation process, the peak-to-valley value of the transmitted wavefront error dropped from 1.9λ (λ=632.8 nm) to approximately 1/3λ, and the mid-spatial-frequency error does not become worse when the frequency of the cutting tool trajectory is controlled by use of a low-pass filter.

  11. Landmarking the brain for geometric morphometric analysis: an error study.

    Directory of Open Access Journals (Sweden)

    Madeleine B Chollet

    Neuroanatomic phenotypes are often assessed using volumetric analysis. Although powerful and versatile, this approach is limited in that it is unable to quantify changes in shape, to describe how regions are interrelated, or to determine whether changes in size are global or local. Statistical shape analysis using coordinate data from biologically relevant landmarks is the preferred method for testing these aspects of phenotype. To date, approximately fifty landmarks have been used to study brain shape. Of the studies that have used landmark-based statistical shape analysis of the brain, most have not published protocols for landmark identification or the results of reliability studies on these landmarks. The primary aims of this study were two-fold: (1) to collaboratively develop detailed data collection protocols for a set of brain landmarks, and (2) to complete an intra- and inter-observer validation study of the set of landmarks. Detailed protocols were developed for 29 cortical and subcortical landmarks using a sample of 10 boys aged 12 years old. Average intra-observer error for the final set of landmarks was 1.9 mm with a range of 0.72 mm-5.6 mm. Average inter-observer error was 1.1 mm with a range of 0.40 mm-3.4 mm. This study successfully establishes landmark protocols with a minimal level of error that can be used by other researchers in the assessment of neuroanatomic phenotypes.

  12. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Virotta, Francesco

    2012-02-21

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation where we find that our estimate of the exponential auto-correlation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a) improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles both in pure gauge and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation value of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is supposed to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit where the computational costs for producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the Kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
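
    In the spirit of the error analysis discussed above, the central quantity is the integrated autocorrelation time of an observable's Monte Carlo history. The following is a minimal, self-contained estimator with a simple self-consistent truncation window, checked on an AR(1) chain whose exact value is known; it is not the publicly available analysis code the thesis refers to, and the window constant is an assumption.

        import numpy as np

        def tau_int(x, c=6.0):
            """Integrated autocorrelation time with a self-consistent window:
            the sum is truncated once the lag exceeds c * tau."""
            x = np.asarray(x, dtype=float) - np.mean(x)
            n = len(x)
            var = np.dot(x, x) / n
            tau, w = 0.5, 1
            while w < n // 2:
                rho = np.dot(x[:-w], x[w:]) / ((n - w) * var)  # autocorrelation at lag w
                tau += rho
                if w >= c * tau:
                    break
                w += 1
            return tau, w

        # Toy check: AR(1) chain with exact tau_int = (1 + a) / (2 * (1 - a)).
        rng = np.random.default_rng(2)
        a, n = 0.95, 100_000
        x = np.empty(n)
        x[0] = 0.0
        for i in range(1, n):
            x[i] = a * x[i - 1] + rng.standard_normal()
        print("estimated:", round(tau_int(x)[0], 2), " exact:", round((1 + a) / (2 * (1 - a)), 2))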

  13. Segmentation error in spectral domain optical coherence tomography measures of the retinal nerve fibre layer thickness in idiopathic intracranial hypertension.

    Science.gov (United States)

    Aojula, Anuriti; Mollan, Susan P; Horsburgh, John; Yiangou, Andreas; Markey, Kiera A; Mitchell, James L; Scotton, William J; Keane, Pearse A; Sinclair, Alexandra J

    2018-01-04

    Optical Coherence Tomography (OCT) imaging is being increasingly used in clinical practice for the monitoring of papilloedema. The aim is to characterise the extent and location of the Retinal Nerve Fibre Layer (RNFL) thickness automated segmentation error (SegE), identified by manual refinement, in a cohort of Idiopathic Intracranial Hypertension (IIH) patients with papilloedema, and to compare this to controls. Baseline Spectral Domain OCT (SDOCT) scans from patients with IIH, and controls with no retinal or optic nerve pathology, were examined. The internal limiting membrane and RNFL thickness of the most severely affected eye were examined for SegE and re-segmented. Using ImageJ, the total area of the RNFL thickness was calculated before and after re-segmentation and the percentage change was determined. The distribution of RNFL thickness error was qualitatively assessed. Significantly greater SegE (p = 0.009) was present in RNFL thickness total area, assessed using ImageJ, in IIH patients (n = 46, 5% ± 0-58%) compared to controls (n = 14, 1% ± 0-6%). This was particularly evident in moderate to severe optic disc swelling (n = 23, 10% ± 0-58%, p < 0.001). RNFL thickness could not be quantified using SDOCT in patients with severe papilloedema. SegE remains a concern for clinicians using SDOCT to monitor papilloedema in IIH, particularly in the assessment of eyes with moderate to severe oedema. Systematic assessment and manual refinement of SegE are therefore important to ensure accuracy in the longitudinal monitoring of patients.

  14. Analytical sensitivity analysis of geometric errors in a three axis machine tool

    International Nuclear Information System (INIS)

    Park, Sung Ryung; Yang, Seung Han

    2012-01-01

    In this paper, an analytical method is used to perform a sensitivity analysis of geometric errors in a three axis machine tool. First, an error synthesis model is constructed for evaluating the position volumetric error due to the geometric errors, and then an output variable is defined, such as the magnitude of the position volumetric error. Next, the global sensitivity analysis is executed using an analytical method. Finally, the sensitivity indices are calculated using the quantitative values of the geometric errors.
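
    As an illustration of the workflow described above (error synthesis model, scalar output, sensitivity indices), the sketch below linearises a toy three-axis volumetric-error model and forms variance-based first-order indices from its gradient and assumed tolerances. The error model, nominal point and tolerance values are invented for the example and do not reproduce the paper's machine model or its analytical derivation.

        import numpy as np

        def volumetric_error(g, x=300.0, y=200.0, z=100.0):
            """Toy error synthesis model: g = [squareness_xy, squareness_xz,
            squareness_yz, linear_x, linear_y, linear_z] (assumed units)."""
            ex = g[3] * x + g[0] * y + g[1] * z
            ey = g[4] * y + g[2] * z
            ez = g[5] * z
            return np.sqrt(ex**2 + ey**2 + ez**2)   # magnitude of position volumetric error

        def first_order_indices(sigmas, h=1e-9):
            """Linearised variance-based sensitivity indices at the nominal point."""
            m = len(sigmas)
            g0 = np.zeros(m)
            f0 = volumetric_error(g0)
            grad = np.array([(volumetric_error(g0 + h * np.eye(m)[i]) - f0) / h
                             for i in range(m)])
            contrib = (grad * sigmas) ** 2          # variance contribution of each error
            return contrib / contrib.sum()

        sigmas = np.array([5e-6, 5e-6, 5e-6, 1e-5, 1e-5, 1e-5])  # assumed tolerances
        print(np.round(first_order_indices(sigmas), 3))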

  15. Error analysis of short term wind power prediction models

    International Nuclear Information System (INIS)

    De Giorgi, Maria Grazia; Ficarella, Antonio; Tarantino, Marco

    2011-01-01

    The integration of wind farms in power networks has become an important problem. This is because the electricity produced cannot be preserved because of the high cost of storage, and electricity production must follow market demand. Short- to long-range wind forecasting over different time horizons is becoming an important process for the management of wind farms. Time series modelling of wind speeds is based upon the valid assumption that all the causative factors are implicitly accounted for in the sequence of occurrence of the process itself. Hence time series modelling is equivalent to physical modelling. Auto Regressive Moving Average (ARMA) models, which perform a linear mapping between inputs and outputs, and Artificial Neural Networks (ANNs) and Adaptive Neuro-Fuzzy Inference Systems (ANFIS), which perform a non-linear mapping, provide a robust approach to wind power prediction. In this work, these models are developed in order to forecast power production of a wind farm with three wind turbines, using real load data and comparing different time prediction periods. This comparative analysis considers, for the first time, various forecasting methods and time horizons together with a detailed performance analysis focused upon the normalised mean error and its statistical distribution, in order to identify the methods whose errors fall within a narrower distribution and which are therefore less likely to produce large prediction errors. (author)
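
    A small sketch of the error metrics referred to above: the normalised mean error (NME) and the spread of the error distribution, evaluated for two simple reference forecasters (persistence and a moving average). The synthetic power series, the installed capacity used for normalisation and the forecaster choices are assumptions for illustration; the paper's ARMA/ANN/ANFIS models are not reproduced here.

        import numpy as np

        def nme(pred, obs, capacity):
            """Normalised mean (absolute) error with respect to installed capacity."""
            return np.mean(np.abs(pred - obs)) / capacity

        rng = np.random.default_rng(3)
        capacity = 2.0  # MW, assumed total for three small turbines
        t = np.arange(2000)
        obs = capacity * np.clip(0.5 + 0.3 * np.sin(2 * np.pi * t / 144)
                                 + 0.15 * rng.standard_normal(len(t)), 0, 1)

        persistence = np.roll(obs, 1)[1:]                        # forecast = last observed value
        moving_avg = np.convolve(obs, np.ones(6) / 6, mode="valid")[:-1]

        print("NME persistence :", round(nme(persistence, obs[1:], capacity), 4))
        print("NME moving avg  :", round(nme(moving_avg, obs[6:], capacity), 4))
        print("error std (persistence):", round(np.std(persistence - obs[1:]), 4))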

  16. Critical slowing down and error analysis in lattice QCD simulations

    International Nuclear Information System (INIS)

    Schaefer, Stefan; Sommer, Rainer; Virotta, Francesco

    2010-09-01

    We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular for the squared topological charge we find it to be very severe with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)

  17. Critical slowing down and error analysis in lattice QCD simulations

    Energy Technology Data Exchange (ETDEWEB)

    Schaefer, Stefan [Humboldt-Universitaet, Berlin (Germany). Inst. fuer Physik; Sommer, Rainer; Virotta, Francesco [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC

    2010-09-15

    We study the critical slowing down towards the continuum limit of lattice QCD simulations with Hybrid Monte Carlo type algorithms. In particular for the squared topological charge we find it to be very severe with an effective dynamical critical exponent of about 5 in pure gauge theory. We also consider Wilson loops which we can demonstrate to decouple from the modes which slow down the topological charge. Quenched observables are studied and a comparison to simulations of full QCD is made. In order to deal with the slow modes in the simulation, we propose a method to incorporate the information from slow observables into the error analysis of physical observables and arrive at safer error estimates. (orig.)

  18. Encapsulation method for atom probe tomography analysis of nanoparticles

    NARCIS (Netherlands)

    Larson, D.J.; Giddings, A.D.; Wub, Y.; Verheijen, M.A.; Prosa, T.J.; Roozeboom, F.; Rice, K.P.; Kessels, W.M.M.; Geiser, B.P.; Kelly, T.F.

    2015-01-01

    Open-space nanomaterials are a widespread class of technologically important materials that are generally incompatible with analysis by atom probe tomography (APT) due to issues with specimen preparation, field evaporation and data reconstruction. The feasibility of encapsulating such non-compact

  19. Quantitative comparison of analysis methods for spectroscopic optical coherence tomography

    NARCIS (Netherlands)

    Bosschaart, Nienke; van Leeuwen, Ton; Aalders, Maurice C.G.; Faber, Dirk

    2013-01-01

    Spectroscopic optical coherence tomography (sOCT) enables the mapping of chromophore concentrations and image contrast enhancement in tissue. Acquisition of depth resolved spectra by sOCT requires analysis methods with optimal spectral/spatial resolution and spectral recovery. In this article, we

  20. Quantitative comparison of analysis methods for spectroscopic optical coherence tomography

    NARCIS (Netherlands)

    Bosschaart, Nienke; van Leeuwen, Ton G.; Aalders, Maurice C. G.; Faber, Dirk J.

    2013-01-01

    Spectroscopic optical coherence tomography (sOCT) enables the mapping of chromophore concentrations and image contrast enhancement in tissue. Acquisition of depth resolved spectra by sOCT requires analysis methods with optimal spectral/spatial resolution and spectral recovery. In this article, we

  1. Analysis of tap weight errors in CCD transversal filters

    NARCIS (Netherlands)

    Ricco, Bruno; Wallinga, Hans

    1978-01-01

    A method is presented to determine and evaluate the actual tap weight errors in CCD split-electrode transversal filters. It is concluded that the correlated part in the tap weight errors dominates the random errors.

  2. Tomography

    International Nuclear Information System (INIS)

    Allan, C.J.; Keller, N.A.; Lupton, L.R.; Taylor, T.; Tonner, P.D.

    1984-10-01

    Tomography is a non-intrusive imaging technique being developed at CRNL as an industrial tool for generating quantitative cross-sectional density maps of objects. Of most interest is tomography's ability to: distinguish features within complex geometries where other NDT techniques fail because of the complexity of the geometry; detect/locate small density changes/defects within objects, e.g. void fraction measurements within thick-walled vessels, shrink cavities in castings, etc.; provide quantitative data that can be used in analyses, e.g. of complex processes, or fracture mechanics; and provide objective quantitative data that can be used for (computer-based) quality assurance decisions, thereby reducing and in some cases eliminating the present subjectivity often encountered in NDT. The CRNL program is reviewed and examples are presented to illustrate the potential and the limitations of the technology

  3. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.

    2015-05-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract away many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model, based on the Equivalent-in-Distribution (EiD) approach, succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters, while providing a simple tractable analysis. In this paper, we extend this framework to study the effect of different interference management techniques in downlink cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.

  4. FRamework Assessing Notorious Contributing Influences for Error (FRANCIE): Perspective on Taxonomy Development to Support Error Reporting and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Lon N. Haney; David I. Gertman

    2003-04-01

    Beginning in the 1980s a primary focus of human reliability analysis was estimation of human error probabilities. However, detailed qualitative modeling with comprehensive representation of contextual variables often was lacking. This was likely due to the lack of comprehensive error and performance shaping factor taxonomies, and the limited data available on observed error rates and their relationship to specific contextual variables. In the mid 90s Boeing, America West Airlines, NASA Ames Research Center and INEEL partnered in a NASA sponsored Advanced Concepts grant to: assess the state of the art in human error analysis, identify future needs for human error analysis, and develop an approach addressing these needs. Identified needs included the need for a method to identify and prioritize task and contextual characteristics affecting human reliability. Other needs identified included developing comprehensive taxonomies to support detailed qualitative modeling and to structure meaningful data collection efforts across domains. A result was the development of the FRamework Assessing Notorious Contributing Influences for Error (FRANCIE) with a taxonomy for airline maintenance tasks. The assignment of performance shaping factors to generic errors by experts proved to be valuable to qualitative modeling. Performance shaping factors and error types from such detailed approaches can be used to structure error reporting schemes. In a recent NASA Advanced Human Support Technology grant FRANCIE was refined, and two new taxonomies for use on space missions were developed. The development, sharing, and use of error taxonomies, and the refinement of approaches for increased fidelity of qualitative modeling is offered as a means to help direct useful data collection strategies.

  5. Error Analysis for Fourier Methods for Option Pricing

    KAUST Repository

    Häppölä, Juho

    2016-01-06

    We provide a bound for the error committed when using a Fourier method to price European options when the underlying follows an exponential Levy dynamic. The price of the option is described by a partial integro-differential equation (PIDE). Applying a Fourier transformation to the PIDE yields an ordinary differential equation that can be solved analytically in terms of the characteristic exponent of the Levy process. Then, a numerical inverse Fourier transform allows us to obtain the option price. We present a novel bound for the error and use this bound to set the parameters for the numerical method. We analyze the properties of the bound for a dissipative and pure-jump example. The bound presented is independent of the asymptotic behaviour of option prices at extreme asset prices. The error bound can be decomposed into a product of terms resulting from the dynamics and the option payoff, respectively. The analysis is supplemented by numerical examples that demonstrate results comparable to and superior to the existing literature.
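
    To make the pricing pipeline concrete, the sketch below recovers a European call price from a characteristic function via the usual two Gil-Pelaez-type probabilities and checks it against the Black-Scholes closed form in the pure-diffusion special case. This is a generic illustration of Fourier pricing, not the paper's method or its error bound; the parameters and the integration cutoff are assumptions.

        import numpy as np
        from scipy.integrate import quad
        from scipy.stats import norm

        S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters

        def phi(u):
            """Characteristic function of ln(S_T) for geometric Brownian motion,
            the pure-diffusion special case of an exponential Levy model."""
            return np.exp(1j * u * (np.log(S0) + (r - 0.5 * sigma**2) * T)
                          - 0.5 * sigma**2 * u**2 * T)

        def prob(j):
            """P1 (j=1) and P2 (j=2) in the two-probability call representation."""
            def integrand(u):
                num = phi(u - 1j) / phi(-1j) if j == 1 else phi(u)
                return (np.exp(-1j * u * np.log(K)) * num / (1j * u)).real
            return 0.5 + quad(integrand, 1e-8, 200.0, limit=400)[0] / np.pi

        call_fourier = S0 * prob(1) - K * np.exp(-r * T) * prob(2)

        # Black-Scholes reference for the same parameters.
        d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
        call_bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
        print(round(call_fourier, 4), round(call_bs, 4))   # should agree closely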

  6. ERROR ANALYSIS FOR THE AIRBORNE DIRECT GEOREFERINCING TECHNIQUE

    Directory of Open Access Journals (Sweden)

    A. S. Elsharkawy

    2016-10-01

    Full Text Available Direct Georeferencing was shown to be an important alternative to standard indirect image orientation using classical or GPS-supported aerial triangulation. Since direct Georeferencing without ground control relies on an extrapolation process only, particular focus has to be laid on the overall system calibration procedure. The accuracy performance of integrated GPS/inertial systems for direct Georeferencing in airborne photogrammetric environments has been tested extensively in recent years. In this approach, the limiting factor is a correct overall system calibration including the GPS/inertial component as well as the imaging sensor itself. Therefore remaining errors in the system calibration will significantly decrease the quality of object point determination. This research paper presents an error analysis for the airborne direct Georeferencing technique, where integrated GPS/IMU positioning and navigation systems are used in conjunction with aerial cameras for airborne mapping, compared with GPS/INS-supported AT, through the implementation of a certain amount of error on the EOP and boresight parameters and a study of the effect of these errors on the final ground coordinates. The data set is a block of 32 images distributed over six flight lines; the interior orientation parameters (IOP) are known through a careful camera calibration procedure, and 37 ground control points are known through a terrestrial surveying procedure. The exact location of the camera station at the time of exposure, i.e. the exterior orientation parameters (EOP), is known through the GPS/INS integration process. The preliminary results show that, firstly, DG and GPS-supported AT have similar accuracy and, compared with the conventional aerial photography method, the two technologies reduce the dependence on ground control (used only for quality control purposes). Secondly, in the DG, correcting the overall system calibration including the GPS/inertial component as well as the

  7. Reduction in the ionospheric error for a single-frequency GPS timing solution using tomography

    Directory of Open Access Journals (Sweden)

    Cathryn N. Mitchell

    2009-06-01

    Full Text Available
    Single-frequency Global Positioning System (GPS) receivers do not accurately compensate for the ionospheric delay imposed upon a GPS signal. They rely upon models to compensate for the ionosphere. This delay compensation can be improved by measuring it directly with a dual-frequency receiver, or by monitoring the ionosphere using real-time maps. This investigation uses a 4D tomographic algorithm, the Multi Instrument Data Analysis System (MIDAS), to correct for the ionospheric delay and compares the results to existing single and dual-frequency techniques. Maps of the ionospheric electron density, across Europe, are produced by using data collected from a fixed network of dual-frequency GPS receivers. Single-frequency pseudorange observations are corrected by using the maps to find the excess propagation delay on the GPS L1 signals. Days during the solar maximum year 2002 and the October 2003 storm have been chosen to display results when the ionospheric delays are large and variable. Results that improve upon the use of existing ionospheric models are achieved by applying MIDAS to fixed and mobile single-frequency GPS timing solutions. The approach offers the potential for corrections to be broadcast over a local region, or provided via the internet, and allows timing accuracies to within 10 ns to be achieved.
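
    The single-frequency correction implied above reduces, to first order, to the group delay 40.3·TEC/f² metres, so a slant TEC value taken from a tomographic map converts directly into range and timing corrections. The sketch below uses an assumed storm-time TEC value for illustration; it is not MIDAS output.

        SPEED_OF_LIGHT = 299_792_458.0      # m/s
        F_L1 = 1_575.42e6                   # GPS L1 frequency, Hz
        TECU = 1.0e16                       # electrons per m^2 in one TEC unit

        def iono_delay(slant_tec_tecu, freq_hz=F_L1):
            """First-order ionospheric group delay in metres for a given slant TEC."""
            return 40.3 * slant_tec_tecu * TECU / freq_hz**2

        slant_tec = 50.0                    # TECU, an assumed storm-time value
        delay_m = iono_delay(slant_tec)
        print(f"range error {delay_m:.2f} m, "
              f"timing error {1e9 * delay_m / SPEED_OF_LIGHT:.1f} ns")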



  8. A Simulator for Human Error Probability Analysis (SHERPA)

    International Nuclear Information System (INIS)

    Di Pasquale, Valentina; Miranda, Salvatore; Iannone, Raffaele; Riemma, Stefano

    2015-01-01

    A new Human Reliability Analysis (HRA) method is presented in this paper. The Simulator for Human Error Probability Analysis (SHERPA) model provides a theoretical framework that exploits the advantages of simulation tools and traditional HRA methods in order to model human behaviour and to predict the error probability for a given scenario in every kind of industrial system. Human reliability is estimated as a function of the performed task, the Performance Shaping Factors (PSF) and the time worked, with the purpose of considering how reliability depends not only on the task and working context, but also on the time that the operator has already spent on the work. The model is able to estimate human reliability; to assess the effects due to different human reliability levels through evaluation of tasks performed more or less correctly; and to assess the impact of context via PSFs. SHERPA also provides the possibility of determining the optimal configuration of breaks. Through a methodology that uses assessments of an economic nature, it allows identification of the conditions required for the suspension of work in the shift for the operator's psychophysical recovery and then for the restoration of acceptable values of reliability. - Highlights: • We propose a new method for Human Reliability Analysis called SHERPA. • SHERPA is able to model human behaviour and to predict the error probability. • Human reliability is a function of the task done, influencing factors and time worked. • SHERPA exploits the benefits of simulation tools and traditional HRA methods. • SHERPA is implemented as a simulation template able to assess human reliability
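
    As a rough illustration of the kind of quantification such a model builds on, the sketch below scales a nominal error probability by Performance Shaping Factor multipliers and by a simple fatigue term tied to hours worked without a break. The multiplier values, the fatigue rate and the PSF names are assumptions for illustration only; they are not SHERPA's actual tables or its simulation template.

        NOMINAL_HEP = 1e-3          # assumed nominal error probability for a routine task

        PSF_MULTIPLIERS = {         # illustrative multipliers, not SHERPA's values
            "time_pressure": 2.0,
            "poor_ergonomics": 1.5,
            "high_complexity": 2.0,
        }

        def hep(active_psfs, hours_since_break=0.0, fatigue_rate=0.10):
            """Scale the nominal HEP by active PSFs and a simple fatigue factor."""
            p = NOMINAL_HEP
            for name in active_psfs:
                p *= PSF_MULTIPLIERS[name]
            p *= 1.0 + fatigue_rate * hours_since_break
            return min(p, 1.0)

        print(hep(["time_pressure", "high_complexity"], hours_since_break=3.0))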

  9. Analysis of Random Segment Errors on Coronagraph Performance

    Science.gov (United States)

    Stahl, Mark T.; Stahl, H. Philip; Shaklan, Stuart B.; N'Diaye, Mamadou

    2016-01-01

    At 2015 SPIE O&P we presented "Preliminary Analysis of Random Segment Errors on Coronagraph Performance". Key findings: contrast leakage for a 4th-order Sinc2(X) coronagraph is 10X more sensitive to random segment piston than to random tip/tilt; apertures with fewer segments (i.e. 1 ring) or very many segments (> 16 rings) have less contrast leakage as a function of piston or tip/tilt than an aperture with 2 to 4 rings of segments. Revised finding: piston is only 2.5X more sensitive than tip/tilt.

  10. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    Science.gov (United States)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students' competence in solving mathematical problems is still considered weak. This was pointed out by the results of international assessments such as TIMSS, and it might be caused by the various types of errors made. Hence, this study aimed at identifying students' errors in solving mathematical problems in TIMSS in the topic of numbers, which is considered a fundamental concept in mathematics. The study applied descriptive qualitative analysis. The subjects were the three students with the most errors in the test indicators, taken from 34 students of 8th grade. Data were obtained through a paper and pencil test and student interviews. The error analysis indicated that in solving Applying level problems, the type of error that students made was operational errors. For Reasoning level problems, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students' errors showed that students did not comprehend the mathematical problems given.

  11. Close-range radar rainfall estimation and error analysis

    Science.gov (United States)

    van de Beek, C. Z.; Leijnse, H.; Hazenberg, P.; Uijlenhoet, R.

    2016-08-01

    Quantitative precipitation estimation (QPE) using ground-based weather radar is affected by many sources of error. The most important of these are (1) radar calibration, (2) ground clutter, (3) wet-radome attenuation, (4) rain-induced attenuation, (5) vertical variability in rain drop size distribution (DSD), (6) non-uniform beam filling and (7) variations in DSD. This study presents an attempt to separate and quantify these sources of error in flat terrain very close to the radar (1-2 km), where (4), (5) and (6) only play a minor role. Other important errors exist, like beam blockage, WLAN interferences and hail contamination and are briefly mentioned, but not considered in the analysis. A 3-day rainfall event (25-27 August 2010) that produced more than 50 mm of precipitation in De Bilt, the Netherlands, is analyzed using radar, rain gauge and disdrometer data. Without any correction, it is found that the radar severely underestimates the total rain amount (by more than 50 %). The calibration of the radar receiver is operationally monitored by analyzing the received power from the sun. This turns out to cause a 1 dB underestimation. The operational clutter filter applied by KNMI is found to incorrectly identify precipitation as clutter, especially at near-zero Doppler velocities. An alternative simple clutter removal scheme using a clear sky clutter map improves the rainfall estimation slightly. To investigate the effect of wet-radome attenuation, stable returns from buildings close to the radar are analyzed. It is shown that this may have caused an underestimation of up to 4 dB. Finally, a disdrometer is used to derive event and intra-event specific Z-R relations due to variations in the observed DSDs. Such variations may result in errors when applying the operational Marshall-Palmer Z-R relation. Correcting for all of these effects has a large positive impact on the radar-derived precipitation estimates and yields a good match between radar QPE and gauge
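
    The disdrometer-derived Z-R step mentioned above can be sketched as follows: reflectivity in dBZ is corrected by a calibration offset and converted to rain rate through a power law Z = a·R^b, here with the operational Marshall-Palmer coefficients a = 200 and b = 1.6. The example reflectivity values and the +1 dB offset applied below are illustrative assumptions.

        import numpy as np

        def rain_rate(dbz, a=200.0, b=1.6, calibration_offset_db=0.0):
            """Rain rate in mm/h from reflectivity in dBZ, after adding a
            calibration correction (e.g. a receiver bias of about 1 dB)."""
            z = 10.0 ** ((dbz + calibration_offset_db) / 10.0)   # Z in mm^6 m^-3
            return (z / a) ** (1.0 / b)

        dbz = np.array([20.0, 30.0, 40.0, 45.0])
        print(np.round(rain_rate(dbz), 2))                             # uncorrected
        print(np.round(rain_rate(dbz, calibration_offset_db=1.0), 2))  # +1 dB corrected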

  12. Error Analysis of CM Data Products Sources of Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Hunt, Brian D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eckert-Gallup, Aubrey Celia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cochran, Lainy Dromgoole [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kraus, Terrence D. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Allen, Mark B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Beal, Bill [National Security Technologies, Joint Base Andrews, MD (United States); Okada, Colin [National Security Technologies, LLC. (NSTec), Las Vegas, NV (United States); Simpson, Mathew [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-01

    The goal of this project is to address the current inability to assess the overall error and uncertainty of data products developed and distributed by DOE's Consequence Management (CM) Program. This is a widely recognized shortfall, the resolution of which would provide a great deal of value and defensibility to the analysis results, data products, and the decision making process that follows this work. A global approach to this problem is necessary because multiple sources of error and uncertainty contribute to the ultimate production of CM data products. Therefore, this project will require collaboration with subject matter experts across a wide range of FRMAC skill sets in order to quantify the types of uncertainty that each area of the CM process might contain and to understand how variations in these uncertainty sources contribute to the aggregated uncertainty present in CM data products. The ultimate goal of this project is to quantify the confidence level of CM products to ensure that appropriate public and worker protection decisions are supported by defensible analysis.

  13. SIRTF Focal Plane Survey: A Pre-flight Error Analysis

    Science.gov (United States)

    Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.

    2003-01-01

    This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.

  14. Improving patient safety in radiotherapy through error reporting and analysis

    International Nuclear Information System (INIS)

    Findlay, Ú.; Best, H.; Ottrey, M.

    2016-01-01

    Aim: To improve patient safety in radiotherapy (RT) through the analysis and publication of radiotherapy errors and near misses (RTE). Materials and methods: RTE are submitted on a voluntary basis by NHS RT departments throughout the UK to the National Reporting and Learning System (NRLS) or directly to Public Health England (PHE). RTE are analysed by PHE staff using frequency trend analysis based on the classification and pathway coding from Towards Safer Radiotherapy (TSRT). PHE, in conjunction with the Patient Safety in Radiotherapy Steering Group, publish learning from these events on a triannual basis, with a biennial summary, so that their occurrence might be mitigated. Results: Since the introduction of this initiative in 2010, over 30,000 RTE reports have been submitted. The number of RTE reported in each biennial cycle has grown, ranging from 680 (2010) to 12,691 (2016) RTE. The vast majority of the RTE reported are lower level events, thus not affecting the outcome of patient care. Of the level 1 and 2 incidents reported, it is known that the majority affected only one fraction of a course of treatment. This means that corrective action could be taken over the remaining treatment fractions, so the incident did not have a significant impact on the patient or the outcome of their treatment. Analysis of the RTE reports demonstrates that generation of error is not confined to one professional group or to any particular point in the pathway. It also indicates that the pattern of errors is replicated across service providers in the UK. Conclusion: Use of the terminology, classification and coding of TSRT, together with implementation of the national voluntary reporting system described within this report, allows clinical departments to compare their local analysis to the national picture. Further opportunities to improve learning from this dataset must be exploited through development of the analysis and of proactive risk management strategies.

  15. Alignment error analysis of detector array for spatial heterodyne spectrometer.

    Science.gov (United States)

    Jin, Wei; Chen, Di-Hu; Li, Zhi-Wei; Luo, Hai-Yan; Hong, Jin

    2017-12-10

    Spatial heterodyne spectroscopy (SHS) is a new spatial interference spectroscopy which can achieve high spectral resolution. Alignment error of the detector array can significantly degrade the spectral resolution of an SHS system. Theoretical models for analyzing the alignment errors, which are divided into three kinds, are presented in this paper. Based on these models, the tolerance angle for each error is given. The results of simulation experiments show that the alignment reaches an acceptable level when the slope error, tilt error, and rotation error are less than 1.21°, 1.21°, and 0.066°, respectively.

  16. Effects of Correlated Errors on the Analysis of Space Geodetic Data

    Science.gov (United States)

    Romero-Wolf, Andres; Jacobs, C. S.

    2011-01-01

    As thermal errors are reduced, instrumental and troposphere-correlated errors will become increasingly important. Work in progress shows that troposphere covariance error models improve data analysis results. We expect to see stronger effects with higher data rates. Temperature modeling of delay errors may further reduce temporal correlations in the data.

  17. Different setup errors assessed by weekly cone-beam computed tomography on different registration in nasopharyngeal carcinoma treated with intensity-modulated radiation therapy.

    Science.gov (United States)

    Su, Jiqing; Chen, Wen; Yang, Huiyun; Hong, Jidong; Zhang, Zijian; Yang, Guangzheng; Li, Li; Wei, Rui

    2015-01-01

    The study aimed to investigate the difference in setup errors between registration sites in the treatment of nasopharyngeal carcinoma based on weekly cone-beam computed tomography (CBCT). Thirty nasopharyngeal cancer patients scheduled to undergo intensity-modulated radiotherapy (IMRT) were prospectively enrolled in the study. Each patient had a weekly CBCT before radiation therapy. Over the entire study, 201 CBCT scans were obtained. The scans were registered to the planning CT to determine the difference in setup errors between registration sites. The registration sites were represented by bony landmarks: the nasal septum and pterygoid process represent the head, cervical vertebrae 1-3 represent the upper neck, and cervical vertebrae 4-6 represent the lower neck. Patient positioning errors were recorded in the right-left (RL), superior-inferior (SI), and anterior-posterior (AP) directions over the course of radiotherapy. Planning target volume margins were calculated from the systematic and random errors. This study shows that setup errors occur in the RL, SI, and AP directions in nasopharyngeal carcinoma patients undergoing IMRT, and that head and neck setup errors differ significantly, with the neck setup error greater than that of the head over the course of radiotherapy. In our institution, we recommend a planning target volume margin of 3.0 mm in the RL direction, 1.3 mm in the SI direction, and 2.6 mm in the AP direction for nasopharyngeal cancer patients undergoing IMRT with weekly CBCT scans.
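
    PTV margins such as those quoted above are commonly derived from the systematic error Σ (spread of per-patient mean shifts) and the random error σ (root-mean-square of per-patient spreads); the sketch below applies the widely used van Herk recipe margin = 2.5Σ + 0.7σ. The choice of recipe and the synthetic shift table are assumptions here, since the abstract does not state which formula the authors used.

        import numpy as np

        def ptv_margin(shifts_by_patient):
            """shifts_by_patient: list of 1-D arrays of setup shifts (mm) along one
            axis, one array per patient. Returns (Sigma, sigma, margin) in mm."""
            means = np.array([np.mean(s) for s in shifts_by_patient])
            sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
            Sigma = np.std(means, ddof=1)        # systematic error: SD of patient means
            sigma = np.sqrt(np.mean(sds ** 2))   # random error: RMS of patient SDs
            return Sigma, sigma, 2.5 * Sigma + 0.7 * sigma

        # Synthetic cohort: 30 patients, 7 weekly shifts each (mm).
        rng = np.random.default_rng(4)
        patients = [rng.normal(rng.normal(0, 1.0), 1.2, size=7) for _ in range(30)]
        Sigma, sigma, margin = ptv_margin(patients)
        print(f"Sigma={Sigma:.2f} mm, sigma={sigma:.2f} mm, margin={margin:.2f} mm")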

  18. A Framework for Examining Mathematics Teacher Knowledge as Used in Error Analysis

    Science.gov (United States)

    Peng, Aihui; Luo, Zengru

    2009-01-01

    Error analysis is a basic and important task for mathematics teachers. Unfortunately, in the present literature there is a lack of detailed understanding about teacher knowledge as used in it. Based on a synthesis of the literature in error analysis, a framework for prescribing and assessing mathematics teacher knowledge in error analysis was…

  19. Error Analysis and Compensation Method Of 6-axis Industrial Robot

    OpenAIRE

    Zhang, Jianhao; Cai, Jinda

    2017-01-01

    A compensation method is proposed based on an error model built from the robot's kinematic structure parameters and joint angles. Using the robot kinematics equations derived from the D-H algorithm, a kinematic error model is deduced for the robot's end effector, and a comprehensive compensation method for the kinematic parameter errors, which maps the structural parameters onto the joint angle parameters, is proposed. In order to solve the angular error problem in the compensation process of ea...

  20. Error analysis and prevention of cosmic ion-induced soft errors in static CMOS RAMS

    International Nuclear Information System (INIS)

    Diehl, S.E.; Ochoa, A. Jr.; Dressendorfer, P.V.; Koga, R.; Kolasinski, W.A.

    1982-06-01

    Cosmic ray interactions with memory cells are known to cause temporary, random, bit errors in some designs. The sensitivity of polysilicon gate CMOS static RAM designs to logic upset by impinging ions has been studied using computer simulations and experimental heavy ion bombardment. Results of the simulations are confirmed by experimental upset cross-section data. Analytical models have been extended to determine and evaluate design modifications which reduce memory cell sensitivity to cosmic ions. A simple design modification, the addition of decoupling resistance in the feedback path, is shown to produce static RAMs immune to cosmic ray-induced bit errors

  1. Error treatment in students' written assignments in Discourse Analysis

    African Journals Online (AJOL)

    ... is generally no consensus on how lecturers should treat students' errors in written assignments, observations in this study enabled the researcher to provide certain strategies that lecturers can adopt. Key words: Error treatment; error handling; corrective feedback, positive cognitive feedback; negative cognitive feedback; ...

  2. Error Analysis in Composition of Iranian Lower Intermediate Students

    Science.gov (United States)

    Taghavi, Mehdi

    2012-01-01

    Learners make errors during the process of learning languages. This study examines errors in the writing task of twenty Iranian lower intermediate male students aged between 13 and 15. The subject given to the participants was a composition about the seasons of the year. All of the errors were identified and classified. Corder's classification (1967)…

  3. The Impact of Text Genre on Iranian Intermediate EFL Students' Writing Errors: An Error Analysis Perspective

    Science.gov (United States)

    Moqimipour, Kourosh; Shahrokhi, Mohsen

    2015-01-01

    The present study aimed at analyzing writing errors caused by the interference of the Persian language, regarded as the first language (L1), in three writing genres, namely narration, description, and comparison/contrast by Iranian EFL students. 65 English paragraphs written by the participants, who were at the intermediate level based on their…

  4. Analysis of personnel error occurrence reports across Defense Program facilities

    Energy Technology Data Exchange (ETDEWEB)

    Stock, D.A.; Shurberg, D.A.; O'Brien, J.N.

    1994-05-01

    More than 2,000 reports from the Occurrence Reporting and Processing System (ORPS) database were examined in order to identify weaknesses in the implementation of the guidance for the Conduct of Operations (DOE Order 5480.19) at Defense Program (DP) facilities. The analysis revealed recurrent problems involving procedures, training of employees, the occurrence of accidents, planning and scheduling of daily operations, and communications. Changes to DOE 5480.19 and modifications of the Occurrence Reporting and Processing System are recommended to reduce the frequency of these problems. The primary tool used in this analysis was a coding scheme based on the guidelines in 5480.19, which was used to classify the textual content of occurrence reports. The occurrence reports selected for analysis came from across all DP facilities, and listed personnel error as a cause of the event. A number of additional reports, specifically from the Plutonium Processing and Handling Facility (TA55), and the Chemistry and Metallurgy Research Facility (CMR), at Los Alamos National Laboratory, were analyzed separately as a case study. In total, 2070 occurrence reports were examined for this analysis. A number of core issues were consistently found in all analyses conducted, and all subsets of data examined. When individual DP sites were analyzed, including some sites which have since been transferred, only minor variations were found in the importance of these core issues. The same issues also appeared in different time periods, in different types of reports, and at the two Los Alamos facilities selected for the case study.

  5. Human error analysis of commercial aviation accidents using the human factors analysis and classification system (HFACS)

    Science.gov (United States)

    2001-02-01

    The Human Factors Analysis and Classification System (HFACS) is a general human error framework originally developed and tested within the U.S. military as a tool for investigating and analyzing the human causes of aviation accidents. Based upon ...

  6. Error analysis of acceleration control loops of a synchrotron

    International Nuclear Information System (INIS)

    Zhang, S.Y.; Weng, W.T.

    1991-01-01

    For beam control during acceleration, it is conventional to derive the frequency from an external reference, be it a field marker or an external oscillator, to provide phase and radius feedback loops to ensure the phase stability, radial position and emittance integrity of the beam. The open and closed loop behaviors of both feedback control and their response under the possible frequency, phase and radius errors are derived from fundamental principles and equations. The stability of the loops is investigated under a wide range of variations of the gain and time delays. Actual system performance of the AGS Booster is analyzed and compared to commissioning experiences. Such analysis is useful for setting design criteria and tolerances for new proton synchrotrons. 4 refs., 13 figs

  7. Kitchen Physics: Lessons in Fluid Pressure and Error Analysis

    Science.gov (United States)

    Vieyra, Rebecca Elizabeth; Vieyra, Chrystian; Macchia, Stefano

    2017-02-01

    Although the advent and popularization of the "flipped classroom" tends to center around at-home video lectures, teachers are increasingly turning to at-home labs for enhanced student engagement. This paper describes two simple at-home experiments that can be accomplished in the kitchen. The first experiment analyzes the density of four liquids using a waterproof case and a smartphone barometer in a container, sink, or tub. The second experiment determines the relationship between pressure and temperature of an ideal gas in a constant volume container placed momentarily in a refrigerator freezer. These experiences provide a ripe opportunity both for learning fundamental physics concepts as well as to investigate a variety of error analysis techniques that are frequently overlooked in introductory physics courses.
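
    For the first experiment, the analysis reduces to the hydrostatic relation: the liquid density follows from two barometer readings at known depths as ρ = ΔP/(gΔh). The sample readings below are made-up numbers chosen to give a water-like answer.

        G = 9.81  # m/s^2

        def density(p_shallow_pa, p_deep_pa, depth_difference_m):
            """Liquid density in kg/m^3 from two pressure readings a known depth apart."""
            return (p_deep_pa - p_shallow_pa) / (G * depth_difference_m)

        # Readings 0.15 m apart in a water-like liquid (illustrative values).
        print(round(density(101_325.0, 102_797.0, 0.15), 1))   # ~1000 kg/m^3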

  8. Error-rate performance analysis of opportunistic regenerative relaying

    KAUST Repository

    Tourki, Kamel

    2011-09-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and take into account the effect of possibly erroneously detected and transmitted data at the best relay. We first derive the exact statistics of each hop, in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation, where the detector may use maximum ratio combining (MRC) or selection combining (SC). Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over a linear network (LN) architecture and considering Rayleigh fading channels. © 2011 IEEE.

  9. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions; they occur at high error rates, for short durations, and can impact services. To completely describe a burst error one has to know the bit pattern, which is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumptions, performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of burst error is introduced using three different models; among them, the mathematical model is used in this study. The probability density function f(b) of a burst error of length b is proposed. The performance of CRC-n codes is evaluated and analyzed using f(b) through a computer simulation model of burst errors within CRC blocks. The simulation results show that the mean block burst error tends to approach the pattern of the burst errors generated by random bit errors.
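
    A minimal bitwise sketch of the mechanism under discussion: a CRC-16 remainder is computed over a frame, a burst error of length b is injected, and the check is recomputed to see whether the corruption is detected (any burst no longer than the CRC width is always caught). The frame contents, the generator polynomial choice and the burst position are arbitrary assumptions, not the study's simulation model.

        def crc16(data: bytes, poly=0x1021, init=0xFFFF) -> int:
            """Bitwise CRC-16-CCITT remainder over `data` (MSB first)."""
            crc = init
            for byte in data:
                crc ^= byte << 8
                for _ in range(8):
                    crc = ((crc << 1) ^ poly) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
            return crc

        def inject_burst(data: bytes, start_bit: int, length: int) -> bytes:
            """Flip `length` consecutive bits starting at `start_bit`."""
            buf = bytearray(data)
            for i in range(start_bit, start_bit + length):
                buf[i // 8] ^= 0x80 >> (i % 8)
            return bytes(buf)

        frame = b"DS1 test frame payload"
        check = crc16(frame)
        corrupted = inject_burst(frame, start_bit=37, length=9)   # burst of length b = 9
        print("detected" if crc16(corrupted) != check else "missed")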

  10. An analysis of tracking error in image-guided neurosurgery.

    Science.gov (United States)

    Gerard, Ian J; Collins, D Louis

    2015-10-01

    This study quantifies some of the technical and physical factors that contribute to error in image-guided interventions. Errors associated with tracking, tool calibration and registration between a physical object and its corresponding image were investigated and compared with theoretical descriptions of these errors. A precision milled linear testing apparatus was constructed to perform the measurements. The tracking error was shown to increase in linear fashion with distance normal to the camera, and the tracking error ranged between 0.15 and 0.6 mm. The tool calibration error increased as a function of distance from the camera and the reference tool (0.2-0.8 mm). The fiducial registration error was shown to improve when more points were used up until a plateau value was reached which corresponded to the total fiducial localization error ([Formula: see text]0.8 mm). The target registration error distributions followed a [Formula: see text] distribution with the largest error and variation around fiducial points. To minimize errors, tools should be calibrated as close as possible to the reference tool and camera, and tools should be used as close to the front edge of the camera throughout the intervention, with the camera pointed in the direction where accuracy is least needed during surgery.
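
    The fiducial registration error quoted above is the RMS residual left after a least-squares rigid fit between the physical and image-space fiducials; a compact sketch using the Kabsch algorithm is given below. The fiducial layout, the applied transform and the 0.3 mm localisation noise are assumptions for illustration, not the testing apparatus of the study.

        import numpy as np

        def rigid_fit(src, dst):
            """Least-squares rigid transform (Kabsch): R @ src_i + t ≈ dst_i."""
            src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
            H = (src - src_c).T @ (dst - dst_c)
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
            return R, dst_c - R @ src_c

        def fre(src, dst):
            """Fiducial registration error: RMS residual after the rigid fit (mm)."""
            R, t = rigid_fit(src, dst)
            res = src @ R.T + t - dst
            return float(np.sqrt((res ** 2).sum(axis=1).mean()))

        rng = np.random.default_rng(5)
        image_pts = rng.uniform(-50.0, 50.0, size=(8, 3))          # fiducials in image space (mm)
        R_true, _ = np.linalg.qr(rng.standard_normal((3, 3)))
        if np.linalg.det(R_true) < 0:                              # force a proper rotation
            R_true = -R_true
        physical_pts = image_pts @ R_true.T + np.array([10.0, -5.0, 2.0])
        physical_pts += rng.normal(0.0, 0.3, size=physical_pts.shape)  # 0.3 mm localisation noise
        print(f"FRE = {fre(physical_pts, image_pts):.2f} mm")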

  11. Fixed-point error analysis of Winograd Fourier transform algorithms

    Science.gov (United States)

    Patterson, R. W.; Mcclellan, J. H.

    1978-01-01

    The quantization error introduced by the Winograd Fourier transform algorithm (WFTA) when implemented in fixed-point arithmetic is studied and compared with that of the fast Fourier transform (FFT). The effect of ordering the computational modules and the relative contributions of data quantization error and coefficient quantization error are determined. In addition, the quantization error introduced by the Good-Winograd (GW) algorithm, which uses Good's prime-factor decomposition for the discrete Fourier transform (DFT) together with Winograd's short length DFT algorithms, is studied. Error introduced by the WFTA is, in all cases, worse than that of the FFT. In general, the WFTA requires one or two more bits for data representation to give an error similar to that of the FFT. Error introduced by the GW algorithm is approximately the same as that of the FFT.

  12. Medication errors as malpractice-a qualitative content analysis of 585 medication errors by nurses in Sweden.

    Science.gov (United States)

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factor leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on the nurses' work experience in relation to the risk and type for medication errors. All medication errors where a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis and classification according to the type and the individual and system contributory factors was made. In order to test for possible differences between nurses' work experience and associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was performed to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common in less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate

  13. Analysis of error functions in speckle shearing interferometry

    International Nuclear Information System (INIS)

    Wan Saffiey Wan Abdullah

    2001-01-01

    Electronic Speckle Pattern Shearing Interferometry (ESPSI), or shearography, has successfully been used in NDT for slope (∂w/∂x and/or ∂w/∂y) measurement, while strain measurement (∂u/∂x, ∂v/∂y, ∂u/∂y and ∂v/∂x) is still under investigation. The method is well accepted in industrial applications, especially in the aerospace industry, and demand for it is increasing due to the complexity of the test materials and objects. ESPSI has successfully performed in NDT only for qualitative measurement, whilst quantitative measurement is the current aim of many manufacturers. Industrial use of such equipment proceeds without considering the errors arising from numerous sources, including wavefront divergence. The majority of commercial systems are operated with diverging object illumination wavefronts without considering the curvature of the object illumination wavefront or the object geometry when calculating the interferometer fringe function and quantifying data. This thesis reports a novel approach to quantified maximum phase change difference analysis for the derivative out-of-plane (OOP) and in-plane (IP) cases that arises from a divergent illumination wavefront compared to collimated illumination. The theoretical maximum phase difference is formulated by means of the dependent variables, these being the object distance, illuminated diameter, centre of the illuminated area, camera distance and illumination angle. The relative maximum phase change difference that may contribute to the error in the measurement analysis within this scope of research is defined as the difference between the maximum phase difference measured with a divergent illumination wavefront and that of a collimated illumination wavefront, taken at the edge of the illuminated area. Experimental validation using test objects for derivative out-of-plane and derivative in-plane deformation, using a single illumination wavefront

  14. Interfractional and intrafractional errors assessed by daily cone-beam computed tomography in nasopharyngeal carcinoma treated with intensity-modulated radiation therapy. A prospective study

    International Nuclear Information System (INIS)

    Lu Heming; Lin Hui; Feng Guosheng

    2012-01-01

    This prospective study aimed to assess interfractional and intrafractional errors and to estimate appropriate margins for the planning target volume (PTV) by using daily cone-beam computed tomography (CBCT) guidance in nasopharyngeal carcinoma (NPC). Daily pretreatment and post-treatment CBCT scans were acquired separately after initial patient setup and after the completion of each treatment fraction in 10 patients treated with intensity-modulated radiation therapy (IMRT). Online corrections were made before treatment if any translational setup error was found. Interfractional and intrafractional errors were recorded in the right-left (RL), superior-inferior (SI) and anterior-posterior (AP) directions. For the translational shifts, interfractional errors >2 mm occurred in 21.7% of measurements in the RL direction, 12.7% in the SI direction and 34.1% in the AP direction, respectively. Online correction resulted in 100% of residual errors ≤2 mm in the RL and SI directions, and 95.5% of residual errors ≤2 mm in the AP direction. No residual errors >3 mm occurred in the three directions. For the rotational shifts, a significant reduction was found in the magnitudes of residual errors compared with those of interfractional errors. A margin of 4.9 mm, 4.0 mm and 6.3 mm was required in the RL, SI and AP directions, respectively, when daily CBCT scans were not performed. With daily CBCT, the margins were reduced to 1.2 mm in all directions. In conclusion, daily CBCT guidance is an effective modality to improve the accuracy of IMRT for NPC. The online correction could result in a 70-81% reduction in margin size. (author)

  15. Morphological analysis of the vestibular aqueduct by computerized tomography images

    International Nuclear Information System (INIS)

    Marques, Sergio Ricardo; Smith, Ricardo Luiz; Isotani, Sadao; Alonso, Luis Garcia; Anadao, Carlos Augusto; Prates, Jose Carlos; Lederman, Henrique Manoel

    2007-01-01

    Objective: In the last two decades, advances in the field of computerized tomography (CT) have changed the evaluation of the inner and middle ear. Therefore, the aim of this study is to analyze the morphology and morphometric aspects of the vestibular aqueduct on the basis of computerized tomography images (CTI). Material and method: Computerized tomography images of vestibular aqueducts were acquired from patients (n = 110) with an age range of 1-92 years. Thereafter, a morphometric analysis was performed on the vestibular aqueduct images. Through a computerized image processing system, the vestibular aqueduct measurements comprised its area, external opening, length and the distance from the vestibular aqueduct to the internal acoustic meatus. Results: The morphology of the vestibular aqueduct may be funnel-shaped, filiform or tubular, with respective proportions of 44%, 33% and 22% in children and 21.7%, 53.3% and 25% in adults. The morphometric data were 4.86 mm² for the area, 2.24 mm for the external opening, 4.73 mm for the length and 11.88 mm for the distance from the vestibular aqueduct to the internal acoustic meatus in children, and 4.93 mm², 2.09 mm, 4.44 mm and 11.35 mm, respectively, in adults. Conclusions: Computerized tomography showed that the vestibular aqueduct presents high morphological variability. The morphometric analysis showed that the differences found between the groups of children and adults, or between genders, were not statistically significant.

  16. AGAPE-ET for human error analysis of emergency tasks and its application

    International Nuclear Information System (INIS)

    Kim, J. H.; Jeong, W. D.

    2002-01-01

    The paper presents a proceduralised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), covering both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. The AGAPE-ET method is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified considering the characteristics of the performance of that cognitive function and the influencing mechanism of the performance influencing factors (PIFs) on the cognitive function. Error analysis items have then been determined from the identified error causes or error-likely situations, and a human error analysis procedure based on these items is organised to cue or guide the analysts through the overall human error analysis. The basic scheme for the quantification of HEP consists in multiplying the BHEP assigned to the error analysis item by the weight from the influencing factors decision tree (IFDT) constructed for each cognitive function. The method is characterised by the structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need only work through the relevant cognitive functions. The paper also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results.

  17. On the error analysis of the meshless FDM and its multipoint extension

    Science.gov (United States)

    Jaworska, Irena

    2018-01-01

    The error analysis for meshless methods, especially for the Meshless Finite Difference Method (MFDM), is discussed in the paper. Both a priori and a posteriori error estimations are considered. The experimental order of convergence confirms the theoretically derived a priori error bound. The higher-order extension of the MFDM - the multipoint approach - may be used as a source of the improved reference solution, instead of the true analytical one, for the global and local estimation of the solution and residual errors. Several types of a posteriori error estimators are described. A variety of tests confirm the high quality of a posteriori error estimation based on the multipoint MFDM.
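
    The idea of replacing the unknown analytical solution with an improved reference solution for a posteriori error estimation can be illustrated on a simple 1D finite-difference problem; here a much finer-grid solution stands in for the multipoint MFDM reference, which is an assumption made only to keep the sketch self-contained.

```python
import numpy as np

def solve_poisson_1d(n):
    """Second-order FD solution of -u'' = pi^2 sin(pi x), u(0) = u(1) = 0."""
    x = np.linspace(0.0, 1.0, n + 1)
    h = 1.0 / n
    f = np.pi**2 * np.sin(np.pi * x[1:-1])
    A = (np.diag(2.0 * np.ones(n - 1))
         - np.diag(np.ones(n - 2), 1)
         - np.diag(np.ones(n - 2), -1)) / h**2
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, f)
    return x, u

x_c, u_c = solve_poisson_1d(20)       # coarse solution whose error is to be estimated
x_r, u_r = solve_poisson_1d(320)      # improved "reference" solution (stand-in for multipoint MFDM)
u_ref_on_coarse = np.interp(x_c, x_r, u_r)

est_error = np.max(np.abs(u_c - u_ref_on_coarse))        # a posteriori estimate
true_error = np.max(np.abs(u_c - np.sin(np.pi * x_c)))   # error against the exact solution
print(f"estimated error = {est_error:.2e}, true error = {true_error:.2e}")
```

    Because the reference solution converges much faster than the assessed one, the estimate closely tracks the true error, which is the role the multipoint solution plays in the paper.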

  18. Writing error may be a predictive sign for impending brain atrophy progression in amyotrophic lateral sclerosis: a preliminary study using X-ray computed tomography.

    Science.gov (United States)

    Ichikawa, Hiroo; Ohno, Hideki; Murakami, Hidetomo; Ohnaka, Yohei; Kawamura, Mitsuru

    2011-01-01

    To investigate whether writing errors are predictive of longitudinal brain atrophy progression in patients with amyotrophic lateral sclerosis (ALS), the frequency of writing errors in 6 ALS patients without dementia was compared with longitudinal changes in the lateral ventricular areas of the bilateral anterior and inferior horns on X-ray computed tomography scans. The increase in area per month for the anterior and inferior horns was used as a measure of longitudinal brain atrophy progression and was calculated as: (area on the follow-up scan - area on the initial scan)/scan interval (months). The longitudinal rate of increase in the area of the anterior horns showed significant associations with the rates of total writing errors (r = 0.886, p = 0.0152), kana errors (r = 0.887, p = 0.0148) and kana omissions (r = 0.856, p = 0.0268), whereas that for the inferior horns showed no significant association with any writing errors. The increased area of the anterior horns indicates frontal-lobe atrophy, and writing errors may be a predictive sign of impending brain atrophy progression in the frontal lobes, which reflects the development of anterior-type dementia. Copyright © 2011 S. Karger AG, Basel.
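
    The progression measure and its association with writing-error rates reduce to a rate calculation and a Pearson correlation; the sketch below uses invented numbers, not the study's data.

```python
import numpy as np
from scipy import stats

# Illustrative data for 6 patients: anterior-horn areas (mm^2) and scan interval (months).
initial_area   = np.array([310.0, 295.0, 330.0, 305.0, 320.0, 300.0])
followup_area  = np.array([350.0, 310.0, 365.0, 318.0, 352.0, 309.0])
interval_month = np.array([ 10.0,   8.0,  12.0,   9.0,  11.0,   7.0])

# Rate of increase in area per month (atrophy progression measure).
progression = (followup_area - initial_area) / interval_month

# Writing-error rates for the same patients (illustrative).
writing_error_rate = np.array([0.12, 0.05, 0.10, 0.04, 0.11, 0.03])

r, p = stats.pearsonr(progression, writing_error_rate)
print(f"r = {r:.3f}, p = {p:.4f}")
```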

  19. Analysis of the "naming game" with learning errors in communications.

    Science.gov (United States)

    Lou, Yang; Chen, Guanrong

    2015-07-16

    The naming game simulates the process of naming an object by a population of agents organized in a certain communication network. By pair-wise iterative interactions, the population reaches consensus asymptotically. We study the naming game with communication errors during pair-wise conversations, with error rates drawn from a uniform probability distribution. First, a model of the naming game with learning errors in communications (NGLE) is proposed. Then, a strategy for agents to prevent learning errors is suggested. To that end, three typical topologies of communication networks, namely random-graph, small-world and scale-free networks, are employed to investigate the effects of various learning errors. Simulation results on these models show that 1) learning errors slightly affect the convergence speed but distinctively increase the memory requirement of each agent during lexicon propagation; 2) the maximum number of different words held by the population increases linearly as the error rate increases; 3) without applying any strategy to eliminate learning errors, there is a threshold of the learning error rate beyond which convergence is impaired. The new findings may help to better understand the role of learning errors in the naming game as well as in human language development from a network science perspective.
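
    A minimal sketch of one way to simulate a naming game with a learning-error probability is shown below; it uses a complete graph and models an error as the listener storing a novel corrupted word, which is a simplification of the NGLE model rather than a reproduction of it.

```python
import random

def naming_game_with_errors(n_agents=50, n_steps=20000, error_rate=0.05, seed=0):
    """Minimal naming game on a complete graph with a learning-error probability."""
    rng = random.Random(seed)
    lexicon = [set() for _ in range(n_agents)]   # each agent's word inventory
    next_word = 0                                # counter used to mint new words

    for _ in range(n_steps):
        speaker, listener = rng.sample(range(n_agents), 2)
        if not lexicon[speaker]:                 # speaker invents a word if it has none
            lexicon[speaker].add(next_word)
            next_word += 1
        word = rng.choice(tuple(lexicon[speaker]))

        # Learning error: with some probability the listener mishears the word
        # and stores a corrupted (brand-new) word instead.
        if rng.random() < error_rate:
            heard = next_word
            next_word += 1
        else:
            heard = word

        if heard in lexicon[listener]:           # success: each agent collapses its inventory
            lexicon[speaker] = {word}
            lexicon[listener] = {heard}
        else:                                    # failure: the listener adds the heard word
            lexicon[listener].add(heard)

    total_words = sum(len(lex) for lex in lexicon)   # memory load across the population
    distinct_words = len(set().union(*lexicon))
    return total_words, distinct_words

print(naming_game_with_errors())
```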

  20. Error Analysis of Ia Supernova and Query on Cosmic Dark Energy

    Indian Academy of Sciences (India)

    2016-01-27

    Some serious faults have been found in the error analysis of SNIa observations. Redoing the same error analysis of SNIa following our approach, it is found that the average total observational error of SNIa is clearly greater than 0.55, so it cannot be decided whether the expansion of the Universe is accelerating or not.

  1. An error taxonomy system for analysis of haemodialysis incidents.

    Science.gov (United States)

    Gu, Xiuzhu; Itoh, Kenji; Suzuki, Satoshi

    2014-12-01

    This paper describes the development of a haemodialysis error taxonomy system for analysing incidents and predicting the safety status of a dialysis organisation. The error taxonomy system was developed by adapting to haemodialysis situations an error taxonomy system that assumed no specific specialty. It was applied to 1,909 incident reports collected from two dialysis facilities in Japan. Over 70% of haemodialysis incidents were reported as problems or complications related to the dialyser, circuit, medication and setting of dialysis conditions. Approximately 70% of errors took place immediately before or after the four hours of haemodialysis therapy. The error types most frequently made in the dialysis unit were omission and qualitative errors. Failures or complications classified under staff human factors, communication, task and organisational factors were found in most dialysis incidents. Devices/equipment/materials, medicines and clinical documents were most likely to be involved in errors. Haemodialysis nurses were involved in more incidents related to medicines and documents, whereas dialysis technologists made more errors with devices/equipment/materials. This error taxonomy system is able not only to investigate incidents and adverse events occurring in the dialysis setting but also to estimate the safety-related status of an organisation, such as its reporting culture. © 2014 European Dialysis and Transplant Nurses Association/European Renal Care Association.

  2. THE PRACTICAL ANALYSIS OF FINITE ELEMENTS METHOD ERRORS

    Directory of Open Access Journals (Sweden)

    Natalia Bakhova

    2011-03-01

    Full Text Available Abstract: The most important practical questions of obtaining reliable estimates of finite element method errors are considered. Rules for defining the necessary calculation accuracy are developed. Methods and approaches are proposed that allow the best final results to be obtained with an economical expenditure of computational work. Keywords: error, prescribed accuracy, finite element method, Lagrangian and Hermitian elements.

  3. Error Analysis for Interferometric SAR Measurements of Ice Sheet Flow

    DEFF Research Database (Denmark)

    Mohr, Johan Jacob; Madsen, Søren Nørvang

    1999-01-01

    and slope errors in conjunction with a surface parallel flow assumption. The most surprising result is that assuming a stationary flow the east component of the three-dimensional flow derived from ascending and descending orbit data is independent of slope errors and of the vertical flow....

  4. Factor Rotation and Standard Errors in Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.

    2015-01-01

    In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…

  5. Evaluation and Error Analysis for a Solar Thermal Receiver

    International Nuclear Information System (INIS)

    Pfander, M.

    2001-01-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After the origin of the errors is identified, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module as a function of the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs

  6. Evaluation and Error Analysis for a Solar thermal Receiver

    Energy Technology Data Exchange (ETDEWEB)

    Pfander, M.

    2001-07-01

    In the following study a complete balance over the REFOS receiver module, mounted on the tower power plant CESA-1 at the Plataforma Solar de Almeria (PSA), is carried out. Additionally, an error inspection of the various measurement techniques used in the REFOS project is made. In particular, the flux measurement system Prohermes, which is used to determine the total entry power of the receiver module and is known as a major error source, is analysed in detail. Simulations and experiments on the particular instruments are used to determine and quantify possible error sources. After the origin of the errors is identified, they are reduced and included in the error calculation. The ultimate result is presented as an overall efficiency of the receiver module as a function of the flux density at the receiver module's entry plane and the receiver operating temperature. (Author) 26 refs.

  7. Latent human error analysis and efficient improvement strategies by fuzzy TOPSIS in aviation maintenance tasks.

    Science.gov (United States)

    Chiu, Ming-Chuan; Hsieh, Min-Chih

    2016-05-01

    The purposes of this study were to develop a latent human error analysis process, to explore the factors of latent human error in aviation maintenance tasks, and to provide an efficient improvement strategy for addressing those errors. First, we used HFACS and RCA to define the error factors related to aviation maintenance tasks. Fuzzy TOPSIS with four criteria was applied to evaluate the error factors. Results show that 1) adverse physiological states, 2) physical/mental limitations, and 3) coordination, communication, and planning are the factors related to airline maintenance tasks that could be addressed easily and efficiently. This research establishes a new analytic process for investigating latent human error and provides a strategy for analyzing human error using fuzzy TOPSIS. Our analysis process addresses shortcomings in existing methodologies by incorporating improvement efficiency, and it enhances the depth and breadth of human error analysis methodology. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.
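
    The ranking step behind these results can be illustrated with a crisp TOPSIS closeness coefficient (the study uses the fuzzy variant); the scores, weights and benefit/cost labels below are illustrative only.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_criteria):
    """Rank alternatives by relative closeness to the ideal solution (crisp TOPSIS)."""
    X = np.asarray(decision_matrix, dtype=float)
    norm = X / np.linalg.norm(X, axis=0)            # vector-normalise each criterion
    V = norm * weights                              # weighted normalised matrix

    ideal = np.where(benefit_criteria, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit_criteria, V.min(axis=0), V.max(axis=0))

    d_plus  = np.linalg.norm(V - ideal, axis=1)     # distance to the ideal solution
    d_minus = np.linalg.norm(V - anti, axis=1)      # distance to the anti-ideal solution
    return d_minus / (d_plus + d_minus)             # closeness coefficient in [0, 1]

# Three hypothetical error factors scored on four criteria.
scores = [[7, 5, 8, 6],
          [4, 6, 5, 7],
          [8, 7, 6, 5]]
weights = np.array([0.3, 0.2, 0.3, 0.2])
benefit = np.array([True, True, False, True])       # the third criterion is a cost

closeness = topsis(scores, weights, benefit)
print(np.argsort(-closeness), closeness)            # ranking and closeness values
```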

  8. Analysis of Daily Setup Variation With Tomotherapy Megavoltage Computed Tomography

    International Nuclear Information System (INIS)

    Zhou Jining; Uhl, Barry; Dewit, Kelly; Young, Mark; Taylor, Brian; Fei Dingyu; Lo, Y-C

    2010-01-01

    The purpose of this study was to evaluate different setup uncertainties for various anatomic sites with TomoTherapy® pretreatment megavoltage computed tomography (MVCT) and to provide optimal margin guidelines for these anatomic sites. Ninety-two patients with tumors in head and neck (HN), brain, lung, abdominal, or prostate regions were included in the study. MVCT was used to verify patient position and tumor target localization before each treatment. With the anatomy registration tool, MVCT provided real-time tumor shift coordinates relative to the positions where the simulation CT was performed. Thermoplastic facemasks were used for HN and brain treatments. Vac-Lok™ cushions were used to immobilize the lower extremities up to the thighs for prostate patients. No respiration suppression was administered for lung and abdomen patients. The interfractional setup variations were recorded and corrected before treatment. The mean interfractional setup error was the smallest for HN among the 5 sites analyzed. The average 3D displacement in lateral, longitudinal, and vertical directions for the 5 sites ranged from 2.2-7.7 mm for HN and lung, respectively. The largest movement in the lung was 2.0 cm in the longitudinal direction, with a mean error of 6.0 mm and standard deviation of 4.8 mm. The mean interfractional rotation variation was small and ranged from 0.2-0.5°, with the standard deviation ranging from 0.7-0.9°. Internal organ displacement was also investigated with a posttreatment MVCT scan for HN, lung, abdomen, and prostate patients. The maximum 3D intrafractional displacement across all sites was less than 4.5 mm. The interfractional systematic errors and random errors were analyzed and the suggested margins for HN, brain, prostate, abdomen, and lung in the lateral, longitudinal, and vertical directions were between 4.2 and 8.2 mm, 5.0 mm and 12.0 mm, and 1.5 mm and 6.8 mm, respectively. We suggest that TomoTherapy® pretreatment

  9. Analysis of daily setup variation with tomotherapy megavoltage computed tomography.

    Science.gov (United States)

    Zhou, Jining; Uhl, Barry; Dewit, Kelly; Young, Mark; Taylor, Brian; Fei, Ding-Yu; Lo, Yeh-Chi

    2010-01-01

    The purpose of this study was to evaluate different setup uncertainties for various anatomic sites with TomoTherapy pretreatment megavoltage computed tomography (MVCT) and to provide optimal margin guidelines for these anatomic sites. Ninety-two patients with tumors in head and neck (HN), brain, lung, abdominal, or prostate regions were included in the study. MVCT was used to verify patient position and tumor target localization before each treatment. With the anatomy registration tool, MVCT provided real-time tumor shift coordinates relative to the positions where the simulation CT was performed. Thermoplastic facemasks were used for HN and brain treatments. Vac-Lok cushions were used to immobilize the lower extremities up to the thighs for prostate patients. No respiration suppression was administered for lung and abdomen patients. The interfractional setup variations were recorded and corrected before treatment. The mean interfractional setup error was the smallest for HN among the 5 sites analyzed. The average 3D displacement in lateral, longitudinal, and vertical directions for the 5 sites ranged from 2.2-7.7 mm for HN and lung, respectively. The largest movement in the lung was 2.0 cm in the longitudinal direction, with a mean error of 6.0 mm and standard deviation of 4.8 mm. The mean interfractional rotation variation was small and ranged from 0.2-0.5 degrees, with the standard deviation ranging from 0.7-0.9 degrees. Internal organ displacement was also investigated with a posttreatment MVCT scan for HN, lung, abdomen, and prostate patients. The maximum 3D intrafractional displacement across all sites was less than 4.5 mm. The interfractional systematic errors and random errors were analyzed and the suggested margins for HN, brain, prostate, abdomen, and lung in the lateral, longitudinal, and vertical directions were between 4.2 and 8.2 mm, 5.0 mm and 12.0 mm, and 1.5 mm and 6.8 mm, respectively. We suggest that TomoTherapy pretreatment MVCT can be used to

  10. Heidelberg Retina Tomography analysis in optic disks with anatomic particularities.

    Science.gov (United States)

    Dascalu, A M; Alexandrescu, C; Pascu, R; Ilinca, R; Popescu, V; Ciuluvica, R; Voinea, L; Celea, C

    2010-01-01

    Due to its objectivity, reproducibility and predictive value, confirmed by many large-scale statistical clinical studies, Heidelberg Retina Tomography has become one of the most widely used computerized image analysis techniques for the optic disc in glaucoma. It has been noted, however, that the diagnostic value of the Moorfields Regression Analysis and the Glaucoma Probability Score decreases when analyzing optic discs of extreme size. The number of false positive results increases in cases of megalopapilla, and the number of false negative results increases in cases of small optic discs. The present paper is a review of the aspects one should take into account when analyzing an HRT result for an optic disc with anatomic particularities.

  11. Micro Computer Tomography for medical device and pharmaceutical packaging analysis.

    Science.gov (United States)

    Hindelang, Florine; Zurbach, Raphael; Roggo, Yves

    2015-04-10

    Biomedical device and pharmaceutical product manufacturing are long processes facing global competition. As technology evolves over time, the level of quality, safety and reliability increases simultaneously. Micro Computer Tomography (Micro CT) is a tool allowing a deep investigation of products: it can contribute to quality improvement. This article presents the numerous applications of Micro CT for medical device and pharmaceutical packaging analysis. The samples investigated confirmed the suitability of CT for verification of integrity, measurements and defect detection in a non-destructive manner. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Different setup errors assessed by weekly cone-beam computed tomography on different registration in nasopharyngeal carcinoma treated with intensity-modulated radiation therapy

    Directory of Open Access Journals (Sweden)

    Su JQ

    2015-09-01

    Full Text Available The study aimed to investigate the difference in setup errors for different registration sites in the treatment of nasopharyngeal carcinoma, based on weekly cone-beam computed tomography (CBCT). Thirty nasopharyngeal cancer patients scheduled to undergo intensity-modulated radiotherapy (IMRT) were prospectively enrolled in the study. Each patient had a weekly CBCT before radiation therapy; over the entire study, 201 CBCT scans were obtained. The scans were registered to the planning CT to determine the difference in setup errors between registration sites, represented by bony landmarks: the nasal septum and pterygoid process represent the head, cervical vertebrae 1-3 represent the upper neck, and cervical vertebrae 4-6 represent the lower neck. Patient positioning errors were recorded in the right-left (RL), superior-inferior (SI), and anterior-posterior (AP) directions over the course of radiotherapy. Planning target volume margins were calculated from the systematic and random errors. The study concludes that there are setup errors in the RL, SI, and AP directions in nasopharyngeal carcinoma patients undergoing IMRT, and that the head and neck setup errors differ with statistical significance, the setup error of the neck being greater than that of the head during the course of radiotherapy. In our institution, we recommend a planning target volume margin of 3.0 mm in the RL direction, 1.3 mm in the SI direction, and 2.6 mm in the AP direction for nasopharyngeal cancer patients undergoing IMRT with weekly CBCT scans. Keywords: cone-beam computed tomography, setup error, PTV

  13. Spectrogram Image Analysis of Error Signals for Minimizing Impulse Noise

    Directory of Open Access Journals (Sweden)

    Jeakwan Kim

    2016-01-01

    Full Text Available This paper presents a theoretical and experimental study on the spectrogram image analysis of error signals for minimizing impulse input noises in the active suppression of noise. Impulse inputs with specific wave patterns are applied as primary noises to a one-dimensional duct with a length of 1800 mm. The convergence speed of the adaptive feedforward algorithm, based on the least mean square approach, was controlled by a normalized step size incorporated into the algorithm. Variations of the step size govern the stability as well as the convergence speed; for this reason, a normalized step size is introduced as a new method for the control of impulse noise. Spectrogram images, which indicate the degree of attenuation of the impulse input noises, are used to represent the attenuation achieved with the new method. The algorithm is extensively investigated in both simulation and real-time control experiments. It is demonstrated that the suggested algorithm worked with good stability and performance against impulse noises. The results of this study can be used for practical active noise control systems.
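
    The normalized step size described here corresponds to the normalized LMS (NLMS) update; the sketch below is a generic NLMS system-identification example, not the paper's full active-noise-control setup with a secondary path.

```python
import numpy as np

def nlms(x, d, n_taps=16, mu=0.5, eps=1e-8):
    """Normalized LMS: w <- w + mu * e * x_vec / (||x_vec||^2 + eps)."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps, len(x)):
        x_vec = x[n - n_taps:n][::-1]               # most recent samples first
        y = w @ x_vec                               # adaptive filter output
        e[n] = d[n] - y                             # error signal
        w += mu * e[n] * x_vec / (x_vec @ x_vec + eps)   # normalized step size update
    return w, e

rng = np.random.default_rng(0)
x = rng.standard_normal(5000)                       # reference (primary noise) signal
h_true = np.array([0.6, -0.3, 0.1, 0.05])           # unknown path to be identified
d = np.convolve(x, h_true, mode="full")[:len(x)]    # desired signal at the error sensor

w, e = nlms(x, d)
print("residual error power:", np.mean(e[-500:] ** 2))
```

    Normalising by the instantaneous input power keeps the effective step size bounded, which is what stabilises the adaptation against high-amplitude impulse inputs.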

  14. Human Error Assessment in Minefield Cleaning Operation Using Human Event Analysis

    Directory of Open Access Journals (Sweden)

    Mohammad Hajiakbari

    2015-12-01

    Full Text Available Background & objective: Human error is one of the main causes of accidents. Due to the unreliability of the human element and the high-risk nature of demining operations, this study aimed to assess and manage human errors likely to occur in such operations. Methods: This study was performed at a demining site in war zones located in the west of Iran. After acquiring an initial familiarity with the operations, methods, and tools of clearing minefields, job tasks related to clearing landmines were specified. Next, these tasks were studied using HTA, and the related possible errors were assessed using ATHEANA. Results: The de-mining task was composed of four main operations, including primary detection, technical identification, investigation, and neutralization. Four main causes of accidents in such operations were found: stepping on mines, leaving mines with no action taken, errors in the neutralization operation, and environmental explosion. The probability of human error in mine clearance operations was calculated as 0.010. Conclusion: The main causes of human error in de-mining operations can be attributed to various factors such as poor weather and operating conditions like outdoor work, inappropriate personal protective equipment, personality characteristics, insufficient accuracy in the work, and insufficient available time. To reduce the probability of human error in de-mining operations, the aforementioned factors should be managed properly.

  15. Effective training based on the cause analysis of operation errors

    International Nuclear Information System (INIS)

    Fujita, Eimitsu; Noji, Kunio; Kobayashi, Akira.

    1991-01-01

    The authors have investigated typical error types through their training experience and analyzed their causes. The error types observed in simulator training are: (1) lack of knowledge or lack of the ability to apply it to actual operation; (2) defective mastery of skill-based operation; (3) rote operation or stereotyped manner; (4) mind-setting or lack of redundant verification; (5) lack of teamwork; (6) misjudgement of the overall plant conditions by the operating chief, who directs a reactor operator and a turbine operator in the training. The paper describes training methods used in Japan by BWR utilities to overcome these error types

  16. Accidental iatrogenic intoxications by cytotoxic drugs: error analysis and practical preventive strategies.

    Science.gov (United States)

    Zernikow, B; Michel, E; Fleischhack, G; Bode, U

    1999-07-01

    Drug errors are quite common. Many of them become harmful only if they remain undetected, ultimately resulting in injury to the patient. Errors with cytotoxic drugs are especially dangerous because of the highly toxic potential of the drugs involved. For medico-legal reasons, only 1 case of accidental iatrogenic intoxication by cytotoxic drugs tends to be investigated at a time, because the focus is placed on individual responsibility rather than on system errors. The aim of our study was to investigate whether accidental iatrogenic intoxications by cytotoxic drugs are faults of either the individual or the system. The statistical analysis of distribution and quality of such errors, and the in-depth analysis of contributing factors delivered a rational basis for the development of practical preventive strategies. A total of 134 cases of accidental iatrogenic intoxication by a cytotoxic drug (from literature reports since 1966 identified by an electronic literature survey, as well as our own unpublished cases) underwent a systematic error analysis based on a 2-dimensional model of error generation. Incidents were classified by error characteristics and point in time of occurrence, and their distribution was statistically evaluated. The theories of error research, informatics, sensory physiology, cognitive psychology, occupational medicine and management have helped to classify and depict potential sources of error as well as reveal clues for error prevention. Monocausal errors were the exception. In the majority of cases, a confluence of unfavourable circumstances either brought about the error, or prevented its timely interception. Most cases with a fatal outcome involved erroneous drug administration. Object-inherent factors were the predominant causes. A lack of expert as well as general knowledge was a contributing element. In error detection and prevention of error sequelae, supervision and back-checking are essential. Improvement of both the individual

  17. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Taynna Vernalha Rocha [Faculdades Pequeno Principe (FPP), Curitiba, PR (Brazil); Cordova Junior, Arno Lotar; Almeida, Cristiane Maria; Piedade, Pedro Argolo; Silva, Cintia Mara da, E-mail: taynnavra@gmail.com [Centro de Radioterapia Sao Sebastiao, Florianopolis, SC (Brazil); Brincas, Gabriela R. Baseggio [Centro de Diagnostico Medico Imagem, Florianopolis, SC (Brazil); Marins, Priscila; Soboll, Danyel Scheidegger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2016-03-15

    Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainties being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainties being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used. (author)

  18. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Directory of Open Access Journals (Sweden)

    Taynná Vernalha Rocha Almeida

    2016-04-01

    Full Text Available Abstract Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainties being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainties being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of a significant difference between the two techniques when the ART-210 head phantom was used.

  19. Errors Analysis of Solving Linear Inequalities among the Preparatory Year Students at King Saud University

    Science.gov (United States)

    El-khateeb, Mahmoud M. A.

    2016-01-01

    This study aims to investigate the classes of errors made by the preparatory year students at King Saud University, through an analysis of student responses to the items of the study test, and to identify the varieties of common errors and the ratios of common errors that occurred in solving inequalities. In the collection of the data,…

  20. Error Analysis of Brailled Instructional Materials Produced by Public School Personnel in Texas

    Science.gov (United States)

    Herzberg, Tina

    2010-01-01

    In this study, a detailed error analysis was performed to determine if patterns of errors existed in braille transcriptions. The most frequently occurring errors were the insertion of letters or words that were not contained in the original print material; the incorrect usage of the emphasis indicator; and the incorrect formatting of titles,…

  1. US-LHC IR magnet error analysis and compensation

    International Nuclear Information System (INIS)

    Wei, J.; Ptitsin, V.; Pilat, F.; Tepikian, S.; Gelfand, N.; Wan, W.; Holt, J.

    1998-01-01

    This paper studies the impact of the insertion-region (IR) magnet field errors on LHC collision performance. Compensation schemes including magnet orientation optimization, body-end compensation, tuning shims, and local nonlinear correction are shown to be highly effective

  2. Applying hierarchical task analysis to medication administration errors

    OpenAIRE

    Lane, R; Stanton, NA; Harrison, DJ

    2006-01-01

    Medication use in hospitals is a complex process and is dependent on the successful interaction of health professionals functioning within different disciplines. Errors can occur at any one of the five main stages of prescribing, documenting, dispensing or preparation, administering and monitoring. The responsibility for the error is often placed on the nurse, as she or he is the last person in the drug administration chain whilst more pressing underlying causal factors remain unresolved. ...

  3. Analysis of Random Errors in Horizontal Sextant Angles

    Science.gov (United States)

    1980-09-01

    sea horizon, bringing the direct and reflected images into coincidence and reading the micrometer and vernier. This is repeated several times... differences due to the direction of rotation of the micrometer drum were examined as well as the variability in the determination of sextant index error. ... minutes of arc respectively. In addition, systematic errors resulting from angular differences due to the direction of rotation of the micrometer drum

  4. WORKING MEMORY STRUCTURE REVEALED IN ANALYSIS OF RECALL ERRORS

    Directory of Open Access Journals (Sweden)

    Regina V Ershova

    2017-12-01

    Full Text Available We analyzed working memory errors stemming from 193 Russian college students taking the Tarnow Unchunkable Test, which utilizes double-digit items on a visual display. In three-item trials with at most one error per trial, single incorrect tens and ones digits ("singlets") were overrepresented and made up the majority of errors, indicating a base-10 organization. These errors indicate that there are separate memory maps for each position and that there are pointers that can move primarily within these maps. Several pointers make up a pointer collection; the number of possible pointer collections is the working memory capacity limit. A model for self-organizing maps is constructed in which the organization is created by turning common pointer collections into maps, thereby replacing a pointer collection with a single pointer. The factors 5 and 11 were underrepresented in the errors, presumably because base-10 properties beyond positional order were used for error correction, perhaps reflecting the existence of additional maps of integers divisible by 5 and integers divisible by 11.

  5. Positron emission tomography: Physics, instrumentation, and image analysis

    International Nuclear Information System (INIS)

    Porenta, G.

    1994-01-01

    Positron emission tomography (PET) is a noninvasive diagnostic technique that permits reconstruction of cross-sectional images of the human body which depict the biodistribution of PET tracer substances. A large variety of physiological PET tracers, mostly based on isotopes of carbon, nitrogen, oxygen, and fluorine, is available and allows the in vivo investigation of organ perfusion, metabolic pathways and biomolecular processes in normal and diseased states. PET cameras utilize the physical characteristics of positron decay to derive quantitative measurements of tracer concentrations, a capability that has so far been elusive for conventional SPECT (single photon emission computed tomography) imaging techniques. Due to the short half-lives of most PET isotopes, an on-site cyclotron and a radiochemistry unit are necessary to provide an adequate supply of PET tracers. While operating a PET center was in the past a complex procedure restricted to a few academic centers with ample resources, PET technology has rapidly advanced in recent years and has entered the commercial nuclear medicine market. To date, the availability of compact cyclotrons with remote computer control, automated synthesis units for PET radiochemistry, high-performance PET cameras, and user-friendly analysis workstations permits installation of a clinical PET center within most nuclear medicine facilities. This review provides simple descriptions of important aspects concerning physics, instrumentation, and image analysis in PET imaging which should be understood by medical personnel involved in the clinical operation of a PET imaging center. (author)

  6. Analysis and research on curved surface's prototyping error based on FDM process

    Science.gov (United States)

    Gong, Y. D.; Zhang, Y. C.; Yang, T. B.; Wang, W. S.

    2008-12-01

    Methods for the analysis of curved-surface prototyping error in the FDM (Fused Deposition Modeling) process are introduced in this paper. The experimental results on curved-surface prototyping error are then analyzed, and the integrity of the point cloud information and the fitting method for curved-surface prototyping are discussed, as well as the influence of different software on the prototyping error. Finally, qualitative and quantitative conclusions on curved-surface prototyping error are drawn.

  7. Error analysis of the freshmen Criminology students’ grammar in the written English

    Directory of Open Access Journals (Sweden)

    Maico Demi Banate Aperocho

    2017-12-01

    Full Text Available This study identifies the various syntactical errors of the fifty (50) freshmen B.S. Criminology students of the University of Mindanao in Davao City. Specifically, this study aims to answer the following: (1) What are the common errors present in the argumentative essays of the respondents? (2) What are the reasons for the existence of these errors? This study is descriptive-qualitative. It also uses error analysis to point out the syntactical errors present in the compositions of the participants. The fifty essays are subjected to error analysis. Errors are classified based on Chanquoy's Classification of Writing Errors. Furthermore, Hourani's Common Reasons of Grammatical Errors Checklist was also used to determine the common reasons for the identified syntactical errors. To create a meaningful interpretation of the data and to solicit further ideas from the participants, a focus group discussion was also done. Findings show that the students' most common errors are in the grammatical aspect. In the grammatical aspect, students most frequently committed errors in the verb aspect (tense, subject agreement, and auxiliary and linker choice) compared to the spelling and punctuation aspects. Moreover, there are three topmost reasons for committing errors in a paragraph: mother tongue interference, incomprehensibility of the grammar rules, and incomprehensibility of the writing mechanics. Despite the difficulty in learning English as a second language, students are still very motivated to master the concepts and applications of the language.

  8. Error analysis of 3D-PTV through unsteady interfaces

    Science.gov (United States)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned are distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the

  9. SU-F-T-320: Assessing Placement Error of Optically Stimulated Luminescent in Vivo Dosimeters Using Cone-Beam Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Riegel, A; Klein, E [Northwell Health, Lake Success, NY (United States); Tariq, M; Gomez, C [Hofstra University, Hempstead, NY (United States)

    2016-06-15

    Purpose: Optically stimulated luminescent dosimeters (OSLDs) are increasingly utilized for in vivo dosimetry of complex radiation delivery techniques such as intensity-modulated radiation therapy (IMRT) and volumetric-modulated arc therapy (VMAT). Evaluation of clinical uncertainties such as placement error has not been performed. This work retrospectively investigates the magnitude of placement error using cone-beam computed tomography (CBCT) and its effect on measured/planned dose agreement. Methods: Each OSLD was placed at a physicist-designated location on the patient surface on a weekly basis. The location was given in terms of a gantry angle and a two-dimensional offset from the central axis. The OSLDs were placed before daily image guidance. We identified 77 CBCTs from 25 head-and-neck patients who received IMRT or VMAT, where OSLDs were visible on the CT image. Grossly misplaced OSLDs were excluded (e.g. wrong laterality). CBCTs were registered with the treatment plan and the distance between the planned and actual OSLD location was calculated in two dimensions in the beam's eye view. Distances were correlated with measured/planned dose percent differences. Results: OSLDs were grossly misplaced for 5 CBCTs (6.4%). For the remaining 72 CBCTs, the average placement error was 7.0 ± 6.0 mm. These errors were not correlated with measured/planned dose percent differences (R² = 0.0153). Generalizing the dosimetric effect of placement errors may therefore be unreliable. Conclusion: Correct placement of OSLDs for IMRT and VMAT treatments is critical to accurate and precise in vivo dosimetry. Small placement errors could produce large disagreement between measured and planned dose. Further work includes expansion to other treatment sites, examination of planned dose at the actual point of OSLD placement, and the influence of image-guided shifts on measured/planned dose agreement.
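
    The reported correlation check corresponds to a simple Pearson/R² analysis between placement offsets and dose differences; the numbers below are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical OSLD placement offsets (mm, beam's eye view) and measured/planned dose
# percent differences; illustrative values only.
placement_error_mm = np.array([2.1, 5.4, 7.8, 3.3, 12.0, 6.5, 9.1, 4.2])
dose_diff_percent  = np.array([1.5, -2.3, 0.8, 3.1, -1.2, 2.0, -0.5, 1.1])

r = np.corrcoef(placement_error_mm, dose_diff_percent)[0, 1]
print(f"Pearson r = {r:.3f}, R^2 = {r**2:.4f}")
```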

  10. Error Consistency in Acquired Apraxia of Speech With Aphasia: Effects of the Analysis Unit.

    Science.gov (United States)

    Haley, Katarina L; Cunningham, Kevin T; Eaton, Catherine Torrington; Jacks, Adam

    2018-02-15

    Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain discrepancies in error consistency results based on the unit of analysis (segment, syllable, or word) to help determine which diagnostic recommendation is most appropriate. We analyzed speech samples from 14 left-hemisphere stroke survivors with clinical diagnoses of AOS and aphasia. Each participant produced 3 multisyllabic words 5 times in succession. Broad phonetic transcriptions of these productions were coded for consistency of error location and type using the word and its constituent syllables and sound segments as units of analysis. Consistency of error type varied systematically with the unit of analysis, showing progressively greater consistency as the analysis unit changed from the word to the syllable and then to the sound segment. Consistency of error location varied considerably across participants and correlated positively with error frequency. Low to moderate consistency of error type at the word level confirms original diagnostic accounts of speech output and sound errors in AOS as variable in form. Moderate to high error type consistency at the syllable and sound levels indicate that phonetic error patterns are present. The results are complementary and logically compatible with each other and with the literature.

  11. An advanced human reliability analysis methodology: analysis of cognitive errors focused on

    International Nuclear Information System (INIS)

    Kim, J. H.; Jeong, W. D.

    2001-01-01

    The conventional Human Reliability Analysis (HRA) methods such as THERP/ASEP, HCR and SLIM have been criticised for their deficiency in analysing the cognitive errors which occur during the operator's decision-making process. In order to overcome the limitations of the conventional methods, an advanced HRA method, the so-called 2nd-generation HRA method, covering both qualitative analysis and quantitative assessment of cognitive errors, is being developed based on the state-of-the-art theory of cognitive systems engineering and error psychology. The method was developed on the basis of a human decision-making model and the relation between the cognitive functions and the performance influencing factors. The application of the proposed method to two emergency operation tasks is presented

  12. Semiparametric analysis of linear transformation models with covariate measurement errors.

    Science.gov (United States)

    Sinha, Samiran; Ma, Yanyuan

    2014-03-01

    We take a semiparametric approach in fitting a linear transformation model to a right censored data when predictive variables are subject to measurement errors. We construct consistent estimating equations when repeated measurements of a surrogate of the unobserved true predictor are available. The proposed approach applies under minimal assumptions on the distributions of the true covariate or the measurement errors. We derive the asymptotic properties of the estimator and illustrate the characteristics of the estimator in finite sample performance via simulation studies. We apply the method to analyze an AIDS clinical trial data set that motivated the work. © 2013, The International Biometric Society.

  13. Bayesian soft x-ray tomography and MHD mode analysis on HL-2A

    Science.gov (United States)

    Li, Dong; Liu, Yi; Svensson, J.; Liu, Y. Q.; Song, X. M.; Yu, L. M.; Mao, Rui; Fu, B. Z.; Deng, Wei; Yuan, B. S.; Ji, X. Q.; Xu, Yuan; Chen, Wei; Zhou, Yan; Yang, Q. W.; Duan, X. R.; Liu, Yong; HL-2A Team

    2016-03-01

    A Bayesian tomography method using so-called Gaussian processes (GPs) for the emission model has been applied to the soft x-ray (SXR) diagnostics on the HL-2A tokamak. To improve the accuracy of the reconstructions, the standard GP is extended to a non-stationary version so that different smoothness between the plasma centre and the edge can be taken into account in the algorithm. The uncertainty in the reconstruction arising from measurement errors and the limited measurement capability can be fully analysed through Bayesian probability theory. In this work, the SXR reconstructions obtained by this non-stationary Gaussian process tomography (NSGPT) method have been compared with the equilibrium magnetic flux surfaces, generally achieving a satisfactory agreement in terms of both shape and position. In addition, singular-value decomposition (SVD) and fast Fourier transform (FFT) techniques have been applied for the analysis of the SXR and magnetic diagnostics, in order to explore the spatial and temporal features of the saturated long-lived magnetohydrodynamic (MHD) instability induced by energetic particles during neutral beam injection (NBI) on HL-2A. The results show that this ideal internal kink instability has a dominant m/n = 1/1 mode structure along with a harmonic m/n = 2/2, which are coupled near the q = 1 surface and rotate at a frequency of 12 kHz.
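
    Gaussian process tomography amounts to a linear Gaussian inversion of the line-integrated measurements; the sketch below uses a 1D profile, a toy geometry matrix and a stationary squared-exponential kernel, whereas the paper's implementation is 2D and non-stationary.

```python
import numpy as np

def sq_exp_kernel(x1, x2, amp=1.0, length=0.15):
    """Stationary squared-exponential covariance (the paper extends this to a non-stationary form)."""
    return amp**2 * np.exp(-0.5 * (x1[:, None] - x2[None, :])**2 / length**2)

n_pix, n_chords = 50, 12
x = np.linspace(0.0, 1.0, n_pix)

# Toy geometry matrix: each chord integrates the emissivity over a band of pixels.
A = np.zeros((n_chords, n_pix))
for i in range(n_chords):
    lo = i * n_pix // n_chords
    A[i, lo:lo + n_pix // n_chords] = 1.0 / n_pix

true_emissivity = np.exp(-((x - 0.45) / 0.15)**2)        # peaked "plasma" profile
rng = np.random.default_rng(1)
noise_sd = 0.002
d = A @ true_emissivity + rng.normal(0.0, noise_sd, n_chords)

# GP posterior mean and covariance of the emissivity given line-integrated data.
K = sq_exp_kernel(x, x)
S = A @ K @ A.T + noise_sd**2 * np.eye(n_chords)
gain = K @ A.T @ np.linalg.solve(S, np.eye(n_chords))
post_mean = gain @ d
post_cov = K - gain @ A @ K

print("max reconstruction error:", np.max(np.abs(post_mean - true_emissivity)))
print("max posterior std:", np.sqrt(np.diag(post_cov)).max())
```

    The posterior covariance is what provides the uncertainty analysis mentioned in the abstract; in the non-stationary variant the kernel length scale is allowed to vary between the plasma centre and the edge.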

  14. Human error in strabismus surgery: Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    S. Schutte (Sander); J.R. Polling (Jan Roelof); F.C.T. van der Helm (Frans); H.J. Simonsz (Huib)

    2009-01-01

    Background: Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods: We identified the primary factors that influence

  15. Human error in strabismus surgery : Quantification with a sensitivity analysis

    NARCIS (Netherlands)

    Schutte, S.; Polling, J.R.; Van der Helm, F.C.T.; Simonsz, H.J.

    2008-01-01

    Background- Reoperations are frequently necessary in strabismus surgery. The goal of this study was to analyze human-error related factors that introduce variability in the results of strabismus surgery in a systematic fashion. Methods- We identified the primary factors that influence the outcome of

  16. Geometric Error Analysis in Applied Calculus Problem Solving

    Science.gov (United States)

    Usman, Ahmed Ibrahim

    2017-01-01

    The paper investigates geometric errors students made as they tried to use their basic geometric knowledge in the solution of the Applied Calculus Optimization Problem (ACOP). Inaccuracies related to the drawing of geometric diagrams (visualization skills) and those associated with the application of basic differentiation concepts into ACOP…

  17. Error analysis to improve the speech recognition accuracy on ...

    Indian Academy of Sciences (India)

    The Telugu language is one of the most widely spoken south Indian languages. In the proposed Telugu speech recognition system, errors obtained from the decoder are analysed to improve the performance of the speech recognition system. The static pronunciation dictionary plays a key role in the speech recognition accuracy.

  18. Reading and Spelling Error Analysis of Native Arabic Dyslexic Readers

    Science.gov (United States)

    Abu-rabia, Salim; Taha, Haitham

    2004-01-01

    This study was an investigation of reading and spelling errors of dyslexic Arabic readers ("n"=20) compared with two groups of normal readers: a young readers group, matched with the dyslexics by reading level ("n"=20) and an age-matched group ("n"=20). They were tested on reading and spelling of texts, isolated…

  19. Linguistic Error Analysis on Students' Thesis Proposals

    Science.gov (United States)

    Pescante-Malimas, Mary Ann; Samson, Sonrisa C.

    2017-01-01

    This study identified and analyzed the common linguistic errors encountered by Linguistics, Literature, and Advertising Arts majors in their Thesis Proposal classes in the First Semester 2016-2017. The data were the drafts of the thesis proposals of the students from the three different programs. A total of 32 manuscripts were analyzed which was…

  20. Pitch Error Analysis of Young Piano Students' Music Reading Performances

    Science.gov (United States)

    Rut Gudmundsdottir, Helga

    2010-01-01

    This study analyzed the music reading performances of 6-13-year-old piano students (N = 35) in their second year of piano study. The stimuli consisted of three piano pieces, systematically constructed to vary in terms of left-hand complexity and input simultaneity. The music reading performances were recorded digitally and a code of error analysis…

  1. Young Children's Mental Arithmetic Errors: A Working-Memory Analysis.

    Science.gov (United States)

    Brainerd, Charles J.

    1983-01-01

    Presents a stochastic model for distinguishing mental arithmetic errors according to causes of failure. A series of experiments (1) studied questions of goodness of fit and model validity among four and five year olds and (2) used the model to measure the relative contributions of developmental improvements in short-term memory and arithmetical…

  2. Oral Definitions of Newly Learned Words: An Error Analysis

    Science.gov (United States)

    Steele, Sara C.

    2012-01-01

    This study examined and compared patterns of errors in the oral definitions of newly learned words. Fifteen 9- to 11-year-old children with language learning disability (LLD) and 15 typically developing age-matched peers inferred the meanings of 20 nonsense words from four novel reading passages. After reading, children provided oral definitions…

  3. Analysis of Students' Error in Learning of Quadratic Equations

    Science.gov (United States)

    Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

    2010-01-01

    The purpose of the study was to determine the students' error in learning quadratic equation. The samples were 30 form three students from a secondary school in Jambi, Indonesia. Diagnostic test was used as the instrument of this study that included three components: factorization, completing the square and quadratic formula. Diagnostic interview…

  4. Experimental analysis of high-speed gamma-ray tomography performance

    Science.gov (United States)

    Maad, R.; Johansen, G. A.

    2008-08-01

    High-speed gamma-ray tomography (HSGT) based on multiple fan-beam collimated radioisotope sources has proved to be an efficient and fast method for cross-sectional imaging of the dynamics in different industrial processes. The objective of the tomography system described here is to identify the flow regime of gas/liquid pipe flows. The performance of such systems is characterized by the spatial resolution, the speed of response and the measurement resolution of the attenuation coefficient. The work presented here is an experimental analysis of how the measurement geometry and the reconstruction method affect the error of the reconstructed pixel values. These relationships are well established for medical x-ray tomography, where high-intensity x-ray tubes are used as sources. For radioisotope sources, however, the radiation intensity is limited, which causes the measurement uncertainty, i.e. the Poisson noise, to be considerably higher. In addition, the influence of scattered radiation is more severe in a multiple-source radioisotope system compared to that of x-ray systems. A computer-controlled flexible-geometry gamma-ray tomograph has been developed to acquire experimental data for different fan-beam measurement geometries, and these data have subsequently been used for image reconstruction using seven different iterative image reconstruction algorithms. The results show that the reconstruction algorithms produce cross-sectional images of different quality and that there is virtually nothing to be gained by using more than seven sources for flow regime classification of multiphase pipe flow consisting of gas, oil and water.
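
    The iterative reconstruction step can be illustrated with a minimal Kaczmarz (ART-type) update for line-integral data; this generic sketch is not one of the seven algorithms compared in the paper, and the toy geometry below uses only row and column ray sums.

```python
import numpy as np

def kaczmarz(A, b, n_sweeps=50, relax=0.5):
    """ART/Kaczmarz: project the estimate onto each measurement hyperplane in turn."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            denom = a @ a
            if denom > 0.0:
                x += relax * (b[i] - a @ x) / denom * a
    return x

# Toy two-phase cross-section on an 8x8 grid and a few horizontal/vertical "ray sums".
n = 8
phantom = np.zeros((n, n))
phantom[2:6, 3:7] = 1.0                      # region with higher attenuation contrast

rows = np.zeros((n, n * n))
cols = np.zeros((n, n * n))
for k in range(n):
    rows[k, k * n:(k + 1) * n] = 1.0         # ray along row k
    cols[k, k::n] = 1.0                      # ray along column k
A = np.vstack([rows, cols])
b = A @ phantom.ravel()                      # noiseless line integrals for the sketch

recon = kaczmarz(A, b).reshape(n, n)
print("rms pixel error:", np.sqrt(np.mean((recon - phantom) ** 2)))
```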

  5. Dynamic Error Analysis Method for Vibration Shape Reconstruction of Smart FBG Plate Structure

    Directory of Open Access Journals (Sweden)

    Hesheng Zhang

    2016-01-01

    Full Text Available Shape reconstruction of aerospace plate structures is an important issue for the safe operation of aerospace vehicles. One way to achieve such reconstruction is by constructing a smart fiber Bragg grating (FBG) plate structure with discrete distributed FBG sensor arrays and using reconstruction algorithms, in which the error analysis of the reconstruction algorithm is a key link. Considering that traditional error analysis methods can only deal with static data, a new dynamic data error analysis method is proposed based on the LMS algorithm for shape reconstruction of smart FBG plate structures. Firstly, the smart FBG structure and the orthogonal curved network based reconstruction method are introduced. Then, a dynamic error analysis model is proposed for dynamic reconstruction error analysis. Thirdly, parameter identification is done for the proposed dynamic error analysis model based on the least mean square (LMS) algorithm. Finally, an experimental verification platform is constructed and an experimental dynamic reconstruction analysis is done. Experimental results show that the dynamic characteristics of the reconstruction performance for the plate structure can be obtained accurately based on the proposed dynamic error analysis method. The proposed method can also be used for other data acquisition systems and data processing systems as a general error analysis method.

  6. The application of two recently developed human reliability techniques to cognitive error analysis

    International Nuclear Information System (INIS)

    Gall, W.

    1990-01-01

    Cognitive error can lead to catastrophic consequences for manned systems, including those whose design renders them immune to the effects of physical slips made by operators. Four such events, pressurized water and boiling water reactor accidents which occurred recently, were analysed. The analysis identifies the factors which contributed to the errors and suggests practical strategies for error recovery or prevention. Two types of analysis were conducted: an unstructured analysis based on the analyst's knowledge of psychological theory, and a structured analysis using two recently-developed human reliability analysis techniques. In general, the structured techniques required less effort to produce results and these were comparable to those of the unstructured analysis. (author)

  7. Spectral analysis of forecast error investigated with an observing system simulation experiment

    Directory of Open Access Journals (Sweden)

    Nikki C. Privé

    2015-02-01

    Full Text Available The spectra of analysis and forecast error are examined using the observing system simulation experiment framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office. A global numerical weather prediction model, the Global Earth Observing System version 5 with Gridpoint Statistical Interpolation data assimilation, is cycled for 2 months with once-daily forecasts to 336 hours to generate a Control case. Verification of forecast errors using the nature run (NR) as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self-analysis verification significantly overestimates the error growth rates of the early forecast, as well as mis-characterising the spatial scales at which the strongest growth occurs. The NR-verified error variances exhibit a complicated progression of growth, particularly for low-wavenumber errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realisation of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  8. Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.

    Science.gov (United States)

    Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro

    2010-01-01

    This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) by means of 3D graphics and numerical spherical statistics. Two packages in the R programming language were developed to produce the graphics automatically. The results indicate that the proposed method is advantageous, as it offers a more complete analysis of the positional accuracy, covering the angular error component, the uniformity of the vector distribution and error isotropy, in addition to the modular error component given by linear statistics.
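
    The angular analysis summarized above can be illustrated with a short sketch: each 3D error vector is split into a module and two angles, and the resultant length of the unit vectors indicates how concentrated (anisotropic) the error directions are. The 53 synthetic check-point errors below are made-up values; the record's R packages are not reproduced here.

```python
# Synthetic check-point errors; modular and angular components per the method above.
import numpy as np

rng = np.random.default_rng(1)
errors = rng.normal([0.002, 0.001, -0.003], 0.002, size=(53, 3))   # 53 check points, metres (assumed)

modules = np.linalg.norm(errors, axis=1)                 # modular component
unit = errors / modules[:, None]
colatitude = np.degrees(np.arccos(unit[:, 2]))           # first angle: from +Z
azimuth = np.degrees(np.arctan2(unit[:, 1], unit[:, 0])) # second angle: in the XY plane

resultant = unit.sum(axis=0)
R_bar = np.linalg.norm(resultant) / len(unit)            # 1 = all error directions aligned
mean_dir = resultant / np.linalg.norm(resultant)

print(f"CP1: module {modules[0]*1000:.2f} mm, colatitude {colatitude[0]:.1f} deg, azimuth {azimuth[0]:.1f} deg")
print(f"mean module: {modules.mean()*1000:.2f} mm, R-bar: {R_bar:.3f}, mean direction: {np.round(mean_dir, 3)}")
```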

  9. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    Science.gov (United States)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  10. Multi-wavelength analysis from tomography study on solar chromosphere

    International Nuclear Information System (INIS)

    Mumpuni, Emanuel Sungging; Herdiwijaya, Dhani; Djamal, Mitra

    2015-01-01

    The Sun, the most important star for laboratory astrophysics and the driver of all life on Earth, still holds scientific mysteries. Although the established model holds that the Sun’s energy is fueled by nuclear reactions, with transport processes carrying it to the solar surface at a temperature of around 6000 K, many aspects remain open questions, such as how the chromosphere responds to photospheric dynamics. In this preliminary work, we analyze the response of the solar chromosphere to photospheric dynamics using a tomography study implementing multi-wavelength analysis of observations obtained with the Dutch Open Telescope. Using the Hydrogen-alpha Doppler signal as the primary diagnostic tool, we investigate the interrelation between the magnetic and gas pressure dynamics that occur in the chromosphere

  11. Encapsulation method for atom probe tomography analysis of nanoparticles.

    Science.gov (United States)

    Larson, D J; Giddings, A D; Wu, Y; Verheijen, M A; Prosa, T J; Roozeboom, F; Rice, K P; Kessels, W M M; Geiser, B P; Kelly, T F

    2015-12-01

    Open-space nanomaterials are a widespread class of technologically important materials that are generally incompatible with analysis by atom probe tomography (APT) due to issues with specimen preparation, field evaporation and data reconstruction. The feasibility of encapsulating such non-compact matter in a matrix to enable APT measurements is investigated using nanoparticles as an example. Simulations of field evaporation of a void, and the resulting artifacts in ion trajectory, underpin the requirement that no voids remain after encapsulation. The approach is demonstrated by encapsulating Pt nanoparticles in a ZnO:Al matrix created by atomic layer deposition, a growth technique which offers very high surface coverage and conformality. APT measurements of the Pt nanoparticles are correlated with transmission electron microscopy images and numerical simulations in order to evaluate the accuracy of the APT reconstruction. Copyright © 2015 Elsevier B.V. All rights reserved.

  12. Study on error analysis and accuracy improvement for aspheric profile measurement

    Science.gov (United States)

    Gao, Huimin; Zhang, Xiaodong; Fang, Fengzhou

    2017-06-01

    Aspheric surfaces are important to optical systems and need high-precision surface metrology. Stylus profilometry is currently the most common approach for measuring axially symmetric elements. However, if the asphere has rotational alignment errors, the wrong cresting point will be located, leading to significantly incorrect surface errors. This paper presents simulated results for an asphere with rotational angles around the X-axis and Y-axis, and with stylus tip shifts in the X, Y and Z directions. Experimental results show that the same absolute value of rotational error around the X-axis causes the same profile errors, whereas different values of rotational error around the Y-axis cause profile errors with different tilt angles. Moreover, the greater the rotational errors, the larger the peak-to-valley value of the profile errors. To identify the rotational angles around the X-axis and Y-axis, algorithms are applied to analyze the X-axis and Y-axis rotational angles respectively. The actual profile errors obtained from multiple profile measurements around the X-axis are then calculated according to the proposed analysis flow chart; the aim of the multiple-measurement strategy is to reach the zero position of the X-axis rotational error. Experimental results prove that the proposed algorithms achieve accurate profile errors for aspheric surfaces, avoiding both X-axis and Y-axis rotational errors. Finally, a measurement strategy for aspheric surfaces is presented systematically.

  13. Slow Learner Errors Analysis in Solving Fractions Problems in Inclusive Junior High School Class

    Science.gov (United States)

    Novitasari, N.; Lukito, A.; Ekawati, R.

    2018-01-01

    A slow learner whose IQ is between 71 and 89 will have difficulties in solving mathematics problems that often lead to errors. The errors can be analyzed to determine where they occur and their type. This research is a qualitative descriptive study which aims to describe the locations, types, and causes of slow learner errors in an inclusive junior high school class in solving fraction problems. The subject of this research is one slow-learning seventh-grade student, who was selected through direct observation by the researcher and through discussion with the mathematics teacher and the special tutor who handles the slow learner students. Data collection methods used in this study are written tasks and semistructured interviews. The collected data was analyzed by Newman’s Error Analysis (NEA). Results show that there are four locations of errors, namely comprehension, transformation, process skills, and encoding errors. There are four types of errors, namely concept, principle, algorithm, and counting errors. The results of this error analysis will help teachers to identify the causes of the errors made by the slow learner.

  14. English Language Error Analysis of the Written Texts Produced by Ukrainian Learners: Data Collection

    Directory of Open Access Journals (Sweden)

    Lessia Mykolayivna Kotsyuk

    2015-12-01

    Full Text Available Recently, studies of second language acquisition have tended to focus on learners’ errors, as they help to predict the difficulties involved in acquiring a second language. Thus, teachers can be made aware of the difficult areas to be encountered by the students and pay special attention and devote emphasis to them. The research goals of the article are to define what error analysis is and how important it is in the L2 teaching process, to state the significance of corpus studies in identifying different types of errors and mistakes, and to provide the results of an error analysis of the corpus of written texts produced by Ukrainian learners. In this article, major types of errors in English as a second language for Ukrainian students are mentioned.

  15. Using Online Error Analysis Items to Support Preservice Teachers' Pedagogical Content Knowledge in Mathematics

    Science.gov (United States)

    McGuire, Patrick

    2013-01-01

    This article describes how a free, web-based intelligent tutoring system, (ASSISTment), was used to create online error analysis items for preservice elementary and secondary mathematics teachers. The online error analysis items challenged preservice teachers to analyze, diagnose, and provide targeted instructional remediation intended to help…

  16. Error analysis of pupils in calculating with fractions

    OpenAIRE

    Uranič, Petra

    2016-01-01

    In this thesis I examine the correlation between the frequency of errors that seventh grade pupils make in their calculations with fractions and their level of understanding of fractions. Fractions are a relevant and demanding theme in the mathematics curriculum. Although we use fractions on a daily basis, pupils find learning fractions to be very difficult. They generally do not struggle with the concept of fractions itself, but they frequently have problems with mathematical operations ...

  17. Analysis of Periodic Errors for Synthesized-Reference-Wave Holography

    Directory of Open Access Journals (Sweden)

    V. Schejbal

    2009-12-01

    Full Text Available Synthesized-reference-wave holographic techniques offer relatively simple and cost-effective measurement of antenna radiation characteristics and reconstruction of complex aperture fields using near-field intensity-pattern measurement. These methods make it possible to exploit the advantages of probe-compensation methods for amplitude and phase near-field measurements with planar and cylindrical scanning, including accuracy analyses. The paper analyzes periodic errors, which can arise during scanning, using both theoretical results and numerical simulations.

  18. Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting.

    Science.gov (United States)

    Torres-Sospedra, Joaquín; Moreira, Adriano

    2017-11-27

    Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been detected that the positioning error might be very large in a few cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities.
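
    The basic estimation idea stated above (position inferred from received signal strengths at known reference points) can be sketched with a nearest-neighbour fingerprinting example; the radio map, access points and observed RSS values are invented for illustration and are not from the cited study.

```python
# Minimal k-NN fingerprinting sketch with a made-up radio map of 4 reference points and 3 APs.
import numpy as np

radio_map_rss = np.array([[-60, -72, -81],    # RSS from AP1..AP3 at reference point 1
                          [-65, -70, -79],
                          [-72, -66, -74],
                          [-80, -60, -70]], dtype=float)
radio_map_xy = np.array([[0, 0], [0, 5], [5, 5], [5, 0]], dtype=float)   # reference coordinates, metres

def estimate_position(observed_rss, k=2):
    d = np.linalg.norm(radio_map_rss - observed_rss, axis=1)  # distance in signal space
    nearest = np.argsort(d)[:k]                                # k most similar fingerprints
    return radio_map_xy[nearest].mean(axis=0)                  # average of their coordinates

print(estimate_position(np.array([-63.0, -71.0, -80.0])))
```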

  19. Simultaneous control of error rates in fMRI data analysis.

    Science.gov (United States)

    Kang, Hakmook; Blume, Jeffrey; Ombao, Hernando; Badre, David

    2015-12-01

    The key idea of statistical hypothesis testing is to fix, and thereby control, the Type I error (false positive) rate across samples of any size. Multiple comparisons inflate the global (family-wise) Type I error rate and the traditional solution to maintaining control of the error rate is to increase the local (comparison-wise) Type II error (false negative) rates. However, in the analysis of human brain imaging data, the number of comparisons is so large that this solution breaks down: the local Type II error rate ends up being so large that scientifically meaningful analysis is precluded. Here we propose a novel solution to this problem: allow the Type I error rate to converge to zero along with the Type II error rate. It works because when the Type I error rate per comparison is very small, the accumulation (or global) Type I error rate is also small. This solution is achieved by employing the likelihood paradigm, which uses likelihood ratios to measure the strength of evidence on a voxel-by-voxel basis. In this paper, we provide theoretical and empirical justification for a likelihood approach to the analysis of human brain imaging data. In addition, we present extensive simulations that show the likelihood approach is viable, leading to "cleaner"-looking brain maps and operational superiority (lower average error rate). Finally, we include a case study on cognitive control related activation in the prefrontal cortex of the human brain. Copyright © 2015 Elsevier Inc. All rights reserved.
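
    The likelihood paradigm mentioned above can be illustrated, in a deliberately simplified form, by a per-voxel likelihood ratio under a known-variance Gaussian model; this is a toy stand-in rather than the authors' actual fMRI model, and the data and benchmark values are assumptions.

```python
# Toy voxel-wise likelihood ratio under a known-variance Gaussian model (illustrative only).
import numpy as np

def likelihood_ratio(samples, mu1, mu0=0.0, sigma=1.0):
    """Likelihood ratio for mean = mu1 versus mean = mu0."""
    ll1 = -0.5 * np.sum((samples - mu1) ** 2) / sigma ** 2
    ll0 = -0.5 * np.sum((samples - mu0) ** 2) / sigma ** 2
    return np.exp(ll1 - ll0)

rng = np.random.default_rng(2)
voxel = rng.normal(0.4, 1.0, size=100)      # one voxel's effect estimates across scans (synthetic)
lr = likelihood_ratio(voxel, mu1=0.5)
print(f"LR = {lr:.2f}")                      # values near 8 or 32 are commonly cited evidence benchmarks
```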

  20. Analysis of Human Error Types and Performance Shaping Factors in the Next Generation Main Control Room

    International Nuclear Information System (INIS)

    Sin, Y. C.; Jung, Y. S.; Kim, K. H.; Kim, J. H.

    2008-04-01

    The main control rooms (MCRs) of nuclear power plants have been computerized and digitalized in new and modernized plants as information and digital technologies have made great progress and matured. The work includes a survey of human factors engineering issues in advanced MCRs, carried out with both a model-based approach and a literature-survey-based approach, and an analysis of human error types and performance shaping factors for three human errors. The results of the project can be used for task analysis, evaluation of human error probabilities, and analysis of performance shaping factors in the HRA analysis

  1. Creating a Multi-material Probing Error Test for the Acceptance Testing of Dimensional Computed Tomography Systems

    DEFF Research Database (Denmark)

    Borges de Oliveira, Fabrício; Stolfi, Alessandro; Bartscher, Markus

    2017-01-01

    The requirement of quality assurance of inner and outer structures in complex multi-material assemblies is one important factor that has encouraged the use of industrial X-ray computed tomography (CT). The application of CT as a coordinate measurement system (CMS) has opened up new challenges, typically associated with performance verification, specification definition and thus standardization. Especially when performing multi-material measurements, further new, challenging effects are included in dimensional CT measurements, e.g. the influence of material A on material B in multi

  2. Automatic Estimation of Verified Floating-Point Round-Off Errors via Static Analysis

    Science.gov (United States)

    Moscato, Mariano; Titolo, Laura; Dutle, Aaron; Munoz, Cesar A.

    2017-01-01

    This paper introduces a static analysis technique for computing formally verified round-off error bounds of floating-point functional expressions. The technique is based on a denotational semantics that computes a symbolic estimation of floating-point round-off errors along with a proof certificate that ensures its correctness. The symbolic estimation can be evaluated on concrete inputs using rigorous enclosure methods to produce formally verified numerical error bounds. The proposed technique is implemented in the prototype research tool PRECiSA (Program Round-off Error Certifier via Static Analysis) and used in the verification of floating-point programs of interest to NASA.
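
    A toy version of the underlying idea (propagating a first-order worst-case rounding bound alongside each value) is sketched below; it ignores second-order terms and is in no way PRECiSA's certified, symbolic analysis.

```python
# First-order round-off bound propagation: one unit roundoff per operation plus inherited input errors.
import sys

EPS = sys.float_info.epsilon / 2   # unit roundoff for binary64

class Bounded:
    def __init__(self, value, err=0.0):
        self.value, self.err = value, err
    def __add__(self, other):
        v = self.value + other.value
        return Bounded(v, self.err + other.err + abs(v) * EPS)      # propagated + new rounding
    def __mul__(self, other):
        v = self.value * other.value
        prop = abs(self.value) * other.err + abs(other.value) * self.err
        return Bounded(v, prop + abs(v) * EPS)

x = Bounded(0.1, abs(0.1) * EPS)   # 0.1 already carries a rounding error when stored
y = Bounded(0.3, abs(0.3) * EPS)
z = x * y + x
print(f"value = {z.value!r}, round-off bound ~ {z.err:.2e}")
```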

  3. Iterative reconstruction for quantitative computed tomography analysis of emphysema: consistent results using different tube currents

    Directory of Open Access Journals (Sweden)

    Yamashiro T

    2015-02-01

    Full Text Available Tsuneo Yamashiro,1 Tetsuhiro Miyara,1 Osamu Honda,2 Noriyuki Tomiyama,2 Yoshiharu Ohno,3 Satoshi Noma,4 Sadayuki Murayama1 On behalf of the ACTIve Study Group 1Department of Radiology, Graduate School of Medical Science, University of the Ryukyus, Nishihara, Okinawa, Japan; 2Department of Radiology, Osaka University Graduate School of Medicine, Suita, Osaka, Japan; 3Department of Radiology, Kobe University Graduate School of Medicine, Kobe, Hyogo, Japan; 4Department of Radiology, Tenri Hospital, Tenri, Nara, Japan Purpose: To assess the advantages of iterative reconstruction for quantitative computed tomography (CT) analysis of pulmonary emphysema. Materials and methods: Twenty-two patients with pulmonary emphysema underwent chest CT imaging using identical scanners with three different tube currents: 240, 120, and 60 mA. Scan data were converted to CT images using Adaptive Iterative Dose Reduction using Three Dimensional Processing (AIDR3D) and a conventional filtered-back projection mode. Thus, six scans with and without AIDR3D were generated per patient. All other scanning and reconstruction settings were fixed. The percent low attenuation area (LAA%; < -950 Hounsfield units) and the lung density 15th percentile were automatically measured using a commercial workstation. Comparisons of LAA% and 15th percentile results between scans with and without using AIDR3D were made by Wilcoxon signed-rank tests. Associations between body weight and measurement errors among these scans were evaluated by Spearman rank correlation analysis. Results: Overall, scan series without AIDR3D had higher LAA% and lower 15th percentile values than those with AIDR3D at each tube current (P<0.0001). For scan series without AIDR3D, lower tube currents resulted in higher LAA% values and lower 15th percentiles. The extent of emphysema was significantly different between each pair among scans when not using AIDR3D (LAA%, P<0.0001; 15th percentile, P<0.01), but was not

  4. Error Floor Analysis of Coded Slotted ALOHA over Packet Erasure Channels

    DEFF Research Database (Denmark)

    Ivanov, Mikhail; Graell i Amat, Alexandre; Brannstrom, F.

    2014-01-01

    We present a framework for the analysis of the error floor of coded slotted ALOHA (CSA) for finite frame lengths over the packet erasure channel. The error floor is caused by stopping sets in the corresponding bipartite graph, whose enumeration is, in general, not a trivial problem. We therefore identify the most dominant stopping sets for the distributions of practical interest. The derived analytical expressions allow us to accurately predict the error floor at low to moderate channel loads and characterize the unequal error protection inherent in CSA.

  5. Human Error Assessment and Reduction Technique (HEART) and Human Factor Analysis and Classification System (HFACS)

    Science.gov (United States)

    Alexander, Tiffaney Miller

    2017-01-01

    Research results have shown that more than half of aviation, aerospace and aeronautics mishaps and incidents are attributed to human error. As part of safety within space exploration ground processing operations, the underlying contributors and causes of human error must be identified and/or classified in order to manage human error. This research provides a framework and methodology using the Human Error Assessment and Reduction Technique (HEART) and the Human Factor Analysis and Classification System (HFACS) as analysis tools to identify contributing factors, assess their impact on human error events, and predict the human error probabilities (HEPs) of future occurrences. This research methodology was applied (retrospectively) to six (6) NASA ground processing operations scenarios and thirty (30) years of Launch Vehicle related mishap data. This modifiable framework can be used and followed by other space and similarly complex operations.

  6. ERROR ANALYSIS IN THE TRAVEL WRITING MADE BY THE STUDENTS OF ENGLISH STUDY PROGRAM

    Directory of Open Access Journals (Sweden)

    Vika Agustina

    2015-05-01

    Full Text Available This study was conducted to identify the kinds of errors in the surface strategy taxonomy and to determine the dominant type of errors made by fifth-semester students of the English Department of one State University in Malang, Indonesia, in producing their travel writing. The study is a document analysis, since it analyses written materials, in this case travel writing texts. The analysis finds that the grammatical errors made by the students, based on surface strategy taxonomy theory, consist of four types: (1) omission, (2) addition, (3) misformation and (4) misordering. The most frequent misformation errors occur in the use of tense forms. The second most frequent errors are omissions of noun/verb inflection. In addition, many clauses contain an unnecessary added phrase.

  7. Computer aided stress analysis of long bones utilizing computer tomography

    International Nuclear Information System (INIS)

    Marom, S.A.

    1986-01-01

    A computer aided analysis method, utilizing computed tomography (CT), has been developed which, together with a finite element program, determines the stress-displacement pattern in a long bone section. The CT data file provides the geometry, the density and the material properties for the generated finite element model. A three-dimensional finite element model of a tibial shaft is automatically generated from the CT file by a pre-processing procedure for a finite element program. The developed pre-processor includes an edge detection algorithm which determines the boundaries of the reconstructed cross-sectional images of the scanned bone. A mesh generation procedure then automatically generates a three-dimensional mesh of a user-selected refinement. The elastic properties needed for the stress analysis are individually determined for each model element using the radiographic density (CT number) of each pixel within the elemental borders. The elastic modulus is determined from the CT radiographic density by using an empirical relationship from the literature. The generated finite element model, together with applied loads determined from existing gait analysis and initial displacements, comprises a formatted input for the SAP IV finite element program. The outputs of this program, stresses and displacements at the model elements and nodes, are sorted and displayed by a developed post-processor to provide maximum and minimum values at selected locations in the model
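
    The per-element material assignment step described above can be sketched as follows; the HU-to-density calibration and the power-law constants are placeholder assumptions, not the empirical relationship used in the cited work.

```python
# Illustrative CT-number -> density -> elastic-modulus mapping for element property assignment.
import numpy as np

def hu_to_density(hu, a=0.001, b=1.0):
    """Linear HU-to-apparent-density calibration, rho in g/cm^3 (a, b are assumed)."""
    return a * hu + b

def density_to_modulus(rho, c=3790.0, p=3.0):
    """Empirical power law E = c * rho**p in MPa (constants are assumed placeholders)."""
    return c * rho ** p

element_mean_hu = np.array([250.0, 900.0, 1400.0])   # CT number averaged over each element's pixels
rho = hu_to_density(element_mean_hu)
E = density_to_modulus(rho)
for hu, e in zip(element_mean_hu, E):
    print(f"HU {hu:6.0f} -> E ~ {e:8.0f} MPa")
```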

  8. Error Propagation Analysis in the SAE Architecture Analysis and Design Language (AADL) and the EDICT Tool Framework

    Science.gov (United States)

    LaValley, Brian W.; Little, Phillip D.; Walter, Chris J.

    2011-01-01

    This report documents the capabilities of the EDICT tools for error modeling and error propagation analysis when operating with models defined in the Architecture Analysis & Design Language (AADL). We discuss our experience using the EDICT error analysis capabilities on a model of the Scalable Processor-Independent Design for Enhanced Reliability (SPIDER) architecture that uses the Reliable Optical Bus (ROBUS). Based on these experiences we draw some initial conclusions about model based design techniques for error modeling and analysis of highly reliable computing architectures.

  9. Error-tolerant Finite State Recognition with Applications to Morphological Analysis and Spelling Correction

    OpenAIRE

    Oflazer, Kemal

    1995-01-01

    Error-tolerant recognition enables the recognition of strings that deviate mildly from any string in the regular set recognized by the underlying finite state recognizer. Such recognition has applications in error-tolerant morphological processing, spelling correction, and approximate string matching in information retrieval. After a description of the concepts and algorithms involved, we give examples from two applications: In the context of morphological analysis, error-tolerant recognition...
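
    A minimal sketch of the error-tolerant idea is shown below: an input string is accepted if it lies within a small edit distance of a recognized string. For brevity the "recognizer" here is a plain word list, whereas the cited work operates directly on a finite-state recognizer with pruning.

```python
# Bounded-edit-distance matching as a stand-in for error-tolerant finite-state recognition.
def edit_distance(a, b):
    """Standard dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def error_tolerant_recognize(word, lexicon, max_errors=1):
    """Return the recognized strings the input deviates from by at most max_errors edits."""
    return [w for w in lexicon if edit_distance(word, w) <= max_errors]

lexicon = {"recognition", "recognizer", "tolerant", "analysis"}
print(error_tolerant_recognize("reconition", lexicon))   # -> ['recognition']
```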

  10. Analysis of Free-Space Coupling to Photonic Lanterns in the Presence of Tilt Errors

    Science.gov (United States)

    2017-05-01

    Free-space coupling to photonic lanterns is more tolerant to tilt errors and F-number mismatch than...these errors. Photonic lanterns provide a means for transitioning from the free-space regime to the single-mode fiber (SMF) regime by

  11. Assessment of residual error in liver position using kV cone-beam computed tomography for liver cancer high-precision radiation therapy

    International Nuclear Information System (INIS)

    Hawkins, Maria A.; Brock, Kristy K.; Eccles, Cynthia; Moseley, Douglas; Jaffray, David; Dawson, Laura A.

    2006-01-01

    Purpose: To evaluate the residual error in liver position using breath-hold kilovoltage (kV) cone-beam computed tomography (CT) following on-line orthogonal megavoltage (MV) image-guided breath-hold liver cancer conformal radiotherapy. Methods and Materials: Thirteen patients with liver cancer treated with 6-fraction breath-hold conformal radiotherapy were investigated. Before each fraction, orthogonal MV images were obtained during exhale breath-hold, with repositioning for offsets >3 mm, using the diaphragm for cranio-caudal (CC) alignment and vertebral bodies for medial-lateral (ML) and anterior posterior (AP) alignment. After repositioning, repeat orthogonal MV images, orthogonal kV fluoroscopic movies, and kV cone-beam CTs were obtained in exhale breath-hold. The cone-beam CT livers were registered to the planning CT liver to obtain the residual setup error in liver position. Results: After repositioning, 78 orthogonal MV image pairs, 61 orthogonal kV image pairs, and 72 kV cone-beam CT scans were obtained. Population random setup errors (σ) in liver position were 2.7 mm (CC), 2.3 mm (ML), and 3.0 mm (AP), and systematic errors (Σ) were 1.1 mm, 1.9 mm, and 1.3 mm in the superior, medial, and posterior directions. Liver offsets >5 mm were observed in 33% of cases; offsets >10 mm and liver deformation >5 mm were observed in a minority of patients. Conclusions: Liver position after radiation therapy guided with MV orthogonal imaging was within 5 mm of planned position in the majority of patients. kV cone-beam CT image guidance should improve accuracy with reduced dose compared with orthogonal MV image guidance for liver cancer radiation therapy

  12. Analysis of Student Errors on Division of Fractions

    Science.gov (United States)

    Maelasari, E.; Jupri, A.

    2017-02-01

    This study aims to describe the types of student errors that typically occur in completing division operations on fractions, and to describe the causes of the students’ mistakes. This research used a descriptive qualitative method, and involved 22 fifth-grade students at one particular elementary school in Kuningan, Indonesia. The results of this study showed that students’ erroneous answers were caused by students applying the same procedure to both multiplication and division operations, by confusion when changing mixed fractions to common fractions, and by carelessness in calculation. From the students’ written work on the fraction problems, we found that the learning method used influences student responses, and some of the student responses were beyond the researchers’ predictions. We conclude that the teaching method is not the only important thing that must be prepared; the teacher should also prepare predictions of students’ answers to the problems that will be given in the learning process. This could be a reflection for teachers to improve and to achieve the expected learning goals.

  13. Error Probability Analysis of Hardware Impaired Systems with Asymmetric Transmission

    KAUST Repository

    Javed, Sidrah

    2018-04-26

    Error probability study of hardware impaired (HWI) systems highly depends on the adopted model. Recent models have proved that the aggregate noise is equivalent to improper Gaussian signals. Therefore, considering the distinct noise nature and self-interfering (SI) signals, an optimal maximum likelihood (ML) receiver is derived. This renders the conventional minimum Euclidean distance (MED) receiver a sub-optimal receiver, because it is based on the assumptions of ideal hardware transceivers and proper Gaussian noise in communication systems. Next, the average error probability performance of the proposed optimal ML receiver is analyzed and tight bounds and approximations are derived for various adopted systems, including transmitter and receiver I/Q imbalanced systems with or without transmitter distortions, as well as transmitter-only or receiver-only impaired systems. Motivated by recent studies that shed light on the benefit of improper Gaussian signaling in mitigating the HWIs, asymmetric quadrature amplitude modulation or phase shift keying is optimized and adapted for transmission. Finally, different numerical and simulation results are presented to support the superiority of the proposed ML receiver over the MED receiver, the tightness of the derived bounds and the effectiveness of asymmetric transmission in dampening HWIs and improving overall system performance.

  14. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates to realize future satellite communications and space observations since they are lightweight, low-cost and have a small packaged volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors, and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in elastic modulus of the membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on the significance ranking of error sources. The research reveals that the RMS (Root Mean Square) of the shape error is a random quantity with an exponential probability distribution and features great dispersion; with the increase of F/D and D, both the mean value and the standard deviation of shape errors increase; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect with a much higher weight than the others; pressure variation ranks second; errors in thickness and elastic modulus of the membrane rank last, with sensitivities very close to that of pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors and allowable values of error sources are proposed from the perspective of reliability.
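
    The sampling-and-ranking procedure described above can be sketched with Latin hypercube samples and rank correlations; the response function below is a stand-in with arbitrary weights, since the real RMS shape error comes from the manufacture simulation.

```python
# Latin hypercube sampling of four error sources and Spearman-rank sensitivity ranking (stand-in response).
import numpy as np
from scipy.stats import qmc, spearmanr

names = ["thickness", "modulus", "boundary", "pressure"]
sampler = qmc.LatinHypercube(d=4, seed=0)
u = sampler.random(n=200)                    # normalised magnitudes of the 4 error sources, in [0, 1]

# Stand-in response: weights chosen only to illustrate the ranking procedure, plus a little noise.
rng = np.random.default_rng(0)
rms = 1.0 * u[:, 2] + 0.4 * u[:, 3] + 0.1 * u[:, 0] + 0.1 * u[:, 1] + 0.05 * rng.standard_normal(200)

for i, name in enumerate(names):
    rho, _ = spearmanr(u[:, i], rms)
    print(f"{name:10s} Spearman rho = {rho:+.2f}")
```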

  15. Non-destructive analysis and detection of internal characteristics of spruce logs through X computerized tomography

    International Nuclear Information System (INIS)

    Longuetaud, F.

    2005-10-01

    Computerized tomography allows direct access to the internal features of scanned logs on the basis of density and moisture content variations. The objective of this work is to assess the feasibility of automatic detection of internal characteristics with the final aim of conducting scientific analyses. The database consists of CT images of 24 spruces obtained with a medical CT scanner. The studied trees are representative of several social statuses and come from four stands located in North-Eastern France, themselves representative of several age, density and fertility classes. The automatic processing steps developed are the following. First, pith detection in logs, dealing with the problems of knot presence and ring eccentricity; the accuracy of the localisation was less than one mm. Secondly, detection of the sapwood/heartwood limit in logs, dealing with the problem of knot presence (the main source of difficulty); the error on the diameter was 1.8 mm, which corresponds to a relative error of 1.3 per cent. Thirdly, detection of the whorl locations and comparison with an optical method. Fourthly, detection of individualized knots; this process allows knots to be counted and located in a log (longitudinal position and azimuth), but the validation of the method and the extraction of branch diameter and inclination are still to be developed. An application of this work was a variability analysis of the sapwood content in the trunk: at the within-tree level, the sapwood width was found to be constant under the living crown; at the between-tree level, a strong correlation was found with the amount of living branches. A great number of analyses are possible from our results, among others: architectural analysis with pith tracking and apex death occurrence; analysis of radial variations of the heartwood shape; and analysis of the knot distribution in logs. (author)

  16. Statistical analysis of compressive low rank tomography with random measurements

    Science.gov (United States)

    Acharya, Anirudh; Guţă, Mădălin

    2017-05-01

    We consider the statistical problem of 'compressive' estimation of low rank states (r ≪ d) with random basis measurements, where r, d are the rank and dimension of the state respectively. We investigate whether for a fixed sample size N, the estimation error associated with a 'compressive' measurement setup is 'close' to that of the setting where a large number of bases are measured. We generalise and extend previous results, and show that the mean square error (MSE) associated with the Frobenius norm attains the optimal rate rd/N with only O(r log d) random basis measurements for all states. An important tool in the analysis is the concentration of the Fisher information matrix (FIM). We demonstrate that although a concentration of the MSE follows from a concentration of the FIM for most states, the FIM fails to concentrate for states with eigenvalues close to zero. We analyse this phenomenon in the case of a single qubit and demonstrate a concentration of the MSE about its optimal despite a lack of concentration of the FIM for states close to the boundary of the Bloch sphere. We also consider the estimation error in terms of a different metric, the quantum infidelity. We show that a concentration in the mean infidelity (MINF) does not exist uniformly over all states, highlighting the importance of loss function choice. Specifically, we show that for states that are nearly pure, the MINF scales as 1/√N but the constant converges to zero as the number of settings is increased. This demonstrates a lack of 'compressive' recovery for nearly pure states in this metric.

  17. Incremental Volumetric Remapping Method: Analysis and Error Evaluation

    International Nuclear Information System (INIS)

    Baptista, A. J.; Oliveira, M. C.; Rodrigues, D. M.; Menezes, L. F.; Alves, J. L.

    2007-01-01

    In this paper the error associated with the remapping problem is analyzed. A range of numerical results that assess the performance of three different remapping strategies, applied to FE meshes typically used in sheet metal forming simulation, are evaluated. One of the selected strategies is the previously presented Incremental Volumetric Remapping (IVR) method, which was implemented in the in-house code DD3TRIM. The IVR method is founded on the premise that the state variables at all points associated with a Gauss volume of a given element are equal to the state variable quantities at the corresponding Gauss point. Hence, given a typical remapping procedure between a donor and a target mesh, the variables to be associated with a target Gauss volume (and point) are determined by a weighted average. The weight function is the percentage of the Gauss volume of each donor element that is located inside the target Gauss volume. The calculation of the intersecting volumes between the donor and target Gauss volumes is carried out incrementally, for each target Gauss volume, by means of a discrete approach. The other two remapping strategies selected are based on the interpolation/extrapolation of variables using the finite element shape functions or moving least squares interpolants. The performance of the three different remapping strategies is addressed with two tests. The first remapping test was taken from the literature; it consists of remapping a rotating symmetrical mesh successively, throughout N increments, over an angular span of 90 deg. The second remapping error evaluation test consists of remapping an irregular-element-shape target mesh from a given regular-element-shape donor mesh and then proceeding with the inverse operation. In this second test the computational effort is also measured. The results showed that the error level associated with IVR can be very low and with a stable evolution along the number of remapping procedures when compared with the
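
    A one-dimensional sketch of the overlap-weighted averaging at the heart of the IVR idea is given below; the real method works with 3D Gauss volumes of finite elements, and the cell layouts and values here are invented.

```python
# Overlap-weighted remapping of a cell-wise state variable from a donor grid to a target grid (1D sketch).
import numpy as np

donor_edges = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
donor_vals = np.array([1.0, 2.0, 4.0, 8.0])          # one state variable per donor cell
target_edges = np.array([0.0, 1.5, 2.5, 4.0])

def remap(donor_edges, donor_vals, target_edges):
    out = np.zeros(len(target_edges) - 1)
    for t in range(len(out)):
        lo, hi = target_edges[t], target_edges[t + 1]
        w_sum, v_sum = 0.0, 0.0
        for d in range(len(donor_vals)):
            overlap = max(0.0, min(hi, donor_edges[d + 1]) - max(lo, donor_edges[d]))
            w_sum += overlap                         # fraction of the target cell covered by donor cell d
            v_sum += overlap * donor_vals[d]
        out[t] = v_sum / w_sum                       # overlap-weighted average
    return out

print(remap(donor_edges, donor_vals, target_edges))
```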

  18. Pseudorange error analysis for precise indoor positioning system

    Science.gov (United States)

    Pola, Marek; Bezoušek, Pavel

    2017-05-01

    A system for indoor localization of a transmitter, intended for fire fighters or members of rescue corps, is currently under development. In this system the position of the transmitter of an ultra-wideband orthogonal frequency-division multiplexing signal is determined by the time-difference-of-arrival method. The position measurement accuracy highly depends on the accuracy of the direct-path signal time-of-arrival estimation, which is degraded by severe multipath in complicated environments such as buildings. The aim of this article is to assess errors in the determination of the direct-path signal time of arrival caused by multipath signal propagation and noise. Two methods of direct-path signal time-of-arrival estimation are compared here: the cross-correlation method and the spectral estimation method.
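
    The cross-correlation method mentioned above can be sketched as follows: the arrival delay is taken as the lag maximising the correlation between the received signal and the known transmitted waveform. The sample rate, waveform and multipath echo below are synthetic assumptions.

```python
# Time-of-arrival estimation from the peak of the cross-correlation with the reference waveform.
import numpy as np

fs = 1e6                                    # sample rate, Hz (assumed)
rng = np.random.default_rng(3)
tx = rng.standard_normal(256)               # stand-in wideband reference waveform
true_delay = 40                             # samples
rx = np.zeros(1024)
rx[true_delay:true_delay + len(tx)] += tx                          # direct path
rx[true_delay + 25:true_delay + 25 + len(tx)] += 0.6 * tx          # one multipath echo
rx += 0.1 * rng.standard_normal(len(rx))                           # measurement noise

corr = np.correlate(rx, tx, mode="valid")   # correlation at each candidate lag
est = int(np.argmax(corr))
print(f"estimated delay: {est} samples ({est / fs * 1e6:.1f} us), true: {true_delay}")
```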

  19. Contribution of Error Analysis to Foreign Language Teaching

    Directory of Open Access Journals (Sweden)

    Vacide ERDOĞAN

    2014-01-01

    Full Text Available It is inevitable that learners make mistakes in the process of foreign language learning. However, what is questioned by language teachers is why students go on making the same mistakes even when such mistakes have been repeatedly pointed out to them. Yet not all mistakes are the same; sometimes they seem to be deeply ingrained, but at other times students correct themselves with ease. Thus, researchers and teachers of foreign language came to realize that the mistakes a person made in the process of constructing a new system of language needed to be analyzed carefully, for they possibly held in them some of the keys to the understanding of second language acquisition. In this respect, the aim of this study is to point out the significance of learners’ errors, for they provide evidence of how language is learned and what strategies or procedures the learners are employing in the discovery of language.

  20. Error Analysis of Remotely-Acquired Mossbauer Spectra

    Science.gov (United States)

    Schaefer, Martha W.; Dyar, M. Darby; Agresti, David G.; Schaefer, Bradley E.

    2005-01-01

    On the Mars Exploration Rovers, Mossbauer spectroscopy has recently been called upon to assist in the task of mineral identification, a job for which it is rarely used in terrestrial studies. For example, Mossbauer data were used to support the presence of olivine in Martian soil at Gusev and jarosite in the outcrop at Meridiani. The strength (and uniqueness) of these interpretations lies in the assumption that peak positions can be determined with high degrees of both accuracy and precision. We summarize here what we believe to be the major sources of error associated with peak positions in remotely-acquired spectra, and speculate on their magnitudes. Our discussion here is largely qualitative because necessary background information on MER calibration sources, geometries, etc., has not yet been released to the PDS; we anticipate that a more quantitative discussion can be presented by March 2005.

  1. An Analysis of College Students' Attitudes towards Error Correction in EFL Context

    Science.gov (United States)

    Zhu, Honglin

    2010-01-01

    This article is based on a survey of college students’ attitudes towards error correction by their teachers in the process of teaching and learning, and it is intended to improve language teachers’ understanding of the nature of error correction. Based on the analysis, the article expounds some principles and techniques that can be applied in the process…

  2. Analysis of Errors and Misconceptions in the Learning of Calculus by Undergraduate Students

    Science.gov (United States)

    Muzangwa, Jonatan; Chifamba, Peter

    2012-01-01

    This paper is going to analyse errors and misconceptions in an undergraduate course in Calculus. The study will be based on a group of 10 BEd. Mathematics students at Great Zimbabwe University. Data is gathered through use of two exercises on Calculus 1 & 2. The analysis of the results from the tests showed that a majority of the errors were due…

  3. Error Analysis of Mathematical Word Problem Solving across Students with and without Learning Disabilities

    Science.gov (United States)

    Kingsdorf, Sheri; Krawec, Jennifer

    2014-01-01

    Solving word problems is a common area of struggle for students with learning disabilities (LD). In order for instruction to be effective, we first need to have a clear understanding of the specific errors exhibited by students with LD during problem solving. Error analysis has proven to be an effective tool in other areas of math but has had…

  4. A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students

    Science.gov (United States)

    Tizazu, Yoseph

    2014-01-01

    This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected for two years from students ranging from freshmen to graduating level. The sampled compositions were then coded, described, and explained using the error analysis method. Both…

  5. Boundary error analysis and categorization in the TRECVID news story segmentation task

    NARCIS (Netherlands)

    Arlandis, J.; Over, P.; Kraaij, W.

    2005-01-01

    In this paper, an error analysis based on boundary error popularity (frequency) including semantic boundary categorization is applied in the context of the news story segmentation task from TRECVID. Clusters of systems were defined based on the input resources they used including video, audio and

  6. Perceptual Error Analysis of Human and Synthesized Voices.

    Science.gov (United States)

    Englert, Marina; Madazio, Glaucya; Gielow, Ingrid; Lucero, Jorge; Behlau, Mara

    2017-07-01

    To assess the quality of synthesized voices through listeners' skills in discriminating human and synthesized voices. Prospective study. Eighteen human voices with different types and degrees of deviation (roughness, breathiness, and strain, with three degrees of deviation: mild, moderate, and severe) were selected by three voice specialists. Synthesized samples with the same deviations as the human voices were produced by the VoiceSim system. The manipulated parameters were vocal frequency perturbation (roughness), additive noise (breathiness), and increasing tension and subglottal pressure with decreasing vocal fold separation (strain). Two hundred sixty-nine listeners were divided into three groups: voice specialist speech language pathologists (V-SLPs), general clinician SLPs (G-SLPs), and naive listeners (NLs). The SLP listeners also indicated the type and degree of deviation. The listeners misclassified 39.3% of the voices, both synthesized (42.3%) and human (36.4%) samples (P = 0.001). V-SLPs presented the lowest error percentage considering the voice nature (34.6%); G-SLPs and NLs identified almost half of the synthesized samples as human (46.9%, 45.6%). The male voices were more susceptible to misidentification. The synthesized breathy samples generated a greater perceptual confusion. The samples with severe deviation seemed to be more susceptible to errors. The synthesized female deviations were correctly classified. The male breathiness and strain were identified as roughness. VoiceSim produced stimuli very similar to the voices of patients with dysphonia. V-SLPs had a better ability to classify human and synthesized voices. VoiceSim is better at simulating vocal breathiness and female deviations; the male samples need adjustment. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  7. Hebbian errors in learning: an analysis using the Oja model.

    Science.gov (United States)

    Rădulescu, Anca; Cox, Kingsley; Adams, Paul

    2009-06-21

    Recent work on long term potentiation in brain slices shows that Hebb's rule is not completely synapse-specific, probably due to intersynapse diffusion of calcium or other factors. We previously suggested that such errors in Hebbian learning might be analogous to mutations in evolution. We examine this proposal quantitatively, extending the classical Oja unsupervised model of learning by a single linear neuron to include Hebbian inspecificity. We introduce an error matrix E, which expresses possible crosstalk between updating at different connections. When there is no inspecificity, this gives the classical result of convergence to the first principal component of the input distribution (PC1). We show the modified algorithm converges to the leading eigenvector of the matrix EC, where C is the input covariance matrix. In the most biologically plausible case when there are no intrinsically privileged connections, E has diagonal elements Q and off-diagonal elements (1-Q)/(n-1), where Q, the quality, is expected to decrease with the number of inputs n and with a synaptic parameter b that reflects synapse density, calcium diffusion, etc. We study the dependence of the learning accuracy on b, n and the amount of input activity or correlation (analytically and computationally). We find that learning accuracy decreases (learning becomes gradually less useful) with increases in b, particularly for intermediate (i.e., biologically realistic) correlation strength, although some useful learning always occurs up to the trivial limit Q=1/n. We discuss the relation of our results to Hebbian unsupervised learning in the brain. When the mechanism lacks specificity, the network fails to learn the expected, and typically most useful, result, especially when the input correlation is weak. Hebbian crosstalk would reflect the very high density of synapses along dendrites, and inevitably degrades learning.
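
    A toy simulation of the modified Oja model is sketched below, assuming the crosstalk matrix E mixes the Hebbian term of the update (one reading consistent with the stated convergence to the leading eigenvector of EC); the input covariance, Q and learning rate are arbitrary choices, not values from the paper.

```python
# Oja's rule with a crosstalk (error) matrix E applied to the Hebbian term; compare with eigvec of EC.
import numpy as np

rng = np.random.default_rng(4)
n, Q, eta = 5, 0.9, 0.01
E = np.full((n, n), (1 - Q) / (n - 1)) + (Q - (1 - Q) / (n - 1)) * np.eye(n)   # diag Q, off-diag (1-Q)/(n-1)

A = rng.standard_normal((n, n))
C = A @ A.T / n                              # input covariance
L = np.linalg.cholesky(C)                    # to draw inputs with covariance C

w = 0.1 * rng.standard_normal(n)
for _ in range(20000):
    x = L @ rng.standard_normal(n)
    y = w @ x
    w += eta * (E @ (y * x) - y * y * w)     # crosstalk mixes the Hebbian term only (assumption)

vals, vecs = np.linalg.eig(E @ C)
lead = vecs[:, np.argmax(vals.real)].real
lead /= np.linalg.norm(lead)
print("|cos| with leading eigenvector of EC:", abs(w @ lead) / np.linalg.norm(w))
```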

  8. The treatment of commission errors in first generation human reliability analysis methods

    International Nuclear Information System (INIS)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da; Melo, Paulo Fernando Frutuoso e

    2011-01-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of any human action that should have been performed, but does not occur. Errors of commission are those related to human actions that should not be performed, but which in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that are introduced into an operational task without being part of the normal sequence of that task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the operational performance of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of the THERP tables for error quantification. (author)

  9. ERROR ANALYSIS OF ENGLISH WRITTEN ESSAY OF HIGHER EFL LEARNERS: A CASE STUDY

    Directory of Open Access Journals (Sweden)

    Rina Husnaini Febriyanti

    2016-09-01

    Full Text Available The aim of the research is to identify grammatical errors and to investigate the most and the least frequent grammatical errors occurring in the students’ English written essays. The approach of the research is qualitative descriptive with descriptive analysis. The samples were taken from the essays made by 34 students in a writing class. The findings were as follows: the most common error was the subject-verb agreement error, with a score of 28.25%. The second most frequent error was in verb tense and form, with a score of 24.66%. The third was spelling errors, at 17.94%. The fourth was errors in using auxiliaries, with a score of 9.87%. The fifth was errors in word order, with a score of 8.07%. The remaining errors were in applying the passive voice (4.93%), articles (3.59%), prepositions (1.79%), and pronouns and run-on sentences with the same score (0.45%). This may indicate that most students still made errors even in the usage of basic grammar rules in their writing.

  10. Analysis and Evaluation of Error-Proof Systems for Configuration Data Management in Railway Signalling

    Science.gov (United States)

    Shimazoe, Toshiyuki; Ishikawa, Hideto; Takei, Tsuyoshi; Tanaka, Kenji

    Recent types of train protection systems such as ATC require larger amounts of low-level configuration data than conventional systems. Hence management of the configuration data is becoming more important than before. Because of this, the authors developed an error-proof system focusing on human operations in configuration data management. This error-proof system has already been introduced to the Tokaido Shinkansen ATC data management system. However, as the effectiveness of the system has not been presented objectively, its full perspective is not clear. To clarify the effectiveness, this paper analyses error-proofing cases introduced to the system, using the concept of QFD and the error-proofing principles. From this analysis, the following methods of evaluation for error-proof systems are proposed: metrics to review the rationality of required qualities are provided by arranging the required qualities according to hazard levels and work phases; metrics to evaluate error-proof systems are provided to improve their reliability effectively by mapping the error-proofing principles onto the error-proofing cases, which are applied according to the required qualities and the corresponding hazard levels. In addition, these objectively analysed error-proofing cases can be used as an error-proofing-case database or as guidelines for safer HMI design, especially for data management.

  11. Error Analysis on the Use of “Be” in the Students’ Composition

    Directory of Open Access Journals (Sweden)

    Rochmat Budi Santosa

    2016-07-01

    Full Text Available This study aims to identify, analyze and describe the errors in the use of 'be' in English sentences written by third-semester students of the English Department of STAIN Surakarta, and the aspects surrounding these errors. In this study, the researcher describes the erroneous use of 'be' both as a linking verb and as an auxiliary verb. This is a qualitative descriptive study. The data source is a set of documents, namely the writing assignments undertaken by the students taking the Writing course. The writing tasks are in narrative, descriptive, expository, and argumentative forms. To analyze the data, the researcher uses intralingual and extralingual methods. These methods are used to connect the linguistic elements in sentences, especially the elements functioning either as a linking verb or as an auxiliary verb in English sentences in the texts. Based on the analysis of errors regarding the use of 'be', it can be concluded that there are 5 (five) types of errors made by the students: errors of omission (absence) of 'be', errors of addition of 'be', errors in the application of 'be', errors in the placement of 'be', and complex errors in the use of 'be'. These errors occur due to interlingual transfer, intralingual transfer and the learning context.

  12. The treatment of commission errors in first generation human reliability analysis methods

    Energy Technology Data Exchange (ETDEWEB)

    Alvarengga, Marco Antonio Bayout; Fonseca, Renato Alves da, E-mail: bayout@cnen.gov.b, E-mail: rfonseca@cnen.gov.b [Comissao Nacional de Energia Nuclear (CNEN) Rio de Janeiro, RJ (Brazil); Melo, Paulo Fernando Frutuoso e, E-mail: frutuoso@nuclear.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear

    2011-07-01

    Human errors in human reliability analysis can be classified generically as errors of omission and errors of commission. Omission errors are related to the omission of any human action that should have been performed, but does not occur. Errors of commission are those related to human actions that should not be performed, but which in fact are performed. Both involve specific types of cognitive error mechanisms; however, errors of commission are more difficult to model because they are characterized by non-anticipated actions that are performed instead of others that are omitted (omission errors), or that are introduced into an operational task without being part of the normal sequence of that task. The identification of actions that are not supposed to occur depends on the operational context, which will influence or facilitate certain unsafe actions of the operator depending on the operational performance of its parameters and variables. The survey of operational contexts and associated unsafe actions is a characteristic of second-generation models, unlike first-generation models. This paper discusses how first-generation models can treat errors of commission in the steps of detection, diagnosis, decision-making and implementation in human information processing, particularly with the use of the THERP tables for error quantification. (author)

  13. Knowledge-base for the new human reliability analysis method, A Technique for Human Error Analysis (ATHEANA)

    International Nuclear Information System (INIS)

    Cooper, S.E.; Wreathall, J.; Thompson, C.M., Drouin, M.; Bley, D.C.

    1996-01-01

    This paper describes the knowledge base for the application of the new human reliability analysis (HRA) method, ''A Technique for Human Error Analysis'' (ATHEANA). Since application of ATHEANA requires the identification of previously unmodeled human failure events, especially errors of commission, and associated error-forcing contexts (i.e., combinations of plant conditions and performance shaping factors), this knowledge base is an essential aid for the HRA analyst

  14. Analysis technique for controlling system wavefront error with active/adaptive optics

    Science.gov (United States)

    Genberg, Victor L.; Michels, Gregory J.

    2017-08-01

    The ultimate goal of an active mirror system is to control system level wavefront error (WFE). In the past, the use of this technique was limited by the difficulty of obtaining a linear optics model. In this paper, an automated method for controlling system level WFE using a linear optics model is presented. An error estimate is included in the analysis output for both surface error disturbance fitting and actuator influence function fitting. To control adaptive optics, the technique has been extended to write system WFE in state space matrix form. The technique is demonstrated by example with SigFit, a commercially available tool integrating mechanical analysis with optical analysis.
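
    A minimal sketch of the kind of linear least-squares step such an analysis performs: fitting actuator influence functions to a measured wavefront error map and reporting the residual as the fit error estimate. The matrix sizes and data below are illustrative placeholders, not SigFit inputs or outputs.

        import numpy as np

        # Hypothetical linear optics model: columns of A are actuator influence
        # functions sampled at wavefront grid points; d is the measured WFE map.
        rng = np.random.default_rng(0)
        n_points, n_actuators = 500, 12
        A = rng.normal(size=(n_points, n_actuators))      # influence functions
        d = rng.normal(size=n_points)                      # surface error disturbance

        # Least-squares actuator commands that minimize residual wavefront error
        x, *_ = np.linalg.lstsq(A, d, rcond=None)
        residual = d - A @ x                               # uncorrectable WFE
        fit_error_rms = np.sqrt(np.mean(residual**2))      # error estimate of the fit
        print(f"residual WFE RMS after correction: {fit_error_rms:.3f}")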

  15. Detection method of nonlinearity errors by statistical signal analysis in heterodyne Michelson interferometer.

    Science.gov (United States)

    Hu, Juju; Hu, Haijiang; Ji, Yinghua

    2010-03-15

    Periodic nonlinearity, ranging from tens of nanometers down to a few nanometers, limits the use of heterodyne interferometers in high-accuracy measurement. A novel method is studied to detect the nonlinearity errors, based on electrical subdivision and statistical signal analysis, in a heterodyne Michelson interferometer. When the micropositioning platform moves at uniform velocity, the method detects the nonlinearity errors using regression analysis and jackknife estimation. Based on the analysis of the simulations, the method can estimate the influence of nonlinearity errors and other noises on dimensional measurement in a heterodyne Michelson interferometer.
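
    As a rough illustration of the statistical machinery mentioned above, the sketch below computes a jackknife estimate (with standard error) of a regression slope from displacement readings containing a small periodic error term; the data and magnitudes are invented for illustration only.

        import numpy as np

        def jackknife_slope(x, y):
            """Jackknife estimate (and standard error) of a regression slope,
            as might be used to characterise residual nonlinearity errors."""
            n = len(x)
            full = np.polyfit(x, y, 1)[0]
            leave_one_out = np.array([
                np.polyfit(np.delete(x, i), np.delete(y, i), 1)[0] for i in range(n)
            ])
            pseudo = n * full - (n - 1) * leave_one_out     # pseudo-values
            estimate = pseudo.mean()
            std_err = np.sqrt(pseudo.var(ddof=1) / n)
            return estimate, std_err

        # Illustrative data: displacement readings with a small periodic error term
        t = np.linspace(0.0, 1.0, 200)
        reading = 5.0 * t + 2e-3 * np.sin(40 * np.pi * t) + np.random.normal(0, 1e-3, t.size)
        print(jackknife_slope(t, reading))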

  16. Human Error Analysis in a Permit to Work System: A Case Study in a Chemical Plant.

    Science.gov (United States)

    Jahangiri, Mehdi; Hoboubi, Naser; Rostamabadi, Akbar; Keshavarzi, Sareh; Hosseini, Ali Akbar

    2016-03-01

    A permit to work (PTW) is a formal written system to control certain types of work which are identified as potentially hazardous. However, human error in PTW processes can lead to an accident. This cross-sectional, descriptive study was conducted to estimate the probability of human errors in PTW processes in a chemical plant in Iran. In the first stage, through interviewing the personnel and studying the procedures in the plant, the PTW process was analyzed using the hierarchical task analysis technique. In doing so, PTW was considered as a goal and the detailed tasks to achieve the goal were analyzed. In the next step, the standardized plant analysis risk-human (SPAR-H) reliability analysis method was applied to estimate the human error probability. The mean probability of human error in the PTW system was estimated to be 0.11. The highest probability of human error in the PTW process was related to flammable gas testing (50.7%). The SPAR-H method applied in this study could analyze and quantify the potential human errors and identify the measures required for reducing the error probabilities in the PTW system. Some suggestions to reduce the likelihood of errors, especially by modifying the performance shaping factors and the dependencies among tasks, are provided.
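
    For readers unfamiliar with SPAR-H-style quantification, the sketch below shows how a human error probability is typically composed from a nominal HEP and performance shaping factor (PSF) multipliers, including the adjustment commonly used when several PSFs are degrading; the nominal HEP and multiplier values are placeholders, not the study's data.

        # Sketch of a SPAR-H-style human error probability (HEP) calculation.
        # The nominal HEP and PSF multipliers below are placeholders, not values
        # from the cited study.
        NOMINAL_HEP = 0.01                      # nominal HEP for an action task
        psf_multipliers = {                     # assumed PSF ratings
            "available_time": 1.0,
            "stress": 2.0,
            "complexity": 2.0,
            "experience_training": 1.0,
            "procedures": 5.0,
            "ergonomics_hmi": 1.0,
            "fitness_for_duty": 1.0,
            "work_processes": 1.0,
        }

        composite = 1.0
        for value in psf_multipliers.values():
            composite *= value

        negative_psfs = sum(1 for v in psf_multipliers.values() if v > 1.0)
        if negative_psfs >= 3:
            # Adjustment commonly used when several PSFs degrade performance,
            # keeping the result below 1.0.
            hep = (NOMINAL_HEP * composite) / (NOMINAL_HEP * (composite - 1.0) + 1.0)
        else:
            hep = NOMINAL_HEP * composite

        print(f"estimated HEP: {hep:.3f}")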

  17. Accounting for covariate measurement error in a Cox model analysis of recurrence of depression.

    Science.gov (United States)

    Liu, K; Mazumdar, S; Stone, R A; Dew, M A; Houck, P R; Reynolds, C F

    2001-01-01

    When a covariate measured with error is used as a predictor in a survival analysis using the Cox model, the parameter estimate is usually biased. In clinical research, covariates measured without error such as treatment procedure or sex are often used in conjunction with a covariate measured with error. In a randomized clinical trial of two types of treatments, we account for the measurement error in the covariate, log-transformed total rapid eye movement (REM) activity counts, in a Cox model analysis of the time to recurrence of major depression in an elderly population. Regression calibration and two variants of a likelihood-based approach are used to account for measurement error. The likelihood-based approach is extended to account for the correlation between replicate measures of the covariate. Using the replicate data decreases the standard error of the parameter estimate for log(total REM) counts while maintaining the bias reduction of the estimate. We conclude that covariate measurement error and the correlation between replicates can affect results in a Cox model analysis and should be accounted for. In the depression data, these methods render comparable results that have less bias than the results when measurement error is ignored.
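
    The following sketch illustrates the regression calibration step under an additive error model with replicate measurements: the error-prone covariate is replaced by an estimate of its conditional expectation before the survival model is fitted. The data, variable names and two-replicate design are assumptions for illustration.

        import numpy as np

        # Regression-calibration sketch: replace an error-prone covariate W
        # (e.g., log total REM counts, measured twice per subject) with an
        # estimate of E[X | W] before fitting the Cox model. Illustrative data.
        rng = np.random.default_rng(1)
        n = 200
        true_x = rng.normal(0.0, 1.0, n)                        # unobserved true covariate
        w = true_x[:, None] + rng.normal(0.0, 0.5, (n, 2))      # two replicates each

        w_bar = w.mean(axis=1)                                  # per-subject mean of replicates
        sigma2_u = np.mean(w.var(axis=1, ddof=1)) / w.shape[1]  # error variance of the mean
        sigma2_x = w_bar.var(ddof=1) - sigma2_u                 # between-subject variance
        lam = sigma2_x / (sigma2_x + sigma2_u)                  # reliability ratio

        x_calibrated = w_bar.mean() + lam * (w_bar - w_bar.mean())
        # x_calibrated would then enter the Cox proportional hazards fit in place of w_bar.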

  18. STEM tomography analysis of the trypanosome transition zone.

    Science.gov (United States)

    Trépout, Sylvain; Tassin, Anne-Marie; Marco, Sergio; Bastin, Philippe

    2018-04-01

    The protist Trypanosoma brucei is an emerging model for the study of cilia and flagella. Here, we used scanning transmission electron microscopy (STEM) tomography to describe the structure of the trypanosome transition zone (TZ). At the base of the TZ, nine transition fibres radiate from the B microtubule of each doublet towards the membrane. The TZ adopts a 9 + 0 structure throughout its length of ∼300 nm and its lumen contains an electron-dense structure. The proximal portion of the TZ has an invariant length of 150 nm and is characterised by a collarette surrounding the membrane and the presence of electron-dense material between the membrane and the doublets. The distal portion exhibits more length variation (from 55 to 235 nm) and contains typical Y-links. STEM analysis revealed a more complex organisation of the Y-links compared to what was reported by conventional transmission electron microscopy. Observation of the very early phase of flagellum assembly demonstrated that the proximal portion and the collarette are assembled early during construction. The presence of the flagella connector that attaches the tip of the new flagellum to the side of the old one was confirmed, and additional filamentous structures making contact with the membrane of the flagellar pocket were also detected. The structure and potential functions of the TZ in trypanosomes are discussed, as well as its mode of assembly. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Ex vivo brain tumor analysis using spectroscopic optical coherence tomography

    Science.gov (United States)

    Lenz, Marcel; Krug, Robin; Welp, Hubert; Schmieder, Kirsten; Hofmann, Martin R.

    2016-03-01

    A big challenge during neurosurgeries is to distinguish between healthy tissue and cancerous tissue, but currently a suitable non-invasive real time imaging modality is not available. Optical Coherence Tomography (OCT) is a potential technique for such a modality. OCT has a penetration depth of 1-2 mm and a resolution of 1-15 μm which is sufficient to illustrate structural differences between healthy tissue and brain tumor. Therefore, we investigated gray and white matter of healthy central nervous system and meningioma samples with a Spectral Domain OCT System (Thorlabs Callisto). Additional OCT images were generated after paraffin embedding and after the samples were cut into 10 μm thin slices for histological investigation with a bright field microscope. All samples were stained with Hematoxylin and Eosin. In all cases B-scans and 3D images were made. Furthermore, a camera image of the investigated area was made by the built-in video camera of our OCT system. For orientation, the backsides of all samples were marked with blue ink. The structural differences between healthy tissue and meningioma samples were most pronounced directly after removal. After paraffin embedding these differences diminished. A correlation between OCT en face images and microscopy images can be seen. In order to increase contrast, post processing algorithms were applied. Hence we employed Spectroscopic OCT, pattern recognition algorithms and machine learning algorithms such as k-means Clustering and Principal Component Analysis.

  20. Research on Human-Error Factors of Civil Aircraft Pilots Based On Grey Relational Analysis

    Directory of Open Access Journals (Sweden)

    Guo Yundong

    2018-01-01

    Full Text Available In consideration of the fact that civil aviation accidents involve many human-error factors and show the features of typical grey systems, an index system of civil aviation accident human-error factors is built using the human factor analysis and classification system model. With data on accidents that happened worldwide between 2008 and 2011, the correlations between human-error factors can be analyzed quantitatively using the method of grey relational analysis. Research results show that the order of main factors affecting pilot human-error factors is preconditions for unsafe acts, unsafe supervision, organization and unsafe acts. The factor related most closely with the second-level indexes and pilot human-error factors is the physical/mental limitations of pilots, followed by supervisory violations. The relevancy between the first-level indexes and the corresponding second-level indexes, and the relevancy between second-level indexes, can also be analyzed quantitatively.
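
    A compact sketch of grey relational analysis as described above: series are normalized, grey relational coefficients are computed against a reference series, and their means give the relational grades used for ranking. The series values and the distinguishing coefficient rho = 0.5 are illustrative assumptions.

        import numpy as np

        # Grey relational analysis sketch: relational grade of each comparison
        # series against a reference series. Data below are illustrative only.
        def grey_relational_grades(reference, comparisons, rho=0.5):
            ref = np.asarray(reference, dtype=float)
            comp = np.asarray(comparisons, dtype=float)
            all_series = np.vstack([ref, comp])
            span = np.ptp(all_series, axis=1, keepdims=True)        # per-series range
            norm = (all_series - all_series.min(axis=1, keepdims=True)) / span
            ref_n, comp_n = norm[0], norm[1:]
            delta = np.abs(comp_n - ref_n)                          # absolute differences
            xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
            return xi.mean(axis=1)                                  # grey relational grades

        reference = [12, 15, 9, 20]                                 # e.g. yearly error counts
        factors = [[10, 14, 8, 18], [5, 6, 7, 9], [11, 13, 10, 17]]
        print(grey_relational_grades(reference, factors))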

  1. Quality of IT service delivery — Analysis and framework for human error prevention

    KAUST Repository

    Shwartz, L.

    2010-12-01

    In this paper, we address the problem of reducing the occurrence of human errors that cause service interruptions in IT Service Support and Delivery operations. Analysis of a large volume of service interruption records revealed that more than 21% of interruptions were caused by human error. We focus on Change Management, the process with the largest risk of human error, and identify the main instances of human errors as the 4 Wrongs: request, time, configuration item, and command. Analysis of change records revealed that human error prevention by partial automation is highly relevant. We propose the HEP Framework, a framework for execution of IT Service Delivery operations that reduces human error by addressing the 4 Wrongs using content integration, contextualization of operation patterns, partial automation of command execution, and controlled access to resources.

  2. Dosimetric effect of intrafraction motion and residual setup error for hypofractionated prostate intensity-modulated radiotherapy with online cone beam computed tomography image guidance.

    LENUS (Irish Health Repository)

    Adamson, Justus

    2012-02-01

    PURPOSE: To quantify the dosimetric effect and margins required to account for prostate intrafractional translation and residual setup error in a cone beam computed tomography (CBCT)-guided hypofractionated radiotherapy protocol. METHODS AND MATERIALS: Prostate position after online correction was measured during dose delivery using simultaneous kV fluoroscopy and posttreatment CBCT in 572 fractions to 30 patients. We reconstructed the dose distribution to the clinical target volume (CTV) using a convolution of the static dose with a probability density function (PDF) based on the kV fluoroscopy, and we calculated the minimum dose received by 99% of the CTV (D(99)). We compared reconstructed doses when the convolution was performed per beam, per patient, and when the PDF was created using posttreatment CBCT. We determined the minimum axis-specific margins to limit CTV D(99) reduction to 1%. RESULTS: For 3-mm margins, D(99) reduction was ≤5% for 29/30 patients. Using post-CBCT rather than localizations at treatment delivery exaggerated dosimetric effects by ~47%, while there was no such bias between the dose convolved with a beam-specific and a patient-specific PDF. After eight fractions, the final cumulative D(99) could be predicted with a root mean square error of <1%. For 90% of patients, the required margins were ≤2, 4, and 3 mm, with 70%, 40%, and 33% of patients requiring no right-left (RL), anteroposterior (AP), and superoinferior (SI) margins, respectively. CONCLUSIONS: For protocols with CBCT guidance, RL, AP, and SI margins of 2, 4, and 3 mm are sufficient to account for translational errors; however, the large variation in patient-specific margins suggests that adaptive management may be beneficial.
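
    The dose reconstruction idea can be illustrated in one dimension: convolve the static dose profile with a motion probability density function and read off D99 over the CTV. The profile, motion width and CTV extent below are invented for illustration and are not the study's data.

        import numpy as np

        # Sketch: blur a static 1-D dose profile with a motion probability density
        # function (PDF) and evaluate D99 of the CTV. Profile and PDF are illustrative.
        x = np.arange(-30, 31)                               # position, mm
        static_dose = np.where(np.abs(x) <= 25, 1.0, 0.0)    # flat field, 25 mm half-width

        motion = np.exp(-0.5 * (x / 2.0) ** 2)               # ~2 mm SD intrafraction motion PDF
        motion /= motion.sum()

        blurred_dose = np.convolve(static_dose, motion, mode="same")

        ctv = np.abs(x) <= 20                                # CTV occupies the central 40 mm
        d99 = np.percentile(blurred_dose[ctv], 1)            # dose received by 99% of CTV voxels
        print(f"CTV D99 after motion blur: {d99:.3f} (relative to prescription)")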

  3. Influence of the examiner's qualification and sources of error during stage determination of the medial clavicular epiphysis by means of computed tomography.

    Science.gov (United States)

    Wittschieber, Daniel; Schulz, Ronald; Vieth, Volker; Küppers, Martin; Bajanowski, Thomas; Ramsthaler, Frank; Püschel, Klaus; Pfeiffer, Heidi; Schmidt, Sven; Schmeling, Andreas

    2014-01-01

    Computed tomography (CT) of the medial clavicular epiphysis has been well established in forensic age estimations of living individuals undergoing criminal proceedings. The present study examines the influence of the examiner's qualification on the determination of the clavicular ossification stage. Additionally, it aims to uncover the most frequent sources of error made during the stage assessment process. To this end, thin-slice CT scans of 1,420 clavicles were evaluated by one inexperienced and two experienced examiners. The latter did the evaluations in consensus. Two classification systems, a five-stage system and a substaging system for the main stages 2 and 3, were used. Prior to three of his six assessment sessions, the inexperienced examiner was specifically taught the staging of clavicles. Comparison of the examiners' results revealed increasing inter- and intraobserver agreements with increasing qualification of the inexperienced examiner (from κ = 0.494 to 0.674 and from κ = 0.634 to 0.783, respectively). The attribution of a not-assessable anatomic shape variant to an ossification stage was identified as the most frequent error during stage determination (n = 349), followed by overlooking the epiphyseal scar defining stage 4 (n = 144). As to the clavicular substages, classifying substage 3a instead of 3b was found to be the most frequent error (n = 69). The data of this study indicate that κ values must not be considered objective measures of inter- and intraobserver agreement. Furthermore, a high degree of specific qualification, particularly knowledge of the diversity of anatomic shape variants, appears to be mandatory and indispensable for reliable evaluation of the medial clavicular epiphysis.

  4. Mixed Methods Analysis of Medical Error Event Reports: A Report from the ASIPS Collaborative

    National Research Council Canada - National Science Library

    Harris, Daniel M; Westfall, John M; Fernald, Douglas H; Duclos, Christine W; West, David R; Niebauer, Linda; Marr, Linda; Quintela, Javan; Main, Deborah S

    2005-01-01

    .... This paper presents a mixed methods approach to analyzing narrative error event reports. Mixed methods studies integrate one or more qualitative and quantitative techniques for data collection and analysis...

  5. Error Patterns Analysis of Hearing Aid and Cochlear Implant Users as a Function of Noise.

    Science.gov (United States)

    Chun, Hyungi; Ma, Sunmi; Han, Woojae; Chun, Youngmyoung

    2015-12-01

    Not all impaired listeners have the same speech perception ability, even when they have similar pure-tone thresholds and configurations. For this reason, the present study analyzes error patterns in hearing-impaired listeners compared to normal hearing (NH) listeners as a function of signal-to-noise ratio (SNR). Forty-four adults participated: 10 listeners with NH, 20 hearing aid (HA) users and 14 cochlear implant (CI) users. The Korean standardized monosyllables were presented as the stimuli in quiet and at three different SNRs. Total error patterns were classified into types of substitution, omission, addition, fail, and no response, using stacked bar plots. The total error percentage for the three groups significantly increased as the SNRs decreased. For the error pattern analysis, the NH group showed predominantly substitution errors regardless of the SNR, compared to the other groups. Both the HA and CI groups had substitution errors that declined, while no-response errors appeared, as the SNRs increased. The CI group was characterized by lower substitution and higher fail errors than the HA group. Substitutions of initial and final phonemes in the HA and CI groups were dominated by place-of-articulation errors. However, the HA group missed consonant place cues, such as formant transitions and stop consonant bursts, whereas the CI group usually had confusions limited to nasal consonants with low-frequency characteristics. Interestingly, all three groups showed /k/ addition in the final phoneme, a trend that magnified as noise increased. The HA and CI groups had their own unique error patterns even though the aided thresholds of the two groups were similar. We expect that the results of this study will help focus auditory training of hearing-impaired listeners on their most frequent error patterns, thereby reducing those errors and improving their speech perception ability.

  6. Generalized multiplicative error models: Asymptotic inference and empirical analysis

    Science.gov (United States)

    Li, Qian

    This dissertation consists of two parts. The first part focuses on extended Multiplicative Error Models (MEM) that include two extreme cases for nonnegative series. These extreme cases are common phenomena in high-frequency financial time series. The Location MEM(p,q) model incorporates a location parameter so that the series are required to have positive lower bounds. The estimator for the location parameter turns out to be the minimum of all the observations and is shown to be consistent. The second case captures the nontrivial fraction of zero outcomes feature in a series and combines a so-called Zero-Augmented general F distribution with linear MEM(p,q). Under certain strict stationary and moment conditions, we establish a consistency and asymptotic normality of the semiparametric estimation for these two new models. The second part of this dissertation examines the differences and similarities between trades in the home market and trades in the foreign market of cross-listed stocks. We exploit the multiplicative framework to model trading duration, volume per trade and price volatility for Canadian shares that are cross-listed in the New York Stock Exchange (NYSE) and the Toronto Stock Exchange (TSX). We explore the clustering effect, interaction between trading variables, and the time needed for price equilibrium after a perturbation for each market. The clustering effect is studied through the use of univariate MEM(1,1) on each variable, while the interactions among duration, volume and price volatility are captured by a multivariate system of MEM(p,q). After estimating these models by a standard QMLE procedure, we exploit the Impulse Response function to compute the calendar time for a perturbation in these variables to be absorbed into price variance, and use common statistical tests to identify the difference between the two markets in each aspect. These differences are of considerable interest to traders, stock exchanges and policy makers.
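
    A minimal sketch of the baseline model discussed above, a MEM(1,1) for a nonnegative series such as trade durations, with an exponential unit-mean innovation; the parameter values are illustrative only.

        import numpy as np

        # Sketch of a multiplicative error model, MEM(1,1):
        #   x_t = mu_t * eps_t,   mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1},
        # with eps_t a positive unit-mean innovation (here exponential).
        rng = np.random.default_rng(42)
        omega, alpha, beta = 0.1, 0.2, 0.7
        n = 1000

        x = np.empty(n)
        mu = np.empty(n)
        mu[0] = omega / (1.0 - alpha - beta)            # unconditional mean
        x[0] = mu[0] * rng.exponential(1.0)
        for t in range(1, n):
            mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
            x[t] = mu[t] * rng.exponential(1.0)         # nonnegative series (e.g. durations)

        print(f"sample mean {x.mean():.3f}, implied unconditional mean {mu[0]:.3f}")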

  7. Error Analysis for Discontinuous Galerkin Method for Parabolic Problems

    Science.gov (United States)

    Kaneko, Hideaki

    2004-01-01

    In the proposal, the following three objectives are stated: (1) A p-version of the discontinuous Galerkin method for a one dimensional parabolic problem will be established. It should be recalled that the h-version in space was used for the discontinuous Galerkin method. An a priori error estimate as well as a posteriori estimate of this p-finite element discontinuous Galerkin method will be given. (2) The parameter alpha that describes the behavior of ||u_t(t)||_2 was computed exactly. This was made feasible because of the explicitly specified initial condition. For practical heat transfer problems, the initial condition may have to be approximated. Also, if the parabolic problem is proposed on a multi-dimensional region, the parameter alpha, for most cases, would be difficult to compute exactly even in the case that the initial condition is known exactly. The second objective of this proposed research is to establish a method to estimate this parameter. This will be done by computing two discontinuous Galerkin approximate solutions at two different time steps starting from the initial time and use them to derive alpha. (3) The third objective is to consider the heat transfer problem over a two dimensional thin plate. The technique developed by Vogelius and Babuska will be used to establish a discontinuous Galerkin method in which the p-element will be used for through thickness approximation. This h-p finite element approach, that results in a dimensional reduction method, was used for elliptic problems, but the application appears new for the parabolic problem. The dimension reduction method will be discussed together with the time discretization method.

  8. Solution-verified reliability analysis and design of bistable MEMS using error estimation and adaptivity.

    Energy Technology Data Exchange (ETDEWEB)

    Eldred, Michael Scott; Subia, Samuel Ramirez; Neckels, David; Hopkins, Matthew Morgan; Notz, Patrick K.; Adams, Brian M.; Carnes, Brian; Wittwer, Jonathan W.; Bichon, Barron J.; Copps, Kevin D.

    2006-10-01

    This report documents the results for an FY06 ASC Algorithms Level 2 milestone combining error estimation and adaptivity, uncertainty quantification, and probabilistic design capabilities applied to the analysis and design of bistable MEMS. Through the use of error estimation and adaptive mesh refinement, solution verification can be performed in an automated and parameter-adaptive manner. The resulting uncertainty analysis and probabilistic design studies are shown to be more accurate, efficient, reliable, and convenient.

  9. Kinematic Analysis of Speech Sound Sequencing Errors Induced by Delayed Auditory Feedback.

    Science.gov (United States)

    Cler, Gabriel J; Lee, Jackson C; Mittelman, Talia; Stepp, Cara E; Bohland, Jason W

    2017-06-22

    Delayed auditory feedback (DAF) causes speakers to become disfluent and make phonological errors. Methods for assessing the kinematics of speech errors are lacking, with most DAF studies relying on auditory perceptual analyses, which may be problematic, as errors judged to be categorical may actually represent blends of sounds or articulatory errors. Eight typical speakers produced nonsense syllable sequences under normal and DAF (200 ms). Lip and tongue kinematics were captured with electromagnetic articulography. Time-locked acoustic recordings were transcribed, and the kinematics of utterances with and without perceived errors were analyzed with existing and novel quantitative methods. New multivariate measures showed that for 5 participants, kinematic variability for productions perceived to be error free was significantly increased under delay; these results were validated by using the spatiotemporal index measure. Analysis of error trials revealed both typical productions of a nontarget syllable and productions with articulatory kinematics that incorporated aspects of both the target and the perceived utterance. This study is among the first to characterize articulatory changes under DAF and provides evidence for different classes of speech errors, which may not be perceptually salient. New methods were developed that may aid visualization and analysis of large kinematic data sets. https://doi.org/10.23641/asha.5103067.

  10. Comparing measurement error correction methods for rate-of-change exposure variables in survival analysis.

    Science.gov (United States)

    Veronesi, Giovanni; Ferrario, Marco M; Chambless, Lloyd E

    2013-12-01

    In this article we focus on comparing measurement error correction methods for rate-of-change exposure variables in survival analysis, when longitudinal data are observed prior to the follow-up time. Motivational examples include the analysis of the association between changes in cardiovascular risk factors and subsequent onset of coronary events. We derive a measurement error model for the rate of change, estimated through subject-specific linear regression, assuming an additive measurement error model for the time-specific measurements. The rate of change is then included as a time-invariant variable in a Cox proportional hazards model, adjusting for the first time-specific measurement (baseline) and an error-free covariate. In a simulation study, we compared bias, standard deviation and mean squared error (MSE) for the regression calibration (RC) and the simulation-extrapolation (SIMEX) estimators. Our findings indicate that when the amount of measurement error is substantial, RC should be the preferred method, since it has smaller MSE for estimating the coefficients of the rate of change and of the variable measured without error. However, when the amount of measurement error is small, the choice of the method should take into account the event rate in the population and the effect size to be estimated. An application to an observational study, as well as examples of published studies where our model could have been applied, are also provided.
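
    The SIMEX idea compared in the study can be sketched briefly: noise is re-added to the error-prone covariate at increasing variance multiples, the resulting attenuated estimates are modelled as a function of the added-noise level, and the fit is extrapolated back to the no-error case (lambda = -1). A simple linear outcome is used here instead of a Cox model, and all data are simulated for illustration.

        import numpy as np

        # SIMEX sketch for a slope attenuated by additive measurement error.
        rng = np.random.default_rng(7)
        n, sigma_u = 500, 0.8
        x = rng.normal(0.0, 1.0, n)                      # true rate-of-change exposure
        y = 1.5 * x + rng.normal(0.0, 1.0, n)            # outcome (linear for simplicity)
        w = x + rng.normal(0.0, sigma_u, n)              # error-prone measurement

        lambdas = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        slopes = []
        for lam in lambdas:
            sims = []
            for _ in range(50):                          # average over simulated datasets
                w_star = w + rng.normal(0.0, np.sqrt(lam) * sigma_u, n)
                sims.append(np.polyfit(w_star, y, 1)[0])
            slopes.append(np.mean(sims))

        # Quadratic extrapolation of slope(lambda) back to lambda = -1 (the SIMEX estimate)
        coeffs = np.polyfit(lambdas, slopes, 2)
        print("SIMEX-corrected slope:", np.polyval(coeffs, -1.0))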

  11. Classifying Word Identification Errors, Module B: Analysis of Oral Reading Errors. Toward Competence Instructional Materials for Teacher Education. Report No. Case-02-75.

    Science.gov (United States)

    Bursuk, Laura; Matteoni, Louise

    This module is the second in a two-module cluster. Together, the modules are designed to enable students to recognize and identify by type the errors that occur in recorded samples of oral reading. This one--Module B--focuses on the actual analysis of oral reading errors. Using the understanding of the phonemic and morphemic elements of English…

  12. Optical coherence tomography signal analysis: LIDAR like equation and inverse methods

    International Nuclear Information System (INIS)

    Amaral, Marcello Magri

    2012-01-01

    Optical Coherence Tomography (OCT) is based on the backscattering properties of the medium in order to obtain tomographic images. In a similar way, the LIDAR (Light Detection and Ranging) technique uses these properties to determine atmospheric characteristics, especially the signal extinction coefficient. Exploring this similarity allowed the application of signal inversion methods to OCT images, making it possible to construct images based on the extinction coefficient, an original result until now. The goal of this work was to study, propose, develop and implement algorithms based on OCT signal inversion methodologies, with the aim of determining the extinction coefficient as a function of depth. Three inversion methods were used and implemented in LabVIEW: slope, boundary point and optical depth. The associated errors were studied and real samples (homogeneous and stratified) were used for two- and three-dimensional analysis. The extinction coefficient images obtained from the optical depth method were capable of differentiating air from the sample. The images were studied by applying PCA and cluster analysis, which established the strength of the methodology in determining the sample's extinction coefficient value. Moreover, the optical depth methodology was applied to study the hypothesis that there is some correlation between the signal extinction coefficient and enamel demineralization during a cariogenic process. By applying this methodology, it was possible to observe the variation of the extinction coefficient as a function of depth and its correlation with the microhardness variation, showing that in deeper layers its value tends towards that of a healthy tooth, behaving in the same way as the microhardness. (author)
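
    Of the three inversion methods mentioned, the slope method is the simplest to sketch: in a homogeneous region the depth-resolved OCT signal decays exponentially, so the total extinction coefficient follows from a linear fit of the log signal versus depth. The decay constant and noise level below are illustrative.

        import numpy as np

        # Sketch of the "slope" inversion: in a homogeneous region the OCT A-scan
        # decays as I(z) ~ I0 * exp(-2 * mu_t * z), so the total extinction
        # coefficient mu_t follows from a linear fit of log(I) versus depth.
        z = np.linspace(0.0, 1.5, 300)                    # depth, mm
        mu_true = 2.0                                     # mm^-1, illustrative
        signal = np.exp(-2.0 * mu_true * z) * (1 + 0.05 * np.random.randn(z.size))

        slope = np.polyfit(z, np.log(np.clip(signal, 1e-6, None)), 1)[0]
        mu_estimated = -slope / 2.0
        print(f"estimated extinction coefficient: {mu_estimated:.2f} mm^-1")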

  13. Error Modeling and Sensitivity Analysis of a Five-Axis Machine Tool

    Directory of Open Access Journals (Sweden)

    Wenjie Tian

    2014-01-01

    Full Text Available Geometric error modeling and its sensitivity analysis are carried out in this paper, which is helpful for precision design of machine tools. Screw theory and rigid body kinematics are used to establish the error model of an RRTTT-type five-axis machine tool, which enables the source errors affecting the compensable and uncompensable pose accuracy of the machine tool to be explicitly separated, thereby providing designers and/or field engineers with an informative guideline for the accuracy improvement by suitable measures, that is, component tolerancing in design, manufacturing, and assembly processes, and error compensation. The sensitivity analysis method is proposed, and the sensitivities of compensable and uncompensable pose accuracies are analyzed. The analysis results will be used for the precision design of the machine tool.

  14. Error analysis for pesticide detection performed on paper-based microfluidic chip devices

    Science.gov (United States)

    Yang, Ning; Shen, Kai; Guo, Jianjiang; Tao, Xinyi; Xu, Peifeng; Mao, Hanping

    2017-07-01

    Paper chips are efficient and inexpensive devices for pesticide residue detection. However, the sources of detection error are not clear, which is the main problem hindering the development of pesticide residue detection. This paper focuses on error analysis for pesticide detection performed on paper-based microfluidic chip devices, testing every possible factor to build mathematical models of the detection error. As a result, the double-channel structure is selected as the optimal chip structure for reducing detection error effectively. The wavelength of 599.753 nm is chosen since it is the detection wavelength most sensitive to variation in pesticide concentration. Finally, mathematical models of the detection error with respect to detection temperature and preparation time are established. This research lays a theoretical foundation for accurate pesticide residue detection based on paper-based microfluidic chip devices.

  15. An Analysis of Errors in a Reuse-Oriented Development Environment

    Science.gov (United States)

    Thomas, William M.; Delis, Alex; Basili, Victor R.

    1995-01-01

    Component reuse is widely considered vital for obtaining significant improvement in development productivity. However, as an organization adopts a reuse-oriented development process, the nature of the problems in development is likely to change. In this paper, we use a measurement-based approach to better understand and evaluate an evolving reuse process. More specifically, we study the effects of reuse across seven projects in a narrow domain from a single development organization. An analysis of the errors that occur in new and reused components across all phases of system development provides insight into the factors influencing the reuse process. We found significant differences between errors associated with new and various types of reused components in terms of the types of errors committed, when errors are introduced, and the effect that the errors have on the development process.

  16. Phonological analysis of substitution errors of patients with apraxia of speech

    Directory of Open Access Journals (Sweden)

    Maysa Luchesi Cera

    Full Text Available The literature on apraxia of speech describes the types and characteristics of phonological errors in this disorder. In general, phonemes affected by errors are described, but the distinctive features involved have not yet been investigated. Objective: To analyze the features involved in substitution errors produced by Brazilian-Portuguese speakers with apraxia of speech. Methods: 20 adults with apraxia of speech were assessed. Phonological analysis of the distinctive features involved in substitution type errors was carried out using the protocol for the evaluation of verbal and non-verbal apraxia. Results: The most affected features were: voiced, continuant, high, anterior, coronal, posterior. Moreover, the mean of the substitutions of marked to markedness features was statistically greater than the markedness to marked features. Conclusions: This study contributes toward a better characterization of the phonological errors found in apraxia of speech, thereby helping to diagnose communication disorders and the selection criteria of phonemes for rehabilitation in these patients.

  17. Covariance Analysis Tool (G-CAT) for Computing Ascent, Descent, and Landing Errors

    Science.gov (United States)

    Boussalis, Dhemetrios; Bayard, David S.

    2013-01-01

    G-CAT is a covariance analysis tool that enables fast and accurate computation of error ellipses for descent, landing, ascent, and rendezvous scenarios, and quantifies knowledge error contributions needed for error budgeting purposes. Because G-CAT supports hardware/system trade studies in spacecraft and mission design, it is useful in both early and late mission/proposal phases where Monte Carlo simulation capability is not mature, Monte Carlo simulation takes too long to run, and/or there is a need to perform multiple parametric system design trades that would require an unwieldy number of Monte Carlo runs. G-CAT is formulated as a variable-order square-root linearized Kalman filter (LKF), typically using over 120 filter states. An important property of G-CAT is that it is based on a 6-DOF (degrees of freedom) formulation that completely captures the combined effects of both attitude and translation errors on the propagated trajectories. This ensures its accuracy for guidance, navigation, and control (GN&C) analysis. G-CAT provides the desired fast turnaround analysis needed for error budgeting in support of mission concept formulations, design trade studies, and proposal development efforts. The main usefulness of a covariance analysis tool such as G-CAT is its ability to calculate the performance envelope directly from a single run. This is in sharp contrast to running thousands of simulations to obtain similar information using Monte Carlo methods. It does this by propagating the "statistics" of the overall design, rather than simulating individual trajectories. G-CAT supports applications to lunar, planetary, and small body missions. It characterizes onboard knowledge propagation errors associated with inertial measurement unit (IMU) errors (gyro and accelerometer), gravity errors/dispersions (spherical harmonics, mascons), and radar errors (multiple altimeter beams, multiple Doppler velocimeter beams). G-CAT is a standalone MATLAB-based tool intended to
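
    The core idea of covariance analysis, propagating error statistics through linearized dynamics instead of simulating individual trajectories, can be sketched in a few lines; the 6-state model, time step and noise levels below are illustrative placeholders and not G-CAT internals.

        import numpy as np

        # Sketch of covariance propagation: advance the state covariance P through
        # the linearized dynamics F with process noise Q, rather than running many
        # Monte Carlo trajectories.
        dt = 1.0
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)                      # position integrates velocity
        Q = np.diag([0, 0, 0, 1e-6, 1e-6, 1e-6])        # accelerometer/gyro noise proxy

        P = np.diag([100.0, 100.0, 100.0, 0.01, 0.01, 0.01])   # initial knowledge error
        for _ in range(60):                              # propagate for 60 steps
            P = F @ P @ F.T + Q

        position_error_1sigma = np.sqrt(np.diag(P)[:3])
        print("1-sigma position knowledge error:", position_error_1sigma)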

  18. Disasters of endoscopic surgery and how to avoid them: error analysis.

    Science.gov (United States)

    Troidl, H

    1999-08-01

    For every innovation there are two sides to consider. For endoscopic surgery the positive side is more comfort for the patient, and the negative side is new complications, even disasters, such as injuries to organs (e.g., the bowel), vessels, and the common bile duct. These disasters are rare and seldom reported in the scientific world, for example at conferences, at symposia, and in publications. Today there are many methods for testing an innovation (controlled clinical trials, consensus conferences, audits, and confidential inquiries). Reporting "complications", however, does not help to avoid them. We need real methods for avoiding negative outcomes. Failure analysis is the method of choice in industry. If an airplane crashes, error analysis starts immediately. Humans make errors, and making errors means punishment. Failure analysis means rigorously and objectively investigating a clinical situation to find clinically relevant information for avoiding these negative events in the future. Error analysis has four important steps: (1) What was the clinical situation? (2) What has happened? (3) Most important: why did it happen? (4) How do we avoid the negative event or disaster in the future? Error analysis has decisive advantages. It is easy to perform; it supplies clinically relevant information to help avoid such events; and it requires no money. It can be done everywhere, and the information is available in a short time. The other side of the coin is that error analysis is of course retrospective, it may not be objective, and, most importantly, it will probably have legal consequences. To be more effective in medicine and surgery we must handle our errors using a different approach. According to Sir Karl Popper: "The situation is that we have to learn from our errors. To cover up failure is therefore the biggest intellectual sin."

  19. Analysis of concrete material through gamma ray computerized tomography

    International Nuclear Information System (INIS)

    Oliveira Junior, J.M. de

    2004-01-01

    Computerized Tomography (CT) refers to the cross-sectional imaging of an object from either transmission or reflection data collected by illuminating the object from many different directions. The most important contribution of CT is to greatly improve the ability to distinguish regions with different gamma-ray transmittance and to separate overlying structures. The mathematical problem of CT imaging is that of estimating an image from its projections. These projections can represent, for example, the linear attenuation coefficient of γ-rays along the path of the ray. In this work we present some new results obtained by using tomographic techniques to analyze column samples of concrete, checking the distribution of the various materials and looking for structural problems. These concrete samples were made using different proportions of stone, sand and cement. Another set of samples with different proportions of sand and cement was also used to verify the outcome of the CT analysis and the differences between them. The samples were prepared at the Material Laboratory of Faculdade de Engenharia de Sorocaba, following the same procedures used in real concrete tests. The projections used in this work were obtained with the Mini Computerized Tomograph of Uniso (MTCU), located at the Experimental Nuclear Physics Laboratory at the University of Sorocaba. This tomograph operates with a gamma-ray source of 241 Am (60 keV photons, 100 mCi intensity) and a NaI(Tl) scintillation detector. The system features translation and rotation scanning modes, a 100 mm effective field of view, and 1 mm spatial resolution. The image reconstruction problem is solved using Discrete Filtered Backprojection (FBP). (author)
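
    A minimal filtered backprojection sketch, in the spirit of the FBP reconstruction mentioned above: each projection is ramp-filtered in the Fourier domain and then backprojected over the image grid. The sinogram here is a random placeholder rather than measured MTCU data, and no windowed filters or interpolation refinements are included.

        import numpy as np

        def fbp(sinogram, angles_deg):
            """Naive filtered backprojection of a (n_angles, n_detectors) sinogram."""
            n_angles, n_det = sinogram.shape
            freqs = np.fft.fftfreq(n_det)
            ramp = np.abs(freqs)                                  # ideal ramp filter
            filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))

            # Backproject onto an n_det x n_det grid
            grid = np.arange(n_det) - n_det / 2.0
            xx, yy = np.meshgrid(grid, grid)
            image = np.zeros((n_det, n_det))
            for proj, theta in zip(filtered, np.deg2rad(angles_deg)):
                t = xx * np.cos(theta) + yy * np.sin(theta) + n_det / 2.0
                image += np.interp(t.ravel(), np.arange(n_det), proj).reshape(n_det, n_det)
            return image * np.pi / n_angles

        angles = np.linspace(0.0, 180.0, 90, endpoint=False)
        sinogram = np.random.rand(90, 64)                          # placeholder projections
        reconstruction = fbp(sinogram, angles)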

  20. Advanced polarization sensitive analysis in optical coherence tomography

    Science.gov (United States)

    Wieloszyńska, Aleksandra; Strąkowski, Marcin R.

    2017-08-01

    Optical coherence tomography (OCT) is an optical imaging method that is widely applied in a variety of applications. This technology is used for cross-sectional or surface imaging with high resolution in a non-contact and non-destructive way. OCT is very useful in medical applications like ophthalmology, dermatology or dentistry, as well as in fields beyond biomedicine, like stress mapping in polymers or defect detection in protective coatings. Standard OCT imaging is based on intensity images which can visualize the inner structure of scattering objects. However, there are a number of extensions improving the OCT measurement abilities. The main ones are polarization-sensitive OCT (PS-OCT), Doppler-enabled OCT (D-OCT) and spectroscopic OCT (S-OCT). Our research activities have been focused on PS-OCT systems. The polarization-sensitive analysis delivers useful information about the optical anisotropic properties of the evaluated sample. This kind of measurement is very important for inner stress monitoring or, for example, tissue recognition. Based on our research results and knowledge, standard PS-OCT provides only data about the birefringence of the measured sample. However, based on the OCT measurements, more information, including depolarization and diattenuation, might be obtained. In our work, a method based on the Jones formalism is going to be presented. It is used to determine the birefringence, dichroism and optic axis orientation of the tested sample. In this contribution the setup of the optical system, as well as test results verifying the measurement abilities of the system, are going to be presented. A brief discussion of the effectiveness and usefulness of this approach will be carried out.
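
    As a rough illustration of the Jones-formalism analysis described, the sketch below builds a synthetic Jones matrix for a linear retarder and recovers retardance from its eigenvalue phases and diattenuation from its singular values; the retardance and axis angle are assumed values, and the actual PS-OCT processing chain is more involved.

        import numpy as np

        # Sketch of extracting sample properties from a measured Jones matrix J:
        # retardation from the eigenvalue phases, diattenuation from the singular
        # values. J below is a synthetic example, not measured PS-OCT data.
        delta, theta = 0.6, np.deg2rad(20)              # retardance (rad) and axis angle
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        J = R @ np.diag([np.exp(1j * delta / 2), np.exp(-1j * delta / 2)]) @ R.T

        eigvals = np.linalg.eigvals(J)
        retardance = np.abs(np.angle(eigvals[0]) - np.angle(eigvals[1]))

        s = np.linalg.svd(J, compute_uv=False)          # singular values give diattenuation
        diattenuation = (s[0] ** 2 - s[1] ** 2) / (s[0] ** 2 + s[1] ** 2)

        print(f"retardance: {retardance:.3f} rad, diattenuation: {diattenuation:.3f}")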

  1. Epiretinal membrane as a source of errors during the measurement of peripapillary nerve fibre thickness using spectral-domain optical coherence tomography (SD-OCT).

    Science.gov (United States)

    Rüfer, Florian; Bartsch, Julia Jasmin; Erb, Carl; Riehl, Anneliese; Zeitz, Philipp Franko

    2016-10-01

    We aimed to examine the extent to which measurement errors in the determination of retinal nerve fibre layer (RNFL) thickness using spectral-domain optical coherence tomography (SD-OCT) occur in cases of epiretinal membrane, and whether systematic deviations are found in the values obtained. A macular scan and a circumpapillary scan were performed on 97 eyes of 97 patients using SD-OCT. Group 1 comprised 53 patients with epiretinal membrane at an age of 70 ± 4.8 years (median ± average absolute deviation). Group 2 consisted of 44 patients without any macular pathologies (median age 70 ± 5.8 years). Differences in the thickness of the RNFL and segmentation errors in the detection of the RNFL were recorded quantitatively in both groups and checked for statistical significance using non-parametric tests. The median central retinal thickness in Group 1 was 357 ± 79 μm (median ± average absolute deviation), and in Group 2 it was 270 ± 11 μm (p < 0.001). The results of the quadrant-by-quadrant measurement of the average RNFL in Group 1 and Group 2, respectively, were: temporal 88 ± 17 and 73 ± 9 μm, inferior 121 ± 17 and 118 ± 15 μm, nasal 87 ± 15 and 89 ± 14 μm, and superior 115 ± 15 and 114 ± 9 μm. Temporally, the difference was statistically significant (p < 0.001). Segmentation errors of the RNFL were found in 19 of 53 eyes (35.8 %) in Group 1 and in no eyes in Group 2 (p < 0.001). In eyes with epiretinal membrane, measurement errors in SD-OCT occur significantly more frequently than in eyes without retinal pathologies. If epiretinal membrane and glaucoma are present simultaneously, the results of automated RNFL measurement using SD-OCT should be critically scrutinised, even if no papillary changes are visible clinically.

  2. Minimizing treatment planning errors in proton therapy using failure mode and effects analysis

    International Nuclear Information System (INIS)

    Zheng, Yuanshui; Johnson, Randall; Larson, Gary

    2016-01-01

    Purpose: Failure mode and effects analysis (FMEA) is a widely used tool to evaluate safety or reliability in conventional photon radiation therapy. However, reports about FMEA application in proton therapy are scarce. The purpose of this study is to apply FMEA in safety improvement of proton treatment planning at their center. Methods: The authors performed an FMEA analysis of their proton therapy treatment planning process using uniform scanning proton beams. The authors identified possible failure modes in various planning processes, including image fusion, contouring, beam arrangement, dose calculation, plan export, documents, billing, and so on. For each error, the authors estimated the frequency of occurrence, the likelihood of being undetected, and the severity of the error if it went undetected and calculated the risk priority number (RPN). The FMEA results were used to design their quality management program. In addition, the authors created a database to track the identified dosimetric errors. Periodically, the authors reevaluated the risk of errors by reviewing the internal error database and improved their quality assurance program as needed. Results: In total, the authors identified over 36 possible treatment planning related failure modes and estimated the associated occurrence, detectability, and severity to calculate the overall risk priority number. Based on the FMEA, the authors implemented various safety improvement procedures into their practice, such as education, peer review, and automatic check tools. The ongoing error tracking database provided realistic data on the frequency of occurrence with which to reevaluate the RPNs for various failure modes. Conclusions: The FMEA technique provides a systematic method for identifying and evaluating potential errors in proton treatment planning before they result in an error in patient dose delivery. The application of FMEA framework and the implementation of an ongoing error tracking system at their
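
    The FMEA scoring step reduces to a simple calculation: each failure mode's risk priority number is the product of its occurrence, severity and detectability ratings, and the modes are ranked by RPN. The failure modes and scores below are illustrative, not the study's data.

        # Sketch of a failure mode and effects analysis (FMEA) ranking by risk
        # priority number, RPN = occurrence x severity x detectability.
        failure_modes = [
            {"step": "image fusion",     "occurrence": 3, "severity": 7, "detectability": 4},
            {"step": "beam arrangement", "occurrence": 2, "severity": 8, "detectability": 3},
            {"step": "plan export",      "occurrence": 4, "severity": 6, "detectability": 5},
        ]

        for fm in failure_modes:
            fm["rpn"] = fm["occurrence"] * fm["severity"] * fm["detectability"]

        # Rank failure modes from highest to lowest risk
        for fm in sorted(failure_modes, key=lambda fm: fm["rpn"], reverse=True):
            print(f'{fm["step"]:<16} RPN = {fm["rpn"]}')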

  3. Patient safety in the clinical laboratory: a longitudinal analysis of specimen identification errors.

    Science.gov (United States)

    Wagar, Elizabeth A; Tamashiro, Lorraine; Yasin, Bushra; Hilborne, Lee; Bruckner, David A

    2006-11-01

    Patient safety is an increasingly visible and important mission for clinical laboratories. Attention to improving processes related to patient identification and specimen labeling is being paid by accreditation and regulatory organizations because errors in these areas that jeopardize patient safety are common and avoidable through improvement in the total testing process. To assess patient identification and specimen labeling improvement after multiple implementation projects using longitudinal statistical tools. Specimen errors were categorized by a multidisciplinary health care team. Patient identification errors were grouped into 3 categories: (1) specimen/requisition mismatch, (2) unlabeled specimens, and (3) mislabeled specimens. Specimens with these types of identification errors were compared preimplementation and postimplementation for 3 patient safety projects: (1) reorganization of phlebotomy (4 months); (2) introduction of an electronic event reporting system (10 months); and (3) activation of an automated processing system (14 months) for a 24-month period, using trend analysis and Student t test statistics. Of 16,632 total specimen errors, mislabeled specimens, requisition mismatches, and unlabeled specimens represented 1.0%, 6.3%, and 4.6% of errors, respectively. Student t test showed a significant decrease in the most serious error, mislabeled specimens (P < .001) when compared to before implementation of the 3 patient safety projects. Trend analysis demonstrated decreases in all 3 error types for 26 months. Applying performance-improvement strategies that focus longitudinally on specimen labeling errors can significantly reduce errors, therefore improving patient safety. This is an important area in which laboratory professionals, working in interdisciplinary teams, can improve safety and outcomes of care.

  4. Single trial time-frequency domain analysis of error processing in post-traumatic stress disorder.

    Science.gov (United States)

    Clemans, Zachary A; El-Baz, Ayman S; Hollifield, Michael; Sokhadze, Estate M

    2012-09-13

    Error processing studies in psychology and psychiatry are relatively common. Event-related potentials (ERPs) are often used as measures of error processing, two such response-locked ERPs being the error-related negativity (ERN) and the error-related positivity (Pe). The ERN and Pe occur following committed error in reaction time tasks as low frequency (4-8 Hz) electroencephalographic (EEG) oscillations registered at the midline fronto-central sites. We created an alternative method for analyzing error processing using time-frequency analysis in the form of a wavelet transform. A study was conducted in which subjects with PTSD and healthy control completed a forced-choice task. Single trial EEG data from errors in the task were processed using a continuous wavelet transform. Coefficients from the transform that corresponded to the theta range were averaged to isolate a theta waveform in the time-frequency domain. Measures called the time-frequency ERN and Pe were obtained from these waveforms for five different channels and then averaged to obtain a single time-frequency ERN and Pe for each error trial. A comparison of the amplitude and latency for the time-frequency ERN and Pe between the PTSD and control group was performed. A significant group effect was found on the amplitude of both measures. These results indicate that the developed single trial time-frequency error analysis method is suitable for examining error processing in PTSD and possibly other psychiatric disorders. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.
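
    A simplified sketch of the single-trial time-frequency approach described above: a complex Morlet wavelet convolution yields theta-band (4-8 Hz) power around the response, from which an amplitude measure can be taken in a post-error window. The synthetic EEG trace, window limits and wavelet parameters are illustrative assumptions.

        import numpy as np

        # Sketch: single-trial theta-band (4-8 Hz) power around an error response,
        # obtained by convolving the EEG with complex Morlet wavelets.
        fs = 250.0                                        # sampling rate, Hz
        t = np.arange(-0.5, 1.0, 1.0 / fs)                # epoch around the error response
        eeg = np.random.randn(t.size)                     # placeholder single-trial EEG

        def morlet_power(signal, freq, fs, n_cycles=4):
            dt = 1.0 / fs
            wt = np.arange(-0.35, 0.35, dt)               # wavelet support, shorter than epoch
            sigma = n_cycles / (2 * np.pi * freq)
            wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * sigma**2))
            wavelet /= np.sum(np.abs(wavelet))
            return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

        theta = np.mean([morlet_power(eeg, f, fs) for f in (4, 5, 6, 7, 8)], axis=0)
        ern_window = (t >= 0.0) & (t <= 0.15)             # typical ERN latency window
        print(f"peak theta power 0-150 ms after the error: {theta[ern_window].max():.3f}")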

  5. Dealing with Uncertainties A Guide to Error Analysis

    CERN Document Server

    Drosg, Manfred

    2007-01-01

    Dealing with Uncertainties proposes and explains a new approach to the analysis of uncertainties. Firstly, it is shown that uncertainties are the consequence of modern science rather than of measurements. Secondly, it stresses the importance of the deductive approach to uncertainties. This perspective has the potential to deal with the uncertainty of a single data point and with data sets having differing weights. Neither case can be handled by the inductive approach, which is usually taken. This innovative monograph also fully covers both uncorrelated and correlated uncertainties. The weakness of using statistical weights in regression analysis is discussed. Abundant examples are given for correlation in and between data sets and for the feedback of uncertainties on experiment design.

  6. M/T method based incremental encoder velocity measurement error analysis and self-adaptive error elimination algorithm

    DEFF Research Database (Denmark)

    Chen, Yangyang; Yang, Ming; Long, Jiang

    2017-01-01

    For motor control applications, the speed loop performance largely depends on the accuracy of the speed feedback signal. The M/T method, due to its high theoretical accuracy, is the most widely used in incremental-encoder-based speed measurement. However, the inherent encoder optical grating error...... and A/D conversion error make it hard to achieve the theoretical speed measurement accuracy. In this paper, hardware-caused speed measurement errors are analyzed and modeled in detail; a Single-Phase Self-adaptive M/T method is proposed to ideally suppress speed measurement error. In the end, simulation...
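
    For context, the basic M/T computation counts M1 encoder pulses and M2 high-frequency clock pulses over a gate synchronised to encoder edges and converts the ratio to speed; the encoder resolution and clock frequency below are illustrative.

        # Sketch of the M/T speed measurement principle: count M1 encoder pulses and
        # M2 high-frequency clock pulses over a gate synchronised to encoder edges,
        # then convert to speed. Values are illustrative.
        ENCODER_LINES = 2048        # encoder pulses per revolution
        F_CLOCK = 1_000_000.0       # high-frequency clock, Hz

        def mt_speed_rpm(m1_encoder_pulses: int, m2_clock_pulses: int) -> float:
            """Rotational speed in rpm from one M/T measurement window."""
            gate_time = m2_clock_pulses / F_CLOCK               # actual window length, s
            revolutions = m1_encoder_pulses / ENCODER_LINES
            return 60.0 * revolutions / gate_time

        print(mt_speed_rpm(m1_encoder_pulses=512, m2_clock_pulses=50_000))  # -> 300 rpm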

  7. Thermal error analysis and compensation for digital image/volume correlation

    Science.gov (United States)

    Pan, Bing

    2018-02-01

    Digital image/volume correlation (DIC/DVC) rely on the digital images acquired by digital cameras and x-ray CT scanners to extract the motion and deformation of test samples. Regrettably, these imaging devices are unstable optical systems, whose imaging geometry may undergo unavoidable slight and continual changes due to self-heating effect or ambient temperature variations. Changes in imaging geometry lead to both shift and expansion in the recorded 2D or 3D images, and finally manifest as systematic displacement and strain errors in DIC/DVC measurements. Since measurement accuracy is always the most important requirement in various experimental mechanics applications, these thermal-induced errors (referred to as thermal errors) should be given serious consideration in order to achieve high accuracy, reproducible DIC/DVC measurements. In this work, theoretical analyses are first given to understand the origin of thermal errors. Then real experiments are conducted to quantify thermal errors. Three solutions are suggested to mitigate or correct thermal errors. Among these solutions, a reference sample compensation approach is highly recommended because of its easy implementation, high accuracy and in-situ error correction capability. Most of the work has appeared in our previously published papers, thus its originality is not claimed. Instead, this paper aims to give a comprehensive overview and more insights of our work on thermal error analysis and compensation for DIC/DVC measurements.

  8. Verb retrieval in brain-damaged subjects: 2. Analysis of errors.

    Science.gov (United States)

    Kemmerer, D; Tranel, D

    2000-07-01

    Verb retrieval for action naming was assessed in 53 brain-damaged subjects by administering a standardized test with 100 items. In a companion paper (Kemmerer & Tranel, 2000), it was shown that impaired and unimpaired subjects did not differ as groups in their sensitivity to a variety of stimulus, lexical, and conceptual factors relevant to the test. For this reason, the main goal of the present study was to determine whether the two groups of subjects manifested theoretically interesting differences in the kinds of errors that they made. All of the subjects' errors were classified according to an error coding system that contains 27 distinct types of errors belonging to five broad categories-verbs, phrases, nouns, adpositional words, and "other" responses. Errors involving the production of verbs that are semantically related to the target were especially prevalent for the unimpaired group, which is similar to the performance of normal control subjects. By contrast, the impaired group had a significantly smaller proportion of errors in the verb category and a significantly larger proportion of errors in each of the nonverb categories. This relationship between error rate and error type is consistent with previous research on both object and action naming errors, and it suggests that subjects with only mild damage to putative lexical systems retain an appreciation of most of the semantic, phonological, and grammatical category features of words, whereas subjects with more severe damage retain a much smaller set of features. At the level of individual subjects, a wide range of "predominant error types" were found, especially among the impaired subjects, which may reflect either different action naming strategies or perhaps different patterns of preservation and impairment of various lexical components. Overall, this study provides a novel addition to the existing literature on the analysis of naming errors made by brain-damaged subjects. Not only does the study

  9. Use of error files in uncertainty analysis and data adjustment

    International Nuclear Information System (INIS)

    Chestnutt, M.M.; McCracken, A.K.

    1979-01-01

    Some results are given from uncertainty analyses on Pressurized Water Reactor (PWR) and Fast Reactor Theoretical Benchmarks. Upper limit estimates of calculated quantities are shown to be significantly reduced by the use of ENDF/B data covariance files and recently published few-group covariance matrices. Some problems in the analysis of single-material benchmark experiments are discussed with reference to the Winfrith iron benchmark experiment. Particular attention is given to the difficulty of making use of very extensive measurements which are likely to be a feature of this type of experiment. Preliminary results of an adjustment in iron are shown

  10. Development of safety analysis and constraint detection techniques for process interaction errors

    International Nuclear Information System (INIS)

    Fan, Chin-Feng; Tsai, Shang-Lin; Tseng, Wan-Hui

    2011-01-01

    Among the new failure modes introduced by computers into safety systems, the process interaction error is the most unpredictable and complicated failure mode, and it may cause disastrous consequences. This paper presents safety analysis and constraint detection techniques for process interaction errors among hardware, software, and human processes. Among interaction errors, the most dreadful ones are those that involve run-time misinterpretation by a logic process. We call them 'semantic interaction errors'. Such abnormal interaction is not adequately emphasized in current research. In our static analysis, we provide a fault tree template focusing on semantic interaction errors by checking conflicting pre-conditions and post-conditions among interacting processes. Thus, far-fetched but highly risky interaction scenarios involving interpretation errors can be identified. For run-time monitoring, a range of constraint types is proposed for checking abnormal signs at run time. We extend current constraints to a broader relational level and a global level, considering process/device dependencies and physical conservation rules in order to detect process interaction errors. The proposed techniques can reduce abnormal interactions; they can also be used to assist in safety-case construction.

  11. Gait Analysis of Transfemoral Amputees: Errors in Inverse Dynamics Are Substantial and Depend on Prosthetic Design.

    Science.gov (United States)

    Dumas, Raphael; Branemark, Rickard; Frossard, Laurent

    2017-06-01

    Quantitative assessments of prosthesis performance rely increasingly on gait analysis focusing on prosthetic knee joint forces and moments computed by inverse dynamics. However, this method is prone to errors, as demonstrated by comparison with direct measurements of these forces and moments. The magnitude of the errors reported in the literature seems to vary depending on the prosthetic components. Therefore, the purposes of this study were (A) to quantify and compare the magnitude of errors in knee joint forces and moments obtained with inverse dynamics and with direct measurements on ten participants with transfemoral amputation during walking and (B) to investigate whether these errors can be characterised for different prosthetic knees. Knee joint forces and moments computed by inverse dynamics presented substantial errors, especially during the swing phase of gait. Indeed, the median errors in percentage of the moment magnitude were 4% and 26% in extension/flexion, 6% and 19% in adduction/abduction, and 14% and 27% in internal/external rotation during the stance and swing phases, respectively. Moreover, the errors varied depending on whether the prosthetic limb was fitted with a mechanical or a microprocessor-controlled knee. This study confirmed that inverse dynamics should be used cautiously when performing gait analysis of amputees. Alternatively, direct measurement of joint forces and moments could be relevant for the mechanical characterisation of components and alignments of prosthetic limbs.

  12. Estimation of the human error probabilities in the human reliability analysis

    International Nuclear Information System (INIS)

    Liu Haibin; He Xuhong; Tong Jiejuan; Shen Shifei

    2006-01-01

    Human error data are an important issue in human reliability analysis (HRA). Bayesian parameter estimation, which can combine multiple sources of information such as the operating history of the NPP and expert judgment to update generic human error data, can yield human error data that reflect the actual situation of the NPP more faithfully. Using a numerical computation program developed by the authors, this paper presents some typical examples to illustrate the process of Bayesian parameter estimation in HRA and discusses the effect of different sources of updating data on the estimation. (authors)
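
    One common way to combine a generic prior with plant-specific evidence, broadly in the spirit described here, is a conjugate beta-binomial update of a human error probability (HEP). The prior parameters and observed counts below are assumed values, and the sketch is not the authors' numerical program.

        from scipy import stats

        # Hypothetical beta-binomial update of a human error probability (HEP):
        # generic data give a Beta(a, b) prior; plant records contribute k errors in n demands.
        a_prior, b_prior = 1.5, 500.0      # prior roughly centred on HEP ~ 3e-3 (assumed)
        k, n = 2, 1200                     # plant-specific evidence (assumed)

        a_post = a_prior + k
        b_post = b_prior + (n - k)

        posterior = stats.beta(a_post, b_post)
        print("posterior mean HEP:", posterior.mean())
        print("90% credible interval:", posterior.ppf([0.05, 0.95]))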

  13. SYNTACTIC ERRORS ANALYSIS IN THE CASUAL CONVERSATION 60 COMMITED BY TWO SENIOR HIGH STUDENTS

    Directory of Open Access Journals (Sweden)

    Anjar Setiawan

    2017-12-01

    Full Text Available Syntactic structures are the basis of English grammar. This study aimed to analyze the syntactic errors in casual conversation committed by two senior high school students of MAN 2 Semarang. The researcher used a qualitative approach to analyze and interpret the meaning of the casual conversation. The collected data were transcribed and analyzed according to the areas of syntactic error analysis. The findings showed that errors occurred in all syntactic areas during the conversation, including auxiliaries, tenses, articles, prepositions, and conjunctions. Both speakers also had a relatively weak vocabulary, and their sentences were sometimes incomprehensible to the interlocutor.

  14. Classification of Error-Diffused Halftone Images Based on Spectral Regression Kernel Discriminant Analysis

    Directory of Open Access Journals (Sweden)

    Zhigao Zeng

    2016-01-01

    Full Text Available This paper proposes a novel algorithm to solve the challenging problem of classifying error-diffused halftone images. We first design class feature matrices, after extracting image patches according to their statistical characteristics, to classify the error-diffused halftone images. Then, spectral regression kernel discriminant analysis is used for feature dimension reduction. The error-diffused halftone images are finally classified using an idea similar to the nearest centroid classifier. As demonstrated by the experimental results, our method is fast and achieves a high classification accuracy, with the added benefit of robustness to noise.
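
    Spectral regression kernel discriminant analysis is not a stock routine in common libraries, so the sketch below substitutes an ordinary kernel-based dimension reduction (KernelPCA) followed by a nearest-centroid classifier to convey the overall pipeline of feature extraction, kernel reduction and centroid classification; it is an analogous stand-in, not the paper's algorithm, and the data are random placeholders rather than halftone images.

        import numpy as np
        from sklearn.decomposition import KernelPCA
        from sklearn.neighbors import NearestCentroid
        from sklearn.pipeline import make_pipeline

        # Placeholder "class feature" vectors for two halftone classes (random stand-ins).
        rng = np.random.default_rng(0)
        X = np.vstack([rng.normal(0.0, 1.0, (50, 64)),
                       rng.normal(0.5, 1.0, (50, 64))])
        y = np.array([0] * 50 + [1] * 50)

        # Kernel-based dimension reduction + nearest-centroid classification,
        # analogous in spirit (not identical) to SR-KDA followed by nearest centroids.
        clf = make_pipeline(KernelPCA(n_components=10, kernel="rbf", gamma=0.01),
                            NearestCentroid())
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))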

  15. Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data

    Science.gov (United States)

    Wilson, R. G.

    1975-01-01

    The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with those determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.

  16. Study on Network Error Analysis and Locating based on Integrated Information Decision System

    Science.gov (United States)

    Yang, F.; Dong, Z. H.

    2017-10-01

    Integrated information decision system (IIDS) integrates multiple sub-systems developed by many facilities, including almost a hundred kinds of software, which provide various services such as email, short messages, drawing, and sharing. Because the underlying protocols differ and user standards are not unified, many errors occur during setup, configuration, and operation, which seriously affect usage. Because these errors are varied and may occur in different operation phases, stages, TCP/IP protocol layers, and sub-system software, it is necessary to design a network error analysis and locating tool for IIDS to solve the above problems. This paper studies network error analysis and locating based on IIDS, which provides strong theoretical and technical support for the running and communication of IIDS.

  17. Why Is Rainfall Error Analysis Requisite for Data Assimilation and Climate Modeling?

    Science.gov (United States)

    Hou, Arthur Y.; Zhang, Sara Q.

    2004-01-01

    Given the large temporal and spatial variability of precipitation processes, errors in rainfall observations are difficult to quantify yet crucial to making effective use of rainfall data for improving atmospheric analysis, weather forecasting, and climate modeling. We highlight the need for developing a quantitative understanding of systematic and random errors in precipitation observations by examining explicit examples of how each type of error can affect forecasts and analyses in global data assimilation. We characterize the error information needed from the precipitation measurement community and how it may be used to improve data usage within the general framework of analysis techniques, as well as accuracy requirements from the perspective of climate modeling and global data assimilation.

  18. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on the part of operators, designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis and for termination of the search for 'causes'. In addition, the concept of human error is analysed and its intimate relation with human adaptation and learning is discussed. It is concluded that identification of errors as a separate class of behaviour is becoming increasingly difficult in modern work environments.

  19. Mars Entry Atmospheric Data System Modeling, Calibration, and Error Analysis

    Science.gov (United States)

    Karlgaard, Christopher D.; VanNorman, John; Siemers, Paul M.; Schoenenberger, Mark; Munk, Michelle M.

    2014-01-01

    The Mars Science Laboratory (MSL) Entry, Descent, and Landing Instrumentation (MEDLI)/Mars Entry Atmospheric Data System (MEADS) project installed seven pressure ports through the MSL Phenolic Impregnated Carbon Ablator (PICA) heatshield to measure heatshield surface pressures during entry. These measured surface pressures are used to generate estimates of atmospheric quantities based on modeled surface pressure distributions. In particular, the quantities to be estimated from the MEADS pressure measurements include the dynamic pressure, angle of attack, and angle of sideslip. This report describes the calibration of the pressure transducers utilized to reconstruct the atmospheric data and associated uncertainty models, pressure modeling and uncertainty analysis, and system performance results. The results indicate that the MEADS pressure measurement system hardware meets the project requirements.

  20. Meta-analysis of small RNA-sequencing errors reveals ubiquitous post-transcriptional RNA modifications

    OpenAIRE

    Ebhardt, H. Alexander; Tsang, Herbert H.; Dai, Denny C.; Liu, Yifeng; Bostan, Babak; Fahlman, Richard P.

    2009-01-01

    Recent advances in DNA-sequencing technology have made it possible to obtain large datasets of small RNA sequences. Here we demonstrate that not all non-perfectly matched small RNA sequences are simple technological sequencing errors, but many hold valuable biological information. Analysis of three small RNA datasets originating from Oryza sativa and Arabidopsis thaliana small RNA-sequencing projects demonstrates that many single nucleotide substitution errors overlap when aligning homologous...

  1. Error and Symmetry Analysis of Misner's Algorithm for Spherical Harmonic Decomposition on a Cubic Grid

    Science.gov (United States)

    Fiske, David R.

    2004-01-01

    In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.

  2. Maternal Recall Error in Retrospectively Reported Time-to-Pregnancy: an Assessment and Bias Analysis.

    Science.gov (United States)

    Radin, Rose G; Rothman, Kenneth J; Hatch, Elizabeth E; Mikkelsen, Ellen M; Sorensen, Henrik T; Riis, Anders H; Fox, Matthew P; Wise, Lauren A

    2015-11-01

    Epidemiologic studies of fecundability often use retrospectively measured time-to-pregnancy (TTP), thereby introducing potential for recall error. Little is known about how recall error affects the bias and precision of the fecundability odds ratio (FOR) in such studies. Using data from the Danish Snart-Gravid Study (2007-12), we quantified error for TTP recalled in the first trimester of pregnancy relative to prospectively measured TTP among 421 women who enrolled at the start of their pregnancy attempt and became pregnant within 12 months. We defined recall error as retrospectively measured TTP minus prospectively measured TTP. Using linear regression, we assessed mean differences in recall error by maternal characteristics. We evaluated the resulting bias in the FOR and 95% confidence interval (CI) using simulation analyses that compared corrected and uncorrected retrospectively measured TTP values. Recall error (mean = -0.11 months, 95% CI -0.25, 0.04) was not appreciably associated with maternal age, gravidity, or recent oral contraceptive use. Women with TTP > 2 months were more likely to underestimate their TTP than women with TTP ≤ 2 months (unadjusted mean difference in error: -0.40 months, 95% CI -0.71, -0.09). FORs of recent oral contraceptive use calculated from prospectively measured, retrospectively measured, and corrected TTPs were 0.82 (95% CI 0.67, 0.99), 0.74 (95% CI 0.61, 0.90), and 0.77 (95% CI 0.62, 0.96), respectively. Recall error was small on average among pregnancy planners who became pregnant within 12 months. Recall error biased the FOR of recent oral contraceptive use away from the null by 10%. Quantitative bias analysis of the FOR can help researchers quantify the bias from recall error. © 2015 John Wiley & Sons Ltd.

  3. Detecting medication errors in the New Zealand pharmacovigilance database: a retrospective analysis.

    Science.gov (United States)

    Kunac, Desireé L; Tatley, Michael V

    2011-01-01

    Despite the traditional focus being adverse drug reactions (ADRs), pharmacovigilance centres have recently been identified as a potentially rich and important source of medication error data. To identify medication errors in the New Zealand Pharmacovigilance database (Centre for Adverse Reactions Monitoring [CARM]), and to describe the frequency and characteristics of these events. A retrospective analysis of the CARM pharmacovigilance database operated by the New Zealand Pharmacovigilance Centre was undertaken for the year 1 January-31 December 2007. All reports, excluding those relating to vaccines, clinical trials and pharmaceutical company reports, underwent a preventability assessment using predetermined criteria. Those events deemed preventable were subsequently classified to identify the degree of patient harm, type of error, stage of medication use process where the error occurred and origin of the error. A total of 1412 reports met the inclusion criteria and were reviewed, of which 4.3% (61/1412) were deemed preventable. Not all errors resulted in patient harm: 29.5% (18/61) were 'no harm' errors but 65.5% (40/61) of errors were deemed to have been associated with some degree of patient harm (preventable adverse drug events [ADEs]). For 5.0% (3/61) of events, the degree of patient harm was unable to be determined as the patient outcome was unknown. The majority of preventable ADEs (62.5% [25/40]) occurred in adults aged 65 years and older. The medication classes most involved in preventable ADEs were antibacterials for systemic use and anti-inflammatory agents, with gastrointestinal and respiratory system disorders the most common adverse events reported. For both preventable ADEs and 'no harm' events, most errors were incorrect dose and drug therapy monitoring problems consisting of failures in detection of significant drug interactions, past allergies or lack of necessary clinical monitoring. Preventable events were mostly related to the prescribing and

  4. Review of human error analysis methodologies and case study for accident management

    International Nuclear Information System (INIS)

    Jung, Won Dae; Kim, Jae Whan; Lee, Yong Hee; Ha, Jae Joo

    1998-03-01

    In this research, we tried to establish the requirements for the development of a new human error analysis (HEA) method. To achieve this goal, we performed a case study in the following steps: (1) review of the existing HEA methods; (2) selection of the methods considered appropriate for the analysis of operators' tasks in NPPs; and (3) application of the selected methods to chosen tasks. The methods selected for the case study were HRMS (Human Reliability Management System), PHECA (Potential Human Error Cause Analysis), and CREAM (Cognitive Reliability and Error Analysis Method), and the tasks chosen for the application were 'bleed and feed operation' and 'decision-making for reactor cavity flooding'. We assessed the applicability of the selected methods to the NPP tasks and evaluated the advantages and disadvantages of each method. All three methods turned out to be applicable for the prediction of human error. We concluded that both CREAM and HRMS are sufficiently applicable to the NPP tasks; comparing the two, however, CREAM is considered more appropriate than HRMS from the viewpoint of the overall requirements. The requirements for the new HEA method obtained from the study can be summarized as follows: firstly, it should deal with cognitive error analysis; secondly, it should have an adequate classification system for the NPP tasks; thirdly, the description of the error causes and error mechanisms should be explicit; fourthly, it should maintain the consistency of the result by minimizing the ambiguity in each step of the analysis procedure; and fifthly, it should be feasible with acceptable human resources. (author). 25 refs., 30 tabs., 4 figs

  5. Review of human error analysis methodologies and case study for accident management

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Won Dae; Kim, Jae Whan; Lee, Yong Hee; Ha, Jae Joo

    1998-03-01

    In this research, we tried to establish the requirements for the development of a new human error analysis (HEA) method. To achieve this goal, we performed a case study in the following steps: (1) review of the existing HEA methods; (2) selection of the methods considered appropriate for the analysis of operators' tasks in NPPs; and (3) application of the selected methods to chosen tasks. The methods selected for the case study were HRMS (Human Reliability Management System), PHECA (Potential Human Error Cause Analysis), and CREAM (Cognitive Reliability and Error Analysis Method), and the tasks chosen for the application were 'bleed and feed operation' and 'decision-making for reactor cavity flooding'. We assessed the applicability of the selected methods to the NPP tasks and evaluated the advantages and disadvantages of each method. All three methods turned out to be applicable for the prediction of human error. We concluded that both CREAM and HRMS are sufficiently applicable to the NPP tasks; comparing the two, however, CREAM is considered more appropriate than HRMS from the viewpoint of the overall requirements. The requirements for the new HEA method obtained from the study can be summarized as follows: firstly, it should deal with cognitive error analysis; secondly, it should have an adequate classification system for the NPP tasks; thirdly, the description of the error causes and error mechanisms should be explicit; fourthly, it should maintain the consistency of the result by minimizing the ambiguity in each step of the analysis procedure; and fifthly, it should be feasible with acceptable human resources. (author). 25 refs., 30 tabs., 4 figs.

  6. A human error analysis methodology, AGAPE-ET, for emergency tasks in nuclear power plants and its application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Whan; Jung, Won Dea [Korea Atomic Energy Research Institute, Taejeon (Korea)

    2002-03-01

    This report presents a procedurised human reliability analysis (HRA) methodology, AGAPE-ET (A Guidance And Procedure for Human Error Analysis for Emergency Tasks), for both qualitative error analysis and quantification of the human error probability (HEP) of emergency tasks in nuclear power plants. AGAPE-ET is based on a simplified cognitive model. For each cognitive function, error causes or error-likely situations have been identified, considering the characteristics of the performance of that cognitive function and the mechanism by which performance influencing factors (PIFs) affect it. Error analysis items have then been determined from the identified error causes or error-likely situations to cue and guide the analysts through the overall human error analysis, and a human error analysis procedure based on these items is organised. The basic scheme for the quantification of HEP consists of multiplying the basic HEP (BHEP) assigned to the error analysis item by the weight obtained from the influencing factors decision tree (IFDT) constructed for each cognitive function. The method is characterised by the structured identification of the weak points of the task to be performed and by an efficient analysis process in which the analysts need only work through the relevant cognitive functions. The report also presents the application of AGAPE-ET to 31 nuclear emergency tasks and its results. 42 refs., 7 figs., 36 tabs. (Author)

  7. De-noising of GPS structural monitoring observation error using wavelet analysis

    Directory of Open Access Journals (Sweden)

    Mosbeh R. Kaloop

    2016-03-01

    Full Text Available In the continuous monitoring of a structure's state properties, such as static and dynamic responses, using the Global Positioning System (GPS), there are unavoidable errors in the observation data. These GPS errors and measurement noises are a disadvantage in precise monitoring applications because they cover up the signals that are needed. The current study applies three methods that are widely used to mitigate sensor observation errors, all based on wavelet analysis: the principal component analysis method, the wavelet compression method, and the de-noising method. These methods are used to remove the GPS observation errors, and their performance is demonstrated using GPS measurements collected from the short-term monitoring system designed for the Mansoura Railway Bridge in Egypt. The results show that GPS errors can be effectively removed, while the full movement components of the structure can be extracted from the original signals using wavelet analysis.
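
    A minimal wavelet de-noising sketch in the spirit of the de-noising method mentioned above, using PyWavelets; the wavelet family, threshold rule and synthetic signal are assumptions, not the settings used in the study.

        import numpy as np
        import pywt

        # Synthetic stand-in for a GPS displacement record: slow structural movement + noise.
        t = np.linspace(0, 60, 1200)
        signal = 2.0 * np.sin(2 * np.pi * 0.05 * t)          # slow movement component
        noisy = signal + np.random.normal(0, 0.5, t.size)    # GPS observation noise

        # Multilevel wavelet decomposition, soft-threshold the detail coefficients, reconstruct.
        coeffs = pywt.wavedec(noisy, "db4", level=5)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate from finest level
        thr = sigma * np.sqrt(2 * np.log(noisy.size))         # universal threshold (assumed rule)
        coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
        denoised = pywt.waverec(coeffs, "db4")[: noisy.size]

        print("residual std after de-noising:", np.std(denoised - signal))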

  8. Electric capacitance tomography and two-phase flow for the nuclear reactor safety analysis

    International Nuclear Information System (INIS)

    Lee, Jae Young

    2008-01-01

    Recently, electric capacitance tomography (ECT) has been developed for use in the analysis of two-phase flow. Although its electric field is not as well focused as that of hard-ray tomography such as X-ray or gamma-ray tomography, its easy access to the system and easy maintenance, owing to the absence of radiation shielding requirements, make it attractive for two-phase flow studies, an important area in nuclear safety analysis. In the present paper, the practical technologies of electric capacitance tomography are presented for both the hardware and the software. In the software part, the forward and inverse problems are discussed, along with the regularization method. In the hardware part, the electronics circuits, which provide femtofarad resolution at a reasonable speed (150 frames/sec for 16 electrodes), are briefly discussed. Some representative idealized cases are studied to demonstrate the potential capability of ECT for two-phase flow analysis. Some variations of the tomography, such as axial tomography and three-dimensional tomography, are also discussed. The present ECT is expected to become a useful tool for understanding complicated three-dimensional two-phase flow, an important capability for safety analysis codes. (author)
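
    For the linearized inverse problem with regularization mentioned above, a common approach is Tikhonov-regularized inversion of the normalized capacitance vector through a sensitivity matrix. The sketch below uses random placeholder matrices and an assumed regularization parameter; it is not the author's reconstruction code.

        import numpy as np

        # Hypothetical linearized ECT reconstruction: c = S g, solved with Tikhonov regularization.
        rng = np.random.default_rng(1)
        n_meas, n_pix = 120, 812            # 16 electrodes -> 120 electrode pairs; pixel count assumed
        S = rng.random((n_meas, n_pix))     # placeholder sensitivity matrix
        g_true = np.zeros(n_pix)
        g_true[300:360] = 1.0               # an assumed high-permittivity (liquid) region
        c = S @ g_true + rng.normal(0, 1e-3, n_meas)   # noisy normalized capacitances

        lam = 1e-2                           # regularization parameter (assumed)
        g_hat = np.linalg.solve(S.T @ S + lam * np.eye(n_pix), S.T @ c)
        print("relative reconstruction error:",
              np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))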

  9. Optical System Error Analysis and Calibration Method of High-Accuracy Star Trackers

    Directory of Open Access Journals (Sweden)

    Zheng You

    2013-04-01

    Full Text Available The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method prove to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  10. Optical system error analysis and calibration method of high-accuracy star trackers.

    Science.gov (United States)

    Sun, Ting; Xing, Fei; You, Zheng

    2013-04-08

    The star tracker is a high-accuracy attitude measurement device widely used in spacecraft. Its performance depends largely on the precision of the optical system parameters. Therefore, the analysis of the optical system parameter errors and a precise calibration model are crucial to the accuracy of the star tracker. Research in this field has so far lacked a systematic and universal analysis. This paper proposes a detailed approach for the synthetic error analysis of the star tracker that avoids complicated theoretical derivation. The approach can determine the error propagation relationships of the star tracker and can be used to build an error model intuitively and systematically. The analysis results can serve as a foundation and a guide for the optical design, calibration, and compensation of the star tracker. A calibration experiment is designed and conducted, and excellent calibration results are achieved based on the calibration model. In summary, the error analysis approach and the calibration method prove to be adequate and precise, and can provide an important guarantee for the design, manufacture, and measurement of high-accuracy star trackers.

  11. Nondestructive analysis of urinary calculi using micro computed tomography

    Directory of Open Access Journals (Sweden)

    Lingeman James E

    2004-12-01

    Full Text Available Abstract. Background: Micro computed tomography (micro CT) has been shown to provide exceptionally high quality imaging of the fine structural detail within urinary calculi. We tested the idea that micro CT might also be used to identify the mineral composition of urinary stones non-destructively. Methods: Micro CT x-ray attenuation values were measured for mineral that was positively identified by infrared microspectroscopy (FT-IR). To do this, human urinary stones were sectioned with a diamond wire saw. The cut surface was explored by FT-IR and regions of pure mineral were evaluated by micro CT to correlate x-ray attenuation values with mineral content. Additionally, intact stones were imaged with micro CT to visualize internal morphology and map the distribution of specific mineral components in 3-D. Results: Micro CT images taken just beneath the cut surface of urinary stones showed excellent resolution of structural detail that could be correlated with structure visible in the optical image mode of FT-IR. Regions of pure mineral were not difficult to find by FT-IR for most stones and such regions could be localized on micro CT images of the cut surface. This was not true, however, for two brushite stones tested; in these, brushite was closely intermixed with calcium oxalate. Micro CT x-ray attenuation values were collected for six minerals that could be found in regions that appeared to be pure, including uric acid (3515–4995 micro CT attenuation units, AU), struvite (7242–7969 AU), cystine (8619–9921 AU), calcium oxalate dihydrate (13815–15797 AU), calcium oxalate monohydrate (16297–18449 AU), and hydroxyapatite (21144–23121 AU). These AU values did not overlap. Analysis of intact stones showed excellent resolution of structural detail and could discriminate multiple mineral types within heterogeneous stones. Conclusions: Micro CT gives excellent structural detail of urinary stones, and these results demonstrate the feasibility
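
    Because the attenuation ranges quoted above do not overlap, a simple lookup-style assignment suffices to map a measured attenuation value to a mineral. The sketch below merely encodes the published ranges; it is not software from the study.

        # Attenuation ranges (micro CT attenuation units, AU) quoted in the abstract above.
        AU_RANGES = {
            "uric acid": (3515, 4995),
            "struvite": (7242, 7969),
            "cystine": (8619, 9921),
            "calcium oxalate dihydrate": (13815, 15797),
            "calcium oxalate monohydrate": (16297, 18449),
            "hydroxyapatite": (21144, 23121),
        }

        def classify_mineral(au_value):
            """Return the mineral whose published AU range contains the measured value."""
            for mineral, (lo, hi) in AU_RANGES.items():
                if lo <= au_value <= hi:
                    return mineral
            return "unclassified (outside published ranges)"

        print(classify_mineral(8800))   # -> cystine
        print(classify_mineral(12000))  # -> unclassified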

  12. Outcomes of a Failure Mode and Effects Analysis for medication errors in pediatric anesthesia.

    Science.gov (United States)

    Martin, Lizabeth D; Grigg, Eliot B; Verma, Shilpa; Latham, Gregory J; Rampersad, Sally E; Martin, Lynn D

    2017-06-01

    The Institute of Medicine has called for development of strategies to prevent medication errors, which are one important cause of preventable harm. Although the field of anesthesiology is considered a leader in patient safety, recent data suggest high medication error rates in anesthesia practice. Unfortunately, few error prevention strategies for anesthesia providers have been implemented. Using Toyota Production System quality improvement methodology, a multidisciplinary team observed 133 h of medication practice in the operating room at a tertiary care freestanding children's hospital. A failure mode and effects analysis was conducted to systematically deconstruct and evaluate each medication handling process step and score possible failure modes to quantify areas of risk. A bundle of five targeted countermeasures were identified and implemented over 12 months. Improvements in syringe labeling (73 to 96%), standardization of medication organization in the anesthesia workspace (0 to 100%), and two-provider infusion checks (23 to 59%) were observed. Medication error reporting improved during the project and was subsequently maintained. After intervention, the median medication error rate decreased from 1.56 to 0.95 per 1000 anesthetics. The frequency of medication error harm events reaching the patient also decreased. Systematic evaluation and standardization of medication handling processes by anesthesia providers in the operating room can decrease medication errors and improve patient safety. © 2017 John Wiley & Sons Ltd.
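
    The failure-mode scoring referred to above is conventionally done by rating each failure mode and ranking by a composite risk score; the sketch below uses the common severity x occurrence x detectability product (risk priority number) on invented medication-handling examples, which may differ from the scoring scheme actually used in the study.

        # Hypothetical FMEA-style ranking of medication-handling failure modes.
        # Ratings (1-10) and failure modes are invented for illustration.
        failure_modes = [
            {"step": "draw up drug", "mode": "syringe unlabeled",      "sev": 8,  "occ": 6, "det": 4},
            {"step": "program pump", "mode": "10x dose entry",         "sev": 10, "occ": 3, "det": 5},
            {"step": "select vial",  "mode": "look-alike vial picked", "sev": 9,  "occ": 4, "det": 6},
        ]

        for fm in failure_modes:
            fm["rpn"] = fm["sev"] * fm["occ"] * fm["det"]   # risk priority number

        # Highest-risk failure modes first, to prioritize countermeasures.
        for fm in sorted(failure_modes, key=lambda f: f["rpn"], reverse=True):
            print(f'{fm["rpn"]:>4}  {fm["step"]:<14} {fm["mode"]}')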

  13. A Linguistic Analysis of Errors in the Compositions of Arba Minch University Students

    Directory of Open Access Journals (Sweden)

    Yoseph Tizazu

    2014-06-01

    Full Text Available This study reports the dominant linguistic errors that occur in the written productions of Arba Minch University (hereafter AMU) students. A sample of paragraphs was collected over two years from students ranging from freshman to graduating level. The sampled compositions were then coded, described, and explained using the error analysis method. Both quantitative and qualitative analyses showed that almost all components of the English language (such as orthography, morphology, syntax, mechanics, and semantics) in the learners' compositions were affected by errors. On the basis of the surface structures affected by the errors, the following kinds of errors were identified: addition of an auxiliary (*I was read by gass light), omission of a verb (*Sex before marriage ^ many disadvantages), misformation in word class (*riskable for risky), and misordering of major constituents in utterances (*I joined in 2003 Arba minch university). The study also identified two causes which triggered the learners' errors: intralingual and interlingual. The majority of the errors, however, were attributed to intralingual causes, which mainly resulted from a lack of full mastery of the basics of the English language.

  14. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied on the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, error characteristics exhibit dependency on the flood type. Generally, timing of the event precipitation mass center and dispersion of the time series derived from satellite precipitation exhibits good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and the rain flood events with a high runoff coefficient. This event-based analysis of the satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.

  15. LEARNING FROM MISTAKES Error Analysis in the English Speech of Indonesian Tertiary Students

    Directory of Open Access Journals (Sweden)

    Imelda Gozali

    2017-12-01

    Full Text Available This study is part of a series of Classroom Action Research conducted with the aim of improving the English speech of students in one of the tertiary institutes in Indonesia. After some years of teaching English conversation, the writer noted that students made various types of errors in their speech, which can be classified generally into morphological, phonological, and lexical. While some of the errors are still generally acceptable, some others elicit laughter or inhibit comprehension altogether. Therefore, the writer is keen to analyze the more common errors made by the students, so as to be able to compile a teaching material that could be utilized to address those errors more effectively in future classes. This research used Error Analysis by Richards (1971) as the basis of classification. It was carried out in five classes with a total number of 80 students for a period of one semester (14 weeks). The results showed that most of the errors were phonological (errors in pronunciation), while others were morphological or grammatical in nature. This prompted the writer to design simple Phonics lessons for future classes.

  16. Adjustment of Measurements with Multiplicative Errors: Error Analysis, Estimates of the Variance of Unit Weight, and Effect on Volume Estimation from LiDAR-Type Digital Elevation Models

    Directory of Open Access Journals (Sweden)

    Yun Shi

    2014-01-01

    Full Text Available Modern observation technology has verified that measurement errors can be proportional to the true values of measurements, as for GPS, VLBI baselines and LiDAR. Observational models of this type are called multiplicative error models. This paper extends the work of Xu and Shimada published in 2000 on multiplicative error models to the analytical error analysis of quantities of practical interest and to estimates of the variance of unit weight. We analytically derive the variance-covariance matrices of the three least squares (LS) adjustments, the adjusted measurements and the corrections of measurements in multiplicative error models. For quality evaluation, we construct five estimators for the variance of unit weight in association with the three LS adjustment methods. Although LiDAR measurements are contaminated with multiplicative random errors, LiDAR-based digital elevation models (DEMs) have been constructed as if the errors were additive. We simulate a model landslide, assumed to be surveyed with LiDAR, and investigate the effect of LiDAR-type multiplicative measurement errors on DEM construction and on the estimate of landslide mass volume from the constructed DEM.
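
    In a multiplicative error model the standard deviation of each observation scales with its (unknown) true value, so the least-squares weights depend on the estimate itself. The following iteratively reweighted least-squares sketch on synthetic data illustrates this; the model, weights and data are assumptions for illustration, not the derivations of the paper.

        import numpy as np

        # Synthetic multiplicative-error observations: y_i = (A x)_i * (1 + e_i), e_i ~ N(0, s^2).
        rng = np.random.default_rng(2)
        A = rng.random((50, 3))
        x_true = np.array([2.0, 1.0, 0.5])
        s = 0.05
        y = (A @ x_true) * (1.0 + rng.normal(0, s, 50))

        # Iteratively reweighted LS: weight each observation by 1 / (s * predicted value)^2.
        x = np.linalg.lstsq(A, y, rcond=None)[0]        # ordinary LS start
        for _ in range(5):
            w = 1.0 / (s * (A @ x)) ** 2
            W = np.diag(w)
            x = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)

        # A simple weighted-residual estimate of the variance of unit weight.
        w = 1.0 / (s * (A @ x)) ** 2
        residuals = y - A @ x
        sigma0_sq = (w * residuals ** 2).sum() / (len(y) - len(x))
        print("estimate:", x, "estimated variance of unit weight:", sigma0_sq)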

  17. Practical Implementation and Error Analysis of PSCPWM-Based Switching Audio Power Amplifiers

    DEFF Research Database (Denmark)

    Christensen, Frank Schwartz; Frederiksen, Thomas Mansachs; Andersen, Michael Andreas E.

    1999-01-01

    The paper presents an in-depth analysis of practical results for Parallel Phase-Shifted Carrier Pulse-Width Modulation (PSCPWM) amplifiers. Spectral analyses of the error sources involved in PSCPWM are presented. The analysis is performed both by numerical means in MATLAB and by simulation in PSPICE...

  18. Improving the PSA quality in the human reliability analysis of pre-accident human errors

    International Nuclear Information System (INIS)

    Kang, D.-I.; Jung, W.-D.; Yang, J.-E.

    2004-01-01

    This paper describes the activities for improving the Probabilistic Safety Assessment (PSA) quality in the human reliability analysis (HRA) of the pre-accident human errors for the Korea Standard Nuclear Power Plant (KSNP). We evaluate the HRA results of the PSA for the KSNP and identify the items to be improved using the ASME PRA Standard. Evaluation results show that the ratio of items to be improved for pre-accident human errors is relatively high when compared with the ratio of those for post-accident human errors. They also show that more than 50% of the items to be improved for pre-accident human errors are related to the identification and screening analysis for them. In this paper, we develop the modeling guidelines for pre-accident human errors and apply them to the auxiliary feedwater system of the KSNP. Application results show that more than 50% of the items to be improved for the pre-accident human errors of the auxiliary feedwater system are resolved. (author)

  19. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
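
    A schematic re-creation of the two-stage idea described above (predict each analyte from the rest of the panel, then feed the measured value, predicted value and time covariates into a logistic model whose output is accumulated in a CUSUM) is sketched below on synthetic data. The features, thresholds and error mechanism are placeholders, not the authors' fitted model.

        import numpy as np
        from sklearn.linear_model import LinearRegression, LogisticRegression

        rng = np.random.default_rng(3)

        # Synthetic panel: 13 "other" analytes predict the 14th (standardized units).
        X_others = rng.normal(size=(2000, 13))
        y_meas = X_others[:, :3].sum(axis=1) + rng.normal(0, 0.3, 2000)   # target analyte
        error_flag = rng.random(2000) < 0.02                               # simulated test errors
        y_meas = y_meas + error_flag * rng.normal(4, 1, 2000)              # errors shift the result

        # Stage 1: predict each result from the rest of the panel by multiple regression.
        y_pred = LinearRegression().fit(X_others, y_meas).predict(X_others)

        # Stage 2: logistic model on measured result, predicted result and a time-of-day covariate.
        hour = rng.integers(0, 24, 2000)
        feats = np.column_stack([y_meas, y_pred, hour])
        logit = LogisticRegression(max_iter=1000).fit(feats, error_flag)
        p_err = logit.predict_proba(feats)[:, 1]

        # CUSUM tally of the logistic output against a reference level k and decision limit h (assumed).
        k, h = 0.05, 1.0
        cusum, alarms = 0.0, []
        for i, p in enumerate(p_err):
            cusum = max(0.0, cusum + (p - k))
            if cusum > h:
                alarms.append(i)
                cusum = 0.0
        print("number of CUSUM alarms:", len(alarms))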

  20. On Kolmogorov asymptotics of estimators of the misclassification error rate in linear discriminant analysis

    KAUST Repository

    Zollanvari, Amin

    2013-05-24

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  1. Evaluation of human reliability analysis methods addressing cognitive error modelling and quantification

    International Nuclear Information System (INIS)

    Reer, B.; Mertens, J.

    1996-05-01

    Actions and errors by the operating personnel, which are of significance for the safety of a technical system, are classified according to various criteria. Each type of action thus identified is briefly discussed with respect to its quantifiability by state-of-the-art human reliability analysis (HRA) within a probabilistic safety assessment (PSA). In this connection, the principal limits of quantifying human actions are discussed, with special emphasis on data quality and cognitive error modelling, and the basic procedure for an HRA under realistic conditions is briefly described. With respect to the quantitative part of an HRA - the determination of error probabilities - an evaluative description of the standard method THERP (Technique for Human Error Rate Prediction) is given using eight evaluation criteria. Furthermore, six newer developments (EdF's PHRA, HCR, HCR/ORE, SLIM, HEART, INTENT) are briefly described and roughly evaluated. The report concludes with a catalogue of requirements for HRA methods. (orig.) [de]

  2. Longwave surface radiation over the globe from satellite data - An error analysis

    Science.gov (United States)

    Gupta, S. K.; Wilber, A. C.; Darnell, W. L.; Suttles, J. T.

    1993-01-01

    Errors have been analyzed for monthly-average downward and net longwave surface fluxes derived on a 5-deg equal-area grid over the globe, using a satellite technique. Meteorological data used in this technique are available from the TIROS Operational Vertical Sounder (TOVS) system flown aboard NOAA's operational sun-synchronous satellites. The data used are for February 1982 from NOAA-6 and NOAA-7 satellites. The errors in the parametrized equations were estimated by comparing their results with those from a detailed radiative transfer model. The errors in the TOVS-derived surface temperature, water vapor burden, and cloud cover were estimated by comparing these meteorological parameters with independent measurements obtained from other satellite sources. Analysis of the overall errors shows that the present technique could lead to underestimation of downward fluxes by 5 to 15 W/sq m and net fluxes by 4 to 12 W/sq m.

  3. Statistical analysis with measurement error or misclassification strategy, method and application

    CERN Document Server

    Yi, Grace Y

    2017-01-01

    This monograph on measurement error and misclassification covers a broad range of problems and emphasizes unique features in modeling and analyzing problems arising from medical research and epidemiological studies. Many measurement error and misclassification problems have been addressed in various fields over the years as well as with a wide spectrum of data, including event history data (such as survival data and recurrent event data), correlated data (such as longitudinal data and clustered data), multi-state event data, and data arising from case-control studies. Statistical Analysis with Measurement Error or Misclassification: Strategy, Method and Application brings together assorted methods in a single text and provides an update of recent developments for a variety of settings. Measurement error effects and strategies of handling mismeasurement for different models are closely examined in combination with applications to specific problems. Readers with diverse backgrounds and objectives can utilize th...

  4. Error Analysis of the K-Rb-21Ne Comagnetometer Space-Stable Inertial Navigation System

    Directory of Open Access Journals (Sweden)

    Qingzhong Cai

    2018-02-01

    Full Text Available According to the application characteristics of the K-Rb-21Ne comagnetometer, a space-stable navigation mechanization is designed and the requirements of the comagnetometer prototype are presented. By analysing the error propagation rule of the space-stable Inertial Navigation System (INS), the three biases, the scale factor of the z-axis, and the misalignment of the x- and y-axis non-orthogonal with the z-axis, are confirmed to be the main error source. A numerical simulation of the mathematical model for each single error verified the theoretical analysis result of the system’s error propagation rule. Thus, numerical simulation based on the semi-physical data result proves the feasibility of the navigation scheme proposed in this paper.

  5. A Posteriori Error Analysis of Stochastic Differential Equations Using Polynomial Chaos Expansions

    KAUST Repository

    Butler, T.

    2011-01-01

    We develop computable a posteriori error estimates for linear functionals of a solution to a general nonlinear stochastic differential equation with random model/source parameters. These error estimates are based on a variational analysis applied to stochastic Galerkin methods for forward and adjoint problems. The result is a representation for the error estimate as a polynomial in the random model/source parameter. The advantage of this method is that we use polynomial chaos representations for the forward and adjoint systems to cheaply produce error estimates by simple evaluation of a polynomial. By comparison, the typical method of producing such estimates requires repeated forward/adjoint solves for each new choice of random parameter. We present numerical examples showing that there is excellent agreement between these methods. © 2011 Society for Industrial and Applied Mathematics.
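
    The appeal of the representation described above is that, once the adjoint-based error estimate is expressed as a polynomial in the random parameter, evaluating it for a new parameter value is just a polynomial evaluation. The sketch below shows a generic Hermite-chaos evaluation with made-up coefficients; it is not the paper's estimator.

        import numpy as np
        from numpy.polynomial import hermite_e as He

        # Suppose an a posteriori error estimate has been expanded in probabilists' Hermite
        # polynomials of a standard normal parameter xi: err(xi) ~ sum_k c_k He_k(xi).
        # The coefficients below are invented for illustration.
        coeffs = np.array([1.2e-3, 4.0e-4, -6.0e-5, 8.0e-6])

        def error_estimate(xi):
            """Evaluate the chaos expansion of the error estimate at parameter value(s) xi."""
            return He.hermeval(xi, coeffs)

        # Cheap evaluation for many new samples of the random parameter (no new adjoint solves).
        xi_samples = np.random.default_rng(4).standard_normal(10000)
        errs = error_estimate(xi_samples)
        print("mean error estimate:", errs.mean(), "std:", errs.std())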

  6. Error Analysis of the K-Rb-21Ne Comagnetometer Space-Stable Inertial Navigation System.

    Science.gov (United States)

    Cai, Qingzhong; Yang, Gongliu; Quan, Wei; Song, Ningfang; Tu, Yongqiang; Liu, Yiliang

    2018-02-24

    According to the application characteristics of the K-Rb- 21 Ne comagnetometer, a space-stable navigation mechanization is designed and the requirements of the comagnetometer prototype are presented. By analysing the error propagation rule of the space-stable Inertial Navigation System (INS), the three biases, the scale factor of the z -axis, and the misalignment of the x - and y -axis non-orthogonal with the z -axis, are confirmed to be the main error source. A numerical simulation of the mathematical model for each single error verified the theoretical analysis result of the system's error propagation rule. Thus, numerical simulation based on the semi-physical data result proves the feasibility of the navigation scheme proposed in this paper.

  7. On Kolmogorov Asymptotics of Estimators of the Misclassification Error Rate in Linear Discriminant Analysis.

    Science.gov (United States)

    Zollanvari, Amin; Genton, Marc G

    2013-08-01

    We provide a fundamental theorem that can be used in conjunction with Kolmogorov asymptotic conditions to derive the first moments of well-known estimators of the actual error rate in linear discriminant analysis of a multivariate Gaussian model under the assumption of a common known covariance matrix. The estimators studied in this paper are plug-in and smoothed resubstitution error estimators, both of which have not been studied before under Kolmogorov asymptotic conditions. As a result of this work, we present an optimal smoothing parameter that makes the smoothed resubstitution an unbiased estimator of the true error. For the sake of completeness, we further show how to utilize the presented fundamental theorem to achieve several previously reported results, namely the first moment of the resubstitution estimator and the actual error rate. We provide numerical examples to show the accuracy of the succeeding finite sample approximations in situations where the number of dimensions is comparable or even larger than the sample size.

  8. Error analysis: How precise is fused deposition modeling in fabrication of bone models in comparison to the parent bones?

    Directory of Open Access Journals (Sweden)

    M V Reddy

    2018-01-01

    Full Text Available Background: Rapid prototyping (RP) is used widely in dental and faciomaxillary surgery, with anecdotal uses in orthopedics. The purview of RP in orthopedics is vast. However, there is no error analysis reported in the literature on bone models generated using office-based RP. This study evaluates the accuracy of fused deposition modeling (FDM) using standard tessellation language (STL) files and the errors generated during the fabrication of bone models. Materials and Methods: Nine dry bones were selected and were computed tomography (CT) scanned. STL files were procured from the CT scans and three-dimensional (3D) models of the bones were printed using our in-house FDM-based 3D printer with Acrylonitrile Butadiene Styrene (ABS) filament. Measurements were made on the bones and 3D models according to data collection procedures for forensic skeletal material. Statistical analysis was performed to establish interobserver correlation for measurements on the dry bones and the 3D bone models, using SPSS version 13.0 software to analyze the collected data. Results: The inter-observer reliability was established using the intra-class correlation coefficient for both the dry bones and the 3D models. The mean absolute difference was 0.4, which is very small. The 3D models are comparable to the dry bones. Conclusions: STL-file-dependent FDM using ABS material produces near-anatomical 3D models. The high 3D accuracy holds promise in the clinical scenario for preoperative planning, mock surgery, and choice of implants and prostheses, especially in complicated acetabular trauma and complex hip surgeries.

  9. Analysis of radiation fields in tomography on diffusion gaseous sound

    International Nuclear Information System (INIS)

    Bekman, I.N.

    1999-01-01

    The prospects of applying the equilibrium and stationary variants of diffusion tomography with radioactive gaseous sounds for the spatial reconstruction of heterogeneous media in materials technology are considered. The main attention is given to the creation of simple algorithms for detecting sound accumulation against the background of a monotonically varying concentration field. Algorithms for transforming a two-dimensional radiation field into a three-dimensional distribution of radiation sources are suggested. Methods of analytical continuation of the concentration field, permitting the separation of regional anomalies from local ones and vice versa, are discussed. It is shown that both the equilibrium and stationary variants of diffusion tomography detect heterogeneity in the tested material, provide reconstruction of the spatial distribution of the elements of its structure, and give an estimate of the relative degree of defectiveness

  10. Detailed semantic analyses of human error incidents occurring at nuclear power plants. Extraction of periodical transition of error occurrence patterns by applying multivariate analysis

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Suzuki, Kunihiko; Takano, Kenichi; Kojima, Mitsuhiro

    2000-01-01

    It is essential for preventing the recurrence of human error incidents to analyze and evaluate them with emphasis on human factors. Detailed and structured analyses of all incidents reported at domestic nuclear power plants (NPPs) during the last 31 years have been conducted based on J-HPES, in which a total of 193 human error cases were identified. The results obtained from the analyses have been stored in the J-HPES database. In a previous study, by applying multivariate analysis to the above case studies, it was suggested that there are several identifiable patterns of how errors occur at NPPs. It was also clarified that the causes related to each human error differ depending on the period of occurrence. This paper describes the results obtained with respect to the periodical transition of human error occurrence patterns. By applying multivariate analysis to the above data, it was suggested that there are two types of occurrence patterns for each human error type. The first type consists of common occurrence patterns that do not depend on the period, and the second type is influenced by period-specific characteristics. (author)

  11. Error modeling and sensitivity analysis of a parallel robot with SCARA(selective compliance assembly robot arm) motions

    Science.gov (United States)

    Chen, Yuzhen; Xie, Fugui; Liu, Xinjun; Zhou, Yanhua

    2014-07-01

    Parallel robots with SCARA (selective compliance assembly robot arm) motions are widely used in high speed pick-and-place manipulation. Error modeling for these robots generally simplifies the parallelogram structures contained in the robots as a single link. Because such an error model fails to reflect the error characteristics of the parallelogram structures, the effectiveness of accuracy design and kinematic calibration based on the model is undermined. An error modeling methodology is proposed to establish an error model for parallel robots with parallelogram structures. The error model can embody the geometric errors of all joints, including the joints of the parallelogram structures, and thus captures more exhaustively the factors that reduce the accuracy of the robot. Based on the error model and some sensitivity indices defined in a statistical sense, a sensitivity analysis is carried out. Accordingly, atlases are depicted to express each geometric error's influence on the pose errors of the moving platform. From these atlases, the geometric errors that have the greater impact on the accuracy of the moving platform are identified, and the sensitive areas where the pose errors of the moving platform are extremely sensitive to the geometric errors are also determined. By taking into account error factors that are generally neglected in existing modeling methods, the proposed method can thoroughly disclose the process of error transmission and enhance the efficacy of accuracy design and calibration.

  12. Soft error evaluation and vulnerability analysis in Xilinx Zynq-7010 system-on chip

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xuecheng; He, Chaohui; Liu, Shuhuan, E-mail: liushuhuan@mail.xjtu.edu.cn; Zhang, Yao; Li, Yonghong; Xiong, Ceng; Tan, Pengkang

    2016-09-21

    Radiation-induced soft errors are an increasingly important threat to the reliability of modern electronic systems. In order to evaluate system-on-chip reliability and soft errors, the fault tree analysis method was used in this work. The system fault tree was constructed for the Xilinx Zynq-7010 All Programmable SoC. Moreover, the soft error rates of different components in the Zynq-7010 SoC were tested with an americium-241 alpha radiation source. Furthermore, some parameters used to evaluate the system's reliability and safety, such as failure rate, unavailability and mean time to failure (MTTF), were calculated using Isograph Reliability Workbench 11.0. Based on the fault tree analysis of the system-on-chip, the critical blocks and the system reliability were evaluated through qualitative and quantitative analysis.
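
    For the quantitative figures mentioned above, the standard constant-failure-rate relations can be applied; the sketch below uses invented failure and repair rates for two SoC blocks (these are not the rates measured in the paper).

        import math

        # Hypothetical constant failure rates (per hour) for two SoC blocks under irradiation.
        lam_bram  = 2.0e-6   # block RAM soft-error induced failure rate (assumed)
        lam_logic = 5.0e-7   # configuration logic failure rate (assumed)
        mu_repair = 0.5      # repair (scrubbing/reboot) rate per hour (assumed)

        # Series system (either block failing fails the function): rates add.
        lam_sys = lam_bram + lam_logic
        mttf = 1.0 / lam_sys                                   # mean time to failure
        unavailability = lam_sys / (lam_sys + mu_repair)       # steady-state, repairable system
        p_fail_1000h = 1.0 - math.exp(-lam_sys * 1000.0)       # failure probability over 1000 h

        print(f"MTTF = {mttf:.3g} h, unavailability = {unavailability:.3g}, "
              f"P(fail in 1000 h) = {p_fail_1000h:.3g}")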

  13. Learning about Expectation Violation from Prediction Error Paradigms - A Meta-Analysis on Brain Processes Following a Prediction Error.

    Science.gov (United States)

    D'Astolfo, Lisa; Rief, Winfried

    2017-01-01

    Modifying patients' expectations by exposing them to expectation violation situations (thus maximizing the difference between the expected and the actual situational outcome) is proposed to be a crucial mechanism for therapeutic success for a variety of different mental disorders. However, clinical observations suggest that patients often maintain their expectations regardless of experiences contradicting their expectations. It remains unclear which information processing mechanisms lead to modification or persistence of patients' expectations. Insight into the processing could be provided by neuroimaging studies investigating prediction error (PE, i.e., neuronal reactions to unexpected stimuli). Two methods are often used to investigate the PE: (1) paradigms in which participants passively observe PEs ("passive" paradigms) and (2) paradigms which encourage a behavioral adaptation following a PE ("active" paradigms). These paradigms are similar to the methods used to induce expectation violations in clinical settings: (1) the confrontation with an expectation violation situation and (2) an enhanced confrontation in which the patient actively challenges his expectation. We used this similarity to gain insight into the different neuronal processing of the two PE paradigms. We performed a meta-analysis contrasting neuronal activity of PE paradigms encouraging a behavioral adaptation following a PE and paradigms enforcing passiveness following a PE. We found more neuronal activity in the striatum, the insula and the fusiform gyrus in studies encouraging behavioral adaptation following a PE. Due to the involvement of reward assessment and avoidance learning associated with the striatum and the insula, we propose that the deliberate execution of action alternatives following a PE is associated with the integration of new information into previously existing expectations, therefore leading to an expectation change. While further research is needed to directly assess

  14. Human error and the problem of causality in analysis of accidents

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1990-01-01

    Present technology is characterized by complexity, rapid change and growing size of technical systems. This has caused increasing concern with the human involvement in system safety. Analyses of the major accidents during recent decades have concluded that human errors on the part of operators, designers or managers have played a major role. There are, however, several basic problems in analysis of accidents and identification of human error. This paper addresses the nature of causal explanations and the ambiguity of the rules applied for identification of the events to include in analysis. The influence of this change on the control of safety of large-scale industrial systems is discussed.

  15. The error analysis of the determination of the activity coefficients via the isopiestic method

    International Nuclear Information System (INIS)

    Zhou Jun; Chen Qiyuan; Fang Zheng; Liang Yizeng; Liu Shijun; Zhou Yong

    2005-01-01

    Error analysis is very important to experimental designs. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when a regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and that it is preferable to keep the error of the measured osmotic coefficients constant in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it. It is necessary that the isopiestic experiments be done on test solutions of less than 0.1 mol·kg⁻¹; for most electrolyte solutions it is usually preferable to require the lowest molality to be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should first be arranged by keeping the interval of the logarithms of the molalities nearly constant, and second, more of the high molalities should be included; we propose arranging the experimental molalities greater than 1 mol·kg⁻¹ according to an arithmetic progression of the molality intervals. After the experiments, the error of the calculated activity coefficients of the solutes can be calculated from the actual values of the errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values with the equations obtained here.

  16. Diction and Expression in Error Analysis Can Enhance Academic Writing of L2 University Students

    Directory of Open Access Journals (Sweden)

    Muhammad Sajid

    2016-06-01

    Without proper linguistic competence in the English language, academic writing is one of the most challenging tasks for L2 novice writers, especially in various genre-specific disciplines. This paper examines the role of diction and expression, through error analysis of the English of L2 novice writers' academic writing, in interdisciplinary texts of IT & Computer sciences and Business & Management sciences. Though the importance of vocabulary in L2 academic discourse is widely recognized, there has been little research focusing on diction and expression at the higher education level. A corpus of 40 introductions of published research articles, downloaded from journals (20 from IT & Computer sciences and 20 from Business & Management sciences) and authored by L2 novice writers, was analyzed to determine lexico-grammatical errors in the texts by applying the Markin4 method of error analysis. 'Rewrites', set in italics, attempt to demonstrate the flexibility, vastness and richness of English diction and expression, in comparison with the excerpts taken from the corpus. Keywords: diction & expression, academic writing, error analysis, lexico-grammatical errors

  17. A Human Error Analysis Procedure for Identifying Potential Error Modes and Influencing Factors for Test and Maintenance Activities

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2010-01-01

    Periodic or non-periodic test and maintenance (T and M) activities in large, complex systems such as nuclear power plants (NPPs) are essential for sustaining stable and safe operation of the systems. On the other hand, it has also been noted that erroneous human actions occurring during T and M activities can incur unplanned reactor trips (RTs) or power derating, make safety-related systems unavailable, or degrade the reliability of components. The contribution of human errors during normal and abnormal activities of NPPs to unplanned RTs is known to be about 20% of the total events. This paper introduces a procedure for predictively analyzing human error potentials when maintenance personnel perform T and M tasks based on a work procedure or their work plan. This procedure helps the plant maintenance team prepare for plausible human errors. The procedure focuses on the recurrent error forms (or modes) of execution-based errors, such as wrong object, omission, too little, and wrong action

  18. An analysis of error patterns in children's backward digit recall in noise

    Science.gov (United States)

    Osman, Homira; Sullivan, Jessica R.

    2015-01-01

    The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for a decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item (if digits that were not presented in a list were repeated) and order (if correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing were included. All the children were aged between 7 years and 10 years. Repeated-measures analysis of variance (RM-ANOVA) revealed main effects of error type and digit span length. With respect to the interaction with listening condition, order errors occurred more frequently than item errors in the degraded listening condition compared to quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise. PMID:26168949

  19. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    Science.gov (United States)

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Noise, optical distortion, and binning were applied to the patterns prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixels in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
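
    The orientation error quoted above is essentially a misorientation angle between the retrieved and reference orientations. A minimal sketch of that calculation, assuming scipy is available and ignoring crystal symmetry (a full EBSD analysis would minimize the angle over the symmetry group), is given below; the example orientations are made up.

        import numpy as np
        from scipy.spatial.transform import Rotation as R

        def misorientation_deg(rot_true, rot_indexed):
            # Angle (degrees) of the rotation taking the true orientation to the
            # indexed one; crystal symmetry operators are ignored in this sketch.
            delta = rot_true.inv() * rot_indexed
            return np.degrees(delta.magnitude())

        # Hypothetical example: a reference orientation and an indexed orientation
        # perturbed by a small rotation of about 0.8 degrees around a random axis.
        rng = np.random.default_rng(0)
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        true = R.from_euler("zxz", [10.0, 20.0, 30.0], degrees=True)
        indexed = true * R.from_rotvec(np.radians(0.8) * axis)

        print(f"orientation error: {misorientation_deg(true, indexed):.3f} deg")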

  20. The utilization of consistency metrics for error analysis in deformable image registration

    International Nuclear Information System (INIS)

    Bender, Edward T; Tome, Wolfgang A

    2009-01-01

    The aim of this study was to investigate the utility of consistency metrics, such as inverse consistency, in contour-based deformable registration error analysis. Four images were acquired of the same phantom after it had experienced varying levels of deformation. The deformations were simulated with deformable image registration. Using the calculated deformation maps, the inconsistencies within the algorithm were investigated. This can be done, for example, by calculating deformation maps in both the forward and reverse directions and applying them successively to an image. If the algorithm is not inverse consistent, then this final image will not be the same as the original, as it should be. Other consistency tests were done, for example by comparing different algorithms or by applying the deformation maps to a circular set of multiple deformations, whereby the original and final images are in fact the same. The resulting composite deformation map in this case contains a combination of the errors within those maps, because if they were error free, the resulting deformation map should be zero everywhere. We have termed this the generalized inverse consistency error map, Σ(x). The correlation between the consistency metrics and registration error varied considerably depending on the registration algorithm and type of consistency metric. There was also a trend for the actual registration error to be larger than the consistency metrics. A disadvantage of these techniques is that good performance in these consistency checks is a necessary but not sufficient condition for an accurate deformation method.
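
    A minimal sketch of the inverse consistency idea, assuming 2D displacement fields sampled on a regular grid: the forward field is applied, the reverse field is interpolated at the mapped positions, and the magnitude of the residual displacement is the per-voxel error. The synthetic fields below are illustrative only and are not the registration algorithms compared in the study.

        import numpy as np
        from scipy.ndimage import map_coordinates

        def inverse_consistency_error(u_fwd, u_rev):
            # Residual r(x) = u_fwd(x) + u_rev(x + u_fwd(x)) for displacement
            # fields of shape (2, H, W); r vanishes for an exactly inverse
            # consistent pair of deformation maps.
            _, h, w = u_fwd.shape
            yy, xx = np.mgrid[0:h, 0:w].astype(float)
            y_m = yy + u_fwd[0]                     # forward-mapped positions
            x_m = xx + u_fwd[1]
            u_rev_at = np.stack([
                map_coordinates(u_rev[0], [y_m, x_m], order=1, mode="nearest"),
                map_coordinates(u_rev[1], [y_m, x_m], order=1, mode="nearest"),
            ])
            r = u_fwd + u_rev_at
            return np.sqrt((r ** 2).sum(axis=0))    # per-voxel error magnitude

        # Synthetic check: a smooth deformation and its simple negation as the
        # "reverse" map leave a small but nonzero residual, because negation is
        # only an approximation of the true inverse.
        h, w = 64, 64
        yy, xx = np.mgrid[0:h, 0:w]
        u_fwd = np.stack([2.0 * np.sin(2 * np.pi * xx / w),
                          1.5 * np.cos(2 * np.pi * yy / h)])
        u_rev = -u_fwd
        err = inverse_consistency_error(u_fwd, u_rev)
        print(f"mean / max inverse consistency error: {err.mean():.3f} / {err.max():.3f} px")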

  1. An analysis of error patterns in children's backward digit recall in noise

    Directory of Open Access Journals (Sweden)

    Homira Osman

    2015-01-01

    The purpose of the study was to determine whether perceptual masking or cognitive processing accounts for a decline in working memory performance in the presence of competing speech. The types and patterns of errors made on the backward digit span in quiet and in multitalker babble at -5 dB signal-to-noise ratio (SNR) were analyzed. The errors were classified into two categories: item (if digits that were not presented in a list were repeated) and order (if correct digits were repeated but in an incorrect order). Fifty-five children with normal hearing were included. All the children were aged between 7 years and 10 years. Repeated-measures analysis of variance (RM-ANOVA) revealed main effects of error type and digit span length. With respect to the interaction with listening condition, order errors occurred more frequently than item errors in the degraded listening condition compared to quiet. In addition, children had more difficulty recalling the correct order of intermediate items, supporting strong primacy and recency effects. The decline in children's working memory performance was not primarily related to perceptual difficulties alone. The majority of errors were related to the maintenance of sequential order information, which suggests that reduced performance in competing speech may result from increased cognitive processing demands in noise.

  2. Calculating potential error in sodium MRI with respect to the analysis of small objects.

    Science.gov (United States)

    Stobbe, Robert W; Beaulieu, Christian

    2017-10-11

    To facilitate correct interpretation of sodium MRI measurements, calculation of error with respect to rapid signal decay is introduced and combined with that of spatially correlated noise to assess volume-of-interest (VOI) ²³Na signal measurement inaccuracies, particularly for small objects. Noise and signal decay-related error calculations were verified using twisted projection imaging and a specially designed phantom with different sized spheres of constant elevated sodium concentration. As a demonstration, lesion signal measurement variation (5 multiple sclerosis participants) was compared with that predicted from calculation. Both theory and phantom experiment showed that the VOI signal measurement in a large 10-mL, 314-voxel sphere was 20% less than expected on account of point-spread-function smearing when the VOI was drawn to include the full sphere. Volume-of-interest contraction reduced this error but increased noise-related error. Errors were even greater for smaller spheres (40-60% less than expected for a 0.35-mL, 11-voxel sphere). Image-intensity VOI measurements varied and increased with multiple sclerosis lesion size in a manner similar to that predicted from theory. The correlation suggests large underestimation of the ²³Na signal in small lesions. Acquisition-specific measurement error calculation aids ²³Na MRI data analysis and highlights the limitations of current low-resolution methodologies. Magn Reson Med, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
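
    The point-spread-function smearing effect described above can be reproduced qualitatively with a toy two-dimensional simulation: a uniform disc of "signal" is blurred by an assumed Gaussian PSF and the mean intensity inside a VOI drawn on the full disc is compared with the true value. The PSF width and object sizes below are assumptions for illustration, not the acquisition parameters of the study.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        n = 256                          # image size (pixels)
        voxel_mm = 1.0                   # nominal voxel size
        psf_fwhm_mm = 6.0                # assumed effective PSF width
        true_signal = 100.0

        yy, xx = np.mgrid[0:n, 0:n] - n / 2.0
        for radius_mm in (4.0, 8.0, 16.0):
            disc = np.hypot(yy, xx) * voxel_mm <= radius_mm
            img = true_signal * disc.astype(float)
            sigma_px = psf_fwhm_mm / (2.355 * voxel_mm)   # FWHM -> Gaussian sigma
            blurred = gaussian_filter(img, sigma_px)
            voi_mean = blurred[disc].mean()               # VOI drawn on the full disc
            bias_pct = 100.0 * (voi_mean - true_signal) / true_signal
            print(f"radius {radius_mm:4.1f} mm: measured {voi_mean:6.1f}, bias {bias_pct:+6.1f}%")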

  3. Medication errors in residential aged care facilities: a distributed cognition analysis of the information exchange process.

    Science.gov (United States)

    Tariq, Amina; Georgiou, Andrew; Westbrook, Johanna

    2013-05-01

    Medication safety is a pressing concern for residential aged care facilities (RACFs). Retrospective studies in RACF settings identify inadequate communication between RACFs, doctors, hospitals and community pharmacies as the major cause of medication errors. The existing literature offers limited insight into the gaps in the existing information exchange process that may lead to medication errors. The aim of this research was to explicate the cognitive distribution that underlies RACF medication ordering and delivery in order to identify gaps in medication-related information exchange which lead to medication errors in RACFs. The study was undertaken in three RACFs in Sydney, Australia. Data were generated through ethnographic field work over a period of five months (May-September 2011). Triangulated analysis of the data focused primarily on examining the transformation and exchange of information between different media across the process. The findings of this study highlight the extensive scope and intense nature of information exchange in RACF medication ordering and delivery. Rather than attributing error to individual care providers, the explication of distributed cognition processes enabled the identification of gaps in three information exchange dimensions which potentially contribute to the occurrence of medication errors, namely: (1) the design of medication charts, which complicates order processing and record keeping; (2) the lack of coordination mechanisms between participants, which results in misalignment of local practices; and (3) reliance on restricted-bandwidth communication channels, mainly telephone and fax, which complicates the information processing requirements. The study demonstrates how the identification of these gaps enhances understanding of medication errors in RACFs. Application of the theoretical lens of distributed cognition can assist in enhancing our understanding of medication errors in RACFs through identification of gaps in information exchange. Understanding

  4. Quantitative analysis of optical coherence tomography and histopathology images of normal and dysplastic oral mucosal tissues.

    Science.gov (United States)

    Adegun, Oluyori Kutulola; Tomlins, Pete H; Hagi-Pavli, Eleni; McKenzie, Gordon; Piper, Kim; Bader, Dan L; Fortune, Farida

    2012-07-01

    Selecting the most representative site for biopsy is crucial in establishing a definitive diagnosis of oral epithelial dysplasia. The current process involves clinical examination that can be subjective and prone to sampling errors. The aim of this study was therefore to investigate the use of optical coherence tomography (OCT) for differentiation of normal and dysplastic oral epithelial samples, with a view to developing an objective and reproducible approach for biopsy site selection. Biopsy samples from patients with fibro-epithelial polyps (n = 13), mild dysplasia (n = 2), and moderate/severe dysplasia (n = 4) were scanned at 5-μm intervals using an OCT microscope and subsequently processed and stained with hematoxylin and eosin (H&E). Epithelial differentiation was measured from the rate of change (gradient) of the backscattered light intensity in the OCT signal as a function of depth. This parameter is directly related to the density of optical scattering from the cell nuclei. OCT images of normal oral epithelium showed a clear delineation of the mucosal layers observed in the matching histology. However, OCT images of oral dysplasia did not clearly identify the individual mucosal layers because of the increased density of abnormal cell nuclei, which impeded light penetration. Quantitative analysis on 2D-OCT and histology images differentiated dysplasia from normal control samples. Similar analysis on 3D-OCT datasets resulted in the reclassification of biopsy samples into the normal/mild and moderate/severe groups. Quantitative differentiation of normal and dysplastic lesions using OCT offers a non-invasive objective approach for localizing the most representative site to biopsy, particularly in oral lesions with similar clinical features.

  5. Signal-to-Noise Ratio Analysis of a Phase-Sensitive Voltmeter for Electrical Impedance Tomography.

    Science.gov (United States)

    Murphy, Ethan K; Takhti, Mohammad; Skinner, Joseph; Halter, Ryan J; Odame, Kofi

    2017-04-01

    In this paper, a thorough analysis, along with mathematical derivations, of the matched filter for a voltmeter used in electrical impedance tomography systems is presented. The effect of random noise in the system prior to the matched filter, generated by other components, is considered. Employing the presented equations allows system/circuit designers to find the maximum tolerable noise prior to the matched filter that leads to the target signal-to-noise ratio (SNR) of the voltmeter, without having to over-design internal components. A practical model was developed that should fall within 2 dB and 5 dB of the median SNR measurements of signal amplitude and phase, respectively. In order to validate our claims, simulation and experimental measurements were performed with an analog-to-digital converter (ADC) followed by a digital matched filter, while the noise of the whole system was modeled as input-referred at the ADC input. The input signal was contaminated by a known level of additive white Gaussian noise (AWGN), and the noise level was swept from 3% to 75% of the least significant bit (LSB) of the ADC. Differences between experimental and both simulated and analytical SNR values were less than 0.59 and 0.35 dB for RMS noise values ≥ 20% of an LSB, and less than 1.45 and 2.58 dB for smaller RMS noise values. The presented analysis offers a practical design tool for circuit designers in EIT, and a more accurate error analysis that was previously missing in the EIT literature.
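
    A small Monte Carlo sketch of a digital matched filter (I/Q correlation of an ADC record against reference sinusoids) shows how the measured amplitude SNR approaches the classical prediction A²N/(2σ²) for a tone of amplitude A in white noise of RMS σ over N samples. The tone frequency, record length and noise level below are illustrative and are not taken from the paper.

        import numpy as np

        rng = np.random.default_rng(1)
        f0, fs, n_samp = 10e3, 1e6, 4000           # tone, sampling rate, record length
        amp, sigma, phi = 1.0, 0.05, 0.3           # signal amplitude, noise RMS, phase
        t = np.arange(n_samp) / fs
        ref_c = np.cos(2 * np.pi * f0 * t)
        ref_s = np.sin(2 * np.pi * f0 * t)

        amps, phases = [], []
        for _ in range(2000):
            x = amp * np.cos(2 * np.pi * f0 * t + phi) + rng.normal(0.0, sigma, n_samp)
            i = 2.0 / n_samp * np.dot(x, ref_c)    # in-phase component
            q = -2.0 / n_samp * np.dot(x, ref_s)   # quadrature component
            amps.append(np.hypot(i, q))
            phases.append(np.arctan2(q, i))

        snr_amp_db = 20 * np.log10(np.mean(amps) / np.std(amps))
        snr_theory_db = 10 * np.log10(amp ** 2 * n_samp / (2 * sigma ** 2))
        print(f"empirical amplitude SNR  : {snr_amp_db:5.1f} dB")
        print(f"matched-filter prediction: {snr_theory_db:5.1f} dB")
        print(f"phase RMS error          : {np.degrees(np.std(phases)):.3f} deg")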

  6. The industrial computerized tomography applied to the rock analysis

    International Nuclear Information System (INIS)

    Tetzner, Guaraciaba de Campos

    2008-01-01

    This work is a study of the possible technical applications of computerized tomography (CT) using a device developed at the Radiation Technology Center (CTR), Institute for Energy and Nuclear Research (IPEN-CNEN/SP). The equipment consists of a gamma radiation source (⁶⁰Co), a scintillation detector of sodium iodide doped with thallium (NaI(Tl)), a mechanical system to move the object (rotation and translation) and a computer system. This system was designed and developed by the CTR-IPEN-CNEN/SP team using national resources and technology. The first validation test of the equipment was carried out using a cylindrical sample of polypropylene (phantom) with two cylindrical cavities (holes) of 5 × 25 cm (diameter and length). In these tests, the holes were filled with materials of different density (air, oil and metal), whose attenuation coefficients are well known. The goal of this first test was to assess the response quality of the equipment. The present report is a study comparing the CTR-IPEN-CNEN/SP computerized tomography equipment, which uses a gamma radiation source (⁶⁰Co), with equipment at the Department of Geosciences of the University of Texas (CTUT), which uses an X-ray source (450 kV and 3.2 mA). As a result, the images obtained and the comprehensive study of the usefulness of the equipment developed here strengthened the proposition that the development of industrial computerized tomography is an important step toward consolidating the national technology. (author)

  7. Bit error rate analysis of free-space optical communication over general Malaga turbulence channels with pointing error

    KAUST Repository

    Alheadary, Wael Ghazy

    2016-12-24

    In this work, we present the bit error rate (BER) and achievable spectral efficiency (ASE) performance of a free-space optical (FSO) link with pointing errors based on intensity modulation/direct detection (IM/DD) and heterodyne detection over a general Malaga turbulence channel. More specifically, we present exact closed-form expressions for adaptive and non-adaptive transmission. The closed-form expressions are presented in terms of generalized power series of the Meijer's G-function. Moreover, asymptotic closed-form expressions are provided to validate our work. In addition, all the presented analytical results are illustrated using a selected set of numerical results.

  8. Evaluation of parametric models by the prediction error in colorectal cancer survival analysis.

    Science.gov (United States)

    Baghestani, Ahmad Reza; Gohari, Mahmood Reza; Orooji, Arezoo; Pourhoseingholi, Mohamad Amin; Zali, Mohammad Reza

    2015-01-01

    The aim of this study is to determine the factors influencing the predicted survival time for patients with colorectal cancer (CRC) using parametric models, and to select the best model by the prediction error technique. Survival models are statistical techniques to estimate or predict the overall time up to specific events. Prediction is important in medical science, and the accuracy of prediction is determined by a measurement, generally based on loss functions, called the prediction error. A total of 600 colorectal cancer patients who were admitted to the Cancer Registry Center of the Gastroenterology and Liver Disease Research Center, Taleghani Hospital, Tehran, were followed for at least 5 years and had complete information for the selected variables in this study. Body mass index (BMI), sex, family history of CRC, tumor site, stage of disease and histology of the tumor were included in the analysis. Survival times were compared by the log-rank test, and multivariate analysis was carried out using parametric models including log-normal, Weibull and log-logistic regression. For selecting the best model, the prediction error by apparent loss was used. The log-rank test showed better survival for females, BMI more than 25, patients with early stage at diagnosis and patients with a colon tumor site. The prediction error by apparent loss was estimated and indicated that the Weibull model was the best one for multivariate analysis. BMI and stage were independent prognostic factors according to the Weibull model. In this study, according to the prediction error, Weibull regression showed the best fit.

  9. On the Relationship Between Anxiety and Error Monitoring: A meta-analysis and conceptual framework

    Directory of Open Access Journals (Sweden)

    Jason eMoser

    2013-08-01

    Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, small-to-medium relationship with enhanced ERN (r = -.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = -.35) than those utilizing other measures of anxiety (r = -.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur.

  10. Circular Array of Magnetic Sensors for Current Measurement: Analysis for Error Caused by Position of Conductor.

    Science.gov (United States)

    Yu, Hao; Qian, Zheng; Liu, Huayi; Qu, Jiaqi

    2018-02-14

    This paper analyzes the measurement error, caused by the position of the current-carrying conductor, of a circular array of magnetic sensors for current measurement. The circular array of magnetic sensors is an effective approach for AC or DC non-contact measurement, as it is low-cost, light-weight, has a large linear range, wide bandwidth, and low noise. In particular, it has been claimed that such a structure has excellent ability to reduce errors caused by the position of the current-carrying conductor, crosstalk current interference, the shape of the conductor cross-section, and the Earth's magnetic field. However, the effect of the position of the current-carrying conductor, including offsets from the array centre and departures from perpendicularity, has not been analyzed in detail until now. In this paper, with the aim of minimizing measurement error, a theoretical analysis is proposed based on vector inner and exterior products. In the presented mathematical model of relative error, the off-centre distance, the tilt angle, the radius of the circle, and the number of magnetic sensors are expressed in a single equation. The relative error caused by the position of the current-carrying conductor is compared between arrays of four and eight sensors. Tunnel magnetoresistance (TMR) sensors are used in the experimental prototype to verify the mathematical model. The analysis results can serve as a reference for designing the details of circular arrays of magnetic sensors for current measurement in practical situations.
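
    The position-dependent error can be reproduced with a small numerical sketch: sensors equally spaced on a circle sample the tangential field of an ideal infinite straight conductor, and the discretized Ampère line integral is compared with the true current. The array radius, conductor offsets and sensor counts below are illustrative values, not those of the TMR prototype.

        import numpy as np

        MU0 = 4e-7 * np.pi

        def estimated_current(i_true, r_array, n_sensors, offset):
            # Discretized Ampere's law: sum the tangential field components measured
            # by n_sensors equally spaced on a circle of radius r_array around a long
            # straight conductor displaced by `offset` (in x) from the array centre.
            ang = 2 * np.pi * np.arange(n_sensors) / n_sensors
            sx, sy = r_array * np.cos(ang), r_array * np.sin(ang)   # sensor positions
            tx, ty = -np.sin(ang), np.cos(ang)                      # sensing (tangential) axes
            dx, dy = sx - offset, sy
            d2 = dx ** 2 + dy ** 2
            # Field of an infinite straight wire: B = mu0*I/(2*pi*d), tangential to d.
            bx = -MU0 * i_true / (2 * np.pi) * dy / d2
            by = MU0 * i_true / (2 * np.pi) * dx / d2
            b_tan = bx * tx + by * ty
            # Line-integral approximation: sum(B . t) * (2*pi*R/N) = mu0 * I_estimated.
            return b_tan.sum() * (2 * np.pi * r_array / n_sensors) / MU0

        i_true, r_array = 100.0, 0.05        # 100 A, 50 mm array radius
        for n in (4, 8):
            for off in (0.0, 0.01, 0.02):    # conductor offset from centre, metres
                err = (estimated_current(i_true, r_array, n, off) / i_true - 1) * 100
                print(f"N={n}, offset={off * 100:4.1f} cm: relative error {err:+8.4f} %")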

  11. Error Analysis for High Resolution Topography with Bi-Static Single-Pass SAR Interferometry

    Science.gov (United States)

    Muellerschoen, Ronald J.; Chen, Curtis W.; Hensley, Scott; Rodriguez, Ernesto

    2006-01-01

    We present a flow-down error analysis, from the radar system to topographic height errors, for bi-static single-pass SAR interferometry with a satellite tandem pair. Because the baseline length and baseline orientation evolve spatially and temporally due to orbital dynamics, the height accuracy of the system is modeled as a function of spacecraft position and ground location. Vector sensitivity equations for the height and planar error components due to metrology, media effects, and radar system errors are derived and evaluated globally for a baseline mission. Included in the model are terrain effects that contribute to layover and shadow, as well as slope effects on height errors. The analysis also accounts for non-overlapping spectra and the non-overlapping bandwidth due to differences between the two platforms' viewing geometries. The model is applied to a tandem satellite mission at 514 km altitude and 97.4 degree inclination, with a 300 m baseline separation and an X-band SAR. Results from our model indicate that global DTED level 3 can be achieved.

  12. On the relationship between anxiety and error monitoring: a meta-analysis and conceptual framework.

    Science.gov (United States)

    Moser, Jason S; Moran, Tim P; Schroder, Hans S; Donnellan, M Brent; Yeung, Nick

    2013-01-01

    Research involving event-related brain potentials has revealed that anxiety is associated with enhanced error monitoring, as reflected in increased amplitude of the error-related negativity (ERN). The nature of the relationship between anxiety and error monitoring is unclear, however. Through meta-analysis and a critical review of the literature, we argue that anxious apprehension/worry is the dimension of anxiety most closely associated with error monitoring. Although, overall, anxiety demonstrated a robust, "small-to-medium" relationship with enhanced ERN (r = -0.25), studies employing measures of anxious apprehension show a threefold greater effect size estimate (r = -0.35) than those utilizing other measures of anxiety (r = -0.09). Our conceptual framework helps explain this more specific relationship between anxiety and enhanced ERN and delineates the unique roles of worry, conflict processing, and modes of cognitive control. Collectively, our analysis suggests that enhanced ERN in anxiety results from the interplay of a decrease in processes supporting active goal maintenance and a compensatory increase in processes dedicated to transient reactivation of task goals on an as-needed basis when salient events (i.e., errors) occur.

  13. Prevalence of technical errors and periapical lesions in a sample of endodontically treated teeth: a CBCT analysis.

    Science.gov (United States)

    Nascimento, Eduarda Helena Leandro; Gaêta-Araujo, Hugo; Andrade, Maria Fernanda Silva; Freitas, Deborah Queiroz

    2018-01-21

    The aims of this study are to identify the most frequent technical errors in endodontically treated teeth and to determine which root canals were most often associated with those errors, as well as to relate endodontic technical errors and the presence of coronal restorations to periapical status by means of cone-beam computed tomography images. Six hundred eighteen endodontically treated teeth (1146 root canals) were evaluated for the quality of their endodontic treatment and for the presence of coronal restorations and periapical lesions. Each root canal was classified according to dental group, and the endodontic technical errors were recorded. Chi-square tests and descriptive analyses were performed. Six hundred eighty root canals (59.3%) had periapical lesions. Maxillary molars and anterior teeth showed a higher prevalence of periapical lesions (p < 0.05). Underfilling was the most frequent technical error in all root canals, except for the second mesiobuccal root canal of maxillary molars and the distobuccal root canal of mandibular molars, which were non-filled in 78.4 and 30% of the cases, respectively. There is a high prevalence of apical radiolucencies, which increased in the presence of poor coronal restorations, endodontic technical errors, and when both conditions were concomitant. Underfilling was the most frequent technical error, followed by non-homogeneous and non-filled canals. Evaluation of endodontic treatment quality that considers every single root canal aims to warn dental practitioners about the prevalence of technical errors that could be avoided with careful treatment planning and execution.
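
    A hedged sketch of the kind of chi-square analysis reported above, testing for an association between the presence of a technical error in a root canal and the presence of a periapical lesion; the contingency counts below are hypothetical and are not the study data.

        import numpy as np
        from scipy.stats import chi2_contingency

        #                  lesion   no lesion
        table = np.array([[420,      180],     # technical error present
                          [260,      286]])    # treatment technically adequate

        chi2, p, dof, expected = chi2_contingency(table)
        print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3g}")
        print("expected counts under independence:")
        print(expected.round(1))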

  14. Towards Corrected and Completed Atomic Site Occupancy Analysis of Superalloys Using Atom Probe Tomography Techniques

    Science.gov (United States)

    2012-08-17

    Advanced Atom Probe Tomography (APT) techniques have been developed and applied to the atomic-scale characterization of multi-component...analysis approaches for solute distribution/segregation analysis, atom probe crystallography, and lattice rectification and has demonstrated potential...materials design, where Integrated Computational Materials Engineering (ICME) can be enabled by real-world 3D atomic-resolution data via atom probe microscopy.

  15. Quantitative analysis of cholesteatoma using high resolution computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, Shigeru; Yamasoba, Tatsuya (Kameda General Hospital, Chiba (Japan)); Iinuma, Toshitaka

    1992-05-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral planes). These cases were classified into two subtypes according to the presence of extension of the cholesteatoma into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. The sizes of various locations of the middle ear cavity were measured and compared among pars flaccida type cholesteatoma, pars tensa type cholesteatoma and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and lateral wall of the attic than with COM. In contrast, the distance between the malleus and medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, there were significantly larger distances than with COM at the following sites: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto qualitative impressions of bone destruction in cholesteatoma were thus quantitatively verified in detail using high resolution computed tomography. (author).

  16. What Do Spelling Errors Tell Us? Classification and Analysis of Errors Made by Greek Schoolchildren with and without Dyslexia

    Science.gov (United States)

    Protopapas, Athanassios; Fakou, Aikaterini; Drakopoulou, Styliani; Skaloumbakas, Christos; Mouzaki, Angeliki

    2013-01-01

    In this study we propose a classification system for spelling errors and determine the most common spelling difficulties of Greek children with and without dyslexia. Spelling skills of 542 children from the general population and 44 children with dyslexia, Grades 3-4 and 7, were assessed with a dictated common word list and age-appropriate…

  17. Advanced GIS Exercise: Performing Error Analysis in ArcGIS ModelBuilder

    Science.gov (United States)

    Hall, Steven T.; Post, Christopher J.

    2009-01-01

    Knowledge of Geographic Information Systems is quickly becoming an integral part of the natural resource professionals' skill set. With the growing need of professionals with these skills, we created an advanced geographic information systems (GIS) exercise for students at Clemson University to introduce them to the concept of error analysis,…

  18. Using Computation Curriculum-Based Measurement Probes for Error Pattern Analysis

    Science.gov (United States)

    Dennis, Minyi Shih; Calhoon, Mary Beth; Olson, Christopher L.; Williams, Cara

    2014-01-01

    This article describes how "curriculum-based measurement--computation" (CBM-C) mathematics probes can be used in combination with "error pattern analysis" (EPA) to pinpoint difficulties in basic computation skills for students who struggle with learning mathematics. Both assessment procedures provide ongoing assessment data…

  19. time-series analysis of nigeria rice supply and demand: error ...

    African Journals Online (AJOL)

    O. Ojogho Ph.D

    The study carried out a time-series analysis of Nigeria's rice supply and demand with a view to determining any long-run equilibrium between them using the Error Correction Model (ECM) approach. The data used for the study represent the annual series of 1960-2007 (47 years) for rice supply and demand in Nigeria, ...
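
    As a sketch of the Error Correction Model approach referred to above, the following two-step Engle-Granger procedure fits a long-run (cointegrating) regression and then short-run dynamics with the lagged error-correction term. The series are simulated for illustration; they are not the Nigerian rice supply and demand data.

        import numpy as np

        rng = np.random.default_rng(42)
        n = 47
        common = np.cumsum(rng.normal(size=n))             # shared stochastic trend
        supply = 10 + common + rng.normal(scale=0.3, size=n)
        demand = 12 + common + rng.normal(scale=0.3, size=n)

        # Step 1: long-run regression supply_t = a + b*demand_t + e_t.
        X = np.column_stack([np.ones(n), demand])
        a, b = np.linalg.lstsq(X, supply, rcond=None)[0]
        ect = supply - (a + b * demand)                    # error-correction term

        # Step 2: short-run dynamics with the lagged error-correction term.
        d_supply, d_demand = np.diff(supply), np.diff(demand)
        Z = np.column_stack([np.ones(n - 1), d_demand, ect[:-1]])
        coef, *_ = np.linalg.lstsq(Z, d_supply, rcond=None)
        print(f"long-run slope b             : {b:.3f}")
        print(f"speed of adjustment (on ECT) : {coef[2]:.3f}  (expected negative)")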

  20. Error Analysis of Explicit Partitioned Runge-Kutta Schemes for Conservation Laws

    NARCIS (Netherlands)

    W. Hundsdorfer (Willem); D.I. Ketcheson; I. Savostianov (Igor)

    2015-01-01

    An error analysis is presented for explicit partitioned Runge-Kutta methods and multirate methods applied to conservation laws. The interfaces, across which different methods or time steps are used, lead to order reduction of the schemes. Along with cell-based decompositions, also

  1. Diction and Expression in Error Analysis Can Enhance Academic Writing of L2 University Students

    Science.gov (United States)

    Sajid, Muhammad

    2016-01-01

    Without proper linguistic competence in English language, academic writing is one of the most challenging tasks, especially, in various genre specific disciplines by L2 novice writers. This paper examines the role of diction and expression through error analysis in English language of L2 novice writers' academic writing in interdisciplinary texts…

  2. Utility of KTEA-3 Error Analysis for the Diagnosis of Specific Learning Disabilities

    Science.gov (United States)

    Flanagan, Dawn P.; Mascolo, Jennifer T.; Alfonso, Vincent C.

    2017-01-01

    Through the use of excerpts from one of our own case studies, this commentary applied concepts inherent in, but not limited to, the neuropsychological literature to the interpretation of performance on the Kaufman Tests of Educational Achievement-Third Edition (KTEA-3), particularly at the level of error analysis. The approach to KTEA-3 test…

  3. An Error Analysis in Division Problems in Fractions Posed by Pre-Service Elementary Mathematics Teachers

    Science.gov (United States)

    Isik, Cemalettin; Kar, Tugrul

    2012-01-01

    The present study aimed to make an error analysis in the problems posed by pre-service elementary mathematics teachers about fractional division operation. It was carried out with 64 pre-service teachers studying in their final year in the Department of Mathematics Teaching in an eastern university during the spring semester of academic year…

  4. Combining Reading Quizzes and Error Analysis to Motivate Students to Grow

    Science.gov (United States)

    Wang, Jiawen; Selby, Karen L.

    2017-01-01

    In the spirit of scholarship in teaching and learning at the college level, we suggested and experimented with reading quizzes in combination with error analysis as one way not only to get students better prepared for class but also to provide opportunities for reflection under frameworks of mastery learning and mind growth. Our mixed-method…

  5. Time-series analysis of Nigeria rice supply and demand: Error ...

    African Journals Online (AJOL)

    The study carried out a time-series analysis of Nigeria's rice supply and demand with a view to determining any long-run equilibrium between them using the Error Correction Model (ECM) approach. The data used for the study represent the annual series of 1960-2007 (47 years) for rice supply and demand in Nigeria, ...

  6. Analysis of the computed tomography in the acute abdomen; Analise da tomografia computadorizada no abdome agudo

    Energy Technology Data Exchange (ETDEWEB)

    Hochhegger, Bruno [Complexo Hospitalar Santa Casa de Porto Alegre, RS (Brazil); Moraes, Everton [Universidade Federal de Santa Maria (UFSM), RS (Brazil); Haygert, Carlos Jesus Pereira; Antunes, Paulo Sergio Pase [Hospital Universitario de Santa Maria, RS (Brazil); Gazzoni, Fernando [Pontificia Universidade Catolica de Porto Alegre (PUC-RS), Porto Alegre, RS (Brazil). Hospital Sao Lucas; Andrade, Rubens Gabriel Feijo [Fundacao Universitaria de Cardiologia de Porto Alegre, RS (Brazil). Inst. de Cardiologia; Bueno, Leticia Rossi [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil); Lopes, Luis Felipe Dias [Universidade Federal de Santa Maria (UFSM), RS (Brazil). Dept. de Estatistica]. E-mail: brunorgs@pop.com.br

    2007-07-01

    Introduction: This study aims to test the capacity of computed tomography to assist in the diagnosis and management of the acute abdomen. Material and method: This is a longitudinal and prospective study, in which patients with a diagnosis of acute abdomen were analyzed. A total of 105 cases of acute abdomen were obtained, and after application of the exclusion criteria 28 patients were included in the study. Results: Computed tomography changed the physicians' diagnostic hypothesis in 50% of the cases (p < 0.05) and the confidence index in 85.71% of the cases (p = 0.014). Computed tomography also altered the management in 46.43% of the cases (p > 0.05), where 78.57% of the patients had surgical indication before computed tomography and 67.86% after computed tomography (p = 0.0546). The rate of accurate diagnosis by computed tomography, when compared to the anatomopathologic examination and the final diagnosis, was 82.14% of the cases (p = 0.013). When the analysis was done dividing the patients into surgical and nonsurgical groups, an accuracy of 89.28% was obtained (p = 0.0001). A difference of 7.2 days of hospitalization (p = 0.003) was obtained compared with the mean for acute abdomen managed without computed tomography. Conclusion: Computed tomography correlates with the anatomopathologic findings and has great accuracy for surgical indication; it increases the physicians' confidence index, reduces hospitalization time, reduces the number of surgeries and is cost-effective. (author)

  7. Training image analysis for model error assessment and dimension reduction in Bayesian-MCMC solutions to inverse problems

    Science.gov (United States)

    Koepke, C.; Irving, J.

    2015-12-01

    Bayesian solutions to inverse problems in near-surface geophysics and hydrology have gained increasing popularity as a means of estimating not only subsurface model parameters, but also their corresponding uncertainties that can be used in probabilistic forecasting and risk analysis. In particular, Markov-chain-Monte-Carlo (MCMC) methods have attracted much recent attention as a means of statistically sampling from the Bayesian posterior distribution. In this regard, two approaches are commonly used to improve the computational tractability of the Bayesian-MCMC approach: (i) Forward models involving a simplification of the underlying physics are employed, which offer a significant reduction in the time required to calculate data, but generally at the expense of model accuracy, and (ii) the model parameter space is represented using a limited set of spatially correlated basis functions as opposed to a more intuitive high-dimensional pixel-based parameterization. It has become well understood that model inaccuracies resulting from (i) can lead to posterior parameter distributions that are highly biased and overly confident. Further, when performing model reduction as described in (ii), it is not clear how the prior distribution for the basis weights should be defined because simple (e.g., Gaussian or uniform) priors that may be suitable for a pixel-based parameterization may result in a strong prior bias when used for the weights. To address the issue of model error resulting from known forward model approximations, we generate a set of error training realizations and analyze them with principal component analysis (PCA) in order to generate a sparse basis. The latter is used in the MCMC inversion to remove the main model-error component from the residuals. To improve issues related to prior bias when performing model reduction, we also use a training realization approach, but this time models are simulated from the prior distribution and analyzed using independent
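
    A minimal sketch of the training-realization idea for model error, assuming the error realizations (accurate-physics response minus simplified-physics response) are available as rows of a matrix: PCA via the SVD yields a sparse error basis, and the leading components are projected out of an MCMC residual vector. All array sizes are placeholders.

        import numpy as np

        def error_basis_from_training(err_realizations, n_keep):
            # PCA (SVD) of centered training error realizations, shape (n_train, n_data).
            mean_err = err_realizations.mean(axis=0)
            centered = err_realizations - mean_err
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            return mean_err, vt[:n_keep]               # (n_data,), (n_keep, n_data)

        def remove_model_error(residual, mean_err, basis):
            # Project the main model-error components out of an MCMC residual vector.
            r = residual - mean_err
            return r - basis.T @ (basis @ r)

        # Hypothetical usage with synthetic arrays (dimensions only for illustration).
        rng = np.random.default_rng(0)
        train = rng.normal(size=(200, 500))            # 200 training error realizations
        mean_err, basis = error_basis_from_training(train, n_keep=20)
        residual = rng.normal(size=500)
        cleaned = remove_model_error(residual, mean_err, basis)
        print(cleaned.shape, np.linalg.norm(cleaned) <= np.linalg.norm(residual - mean_err))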

  8. Error analysis for duct leakage tests in ASHRAE standard 152P

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, J.W.

    1997-06-01

    This report presents an analysis of random uncertainties in the two methods of testing for duct leakage in Standard 152P of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). The test method is titled Standard Method of Test for Determining Steady-State and Seasonal Efficiency of Residential Thermal Distribution Systems. Equations have been derived for the uncertainties in duct leakage for given levels of uncertainty in the measured quantities used as inputs to the calculations. Tables of allowed errors in each of these independent variables, consistent with fixed criteria of overall allowed error, have been developed.
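
    The random-uncertainty analysis described amounts to first-order propagation of independent measurement errors in quadrature, sigma_f^2 = sum_i (df/dx_i)^2 * sigma_i^2. A generic sketch with a hypothetical leakage-style expression and made-up uncertainty values (not the Standard 152P equations or allowed-error tables) follows.

        import numpy as np

        def propagated_sigma(f, x, sigmas, rel_step=1e-6):
            # First-order propagation of independent random errors with
            # central-difference numerical derivatives.
            x = np.asarray(x, dtype=float)
            grad = np.empty_like(x)
            for i in range(x.size):
                h = rel_step * max(abs(x[i]), 1.0)
                xp, xm = x.copy(), x.copy()
                xp[i] += h
                xm[i] -= h
                grad[i] = (f(xp) - f(xm)) / (2 * h)
            return float(np.sqrt(np.sum((grad * np.asarray(sigmas)) ** 2)))

        # Hypothetical expression: leakage as fan flow minus the sum of register flows.
        def duct_leakage(v):
            fan_flow, reg1, reg2, reg3 = v
            return fan_flow - (reg1 + reg2 + reg3)

        x0 = [1000.0, 320.0, 310.0, 290.0]     # measured flows (e.g. CFM), illustrative
        sig = [20.0, 8.0, 8.0, 8.0]            # assumed measurement uncertainties
        print(f"leakage = {duct_leakage(x0):.0f} "
              f"+/- {propagated_sigma(duct_leakage, x0, sig):.0f} CFM")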

  9. Human errors identification using the human factors analysis and classification system technique (HFACS

    Directory of Open Access Journals (Sweden)

    G. A. Shirali

    2013-12-01

    Results: In this study, 158 accident reports from the Ahvaz steel industry were analyzed with the HFACS technique. The analysis showed that most of the human errors were related, at the first level, to skill-based errors; at the second level, to the physical environment; at the third level, to inadequate supervision; and at the fourth level, to resource management. Conclusion: Studying and analyzing past events using the HFACS technique can identify the major and root causes of accidents and can be effective in preventing the repetition of such mishaps. It can also be used as a basis for developing strategies to prevent future events in steel industries.

  10. Execution-Error Modeling and Analysis of the GRAIL Spacecraft Pair

    Science.gov (United States)

    Goodson, Troy D.

    2013-01-01

    The GRAIL spacecraft, Ebb and Flow (aka GRAIL-A and GRAIL-B), completed their prime mission in June and extended mission in December 2012. The excellent performance of the propulsion and attitude control subsystems contributed significantly to the mission's success. In order to better understand this performance, the Navigation Team has analyzed and refined the execution-error models for delta-v maneuvers. There were enough maneuvers in the prime mission to form the basis of a model update that was used in the extended mission. This paper documents the evolution of the execution-error models along with the analysis and software used.

  11. Optimal alpha reduces error rates in gene expression studies: a meta-analysis approach.

    Science.gov (United States)

    Mudge, J F; Martyniuk, C J; Houlahan, J E

    2017-06-21

    Transcriptomic approaches (microarray and RNA-seq) have been a tremendous advance for molecular science in all disciplines, but they have made interpretation of hypothesis testing more difficult because of the large number of comparisons that are done within an experiment. The result has been a proliferation of techniques aimed at solving the multiple comparisons problem, techniques that have focused primarily on minimizing Type I error with little or no concern about concomitant increases in Type II errors. We have previously proposed a novel approach for setting statistical thresholds with applications for high throughput omics-data, optimal α, which minimizes the probability of making either error (i.e. Type I or II) and eliminates the need for post-hoc adjustments. A meta-analysis of 242 microarray studies extracted from the peer-reviewed literature found that current practices for setting statistical thresholds led to very high Type II error rates. Further, we demonstrate that applying the optimal α approach results in error rates as low or lower than error rates obtained when using (i) no post-hoc adjustment, (ii) a Bonferroni adjustment and (iii) a false discovery rate (FDR) adjustment which is widely used in transcriptome studies. We conclude that optimal α can reduce error rates associated with transcripts in both microarray and RNA-seq experiments, but point out that improved statistical techniques alone cannot solve the problems associated with high throughput datasets - these approaches need to be coupled with improved experimental design that considers larger sample sizes and/or greater study replication.
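
    A sketch of the optimal-alpha idea for a single two-sample t-test: for a given effect size and per-group sample size, the significance threshold is chosen to minimize the average of the Type I and Type II error probabilities. Parameter values are illustrative, and the omics-scale multiplicity setting of the paper is not reproduced here.

        import numpy as np
        from scipy import optimize, stats

        def power_two_sample(alpha, effect_size, n_per_group):
            # Power of a two-sided, two-sample t-test with equal group sizes,
            # computed from the noncentral t distribution.
            df = 2 * n_per_group - 2
            nc = effect_size * np.sqrt(n_per_group / 2.0)
            tcrit = stats.t.ppf(1.0 - alpha / 2.0, df)
            return (1.0 - stats.nct.cdf(tcrit, df, nc)) + stats.nct.cdf(-tcrit, df, nc)

        def optimal_alpha(effect_size, n_per_group):
            # Choose alpha to minimize the average of the Type I error probability
            # (alpha) and the Type II error probability (1 - power).
            avg_err = lambda a: 0.5 * (a + 1.0 - power_two_sample(a, effect_size, n_per_group))
            res = optimize.minimize_scalar(avg_err, bounds=(1e-6, 0.5), method="bounded")
            return res.x, avg_err(res.x)

        for d, n in [(0.5, 20), (0.8, 20), (0.8, 50)]:   # illustrative scenarios
            a_opt, err = optimal_alpha(d, n)
            print(f"effect size {d}, n={n}/group: optimal alpha = {a_opt:.3f}, mean error = {err:.3f}")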

  12. Analysis of strain error sources in micro-beam Laue diffraction

    International Nuclear Information System (INIS)

    Hofmann, Felix; Eve, Sophie; Belnoue, Jonathan; Micha, Jean-Sébastien; Korsunsky, Alexander M.

    2011-01-01

    Micro-beam Laue diffraction is an experimental method that allows the measurement of local lattice orientation and elastic strain within individual grains of engineering alloys, ceramics, and other polycrystalline materials. Unlike other analytical techniques, e.g. based on electron microscopy, it is not limited to surface characterisation or thin sections, but rather allows non-destructive measurements in the material bulk. This is of particular importance for in situ loading experiments where the mechanical response of a material volume (rather than just surface) is studied and it is vital that no perturbation/disturbance is introduced by the measurement technique. Whilst the technique allows lattice orientation to be determined to a high level of precision, accurate measurement of elastic strains and estimating the errors involved is a significant challenge. We propose a simulation-based approach to assess the elastic strain errors that arise from geometrical perturbations of the experimental setup. Using an empirical combination rule, the contributions of different geometrical uncertainties to the overall experimental strain error are estimated. This approach was applied to the micro-beam Laue diffraction setup at beamline BM32 at the European Synchrotron Radiation Facility (ESRF). Using a highly perfect germanium single crystal, the mechanical stability of the instrument was determined and hence the expected strain errors predicted. Comparison with the actual strain errors found in a silicon four-point beam bending test showed good agreement. The simulation-based error analysis approach makes it possible to understand the origins of the experimental strain errors and thus allows a directed improvement of the experimental geometry to maximise the benefit in terms of strain accuracy.

  13. TYPES OF STUDENTS' ERRORS IN SOLVING GEOMETRY PROBLEMS BASED ON NEWMAN'S ERROR ANALYSIS (NEA)

    Directory of Open Access Journals (Sweden)

    Anita Dewi Utami

    2016-03-01

    The students' ability to solve mathematical problems is affected directly or indirectly by the patterns of problem solving they acquired when attending primary and secondary school. The results of observation show that there are students who cannot answer proving problems and take no action at all, even though the difficulty lies only at the step of understanding the problem. NEA is a framework with simple diagnostic procedures, which include (1) decoding, (2) comprehension, (3) transformation, (4) process skills, and (5) encoding. The diagnostic method developed by Newman is used to identify the error categories in answers to descriptive tests. Therefore, the types of students' errors in solving proving problems in the Geometry 1 subject, described according to Newman's Error Analysis (NEA), and the causes of the students' mistakes in solving those proving problems are discussed in this article.

  14. Analysis of the screw compressor rotors’ non-uniform thermal field effect on transmission error

    Science.gov (United States)

    Mustafin, T. N.; Yakupov, R. R.; Burmistrov, A. V.; Khamidullin, M. S.; Khisameev, I. G.

    2015-08-01

    The vibrational state of a screw compressor depends largely on the gearing of the rotors and on the possibility of angular backlash in the gears. The presence of the latter leads to a transmission error and is caused by the need for a downward bias of the actual profile in relation to the theoretical one. The loss of contact between the rotors and, as a consequence, the current value of the quantity characterizing the transmission error are affected by a large number of different factors. In particular, a major influence on the amount of possible movement in the gearing is exerted by thermal deformations of the rotors and the housing parts in the working mode of the machine. The present work is devoted to the analysis of the thermal state of the oil-flooded screw compressor during operation and its impact on the transmission error and on the possibility of the rotors losing contact during the operating cycle.

  15. Doctors' duty to disclose error: a deontological or Kantian ethical analysis.

    Science.gov (United States)

    Bernstein, Mark; Brown, Barry

    2004-05-01

    Medical (surgical) error is being talked about more openly and, besides being the subject of retrospective reviews, is now the subject of prospective research. Disclosure of error has been a difficult issue because of fear of embarrassment for doctors in the eyes of their peers, and fear of punitive action by patients, consisting of medicolegal action and/or complaints to doctors' governing bodies. This paper examines physicians' and surgeons' duty to disclose error from an ethical standpoint, specifically by applying the moral philosophical theory espoused by Immanuel Kant (i.e., deontology). The purpose of this discourse is to apply moral philosophical analysis to a delicate but important issue which all physicians and surgeons will have to confront, probably numerous times, in their professional careers.

  16. Error Analysis and Compensation of Gyrocompass Alignment for SINS on Moving Base

    Directory of Open Access Journals (Sweden)

    Bo Xu

    2014-01-01

    An improved method of gyrocompass alignment for a strap-down inertial navigation system (SINS) on a moving base, assisted with a Doppler velocity log (DVL), is proposed in this paper. After analyzing the classical gyrocompass alignment principle on a static base, the implementation of compass alignment on a moving base is given in detail. Furthermore, based on an analysis of velocity error, latitude error, and acceleration error on the moving base, two improvements are introduced to ensure alignment accuracy and speed: (1) the system parameters are redesigned to decrease the acceleration interference and (2) a repeated data calculation algorithm is used in order to shorten the prolonged alignment time caused by changes in parameters. Simulation and test results indicate that the improved method can realize the alignment on a moving base quickly and effectively.

  17. On the BER and capacity analysis of MIMO MRC systems with channel estimation error

    KAUST Repository

    Yang, Liang

    2011-10-01

    In this paper, we investigate the effect of channel estimation error on the capacity and bit-error rate (BER) of a multiple-input multiple-output (MIMO) transmit maximal ratio transmission (MRT) and receive maximal ratio combining (MRC) system over uncorrelated Rayleigh fading channels. We first derive the ergodic (average) capacity expressions for such systems when power adaptation is applied at the transmitter. The exact capacity expression for the uniform power allocation case is also presented. Furthermore, to investigate the diversity order of the MIMO MRT-MRC scheme, we derive the BER performance under a uniform power allocation policy. We also present an asymptotic BER performance analysis for the MIMO MRT-MRC system with multiuser diversity. Numerical results are given to illustrate the sensitivity of the main performance to the channel estimation error and the tightness of the approximate cutoff value. © 2011 IEEE.

  18. Error-correction coding and decoding bounds, codes, decoders, analysis and applications

    CERN Document Server

    Tomlinson, Martin; Ambroze, Marcel A; Ahmed, Mohammed; Jibril, Mubarak

    2017-01-01

    This book discusses both the theory and practical applications of self-correcting data, commonly known as error-correcting codes. The applications included demonstrate the importance of these codes in a wide range of everyday technologies, from smartphones to secure communications and transactions. Written in a readily understandable style, the book presents the authors’ twenty-five years of research organized into five parts: Part I is concerned with the theoretical performance attainable by using error correcting codes to achieve communications efficiency in digital communications systems. Part II explores the construction of error-correcting codes and explains the different families of codes and how they are designed. Techniques are described for producing the very best codes. Part III addresses the analysis of low-density parity-check (LDPC) codes, primarily to calculate their stopping sets and low-weight codeword spectrum which determines the performance of these codes. Part IV deals with decoders desi...

  19. Analysis of the orbit errors in the CERN accelerators using model simulation

    International Nuclear Information System (INIS)

    Lee, M.; Kleban, S.; Clearwater, S.

    1987-09-01

    This paper will describe the use of the PLUS program to find various types of machine and beam errors, such as quadrupole strength, dipole strength, beam position monitors (BPMs), energy profile, and beam launch. We refer to this procedure as the GOLD (Generic Orbit and Lattice Debugger) Method, which is a general technique that can be applied to the analysis of errors in storage rings and transport lines. One useful feature of the Method is that it analyzes segments of a machine at a time, so that its application and efficiency are independent of the size of the overall machine. Because the techniques are the same for all the types of problems it solves, the user need learn only how to find one type of error in order to use the program

  20. BANK CAPITAL AND MACROECONOMIC SHOCKS: A PRINCIPAL COMPONENTS ANALYSIS AND VECTOR ERROR CORRECTION MODEL

    Directory of Open Access Journals (Sweden)

    Christian NZENGUE PEGNET

    2011-07-01

    Full Text Available The recent financial turmoil has clearly highlighted the potential role of financial factors in the amplification of macroeconomic developments and stressed the importance of analyzing the relationship between banks’ balance sheets and economic activity. This paper assesses the impact of the bank capital channel in the transmission of shocks in Europe on the basis of banks' balance sheet data. The empirical analysis is carried out through a Principal Component Analysis and a Vector Error Correction Model.
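
    As a hedged illustration of the two-stage approach described above (compress balance-sheet indicators with a principal components analysis, then link the resulting factor to activity in a vector error correction model), the sketch below runs on synthetic data; the series names, their construction and the VECM settings are assumptions, not the paper's dataset or specification.

      import numpy as np
      import pandas as pd
      from sklearn.decomposition import PCA
      from statsmodels.tsa.vector_ar.vecm import VECM

      rng = np.random.default_rng(1)

      # Hypothetical quarterly panel: three balance-sheet ratios plus a macro activity series
      T = 120
      activity = np.cumsum(rng.standard_normal(T))
      ratios = np.column_stack([
          0.5 * activity + rng.standard_normal(T),    # capital ratio proxy
          -0.3 * activity + rng.standard_normal(T),   # leverage proxy
          0.2 * activity + rng.standard_normal(T),    # loan-growth proxy
      ])

      # Stage 1: one principal component summarizes the balance-sheet indicators
      pca = PCA(n_components=1)
      bank_factor = pca.fit_transform(ratios).ravel()

      # Stage 2: a VECM links the bank factor and the activity series
      data = pd.DataFrame({"bank_factor": bank_factor, "activity": activity})
      res = VECM(data, k_ar_diff=1, coint_rank=1, deterministic="ci").fit()
      print(pca.explained_variance_ratio_, res.alpha, res.beta)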

  1. Fractal-based analysis of optical coherence tomography data to quantify retinal tissue damage.

    Science.gov (United States)

    Somfai, Gábor Márk; Tátrai, Erika; Laurik, Lenke; Varga, Boglárka E; Ölvedy, Vera; Smiddy, William E; Tchitnga, Robert; Somogyi, Anikó; DeBuc, Delia Cabrera

    2014-09-01

    The sensitivity of Optical Coherence Tomography (OCT) images to identify retinal tissue morphology characterized by early neural loss from normal healthy eyes is tested by calculating structural information and fractal dimension. OCT data from 74 healthy eyes and 43 eyes with type 1 diabetes mellitus with mild diabetic retinopathy (MDR) on biomicroscopy was analyzed using a custom-built algorithm (OCTRIMA) to measure locally the intraretinal layer thickness. A power spectrum method was used to calculate the fractal dimension in intraretinal regions of interest identified in the images. ANOVA followed by Newman-Keuls post-hoc analyses were used to test for differences between pathological and normal groups, with a modified p value threshold used to define statistical significance. Fractal dimension was higher for all the layers (except the GCL + IPL and INL) in MDR eyes compared to normal healthy eyes. When comparing MDR with normal healthy eyes, the highest AUROC values estimated for the fractal dimension were observed for GCL + IPL and INL. The maximum discrimination value for fractal dimension of 0.96 (standard error = 0.025) for the GCL + IPL complex was obtained at an FD ≤ 1.66 (cut-off point; asymptotic 95% confidence interval: 0.905-1.002). Moreover, the highest AUROC values estimated for the thickness measurements were observed for the OPL, GCL + IPL and OS. Particularly, when comparing MDR eyes with control healthy eyes, we found that the fractal dimension of the GCL + IPL complex was significantly better at diagnosing early DR, compared to the standard thickness measurement. Our results suggest that the GCL + IPL complex, OPL and OS are more susceptible to initial damage when comparing MDR with control healthy eyes. Fractal analysis provided a better sensitivity, offering a potential diagnostic predictor for detecting early neurodegeneration in the retina.
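
    A generic sketch of the power-spectrum route to a fractal dimension is given below for a one-dimensional intensity profile; it uses the standard relation FD = (5 - beta) / 2 for fractal profiles and is not the OCTRIMA implementation (the synthetic profiles are assumptions).

      import numpy as np

      def fractal_dimension_psd(profile, dx=1.0):
          """Estimate the fractal dimension of a 1-D profile from the slope of its power spectrum."""
          profile = np.asarray(profile, dtype=float)
          profile = profile - profile.mean()
          spectrum = np.abs(np.fft.rfft(profile)) ** 2
          freqs = np.fft.rfftfreq(profile.size, d=dx)
          mask = freqs > 0                                   # drop the DC component
          slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
          beta = -slope                                      # spectral exponent
          return (5.0 - beta) / 2.0

      rng = np.random.default_rng(0)
      smooth = np.cumsum(rng.standard_normal(1024))          # Brownian-like profile, FD near 1.5
      rough = rng.standard_normal(1024)                      # white noise: flat spectrum, highest FD estimate
      print(fractal_dimension_psd(smooth), fractal_dimension_psd(rough))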

  2. Republished error management: Descriptions of verbal communication errors between staff. An analysis of 84 root cause analysis-reports from Danish hospitals

    DEFF Research Database (Denmark)

    Rabøl, Louise Isager; Andersen, Mette Lehmann; Østergaard, Doris

    2011-01-01

    incidents. The objective of this study is to review RCA reports (RCAR) for characteristics of verbal communication errors between hospital staff in an organisational perspective. Method Two independent raters analysed 84 RCARs, conducted in six Danish hospitals between 2004 and 2006, for descriptions...... and characteristics of verbal communication errors such as handover errors and error during teamwork. Results Raters found description of verbal communication errors in 44 reports (52%). These included handover errors (35 (86%)), communication errors between different staff groups (19 (43%)), misunderstandings (13...... (30%)), communication errors between junior and senior staff members (11 (25%)), hesitance in speaking up (10 (23%)) and communication errors during teamwork (8 (18%)). The kappa values were 0.44-0.78. Unproceduralized communication and information exchange via telephone, related to transfer between...

  3. Covariate measurement error correction methods in mediation analysis with failure time data.

    Science.gov (United States)

    Zhao, Shanshan; Prentice, Ross L

    2014-12-01

    Mediation analysis is important for understanding the mechanisms whereby one variable causes changes in another. Measurement error could obscure the ability of the potential mediator to explain such changes. This article focuses on developing correction methods for measurement error in the mediator with failure time outcomes. We consider a broad definition of measurement error, including technical error, and error associated with temporal variation. The underlying model with the "true" mediator is assumed to be of the Cox proportional hazards model form. The induced hazard ratio for the observed mediator no longer has a simple form independent of the baseline hazard function, due to the conditioning event. We propose a mean-variance regression calibration approach and a follow-up time regression calibration approach, to approximate the partial likelihood for the induced hazard function. Both methods demonstrate value in assessing mediation effects in simulation studies. These methods are generalized to multiple biomarkers and to both case-cohort and nested case-control sampling designs. We apply these correction methods to the Women's Health Initiative hormone therapy trials to understand the mediation effect of several serum sex hormone measures on the relationship between postmenopausal hormone therapy and breast cancer risk. © 2014, The International Biometric Society.

  4. Error analysis for RADAR neighbor matching localization in linear logarithmic strength varying Wi-Fi environment.

    Science.gov (United States)

    Zhou, Mu; Tian, Zengshan; Xu, Kunjie; Yu, Xiang; Wu, Haibo

    2014-01-01

    This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.
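
    The kind of deployment study discussed above can be explored numerically with a toy fingerprint-matching simulation; in the sketch below the reference points are placed linearly, the RSS follows a log-distance model, and the path-loss parameters and geometry are assumed values rather than those of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Linearly deployed reference points (RPs) along a corridor; one access point at x = 0
      rp_x = np.linspace(1.0, 20.0, 20)                 # RP positions in metres (assumed)
      p0, n_exp, sigma = -40.0, 3.0, 2.0                # log-distance path-loss parameters (assumed)

      def rss(x, noise=0.0):
          return p0 - 10.0 * n_exp * np.log10(x) + noise

      radio_map = rss(rp_x)                             # noise-free fingerprints stored offline

      def locate(true_x, k=3):
          """Neighbor matching: average the positions of the K RPs closest in RSS."""
          measured = rss(true_x, rng.normal(0.0, sigma))
          idx = np.argsort(np.abs(radio_map - measured))[:k]
          return rp_x[idx].mean()

      true_positions = rng.uniform(1.0, 20.0, 5000)
      err = np.array([abs(locate(x) - x) for x in true_positions])
      print("mean error %.2f m, 90th percentile %.2f m" % (err.mean(), np.percentile(err, 90)))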

  5. Error Analysis for RADAR Neighbor Matching Localization in Linear Logarithmic Strength Varying Wi-Fi Environment

    Directory of Open Access Journals (Sweden)

    Mu Zhou

    2014-01-01

    Full Text Available This paper studies the statistical errors for the fingerprint-based RADAR neighbor matching localization with the linearly calibrated reference points (RPs) in logarithmic received signal strength (RSS) varying Wi-Fi environment. To the best of our knowledge, little comprehensive analysis work has appeared on the error performance of neighbor matching localization with respect to the deployment of RPs. However, in order to achieve the efficient and reliable location-based services (LBSs) as well as the ubiquitous context-awareness in Wi-Fi environment, much attention has to be paid to the highly accurate and cost-efficient localization systems. To this end, the statistical errors by the widely used neighbor matching localization are significantly discussed in this paper to examine the inherent mathematical relations between the localization errors and the locations of RPs by using a basic linear logarithmic strength varying model. Furthermore, based on the mathematical demonstrations and some testing results, the closed-form solutions to the statistical errors by RADAR neighbor matching localization can be an effective tool to explore alternative deployment of fingerprint-based neighbor matching localization systems in the future.

  6. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    Science.gov (United States)

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion, which is based on an approximately unbiased estimator for the expected quadratic error of a considered marginal model accounting for both data missingness and covariate measurement error. The simulation results reveal that the proposed method performs quite well in the presence of missing data and covariate measurement error. On the contrary, the naive procedures without taking care of such complexity in data may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  7. Using APEX to Model Anticipated Human Error: Analysis of a GPS Navigational Aid

    Science.gov (United States)

    VanSelst, Mark; Freed, Michael; Shefto, Michael (Technical Monitor)

    1997-01-01

    The interface development process can be dramatically improved by predicting design-facilitated human error at an early stage in the design process. The approach we advocate is to SIMULATE the behavior of a human agent carrying out tasks with a well-specified user interface, ANALYZE the simulation for instances of human error, and then REFINE the interface or protocol to minimize predicted error. This approach, incorporated into the APEX modeling architecture, differs from past approaches to human simulation in its emphasis on error rather than, e.g., learning rate or speed of response. The APEX model consists of two major components: (1) a powerful action selection component capable of simulating behavior in complex, multiple-task environments; and (2) a resource architecture which constrains cognitive, perceptual, and motor capabilities to within empirically demonstrated limits. The model mimics human errors arising from interactions between limited human resources and elements of the computer interface whose design fails to anticipate those limits. We analyze the design of a hand-held Global Positioning System (GPS) device used for tactical and navigational decisions in small yacht racing. The analysis demonstrates how human system modeling can be an effective design aid, helping to accelerate the process of refining a product (or procedure).

  8. Measuring and Detecting Errors in Occupational Coding: an Analysis of SHARE Data

    Directory of Open Access Journals (Sweden)

    Belloni Michele

    2016-12-01

    Full Text Available This article studies coding errors in occupational data, as the quality of this data is important but often neglected. In particular, we recoded open-ended questions on occupation for last and current job in the Dutch sample of the “Survey of Health, Ageing and Retirement in Europe” (SHARE) using a high-quality software program for ex-post coding (CASCOT software). Taking CASCOT coding as our benchmark, our results suggest that the incidence of coding errors in SHARE is high, even when the comparison is made at the level of one-digit occupational codes (28% for last job and 30% for current job). This finding highlights the complexity of occupational coding and suggests that processing errors due to miscoding should be taken into account when undertaking statistical analyses or writing econometric models. Our analysis suggests strategies to alleviate such coding errors, and we propose a set of equations that can predict error. These equations may complement coding software and improve the quality of occupational coding.

  9. Analysis of ring enhancement in the cranial computed tomography

    International Nuclear Information System (INIS)

    Huh, Seung Jae; Chung, Yong In; Chang, Kee Hyun

    1980-01-01

    A total of 83 cases with ring enhancement on cranial computed tomography were radiologically analyzed to determine the specific CT findings of primary and metastatic brain tumors, inflammatory disease, resolving hematoma, and cerebral infarction. The brief results are as follows. Glioblastoma multiforme shows a characteristic thick or thin irregular ring enhancement with significant mass effect and surrounding edema. Most of the metastatic tumors also show irregular thick- or thin-walled ring enhancement with significant surrounding edema. Tumoral hemorrhage was observed in metastatic melanoma, breast cancer, and lung cancer. Brain abscesses usually show characteristic thin, regular, smooth ring enhancement with moderate peripheral edema. Parasitic cysts also show thin, regular ring enhancement with differing degrees of surrounding edema. Ring enhancement in resolving hematomas and cerebral infarctions usually occurs about 10-30 days after the onset of symptoms and shows a thin, regular ring pattern without significant surrounding edema

  10. Information Management System Development for the Investigation, Reporting, and Analysis of Human Error in Naval Aviation Maintenance

    National Research Council Canada - National Science Library

    Nelson, Douglas

    2001-01-01

    The purpose of this research is to evaluate and refine a safety information management system that will facilitate data collection, organization, query, analysis and reporting of maintenance errors...

  11. Analysis of dental abfractions by optical coherence tomography

    Science.gov (United States)

    Demjan, Enikö; Mărcăuţeanu, Corina; Bratu, Dorin; Sinescu, Cosmin; Negruţiu, Meda; Ionita, Ciprian; Topală, Florin; Hughes, Michael; Bradu, Adrian; Dobre, George; Podoleanu, Adrian Gh.

    2010-02-01

    Aim and objectives. Abfraction is the pathological loss of cervical hard tooth substance caused by biomechanical overload. High horizontal occlusal forces result in large stress concentrations in the cervical region of the teeth. These stresses may be high enough to cause microfractures in the dental hard tissues, eventually resulting in the loss of cervical enamel and dentin. The present study proposes the microstructural characterization of these cervical lesions by en face optical coherence tomography (eFOCT). Material and methods: 31 extracted bicuspids were investigated using eFOCT. 24 teeth derived from patients with active bruxism and occlusal interferences; they presented deep buccal abfractions and variable degrees of occlusal pathological attrition. The other 7 bicuspids were not exposed to occlusal overload and had a normal morphology of the dental crowns. The dental samples were investigated using an eFOCT system operating at 1300 nm (B-scan at 1 Hz and C-scan mode at 2 Hz). The system has a lateral resolution better than 5 μm and a depth resolution of 9 μm in tissue. OCT images were further compared with micro-computed tomography images. Results. The eFOCT investigation of bicuspids with a normal morphology revealed a homogeneous structure of the buccal cervical enamel. The C-scan and B-scan images obtained from the occlusally overloaded bicuspids visualized the wedge-shaped loss of cervical enamel and damage in the microstructure of the underlying dentin. The high occlusal forces produced a characteristic pattern of large cracks, which reached the tooth surface. Conclusions: eFOCT is a promising imaging method for dental abfractions and it may offer some insight into the etiological mechanism of these noncarious cervical lesions.

  12. Error Analysis of Deep Sequencing of Phage Libraries: Peptides Censored in Sequencing

    Directory of Open Access Journals (Sweden)

    Wadim L. Matochko

    2013-01-01

    Full Text Available Next-generation sequencing techniques empower selection of ligands from phage-display libraries because they can detect low abundant clones and quantify changes in the copy numbers of clones without excessive selection rounds. Identification of errors in deep sequencing data is the most critical step in this process because these techniques have error rates >1%. Mechanisms that yield errors in Illumina and other techniques have been proposed, but no reports to date describe error analysis in phage libraries. Our paper focuses on error analysis of 7-mer peptide libraries sequenced by the Illumina method. The low theoretical complexity of this phage library, as compared to the complexity of long genetic reads and genomes, allowed us to describe this library using a convenient linear vector and operator framework. We describe a phage library as an N×1 frequency vector n = (n_i), where n_i is the copy number of the ith sequence and N is the theoretical diversity, that is, the total number of all possible sequences. Any manipulation to the library is an operator acting on n. Selection, amplification, or sequencing could be described as a product of an N×N matrix and a stochastic sampling operator (Sa). The latter is a random diagonal matrix that describes sampling of a library. In this paper, we focus on the properties of Sa and use them to define the sequencing operator (Seq). Sequencing without any bias and errors is Seq = Sa·I_N, where I_N is an N×N unity matrix. Any bias in sequencing changes I_N to a nonunity matrix. We identified a diagonal censorship matrix (CEN), which describes elimination, or statistically significant downsampling, of specific reads during the sequencing process.
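
    The vector-and-operator framework can be mimicked numerically; in the toy sketch below the names Sa and CEN follow the abstract, but the library size, sequencing depth and censorship rate are illustrative assumptions (a real 7-mer library is far larger).

      import numpy as np

      rng = np.random.default_rng(0)

      N = 10_000                                        # toy theoretical diversity
      n = rng.poisson(5.0, size=N).astype(float)        # frequency vector n = (n_i)

      def sample_reads(freq, depth):
          """Stochastic sampling operator Sa: draw 'depth' reads in proportion to copy numbers."""
          return rng.multinomial(depth, freq / freq.sum()).astype(float)

      # Censorship operator CEN: a diagonal matrix that downsamples specific sequences
      cen = np.ones(N)
      censored = rng.choice(N, size=200, replace=False)
      cen[censored] = 0.1                               # these clones are read 10x less efficiently

      unbiased = sample_reads(n, depth=500_000)
      biased = sample_reads(cen * n, depth=500_000)     # sequencing with censorship applied

      ratio = (biased + 1.0) / (unbiased + 1.0)         # pseudo-counts avoid division by zero
      print("median ratio, censored clones:", np.median(ratio[censored]))
      print("median ratio, remaining clones:", np.median(np.delete(ratio, censored)))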

  13. Identifying model error in metabolic flux analysis - a generalized least squares approach.

    Science.gov (United States)

    Sokolenko, Stanislav; Quattrociocchi, Marco; Aucoin, Marc G

    2016-09-13

    The estimation of intracellular flux through traditional metabolic flux analysis (MFA) using an overdetermined system of equations is a well established practice in metabolic engineering. Despite the continued evolution of the methodology since its introduction, there has been little focus on validation and identification of poor model fit outside of identifying "gross measurement error". The growing complexity of metabolic models, which are increasingly generated from genome-level data, has necessitated robust validation that can directly assess model fit. In this work, MFA calculation is framed as a generalized least squares (GLS) problem, highlighting the applicability of the common t-test for model validation. To differentiate between measurement and model error, we simulate ideal flux profiles directly from the model, perturb them with estimated measurement error, and compare their validation to real data. Application of this strategy to an established Chinese Hamster Ovary (CHO) cell model shows how fluxes validated by traditional means may be largely non-significant due to a lack of model fit. With further simulation, we explore how t-test significance relates to calculation error and show that fluxes found to be non-significant have 2-4 fold larger error (if measurement uncertainty is in the 5-10 % range). The proposed validation method goes beyond traditional detection of "gross measurement error" to identify lack of fit between model and data. Although the focus of this work is on t-test validation and traditional MFA, the presented framework is readily applicable to other regression analysis methods and MFA formulations.
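
    A minimal numerical sketch of the GLS idea is shown below, on a made-up overdetermined system rather than a real metabolic network: estimate the fluxes by weighted least squares, attach t-statistics to them, and test lack of fit with a chi-square statistic on the weighted residuals. The matrix, fluxes and noise level are all assumptions.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Toy overdetermined system: 6 measured rates explained by 3 unknown fluxes
      A = np.array([[1, 0, 1], [0, 1, -1], [1, 1, 0],
                    [2, -1, 1], [0, 2, 1], [1, -1, 2]], dtype=float)
      v_true = np.array([1.0, 0.5, -0.8])
      sigma = 0.05 * np.ones(6)                         # measurement standard deviations (assumed)
      m = A @ v_true + rng.normal(0.0, sigma)

      # Generalized least squares estimate and its covariance
      W = np.diag(1.0 / sigma**2)
      cov_v = np.linalg.inv(A.T @ W @ A)
      v_hat = cov_v @ A.T @ W @ m

      # t-statistics for the estimated fluxes
      dof = A.shape[0] - A.shape[1]
      t = v_hat / np.sqrt(np.diag(cov_v))
      p_flux = 2 * stats.t.sf(np.abs(t), dof)

      # Lack-of-fit test: are the weighted residuals larger than measurement error explains?
      resid = m - A @ v_hat
      p_fit = stats.chi2.sf(resid @ W @ resid, dof)
      print(v_hat, p_flux, p_fit)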

  14. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator

    Directory of Open Access Journals (Sweden)

    Joaquin Ballesteros

    2016-11-01

    Full Text Available Gait analysis can provide valuable information on a person’s condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars (related to the user condition) and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  15. On Gait Analysis Estimation Errors Using Force Sensors on a Smart Rollator.

    Science.gov (United States)

    Ballesteros, Joaquin; Urdiales, Cristina; Martinez, Antonio B; van Dieën, Jaap H

    2016-11-10

    Gait analysis can provide valuable information on a person's condition and rehabilitation progress. Gait is typically captured using external equipment and/or wearable sensors. These tests are largely constrained to specific controlled environments. In addition, gait analysis often requires experts for calibration, operation and/or to place sensors on volunteers. Alternatively, mobility support devices like rollators can be equipped with onboard sensors to monitor gait parameters, while users perform their Activities of Daily Living. Gait analysis in rollators may use odometry and force sensors in the handlebars. However, force based estimation of gait parameters is less accurate than traditional methods, especially when rollators are not properly used. This paper presents an evaluation of force based gait analysis using a smart rollator on different groups of users to determine when this methodology is applicable. In a second stage, the rollator is used in combination with two lab-based gait analysis systems to assess the rollator estimation error. Our results show that: (i) there is an inverse relation between the variance in the force difference between handlebars and support on the handlebars-related to the user condition-and the estimation error; and (ii) this error is lower than 10% when the variation in the force difference is above 7 N. This lower limit was exceeded by the 95.83% of our challenged volunteers. In conclusion, rollators are useful for gait characterization as long as users really need the device for ambulation.

  16. Meta-analysis of small RNA-sequencing errors reveals ubiquitous post-transcriptional RNA modifications.

    Science.gov (United States)

    Ebhardt, H Alexander; Tsang, Herbert H; Dai, Denny C; Liu, Yifeng; Bostan, Babak; Fahlman, Richard P

    2009-05-01

    Recent advances in DNA-sequencing technology have made it possible to obtain large datasets of small RNA sequences. Here we demonstrate that not all non-perfectly matched small RNA sequences are simple technological sequencing errors, but many hold valuable biological information. Analysis of three small RNA datasets originating from Oryza sativa and Arabidopsis thaliana small RNA-sequencing projects demonstrates that many single nucleotide substitution errors overlap when aligning homologous non-identical small RNA sequences. Investigating the sites and identities of substitution errors reveal that many potentially originate as a result of post-transcriptional modifications or RNA editing. Modifications include N1-methyl modified purine nucleotides in tRNA, potential deamination or base substitutions in micro RNAs, 3' micro RNA uridine extensions and 5' micro RNA deletions. Additionally, further analysis of large sequencing datasets reveal that the combined effects of 5' deletions and 3' uridine extensions can alter the specificity by which micro RNAs associate with different Argonaute proteins. Hence, we demonstrate that not all sequencing errors in small RNA datasets are technical artifacts, but that these actually often reveal valuable biological insights to the sites of post-transcriptional RNA modifications.

  17. Preanalytical errors in medical laboratories: a review of the available methodologies of data collection and analysis.

    Science.gov (United States)

    West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael

    2017-01-01

    Preanalytical errors have previously been shown to contribute a significant proportion of errors in laboratory processes and contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is a wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group review the various data collection methods available. Our recommendation is the use of the laboratory information management systems as a recording mechanism for preanalytical errors as this provides the easiest and most standardized mechanism of data capture.

  18. Monte-Carlo error analysis in x-ray spectral deconvolution

    International Nuclear Information System (INIS)

    Shirk, D.G.; Hoffman, N.M.

    1985-01-01

    The deconvolution of spectral information from sparse x-ray data is a widely encountered problem in data analysis. An often-neglected aspect of this problem is the propagation of random error in the deconvolution process. We have developed a Monte-Carlo approach that enables us to attach error bars to unfolded x-ray spectra. Our Monte-Carlo error analysis has been incorporated into two specific deconvolution techniques: the first is an iterative convergent weight method; the second is a singular-value-decomposition (SVD) method. These two methods were applied to an x-ray spectral deconvolution problem having m channels of observations with n points in energy space. When m is less than n, this problem has no unique solution. We discuss the systematics of nonunique solutions and energy-dependent error bars for both methods. The Monte-Carlo approach has a particular benefit in relation to the SVD method: It allows us to apply the constraint of spectral nonnegativity after the SVD deconvolution rather than before. Consequently, we can identify inconsistencies between different detector channels
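
    The Monte-Carlo idea can be illustrated with a toy truncated-SVD unfold: perturb the channel data with noise, unfold each realization, impose nonnegativity after the SVD step, and read energy-dependent error bars from the spread of the results. The response matrix, spectrum and noise level below are assumptions, not the instrument model of the paper.

      import numpy as np

      rng = np.random.default_rng(0)

      # Toy detector response: m = 5 channels observing an n = 20 bin spectrum (m < n)
      m_ch, n_bins = 5, 20
      energy = np.linspace(1.0, 10.0, n_bins)
      response = np.array([np.exp(-0.5 * ((energy - c) / 1.5) ** 2) for c in (2, 4, 6, 8, 10)])
      true_spectrum = np.exp(-energy / 3.0)
      data = response @ true_spectrum

      def unfold_svd(d, R, k=4):
          """Truncated-SVD pseudo-inverse unfold; k controls the regularization."""
          U, s, Vt = np.linalg.svd(R, full_matrices=False)
          return Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T @ d

      n_trials, noise_frac = 2000, 0.05
      samples = np.empty((n_trials, n_bins))
      for i in range(n_trials):
          noisy = data * (1.0 + noise_frac * rng.standard_normal(m_ch))
          samples[i] = np.maximum(unfold_svd(noisy, response), 0.0)   # nonnegativity after the unfold

      err_bars = samples.std(axis=0)        # energy-dependent error bars
      print(samples.mean(axis=0)[:5], err_bars[:5])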

  19. A Meta-Analysis for Association of Maternal Smoking with Childhood Refractive Error and Amblyopia

    Directory of Open Access Journals (Sweden)

    Li Li

    2016-01-01

    Full Text Available Background. We aimed to evaluate the association between maternal smoking and the occurrence of childhood refractive error and amblyopia. Methods. Relevant articles were identified from PubMed and EMBASE up to May 2015. The combined odds ratio (OR) with its corresponding 95% confidence interval (CI) was calculated to evaluate the influence of maternal smoking on childhood refractive error and amblyopia. The heterogeneity was evaluated with the Chi-square-based Q statistic and the I2 test. Potential publication bias was finally examined by Egger’s test. Results. A total of 9 articles were included in this meta-analysis. The pooled OR showed that there was no significant association between maternal smoking and childhood refractive error. However, children whose mother smoked during pregnancy were 1.47 (95% CI: 1.12-1.93) times and 1.43 (95% CI: 1.23-1.66) times more likely to suffer from amblyopia and hyperopia, respectively, compared with children whose mother did not smoke, and the difference was significant. Significant heterogeneity was only found among studies involving the influence of maternal smoking on children’s refractive error (P<0.05; I2=69.9%). No potential publication bias was detected by Egger’s test. Conclusion. The meta-analysis suggests that maternal smoking is a risk factor for childhood hyperopia and amblyopia.
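
    For orientation, a pooled odds ratio of this kind is usually obtained by inverse-variance weighting on the log scale, together with Cochran's Q and I2 for heterogeneity; the per-study numbers in the sketch below are invented placeholders, not the nine studies analysed in the paper.

      import numpy as np

      # Hypothetical per-study odds ratios and 95% confidence intervals
      or_i = np.array([1.30, 1.55, 1.20, 1.80])
      lo_i = np.array([0.95, 1.10, 0.85, 1.15])
      hi_i = np.array([1.78, 2.18, 1.69, 2.82])

      # Fixed-effect (inverse-variance) pooling on the log-odds scale
      log_or = np.log(or_i)
      se = (np.log(hi_i) - np.log(lo_i)) / (2 * 1.96)
      w = 1.0 / se**2
      pooled = np.sum(w * log_or) / np.sum(w)
      pooled_se = np.sqrt(1.0 / np.sum(w))
      ci = np.exp(pooled + np.array([-1.96, 1.96]) * pooled_se)

      # Cochran's Q and I2 heterogeneity statistics
      Q = np.sum(w * (log_or - pooled) ** 2)
      I2 = max(0.0, (Q - (len(or_i) - 1)) / Q) * 100
      print("pooled OR %.2f (95%% CI %.2f-%.2f), I2 = %.1f%%"
            % (np.exp(pooled), ci[0], ci[1], I2))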

  20. Analysis and Calibration of Sources of Electronic Error in PSD Sensor Response

    Directory of Open Access Journals (Sweden)

    David Rodríguez-Navarro

    2016-04-01

    Full Text Available In order to obtain very precise measurements of the position of agents located at a considerable distance using a sensor system based on position sensitive detectors (PSD), it is necessary to analyze and mitigate the factors that generate substantial errors in the system’s response. These sources of error can be divided into electronic and geometric factors. The former stem from the nature and construction of the PSD as well as the performance, tolerances and electronic response of the system, while the latter are related to the sensor’s optical system. Here, we focus solely on the electrical effects, since the study, analysis and correction of these are a prerequisite for subsequently addressing geometric errors. A simple calibration method is proposed, which considers PSD response, component tolerances, temperature variations, signal frequency used, signal to noise ratio (SNR), suboptimal operational amplifier parameters, and analog to digital converter (ADC) quantitation (SNRQ), etc. Following an analysis of these effects and calibration of the sensor, it was possible to correct the errors, thus rendering the effects negligible, as reported in the results section.

  1. Systematic analysis of dependent human errors from the maintenance history at finnish NPPs - A status report

    International Nuclear Information System (INIS)

    Laakso, K.

    2002-12-01

    Operating experience has shown missed-detection events, in which faults passed inspections and functional tests and persisted into the operating periods that followed maintenance activities during outages. The causes of these failures have often been complex event sequences involving human and organisational factors. Common cause and other dependent failures of safety systems, in particular, may contribute significantly to the reactor core damage risk. The topic has been addressed in the Finnish studies of human common cause failures, in which latent human errors were searched for and analysed in detail in the maintenance history. A review of the bulk of the analysis results from the Olkiluoto and Loviisa plant sites shows that instrumentation and control and electrical equipment are more prone to failure events caused by human error than other maintenance areas, and that plant modifications and also predetermined preventive maintenance are significant sources of common cause failures. Most errors stem from the refuelling and maintenance outage period at both sites, and less than half of the dependent errors were identified during the same outage. The dependent human errors originating from modifications could be reduced by a more tailored specification and coverage of their start-up testing programs. Improvements could also be achieved by more case-specific planning of the installation inspection and functional testing of complicated maintenance work or of work objects of higher importance to plant safety and availability. Better use and analysis of condition monitoring information for maintenance steering could also help. Feedback from discussions of the analysis results with plant experts and professionals remains crucial in developing the final conclusions and recommendations that meet the specific development needs at the plants. (au)

  2. Chronology of prescribing error during the hospital stay and prediction of pharmacist's alerts overriding: a prospective analysis

    Directory of Open Access Journals (Sweden)

    Bruni Vanida

    2010-01-01

    Full Text Available Abstract Background Drug prescribing errors are frequent in the hospital setting and pharmacists play an important role in detection of these errors. The objectives of this study are (1) to describe the drug prescribing error rate during the patient's stay and (2) to find which characteristics of a prescribing error are the most predictive of its reproduction the next day despite the pharmacist's alert (i.e., overriding of the alert). Methods We prospectively collected all medication order lines and prescribing errors during 18 days in 7 medical wards using computerized physician order entry. We described and modelled the error rate according to the chronology of the hospital stay. We performed a classification and regression tree analysis to find which characteristics of alerts were predictive of their overriding (i.e., the prescribing error being repeated). Results 12 533 order lines were reviewed, 117 errors (error rate 0.9%) were observed and 51% of these errors occurred on the first day of the hospital stay. The risk of a prescribing error decreased over time. 52% of the alerts were overridden (i.e., the error was left uncorrected by prescribers) on the following day. Drug omissions were those most frequently taken into account by prescribers. The classification and regression tree analysis showed that overriding pharmacist's alerts is related first to the ward of the prescriber and then to either the Anatomical Therapeutic Chemical class of the drug or the type of error. Conclusions Since 51% of prescribing errors occurred on the first day of stay, the pharmacist should concentrate his analysis of drug prescriptions on this day. The difference in overriding behavior between wards and according to drug Anatomical Therapeutic Chemical class or type of error could also guide the validation tasks and programming of electronic alerts.
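
    A classification-and-regression-tree analysis of alert overriding can be sketched with scikit-learn as below; the features (ward, ATC class, error type) mirror the abstract, but the records, their encoding and the synthetic outcome are hypothetical stand-ins for the study data.

      import numpy as np
      import pandas as pd
      from sklearn.tree import DecisionTreeClassifier, export_text

      rng = np.random.default_rng(0)

      # Hypothetical alert records: ward, ATC class of the drug, error type -> alert overridden?
      n = 500
      df = pd.DataFrame({
          "ward": rng.integers(0, 7, n),          # 7 medical wards, encoded 0-6
          "atc_class": rng.integers(0, 5, n),     # coarse ATC grouping
          "error_type": rng.integers(0, 4, n),    # e.g. omission, dose, frequency, interaction
      })
      # Synthetic outcome: overriding depends mostly on the ward, then on the error type
      p = 0.3 + 0.08 * df["ward"] - 0.05 * (df["error_type"] == 0)
      override = rng.uniform(size=n) < np.clip(p, 0, 1)

      tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20)
      tree.fit(df, override)
      print(export_text(tree, feature_names=list(df.columns)))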

  3. Error in the parotid contour delineated using computed tomography images rather than magnetic resonance images during radiotherapy planning for nasopharyngeal carcinoma.

    Science.gov (United States)

    Liu, Chengxin; Kong, Xudong; Gong, Guanzhong; Liu, Tonghai; Li, Baosheng; Yin, Yong

    2014-04-01

    To analyze the intra- and interobserver variation errors in the parotid contour delineated using computed tomography (CT) or magnetic resonance (MR) images of patients with nasopharyngeal carcinoma who underwent radiotherapy. Forty-one nasopharyngeal cancer patients were selected. The patients underwent simulation with MR and CT scanning. The gross tumor volume and organs at risk were contoured on both contrast-enhanced CT and T1-MR images. For each patient, one radiotherapist delineated the parotid on CT and MR images ten times, and ten different radiotherapists were asked to delineate the parotid on CT and MR images once each. The inter- and intraobserver variations in volume and outline were compared. For the interobserver comparison, the volumes of the parotid as evaluated from the CT and MR images were 34.6 ± 12.1 cm³ (left), 34.3 ± 9.0 cm³ (right), and 24.6 ± 7.6 cm³ (L), 23.2 ± 8.1 cm³ (R), respectively. For the intraobserver comparison, the volumes evaluated from the CT and MR images were 28.2 ± 7.6 cm³ (L), 29.4 ± 9.4 cm³ (R), and 24.4 ± 7.6 cm³ (L), 22.5 ± 7.4 cm³ (R), respectively. The relative variations in volume when using MR images were 4.7 ± 0.7 % (L), 5.0 ± 0.6 % (R) for the interobserver variations and 2.3 ± 0.4 % (L), 2.1 ± 0.7 % (R) for the intraobserver variations. However, the inter- and intraobserver relative variations in volume when using CT images were 18.0 ± 4.8 % (L), 17.4 ± 4.6 % (R) and 6.3 ± 1.5 % (L), 6.8 ± 1.5 % (R), respectively. The parotid contour is delineated more accurately and reproducibly on MR images than on CT images.

  4. Descriptive analysis of medication errors reported to the Egyptian national online reporting system during six months.

    Science.gov (United States)

    Shehata, Zahraa Hassan Abdelrahman; Sabri, Nagwa Ali; Elmelegy, Ahmed Abdelsalam

    2016-03-01

    This study analyzes reports to the Egyptian medication error (ME) reporting system from June to December 2014. Fifty hospital pharmacists received training on ME reporting using the national reporting system. All received reports were reviewed and analyzed. The pieces of data analyzed were patient age, gender, clinical setting, stage, type, medication(s), outcome, cause(s), and recommendation(s). Over the course of 6 months, 12,000 valid reports were gathered and included in this analysis. The majority (66%) came from inpatient settings, while 23% came from intensive care units, and 11% came from outpatient departments. Prescribing errors were the most common type of MEs (54%), followed by monitoring (25%) and administration errors (16%). The most frequent error was incorrect dose (20%) followed by drug interactions, incorrect drug, and incorrect frequency. Most reports were potential (25%), prevented (11%), or harmless (51%) errors; only 13% of reported errors lead to patient harm. The top three medication classes involved in reported MEs were antibiotics, drugs acting on the central nervous system, and drugs acting on the cardiovascular system. Causes of MEs were mostly lack of knowledge, environmental factors, lack of drug information sources, and incomplete prescribing. Recommendations for addressing MEs were mainly staff training, local ME reporting, and improving work environment. There are common problems among different healthcare systems, so that sharing experiences on the national level is essential to enable learning from MEs. Internationally, there is a great need for standardizing ME terminology, to facilitate knowledge transfer. Underreporting, inaccurate reporting, and a lack of reporter diversity are some limitations of this study. Egypt now has a national database of MEs that allows researchers and decision makers to assess the problem, identify its root causes, and develop preventive strategies. © The Author 2015. Published by Oxford University

  5. Error analysis for determination of accuracy of an ultrasound navigation system for head and neck surgery.

    Science.gov (United States)

    Kozak, J; Krysztoforski, K; Kroll, T; Helbig, S; Helbig, M

    2009-01-01

    The use of conventional CT- or MRI-based navigation systems for head and neck surgery is unsatisfactory due to tissue shift. Moreover, changes occurring during surgical procedures cannot be visualized. To overcome these drawbacks, we developed a novel ultrasound-guided navigation system for head and neck surgery. A comprehensive error analysis was undertaken to determine the accuracy of this new system. The evaluation of the system accuracy was essentially based on the method of error definition for well-established fiducial marker registration methods (point-pair matching) as used in, for example, CT- or MRI-based navigation. This method was modified in accordance with the specific requirements of ultrasound-guided navigation. The Fiducial Localization Error (FLE), Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. In our navigation system, the real error (the TRE actually measured) did not exceed a volume of 1.58 mm³ with a probability of 0.9. A mean value of 0.8 mm (standard deviation: 0.25 mm) was found for the FRE. The quality of the coordinate tracking system (Polaris localizer) could be defined with an FLE of 0.4 ± 0.11 mm (mean ± standard deviation). The quality of the coordinates of the crosshairs of the phantom was determined with a deviation of 0.5 mm (standard deviation: 0.07 mm). The results demonstrate that our newly developed ultrasound-guided navigation system shows only very small system deviations and therefore provides very accurate data for practical applications.

  6. Graded compression ultrasonography and computed tomography in acute colonic diverticulitis: Meta-analysis of test accuracy

    NARCIS (Netherlands)

    Laméris, Wytze; van Randen, Adrienne; Bipat, Shandra; Bossuyt, Patrick M. M.; Boermeester, Marja A.; Stoker, Jaap

    2008-01-01

    The purpose was to investigate the diagnostic accuracy of graded compression ultrasonography (US) and computed tomography (CT) in diagnosing acute colonic diverticulitis (ACD) in suspected patients. We performed a systematic review and meta-analysis of the accuracy of CT and US in diagnosing ACD.

  7. Laboratory and exterior decay of wood plastic composite boards: voids analysis and computed tomography

    Science.gov (United States)

    Grace Sun; Rebecca E. Ibach; Meghan Faillace; Marek Gnatowski; Jessie A. Glaeser; John Haight

    2016-01-01

    After exposure in the field and laboratory soil block culture testing, the void content of wood–plastic composite (WPC) decking boards was compared to unexposed samples. A void volume analysis was conducted based on calculations of sample density and from micro-computed tomography (microCT) data. It was found that reference WPC contains voids of different sizes from...

  8. Quantitative comparison of analysis methods for spectroscopic optical coherence tomography: reply to comment

    NARCIS (Netherlands)

    Bosschaart, Nienke; van Leeuwen, Ton G.; Aalders, Maurice C. G.; Faber, Dirk J.

    2014-01-01

    We reply to the comment by Kraszewski et al on "Quantitative comparison of analysis methods for spectroscopic optical coherence tomography." We present additional simulations evaluating the proposed window function. We conclude that our simulations show good qualitative agreement with the results of

  9. Quantitative comparison of analysis methods for spectroscopic optical coherence tomography: reply to comment

    NARCIS (Netherlands)

    Bosschaart, Nienke; van Leeuwen, Ton; Aalders, Maurice C.G.; Faber, Dirk

    2014-01-01

    We reply to the comment by Kraszewski et al on “Quantitative comparison of analysis methods for spectroscopic optical coherence tomography.” We present additional simulations evaluating the proposed window function. We conclude that our simulations show good qualitative agreement with the results of

  10. Orbit Determination Error Analysis Results for the Triana Sun-Earth L2 Libration Point Mission

    Science.gov (United States)

    Marr, G.

    2003-01-01

    Using the NASA Goddard Space Flight Center's Orbit Determination Error Analysis System (ODEAS), orbit determination error analysis results are presented for all phases of the Triana Sun-Earth L1 libration point mission and for the science data collection phase of a future Sun-Earth L2 libration point mission. The Triana spacecraft was nominally to be released by the Space Shuttle in a low Earth orbit, and this analysis focuses on that scenario. From the release orbit a transfer trajectory insertion (TTI) maneuver performed using a solid stage would increase the velocity by approximately 3.1 km/sec sending Triana on a direct trajectory to its mission orbit. The Triana mission orbit is a Sun-Earth L1 Lissajous orbit with a Sun-Earth-vehicle (SEV) angle between 4.0 and 15.0 degrees, which would be achieved after a Lissajous orbit insertion (LOI) maneuver at approximately launch plus 6 months. Because Triana was to be launched by the Space Shuttle, TTI could potentially occur over a 16 orbit range from low Earth orbit. This analysis was performed assuming TTI was performed from a low Earth orbit with an inclination of 28.5 degrees and assuming support from a combination of three Deep Space Network (DSN) stations, Goldstone, Canberra, and Madrid and four commercial Universal Space Network (USN) stations, Alaska, Hawaii, Perth, and Santiago. These ground stations would provide coherent two-way range and range rate tracking data usable for orbit determination. Larger range and range rate errors were assumed for the USN stations. Nominally, DSN support would end at TTI+144 hours assuming there were no USN problems. Post-TTI coverage for a range of TTI longitudes for a given nominal trajectory case was analyzed. The orbit determination error analysis after the first correction maneuver would be generally applicable to any libration point mission utilizing a direct trajectory.

  11. ATHEANA: A Technique for Human Error Analysis: An Overview of Its Methodological Basis

    International Nuclear Information System (INIS)

    Wreathall, John; Ramey-Smith, Ann

    1998-01-01

    The U.S. NRC has developed a new human reliability analysis (HRA) method, called A Technique for Human Event Analysis (ATHEANA), to provide a way of modeling the so-called 'errors of commission' - that is, situations in which operators terminate or disable engineered safety features (ESFs) or similar equipment during accident conditions, thereby putting the plant at an increased risk of core damage. In its reviews of operational events, NRC has found that these errors of commission occur with a relatively high frequency (as high as 2 or 3 per year), but are noticeably missing from the scope of most current probabilistic risk assessments (PRAs). This new method was developed through a formalized approach that describes what can occur when operators behave rationally but have inadequate knowledge or poor judgement. In particular, the method is based on models of decision-making and response planning that have been used extensively in the aviation field, and on the analysis of major accidents in both the nuclear and non-nuclear fields. Other papers at this conference present summaries of these event analyses in both the nuclear and non-nuclear fields. This paper presents an overview of ATHEANA and summarizes how the method structures the analysis of operationally significant events, and helps HRA analysts identify and model potentially risk-significant errors of commission in plant PRAs. (authors)

  12. Analysis of the mental foramen using cone beam computerized tomography

    Directory of Open Access Journals (Sweden)

    Kunihiro Saito

    Full Text Available Abstract. Introduction: Knowledge of the anatomical structures located in the region between the mental foramina is of critical importance in pre-operative planning. Objective: To evaluate the position of the mental foramen relative to the apices of the teeth and the distance to the edges of the mandible, using cone beam computerized tomography. Material and method: One hundred cone beam computerized tomographs of the mandible were evaluated; the tomographs were taken using a single tomographic device. Each image chosen was evaluated repeatedly from both sides of the mandible for the position of the mental foramen, indicating the region in which the foramen was found, and for the measures of the mental foramen, the lingual cortex and the mandibular base. Initially, the data were analyzed descriptively, and a p value threshold was adopted for statistical significance. Result: Forty-two percent of the mental foramina were located at the apex of the second pre-molar. The lingual margin of the mental foramen was located, on average, 3.1 mm from the lingual cortex. The lower margin of the mental foramen was located 7.25 mm above the lower edge of the mandible. Conclusion: The mental foramen was located most frequently at the level of the apices of the second pre-molars, at a distance of, on average, 3.1 mm from the lingual cortex and 7.25 mm from the base of the mandible.

  13. Inclusive bit error rate analysis for coherent optical code-division multiple-access system

    Science.gov (United States)

    Katz, Gilad; Sadot, Dan

    2002-06-01

    Inclusive noise and bit error rate (BER) analysis for optical code-division multiplexing (OCDM) using coherence techniques is presented. The analysis contains crosstalk calculation of the mutual field variance for different numbers of users. It is shown that the crosstalk noise depends strongly on the receiver integration time, the laser coherence time, and the number of users. In addition, analytical results of the power fluctuation at the received channel due to the data modulation at the rejected channels are presented. The analysis also includes amplified spontaneous emission (ASE)-related noise effects of in-line amplifiers in a long-distance communication link.

  14. Catching errors with patient-specific pretreatment machine log file analysis.

    Science.gov (United States)

    Rangaraj, Dharanipathy; Zhu, Mingyao; Yang, Deshan; Palaniswaamy, Geethpriya; Yaddanapudi, Sridhar; Wooten, Omar H; Brame, Scott; Mutic, Sasa

    2013-01-01

    A robust, efficient, and reliable quality assurance (QA) process is highly desired for modern external beam radiation therapy treatments. Here, we report the results of a semiautomatic, pretreatment, patient-specific QA process based on dynamic machine log file analysis, clinically implemented for intensity modulated radiation therapy (IMRT) treatments delivered by high energy linear accelerators (Varian 2100/2300 EX, Trilogy, iX-D, Varian Medical Systems Inc, Palo Alto, CA). The multileaf collimator (MLC) machine log files are called Dynalog by Varian. Using an in-house developed computer program called "Dynalog QA," we automatically compare the beam delivery parameters in the log files that are generated during pretreatment point dose verification measurements with the treatment plan to determine any discrepancies in IMRT deliveries. Fluence maps are constructed and compared between the delivered and planned beams. Since clinical introduction in June 2009, 912 machine log file QA analyses were performed by the end of 2010. Among these, 14 errors causing dosimetric deviation were detected and required further investigation and intervention. These errors were the result of human operating mistakes, flawed treatment planning, and data modification during plan file transfer. Minor errors were also reported in 174 other log file analyses, some of which stemmed from false positives and unreliable results; the origins of these are discussed herein. It has been demonstrated that machine log file analysis is a robust, efficient, and reliable QA process capable of detecting errors originating from human mistakes, flawed planning, and data transfer problems. The possibility of detecting these errors is low using point and planar dosimetric measurements. Copyright © 2013 American Society for Radiation Oncology. Published by Elsevier Inc. All rights reserved.
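
    A log-file-versus-plan comparison of this general kind can be sketched as below; the CSV layout, column names and 1 mm tolerance are hypothetical stand-ins, not the Dynalog format or the authors' acceptance criteria.

      import numpy as np
      import pandas as pd

      TOLERANCE_MM = 1.0   # leaf-position agreement threshold (assumed)

      def check_delivery(planned_csv, delivered_csv):
          """Compare planned and recorded MLC leaf positions control point by control point.
          Both files are assumed to hold columns: control_point, leaf_id, position_mm."""
          planned = pd.read_csv(planned_csv)
          delivered = pd.read_csv(delivered_csv)
          merged = planned.merge(delivered, on=["control_point", "leaf_id"],
                                 suffixes=("_plan", "_log"))
          merged["delta_mm"] = (merged["position_mm_log"] - merged["position_mm_plan"]).abs()
          flagged = merged[merged["delta_mm"] > TOLERANCE_MM]
          summary = {
              "max_deviation_mm": merged["delta_mm"].max(),
              "rms_deviation_mm": float(np.sqrt((merged["delta_mm"] ** 2).mean())),
              "n_flagged": len(flagged),
          }
          return summary, flagged

      # summary, flagged = check_delivery("plan_leaves.csv", "log_leaves.csv")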

  15. Space Trajectory Error Analysis Program (STEAP) for halo orbit missions. Volume 2: Programmer's manual

    Science.gov (United States)

    Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.

    1974-01-01

    The six month effort was responsible for the development, test, conversion, and documentation of computer software for the mission analysis of missions to halo orbits about libration points in the earth-sun system. The software consisting of two programs called NOMNAL and ERRAN is part of the Space Trajectories Error Analysis Programs. The program NOMNAL targets a transfer trajectory from earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite thrust insertion maneuvers into halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program ERRAN conducts error analyses of the targeted transfer trajectory. Measurements including range, doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty.

  16. Analysis of Human Errors in Japanese Nuclear Power Plants using JHPES/JAESS

    International Nuclear Information System (INIS)

    Kojima, Mitsuhiro; Mimura, Masahiro; Yamaguchi, Osamu

    1998-01-01

    CRIEPI (Central Research Institute for Electric Power Industries) / HFC (Human Factors research Center) developed J-HPES (Japanese version of the Human Performance Enhancement System) based on the HPES, which was originally developed by INPO to analyze events resulting from human errors. J-HPES was systematized into a computer program named JAESS (J-HPES Analysis and Evaluation Support System) and both systems were distributed to all Japanese electric power companies to analyze events by themselves. CRIEPI / HFC also analyzed the incidents in Japanese nuclear power plants (NPPs) that were officially reported and identified as human-error related, using J-HPES / JAESS. These incidents have numbered up to 188 cases over the last 30 years. An outline of this analysis is given, and some preliminary findings are shown. (authors)

  17. An Analysis of Ripple and Error Fields Induced by a Blanket in the CFETR

    Science.gov (United States)

    Yu, Guanying; Liu, Xufeng; Liu, Songlin

    2016-10-01

    The Chinese Fusion Engineering Tokamak Reactor (CFETR) is an important intermediate device between ITER and DEMO. The Water Cooled Ceramic Breeder (WCCB) blanket, whose structural material is mainly Reduced Activation Ferritic/Martensitic (RAFM) steel, is one of the candidate conceptual blanket designs. The ripple and error fields induced by the RAFM steel in the WCCB are evaluated with the static magnetic analysis capability of the ANSYS code. A significant additional magnetic field is produced by the blanket, and it leads to an increased ripple field. The maximum ripple along the separatrix line reaches 0.53%, which exceeds the acceptable design value of 0.5%. In addition, when one blanket module is removed for heating purposes, the resulting error field is calculated to be seriously in conflict with the requirement. supported by National Natural Science Foundation of China (No. 11175207) and the National Magnetic Confinement Fusion Program of China (No. 2013GB108004)

  18. Measurements and their uncertainties a practical guide to modern error analysis

    CERN Document Server

    Hughes, Ifan G

    2010-01-01

    This hands-on guide is primarily intended to be used in undergraduate laboratories in the physical sciences and engineering. It assumes no prior knowledge of statistics. It introduces the necessary concepts where needed, with key points illustrated with worked examples and graphic illustrations. In contrast to traditional mathematical treatments it uses a combination of spreadsheet and calculus-based approaches, suitable as a quick and easy on-the-spot reference. The emphasis throughout is on practical strategies to be adopted in the laboratory. Error analysis is introduced at a level accessible to school leavers, and carried through to research level. Error calculation and propagation is presented through a series of rules-of-thumb, look-up tables and approaches amenable to computer analysis. The general approach uses the chi-square statistic extensively. Particular attention is given to hypothesis testing and extraction of parameters and their uncertainties by fitting mathematical models to experimental data....

  19. Proactive error analysis of ultrasound-guided axillary brachial plexus block performance.

    LENUS (Irish Health Repository)

    O'Sullivan, Owen

    2012-07-13

    Detailed description of the tasks anesthetists undertake during the performance of a complex procedure, such as ultrasound-guided peripheral nerve blockade, allows elements that are vulnerable to human error to be identified. We have applied 3 task analysis tools to one such procedure, namely, ultrasound-guided axillary brachial plexus blockade, with the intention that the results may form a basis to enhance training and performance of the procedure.

  20. Error analysis of the finite element and finite volume methods for some viscoelastic fluids

    Czech Academy of Sciences Publication Activity Database

    Lukáčová-Medviďová, M.; Mizerová, H.; She, B.; Stebel, Jan

    2016-01-01

    Roč. 24, č. 2 (2016), s. 105-123 ISSN 1570-2820 R&D Projects: GA ČR(CZ) GAP201/11/1304 Institutional support: RVO:67985840 Keywords : error analysis * Oldroyd-B type models * viscoelastic fluids Subject RIV: BA - General Mathematics Impact factor: 0.405, year: 2016 http://www.degruyter.com/view/j/jnma.2016.24.issue-2/jnma-2014-0057/jnma-2014-0057.xml

  1. Error Analysis of Explicit Partitioned Runge–Kutta Schemes for Conservation Laws

    KAUST Repository

    Hundsdorfer, Willem

    2014-08-27

    An error analysis is presented for explicit partitioned Runge–Kutta methods and multirate methods applied to conservation laws. The interfaces, across which different methods or time steps are used, lead to order reduction of the schemes. Along with cell-based decompositions, also flux-based decompositions are studied. In the latter case mass conservation is guaranteed, but it will be seen that the accuracy may deteriorate.

  2. Schur Complement Reduction in the Mixed-Hybrid Approximation of Darcy's Law: Rounding Error Analysis

    Czech Academy of Sciences Publication Activity Database

    Maryška, Jiří; Rozložník, Miroslav; Tůma, Miroslav

    2000-01-01

    Roč. 117, - (2000), s. 159-173 ISSN 0377-0427 R&D Projects: GA AV ČR IAA2030706; GA ČR GA201/98/P108 Institutional research plan: AV0Z1030915 Keywords : potential fluid flow problem * symmetric indefinite linear systems * Schur complement reduction * iterative methods * rounding error analysis Subject RIV: BA - General Mathematics Impact factor: 0.455, year: 2000

  3. Principal components analysis of reward prediction errors in a reinforcement learning task.

    Science.gov (United States)

    Sambrook, Thomas D; Goslin, Jeremy

    2016-01-01

    Models of reinforcement learning represent reward and punishment in terms of reward prediction errors (RPEs), quantitative signed terms describing the degree to which outcomes are better than expected (positive RPEs) or worse (negative RPEs). An electrophysiological component known as feedback related negativity (FRN) occurs at frontocentral sites 240-340 ms after feedback on whether a reward or punishment is obtained, and has been claimed to neurally encode an RPE. An outstanding question, however, is whether the FRN is sensitive to the size of both positive RPEs and negative RPEs. Previous attempts to answer this question have examined the simple effects of RPE size for positive RPEs and negative RPEs separately. However, this methodology can be compromised by overlap from components coding for unsigned prediction error size, or "salience", which are sensitive to the absolute size of a prediction error but not its valence. In our study, positive and negative RPEs were parametrically modulated using both reward likelihood and magnitude, with principal components analysis used to separate out overlying components. This revealed a single RPE encoding component responsive to the size of positive RPEs, peaking at ~330 ms, and occupying the delta frequency band. Other components responsive to unsigned prediction error size were shown, but no component sensitive to negative RPE size was found. Copyright © 2015 Elsevier Inc. All rights reserved.
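
    A minimal sketch of the analysis idea, not the authors' pipeline: simulate trial-wise feedback epochs containing an unsigned ("salience") component and a signed RPE component, run a principal components analysis across trials, and check which component's scores track the signed prediction error. All waveform shapes, amplitudes, and noise levels below are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_trials, n_samples = 200, 300
t = np.linspace(0.0, 0.6, n_samples)                 # 600 ms feedback-locked epoch

# two overlapping waveforms: an unsigned "salience" wave and a signed RPE wave
salience_wave = np.exp(-((t - 0.33) / 0.05) ** 2)
rpe_wave = np.exp(-((t - 0.30) / 0.08) ** 2)
rpe = rng.uniform(-1.0, 1.0, n_trials)               # signed prediction error per trial
epochs = (np.abs(rpe)[:, None] * salience_wave       # scales with unsigned RPE size
          + rpe[:, None] * rpe_wave                  # scales with signed RPE
          + 0.3 * rng.standard_normal((n_trials, n_samples)))

pca = PCA(n_components=5)
scores = pca.fit_transform(epochs)                   # trial-by-component scores
corr = [np.corrcoef(scores[:, k], rpe)[0, 1] for k in range(5)]
print(np.round(corr, 2))                             # which component tracks the signed RPE?
```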

  4. Error Analysis of Some Demand Simplifications in Hydraulic Models of Water Supply Networks

    Directory of Open Access Journals (Sweden)

    Joaquín Izquierdo

    2013-01-01

    Full Text Available Mathematical modeling of water distribution networks makes use of simplifications aimed to optimize the development and use of the mathematical models involved. Simplified models are used systematically by water utilities, frequently with no awareness of the implications of the assumptions used. Some simplifications are derived from the various levels of granularity at which a network can be considered. This is the case of some demand simplifications, specifically, when consumptions associated with a line are equally allocated to the ends of the line. In this paper, we present examples of situations where this kind of simplification produces models that are very unrealistic. We also identify the main variables responsible for the errors. By performing some error analysis, we assess to what extent such a simplification is valid. Using this information, guidelines are provided that enable the user to establish if a given simplification is acceptable or, on the contrary, supplies information that differs substantially from reality. We also develop easy to implement formulae that enable the allocation of inner line demand to the line ends with minimal error; finally, we assess the errors associated with the simplification and locate the points of a line where maximum discrepancies occur.

  5. A Conceptual Framework of Human Reliability Analysis for Execution Human Error in NPP Advanced MCRs

    International Nuclear Information System (INIS)

    Jang, In Seok; Kim, Ar Ryum; Seong, Poong Hyun; Jung, Won Dea

    2014-01-01

    The operation environment of Main Control Rooms (MCRs) in Nuclear Power Plants (NPPs) has changed with the adoption of new human-system interfaces that are based on computer-based technologies. The MCRs that include these digital and computer technologies, such as large display panels, computerized procedures, and soft controls, are called Advanced MCRs. Among the many features of Advanced MCRs, soft controls are a particularly important feature because the operation action in NPP Advanced MCRs is performed by soft control. Using soft controls such as a mouse and touch screens, operators can select a specific screen, then choose the controller, and finally manipulate the given devices. Due to the different interfaces between soft control and hardwired conventional-type control, different human error probabilities and a new Human Reliability Analysis (HRA) framework should be considered in the HRA for advanced MCRs. In other words, new human error modes should be considered for interface management tasks such as navigation tasks and icon (device) selection tasks on monitors, and a new HRA framework taking these newly generated human error modes into account should be considered. In this paper, a conceptual framework for a HRA method for the evaluation of soft control execution human error in advanced MCRs is suggested by analyzing soft control tasks.

  6. Post-Error Slowing in Patients With ADHD: A Meta-Analysis.

    Science.gov (United States)

    Balogh, Lívia; Czobor, Pál

    2016-12-01

    Post-error slowing (PES) is a cognitive mechanism for adaptive responses that reduces the probability of error in subsequent trials after an error. To date, no meta-analytic summary of individual studies has been conducted to assess whether ADHD patients differ from controls in PES. We identified 15 relevant publications, reporting 26 pairs of comparisons (ADHD, n = 1,053; healthy control, n = 614). Random-effects meta-analysis was used to determine the statistical effect size (ES) for PES. PES was diminished in the ADHD group as compared with controls, with an ES in the medium range (Cohen's d = 0.42). A significant group difference was observed in relation to the inter-stimulus interval (ISI): while healthy participants slowed down after an error during long (3,500 ms) compared with short ISIs (1,500 ms), ADHD participants sustained or even increased their speed. The pronounced group difference suggests that PES may be considered a behavioral indicator for differentiating ADHD patients from healthy participants. © The Author(s) 2014.
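
    The pooled effect size in such a meta-analysis is typically obtained with a random-effects model. The sketch below implements the DerSimonian-Laird estimator on invented per-study Cohen's d values and variances; the numbers are illustrative only and are not the data from the reviewed publications.

```python
import numpy as np

def dersimonian_laird(d, v):
    """Random-effects pooled effect size (DerSimonian-Laird).

    d : per-study effect sizes (e.g. Cohen's d for PES group differences)
    v : per-study sampling variances
    """
    d, v = np.asarray(d, float), np.asarray(v, float)
    w = 1.0 / v                                       # fixed-effect weights
    d_fixed = np.sum(w * d) / np.sum(w)
    q = np.sum(w * (d - d_fixed) ** 2)                # heterogeneity statistic Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(d) - 1)) / c)           # between-study variance
    w_star = 1.0 / (v + tau2)                         # random-effects weights
    d_re = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_re, d_re - 1.96 * se, d_re + 1.96 * se   # estimate and 95% CI

# illustrative effect sizes only
d = [0.55, 0.30, 0.48, 0.35, 0.60]
v = [0.04, 0.05, 0.03, 0.06, 0.05]
print(dersimonian_laird(d, v))
```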

  7. Inversion, error analysis, and validation of GPS/MET occultation data

    Energy Technology Data Exchange (ETDEWEB)

    Steiner, A.K.; Kirchengast, G. [Graz Univ. (Austria). Inst. fuer Meteorologie und Geophysik; Ladreiter, H.P.

    1999-01-01

    The global positioning system meteorology (GPS/MET) experiment was the first practical demonstration of global navigation satellite system (GNSS)-based active limb sounding employing the radio occultation technique. This method measures, as principal observable and with millimetric accuracy, the excess phase path (relative to propagation in vacuum) of GNSS-transmitted radio waves caused by refraction during passage through the Earth's neutral atmosphere and ionosphere in limb geometry. It shows great potential utility for weather and climate system studies in providing a unique combination of global coverage, high vertical resolution and accuracy, long-term stability, and all-weather capability. We first describe our GPS/MET data processing scheme from excess phases via bending angles to the neutral atmospheric parameters refractivity, density, pressure and temperature. Special emphasis is given to ionospheric correction methodology and the inversion of bending angles to refractivities, where we introduce a matrix inversion technique (instead of the usual integral inversion). The matrix technique is shown to lead to results identical to integral inversion but is more directly extendable to inversion by optimal estimation. The quality of GPS/MET-derived profiles is analyzed with an error estimation analysis employing a Monte Carlo technique. We consider statistical errors together with systematic errors due to upper-boundary initialization of the retrieval by a priori bending angles. Perfect initialization and properly smoothed statistical errors allow for better than 1 K temperature retrieval accuracy up to the stratopause. 28 refs.

  8. Trend analysis of human error events and assessment of their proactive prevention measure at Rokkasho reprocessing plant

    International Nuclear Information System (INIS)

    Yamazaki, Satoru; Tanaka, Izumi; Wakabayashi, Toshio

    2012-01-01

    A trend analysis of human error events is important for preventing the recurrence of human error events. We propose a new method for identifying common characteristics from the results of trend analysis, such as latent weaknesses of the organization, and a management process for strategic error prevention. In this paper, we describe a trend analysis method for human error events that have been accumulated in the organization and the utilization of the results of trend analysis to prevent accidents proactively. Although the systematic analysis of human error events, the monitoring of their overall trend, and the utilization of the analyzed results have been examined for plant operation, such information has never been utilized completely. Sharing information on human error events and analyzing their causes lead to the clarification of problems in management and human factors. This new method was applied to the human error events that occurred in the Rokkasho reprocessing plant from October 2010. Results revealed that the output of this method is effective in judging the error prevention plan and that the number of human error events is reduced to about 50% of those observed in 2009 and 2010. (author)

  9. A trend analysis of human error events for proactive prevention of accidents. Methodology development and effective utilization

    International Nuclear Information System (INIS)

    Hirotsu, Yuko; Ebisu, Mitsuhiro; Aikawa, Takeshi; Matsubara, Katsuyuki

    2006-01-01

    This paper describes methods for analyzing human error events that have been accumulated in the individual plant and for utilizing the results to prevent accidents proactively. Firstly, a categorization framework of trigger actions and causal factors of human error events was reexamined, and the procedure to analyze human error events was reviewed based on the framework. Secondly, a method for identifying the common characteristics of trigger action data and of causal factor data accumulated by analyzing human error events was clarified. In addition, to utilize the results of trend analysis effectively, methods to develop teaching material for safety education, to develop checkpoints for error prevention and to introduce an error management process for strategic error prevention were proposed. (author)

  10. Error rates in bite mark analysis in an in vivo animal model.

    Science.gov (United States)

    Avon, S L; Victor, C; Mayhall, J T; Wood, R E

    2010-09-10

    Recent judicial decisions have specified that one foundation of reliability of comparative forensic disciplines is description of both scientific approach used and calculation of error rates in determining the reliability of an expert opinion. Thirty volunteers were recruited for the analysis of dermal bite marks made using a previously established in vivo porcine-skin model. Ten participants were recruited from three separate groups: dentists with no experience in forensics, dentists with an interest in forensic odontology, and board-certified diplomates of the American Board of Forensic Odontology (ABFO). Examiner demographics and measures of experience in bite mark analysis were collected for each volunteer. Each participant received 18 completely documented, simulated in vivo porcine bite mark cases and three paired sets of human dental models. The paired maxillary and mandibular models were identified as suspect A, suspect B, and suspect C. Examiners were tasked to determine, using an analytic method of their own choosing, whether each bite mark of the 18 bite mark cases provided was attributable to any of the suspect dentitions provided. Their findings were recorded on a standardized recording form. The results of the study demonstrated that the group of inexperienced examiners often performed as well as the board-certified group, and both inexperienced and board-certified groups performed better than those with an interest in forensic odontology that had not yet received board certification. Incorrect suspect attributions (possible false inculpation) were most common among this intermediate group. Error rates were calculated for each of the three observer groups for each of the three suspect dentitions. This study demonstrates that error rates can be calculated using an animal model for human dermal bite marks, and although clinical experience is useful, other factors may be responsible for accuracy in bite mark analysis. Further, this study demonstrates
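
    Group-wise error rates of the kind reported here reduce to tabulating false inculpations (attributing a bite to the wrong dentition) and false exclusions per examiner group. A minimal sketch with hypothetical decision records follows; the column names and values are assumptions, not the study's data.

```python
import pandas as pd

# hypothetical decision records: one row per examiner x case x suspect comparison
records = pd.DataFrame({
    "group":    ["novice", "novice", "interest", "interest", "diplomate", "diplomate"],
    "decision": [True, False, True, True, False, False],  # bite attributed to this suspect?
    "truth":    [True, False, False, True, False, True],  # suspect actually made the bite?
})

records["false_inculpation"] = records["decision"] & ~records["truth"]
records["false_exclusion"] = ~records["decision"] & records["truth"]
rates = records.groupby("group")[["false_inculpation", "false_exclusion"]].mean()
print(rates)   # per-group error rates
```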

  11. Models for Ballistic Wind Measurement Error Analysis. Volume II. Users’ Manual.

    Science.gov (United States)

    1983-01-01

    [Garbled scanned report form; recoverable bibliographic details: Models for Ballistic Wind Measurement Error Analysis, Volume II: Users' Manual, report ASL-CR-83-0008-1 (accession AD-A129 360), New Mexico State University, Las Cruces, Physical Science Laboratory.]

  12. Space Trajectory Error Analysis Program (STEAP) for halo orbit missions. Volume 1: Analytic and user's manual

    Science.gov (United States)

    Byrnes, D. V.; Carney, P. C.; Underwood, J. W.; Vogt, E. D.

    1974-01-01

    Development, test, conversion, and documentation of computer software for the mission analysis of missions to halo orbits about libration points in the earth-sun system is reported. The software consisting of two programs called NOMNAL and ERRAN is part of the Space Trajectories Error Analysis Programs (STEAP). The program NOMNAL targets a transfer trajectory from Earth on a given launch date to a specified halo orbit on a required arrival date. Either impulsive or finite thrust insertion maneuvers into halo orbit are permitted by the program. The transfer trajectory is consistent with a realistic launch profile input by the user. The second program ERRAN conducts error analyses of the targeted transfer trajectory. Measurements including range, doppler, star-planet angles, and apparent planet diameter are processed in a Kalman-Schmidt filter to determine the trajectory knowledge uncertainty. Execution errors at injection, midcourse correction and orbit insertion maneuvers are analyzed along with the navigation uncertainty to determine trajectory control uncertainties and fuel-sizing requirements. The program is also capable of generalized covariance analyses.

  13. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    Directory of Open Access Journals (Sweden)

    C. Knote

    2018-02-01

    Full Text Available The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  14. BEATBOX v1.0: Background Error Analysis Testbed with Box Models

    Science.gov (United States)

    Knote, Christoph; Barré, Jérôme; Eckl, Max

    2018-02-01

    The Background Error Analysis Testbed (BEATBOX) is a new data assimilation framework for box models. Based on the BOX Model eXtension (BOXMOX) to the Kinetic Pre-Processor (KPP), this framework allows users to conduct performance evaluations of data assimilation experiments, sensitivity analyses, and detailed chemical scheme diagnostics from an observation simulation system experiment (OSSE) point of view. The BEATBOX framework incorporates an observation simulator and a data assimilation system with the possibility of choosing ensemble, adjoint, or combined sensitivities. A user-friendly, Python-based interface allows for the tuning of many parameters for atmospheric chemistry and data assimilation research as well as for educational purposes, for example observation error, model covariances, ensemble size, perturbation distribution in the initial conditions, and so on. In this work, the testbed is described and two case studies are presented to illustrate the design of a typical OSSE experiment, data assimilation experiments, a sensitivity analysis, and a method for diagnosing model errors. BEATBOX is released as an open source tool for the atmospheric chemistry and data assimilation communities.

  15. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    Science.gov (United States)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which the experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8% with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
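
    Extrapolation from a base grid to an "infinite-size" grid is commonly done with Richardson extrapolation over three systematically refined grids. The sketch below shows the standard observed-order and extrapolation formulas applied to invented coefficient values; it is not the specific procedure or data of the Ares I study.

```python
import numpy as np

def richardson_extrapolate(f_coarse, f_medium, f_fine, r):
    """Estimate the infinite-grid value from three systematically refined grids.

    f_coarse, f_medium, f_fine : a computed coefficient on successively finer grids
    r : constant grid refinement ratio between levels
    """
    p = np.log((f_coarse - f_medium) / (f_medium - f_fine)) / np.log(r)  # observed order
    f_inf = f_fine + (f_fine - f_medium) / (r ** p - 1.0)                # extrapolated value
    return f_inf, p

# illustrative normal-force coefficients on coarse, medium, fine grids (not Ares I values)
cn = [2.90, 2.74, 2.68]
cn_inf, p = richardson_extrapolate(*cn, r=2.0)
error_pct = 100 * abs(cn[2] - cn_inf) / abs(cn_inf)
print(f"extrapolated CN = {cn_inf:.3f}, observed order = {p:.2f}, fine-grid error = {error_pct:.1f} %")
```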

  16. Optimization of isotherm models for pesticide sorption on biopolymer-nanoclay composite by error analysis.

    Science.gov (United States)

    Narayanan, Neethu; Gupta, Suman; Gajbhiye, V T; Manjaiah, K M

    2017-04-01

    A carboxymethyl cellulose-nano organoclay (nano montmorillonite modified with 35-45 wt% dimethyl dialkyl (C14-C18) amine (DMDA)) composite was prepared by the solution intercalation method. The prepared composite was characterized by infrared spectroscopy (FTIR), X-ray diffraction (XRD) and scanning electron microscopy (SEM). The composite was utilized for its pesticide sorption efficiency for atrazine, imidacloprid and thiamethoxam. The sorption data were fitted to Langmuir and Freundlich isotherms using linear and non-linear methods. The linear regression method suggested best fitting of the sorption data to Type II Langmuir and Freundlich isotherms. In order to avoid the bias resulting from linearization, seven different error parameters were also analyzed by the non-linear regression method. The non-linear error analysis suggested that the sorption data fitted the Langmuir model better than the Freundlich model. The maximum sorption capacity, Q0 (μg/g), was highest for imidacloprid (2000), followed by thiamethoxam (1667) and atrazine (1429). The study suggests that the coefficient of determination from linear regression alone cannot be used for comparing the fits of the Langmuir and Freundlich models, and non-linear error analysis needs to be done to avoid inaccurate results. Copyright © 2017 Elsevier Ltd. All rights reserved.
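
    A minimal sketch of the non-linear fitting and error analysis described above: fit Langmuir and Freundlich forms directly by least squares and compare error measures such as SSE, average relative error, and chi-square. The equilibrium data and starting parameters below are invented for illustration, not taken from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, q0, b):
    return q0 * b * ce / (1.0 + b * ce)

def freundlich(ce, kf, n):
    return kf * ce ** (1.0 / n)

# hypothetical equilibrium data (Ce in mg/L, qe in ug/g)
ce = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
qe = np.array([240.0, 420.0, 670.0, 980.0, 1270.0, 1490.0])

results = {}
for name, model, p0 in [("Langmuir", langmuir, (2000.0, 0.1)),
                        ("Freundlich", freundlich, (400.0, 2.0))]:
    popt, _ = curve_fit(model, ce, qe, p0=p0, maxfev=10000)
    resid = qe - model(ce, *popt)
    results[name] = {
        "params": np.round(popt, 3),
        "SSE": np.sum(resid ** 2),                       # sum of squared errors
        "ARE": 100 * np.mean(np.abs(resid) / qe),        # average relative error (%)
        "chi2": np.sum(resid ** 2 / model(ce, *popt)),   # chi-square statistic
    }

for name, r in results.items():
    print(name, r)
```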

  17. Wavelet analysis enables system-independent texture analysis of optical coherence tomography images.

    Science.gov (United States)

    Lingley-Papadopoulos, Colleen A; Loew, Murray H; Zara, Jason M

    2009-01-01

    Texture analysis for tissue characterization is a current area of optical coherence tomography (OCT) research. We discuss some of the differences between OCT systems and the effects those differences have on the resulting images and subsequent image analysis. In addition, as an example, two algorithms for the automatic recognition of bladder cancer are compared: one that was developed on a single system with no consideration for system differences, and one that was developed to address the issues associated with system differences. The first algorithm had a sensitivity of 73% and specificity of 69% when tested using leave-one-out cross-validation on data taken from a single system. When tested on images from another system with a different central wavelength, however, the method classified all images as cancerous regardless of the true pathology. By contrast, with the use of wavelet analysis and the removal of system-dependent features, the second algorithm reported sensitivity and specificity values of 87 and 58%, respectively, when trained on images taken with one imaging system and tested on images taken with another.

  18. Multimodal imaging analysis of single-photon emission computed tomography and magnetic resonance tomography for improving diagnosis of Parkinson's disease

    International Nuclear Information System (INIS)

    Barthel, H.; Georgi, P.; Slomka, P.; Dannenberg, C.; Kahn, T.

    2000-01-01

    Parkinson's disease (PD) is characterized by a degeneration of nigrostriatal dopaminergic neurons, which can be imaged with ¹²³I-labeled 2β-carbomethoxy-3β-(4-iodophenyl)tropane ([¹²³I]β-CIT) and single-photon emission computed tomography (SPECT). However, the quality of the region of interest (ROI) technique used for quantitative analysis of SPECT data is compromised by limited anatomical information in the images. We investigated whether the diagnosis of PD can be improved by combining the use of SPECT images with morphological image data from magnetic resonance imaging (MRI)/computed tomography (CT). We examined 27 patients (8 men, 19 women; aged 55±13 years) with PD (Hoehn and Yahr stage 2.1±0.8) by high-resolution [¹²³I]β-CIT SPECT (185-200 MBq, Ceraspect camera). SPECT images were analyzed both by a unimodal technique (ROIs defined directly within the SPECT studies) and a multimodal technique (ROIs defined within individual MRI/CT studies and transferred to the corresponding interactively coregistered SPECT studies). [¹²³I]β-CIT binding ratios (cerebellum as reference), which were obtained for heads of caudate nuclei (CA), putamina (PU), and global striatal structures, were compared with clinical parameters. Differences between contra- and ipsilateral (related to symptom dominance) striatal [¹²³I]β-CIT binding ratios proved to be larger in the multimodal ROI technique than in the unimodal approach (e.g., for PU: 1.2*** vs. 0.7**). Binding ratios obtained by the unimodal ROI technique were significantly correlated with those of the multimodal technique (e.g., for CA: y=0.97x+2.8; r=0.70; P com subscore (r=-0.49* vs. -0.32). These results show that the impact of [¹²³I]β-CIT SPECT for diagnosing PD is affected by the method used to analyze the SPECT images. The described multimodal approach, which is based on coregistration of SPECT and morphological imaging data, leads to improved determination of the degree of this dopaminergic disorder.

  19. The development and error analysis of a kinematic parameters based spatial positioning method for an orthopedic navigation robot system.

    Science.gov (United States)

    Pei, Baoqing; Zhu, Gang; Wang, Yu; Qiao, Huiting; Chen, Xiangqian; Wang, Binbin; Li, Xiaoyun; Zhang, Weijun; Liu, Wenyong; Fan, Yubo

    2017-09-01

    Spatial positioning is the key function of a surgical navigation robot system, and accuracy is the most important performance index of such a system. The kinematic parameters of a six degrees of freedom (DOF) robot arm were used to form the transformation from intraoperative fluoroscopy images to a robot's coordinate system without C-arm calibration and to solve the redundant DOF problem. The influences of three typical error sources and their combination on the final navigation error were investigated through Monte Carlo simulation. The navigation error of the proposed method is less than 0.6 mm, and the feasibility was verified through cadaver experiments. Error analysis suggests that the robot kinematic error has a linear relationship with final navigation error, while the image error and gauge error have nonlinear influences. This kinematic parameters based method can provide accurate and convenient navigation for orthopedic surgeries. The result of error analysis will help error design and assignment for surgical robots. Copyright © 2016 John Wiley & Sons, Ltd.
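
    A Monte Carlo study of this type samples each error source from an assumed distribution, pushes the samples through a propagation model, and summarizes the resulting navigation error. The sketch below uses a deliberately simplified, hypothetical propagation model and error magnitudes; it is not the kinematic model of the cited system.

```python
import numpy as np

rng = np.random.default_rng(42)
n_trials = 100_000

# hypothetical 1-sigma magnitudes of the three error sources (mm)
sigma_kinematic, sigma_image, sigma_gauge = 0.15, 0.20, 0.10

kin = rng.normal(0.0, sigma_kinematic, n_trials)
img = rng.normal(0.0, sigma_image, n_trials)
gau = rng.normal(0.0, sigma_gauge, n_trials)

# illustrative propagation model: kinematic error enters linearly,
# image and gauge errors enter through a nonlinear registration term
nav_error = np.abs(1.0 * kin + 0.8 * np.sqrt(img ** 2 + gau ** 2))

print(f"mean navigation error = {nav_error.mean():.3f} mm")
print(f"95th percentile error = {np.percentile(nav_error, 95):.3f} mm")
```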

  20. Comparison of word-supply and word-analysis error-correction procedures on oral reading by mentally retarded children.

    Science.gov (United States)

    Singh, J; Singh, N N

    1985-07-01

    An alternating treatments design was used to measure the differential effects of two error-correction procedures (word supply and word analysis) and a no-training control condition on the number of oral-reading errors made by four moderately mentally retarded children. Results showed that when compared to the no-training control condition, both error-correction procedures greatly reduced the number of oral-reading errors of all subjects. The word-analysis method, however, was significantly more effective than was word supply. In terms of collateral behavior, the number of self-corrections of errors increased under both intervention conditions when compared to the baseline and no-training control conditions. For 2 subjects there was no difference in the rate of self-corrections under word analysis and word supply but for the other 2, a greater rate was achieved under word analysis.

  1. Accidental hypoglycaemia caused by an arterial flush drug error: a case report and contributory causes analysis.

    Science.gov (United States)

    Gupta, K J; Cook, T M

    2013-11-01

    In 2008, the National Patient Safety Agency (NPSA) issued a Rapid Response Report concerning problems with infusions and sampling from arterial lines. The risk of blood sample contamination from glucose-containing arterial line infusions was highlighted and changes in arterial line management were recommended. Despite this guidance, errors with arterial line infusions remain common. We report a case of severe hypoglycaemia and neuroglycopenia caused by glucose contamination of arterial line blood samples. This case occurred despite the implementation of the practice changes recommended in the 2008 NPSA alert. We report an analysis of the factors contributing to this incident using the Yorkshire Contributory Factors Framework. We discuss the nature of the errors that occurred and list the consequent changes in practice implemented on our unit to prevent recurrence of this incident, which go well beyond those recommended by the NPSA in 2008. © 2013 The Association of Anaesthetists of Great Britain and Ireland.

  2. Error analysis of pronouns by normal and language-impaired children.

    Science.gov (United States)

    Moore, M E

    1995-03-01

    Recent research has identified pronounced weaknesses in specifically language-impaired (SLI) children's development in areas other than grammatical morphemes. A problem with pronoun case marking was reported to be more prevalent in SLI children than in normally developing children matched by mean length of utterance. However, results from the present study do not support that finding. Spontaneous utterances from 3 conversational contexts were generated by 3 groups of normal and SLI children and were analyzed for accuracy of pronoun usage. Third person singular pronouns were judged according to case, gender, number, person and cohesion based on their linguistic and nonlinguistic contexts. Results indicated that SLI children exhibited more total errors than their chronological-age peers, but not more than their language-level peers. An analysis of error types indicated a similar pattern in pronoun case marking.

  3. Suppressing carrier removal error in the Fourier transform method for interferogram analysis

    International Nuclear Information System (INIS)

    Fan, Qi; Yang, Hongru; Li, Gaoping; Zhao, Jianlin

    2010-01-01

    A new carrier removal method for interferogram analysis using the Fourier transform is presented. The proposed method can be used to suppress the carrier removal error as well as the spectral leakage error. First, the carrier frequencies are estimated with the spectral centroid of the up sidelobe of the apodized interferogram, and then the up sidelobe can be shifted to the origin in the frequency domain by multiplying the original interferogram by a constructed plane reference wave. The influence of the carrier frequencies without an integer multiple of the frequency interval and the window function for apodization of the interferogram can be avoided in our work. The simulation and experimental results show that this method is effective for phase measurement with a high accuracy from a single interferogram
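
    A sketch of the two steps described above, applied to a synthetic fringe pattern: estimate the carrier from the spectral centroid of one sidelobe (so the estimate is not restricted to integer multiples of the frequency interval), then multiply by a constructed plane reference wave to shift that sidelobe toward the origin. Sizes, frequencies, and the test phase are assumptions, and the half-plane centroid is only a rough stand-in for the paper's procedure.

```python
import numpy as np

# synthetic interferogram with a linear carrier and a smooth test phase
N = 256
x, y = np.meshgrid(np.arange(N), np.arange(N))
fx, fy = 12.3 / N, 7.8 / N                          # true carrier frequencies (cycles/pixel)
test_phase = np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 40.0 ** 2))
fringe = 1.0 + 0.8 * np.cos(2 * np.pi * (fx * x + fy * y) + test_phase)

# estimate the carrier from the spectral centroid of the positive-frequency sidelobe
spectrum = np.fft.fftshift(np.fft.fft2(fringe - fringe.mean()))
freqs = np.fft.fftshift(np.fft.fftfreq(N))
FX, FY = np.meshgrid(freqs, freqs)
sidelobe = np.abs(spectrum) * (FX > 0)              # keep only the positive-fx sidelobe
fx_est = np.sum(FX * sidelobe) / np.sum(sidelobe)   # centroid gives sub-bin frequencies
fy_est = np.sum(FY * sidelobe) / np.sum(sidelobe)

# remove the carrier by multiplying with a constructed plane reference wave
reference = np.exp(-2j * np.pi * (fx_est * x + fy_est * y))
baseband = (fringe - fringe.mean()) * reference     # sidelobe shifted toward the origin
print(f"estimated carrier ({fx_est:.4f}, {fy_est:.4f}) vs true ({fx:.4f}, {fy:.4f})")
```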

  4. POSTPROCESSING MIXED FINITE ELEMENT METHODS FOR SOLVING CAHN-HILLIARD EQUATION: METHODS AND ERROR ANALYSIS

    Science.gov (United States)

    Wang, Wansheng; Chen, Long; Zhou, Jie

    2015-01-01

    A postprocessing technique for mixed finite element methods for the Cahn-Hilliard equation is developed and analyzed. Once the mixed finite element approximations have been computed at a fixed time on the coarser mesh, the approximations are postprocessed by solving two decoupled Poisson equations in an enriched finite element space (either on a finer grid or a higher-order space) for which many fast Poisson solvers can be applied. The nonlinear iteration is only applied to a much smaller problem and the computational cost using Newton and direct solvers is negligible compared with the cost of the linear problem. The analysis presented here shows that this technique retains the optimal rate of convergence for both the concentration and the chemical potential approximations. The corresponding error estimates obtained in our paper, especially the negative-norm error estimates, are non-trivial and differ from the existing results in the literature. PMID:27110063

  5. Residents' surgical performance during the laboratory years: an analysis of rule-based errors.

    Science.gov (United States)

    Nathwani, Jay N; Wise, Brett J; Garren, Margaret E; Mohamadipanah, Hossein; Van Beek, Nicole; DiMarco, Shannon M; Pugh, Carla M

    2017-11-01

    Nearly one-third of surgical residents will enter into academic development during their surgical residency by dedicating time to a research fellowship for 1-3 y. Major interest lies in understanding how laboratory residents' surgical skills are affected by minimal clinical exposure during academic development. A widely held concern is that the time away from clinical exposure results in surgical skills decay. This study examines the impact of the academic development years on residents' operative performance. We hypothesize that the use of repeated, annual assessments may result in learning even without individual feedback on participants' simulated performance. Surgical performance data were collected from laboratory residents (postgraduate years 2-5) during the summers of 2014, 2015, and 2016. Residents had 15 min to complete a shortened, simulated laparoscopic ventral hernia repair procedure. Final hernia repair skins from all participants were scored using a previously validated checklist. An analysis of variance test compared the mean performance scores of repeat participants to those of first-time participants. Twenty-seven (37% female) laboratory residents provided 2-year assessment data over the 3-year span of the study. Second-time performance revealed improvement from a mean score of 14 (standard error = 1.0) in the first year to 17.2 (SD = 0.9) in the second year (F[1, 52] = 5.6, P = 0.022). Detailed analysis demonstrated improvement in performance for 3 grading criteria that were considered to be rule-based errors. There was no improvement in operative strategy errors. Analysis of longitudinal performance of laboratory residents shows higher scores for repeat participants in the category of rule-based errors. These findings suggest that laboratory residents can learn from rule-based mistakes when provided with annual performance-based assessments. This benefit was not seen with operative strategy errors and has important implications for

  6. High‐resolution trench photomosaics from image‐based modeling: Workflow and error analysis

    Science.gov (United States)

    Reitman, Nadine G.; Bennett, Scott E. K.; Gold, Ryan D.; Briggs, Richard; Duross, Christopher

    2015-01-01

    Photomosaics are commonly used to construct maps of paleoseismic trench exposures, but the conventional process of manually using image‐editing software is time consuming and produces undesirable artifacts and distortions. Herein, we document and evaluate the application of image‐based modeling (IBM) for creating photomosaics and 3D models of paleoseismic trench exposures, illustrated with a case‐study trench across the Wasatch fault in Alpine, Utah. Our results include a structure‐from‐motion workflow for the semiautomated creation of seamless, high‐resolution photomosaics designed for rapid implementation in a field setting. Compared with conventional manual methods, the IBM photomosaic method provides a more accurate, continuous, and detailed record of paleoseismic trench exposures in approximately half the processing time and 15%–20% of the user input time. Our error analysis quantifies the effect of the number and spatial distribution of control points on model accuracy. For this case study, an ∼87 m² exposure of a benched trench photographed at viewing distances of 1.5–7 m yields a model with <2 cm root mean square error (RMSE) with as few as six control points. RMSE decreases as more control points are implemented, but the gains in accuracy are minimal beyond 12 control points. Spreading control points throughout the target area helps to minimize error. We propose that 3D digital models and corresponding photomosaics should be standard practice in paleoseismic exposure archiving. The error analysis serves as a guide for future investigations that seek balance between speed and accuracy during photomosaic and 3D model construction.
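
    The accuracy metric used above is the root mean square error of model-derived control-point coordinates against their surveyed positions. A minimal sketch with invented coordinates follows.

```python
import numpy as np

# hypothetical control-point coordinates in meters: surveyed ("truth") vs. model-derived
surveyed = np.array([[0.00, 0.00, 0.00],
                     [4.98, 0.02, 0.01],
                     [5.01, 2.49, 0.00],
                     [0.03, 2.51, 0.02]])
modeled = np.array([[0.01, 0.01, 0.01],
                    [4.99, 0.01, 0.02],
                    [5.02, 2.50, 0.01],
                    [0.02, 2.52, 0.00]])

residuals = modeled - surveyed
rmse = np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))   # 3D RMSE over all points
print(f"rmse = {100 * rmse:.2f} cm")
```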

  7. Technology-related medication errors in a tertiary hospital: a 5-year analysis of reported medication incidents.

    Science.gov (United States)

    Samaranayake, N R; Cheung, S T D; Chui, W C M; Cheung, B M Y

    2012-12-01

    Healthcare technology is meant to reduce medication errors. The objective of this study was to assess unintended errors related to technologies in the medication use process. Medication incidents reported from 2006 to 2010 in a main tertiary care hospital were analysed by a pharmacist and technology-related errors were identified. Technology-related errors were further classified as socio-technical errors and device errors. This analysis was conducted using data from medication incident reports which may represent only a small proportion of medication errors that actually take place in a hospital. Hence, interpretation of results must be tentative. A total of 1538 medication incidents were reported. 17.1% of all incidents were technology-related, of which only 1.9% were device errors, whereas most were socio-technical errors (98.1%). Of these, 61.2% were linked to computerised prescription order entry, 23.2% to bar-coded patient identification labels, 7.2% to infusion pumps, 6.8% to computer-aided dispensing label generation and 1.5% to other technologies. The immediate causes for technology-related errors included poor interface between user and computer (68.1%), improper procedures or rule violations (22.1%), poor interface between user and infusion pump (4.9%), technical defects (1.9%) and others (3.0%). In 11.4% of the technology-related incidents, the error was detected after the drug had been administered. A considerable proportion of all incidents were technology-related. Most errors were due to socio-technical issues. Unintended and unanticipated errors may happen when using technologies. Therefore, when using technologies, system improvement, awareness, training and monitoring are needed to minimise medication errors. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  8. A Cross-sectional Analysis Investigating Organizational Factors That Influence Near-Miss Error Reporting Among Hospital Pharmacists.

    Science.gov (United States)

    Patterson, Mark E; Pace, Heather A

    2016-06-01

    Underreporting near-miss errors undermines hospitals' ability to improve patient safety. The objective of this analysis was to determine the extent to which punitive work climate, inadequate error feedback to staff, or insufficient preventative procedures are associated with decreased frequency of near-miss error reporting among hospital pharmacists. Survey data were obtained from the Agency of Healthcare Research and Quality 2010 Hospital Survey on Patient Safety Culture. Near-miss error reporting was defined using a Likert scale response to the question, "When a mistake is made, but is caught and corrected before affecting the patient, how often is this reported?" Work climate, error feedback to staff, and preventative procedures were defined similarly using responses to survey questions. Multivariate ordinal regressions estimated the likelihood of agreeing that near-miss errors were rarely reported, conditional upon perceived levels of punitive work climate, error feedback, or preventative procedures. Pharmacists disagreeing that procedures were sufficient and that feedback on errors was adequate were more likely to report that near-miss errors were rarely reported (odds ratio [OR], 2.5; 95% confidence interval [CI], 1.7-3.8; OR, 3.5; 95% CI, 2.5-5.1). Those agreeing that mistakes were held against them were equally likely as those disagreeing to report that errors were rarely reported (OR, 0.84; 95% CI, 0.61-1.1). Inadequate error feedback to staff and insufficient preventative procedures increase the likelihood that near-miss errors will be underreported. Hospitals seeking to improve near-miss error reporting should improve error-reporting infrastructures to enable feedback, which, in turn, would create a more preventative system that improves patient safety.

  9. Space Trajectories Error Analysis (STEAP) Programs. Volume 1: Analytic manual, update

    Science.gov (United States)

    1971-01-01

    Manual revisions are presented for the modified and expanded STEAP series. STEAP 2 is composed of three independent but related programs: NOMNAL for the generation of n-body nominal trajectories performing a number of deterministic guidance events; ERRAN for the linear error analysis and generalized covariance analysis along specific targeted trajectories; and SIMUL for testing the mathematical models used in the navigation and guidance process. The analytic manual provides general problem description, formulation, and solution and the detailed analysis of subroutines. The programmers' manual gives descriptions of the overall structure of the programs as well as the computational flow and analysis of the individual subroutines. The user's manual provides information on the input and output quantities of the programs. These are updates to N69-36472 and N69-36473.

  10. Error analysis of supercritical water correlations using ATHLET system code under DHT conditions

    Energy Technology Data Exchange (ETDEWEB)

    Samuel, J., E-mail: jeffrey.samuel@uoit.ca [Univ. of Ontario Inst. of Tech., Oshawa, ON (Canada)

    2014-07-01

    The thermal-hydraulic computer code ATHLET (Analysis of THermal-hydraulics of LEaks and Transients) is used for analysis of anticipated and abnormal plant transients, including safety analysis of Light Water Reactors (LWRs) and Russian Graphite-Moderated High Power Channel-type Reactors (RBMKs). The range of applicability of ATHLET has been extended to supercritical water by updating the fluid- and transport-properties packages, thus enabling the code to be used in the analysis of SuperCritical Water-cooled Reactors (SCWRs). Several well-known heat-transfer correlations for supercritical fluids were added to the ATHLET code and a numerical model was created to represent an experimental test section. In this work, the error in the Heat Transfer Coefficient (HTC) calculation by the ATHLET model is studied along with the ability of the various correlations to predict different heat transfer regimes. (author)

  11. Error Analysis System for Spacecraft Navigation Using the Global Positioning System (GPS)

    Science.gov (United States)

    Truong, S. H.; Hart, R. C.; Hartman, K. R.; Tomcsik, T. L.; Searl, J. E.; Bernstein, A.

    1997-01-01

    The Flight Dynamics Division (FDD) at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC) is currently developing improved space-navigation filtering algorithms to use the Global Positioning System (GPS) for autonomous real-time onboard orbit determination. In connection with a GPS technology demonstration on the Small Satellite Technology Initiative (SSTI)/Lewis spacecraft, FDD analysts and programmers have teamed with the GSFC Guidance, Navigation, and Control Branch to develop the GPS Enhanced Orbit Determination Experiment (GEODE) system. The GEODE system consists of a Kalman filter operating as a navigation tool for estimating the position, velocity, and additional states required to accurately navigate the orbiting Lewis spacecraft by using astrodynamic modeling and GPS measurements from the receiver. A parallel effort at the FDD is the development of a GPS Error Analysis System (GEAS) that will be used to analyze and improve navigation filtering algorithms during development phases and during in-flight calibration. For GEAS, the Kalman filter theory is extended to estimate the errors in position, velocity, and other error states of interest. The estimation of errors in physical variables at regular intervals will allow the time, cause, and effect of navigation system weaknesses to be identified. In addition, by modeling a sufficient set of navigation system errors, a system failure that causes an observed error anomaly can be traced and accounted for. The GEAS software is formulated using Object Oriented Design (OOD) techniques implemented in the C++ programming language on a Sun SPARC workstation. The Phase 1 of this effort is the development of a basic system to be used to evaluate navigation algorithms implemented in the GEODE system. This paper presents the GEAS mathematical methodology, systems and operations concepts, and software design and implementation. Results from the use of the basic system to evaluate

  12. A POSTERIORI ERROR ANALYSIS OF TWO STAGE COMPUTATION METHODS WITH APPLICATION TO EFFICIENT DISCRETIZATION AND THE PARAREAL ALGORITHM.

    Science.gov (United States)

    Chaudhry, Jehanzeb Hameed; Estep, Don; Tavener, Simon; Carey, Varis; Sandelin, Jeff

    2016-01-01

    We consider numerical methods for initial value problems that employ a two stage approach consisting of solution on a relatively coarse discretization followed by solution on a relatively fine discretization. Examples include adaptive error control, parallel-in-time solution schemes, and efficient solution of adjoint problems for computing a posteriori error estimates. We describe a general formulation of two stage computations then perform a general a posteriori error analysis based on computable residuals and solution of an adjoint problem. The analysis accommodates various variations in the two stage computation and in formulation of the adjoint problems. We apply the analysis to compute "dual-weighted" a posteriori error estimates, to develop novel algorithms for efficient solution that take into account cancellation of error, and to the Parareal Algorithm. We test the various results using several numerical examples.
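
    For readers unfamiliar with the two-stage structure, the sketch below shows a bare-bones Parareal iteration: a cheap coarse propagator provides the serial prediction and an accurate fine propagator supplies the parallelizable correction. The propagators, test equation, and grid are chosen only for illustration and carry none of the paper's adjoint-based error machinery.

```python
import numpy as np

def coarse(u, t0, t1, f):
    """One forward-Euler step as the cheap coarse propagator G."""
    return u + (t1 - t0) * f(t0, u)

def fine(u, t0, t1, f, substeps=20):
    """Several RK4 steps as the accurate fine propagator F."""
    h = (t1 - t0) / substeps
    t = t0
    for _ in range(substeps):
        k1 = f(t, u)
        k2 = f(t + h / 2, u + h / 2 * k1)
        k3 = f(t + h / 2, u + h / 2 * k2)
        k4 = f(t + h, u + h * k3)
        u = u + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return u

def parareal(f, u0, t_grid, iterations=5):
    n = len(t_grid) - 1
    U = np.empty(n + 1); U[0] = u0
    for i in range(n):                              # initial coarse sweep
        U[i + 1] = coarse(U[i], t_grid[i], t_grid[i + 1], f)
    for _ in range(iterations):
        F = np.array([fine(U[i], t_grid[i], t_grid[i + 1], f) for i in range(n)])
        G_old = np.array([coarse(U[i], t_grid[i], t_grid[i + 1], f) for i in range(n)])
        U_new = np.empty_like(U); U_new[0] = u0
        for i in range(n):                          # serial correction sweep
            G_new = coarse(U_new[i], t_grid[i], t_grid[i + 1], f)
            U_new[i + 1] = G_new + F[i] - G_old[i]
        U = U_new
    return U

f = lambda t, u: -u                                 # test problem u' = -u
t_grid = np.linspace(0.0, 2.0, 11)
U = parareal(f, 1.0, t_grid)
print(np.max(np.abs(U - np.exp(-t_grid))))          # error against the exact solution
```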

  13. English word frequency and recognition in bilinguals: Inter-corpus comparison and error analysis.

    Science.gov (United States)

    Shi, Lu-Feng

    2015-01-01

    This study is the second of a two-part investigation on lexical effects on bilinguals' performance on a clinical English word recognition test. Focus is on word-frequency effects using counts provided by four corpora. Frequency of occurrence was obtained for 200 NU-6 words from the Hoosier mental lexicon (HML) and three contemporary corpora, American National Corpora, Hyperspace analogue to language (HAL), and SUBTLEX(US). Correlation analysis was performed between word frequency and error rate. Ten monolinguals and 30 bilinguals participated. Bilinguals were further grouped according to their age of English acquisition and length of schooling/working in English. Word frequency significantly affected word recognition in bilinguals who acquired English late and had limited schooling/working in English. When making errors, bilinguals tended to replace the target word with a word of a higher frequency. Overall, the newer corpora outperformed the HML in predicting error rate. Frequency counts provided by contemporary corpora predict bilinguals' recognition of English monosyllabic words. Word frequency also helps explain top replacement words for misrecognized targets. Word-frequency effects are especially prominent for bilinguals foreign born and educated.
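
    The core computation is a correlation between (log) corpus frequency and per-word error rate. A minimal sketch on simulated data follows; the frequency distribution and the strength of the effect are assumptions, not the study's values.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

# hypothetical per-word data: corpus frequency counts and listener error rates
freq = rng.lognormal(mean=6.0, sigma=2.0, size=200)          # occurrences per corpus
error_rate = 0.5 - 0.05 * np.log10(freq) + rng.normal(0, 0.05, 200)
error_rate = np.clip(error_rate, 0.0, 1.0)

rho, p = spearmanr(np.log10(freq), error_rate)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")              # more frequent words -> fewer errors
```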

  14. Teamwork and error in the operating room: analysis of skills and roles.

    Science.gov (United States)

    Catchpole, K; Mishra, A; Handa, A; McCulloch, P

    2008-04-01

    To analyze the effects of surgical, anesthetic, and nursing teamwork skills on technical outcomes. The value of team skills in reducing adverse events in the operating room is presently receiving considerable attention. Current work has not yet identified in detail how the teamwork and communication skills of surgeons, anesthetists, and nurses affect the course of an operation. Twenty-six laparoscopic cholecystectomies and 22 carotid endarterectomies were studied using direct observation methods. For each operation, teams' skills were scored for the whole team, and for nursing, surgical, and anesthetic subteams on 4 dimensions (leadership and management [LM]; teamwork and cooperation; problem solving and decision making; and situation awareness). Operating time, errors in surgical technique, and other procedural problems and errors were measured as outcome parameters for each operation. The relationships between teamwork scores and these outcome parameters within each operation were examined using analysis of variance and linear regression. Surgical (F(2,42) = 3.32, P = 0.046) and anesthetic (F(2,42) = 3.26, P = 0.048) LM had significant but opposite relationships with operating time in each operation: operating time increased significantly with higher anesthetic but decreased with higher surgical LM scores. Errors in surgical technique had a strong association with surgical situation awareness (F(2,42) = 7.93, P important insights into relationships between nontechnical skills, technical performance, and operative duration. These results support the concept that interventions designed to improve teamwork and communication may have beneficial effects on technical performance and patient outcome.

  15. Adaptive Green-Kubo estimates of transport coefficients from molecular dynamics based on robust error analysis

    Science.gov (United States)

    Jones, Reese E.; Mandadapu, Kranthi K.

    2012-04-01

    We present a rigorous Green-Kubo methodology for calculating transport coefficients based on on-the-fly estimates of: (a) statistical stationarity of the relevant process, and (b) error in the resulting coefficient. The methodology uses time samples efficiently across an ensemble of parallel replicas to yield accurate estimates, which is particularly useful for estimating the thermal conductivity of semi-conductors near their Debye temperatures where the characteristic decay times of the heat flux correlation functions are large. Employing and extending the error analysis of Zwanzig and Ailawadi [Phys. Rev. 182, 280 (1969)], 10.1103/PhysRev.182.280 and Frenkel [in Proceedings of the International School of Physics "Enrico Fermi", Course LXXV (North-Holland Publishing Company, Amsterdam, 1980)] to the integral of correlation, we are able to provide tight theoretical bounds for the error in the estimate of the transport coefficient. To demonstrate the performance of the method, four test cases of increasing computational cost and complexity are presented: the viscosity of Ar and water, and the thermal conductivity of Si and GaN. In addition to producing accurate estimates of the transport coefficients for these materials, this work demonstrates precise agreement of the computed variances in the estimates of the correlation and the transport coefficient with the extended theory based on the assumption that fluctuations follow a Gaussian process. The proposed algorithm in conjunction with the extended theory enables the calculation of transport coefficients with the Green-Kubo method accurately and efficiently.
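
    The Green-Kubo estimate is the running time-integral of the flux autocorrelation function, scaled by a thermodynamic prefactor. The sketch below computes that integral for a synthetic flux with exponentially decaying correlations; the prefactor is left generic and the signal is not a real molecular dynamics heat flux, so none of the paper's error-bound machinery is included.

```python
import numpy as np

def green_kubo(flux, dt, prefactor=1.0, max_lag=400):
    """Running Green-Kubo integral of a flux autocorrelation function.

    flux      : 1-D time series of the relevant flux component
    dt        : sampling interval
    prefactor : thermodynamic prefactor (e.g. V / (kB * T**2) for conductivity)
    """
    flux = np.asarray(flux, float) - np.mean(flux)
    n = len(flux)
    acf = np.array([np.mean(flux[: n - k] * flux[k:]) for k in range(max_lag)])
    running = prefactor * np.cumsum(acf) * dt       # coefficient vs. integration cutoff
    return acf, running

# toy flux: AR(1) process with correlation time tau (not a real MD heat flux)
rng = np.random.default_rng(0)
dt, tau, n = 0.005, 0.2, 50_000
noise = rng.standard_normal(n)
flux = np.empty(n)
flux[0] = noise[0]
for i in range(1, n):
    flux[i] = np.exp(-dt / tau) * flux[i - 1] + noise[i]

acf, running = green_kubo(flux, dt)
print(f"plateau estimate of the transport coefficient: {running[-1]:.3f}")
```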

  16. Review of advances in human reliability analysis of errors of commission-Part 2: EOC quantification

    International Nuclear Information System (INIS)

    Reer, Bernhard

    2008-01-01

    In close connection with examples relevant to contemporary probabilistic safety assessment (PSA), a review of advances in human reliability analysis (HRA) of post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions, has been carried out. The review comprises both EOC identification (part 1) and quantification (part 2); part 2 is presented in this article. Emerging HRA methods in this field are: ATHEANA, MERMOS, the EOC HRA method developed by Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS), the MDTA method and CREAM. The essential advanced features are on the conceptual side, especially to envisage the modeling of multiple contexts for an EOC to be quantified (ATHEANA, MERMOS and MDTA), in order to explicitly address adverse conditions. There is promising progress in providing systematic guidance to better account for cognitive demands and tendencies (GRS, CREAM), and EOC recovery (MDTA). Problematic issues are associated with the implementation of multiple context modeling and the assessment of context-specific error probabilities. Approaches for task or error opportunity scaling (CREAM, GRS) and the concept of reference cases (ATHEANA outlook) provide promising orientations for achieving progress towards data-based quantification. Further development work is needed and should be carried out in close connection with large-scale applications of existing approaches

  17. Fractional Order Differentiation by Integration and Error Analysis in Noisy Environment

    KAUST Repository

    Liu, Dayan

    2015-03-31

    The integer order differentiation by integration method based on the Jacobi orthogonal polynomials for noisy signals was originally introduced by Mboup, Join and Fliess. We propose to extend this method from the integer order to the fractional order to estimate the fractional order derivatives of noisy signals. Firstly, two fractional order differentiators are deduced from the Jacobi orthogonal polynomial filter, using the Riemann-Liouville and the Caputo fractional order derivative definitions respectively. Exact and simple formulae for these differentiators are given by integral expressions. Hence, they can be used for both continuous-time and discrete-time models in on-line or off-line applications. Secondly, some error bounds are provided for the corresponding estimation errors. These bounds allow one to study the influence of the design parameters. The noise error contribution due to a large class of stochastic processes is studied in the discrete case. The latter shows that the differentiator based on the Caputo fractional order derivative can cope with a class of noises whose mean value and variance functions are polynomially time-varying. Thanks to the design parameters analysis, the proposed fractional order differentiators are significantly improved by admitting a time-delay. Thirdly, in order to reduce the calculation time for on-line applications, a recursive algorithm is proposed. Finally, the proposed differentiator based on the Riemann-Liouville fractional order derivative is used to estimate the state of a fractional order system and numerical simulations illustrate the accuracy and the robustness with respect to corrupting noises.

  18. On the effects of systematic errors in analysis of nuclear scattering data.

    Energy Technology Data Exchange (ETDEWEB)

    Bennett, M.T.; Steward, C.; Amos, K.; Allen, L.J.

    1995-07-05

    The effects of systematic errors on elastic scattering differential cross-section data upon the assessment of quality fits to that data have been studied. Three cases are studied, namely the differential cross-section data sets from elastic scattering of 200 MeV protons from ¹²C, of 350 MeV ¹⁶O-¹⁶O scattering and of 288.6 MeV ¹²C-¹²C scattering. First, to estimate the probability of any unknown systematic errors, select sets of data have been processed using the method of generalized cross validation; a method based upon the premise that any data set should satisfy an optimal smoothness criterion. In another case, the S function that provided a statistically significant fit to data, upon allowance for angle variation, became overdetermined. A far simpler S function form could then be found to describe the scattering process. The S functions so obtained have been used in a fixed energy inverse scattering study to specify effective, local, Schroedinger potentials for the collisions. An error analysis has been performed on the results to specify confidence levels for those interactions. 19 refs., 6 tabs., 15 figs.

  19. Human reliability analysis of errors of commission: a review of methods and applications

    International Nuclear Information System (INIS)

    Reer, B.

    2007-06-01

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) the CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because it provides a formalized way of identifying relatively important scenarios with EOC opportunities; (2) EOC identification guidance like CESA, which is strongly based on procedural guidance and on important measures of systems or components affected by inappropriate actions, should nevertheless pay some attention to EOCs associated with familiar but non-procedural actions and to EOCs leading to failures of manually initiated safety functions; (3) orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for

  20. Human reliability analysis of errors of commission: a review of methods and applications

    Energy Technology Data Exchange (ETDEWEB)

    Reer, B

    2007-06-15

    Illustrated by specific examples relevant to contemporary probabilistic safety assessment (PSA), this report presents a review of human reliability analysis (HRA) addressing post-initiator errors of commission (EOCs), i.e. inappropriate actions under abnormal operating conditions. The review addressed both methods and applications. Emerging HRA methods providing advanced features and explicit guidance suitable for PSA are: A Technique for Human Event Analysis (ATHEANA, key publications in 1998/2000), Methode d'Evaluation de la Realisation des Missions Operateur pour la Surete (MERMOS, 1998/2000), the EOC HRA method developed by the Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS, 2003), the Misdiagnosis Tree Analysis (MDTA) method (2005/2006), the Cognitive Reliability and Error Analysis Method (CREAM, 1998), and the Commission Errors Search and Assessment (CESA) method (2002/2004). As a result of a thorough investigation of various PSA/HRA applications, this paper furthermore presents an overview of EOCs (termination of safety injection, shutdown of secondary cooling, etc.) referred to in predictive studies and a qualitative review of cases of EOC quantification. The main conclusions of the review of both the methods and the EOC HRA cases are: (1) the CESA search scheme, which proceeds from possible operator actions to the affected systems to scenarios, may be preferable because it provides a formalized way of identifying relatively important scenarios with EOC opportunities; (2) EOC identification guidance like CESA, which is strongly based on procedural guidance and on important measures of systems or components affected by inappropriate actions, should nevertheless pay some attention to EOCs associated with familiar but non-procedural actions and to EOCs leading to failures of manually initiated safety functions; (3) orientations of advanced EOC quantification comprise a) modeling of multiple contexts for a given scenario, b) accounting for