WorldWideScience

Sample records for including test errors

  1. Including test errors in evaluating surveillance test intervals

    International Nuclear Information System (INIS)

    Kim, I.S.; Samanta, P.K.; Martorell, S.; Vesely, W.E.

    1991-01-01

    Technical Specifications require surveillance testing to assure that the standby systems important to safety will start and perform their intended functions in the event of a plant abnormality. However, as evidenced by operating experience, surveillance tests may adversely impact safety because of their undesirable side effects, such as initiation of plant transients during testing or wear-out of safety systems due to testing. This paper first defines the concerns, i.e., the potential adverse effects of surveillance testing, from a risk perspective. We then present a methodology to evaluate the risk impact of those adverse effects, focusing on two important kinds of adverse impacts of surveillance testing: (1) the risk impact of test-caused trips and (2) the risk impact of test-caused equipment wear. The quantitative risk methodology is demonstrated with several surveillance tests conducted at boiling water reactors, such as the tests of the main steam isolation valves, the turbine overspeed protection system, and the emergency diesel generators. We present the results of the risk-effectiveness evaluation of surveillance test intervals, which compares the adverse risk impact with the beneficial risk impact of testing from potential failure detection, along with insights from sensitivity studies.
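
The risk-effectiveness evaluation described above balances two opposing contributions: longer test intervals increase the risk from undetected standby failures, while shorter intervals increase test-caused risk (trips, wear). A toy sketch of how an optimum interval emerges; all rates and units here are invented for illustration, not taken from the paper:

```python
# Toy trade-off between undetected-failure risk and test-caused risk.
# lam and risk_per_test are invented numbers in arbitrary risk units.
def risk_per_hour(T, lam=1e-6, risk_per_test=2.4e-3):
    """Total risk rate as a function of surveillance test interval T (hours)."""
    undetected = lam * T / 2          # mean unavailability from latent failures
    test_caused = risk_per_test / T   # trips/wear incurred once per test
    return undetected + test_caused

intervals = [24, 168, 720, 2190]      # daily, weekly, monthly, quarterly
best = min(intervals, key=risk_per_hour)
print(best)  # -> 168: testing too often can be as harmful as testing too rarely
```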

  2. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  3. Theory of Test Translation Error

    Science.gov (United States)

    Solano-Flores, Guillermo; Backhoff, Eduardo; Contreras-Nino, Luis Angel

    2009-01-01

    In this article, we present a theory of test translation whose intent is to provide the conceptual foundation for effective, systematic work in the process of test translation and test translation review. According to the theory, translation error is multidimensional; it is not simply the consequence of defective translation but an inevitable fact…

  4. Error response test system and method using test mask variable

    Science.gov (United States)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.

  5. Less Truth Than Error: Massachusetts Teacher Tests

    Directory of Open Access Journals (Sweden)

    Walt Haney

    1999-02-01

    Scores on the Massachusetts Teacher Tests of reading and writing are highly unreliable. The tests' margin of error is roughly double to triple the range found on well-developed tests. A person retaking the MTT several times could see huge fluctuations in their scores even if their skill level did not change significantly. In fact, the 9 to 17 point margin of error calculated for the tests represents more than 10 percent of the grading scale (assumed to be 0 to 100). The large margin of error means there is both a high false-pass rate and a high false-failure rate. For example, a person who received a score of 72 on the writing test could have scored an 89 or a 55 simply because of the unreliability of the test. Since adults' reading and writing skills do not change a great deal over several months, this range of scores on the same test should not be possible. While this test is being touted as an accurate assessment of a person's fitness to be a teacher, one would expect the scores to accurately reflect a test-taker's verbal ability level. In addition to the large margin of error, the MTT contain questionable content that makes them poor tools for measuring test-takers' reading and writing skills. The content and the lack of correlation between the reading and writing scores reduce the meaningfulness, or validity, of the tests. The validity is affected not just by the content, but by a host of factors, such as the conditions under which the tests were administered and how they were scored. Interviews with a small sample of test-takers confirmed published reports concerning problems with the content and administration.
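
The score band implied by the reported margin of error can be computed directly, using the article's 72-point example and a 17-point margin (the helper function is ours):

```python
# Band of plausible retest scores for an observed score and margin of
# error, clipped to the article's assumed 0-100 grading scale.
def retest_band(score, margin):
    return (max(score - margin, 0), min(score + margin, 100))

print(retest_band(72, 17))  # -> (55, 89), the writing-test example above
```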

  6. Error probabilities in default Bayesian hypothesis testing

    NARCIS (Netherlands)

    Gu, Xin; Hoijtink, Herbert; Mulder, J.

    2016-01-01

    This paper investigates the classical type I and type II error probabilities of default Bayes factors for a Bayesian t test. Default Bayes factors quantify the relative evidence between the null hypothesis and the unrestricted alternative hypothesis without needing to specify prior distributions for

  7. A Posteriori Error Estimates Including Algebraic Error and Stopping Criteria for Iterative Solvers

    Czech Academy of Sciences Publication Activity Database

    Jiránek, P.; Strakoš, Zdeněk; Vohralík, M.

    2010-01-01

    Roč. 32, č. 3 (2010), s. 1567-1590 ISSN 1064-8275 R&D Projects: GA AV ČR IAA100300802 Grant - others:GA ČR(CZ) GP201/09/P464 Institutional research plan: CEZ:AV0Z10300504 Keywords : second-order elliptic partial differential equation * finite volume method * a posteriori error estimates * iterative methods for linear algebraic systems * conjugate gradient method * stopping criteria Subject RIV: BA - General Mathematics Impact factor: 3.016, year: 2010

  8. System tuning and measurement error detection testing

    International Nuclear Information System (INIS)

    Krejci, Petr; Machek, Jindrich

    2008-09-01

    The project includes the use of the PEANO (Process Evaluation and Analysis by Neural Operators) system to verify the monitoring of the status of dependent measurements with a view to early measurement fault detection and estimation of selected signal levels. At the present stage, the system's capabilities of detecting measurement errors was assessed and the quality of the estimates was evaluated for various system configurations and the formation of empiric models, and rules were sought for system training at chosen process data recording parameters and operating modes. The aim was to find a suitable system configuration and to document the quality of the tuned system on artificial failures

  9. Injecting Errors for Testing Built-In Test Software

    Science.gov (United States)

    Gender, Thomas K.; Chow, James

    2010-01-01

    Two algorithms have been conceived to enable automated, thorough testing of Built-in test (BIT) software. The first algorithm applies to BIT routines that define pass/fail criteria based on values of data read from such hardware devices as memories, input ports, or registers. This algorithm simulates effects of errors in a device under test by (1) intercepting data from the device and (2) performing AND operations between the data and the data mask specific to the device. This operation yields values not expected by the BIT routine. This algorithm entails very small, permanent instrumentation of the software under test (SUT) for performing the AND operations. The second algorithm applies to BIT programs that provide services to users application programs via commands or callable interfaces and requires a capability for test-driver software to read and write the memory used in execution of the SUT. This algorithm identifies all SUT code execution addresses where errors are to be injected, then temporarily replaces the code at those addresses with small test code sequences to inject latent severe errors, then determines whether, as desired, the SUT detects the errors and recovers

  10. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Keall, Paul J.; Grau, Cai

    2014-01-01

    …validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder… of the algorithm for reconstruction of dose and motion-induced dose errors throughout the tracking and non-tracking beam deliveries was quantified. Doses were reconstructed with a mean dose difference relative to the measurements of -0.5% (5.5% standard deviation) for cumulative dose. More importantly, the root-mean-square deviation between reconstructed and measured motion-induced 3%/3 mm γ failure rates (dose error) was 2.6%. The mean computation time for each calculation of dose and dose error was 295 ms. The motion-including dose reconstruction allows accurate temporal and spatial pinpointing of errors in absorbed dose…

  11. Irregular analytical errors in diagnostic testing - a novel concept.

    Science.gov (United States)

    Vogeser, Michael; Seger, Christoph

    2018-02-23

    -isotope-dilution mass spectrometry methods are increasingly used for pre-market validation of routine diagnostic assays (these tests also involve substantial sets of clinical validation samples). Based on this definition/terminology, we list recognized causes of irregular analytical error as a risk catalog for clinical chemistry in this article. These issues include reproducible individual analytical errors (e.g. caused by anti-reagent antibodies) and non-reproducible, sporadic errors (e.g. errors due to incorrect pipetting volume due to air bubbles in a sample), which can both lead to inaccurate results and risks for patients.

  12. Filipino, Indonesian and Thai Listening Test Errors

    Science.gov (United States)

    Castro, C. S.; And Others

    1975-01-01

    This article reports on a study to identify listening and aural comprehension difficulties experienced by students of English, specifically RELC (Regional English Language Centre in Singapore) course members. The most critical errors are discussed and conclusions about foreign language learning are drawn. (CLK)

  13. Errors in the Total Testing Process in the Clinical Chemistry ...

    African Journals Online (AJOL)

    2018-03-01

    Mar 1, 2018 ... Analytical errors related to internal and external quality control exceeding the target range (14.4%) ... indicators to assess errors in the total testing process. The University ... Evidence showed that the risk of ... Data management and quality control: Pre-test ... indicators and specifications for key processes.

  14. Potential Errors and Test Assessment in Software Product Line Engineering

    Directory of Open Access Journals (Sweden)

    Hartmut Lackner

    2015-04-01

    Software product lines (SPL) are a method for the development of variant-rich software systems. Compared to non-variable systems, testing SPLs is extensive due to the increasingly large number of possible products. Different approaches exist for testing SPLs, but there is little research on assessing the quality of these tests by means of their error detection capability. Such test assessment is based on injecting errors into a correct version of the system under test. However, to our knowledge, potential errors in SPL engineering have never been systematically identified before. This article presents an overview of existing paradigms for specifying software product lines and the errors that can occur during the respective specification processes. To assess test quality, we apply mutation testing techniques to SPL engineering and implement the identified errors as mutation operators. This allows us to run existing tests against defective products for the purpose of test assessment. From the results, we draw conclusions about the error-proneness of the surveyed SPL design paradigms and how the quality of SPL tests can be improved.

  15. Field testing for cosmic ray soft errors in semiconductor memories

    International Nuclear Information System (INIS)

    O'Gorman, T.J.; Ross, J.M.; Taber, A.H.; Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; Walsh, J.L.

    1996-01-01

    This paper presents a review of experiments performed by IBM to investigate the causes of soft errors in semiconductor memory chips under field test conditions. The effects of alpha-particles and cosmic rays are separated by comparing multiple measurements of the soft-error rate (SER) of samples of memory chips deep underground and at various altitudes above the earth. The results of case studies on four different memory chips show that cosmic rays are an important source of the ionizing radiation that causes soft errors. The results of field testing are used to confirm the accuracy of the modeling and of the accelerated testing of chips.

  16. Measurement error in pressure-decay leak testing

    International Nuclear Information System (INIS)

    Robinson, J.N.

    1979-04-01

    The effect of measurement error in pressure-decay leak testing is considered, and examples are presented to demonstrate how it can be properly accommodated in analyzing data from such tests. Suggestions for more effective specification and conduct of leak tests are presented.

  17. Geometrical error calibration in reflective surface testing based on reverse Hartmann test

    Science.gov (United States)

    Gong, Zhidong; Wang, Daodang; Xu, Ping; Wang, Chao; Liang, Rongguang; Kong, Ming; Zhao, Jun; Mo, Linhai; Mo, Shuhui

    2017-08-01

    In the fringe-illumination deflectometry based on a reverse-Hartmann-test configuration, ray tracing of the modeled testing system is performed to reconstruct the test surface error. Careful calibration of the system geometry is required to achieve high testing accuracy. To realize high-precision surface testing with the reverse Hartmann test, a computer-aided geometrical error calibration method is proposed. The aberrations corresponding to various geometrical errors are studied. With the aberration weights for the various geometrical errors, a computer-aided optimization of the system geometry with iterative ray tracing is carried out to calibrate the geometrical errors, and accuracy on the order of subnanometers is achieved.

  18. Testing and Inference in Nonlinear Cointegrating Vector Error Correction Models

    DEFF Research Database (Denmark)

    Kristensen, Dennis; Rahbæk, Anders

    In this paper, we consider a general class of vector error correction models which allow for asymmetric and non-linear error correction. We provide asymptotic results for (quasi-)maximum likelihood (QML) based estimators and tests. General hypothesis testing is considered, where testing… of non-stationary non-linear time series models. Thus the paper provides a full asymptotic theory for estimators as well as standard and non-standard test statistics. The derived asymptotic results prove to be new compared to results found elsewhere in the literature due to the impact of the estimated… symmetric non-linear error correction are considered. A simulation study shows that the finite sample properties of the bootstrapped tests are satisfactory with good size and power properties for reasonable sample sizes.

  20. Error Analysis of Inertial Navigation Systems Using Test Algorithms

    OpenAIRE

    Vaispacher, Tomáš; Bréda, Róbert; Adamčík, František

    2015-01-01

    This contribution addresses inertial sensor errors, the specification of inertial measurement units, and the generation of test signals for an Inertial Navigation System (INS). Given the different levels of navigation tasks, part of this contribution is a comparison of current types of Inertial Measurement Units. Based on this comparison, we propose a way of treating inertial sensor errors and modelling them for low-cost inertial navigation applications. The last part is ...

  1. Improvement in Detection of Wrong-Patient Errors When Radiologists Include Patient Photographs in Their Interpretation of Portable Chest Radiographs.

    Science.gov (United States)

    Tridandapani, Srini; Olsen, Kevin; Bhatti, Pamela

    2015-12-01

    This study was conducted to determine whether facial photographs obtained simultaneously with radiographs improve radiologists' detection rate of wrong-patient errors when they are explicitly asked to include the photographs in their evaluation. Radiograph-photograph combinations were obtained from 28 patients at the time of portable chest radiography. From these, pairs of radiographs were generated. Each unique pair consisted of one new and one old (comparison) radiograph. Twelve pairs of mismatched radiographs (i.e., pairs containing radiographs of different patients) were also generated. In phase 1 of the study, 5 blinded radiologist observers were asked to interpret 20 pairs of radiographs without the photographs. In phase 2, each radiologist interpreted another 20 pairs of radiographs with the photographs. Radiologist observers were not instructed about the purpose of the photographs but were asked to include the photographs in their review. The detection rate of mismatched errors was recorded along with the interpretation time for each session for each observer. The two-tailed Fisher exact test was used to evaluate differences in mismatch detection rates between the two phases. The error detection rates without (0/20 = 0%) and with (17/18 = 94.4%) photographs were significantly different (p = 0.0001). The average interpretation times for the set of 20 radiographs were 26.45 (SD 8.69) and 20.55 (SD 3.40) min for phase 1 and phase 2, respectively (two-tailed Student t test, p = 0.1911). When radiologists include simultaneously obtained photographs in their review of portable chest radiographs, there is a significant improvement in the detection of labeling errors. No statistically significant difference in interpretation time was observed. This may lead to improved patient safety without affecting radiologists' throughput.
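
The reported comparison can be rebuilt in miniature from the counts in the abstract (0/20 mismatches caught without photographs vs. 17/18 with). A Fisher exact test needs only hypergeometric tail probabilities, which the standard library's math.comb provides; the abstract reports a two-tailed test, so the one-sided helper below is a simplified illustration of our own:

```python
# One-sided Fisher exact test from scratch: probability, under fixed
# table margins, of a first cell as small as or smaller than observed.
from math import comb

def fisher_exact_one_sided(a, b, c, d):
    """P(first cell <= a) for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    denom = comb(n, col1)
    p = 0.0
    for x in range(a + 1):                  # tables at least as extreme
        if 0 <= col1 - x <= c + d:
            p += comb(row1, x) * comb(c + d, col1 - x) / denom
    return p

# detected/missed without photographs: 0/20; with photographs: 17/1
p = fisher_exact_one_sided(0, 20, 17, 1)
print(p < 0.001)  # -> True: strongly significant, consistent with p = 0.0001
```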

  2. Testing and inference in nonlinear cointegrating vector error correction models

    DEFF Research Database (Denmark)

    Kristensen, D.; Rahbek, A.

    2013-01-01

    We analyze estimators and tests for a general class of vector error correction models that allows for asymmetric and nonlinear error correction. For a given number of cointegration relationships, general hypothesis testing is considered, where testing for linearity is of particular interest. Under the null of linearity, parameters of nonlinear components vanish, leading to a nonstandard testing problem. We apply so-called sup-tests to resolve this issue, which requires development of new (uniform) functional central limit theory and results for convergence of stochastic integrals. We provide a full asymptotic theory for estimators and test statistics. The derived asymptotic results prove to be nonstandard compared to results found elsewhere in the literature due to the impact of the estimated cointegration relations. This complicates implementation of tests, motivating the introduction of bootstrap…

  3. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan

    2010-09-14

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal series-based, omnibus goodness-of-fit test in this context, where no likelihood function is available or calculated; i.e., all the tests are proposed in the semiparametric model framework. We demonstrate that our tests have optimality properties and computational advantages that are similar to those of the classical score tests in the parametric model framework. The test procedures are applicable to several semiparametric extensions of measurement error models, including when the measurement error distribution is estimated non-parametrically as well as for generalized partially linear models. The performance of the local score-type and omnibus goodness-of-fit tests is demonstrated through simulation studies and analysis of a nutrition data set.

  4. Errors in the Total Testing Process in the Clinical Chemistry ...

    African Journals Online (AJOL)

    2018-03-01

    Mar 1, 2018 ... testing processes impair the clinical decision-making process. Such errors are ... and external quality control exceeding the target range (14.4%) and (51.4%) ... version 3.5.3 and transferred to Statistical Package for the ...

  5. Testing the prediction error difference between two predictors

    NARCIS (Netherlands)

    van de Wiel, M.A.; Berkhof, J.; van Wieringen, W.N.

    2009-01-01

    We develop an inference framework for the difference in errors between 2 prediction procedures. The 2 procedures may differ in any aspect and possibly utilize different sets of covariates. We apply training and testing on the same data set, which is accommodated by sample splitting. For each split,
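
A minimal sketch of the idea of testing an error difference between two predictors on a held-out split: fit both on the training half, then apply a simple z-test to the paired squared-error differences on the test half. The toy data, the two predictors, and the z-test are ours; the paper's framework is more general (repeated sample splitting, arbitrary pairs of procedures):

```python
# Toy paired comparison of prediction errors on a held-out split.
import math, random, statistics

random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 1)) for x in range(100)]
random.shuffle(data)
train, test = data[:50], data[50:]

# predictor A: least-squares slope through the origin; B: training mean
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)
mean_y = statistics.mean(y for _, y in train)

# paired difference of squared test errors (negative favors A)
diffs = [(y - slope * x) ** 2 - (y - mean_y) ** 2 for x, y in test]
z = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(len(diffs)))
print(z < -2)  # A predicts significantly better on this toy data
```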

  6. Direct cointegration testing in error-correction models

    NARCIS (Netherlands)

    F.R. Kleibergen (Frank); H.K. van Dijk (Herman)

    1994-01-01

    An error correction model is specified having only exactly identified parameters, some of which reflect a possible departure from a cointegration model. Wald, likelihood ratio, and Lagrange multiplier statistics are derived to test for the significance of these parameters. The

  7. Uncertainty of rotating shadowband irradiometers and Si-pyranometers including the spectral irradiance error

    Science.gov (United States)

    Wilbert, Stefan; Kleindiek, Stefan; Nouri, Bijan; Geuder, Norbert; Habte, Aron; Schwandt, Marko; Vignola, Frank

    2016-05-01

    Concentrating solar power projects require accurate direct normal irradiance (DNI) data, including uncertainty specifications, for plant layout and cost calculations. Ground measured data are necessary to obtain the required level of accuracy and are often obtained with Rotating Shadowband Irradiometers (RSI) that use photodiode pyranometers and correction functions to account for systematic effects. The uncertainty of Si-pyranometers has been investigated, but so far mostly empirical studies have been published, or decisive uncertainty influences had to be estimated based on experience in analytical studies. One of the most crucial estimated influences is the spectral irradiance error, because Si-photodiode-pyranometers only detect visible and near-infrared radiation and have a spectral response that varies strongly within this wavelength interval. Furthermore, analytic studies did not discuss the role of correction functions and the uncertainty introduced by imperfect shading. In order to further improve the bankability of RSI and Si-pyranometer data, a detailed uncertainty analysis following the Guide to the Expression of Uncertainty in Measurement (GUM) has been carried out. The study defines a method for the derivation of the spectral error and spectral uncertainties and presents quantitative values of the spectral and overall uncertainties. Data from the PSA station in southern Spain were selected for the analysis. Average standard uncertainties of 2% for global horizontal irradiance (GHI) and 2.9% for DNI (for GHI and DNI over 300 W/m²) were found for corrected 10 min data from the 2012 yearly dataset when separate GHI and DHI calibration constants were used. The uncertainty at 1 min resolution was also analyzed. The effect of correction functions is significant. The uncertainties found in this study are consistent with results of previous empirical studies.
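
The GUM procedure followed in the study combines independent standard uncertainty contributions in quadrature. A sketch with invented contribution names and values (the paper derives its actual budget from the RSI measurement chain):

```python
# GUM-style combined standard uncertainty: independent contributions
# add in quadrature. All names and values below are illustrative.
import math

contributions = {"calibration": 1.2, "spectral": 1.0,
                 "temperature": 0.5, "alignment": 0.4}   # in percent
u_combined = math.sqrt(sum(u * u for u in contributions.values()))
print(round(u_combined, 2))  # -> 1.69 (percent)
```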

  8. The Human Bathtub: Safety and Risk Predictions Including the Dynamic Probability of Operator Errors

    International Nuclear Information System (INIS)

    Duffey, Romney B.; Saull, John W.

    2006-01-01

    Reactor safety and risk are dominated by the potential for, and major contribution of, human error in the design, operation, control, management, regulation and maintenance of the plant, and hence to all accidents. Given the possibility of accidents and errors, we need to determine the outcome (error) probability, or the chance of failure. Conventionally, reliability engineering is associated with the failure rate of components, systems, or mechanisms, not of human beings in and interacting with a technological system. The probability of failure requires prior knowledge of the total number of outcomes, which for any predictive purpose we do not know or have. Analysis of failure rates due to human error and the rate of learning allows a new determination of the dynamic human error rate in technological systems, consistent with and derived from the available world data. The basis for the analysis is the 'learning hypothesis': humans learn from experience, and consequently the accumulated experience defines the failure rate. A new 'best' equation has been derived for the human error, outcome or failure rate, which allows for calculation and prediction of the probability of human error. We also provide comparisons to the empirical Weibull parameter fitting used by conventional reliability engineering and probabilistic safety analysis methods. These new analyses show that arbitrary Weibull fitting parameters and typical empirical hazard function techniques cannot be used to predict the dynamics of human errors and outcomes in the presence of learning. Comparisons of these new insights show agreement with human error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator actions and transient control behavior observed in transients in both plants and simulators. The results demonstrate that the human error probability (HEP) is dynamic, and that it may be predicted using the learning hypothesis and the minimum
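
The learning hypothesis implies an error (outcome) rate that declines with accumulated experience toward a minimum attainable rate. A sketch of that shape with illustrative coefficients, not the authors' fitted values:

```python
# Illustrative exponential-decline form of the learning hypothesis:
# the error rate falls from an initial rate toward a floor as
# normalized accumulated experience grows. Coefficients are invented.
import math

def error_rate(experience, lam_min=5e-6, lam0=1e-3, k=3.0):
    """Error rate vs. normalized accumulated experience."""
    return lam_min + (lam0 - lam_min) * math.exp(-k * experience)

for eps in (0.0, 0.5, 1.0, 2.0):
    print(f"experience {eps:>3}: rate {error_rate(eps):.2e}")
```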

  9. BEAM DYNAMICS SIMULATIONS OF SARAF ACCELERATOR INCLUDING ERROR PROPAGATION AND IMPLICATIONS FOR THE EURISOL DRIVER

    CERN Document Server

    J. Rodnizki, D. Berkovits, K. Lavie, I. Mardor, A. Shor and Y. Yanay (Soreq NRC, Yavne), K. Dunkel, C. Piel (ACCEL, Bergisch Gladbach), A. Facco (INFN/LNL, Legnaro, Padova), V. Zviagintsev (TRIUMF, Vancouver)

    Beam dynamics simulations of the SARAF (Soreq Applied Research Accelerator Facility) superconducting RF linear accelerator have been performed in order to establish the accelerator design. The multi-particle simulation includes 3D realistic electromagnetic field distributions, space charge forces, and fabrication, misalignment and operation errors. A 4 mA proton or deuteron beam is accelerated up to 40 MeV with moderate rms emittance growth and a high real-estate gradient of 2 MeV/m. An envelope of 40,000 macro-particles is kept under a radius of 1.1 cm, well below the beam pipe bore radius. The accelerator design of SARAF is proposed as an injector for the EURISOL driver accelerator. The Accel 176 MHz β0=0.09 and β0=0.15 HWR lattice was extended to 90 MeV based on the LNL 352 MHz β0=0.31 HWR. The matching between both lattices ensures a smooth transition and the possibility to extend the accelerator to the required EURISOL ion energy.

  10. Accelerated testing for cosmic soft-error rate

    International Nuclear Information System (INIS)

    Ziegler, J.F.; Muhlfeld, H.P.; Montrose, C.J.; Curtis, H.W.; O'Gorman, T.J.; Ross, J.M.

    1996-01-01

    This paper describes the experimental techniques which have been developed at IBM to determine the sensitivity of electronic circuits to cosmic rays at sea level. It relates IBM circuit design and modeling, chip manufacture with process variations, and chip testing for SER sensitivity. This vertical integration from design to final test, with feedback to design, allows a complete picture of LSI sensitivity to cosmic rays. Since advanced computers are designed with LSI chips long before the chips have been fabricated, and the system architecture is fully formed before the first chips are functional, it is essential to establish chip reliability as early as possible. This paper establishes techniques to test chips that are only partly functional (e.g., only 1 Mb of a 16 Mb memory may be working) and can establish chip soft-error upset rates before final chip manufacturing begins. Simple relationships derived from measurements of more than 80 different chips manufactured over 20 years allow the total cosmic soft-error rate (SER) to be estimated after only limited testing. Comparisons between these accelerated test results and similar results determined by "field testing" (which may require a year or more of testing after manufacturing begins) show that the experimental techniques are accurate to within a factor of 2.

  11. Development of an Experimental Measurement System for Human Error Characteristics and a Pilot Test

    International Nuclear Information System (INIS)

    Jang, Tong-Il; Lee, Hyun-Chul; Moon, Kwangsu

    2017-01-01

    Selected items from among the individual and team characteristics were measured and evaluated in a pilot test using the experimental measurement system for human error characteristics. This is one of the processes that produces input data for the Eco-DBMS. The pilot test was also used to develop methods for measuring and acquiring physiological data, and to develop a data format and quantification methods for the database. In this study, a pilot test measuring the stress and tension level and team cognitive characteristics, out of the full set of human error characteristics, was performed using the human error characteristics measurement and experimental evaluation system. In an experiment measuring the stress level, physiological characteristics were measured using EEG in a simulated unexpected situation. As the results show, although this was a pilot experiment, it validated that relevant results can be obtained for evaluating the human error coping effects of workers' FFD management guidelines and of responses to unexpected situations against guidelines. In follow-up research, additional experiments covering other human error characteristics will be conducted. Furthermore, the human error characteristics measurement and experimental evaluation system will be used to validate various human error coping solutions, such as human factors criteria, designs, and guidelines, as well as to supplement the human error characteristics database.

  12. Testing accelerometer rectification error caused by multidimensional composite inputs with double turntable centrifuge.

    Science.gov (United States)

    Guan, W; Meng, X F; Dong, X M

    2014-12-01

    Rectification error is a critical characteristic of inertial accelerometers. Accelerometers working in operational situations are stimulated by composite inputs, including constant acceleration and vibration, from multiple directions. However, traditional methods for evaluating rectification error use only one-dimensional vibration. In this paper, a double turntable centrifuge (DTC) was utilized to produce constant acceleration and vibration simultaneously, and we tested the rectification error due to the composite accelerations. First, we deduced the expression of the rectification error from the output of the DTC and a static model of the single-axis pendulous accelerometer under test. Theoretical investigation and analysis were carried out in accordance with the rectification error model. A detailed experimental procedure and the testing results are then described. We measured the rectification error with various constant accelerations at different frequencies and amplitudes of the vibration. The experimental results showed the distinctive characteristics of the rectification error caused by the composite accelerations. A linear relation between the constant acceleration and the rectification error was established. The experimental procedure and results presented here can be referenced when investigating the characteristics of accelerometers under multidimensional composite inputs.
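
    As a hedged illustration of why composite inputs matter, the sketch below assumes a simple static accelerometer model with a small second-order coefficient K2 (not the paper's exact model; all values are invented). Time-averaging the output of a constant-plus-vibration input reproduces the classic K2*A^2/2 rectification term.

```python
import numpy as np

# Hedged illustration (not the paper's exact model): a static accelerometer
# model with a small second-order coefficient K2. A composite input
# a(t) = a0 + A*sin(w*t) rectifies through the K2*a^2 term, giving a
# time-averaged error of about K2*(A**2)/2. All coefficients are invented.

K1, K2 = 1.0, 1e-4            # scale factor and 2nd-order coefficient (assumed)
a0, A = 10.0, 2.0             # constant acceleration and vibration amplitude
t = np.linspace(0.0, 1.0, 100001)
a = a0 + A * np.sin(2 * np.pi * 50 * t)    # 50 Hz vibration riding on a0

output = K1 * a + K2 * a**2                # static model with nonlinearity
indicated = output.mean()                  # time-averaged indicated output
rectification_error = indicated - (K1 * a0 + K2 * a0**2)
print(rectification_error)                 # close to K2 * A**2 / 2 = 2e-4
```

    Under pure one-dimensional vibration (a0 = 0) the same model gives only the K2*A^2/2 term, which is why composite testing on a centrifuge reveals behavior a shaker alone cannot.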

  13. Prospective detection of large prediction errors: a hypothesis testing approach

    International Nuclear Information System (INIS)

    Ruan, Dan

    2010-01-01

    Real-time motion management is important in radiotherapy. In addition to effective monitoring schemes, prediction is required to compensate for system latency, so that treatment can be synchronized with tumor motion. However, it is difficult to predict tumor motion at all times, and it is critical to determine when large prediction errors may occur. Such information can be used to pause the treatment beam or adjust monitoring/prediction schemes. In this study, we propose a hypothesis testing approach for detecting instants corresponding to potentially large prediction errors in real time. We treat the future tumor location as a random variable, and obtain its empirical probability distribution with the kernel density estimation-based method. Under the null hypothesis, the model probability is assumed to be a concentrated Gaussian centered at the prediction output. Under the alternative hypothesis, the model distribution is assumed to be non-informative uniform, which reflects the situation that the future position cannot be inferred reliably. We derive the likelihood ratio test (LRT) for this hypothesis testing problem and show that with the method of moments for estimating the null hypothesis Gaussian parameters, the LRT reduces to a simple test on the empirical variance of the predictive random variable. This conforms to the intuition to expect a (potentially) large prediction error when the estimate is associated with high uncertainty, and to expect an accurate prediction when the uncertainty level is low. We tested the proposed method on patient-derived respiratory traces. The 'ground-truth' prediction error was evaluated by comparing the prediction values with retrospective observations, and the large prediction regions were subsequently delineated by thresholding the prediction errors. The receiver operating characteristic curve was used to describe the performance of the proposed hypothesis testing method. Clinical implication was represented by miss
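
    The paper's reduced test can be sketched in a few lines: under the stated hypotheses, the likelihood ratio test amounts to comparing the empirical variance of the predictive distribution against a threshold. The samples and threshold below are invented for illustration.

```python
import numpy as np

# Minimal sketch of the paper's reduced test: with a Gaussian null centered
# at the prediction and a non-informative uniform alternative, the
# likelihood ratio test reduces to thresholding the empirical variance of
# the predictive distribution. Samples and threshold are invented.

def flag_large_error(predictive_samples, variance_threshold):
    """True when predictive uncertainty is too high to trust the prediction."""
    return np.var(predictive_samples) > variance_threshold

rng = np.random.default_rng(0)
confident = rng.normal(0.0, 0.1, 500)    # tight predictive spread -> trust
uncertain = rng.normal(0.0, 2.0, 500)    # wide predictive spread  -> flag

print(flag_large_error(confident, 0.5))  # False: prediction deemed reliable
print(flag_large_error(uncertain, 0.5))  # True: pause beam / adapt predictor
```

    In practice the samples would come from the kernel-density predictive distribution and the threshold from calibrating the LRT, but the decision rule has this simple form.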

  14. Performance of muon reconstruction including Alignment Position Errors for 2016 Collision Data

    CERN Document Server

    CMS Collaboration

    2016-01-01

    Since the 2016 run, muon reconstruction has used non-zero Alignment Position Errors to account for the residual uncertainties in the muon chambers' positions. Significant improvements are obtained, in particular for the startup phase after opening and closing the muon detector. Performance results are presented for real data and MC simulations, for both the offline reconstruction and the High-Level Trigger.

  15. Image sensor for testing refractive error of eyes

    Science.gov (United States)

    Li, Xiangning; Chen, Jiabi; Xu, Longyun

    2000-05-01

    It is difficult to detect ametropia and anisometropia in children. An image sensor for testing the refractive error of the eye does not require the child's cooperation and can be used in general surveys of ametropia and anisometropia in children. In our study, photographs are recorded by a CCD element in digital form and can be processed directly by a computer. To process the image accurately with digital techniques, a formula accounting for the effects of an extended light source and the size of the lens aperture has been derived, which is more reliable in practice. Computer simulation of the image sensing was performed to verify the accuracy of the results.

  16. DINEOF reconstruction of clouded images including error maps – application to the Sea-Surface Temperature around Corsican Island

    Directory of Open Access Journals (Sweden)

    J.-M. Beckers

    2006-01-01

    We present an extension to the Data INterpolating Empirical Orthogonal Functions (DINEOF) technique which allows not only filling in clouded images but also providing an estimate of the error covariance of the reconstruction. This additional information is obtained by an analogy with optimal interpolation. It is shown that the error fields can be obtained with a clever rearrangement of calculations at a cost comparable to that of the interpolation itself. The method is demonstrated on the reconstruction of sea-surface temperature in the Ligurian Sea and around the Corsican Island (Mediterranean Sea), including the calculation of the inter-annual variability of average surface values and their expected errors. The application shows that the error fields not only reflect the data-coverage structure but also the covariances of the physical fields.
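
    A toy version of the underlying DINEOF idea (without the error maps, which are the paper's contribution) can be sketched as an iterative truncated-SVD infill; the sizes, rank and iteration count below are arbitrary demo choices on synthetic data.

```python
import numpy as np

# Toy DINEOF-style reconstruction (without the error maps that are the
# paper's contribution): missing (clouded) entries are filled iteratively
# with a truncated-SVD (EOF) approximation. Sizes, rank and iteration
# count are arbitrary choices for this synthetic demo.

def dineof_fill(data, mask, rank=1, iterations=200):
    """data: 2-D array (space x time); mask: True where values are missing."""
    filled = np.where(mask, 0.0, data)            # initial guess for the gaps
    for _ in range(iterations):
        u, s, vt = np.linalg.svd(filled, full_matrices=False)
        approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
        filled = np.where(mask, approx, data)     # observed values stay fixed
    return filled

rng = np.random.default_rng(1)
truth = np.outer(rng.normal(size=30), rng.normal(size=40))  # rank-1 "SST"
mask = rng.random(truth.shape) < 0.2                        # 20% cloud cover
result = dineof_fill(truth, mask, rank=1)
print(np.max(np.abs(result - truth)))           # small reconstruction error
```

    Real DINEOF additionally cross-validates the EOF truncation and, in this paper's extension, reinterprets the calculation to yield error covariances.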

  17. Ar-Ar_Redux: rigorous error propagation of 40Ar/39Ar data, including covariances

    Science.gov (United States)

    Vermeesch, P.

    2015-12-01

    Rigorous data reduction and error propagation algorithms are needed to realise Earthtime's objective to improve the interlaboratory accuracy of 40Ar/39Ar dating to better than 1% and thereby facilitate the comparison and combination of the K-Ar and U-Pb chronometers. Ar-Ar_Redux is a new data reduction protocol and software program for 40Ar/39Ar geochronology which takes into account two previously underappreciated aspects of the method. 1. 40Ar/39Ar measurements are compositional data. In its simplest form, the 40Ar/39Ar age equation can be written as: t = log(1 + J [40Ar/39Ar - 298.56 x 36Ar/39Ar])/λ = log(1 + JR)/λ, where λ is the 40K decay constant and J is the irradiation parameter. The age t does not depend on the absolute abundances of the three argon isotopes but only on their relative ratios. Thus, the 36Ar, 39Ar and 40Ar abundances can be normalised to unity and plotted on a ternary diagram or 'simplex'. Argon isotopic data are therefore subject to the peculiar mathematics of 'compositional data', sensu Aitchison (1986, The Statistical Analysis of Compositional Data, Chapman & Hall). 2. Correlated errors are pervasive throughout the 40Ar/39Ar method. Current data reduction protocols for 40Ar/39Ar geochronology propagate the age uncertainty as follows: σ²(t) = [J² σ²(R) + R² σ²(J)] / [λ² (1 + RJ)²], which implies zero covariance between R and J. In reality, however, significant error correlations are found in every step of the 40Ar/39Ar data acquisition and processing, in both single and multi collector instruments, during blank, interference and decay corrections, age calculation etc. Ar-Ar_Redux revisits every aspect of the 40Ar/39Ar method by casting the raw mass spectrometer data into a contingency table of logratios, which automatically keeps track of all covariances in a compositional context. Application of the method to real data reveals strong correlations (r² of up to 0.9) between age measurements within a single irradiation batch. Properly taking
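
    The covariance term that the quoted error formula drops can be restored with standard first-order propagation. The sketch below is not Ar-Ar_Redux's compositional (logratio) machinery, only a minimal illustration with invented numbers showing how a positive R-J covariance changes the age uncertainty.

```python
import math

# First-order error propagation for t = log(1 + J*R)/lam, including the
# R-J covariance term that the standard formula omits. This is only a
# minimal illustration with invented numbers, not Ar-Ar_Redux's
# compositional (logratio) machinery.

def age_and_sigma(R, J, lam, var_R, var_J, cov_RJ=0.0):
    t = math.log(1.0 + J * R) / lam
    dt_dR = J / (lam * (1.0 + J * R))
    dt_dJ = R / (lam * (1.0 + J * R))
    var_t = (dt_dR**2 * var_R + dt_dJ**2 * var_J
             + 2.0 * dt_dR * dt_dJ * cov_RJ)
    return t, math.sqrt(var_t)

lam = 5.543e-10                     # 40K decay constant (1/yr)
t0, s0 = age_and_sigma(10.0, 0.01, lam, 1e-4, 1e-8, cov_RJ=0.0)
t1, s1 = age_and_sigma(10.0, 0.01, lam, 1e-4, 1e-8, cov_RJ=8e-7)
print(t0 / 1e6, s0 / 1e6)           # age and 1-sigma in Ma, no covariance
print(s1 / 1e6)                     # larger uncertainty with positive cov
```

    Since both partial derivatives are positive here, ignoring a positive covariance understates the age uncertainty; with anticorrelated inputs the omission would overstate it.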

  18. Robust and Adaptive OMR System Including Fuzzy Modeling, Fusion of Musical Rules, and Possible Error Detection

    Directory of Open Access Journals (Sweden)

    Bloch Isabelle

    2007-01-01

    This paper describes a system for optical music recognition (OMR) for monophonic typeset scores. After clarifying the difficulties specific to this domain, we propose appropriate solutions at both the image analysis level and the high-level interpretation level. Thus, a recognition and segmentation method is designed that can deal with common printing defects and numerous symbol interconnections. Musical rules are then modeled and integrated in order to reach a consistent decision. This high-level interpretation step relies on the fuzzy sets and possibility framework, since it allows dealing with symbol variability, flexibility, and imprecision of music rules, and merging all these heterogeneous pieces of information. Other innovative features are the indication of potential errors and the possibility of applying learning procedures in order to gain robustness. Experiments conducted on a large database show that the proposed method constitutes an interesting contribution to OMR.

  19. Error Analysis in a Device to Test Optical Systems by Using Ronchi Test and Phase Shifting

    International Nuclear Information System (INIS)

    Cabrera-Perez, Brasilia; Castro-Ramos, Jorge; Gordiano-Alvarado, Gabriel; Vazquez y Montiel, Sergio

    2008-01-01

    In optical workshops, the Ronchi test is used to determine the optical quality of a concave surface; the quality is verified while the surface is still in the polishing process. The Ronchi test is one of the simplest and most effective methods for evaluating and measuring aberrations. In this work, we describe a device to test converging mirrors and lenses with either small or large F/numbers, using an LED (Light-Emitting Diode) adapted as the illumination source for the Ronchi test. The LED used has a wider radiation angle than a common LED. External power supplies provide a well-stabilized intensity to avoid errors during the phase shift. The setup also accepts automatic input and output data, made possible by phase-shifting interferometry and a square Ronchi ruling with a variable-intensity LED. An error analysis of the different parameters involved in the Ronchi test was made. For example, we analyze the error in the phase shift, the error introduced by the movement of the motor, misalignments of the x-, y- and z-axes of the surface under test, and the error in the period of the grid used.

  20. OOK power model based dynamic error testing for smart electricity meter

    International Nuclear Information System (INIS)

    Wang, Xuewei; Chen, Jingxia; Jia, Xiaolu; Zhu, Meng; Yuan, Ruiming; Jiang, Zhenyu

    2017-01-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Two types of TDLE sequences and three modes of OOK testing dynamic power are then proposed. In addition, a novel algorithm for dynamic errors is derived, which helps to solve the traceability problem of dynamic electric energy measurement. Based on this research, OOK TDLE sequence generation equipment was developed and a dynamic error testing system was constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode and that the measurement uncertainty is 0.38%.

  1. OOK power model based dynamic error testing for smart electricity meter

    Science.gov (United States)

    Wang, Xuewei; Chen, Jingxia; Yuan, Ruiming; Jia, Xiaolu; Zhu, Meng; Jiang, Zhenyu

    2017-02-01

    This paper formulates the dynamic error testing problem for a smart meter, with consideration and investigation of both the testing signal and the dynamic error testing method. To solve the dynamic error testing problems, the paper establishes an on-off-keying (OOK) testing dynamic current model and an OOK testing dynamic load energy (TDLE) model. Two types of TDLE sequences and three modes of OOK testing dynamic power are then proposed. In addition, a novel algorithm for dynamic errors is derived, which helps to solve the traceability problem of dynamic electric energy measurement. Based on this research, OOK TDLE sequence generation equipment was developed and a dynamic error testing system was constructed. Using the testing system, five kinds of meters were tested in the three dynamic power modes. The test results show that the dynamic error is closely related to the dynamic power mode and that the measurement uncertainty is 0.38%.
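
    A minimal, assumed model of an OOK test load (not the paper's actual TDLE equipment or sequences) can illustrate how a reference dynamic energy and a dynamic error figure are obtained:

```python
# Illustrative OOK-style test-load model (assumed, not the paper's exact
# TDLE equipment): load power is keyed on/off by a bit sequence, the
# reference energy is accumulated analytically, and a meter reading is
# compared against it to obtain the dynamic error.

def reference_energy_kwh(bits, on_power_kw, bit_seconds):
    """Energy of an OOK power waveform: on_power during '1' bits, 0 otherwise."""
    on_time_h = sum(bits) * bit_seconds / 3600.0
    return on_power_kw * on_time_h

def dynamic_error_percent(meter_kwh, reference_kwh):
    return 100.0 * (meter_kwh - reference_kwh) / reference_kwh

bits = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]     # an assumed OOK keying sequence
ref = reference_energy_kwh(bits, on_power_kw=4.4, bit_seconds=2.0)
print(ref)                                 # 6 "on" bits x 2 s at 4.4 kW
print(dynamic_error_percent(meter_kwh=ref * 1.003, reference_kwh=ref))
```

    Because the OOK reference energy is computed exactly from the keying sequence, the comparison isolates the meter's dynamic behavior from static calibration.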

  2. Property transfer assessments should include radon gas testing

    International Nuclear Information System (INIS)

    Nardi, M.A.

    1992-01-01

    There are two emerging influences that will require radon gas testing as part of many property transfers and most environmental assessments. These requirements come from lending regulators and state legislatures and affect single-family, multifamily, and commercial properties. Fannie Mae and others have developed environmental investigation guidelines for protection from long-term legal liabilities in the purchase of environmentally contaminated real estate. These guidelines include radon gas testing for many properties. Several states have enacted laws that require environmental disclosure forms to be prepared to ensure that the parties involved in certain real estate transactions are aware of the environmental liabilities that may come with the transfer of property. Indiana has recently enacted legislation that would require the disclosure of the presence of radon gas in many commercial real estate transactions. With more banks and state governments following this trend, radon gas testing should be performed during all property transfers and environmental assessments to protect the parties involved from any long-term legal liabilities.

  3. Environmental site assessments should include radon gas testing

    International Nuclear Information System (INIS)

    Nardi, M.A.

    1991-01-01

    There are two emerging influences that will require radon gas testing as part of many property transfers and most site assessments. These requirements come from lending regulators and state legislatures. Fannie Mae and others have developed environmental investigation guidelines for the purchase of environmentally contaminated real estate. These guidelines include radon gas testing for many properties. Several states have enacted laws that require environmental disclosure forms to be prepared to ensure that the parties involved in certain real estate transactions are aware of the environmental liabilities that may come with the transfer of property. Indiana has recently enacted legislation that would require the disclosure of the presence of radon gas in many commercial real estate transactions. With more lenders and state governments likely to follow this trend, radon gas testing should be performed during all property transfers and site assessments to protect the parties involved from any legal liabilities.

  4. Effects of human errors on the determination of surveillance test interval

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Koo, Bon Hyun

    1990-01-01

    This paper incorporates the effects of human errors associated with periodic testing into the unavailability of the safety system as well as the component unavailability. Two types of possible human error during testing are considered: the possibility that a good safety system is inadvertently left in a bad state after the test (Type A human error), and the possibility that a bad safety system goes undetected by the test (Type B human error). An event tree model is developed for the steady-state unavailability of the safety system to determine the effects of human errors on the component unavailability and the test interval. We perform a reliability analysis of the safety injection system (SIS), applying the two types of human error to the safety injection pumps. The results of various sensitivity analyses show that: (1) the appropriate test interval decreases and the steady-state unavailability increases as the probabilities of both types of human error increase, and both are far more sensitive to Type A human error than to Type B; and (2) the SIS unavailability increases slightly as the probability of Type B human error increases, and significantly as the probability of Type A human error increases. Therefore, to avoid underestimation, the effects of human error should be incorporated in system reliability analyses that aim at relaxing surveillance test intervals, and Type A human error has the more important effect on the unavailability and the surveillance test interval.
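
    The qualitative findings above can be illustrated with a simple closed-form stand-in for the event tree model (the functional form and all numbers below are assumptions, not the paper's model):

```python
import math

# Simple closed-form stand-in for the paper's event tree model (the
# functional form and all numbers are assumptions, for illustration only):
#   U(T) ~ lam*T/2          random failures accumulating between tests
#        + qA               Type A error: test leaves a good system disabled
#        + qB*lam*T         Type B error: bad system survives test undetected
#        + tau/T            downtime of the test itself
# Setting dU/dT = 0 gives the unavailability-minimizing test interval.

def unavailability(T, lam, qA, qB, tau):
    return lam * T / 2 + qA + qB * lam * T + tau / T

def optimal_interval(lam, qB, tau):
    return math.sqrt(tau / (lam * (0.5 + qB)))

lam, qA, qB, tau = 1e-4, 1e-3, 1e-2, 1.5   # per-hour rate, probs, test-hours
T_opt = optimal_interval(lam, qB, tau)
print(T_opt)                                # about 171 h for these numbers
print(unavailability(T_opt, lam, qA, qB, tau))
```

    Even in this crude stand-in, raising the Type B probability shortens the optimal interval, while the Type A term shifts the whole unavailability curve upward regardless of T, consistent with the paper's sensitivity results.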

  5. How to reduce errors in applying impairment tests

    OpenAIRE

    Petersen, Christian; Plenborg, Thomas

    2007-01-01

    Fair value accounting has become predominant in accounting, as a vast number of IAS/IFRS standards are based on fair value accounting, including IAS 36 Impairment of Assets. Fair value accounting for goodwill is technically challenging, since market prices are not observable. Thus, valuation techniques must be applied in order to test goodwill for impairment. While prior research on goodwill has concentrated on either the (dis)advantages of each accounting procedure for goodwill or exami...

  6. Local and omnibus goodness-of-fit tests in classical measurement error models

    KAUST Repository

    Ma, Yanyuan; Hart, Jeffrey D.; Janicki, Ryan; Carroll, Raymond J.

    2010-01-01

    We consider functional measurement error models, i.e. models where covariates are measured with error and yet no distributional assumptions are made about the mismeasured variable. We propose and study a score-type local test and an orthogonal

  7. Testing and Estimating Shape-Constrained Nonparametric Density and Regression in the Presence of Measurement Error

    KAUST Repository

    Carroll, Raymond J.

    2011-03-01

    In many applications we can expect that, or are interested to know if, a density function or a regression curve satisfies some specific shape constraints. For example, when the explanatory variable, X, represents the value taken by a treatment or dosage, the conditional mean of the response, Y , is often anticipated to be a monotone function of X. Indeed, if this regression mean is not monotone (in the appropriate direction) then the medical or commercial value of the treatment is likely to be significantly curtailed, at least for values of X that lie beyond the point at which monotonicity fails. In the case of a density, common shape constraints include log-concavity and unimodality. If we can correctly guess the shape of a curve, then nonparametric estimators can be improved by taking this information into account. Addressing such problems requires a method for testing the hypothesis that the curve of interest satisfies a shape constraint, and, if the conclusion of the test is positive, a technique for estimating the curve subject to the constraint. Nonparametric methodology for solving these problems already exists, but only in cases where the covariates are observed precisely. However in many problems, data can only be observed with measurement errors, and the methods employed in the error-free case typically do not carry over to this error context. In this paper we develop a novel approach to hypothesis testing and function estimation under shape constraints, which is valid in the context of measurement errors. Our method is based on tilting an estimator of the density or the regression mean until it satisfies the shape constraint, and we take as our test statistic the distance through which it is tilted. Bootstrap methods are used to calibrate the test. The constrained curve estimators that we develop are also based on tilting, and in that context our work has points of contact with methodology in the error-free case.

  8. Benchmark test cases for evaluation of computer-based methods for detection of setup errors: realistic digitally reconstructed electronic portal images with known setup errors

    International Nuclear Information System (INIS)

    Fritsch, Daniel S.; Raghavan, Suraj; Boxwala, Aziz; Earnhart, Jon; Tracton, Gregg; Cullip, Timothy; Chaney, Edward L.

    1997-01-01

    Purpose: The purpose of this investigation was to develop methods and software for computing realistic digitally reconstructed electronic portal images with known setup errors for use as benchmark test cases for evaluation and intercomparison of computer-based methods for image matching and detecting setup errors in electronic portal images. Methods and Materials: An existing software tool for computing digitally reconstructed radiographs was modified to compute simulated megavoltage images. An interface was added to allow the user to specify which setup parameter(s) will contain computer-induced random and systematic errors in a reference beam created during virtual simulation. Other software features include options for adding random and structured noise, Gaussian blurring to simulate geometric unsharpness, histogram matching with a 'typical' electronic portal image, specifying individual preferences for the appearance of the 'gold standard' image, and specifying the number of images generated. The visible male computed tomography data set from the National Library of Medicine was used as the planning image. Results: Digitally reconstructed electronic portal images with known setup errors have been generated and used to evaluate our methods for automatic image matching and error detection. Any number of different sets of test cases can be generated to investigate setup errors involving selected setup parameters and anatomic volumes. This approach has proved to be invaluable for determination of error detection sensitivity under ideal (rigid body) conditions and for guiding further development of image matching and error detection methods. Example images have been successfully exported for similar use at other sites. Conclusions: Because absolute truth is known, digitally reconstructed electronic portal images with known setup errors are well suited for evaluation of computer-aided image matching and error detection methods. 
High-quality planning images, such as

  9. International Test Comparisons: Reviewing Translation Error in Different Source Language-Target Language Combinations

    Science.gov (United States)

    Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming

    2018-01-01

    This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme of International…

  10. Test models for improving filtering with model errors through stochastic parameter estimation

    International Nuclear Information System (INIS)

    Gershgorin, B.; Harlim, J.; Majda, A.J.

    2010-01-01

    The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
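
    A minimal cousin of the stochastic parameter estimation idea (far simpler than SPEKF, which handles multiplicative biases with exact moment formulas) is a Kalman filter whose state is augmented with an unknown additive bias; all parameter values below are invented:

```python
import numpy as np

# A minimal cousin of stochastic parameter estimation (far simpler than
# SPEKF, which treats multiplicative biases with exact moment formulas):
# filter a scalar AR(1) signal whose model has an unknown additive bias b,
# by augmenting the state to (x, b) and estimating both with a linear
# Kalman filter. All parameter values are invented.

rng = np.random.default_rng(2)
a, b_true, q, r = 0.9, 1.5, 0.1, 0.5      # dynamics, true bias, noise vars
F = np.array([[a, 1.0], [0.0, 1.0]])      # bias evolves as a constant
H = np.array([[1.0, 0.0]])                # only x is observed
Q = np.diag([q, 1e-6])                    # tiny noise keeps b adaptable

x_true, m = 0.0, np.zeros(2)
P = np.eye(2) * 10.0
for _ in range(2000):
    x_true = a * x_true + b_true + rng.normal(0.0, np.sqrt(q))
    y = x_true + rng.normal(0.0, np.sqrt(r))
    m, P = F @ m, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + r                            # innovation variance
    K = P @ H.T / S                                # Kalman gain
    m = m + (K * (y - H @ m)).ravel()              # update mean
    P = P - K @ H @ P                              # update covariance

print(m[1])   # estimated bias, close to b_true = 1.5
```

    Correcting the additive bias online is the simplest instance of the systematic bias correction that the SPEKF-based test models formalize.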

  11. A Human Error Analysis Procedure for Identifying Potential Error Modes and Influencing Factors for Test and Maintenance Activities

    International Nuclear Information System (INIS)

    Kim, Jae Whan; Park, Jin Kyun

    2010-01-01

    Periodic or non-periodic test and maintenance (T and M) activities in large, complex systems such as nuclear power plants (NPPs) are essential for sustaining stable and safe operation of the systems. On the other hand, it has also been noted that human erroneous actions occurring during T and M activities can incur unplanned reactor trips (RTs) or power derating, make safety-related systems unavailable, or degrade the reliability of components. The contribution of human errors during normal and abnormal NPP activities to unplanned RTs is known to be about 20% of the total events. This paper introduces a procedure for predictively analyzing human error potential when maintenance personnel perform T and M tasks based on a work procedure or their work plan. The procedure helps the plant maintenance team prepare for plausible human errors. It focuses on the recurrent error forms (or modes) of execution-based errors, such as wrong object, omission, too little, and wrong action.

  12. Automated reactor protection testing saves time and avoids errors

    International Nuclear Information System (INIS)

    Raimondo, E.

    1990-01-01

    When the Pressurized Water Reactor units in the French 900MWe series were designed, the instrumentation and control systems were equipped for manual periodic testing. Manual reactor protection system testing has since been successfully replaced by an automatic system, which is also applicable to other instrumentation testing. A study on the complete automation of process instrumentation testing has been carried out. (author)

  13. Phoneme Error Pattern by Heritage Speakers of Spanish on an English Word Recognition Test.

    Science.gov (United States)

    Shi, Lu-Feng

    2017-04-01

    Heritage speakers acquire their native language from home use in their early childhood. As the native language is typically a minority language in the society, these individuals receive their formal education in the majority language and eventually develop greater competency with the majority than their native language. To date, there have not been specific research attempts to understand word recognition by heritage speakers. It is not clear if and to what degree we may infer from evidence based on bilingual listeners in general. This preliminary study investigated how heritage speakers of Spanish perform on an English word recognition test and analyzed their phoneme errors. A prospective, cross-sectional, observational design was employed. Twelve normal-hearing adult Spanish heritage speakers (four men, eight women, 20-38 yr old) participated in the study. Their language background was obtained through the Language Experience and Proficiency Questionnaire. Nine English monolingual listeners (three men, six women, 20-41 yr old) were also included for comparison purposes. Listeners were presented with 200 Northwestern University Auditory Test No. 6 words in quiet. They repeated each word orally and in writing. Their responses were scored by word, word-initial consonant, vowel, and word-final consonant. Performance was compared between groups with Student's t test or analysis of variance. Group-specific error patterns were primarily descriptive, but intergroup comparisons were made using 95% or 99% confidence intervals for proportional data. The two groups of listeners yielded comparable scores when their responses were examined by word, vowel, and final consonant. However, heritage speakers of Spanish misidentified significantly more word-initial consonants and had significantly more difficulty with initial /p, b, h/ than their monolingual peers. The two groups yielded similar patterns for vowel and word-final consonants, but heritage speakers made significantly

  14. Compensation of kinematic geometric parameters error and comparative study of accuracy testing for robot

    Science.gov (United States)

    Du, Liang; Shi, Guangming; Guan, Weibin; Zhong, Yuansheng; Li, Jin

    2014-12-01

    Geometric error is the main error source of an industrial robot, and it plays a significantly more important role than other error sources. A compensation model for kinematic error is proposed in this article. Many methods can be used to test robot accuracy, which raises the question of how to determine which method is better. In this article, a method is used to compare two approaches to robot accuracy testing: a Laser Tracker System (LTS) and a Three Coordinate Measuring instrument (TCM) were used to test the robot accuracy according to the standard. From the compensation results, the better method, which can clearly improve the robot accuracy, is identified.

  15. Thermal Error Test and Intelligent Modeling Research on the Spindle of High Speed CNC Machine Tools

    Science.gov (United States)

    Luo, Zhonghui; Peng, Bin; Xiao, Qijun; Bai, Lu

    2018-03-01

    Thermal error is the main factor affecting the accuracy of precision machining. Through experiments, this paper studies thermal error testing and intelligent modeling for the spindle of a vertical high-speed CNC machine tool, reflecting the current research focus on machine tool thermal error. Several thermal error testing devices are designed, in which 7 temperature sensors measure the temperature of the machine tool spindle system and 2 displacement sensors detect the thermal error displacement. A thermal error compensation model with good inversion prediction ability is established by applying principal component analysis, optimizing the temperature measuring points, extracting the characteristic values closely associated with the thermal error displacement, and using artificial neural network technology.
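
    The modeling chain described above (PCA over the temperature channels followed by a learned map to displacement) can be sketched on synthetic data; ordinary least squares stands in for the paper's neural network, and every value below is invented:

```python
import numpy as np

# Sketch of the modeling chain on synthetic data: PCA compresses the 7
# temperature channels, then a regression maps the leading components to
# thermal displacement. Ordinary least squares stands in for the paper's
# neural network, and every value below is invented.

rng = np.random.default_rng(3)
n = 300
heat = np.cumsum(rng.normal(0.0, 0.1, n))            # common warming trend
temps = (heat[:, None] * rng.uniform(0.5, 1.5, 7)
         + rng.normal(0.0, 0.05, (n, 7)))            # 7 correlated sensors
displacement = 12.0 * heat + rng.normal(0.0, 0.1, n) # thermal drift (um)

centered = temps - temps.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ vt[:2].T                         # 2 principal components

X = np.hstack([scores, np.ones((n, 1))])             # regression + intercept
coef, *_ = np.linalg.lstsq(X, displacement, rcond=None)
residual = displacement - X @ coef
print(np.std(residual))                              # small vs. displacement
```

    Compressing the correlated sensors into a few components before fitting is what makes optimizing the temperature measuring points tractable; a neural network would replace the linear map when the temperature-displacement relation is nonlinear.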

  16. Improving Papanicolaou test quality and reducing medical errors by using Toyota production system methods.

    Science.gov (United States)

    Raab, Stephen S; Andrew-Jaja, Carey; Condel, Jennifer L; Dabbs, David J

    2006-01-01

    The objective of the study was to determine whether the Toyota production system process improves Papanicolaou test quality and patient safety. An 8-month nonconcurrent cohort study that included 464 case and 639 control women who had a Papanicolaou test was performed. Office workflow was redesigned using Toyota production system methods by introducing a 1-by-1 continuous flow process. We measured the frequency of Papanicolaou tests without a transformation zone component, follow-up and Bethesda System diagnostic frequency of atypical squamous cells of undetermined significance, and diagnostic error frequency. After the intervention, the percentage of Papanicolaou tests lacking a transformation zone component decreased from 9.9% to 4.7% (P = .001). The percentage of Papanicolaou tests with a diagnosis of atypical squamous cells of undetermined significance decreased from 7.8% to 3.9% (P = .007). The frequency of error per correlating cytologic-histologic specimen pair decreased from 9.52% to 7.84%. The introduction of the Toyota production system process resulted in improved Papanicolaou test quality.

  17. Testing and assessment strategies, including alternative and new approaches

    DEFF Research Database (Denmark)

    Meyer, Otto A.

    2003-01-01

    The object of toxicological testing is to predict possible adverse effect in humans when exposed to chemicals whether used as industrial chemicals, pharmaceuticals or pesticides. Animal models are predominantly used in identifying potential hazards of chemicals. The use of laboratory animals raises...... ethical concern. However, irrespective of animal welfare it is an important aspect of the discipline of toxicology that the primary object is human health. The ideal testing and assessment strategy is simple to use all the available test methods and preferably more in laboratory animal species from which...... uses and of the absence of health problems involved with their use. Thus, the regulatory toxicology is a cocktail of science and pragmatism added a crucial concern for animal welfare. Test methods are most often used in a testing sequence as bricks in a testing strategy. The main key driving forces...

  18. Non-parametric tests of productive efficiency with errors-in-variables

    NARCIS (Netherlands)

    Kuosmanen, T.K.; Post, T.; Scholtes, S.

    2007-01-01

    We develop a non-parametric test of productive efficiency that accounts for errors-in-variables, following the approach of Varian. [1985. Nonparametric analysis of optimizing behavior with measurement error. Journal of Econometrics 30(1/2), 445-458]. The test is based on the general Pareto-Koopmans

  19. Hydraulic testing of Type Q septifoils including modifications

    International Nuclear Information System (INIS)

    Steimke, J.L.; Fowley, M.D.; Guerrero, H.N.

    1992-09-01

    On May 25, 1992 a leak of moderator was detected as K Reactor was approaching initial criticality. The partial length control rods were being withdrawn when the leak detectors in the Process Room alarmed. The apparent location of the moderator leak was the top of the guide tubes which are positioned over the new Type Q septifoils. The reactor was shut down immediately. In response, a testing program was begun at the Heat Transfer Laboratory (HTL). The goals of the program were to determine the cause of the septifoil leak and to test methods for preventing future leaks. These tests are described in this report

  20. An Extended Quadratic Frobenius Primality Test with Average and Worst Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2003-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller......-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....
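
    Since EQFT is benchmarked throughout against Miller-Rabin rounds, a compact reference implementation of the Miller-Rabin test itself (not of EQFT) may help fix ideas; the fixed seed and trial-division primes are illustrative choices.

```python
import random

def miller_rabin(n, t=2):
    """One-sided compositeness test: 'composite' answers are always correct;
    'probably prime' errs with probability at most 4**-t per the classic bound."""
    rng = random.Random(42)           # fixed seed so the sketch is reproducible
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):    # quick trial division
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:                 # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(t):
        a = rng.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # witness found: n is certainly composite
    return True                       # probably prime

print(miller_rabin(2**61 - 1), miller_rabin(2**61 + 1))
```

    Each round either certifies compositeness or passes; the classic bound gives error probability at most 4^-t for t rounds, which is the baseline the EQFT figures improve on.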

  1. An Extended Quadratic Frobenius Primality Test with Average- and Worst-Case Error Estimate

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2006-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller......-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....

  2. An Extended Quadratic Frobenius Primality Test with Average Case Error Estimates

    DEFF Research Database (Denmark)

    Damgård, Ivan Bjerre; Frandsen, Gudmund Skovbjerg

    2001-01-01

    We present an Extended Quadratic Frobenius Primality Test (EQFT), which is related to and extends the Miller-Rabin test and the Quadratic Frobenius test (QFT) by Grantham. EQFT takes time about equivalent to 2 Miller-Rabin tests, but has much smaller error probability, namely 256/331776^t for t...... for the error probability of this algorithm as well as a general closed expression bounding the error. For instance, it is at most 2^-143 for k = 500, t = 2. Compared to earlier similar results for the Miller-Rabin test, the results indicate that our test in the average case has the effect of 9 Miller......-Rabin tests, while only taking time equivalent to about 2 such tests. We also give bounds for the error in case a prime is sought by incremental search from a random starting point....

  3. Human errors in test and maintenance of nuclear power plants. Nordic project work

    International Nuclear Information System (INIS)

    Andersson, H.; Liwaang, B.

    1985-08-01

    The present report is a summary of the NKA/LIT-1 project performed during the period 1981-1985. The report summarizes work on the influence of human error in test and calibration activities in nuclear power plants, reviews problems regarding the optimization of test intervals and the organization of test and maintenance activities, and analyzes the human error contribution to the overall risk in test and maintenance tasks. (author)

  4. Wavefront-error evaluation by mathematical analysis of experimental Foucault-test data

    Science.gov (United States)

    Wilson, R. G.

    1975-01-01

    The diffraction theory of the Foucault test provides an integral formula expressing the complex amplitude and irradiance distribution in the Foucault pattern of a test mirror (lens) as a function of wavefront error. Recent literature presents methods of inverting this formula to express wavefront error in terms of irradiance in the Foucault pattern. The present paper describes a study in which the inversion formulation was applied to photometric Foucault-test measurements on a nearly diffraction-limited mirror to determine wavefront errors for direct comparison with those determined from scatter-plate interferometer measurements. The results affirm the practicability of the Foucault test for quantitative wavefront analysis of very small errors, and they reveal the fallacy of the prevalent belief that the test is limited to qualitative use only. Implications of the results with regard to optical testing and the potential use of the Foucault test for wavefront analysis in orbital space telescopes are discussed.

  5. Measurements for testing of fluoroscopic screens, including the photofluorographic units

    International Nuclear Information System (INIS)

    Balfanz, R.

    1986-01-01

    Image quality control measurements for fluoroscopic screens and photofluorographs have shown that both types of equipment have a long operating life, so that constancy and technical performance tests are absolutely necessary. It is recommended to conclude in-service maintenance contracts with the manufacturers. (DG) [de

  6. A procedure for the significance testing of unmodeled errors in GNSS observations

    Science.gov (United States)

    Li, Bofeng; Zhang, Zhetao; Shen, Yunzhong; Yang, Ling

    2018-01-01

    It is a crucial task to establish a precise mathematical model for global navigation satellite system (GNSS) observations in precise positioning. Due to the spatiotemporal complexity of, and limited knowledge on, systematic errors in GNSS observations, some residual systematic errors inevitably remain even after correction with empirical models and parameterization. These residual systematic errors are referred to as unmodeled errors. However, most existing studies focus on handling the systematic errors that can be properly modeled and simply ignore the unmodeled errors that may actually exist. To further improve the accuracy and reliability of GNSS applications, such unmodeled errors must be handled, especially when they are significant. The first question, therefore, is how to statistically validate the significance of unmodeled errors. In this research, we propose a procedure to examine the significance of these unmodeled errors through the combined use of hypothesis tests. With this testing procedure, three components of unmodeled errors, i.e., the nonstationary signal, stationary signal, and white noise, are identified. The procedure is tested using simulated data and real BeiDou datasets with varying error sources. The results show that the unmodeled errors can be discriminated by our procedure with approximately 90% confidence. The efficiency of the proposed procedure is further confirmed by applying time-domain Allan variance analysis and the frequency-domain fast Fourier transform. In summary, spatiotemporally correlated unmodeled errors are common in GNSS observations and are mainly governed by the residual atmospheric biases and multipath. Their patterns may also be affected by the receiver.
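
    One of the diagnostics this abstract mentions, the time-domain Allan variance, separates white noise from correlated (unmodeled) signal by how the variance scales with averaging time. A minimal sketch on synthetic data follows; the series length and averaging factors are arbitrary choices, not the paper's settings.

```python
import numpy as np

def allan_variance(x, m):
    """Non-overlapping Allan variance of series x at averaging factor m."""
    n = len(x) // m
    means = x[: n * m].reshape(n, m).mean(axis=1)   # cluster averages
    return 0.5 * float(np.mean(np.diff(means) ** 2))

rng = np.random.default_rng(1)
white = rng.normal(size=4096)
# For pure white noise the Allan variance falls off as 1/m; flattening at
# large m would instead indicate a correlated, unmodeled component.
av1, av16 = allan_variance(white, 1), allan_variance(white, 16)
print(round(av1 / av16))
```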

  7. Multidimensional adaptive testing with a minimum error-variance criterion

    NARCIS (Netherlands)

    van der Linden, Willem J.

    1997-01-01

    The case of adaptive testing under a multidimensional logistic response model is addressed. An adaptive algorithm is proposed that minimizes the (asymptotic) variance of the maximum-likelihood (ML) estimator of a linear combination of abilities of interest. The item selection criterion is a simple

  8. Framed bit error rate testing for 100G ethernet equipment

    DEFF Research Database (Denmark)

    Rasmussen, Anders; Ruepp, Sarah Renée; Berger, Michael Stübert

    2010-01-01

    rate. As the need for 100 Gigabit Ethernet equipment rises, so does the need for equipment, which can properly test these systems during development, deployment and use. This paper presents early results from a work-in-progress academia-industry collaboration project and elaborates on the challenges...

  9. CUSUM-Logistic Regression analysis for the rapid detection of errors in clinical laboratory test results.

    Science.gov (United States)

    Sampson, Maureen L; Gounden, Verena; van Deventer, Hendrik E; Remaley, Alan T

    2016-02-01

    The main drawback of the periodic analysis of quality control (QC) material is that test performance is not monitored in time periods between QC analyses, potentially leading to the reporting of faulty test results. The objective of this study was to develop a patient based QC procedure for the more timely detection of test errors. Results from a Chem-14 panel measured on the Beckman LX20 analyzer were used to develop the model. Each test result was predicted from the other 13 members of the panel by multiple regression, which resulted in correlation coefficients between the predicted and measured result of >0.7 for 8 of the 14 tests. A logistic regression model, which utilized the measured test result, the predicted test result, the day of the week and time of day, was then developed for predicting test errors. The output of the logistic regression was tallied by a daily CUSUM approach and used to predict test errors, with a fixed specificity of 90%. The mean average run length (ARL) before error detection by CUSUM-Logistic Regression (CSLR) was 20 with a mean sensitivity of 97%, which was considerably shorter than the mean ARL of 53 (sensitivity 87.5%) for a simple prediction model that only used the measured result for error detection. A CUSUM-Logistic Regression analysis of patient laboratory data can be an effective approach for the rapid and sensitive detection of clinical laboratory errors. Published by Elsevier Inc.
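
    The daily CUSUM tally at the heart of the CSLR procedure can be sketched independently of the logistic-regression scoring step. Below, a synthetic score stream replaces the regression output, and the reference value k and decision threshold h are illustrative, not the paper's tuned values.

```python
import numpy as np

def cusum_alarm(scores, k=0.5, h=5.0):
    """One-sided CUSUM: return the index of the first alarm, or -1 if none."""
    s = 0.0
    for i, x in enumerate(scores):
        s = max(0.0, s + x - k)   # accumulate only upward drift beyond k
        if s > h:
            return i
    return -1

rng = np.random.default_rng(2)
in_control = rng.normal(0.0, 1.0, 100)   # error-free period
shifted = rng.normal(1.5, 1.0, 100)      # systematic test error begins
alarm = cusum_alarm(np.concatenate([in_control, shifted]))
print(alarm >= 0)                        # True: the shift is eventually flagged
```

    Accumulating small per-result evidence is what shortens the average run length relative to a rule that looks at each result in isolation.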

  10. Reducing patient identification errors related to glucose point-of-care testing

    Directory of Open Access Journals (Sweden)

    Gaurav Alreja

    2011-01-01

    Background: Patient identification (ID) errors in point-of-care testing (POCT) can cause test results to be transferred to the wrong patient's chart or prevent results from being transmitted and reported. Despite the implementation of patient barcoding and ongoing operator training at our institution, patient ID errors still occur with glucose POCT. The aim of this study was to develop a solution to reduce identification errors with POCT. Materials and Methods: Glucose POCT was performed by approximately 2,400 clinical operators throughout our health system. Patients are identified by scanning in wristband barcodes or by manual data entry using portable glucose meters. Meters are docked to upload data to a database server which then transmits data to any medical record matching the financial number of the test result. With a new model, meters connect to an interface manager where the patient ID (a nine-digit account number) is checked against patient registration data from admission, discharge, and transfer (ADT) feeds and only matched results are transferred to the patient's electronic medical record. With the new process, the patient ID is checked prior to testing, and testing is prevented until ID errors are resolved. Results: When averaged over a period of a month, ID errors were reduced to 3 errors/month (0.015%) in comparison with 61.5 errors/month (0.319%) before implementing the new meters. Conclusion: Patient ID errors may occur with glucose POCT despite patient barcoding. The verification of patient identification should ideally take place at the bedside before testing occurs so that the errors can be addressed in real time. The introduction of an ADT feed directly to glucose meters reduced patient ID errors in POCT.
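
    The interface-manager check described here is essentially a gate: a result files only when its account number matches an active ADT registration. A toy sketch follows; the field names and data structures are invented for illustration, not taken from any vendor system.

```python
# Minimal sketch of ID verification against an ADT feed (illustrative data).
adt_feed = {
    "100234567": "MRN-001",   # nine-digit account number -> patient chart
    "100234568": "MRN-002",
}

def route_result(account_number, glucose_mg_dl):
    if account_number not in adt_feed:
        # Unmatched ID: hold the result instead of filing it to a wrong chart
        return {"status": "held", "reason": "no ADT match"}
    return {"status": "filed", "chart": adt_feed[account_number],
            "glucose_mg_dl": glucose_mg_dl}

print(route_result("100234567", 95)["status"])   # filed
print(route_result("999999999", 95)["status"])   # held
```

    Holding unmatched results, rather than filing them by financial number alone, is what removes the wrong-chart failure mode the study targets.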

  11. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Directory of Open Access Journals (Sweden)

    Dirk Hastedt

    2015-06-01

    This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average achievement scores from the latter can be linked to the international metric. In addition to raising issues associated with different testing conditions, administrative procedures, and the like, this approach also poses psychometric challenges. This paper endeavors to shed some light on the effects that can be expected, the linkage errors in particular, by countries using this practice. The ILSA selected for this simulation study was IEA TIMSS 2011, and the three countries used as the national assessment cases were Botswana, Honduras, and Tunisia, all of which participated in TIMSS 2011. The items selected as items common to the simulated national tests and the international test came from the Grade 4 TIMSS 2011 mathematics items that IEA released into the public domain after completion of this assessment. The findings of the current study show that linkage errors seemed to achieve acceptable levels if 30 or more items were used for the linkage, although the errors were still significantly higher compared to the TIMSS' cutoffs. Comparison of the estimated country averages based on the simulated national surveys and the averages based on the international TIMSS assessment revealed only one instance across the three countries of the estimates approaching parity. Also, the percentages of students in these countries who actually reached the defined benchmarks on the TIMSS achievement scale differed significantly from the results based on TIMSS and the results for the simulated national assessments. As a conclusion, we advise against using groups of released items from international assessments in national

  12. Observational constraint on the interacting dark energy models including the Sandage-Loeb test

    Science.gov (United States)

    Zhang, Ming-Jian; Liu, Wen-Biao

    2014-05-01

    Two types of interacting dark energy models are investigated using the type Ia supernova (SNIa), observational data (OHD), cosmic microwave background shift parameter, and the secular Sandage-Loeb (SL) test. In the investigation, we have used two sets of parameter priors including WMAP-9 and Planck 2013. They have shown some interesting differences. We find that the inclusion of SL test can obviously provide a more stringent constraint on the parameters in both models. For the constant coupling model, the interaction term has been improved to be only a half of the original scale on corresponding errors. Comparing with only SNIa and OHD, we find that the inclusion of the SL test almost reduces the best-fit interaction to zero, which indicates that the higher-redshift observation including the SL test is necessary to track the evolution of the interaction. For the varying coupling model, data with the inclusion of the SL test show that the parameter at C.L. in Planck priors is , where the constant is characteristic for the severity of the coincidence problem. This indicates that the coincidence problem will be less severe. We then reconstruct the interaction , and we find that the best-fit interaction is also negative, similar to the constant coupling model. However, for a high redshift, the interaction generally vanishes at infinity. We also find that the phantom-like dark energy with is favored over the CDM model.

  13. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to freeze-dried, 0.2%-accurate gravimetric uranium nitrate standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures
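
    The paper fits the standards' masses as free parameters weighted by their known 0.2% uncertainty. A lighter-weight approximation in the same spirit is "effective variance" weighting, where the mass uncertainty is propagated into the response error before a weighted fit. The sketch below uses synthetic numbers throughout, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(3)
m_nom = np.linspace(0.1, 1.0, 10)            # nominal standard masses (mg)
sigma_m = 0.002 * m_nom                      # 0.2% gravimetric mass error
sigma_y = 15.0                               # system (counting) error
m_act = m_nom + rng.normal(0, sigma_m)       # actual masses differ slightly
slope_true, icept_true = 5000.0, 20.0
y = slope_true * m_act + icept_true + rng.normal(0, sigma_y, 10)

X = np.column_stack([np.ones_like(m_nom), m_nom])
beta = np.linalg.lstsq(X, y, rcond=None)[0]  # unweighted first pass
for _ in range(3):                           # weights depend on the slope
    var_eff = sigma_y**2 + (beta[1] * sigma_m) ** 2
    sw = 1.0 / np.sqrt(var_eff)              # inverse effective std deviations
    beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]

print(np.round(beta, 1))
```

    Because the weights depend on the fitted slope, the fit is iterated; with error magnitudes of this size it settles within a couple of passes.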

  14. Neutrino masses and cosmological parameters from a Euclid-like survey: Markov Chain Monte Carlo forecasts including theoretical errors

    CERN Document Server

    Audren, Benjamin; Bird, Simeon; Haehnelt, Martin G.; Viel, Matteo

    2013-01-01

    We present forecasts for the accuracy of determining the parameters of a minimal cosmological model and the total neutrino mass based on combined mock data for a future Euclid-like galaxy survey and Planck. We consider two different galaxy surveys: a spectroscopic redshift survey and a cosmic shear survey. We make use of the Markov Chain Monte Carlo (MCMC) technique and assume two sets of theoretical errors. The first error is meant to account for uncertainties in the modelling of the effect of neutrinos on the non-linear galaxy power spectrum and we assume this error to be fully correlated in Fourier space. The second error is meant to parametrize the overall residual uncertainties in modelling the non-linear galaxy power spectrum at small scales, and is conservatively assumed to be uncorrelated and to increase with the ratio of a given scale to the scale of non-linearity. It hence increases with wavenumber and decreases with redshift. With these two assumptions for the errors and assuming further conservat...

  15. Nine Loci for Ocular Axial Length Identified through Genome-wide Association Studies, Including Shared Loci with Refractive Error

    NARCIS (Netherlands)

    Cheng, Ching-Yu; Schache, Maria; Ikram, M. Kamran; Young, Terri L.; Guggenheim, Jeremy A.; Vitart, Veronique; Macgregor, Stuart; Verhoeven, Virginie J. M.; Barathi, Veluchamy A.; Liao, Jiemin; Hysi, Pirro G.; Bailey-Wilson, Joan E.; St Pourcain, Beate; Kemp, John P.; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Montgomery, Grant W.; Mishra, Aniket; Wang, Ya Xing; Wang, Jie Jin; Rochtchina, Elena; Polasek, Ozren; Wright, Alan F.; Amin, Najaf; van Leeuwen, Elisabeth M.; Wilson, James F.; Pennell, Craig E.; van Duijn, Cornelia M.; de Jong, Paulus T. V. M.; Vingerling, Johannes R.; Zhou, Xin; Chen, Peng; Li, Ruoying; Tay, Wan-Ting; Zheng, Yingfeng; Chew, Merwyn; Burdon, Kathryn P.; Craig, Jamie E.; Iyengar, Sudha K.; Igo, Robert P.; Lass, Jonathan H.; Chew, Emily Y.; Haller, Toomas; Mihailov, Evelin; Metspalu, Andres; Wedenoja, Juho; Simpson, Claire L.; Wojciechowski, Robert; Chen, Wei

    2013-01-01

    Refractive errors are common eye disorders of public health importance worldwide. Ocular axial length (AL) is the major determinant of refraction and thus of myopia and hyperopia. We conducted a meta-analysis of genome-wide association studies for AL, combining 12,531 Europeans and 8,216 Asians. We

  16. Review of Pre-Analytical Errors in Oral Glucose Tolerance Testing in a Tertiary Care Hospital.

    Science.gov (United States)

    Nanda, Rachita; Patel, Suprava; Sahoo, Sibashish; Mohapatra, Eli

    2018-03-13

    The pre-pre-analytical and pre-analytical phases account for a major share of laboratory errors. The process taken into consideration here is a very common procedure, the oral glucose tolerance test, used to identify pre-pre-analytical errors. Quality indicators provide evidence of quality, support accountability, and help in the decision making of laboratory personnel. The aim of this research is to evaluate the pre-analytical performance of the oral glucose tolerance test procedure. An observational study was conducted over a period of three months in the phlebotomy and accessioning unit of our laboratory, using a questionnaire that examined the pre-pre-analytical errors through a scoring system. The pre-analytical phase was analyzed for each sample collected as per seven quality indicators. About 25% of the population gave a wrong answer to the question that tested their knowledge of patient preparation. The appropriateness of the test result (QI-1) had the most errors. Although QI-5, for sample collection, had a low error rate, it is a very important indicator because any wrongly collected sample can alter the test result. Evaluating the pre-analytical and pre-pre-analytical phases is essential and must be conducted routinely on a yearly basis to identify errors, take corrective action, and facilitate their gradual introduction into routine practice.

  17. Design of a single-borehole hydraulic test programme allowing for interpretation-based errors

    International Nuclear Information System (INIS)

    Black, J.H.

    1987-07-01

    Hydraulic testing using packers in single boreholes is one of the most important sources of data to safety assessment modelling in connection with the disposal of radioactive waste. It is also one of the most time-consuming and expensive. It is important that the results are as reliable as possible and as accurate as necessary for the use that is made of them. There are many causes of possible error and inaccuracy ranging from poor field practice to inappropriate interpretation procedure. The report examines and attempts to quantify the size of error arising from the accidental use of an inappropriate or inadequate interpretation procedure. In doing so, it can be seen which interpretation procedure or combination of procedures results in least error. Lastly, the report attempts to use the previous conclusions from interpretation to propose forms of field test procedure where interpretation-based errors will be minimised. Hydraulic tests (sometimes known as packer tests) come in three basic forms: slug/pulse, constant flow and constant head. They have different characteristics, some measuring a variable volume of rock (dependent on hydraulic conductivity) and some having a variable duration (dependent on hydraulic conductivity). A combination of different tests in the same interval is seen as desirable. For the purposes of assessing interpretation-based errors, slug and pulse tests are considered together as are constant flow and constant head tests. The same method is used in each case to assess errors. The method assumes that the simplest analysis procedure (cylindrical flow in homogeneous isotropic porous rock) will be used on each set of field data. The error is assessed by calculating synthetic data for alternative configurations (e.g. fissured rock, anisotropic rock, inhomogeneous rock - i.e. skin - etc.) and then analyzing this data using the simplest analysis procedure. 28 refs., 26 figs

  18. Notes on power of normality tests of error terms in regression models

    Energy Technology Data Exchange (ETDEWEB)

    Střelec, Luboš [Department of Statistics and Operation Analysis, Faculty of Business and Economics, Mendel University in Brno, Zemědělská 1, Brno, 61300 (Czech Republic)

    2015-03-10

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.
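
    The point about testing the error terms, not the response itself, can be illustrated with the classical Jarque-Bera statistic applied to OLS residuals, used here as a simple stand-in for the robust RT-class tests the contribution introduces.

```python
import numpy as np

def jarque_bera(e):
    """Jarque-Bera statistic; approximately chi2(2) under normality."""
    n = len(e)
    e = e - e.mean()
    s2 = np.mean(e**2)
    skew = np.mean(e**3) / s2**1.5
    kurt = np.mean(e**4) / s2**2
    return n / 6.0 * (skew**2 + (kurt - 3.0) ** 2 / 4.0)

def ols_resid(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return y - X @ beta

rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 500)
jb_norm = jarque_bera(ols_resid(x, 2 + 3 * x + rng.normal(0, 1, 500)))
jb_skew = jarque_bera(ols_resid(x, 2 + 3 * x + rng.exponential(1, 500)))
# 5.99 is the 5% chi2(2) critical value
print(jb_skew > 5.99)   # True: skewed error terms are flagged
```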

  19. Notes on power of normality tests of error terms in regression models

    International Nuclear Information System (INIS)

    Střelec, Luboš

    2015-01-01

    Normality is one of the basic assumptions in applying statistical procedures. For example, in linear regression most of the inferential procedures are based on the assumption of normality, i.e. the disturbance vector is assumed to be normally distributed. Failure to detect non-normality of the error terms may lead to incorrect results from the usual statistical inference techniques such as the t-test or F-test. Thus, error terms should be normally distributed in order to allow us to make exact inferences. As a consequence, normally distributed stochastic errors are necessary in order to make inferences that are not misleading, which explains the necessity and importance of robust tests of normality. Therefore, the aim of this contribution is to discuss normality testing of error terms in regression models. In this contribution, we introduce the general RT class of robust tests for normality, and present and discuss the trade-off between power and robustness of selected classical and robust normality tests of error terms in regression models.

  20. Multi-temporal AirSWOT elevations on the Willamette river: error characterization and algorithm testing

    Science.gov (United States)

    Tuozzolo, S.; Frasson, R. P. M.; Durand, M. T.

    2017-12-01

    We analyze a multi-temporal dataset of in-situ and airborne water surface measurements from the March 2015 AirSWOT field campaign on the Willamette River in Western Oregon, which included six days of AirSWOT flights over a 75km stretch of the river. We examine systematic errors associated with dark water and layover effects in the AirSWOT dataset, and test the efficacies of different filtering and spatial averaging techniques at reconstructing the water surface profile. Finally, we generate a spatially-averaged time-series of water surface elevation and water surface slope. These AirSWOT-derived reach-averaged values are ingested in a prospective SWOT discharge algorithm to assess its performance on SWOT-like data collected from a borderline SWOT-measurable river (mean width = 90m).

  1. Tests for detecting overdispersion in models with measurement error in covariates.

    Science.gov (United States)

    Yang, Yingsi; Wong, Man Yu

    2015-11-30

    Measurement error in covariates can affect the accuracy in count data modeling and analysis. In overdispersion identification, the true mean-variance relationship can be obscured under the influence of measurement error in covariates. In this paper, we propose three tests for detecting overdispersion when covariates are measured with error: a modified score test and two score tests based on the proposed approximate likelihood and quasi-likelihood, respectively. The proposed approximate likelihood is derived under the classical measurement error model, and the resulting approximate maximum likelihood estimator is shown to have superior efficiency. Simulation results also show that the score test based on approximate likelihood outperforms the test based on quasi-likelihood and other alternatives in terms of empirical power. By analyzing a real dataset containing the health-related quality-of-life measurements of a particular group of patients, we demonstrate the importance of the proposed methods by showing that the analyses with and without measurement error correction yield significantly different results. Copyright © 2015 John Wiley & Sons, Ltd.
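
    As a baseline for the modified score test, the classical score test of Poisson against overdispersion (without any measurement-error correction) reduces, for an intercept-only model, to Dean's statistic; the data below are synthetic.

```python
import numpy as np

def overdispersion_score(y):
    """Dean's score statistic for overdispersion in an intercept-only
    Poisson model; approximately N(0,1) under the Poisson null."""
    mu = y.mean()                                   # Poisson MLE of the mean
    return np.sum((y - mu) ** 2 - y) / np.sqrt(2 * len(y) * mu**2)

rng = np.random.default_rng(5)
y_pois = rng.poisson(4.0, 1000)                     # equidispersed: var = mean
y_nb = rng.negative_binomial(2, 1 / 3, 1000)        # mean 4, variance 12
print(round(overdispersion_score(y_nb), 1), "vs N(0,1) critical value 1.645")
```

    The proposed tests extend this machinery to regressions whose covariates are measured with error, where measurement error can obscure the true mean-variance relationship.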

  2. A Psychometric Review of Norm-Referenced Tests Used to Assess Phonological Error Patterns

    Science.gov (United States)

    Kirk, Celia; Vigeland, Laura

    2014-01-01

    Purpose: The authors provide a review of the psychometric properties of 6 norm-referenced tests designed to measure children's phonological error patterns. Three aspects of the tests' psychometric adequacy were evaluated: the normative sample, reliability, and validity. Method: The specific criteria used for determining the psychometric…

  3. Measurement error of a simplified protocol for quantitative sensory tests in chronic pain patients

    DEFF Research Database (Denmark)

    Müller, Monika; Biurrun Manresa, José; Limacher, Andreas

    2017-01-01

    BACKGROUND AND OBJECTIVES: Large-scale application of Quantitative Sensory Tests (QST) is impaired by lacking standardized testing protocols. One unclear methodological aspect is the number of records needed to minimize measurement error. Traditionally, measurements are repeated 3 to 5 times...

  4. Rank-based Tests of the Cointegrating Rank in Semiparametric Error Correction Models

    NARCIS (Netherlands)

    Hallin, M.; van den Akker, R.; Werker, B.J.M.

    2012-01-01

    Abstract: This paper introduces rank-based tests for the cointegrating rank in an Error Correction Model with i.i.d. elliptical innovations. The tests are asymptotically distribution-free, and their validity does not depend on the actual distribution of the innovations. This result holds despite the

  5. Study of the Switching Errors in an RSFQ Switch by Using a Computerized Test Setup

    International Nuclear Information System (INIS)

    Kim, Se Hoon; Baek, Seung Hun; Yang, Jung Kuk; Kim, Jun Ho; Kang, Joon Hee

    2005-01-01

    The problem of fluctuation-induced digital errors in rapid single flux quantum (RSFQ) circuits has been a very important issue. In this work, we calculated the bit error rate of an RSFQ switch used in a superconductive arithmetic logic unit (ALU). An RSFQ switch should have a very low error rate at the optimal bias; theoretical estimates of the RSFQ error rate are on the order of 10⁻⁵⁰ per bit operation. In this experiment, we prepared two identical circuits placed in parallel. Each circuit was composed of 10 Josephson transmission lines (JTLs) connected in series, with an RSFQ switch placed in the middle of the 10 JTLs. We used a splitter to feed the same input signal to both circuits. The outputs of the two circuits were compared with an RSFQ exclusive OR (XOR) gate to measure the bit error rate of the RSFQ switch. Using a computerized bit-error-rate test setup, we measured a bit error rate of 2.18 × 10⁻¹² when the bias to the RSFQ switch was 0.398 mA, which is quite far from the optimum bias of 0.6 mA.
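    The two-circuit XOR comparison described above reduces, in software terms, to counting the positions where two nominally identical bit streams disagree. A minimal illustrative sketch (function name and data are ours, not from the paper):

    ```python
    def ber_by_xor(reference_bits, observed_bits):
        """Estimate bit error rate by XOR-comparing two nominally identical
        bit streams, as in the paper's parallel-circuit setup: every
        position where the streams differ counts as one error."""
        assert len(reference_bits) == len(observed_bits)
        errors = sum(a ^ b for a, b in zip(reference_bits, observed_bits))
        return errors / len(reference_bits)

    # Simulate a run of 10,000 bits with two injected errors.
    ref = [i & 1 for i in range(10_000)]
    obs = list(ref)
    obs[137] ^= 1
    obs[9_001] ^= 1
    print(ber_by_xor(ref, obs))  # -> 0.0002
    ```

    Measuring a BER as low as 10⁻¹² this way requires streaming on the order of 10¹² bits, which is why the paper relies on a computerized long-run test setup rather than manual counting.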

  6. Accounting for measurement error in log regression models with applications to accelerated testing.

    Directory of Open Access Journals (Sweden)

    Robert Richardson

    Full Text Available In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.

  7. Accounting for measurement error in log regression models with applications to accelerated testing.

    Science.gov (United States)

    Richardson, Robert; Tolley, H Dennis; Evenson, William E; Lunt, Barry M

    2018-01-01

    In regression settings, parameter estimates will be biased when the explanatory variables are measured with error. This bias can significantly affect modeling goals. In particular, accelerated lifetime testing involves an extrapolation of the fitted model, and a small amount of bias in parameter estimates may result in a significant increase in the bias of the extrapolated predictions. Additionally, bias may arise when the stochastic component of a log regression model is assumed to be multiplicative when the actual underlying stochastic component is additive. To account for these possible sources of bias, a log regression model with measurement error and additive error is approximated by a weighted regression model which can be estimated using Iteratively Re-weighted Least Squares. Using the reduced Eyring equation in an accelerated testing setting, the model is compared to previously accepted approaches to modeling accelerated testing data with both simulations and real data.
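    The iteratively reweighted least squares idea in this abstract can be sketched compactly: fit a line to log-responses, then reweight each point by the inverse of an assumed additive-plus-multiplicative variance propagated to the log scale, and repeat. The variance model, parameter values, and names below are our illustrative assumptions, not the authors' exact formulation.

    ```python
    import math

    def wls_line(x, z, w):
        """Weighted least-squares fit of z = a + b*x with weights w."""
        sw = sum(w)
        xb = sum(wi * xi for wi, xi in zip(w, x)) / sw
        zb = sum(wi * zi for wi, zi in zip(w, z)) / sw
        b = (sum(wi * (xi - xb) * (zi - zb) for wi, xi, zi in zip(w, x, z))
             / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
        return zb - b * xb, b

    def irls_log_fit(x, y, sig_add=0.05, sig_mult=0.02, iters=10):
        """Fit log(y) = a + b*x by IRLS, reweighting each point by the
        inverse of an assumed additive + multiplicative variance on y,
        propagated to the log scale (illustrative variance model)."""
        z = [math.log(yi) for yi in y]
        w = [1.0] * len(y)
        for _ in range(iters):
            a, b = wls_line(x, z, w)
            yhat = [math.exp(a + b * xi) for xi in x]
            # Var(log y) ~ (sig_add/yhat)^2 + sig_mult^2 (delta method).
            w = [1.0 / ((sig_add / yh) ** 2 + sig_mult ** 2) for yh in yhat]
        return a, b

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [math.exp(0.5 + 0.3 * xi) for xi in x]   # exact log-linear data
    print(irls_log_fit(x, y))  # recovers (0.5, 0.3) up to rounding
    ```

    In an accelerated-testing setting, the fitted line is then extrapolated to use-condition stress levels, which is why even small bias in (a, b) matters.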

  8. Exploring the initial steps of the testing process: frequency and nature of pre-preanalytic errors.

    Science.gov (United States)

    Carraro, Paolo; Zago, Tatiana; Plebani, Mario

    2012-03-01

    Few data are available on the nature of errors in the so-called pre-preanalytic phase, the initial steps of the testing process. We therefore sought to evaluate pre-preanalytic errors using a study design that enabled us to observe the initial procedures performed in the ward, from the physician's test request to the delivery of specimens in the clinical laboratory. After a 1-week direct observational phase designed to identify the operating procedures followed in 3 clinical wards, we recorded all nonconformities and errors occurring over a 6-month period. Overall, the study considered 8,547 test requests, for which 15,917 blood sample tubes were collected and 52,982 tests undertaken. No significant differences in error rates were found between the observational phase and the overall study period, but underfilling of coagulation tubes was found to occur more frequently in the direct observational phase (P = 0.043). In the overall study period, the frequency of errors was found to be particularly high regarding order transmission [29,916 parts per million (ppm)] and hemolysed samples (2,537 ppm). The frequency of patient misidentification was 352 ppm, and the most frequent nonconformities were test requests recorded in the diary without the patient's name and failure to check the patient's identity at the time of blood draw. The data collected in our study confirm the relative frequency of pre-preanalytic errors and underline the need to consensually prepare and adopt effective standard operating procedures in the initial steps of laboratory testing and to monitor compliance with these procedures over time.
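    The ppm figures above are plain proportions scaled by one million. A quick, hedged sanity check (the event counts are our back-calculation, not numbers stated in the abstract):

    ```python
    def ppm(count, total):
        """Express an error count as parts per million of the total."""
        return count / total * 1_000_000

    # The reported 352 ppm patient-misidentification rate over 8,547 test
    # requests is consistent with roughly 3 events:
    print(round(ppm(3, 8547)))      # ~351 ppm, matching the reported 352
    print(round(0.000352 * 8547))   # -> 3 implied events
    ```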

  9. Self-test web-based pure-tone audiometry: validity evaluation and measurement error analysis.

    Science.gov (United States)

    Masalski, Marcin; Kręcicki, Tomasz

    2013-04-12

    Potential methods of application of self-administered Web-based pure-tone audiometry conducted at home on a PC with a sound card and ordinary headphones depend on the value of measurement error in such tests. The aim of this research was to determine the measurement error of the hearing threshold determined in the way described above and to identify and analyze factors influencing its value. The evaluation of the hearing threshold was made in three series: (1) tests on a clinical audiometer, (2) self-tests done on a specially calibrated computer under the supervision of an audiologist, and (3) self-tests conducted at home. The research was carried out on a group of 51 participants selected from patients of an audiology outpatient clinic. From the group of 51 patients examined in the first two series, the third series was self-administered at home by 37 subjects (73%). The average difference between the value of the hearing threshold determined in series 1 and in series 2 was -1.54 dB, with a standard deviation of 7.88 dB and a Pearson correlation coefficient of .90. Between the first and third series, these values were -1.35 dB ± 10.66 dB and .84, respectively. In series 3, the standard deviation was most influenced by the error connected with the procedure of hearing threshold identification (6.64 dB), the calibration error (6.19 dB), and additionally, at the frequency of 250 Hz, the frequency nonlinearity error (7.28 dB). The obtained results confirm the possibility of applying Web-based pure-tone audiometry in screening tests. In the future, modifications of the method leading to a decrease in measurement error can broaden the scope of Web-based pure-tone audiometry application.
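    If the listed error components are treated as independent, they combine in quadrature, which makes it easy to see how each contributes to the overall spread. This independence assumption is ours, made for illustration; the paper's decomposition may differ.

    ```python
    import math

    def combine_sd(*components):
        """Combine independent error components (standard deviations)
        in quadrature: total_sd = sqrt(sum of squared components)."""
        return math.sqrt(sum(c * c for c in components))

    # Threshold-identification error and calibration error from the study:
    print(combine_sd(6.64, 6.19))        # ~9.08 dB
    # Adding the 250 Hz frequency-nonlinearity component:
    print(combine_sd(6.64, 6.19, 7.28))  # ~11.64 dB
    ```

    The two-component total (~9.1 dB) is already close to the observed 10.66 dB spread of the home series, suggesting these sources dominate the at-home measurement error.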

  10. Biostatistics with emphasis on life table survival rate calculations (including Kaplan Meier) and the logrank test

    International Nuclear Information System (INIS)

    Mould, Richard F.

    1995-01-01

    Purpose/Objective: To explain some of the most useful statistical calculation procedures which are relevant to radiation oncologists, and to provide insights on which tests and procedures should be used in various situations, such as when survival rates and their associated standard errors have to be determined. To describe some of the problems and pitfalls in clinical trial designs which have to be overcome if a trial is to have the possibility of reaching a successful conclusion. To review methods of computing quantitative criteria of success (e.g. quality of life, long-term survival, cure) in radiation oncology and to suggest possible future statistical improvements in this area. Chi-Squared Test: The chi-squared test is probably the most useful of the tests of statistical significance for the radiation oncologist. Applications will be described, including goodness-of-fit tests and 2×2 contingency tables, which are the simplest of the generalized n×m contingency tables. Degrees of Freedom and P<0.05 for Significance Testing: An introduction will be given to the meaning of P<0.05 in relation to significance testing and to the use of tables of critical values of a test statistic (e.g. chi-squared), which are given as a function of degrees of freedom and P-values. Survival Rate Calculations for Grouped and Ungrouped Data: The life-table method (sometimes termed the actuarial method) will be explained both for grouped data (e.g. survival times grouped in annual intervals, for patients who have died and for those who are still alive or lost to follow-up) and for ungrouped data (when individual survival times are used). The method for ungrouped data is variously termed the Kaplan-Meier or product-limit method. Logrank Test: This is the most useful test for comparison of the survival experience of two groups of patients, and its use will be explained. In part the computation is similar to that for the Kaplan-Meier/product-limit method.

  11. Biostatistics with emphasis on life table survival rate calculations (including Kaplan Meier) and the logrank test

    Energy Technology Data Exchange (ETDEWEB)

    Mould, Richard F

    1995-07-01

    Purpose/Objective: To explain some of the most useful statistical calculation procedures which are relevant to radiation oncologists, and to provide insights on which tests and procedures should be used in various situations, such as when survival rates and their associated standard errors have to be determined. To describe some of the problems and pitfalls in clinical trial designs which have to be overcome if a trial is to have the possibility of reaching a successful conclusion. To review methods of computing quantitative criteria of success (e.g. quality of life, long-term survival, cure) in radiation oncology and to suggest possible future statistical improvements in this area. Chi-Squared Test: The chi-squared test is probably the most useful of the tests of statistical significance for the radiation oncologist. Applications will be described, including goodness-of-fit tests and 2×2 contingency tables, which are the simplest of the generalized n×m contingency tables. Degrees of Freedom and P<0.05 for Significance Testing: An introduction will be given to the meaning of P<0.05 in relation to significance testing and to the use of tables of critical values of a test statistic (e.g. chi-squared), which are given as a function of degrees of freedom and P-values. Survival Rate Calculations for Grouped and Ungrouped Data: The life-table method (sometimes termed the actuarial method) will be explained both for grouped data (e.g. survival times grouped in annual intervals, for patients who have died and for those who are still alive or lost to follow-up) and for ungrouped data (when individual survival times are used). The method for ungrouped data is variously termed the Kaplan-Meier or product-limit method. Logrank Test: This is the most useful test for comparison of the survival experience of two groups of patients, and its use will be explained. In part the computation is similar to that for the Kaplan-Meier/product-limit method.
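    The product-limit calculation for ungrouped data is compact enough to sketch directly: at each distinct death time, the survival estimate is multiplied by (1 - deaths/at-risk). The function name and the five-patient example below are ours, for illustration only.

    ```python
    def kaplan_meier(times, events):
        """Kaplan-Meier (product-limit) survival estimate from individual
        follow-up times; events[i] is True for a death, False for a
        censored observation. Returns (time, S(t)) pairs at each distinct
        death time."""
        data = sorted(zip(times, events))
        n_at_risk = len(data)
        s = 1.0
        curve = []
        i = 0
        while i < len(data):
            t = data[i][0]
            deaths = sum(1 for tt, ev in data if tt == t and ev)
            total_at_t = sum(1 for tt, _ in data if tt == t)
            if deaths:
                s *= 1 - deaths / n_at_risk   # product-limit update
                curve.append((t, s))
            n_at_risk -= total_at_t           # drop deaths and censored
            i += total_at_t
        return curve

    # Five patients: deaths at 1, 3 and 5 years; censored at 2 and 4 years.
    print(kaplan_meier([1, 2, 3, 4, 5], [True, False, True, False, True]))
    ```

    Note how the censored patients at years 2 and 4 reduce the number at risk without producing a step in the curve, which is exactly what distinguishes this estimator from a naive survival fraction.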

  12. Nine Loci for Ocular Axial Length Identified through Genome-wide Association Studies, Including Shared Loci with Refractive Error

    Science.gov (United States)

    Cheng, Ching-Yu; Schache, Maria; Ikram, M. Kamran; Young, Terri L.; Guggenheim, Jeremy A.; Vitart, Veronique; MacGregor, Stuart; Verhoeven, Virginie J.M.; Barathi, Veluchamy A.; Liao, Jiemin; Hysi, Pirro G.; Bailey-Wilson, Joan E.; St. Pourcain, Beate; Kemp, John P.; McMahon, George; Timpson, Nicholas J.; Evans, David M.; Montgomery, Grant W.; Mishra, Aniket; Wang, Ya Xing; Wang, Jie Jin; Rochtchina, Elena; Polasek, Ozren; Wright, Alan F.; Amin, Najaf; van Leeuwen, Elisabeth M.; Wilson, James F.; Pennell, Craig E.; van Duijn, Cornelia M.; de Jong, Paulus T.V.M.; Vingerling, Johannes R.; Zhou, Xin; Chen, Peng; Li, Ruoying; Tay, Wan-Ting; Zheng, Yingfeng; Chew, Merwyn; Rahi, Jugnoo S.; Hysi, Pirro G.; Yoshimura, Nagahisa; Yamashiro, Kenji; Miyake, Masahiro; Delcourt, Cécile; Maubaret, Cecilia; Williams, Cathy; Guggenheim, Jeremy A.; Northstone, Kate; Ring, Susan M.; Davey-Smith, George; Craig, Jamie E.; Burdon, Kathryn P.; Fogarty, Rhys D.; Iyengar, Sudha K.; Igo, Robert P.; Chew, Emily; Janmahasathian, Sarayut; Iyengar, Sudha K.; Igo, Robert P.; Chew, Emily; Janmahasathian, Sarayut; Stambolian, Dwight; Wilson, Joan E. Bailey; MacGregor, Stuart; Lu, Yi; Jonas, Jost B.; Xu, Liang; Saw, Seang-Mei; Baird, Paul N.; Rochtchina, Elena; Mitchell, Paul; Wang, Jie Jin; Jonas, Jost B.; Nangia, Vinay; Hayward, Caroline; Wright, Alan F.; Vitart, Veronique; Polasek, Ozren; Campbell, Harry; Vitart, Veronique; Rudan, Igor; Vatavuk, Zoran; Vitart, Veronique; Paterson, Andrew D.; Hosseini, S. Mohsen; Iyengar, Sudha K.; Igo, Robert P.; Fondran, Jeremy R.; Young, Terri L.; Feng, Sheng; Verhoeven, Virginie J.M.; Klaver, Caroline C.; van Duijn, Cornelia M.; Metspalu, Andres; Haller, Toomas; Mihailov, Evelin; Pärssinen, Olavi; Wedenoja, Juho; Wilson, Joan E. 
Bailey; Wojciechowski, Robert; Baird, Paul N.; Schache, Maria; Pfeiffer, Norbert; Höhn, René; Pang, Chi Pui; Chen, Peng; Meitinger, Thomas; Oexle, Konrad; Wegner, Aharon; Yoshimura, Nagahisa; Yamashiro, Kenji; Miyake, Masahiro; Pärssinen, Olavi; Yip, Shea Ping; Ho, Daniel W.H.; Pirastu, Mario; Murgia, Federico; Portas, Laura; Biino, Genevra; Wilson, James F.; Fleck, Brian; Vitart, Veronique; Stambolian, Dwight; Wilson, Joan E. Bailey; Hewitt, Alex W.; Ang, Wei; Verhoeven, Virginie J.M.; Klaver, Caroline C.; van Duijn, Cornelia M.; Saw, Seang-Mei; Wong, Tien-Yin; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Wong, Tien-Yin; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Wong, Tien-Yin; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Tai, E-Shyong; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Saw, Seang-Mei; Teo, Yik-Ying; Fan, Qiao; Cheng, Ching-Yu; Zhou, Xin; Ikram, M. Kamran; Mackey, David A.; MacGregor, Stuart; Hammond, Christopher J.; Hysi, Pirro G.; Deangelis, Margaret M.; Morrison, Margaux; Zhou, Xiangtian; Chen, Wei; Paterson, Andrew D.; Hosseini, S. 
Mohsen; Mizuki, Nobuhisa; Meguro, Akira; Lehtimäki, Terho; Mäkelä, Kari-Matti; Raitakari, Olli; Kähönen, Mika; Burdon, Kathryn P.; Craig, Jamie E.; Iyengar, Sudha K.; Igo, Robert P.; Lass, Jonathan H.; Reinhart, William; Belin, Michael W.; Schultze, Robert L.; Morason, Todd; Sugar, Alan; Mian, Shahzad; Soong, Hunson Kaz; Colby, Kathryn; Jurkunas, Ula; Yee, Richard; Vital, Mark; Alfonso, Eduardo; Karp, Carol; Lee, Yunhee; Yoo, Sonia; Hammersmith, Kristin; Cohen, Elisabeth; Laibson, Peter; Rapuano, Christopher; Ayres, Brandon; Croasdale, Christopher; Caudill, James; Patel, Sanjay; Baratz, Keith; Bourne, William; Maguire, Leo; Sugar, Joel; Tu, Elmer; Djalilian, Ali; Mootha, Vinod; McCulley, James; Bowman, Wayne; Cavanaugh, H. Dwight; Verity, Steven; Verdier, David; Renucci, Ann; Oliva, Matt; Rotkis, Walter; Hardten, David R.; Fahmy, Ahmad; Brown, Marlene; Reeves, Sherman; Davis, Elizabeth A.; Lindstrom, Richard; Hauswirth, Scott; Hamilton, Stephen; Lee, W. Barry; Price, Francis; Price, Marianne; Kelly, Kathleen; Peters, Faye; Shaughnessy, Michael; Steinemann, Thomas; Dupps, B.J.; Meisler, David M.; Mifflin, Mark; Olson, Randal; Aldave, Anthony; Holland, Gary; Mondino, Bartly J.; Rosenwasser, George; Gorovoy, Mark; Dunn, Steven P.; Heidemann, David G.; Terry, Mark; Shamie, Neda; Rosenfeld, Steven I.; Suedekum, Brandon; Hwang, David; Stone, Donald; Chodosh, James; Galentine, Paul G.; Bardenstein, David; Goddard, Katrina; Chin, Hemin; Mannis, Mark; Varma, Rohit; Borecki, Ingrid; Chew, Emily Y.; Haller, Toomas; Mihailov, Evelin; Metspalu, Andres; Wedenoja, Juho; Simpson, Claire L.; Wojciechowski, Robert; Höhn, René; Mirshahi, Alireza; Zeller, Tanja; Pfeiffer, Norbert; Lackner, Karl J.; Donnelly, Peter; Barroso, Ines; Blackwell, Jenefer M.; Bramon, Elvira; Brown, Matthew A.; Casas, Juan P.; Corvin, Aiden; Deloukas, Panos; Duncanson, Audrey; Jankowski, Janusz; Markus, Hugh S.; Mathew, Christopher G.; Palmer, Colin N.A.; Plomin, Robert; Rautanen, Anna; Sawcer, Stephen J.; 
Trembath, Richard C.; Viswanathan, Ananth C.; Wood, Nicholas W.; Spencer, Chris C.A.; Band, Gavin; Bellenguez, Céline; Freeman, Colin; Hellenthal, Garrett; Giannoulatou, Eleni; Pirinen, Matti; Pearson, Richard; Strange, Amy; Su, Zhan; Vukcevic, Damjan; Donnelly, Peter; Langford, Cordelia; Hunt, Sarah E.; Edkins, Sarah; Gwilliam, Rhian; Blackburn, Hannah; Bumpstead, Suzannah J.; Dronov, Serge; Gillman, Matthew; Gray, Emma; Hammond, Naomi; Jayakumar, Alagurevathi; McCann, Owen T.; Liddle, Jennifer; Potter, Simon C.; Ravindrarajah, Radhi; Ricketts, Michelle; Waller, Matthew; Weston, Paul; Widaa, Sara; Whittaker, Pamela; Barroso, Ines; Deloukas, Panos; Mathew, Christopher G.; Blackwell, Jenefer M.; Brown, Matthew A.; Corvin, Aiden; Spencer, Chris C.A.; Bettecken, Thomas; Meitinger, Thomas; Oexle, Konrad; Pirastu, Mario; Portas, Laura; Nag, Abhishek; Williams, Katie M.; Yonova-Doing, Ekaterina; Klein, Ronald; Klein, Barbara E.; Hosseini, S. Mohsen; Paterson, Andrew D.; Genuth, S.; Nathan, D.M.; Zinman, B.; Crofford, O.; Crandall, J.; Reid, M.; Brown-Friday, J.; Engel, S.; Sheindlin, J.; Martinez, H.; Shamoon, H.; Engel, H.; Phillips, M.; Gubitosi-Klug, R.; Mayer, L.; Pendegast, S.; Zegarra, H.; Miller, D.; Singerman, L.; Smith-Brewer, S.; Novak, M.; Quin, J.; Dahms, W.; Genuth, Saul; Palmert, M.; Brillon, D.; Lackaye, M.E.; Kiss, S.; Chan, R.; Reppucci, V.; Lee, T.; Heinemann, M.; Whitehouse, F.; Kruger, D.; Jones, J.K.; McLellan, M.; Carey, J.D.; Angus, E.; Thomas, A.; Galprin, A.; Bergenstal, R.; Johnson, M.; Spencer, M.; Morgan, K.; Etzwiler, D.; Kendall, D.; Aiello, Lloyd Paul; Golden, E.; Jacobson, A.; Beaser, R.; Ganda, O.; Hamdy, O.; Wolpert, H.; Sharuk, G.; Arrigg, P.; Schlossman, D.; Rosenzwieg, J.; Rand, L.; Nathan, D.M.; Larkin, M.; Ong, M.; Godine, J.; Cagliero, E.; Lou, P.; Folino, K.; Fritz, S.; Crowell, S.; Hansen, K.; Gauthier-Kelly, C.; Service, J.; Ziegler, G.; Luttrell, L.; Caulder, S.; Lopes-Virella, M.; Colwell, J.; Soule, J.; Fernandes, J.; 
Hermayer, K.; Kwon, S.; Brabham, M.; Blevins, A.; Parker, J.; Lee, D.; Patel, N.; Pittman, C.; Lindsey, P.; Bracey, M.; Lee, K.; Nutaitis, M.; Farr, A.; Elsing, S.; Thompson, T.; Selby, J.; Lyons, T.; Yacoub-Wasef, S.; Szpiech, M.; Wood, D.; Mayfield, R.; Molitch, M.; Schaefer, B.; Jampol, L.; Lyon, A.; Gill, M.; Strugula, Z.; Kaminski, L.; Mirza, R.; Simjanoski, E.; Ryan, D.; Kolterman, O.; Lorenzi, G.; Goldbaum, M.; Sivitz, W.; Bayless, M.; Counts, D.; Johnsonbaugh, S.; Hebdon, M.; Salemi, P.; Liss, R.; Donner, T.; Gordon, J.; Hemady, R.; Kowarski, A.; Ostrowski, D.; Steidl, S.; Jones, B.; Herman, W.H.; Martin, C.L.; Pop-Busui, R.; Sarma, A.; Albers, J.; Feldman, E.; Kim, K.; Elner, S.; Comer, G.; Gardner, T.; Hackel, R.; Prusak, R.; Goings, L.; Smith, A.; Gothrup, J.; Titus, P.; Lee, J.; Brandle, M.; Prosser, L.; Greene, D.A.; Stevens, M.J.; Vine, A.K.; Bantle, J.; Wimmergren, N.; Cochrane, A.; Olsen, T.; Steuer, E.; Rath, P.; Rogness, B.; Hainsworth, D.; Goldstein, D.; Hitt, S.; Giangiacomo, J.; Schade, D.S.; Canady, J.L.; Chapin, J.E.; Ketai, L.H.; Braunstein, C.S.; Bourne, P.A.; Schwartz, S.; Brucker, A.; Maschak-Carey, B.J.; Baker, L.; Orchard, T.; Silvers, N.; Ryan, C.; Songer, T.; Doft, B.; Olson, S.; Bergren, R.L.; Lobes, L.; Rath, P. 
Paczan; Becker, D.; Rubinstein, D.; Conrad, P.W.; Yalamanchi, S.; Drash, A.; Morrison, A.; Bernal, M.L.; Vaccaro-Kish, J.; Malone, J.; Pavan, P.R.; Grove, N.; Iyer, M.N.; Burrows, A.F.; Tanaka, E.A.; Gstalder, R.; Dagogo-Jack, S.; Wigley, C.; Ricks, H.; Kitabchi, A.; Murphy, M.B.; Moser, S.; Meyer, D.; Iannacone, A.; Chaum, E.; Yoser, S.; Bryer-Ash, M.; Schussler, S.; Lambeth, H.; Raskin, P.; Strowig, S.; Zinman, B.; Barnie, A.; Devenyi, R.; Mandelcorn, M.; Brent, M.; Rogers, S.; Gordon, A.; Palmer, J.; Catton, S.; Brunzell, J.; Wessells, H.; de Boer, I.H.; Hokanson, J.; Purnell, J.; Ginsberg, J.; Kinyoun, J.; Deeb, S.; Weiss, M.; Meekins, G.; Distad, J.; Van Ottingham, L.; Dupre, J.; Harth, J.; Nicolle, D.; Driscoll, M.; Mahon, J.; Canny, C.; May, M.; Lipps, J.; Agarwal, A.; Adkins, T.; Survant, L.; Pate, R.L.; Munn, G.E.; Lorenz, R.; Feman, S.; White, N.; Levandoski, L.; Boniuk, I.; Grand, G.; Thomas, M.; Joseph, D.D.; Blinder, K.; Shah, G.; Boniuk; Burgess; Santiago, J.; Tamborlane, W.; Gatcomb, P.; Stoessel, K.; Taylor, K.; Goldstein, J.; Novella, S.; Mojibian, H.; Cornfeld, D.; Lima, J.; Bluemke, D.; Turkbey, E.; van der Geest, R.J.; Liu, C.; Malayeri, A.; Jain, A.; Miao, C.; Chahal, H.; Jarboe, R.; Maynard, J.; Gubitosi-Klug, R.; Quin, J.; Gaston, P.; Palmert, M.; Trail, R.; Dahms, W.; Lachin, J.; Cleary, P.; Backlund, J.; Sun, W.; Braffett, B.; Klumpp, K.; Chan, K.; Diminick, L.; Rosenberg, D.; Petty, B.; Determan, A.; Kenny, D.; Rutledge, B.; Younes, Naji; Dews, L.; Hawkins, M.; Cowie, C.; Fradkin, J.; Siebert, C.; Eastman, R.; Danis, R.; Gangaputra, S.; Neill, S.; Davis, M.; Hubbard, L.; Wabers, H.; Burger, M.; Dingledine, J.; Gama, V.; Sussman, R.; Steffes, M.; Bucksa, J.; Nowicki, M.; Chavers, B.; O’Leary, D.; Polak, J.; Harrington, A.; Funk, L.; Crow, R.; Gloeb, B.; Thomas, S.; O’Donnell, C.; Soliman, E.; Zhang, Z.M.; Prineas, R.; Campbell, C.; Ryan, C.; Sandstrom, D.; Williams, T.; Geckle, M.; Cupelli, E.; Thoma, F.; Burzuk, B.; Woodfill, T.; Low, P.; 
Sommer, C.; Nickander, K.; Budoff, M.; Detrano, R.; Wong, N.; Fox, M.; Kim, L.; Oudiz, R.; Weir, G.; Espeland, M.; Manolio, T.; Rand, L.; Singer, D.; Stern, M.; Boulton, A.E.; Clark, C.; D’Agostino, R.; Lopes-Virella, M.; Garvey, W.T.; Lyons, T.J.; Jenkins, A.; Virella, G.; Jaffa, A.; Carter, Rickey; Lackland, D.; Brabham, M.; McGee, D.; Zheng, D.; Mayfield, R.K.; Boright, A.; Bull, S.; Sun, L.; Scherer, S.; Zinman, B.; Natarajan, R.; Miao, F.; Zhang, L.; Chen;, Z.; Nathan, D.M.; Makela, Kari-Matti; Lehtimaki, Terho; Kahonen, Mika; Raitakari, Olli; Yoshimura, Nagahisa; Matsuda, Fumihiko; Chen, Li Jia; Pang, Chi Pui; Yip, Shea Ping; Yap, Maurice K.H.; Meguro, Akira; Mizuki, Nobuhisa; Inoko, Hidetoshi; Foster, Paul J.; Zhao, Jing Hua; Vithana, Eranga; Tai, E-Shyong; Fan, Qiao; Xu, Liang; Campbell, Harry; Fleck, Brian; Rudan, Igor; Aung, Tin; Hofman, Albert; Uitterlinden, André G.; Bencic, Goran; Khor, Chiea-Chuen; Forward, Hannah; Pärssinen, Olavi; Mitchell, Paul; Rivadeneira, Fernando; Hewitt, Alex W.; Williams, Cathy; Oostra, Ben A.; Teo, Yik-Ying; Hammond, Christopher J.; Stambolian, Dwight; Mackey, David A.; Klaver, Caroline C.W.; Wong, Tien-Yin; Saw, Seang-Mei; Baird, Paul N.

    2013-01-01

    Refractive errors are common eye disorders of public health importance worldwide. Ocular axial length (AL) is the major determinant of refraction and thus of myopia and hyperopia. We conducted a meta-analysis of genome-wide association studies for AL, combining 12,531 Europeans and 8,216 Asians. We identified eight genome-wide significant loci for AL (RSPO1, C3orf26, LAMA2, GJD2, ZNRF3, CD55, MIP, and ALPPL2) and confirmed one previously reported AL locus (ZC3H11B). Of the nine loci, five (LAMA2, GJD2, CD55, ALPPL2, and ZC3H11B) were associated with refraction in 18 independent cohorts (n = 23,591). Differential gene expression was observed for these loci in minus-lens-induced myopia mouse experiments and human ocular tissues. Two of the AL genes, RSPO1 and ZNRF3, are involved in Wnt signaling, a pathway playing a major role in the regulation of eyeball size. This study provides evidence of shared genes between AL and refraction, but importantly also suggests that these traits may have unique pathways. PMID:24144296

  13. Unit Root Testing and Estimation in Nonlinear ESTAR Models with Normal and Non-Normal Errors.

    Directory of Open Access Journals (Sweden)

    Umair Khalil

    Exponential Smooth Transition Autoregressive (ESTAR) models can capture nonlinear adjustment of deviations from equilibrium conditions, which may explain the economic behavior of many variables that appear nonstationary from a linear viewpoint. Many researchers employ the Kapetanios test, which has a unit root as the null and a stationary nonlinear model as the alternative. However, this test statistic is based on the assumption of normally distributed errors in the DGP. Cook analyzed the size of this nonlinear unit root test in the presence of a heavy-tailed innovation process and obtained critical values for both the finite-variance and infinite-variance cases; however, Cook's test statistics are oversized. Researchers have found that using conventional tests is dangerous, although the best performance among them is given by a heteroscedasticity-consistent covariance matrix estimator (HCCME). The oversizing of LM tests can be reduced by employing fixed-design wild bootstrap remedies, which provide a valuable alternative to the conventional tests. In this paper, the size of the Kapetanios test statistic employing heteroscedasticity-consistent covariance matrices is derived, and results are reported for various sample sizes in which the size distortion is reduced. The properties of estimates of ESTAR models are investigated when the errors are assumed non-normal. We compare the results obtained by nonlinear least squares fitting with those of quantile regression fitting in the presence of outliers, with the error distribution taken to be a t-distribution, for various sample sizes.
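    The Kapetanios-type statistic referred to above is, in its simplest form, the t-ratio on delta in the auxiliary regression dy_t = delta * y_{t-1}^3 + e_t. The bare-bones version below (no lagged differences, no deterministic terms, names ours) shows the mechanics; in practice the statistic is compared against nonstandard critical values, not the normal table.

    ```python
    import math

    def kss_stat(y):
        """t-statistic on delta in dy_t = delta * y_{t-1}**3 + e_t
        (Kapetanios-type unit root test, simplest case). Large negative
        values favor a globally stationary ESTAR process over the
        unit-root null."""
        dy = [y[t] - y[t - 1] for t in range(1, len(y))]
        x = [y[t - 1] ** 3 for t in range(1, len(y))]
        sxx = sum(xi * xi for xi in x)
        delta = sum(xi * di for xi, di in zip(x, dy)) / sxx
        resid = [di - delta * xi for xi, di in zip(x, dy)]
        s2 = sum(r * r for r in resid) / (len(dy) - 1)
        return delta / math.sqrt(s2 / sxx)

    print(kss_stat([0, 1, 0, 1, 0, 1]))  # about -1.63 on this toy series
    ```

    The heteroscedasticity-robust variants discussed in the paper replace the homoscedastic variance `s2 / sxx` with an HCCME-style sandwich estimate.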

  14. The Effect of Error in Item Parameter Estimates on the Test Response Function Method of Linking.

    Science.gov (United States)

    Kaskowitz, Gary S.; De Ayala, R. J.

    2001-01-01

    Studied the effect of item parameter estimation for computation of linking coefficients for the test response function (TRF) linking/equating method. Simulation results showed that linking was more accurate when there was less error in the parameter estimates, and that 15 or 25 common items provided better results than 5 common items under both…

  15. Testing Error Management Theory: Exploring the Commitment Skepticism Bias and the Sexual Overperception Bias

    Science.gov (United States)

    Henningsen, David Dryden; Henningsen, Mary Lynn Miller

    2010-01-01

    Research on error management theory indicates that men tend to overestimate women's sexual interest and women underestimate men's interest in committed relationships (Haselton & Buss, 2000). We test the assumptions of the theory in face-to-face, stranger interactions with 111 man-woman dyads. Support for the theory emerges, but potential boundary…

  16. Evaluation of Two Methods for Modeling Measurement Errors When Testing Interaction Effects with Observed Composite Scores

    Science.gov (United States)

    Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.

    2018-01-01

    Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…

  17. Impact of controlling the sum of error probability in the sequential probability ratio test

    Directory of Open Access Journals (Sweden)

    Bijoy Kumarr Pradhan

    2013-05-01

    A generalized modified method is proposed to control the sum of error probabilities in the sequential probability ratio test. The aim is to minimize the weighted average of the two average sample numbers under a simple null hypothesis and a simple alternative hypothesis, with the restriction that the sum of the error probabilities is a pre-assigned constant, in order to find the optimal sample size. Finally, a comparison is made with the optimal sample size found from the fixed-sample-size procedure. The results are applied to the cases in which the random variate follows a normal law as well as a Bernoulli law.
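    For reference, the unmodified Wald SPRT that this paper builds on accumulates a log-likelihood ratio and stops at boundaries set by the two error probabilities separately; the paper's contribution is to constrain their sum instead. A minimal Bernoulli sketch of the classic version (names and data ours):

    ```python
    import math

    def wald_sprt(bits, p0, p1, alpha=0.05, beta=0.05):
        """Wald's sequential probability ratio test for a Bernoulli
        parameter, H0: p = p0 vs H1: p = p1 (p1 > p0). Returns the
        decision and the number of observations used, or "continue" if
        neither boundary is crossed before the stream ends."""
        upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
        lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
        llr = 0.0
        for n, b in enumerate(bits, start=1):
            if b:
                llr += math.log(p1 / p0)
            else:
                llr += math.log((1 - p1) / (1 - p0))
            if llr >= upper:
                return "H1", n
            if llr <= lower:
                return "H0", n
        return "continue", len(bits)

    # A stream of successes settles on H1 after only 6 observations:
    print(wald_sprt([1] * 20, p0=0.5, p1=0.9))  # -> ('H1', 6)
    ```

    The average sample numbers that the paper's weighted-average criterion minimizes are exactly the expected values of `n` under H0 and H1.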

  18. FPGA Based Test Module for Error Bit Evaluation in Serial Links

    Directory of Open Access Journals (Sweden)

    J. Kolouch

    2006-04-01

    A test module for serial links is described. In the link transmitter, one module generates a pseudorandom pulse signal that is transmitted over the link. A second module, located in the link receiver, generates the same signal and compares it to the received signal. Errors caused by the signal transmission can then be detected and the results sent to a master computer for further processing, such as statistical evaluation. The module can be used for long-term error monitoring without the need for a human operator to be present.
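    The transmitter and receiver can generate "the same signal" because both run identical deterministic pseudorandom generators, typically LFSR-based PRBS sequences. The sketch below uses PRBS7 with the x^7 + x^6 + 1 polynomial as one common choice; the article does not specify which generator its FPGA module uses, so treat the polynomial and names as our assumptions.

    ```python
    def prbs7(n, seed=0x7F):
        """Generate n bits of a PRBS7 sequence from a 7-bit LFSR with
        feedback taps 7 and 6 (polynomial x^7 + x^6 + 1)."""
        state = seed & 0x7F
        out = []
        for _ in range(n):
            bit = ((state >> 6) ^ (state >> 5)) & 1   # XOR of taps 7 and 6
            state = ((state << 1) | bit) & 0x7F
            out.append(bit)
        return out

    def count_bit_errors(sent, received):
        """XOR-compare transmitted and received streams, as the
        receiver-side module does, and count mismatches."""
        return sum(a ^ b for a, b in zip(sent, received))

    tx = prbs7(1000)
    rx = list(tx)
    rx[100] ^= 1          # inject a single transmission error
    print(count_bit_errors(tx, rx))  # -> 1
    # A maximal-length PRBS7 sequence repeats with period 2**7 - 1 = 127:
    print(prbs7(254)[:127] == prbs7(254)[127:])  # -> True
    ```

    Because the receiver regenerates the sequence locally, only error counts, not the full data stream, need to be reported to the master computer, which is what makes unattended long-term monitoring cheap.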

  19. Error-Rate Estimation Based on Multi-Signal Flow Graph Model and Accelerated Radiation Tests.

    Directory of Open Access Journals (Sweden)

    Wei He

    Evaluating the single-event-effect soft-error vulnerability of space instruments before launch has been an active research topic in recent years. In this paper, a multi-signal flow graph model is introduced to analyze fault diagnosis and mean time to failure (MTTF) for space instruments, and a model for the system functional error rate (SFER) is proposed. In addition, an experimental method and an accelerated radiation testing system for a signal processing platform based on a field-programmable gate array (FPGA) are presented. Based on experimental results for different ions (O, Si, Cl, Ti) at the HI-13 Tandem Accelerator, the SFER of the signal processing platform is approximately 10⁻³ errors/(particle/cm²), while the MTTF is approximately 110.7 h.
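    The relation between an SFER expressed per unit fluence and an MTTF under a constant flux is simple arithmetic. The flux value below is a hypothetical accelerated-test level chosen by us to land near the reported MTTF; it is not a figure from the paper.

    ```python
    def expected_errors(sfer, fluence):
        """Expected functional errors for a given system functional error
        rate (errors per particle/cm^2) and accumulated fluence
        (particles/cm^2)."""
        return sfer * fluence

    def mttf_hours(sfer, flux_per_cm2_s):
        """Mean time to failure implied by a constant particle flux,
        assuming functional errors arrive at rate SFER * flux."""
        return 1.0 / (sfer * flux_per_cm2_s) / 3600.0

    # With SFER ~ 1e-3 as reported, a run accumulating 2e5 particles/cm^2
    # would produce on the order of 200 functional errors:
    print(expected_errors(1e-3, 2e5))
    # A hypothetical beam flux of 2.51e-3 particles/(cm^2*s) would imply
    # an MTTF of roughly 110.7 h, the figure reported in the paper:
    print(mttf_hours(1e-3, 2.51e-3))
    ```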

  20. Reduction of sources of error and simplification of the Carbon-14 urea breath test

    International Nuclear Information System (INIS)

    Bellon, M.S.

    1997-01-01

    Carbon-14 urea breath testing is established in the diagnosis of H. pylori infection. The aim of this study was to investigate possible further simplification, and to identify error sources, in the ¹⁴C urea kit extensively used at the Royal Adelaide Hospital. Thirty-six patients with validated H. pylori status were tested with breath samples taken at 10, 15, and 20 min. Using the single sample value at 15 min, there was no change in the diagnostic category. Reduction of errors in analysis depends on attention to the following details: stability of the absorption solution (now > 2 months); compatibility of the scintillation cocktail with the absorption solution (with particular regard to photoluminescence and chemiluminescence); reduction in chemical quenching (moisture reduction); understanding the counting hardware and its relevance; and appropriate response to deviations in quality assurance. With this experience, we are confident of the performance and reliability of the RAPID-14 urea breath test kit now available commercially.

  1. Bit error rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high-burst-rate, serial minimum-shift-keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.

  2. Automated systems help prevent operator error during [reactor] I and C [instrumentation and control] testing

    International Nuclear Information System (INIS)

    Courcoux, R.

    1989-01-01

    On a nuclear steam supply system, even a minor failure can involve actuation of the whole reactor protection system (RPS). To reduce the likelihood of human error leading to unwanted trips during the maintenance of instrumentation and control systems, Framatome has been developing and installing various automated testing systems. Such automated systems are particularly helpful when periodic tests with a potential for RPS actuation have to be carried out, or when the test is on the critical path for the refuelling outage. The Sensitive Channel Programme described is an example of the sort of work that has been done. (author)

  3. Can the Brückner test be used as a rapid screening test to detect significant refractive errors in children?

    Directory of Open Access Journals (Sweden)

    Kothari Mihir

    2007-01-01

    Full Text Available Purpose: To assess the suitability of the Brückner test as a screening test to detect significant refractive errors in children. Materials and Methods: A pediatric ophthalmologist prospectively observed the size and location of the pupillary crescent on the Brückner test and classified it as hyperopic, myopic or astigmatic. This was compared with the cycloplegic refraction. Detailed ophthalmic examination was done for all. Sensitivity, specificity, positive predictive value and negative predictive value of the Brückner test were determined for the defined cutoff levels of ametropia. Results: Ninety-six subjects were examined. Mean age was 8.6 years (range 1 to 16 years). The Brückner test could be completed for all; the time taken to complete this test was 10 seconds per subject. The ophthalmologist identified 131 eyes as ametropic and 61 as emmetropic. The Brückner test had sensitivity 91%, specificity 72.8%, positive predictive value 85.5% and negative predictive value 83.6%. Of the 10 false negatives, four had compound hypermetropic astigmatism and three had myopia. Conclusions: The Brückner test can be used to rapidly screen children for significant refractive errors. The potential benefits from such use may be maximized if programs use the test with lower crescent measurement cutoffs, a crescent measurement ruler and a distance fixation target.
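
The four screening metrics quoted in the abstract all derive from a 2×2 table of test result against the cycloplegic-refraction gold standard. A minimal sketch with illustrative counts, not the study's data:

```python
def screening_metrics(tp, fp, fn, tn):
    """Standard screening-test metrics from a 2x2 confusion table."""
    return {
        "sensitivity": tp / (tp + fn),   # positives caught among the truly ametropic
        "specificity": tn / (tn + fp),   # negatives among the truly emmetropic
        "ppv": tp / (tp + fp),           # probability a positive result is correct
        "npv": tn / (tn + fn),           # probability a negative result is correct
    }

# Illustrative counts only:
m = screening_metrics(tp=90, fp=15, fn=9, tn=40)
print({k: round(v, 3) for k, v in m.items()})
```

Lowering the crescent-size cutoff, as the authors suggest, trades specificity for sensitivity by moving cases between these four cells.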

  4. Errors of car wheels rotation rate measurement using roller follower on test benches

    Science.gov (United States)

    Potapov, A. S.; Svirbutovich, O. A.; Krivtsov, S. N.

    2018-03-01

    The article deals with errors in measuring wheel rotation rate on roller test benches, which depend on vehicle speed. Monitoring of vehicle performance under operating conditions is performed on roller test benches, which have drawbacks that affect the accuracy of that monitoring. An increase in the base velocity of the vehicle requires an increase in the accuracy of wheel rotation rate monitoring, since this determines how accurately the operating mode of the tested wheel can be identified. Measuring the rotation velocity of the rollers accurately is not an issue; the problem arises when measuring the rotation velocity of a car wheel: the higher the wheel's rotation velocity, the lower the measurement accuracy. At present, wheel rotation frequency on roller test benches is monitored by follow-up systems whose sensors are rollers that follow wheel rotation. These rollers are not kinematically linked to the supporting rollers of the test bench; the roller follower is forced against the wheels of the tested vehicle by a spring-lever mechanism. Experience with this test bench equipment has shown that measurement accuracy is satisfactory at the low speeds at which vehicles are usually diagnosed on roller test benches. At higher diagnostic speeds, rotation velocity measurement errors occur in both braking and pulling modes because the roller spins against the tire tread. The paper shows oscillograms of changes in wheel rotation velocity and of the measurement system's signals when testing a vehicle on roller test benches at the specified speeds.

  5. ERROR REDUCTION IN DUCT LEAKAGE TESTING THROUGH DATA CROSS-CHECKS

    Energy Technology Data Exchange (ETDEWEB)

    ANDREWS, J.W.

    1998-12-31

    One way to reduce uncertainty in scientific measurement is to devise a protocol in which more quantities are measured than are strictly required, so that the result is overconstrained. This report develops a method for combining data from two different tests for air leakage in residential duct systems in this way. An algorithm, which depends on the uncertainty estimates for the measured quantities, optimizes the use of the excess data. In many cases it can significantly reduce the error bar on at least one of the two measured duct leakage rates (supply or return), and it provides a rational method of reconciling any conflicting results from the two leakage tests.
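
The classic way an overconstrained protocol shrinks an error bar is inverse-variance weighting: two independent estimates of the same leakage rate are averaged with weights 1/σ², which always yields a combined standard error no larger than the better of the two. A sketch with illustrative values (this is the standard formula, not necessarily the report's exact algorithm):

```python
def combine(x1, s1, x2, s2):
    """Inverse-variance weighted mean of two independent measurements
    and the standard error of the combined estimate."""
    w1, w2 = 1.0 / s1**2, 1.0 / s2**2
    mean = (w1 * x1 + w2 * x2) / (w1 + w2)
    sigma = (w1 + w2) ** -0.5
    return mean, sigma

# Illustrative supply-duct leakage rates (cfm) from two test methods:
mean, sigma = combine(120.0, 15.0, 100.0, 10.0)
print(round(mean, 1), round(sigma, 2))   # combined sigma beats either input
```

A large discrepancy between the two inputs relative to their stated uncertainties is also a useful consistency check on the tests themselves.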

  6. Preanalytical errors in primary healthcare: a questionnaire study of information search procedures, test request management and test tube labelling.

    Science.gov (United States)

    Söderberg, Johan; Brulin, Christine; Grankvist, Kjell; Wallin, Olof

    2009-01-01

    Most errors in laboratory medicine occur in the preanalytical phase and are the result of human mistakes. This study investigated information search procedures, test request management and test tube labelling in primary healthcare compared to the same procedures amongst clinical laboratory staff. A questionnaire was completed by 317 venous blood sampling staff in 70 primary healthcare centres and in two clinical laboratories (response rate = 94%). Correct procedures were not always followed. Only 60% of the primary healthcare staff reported that they always sought information in the updated, online laboratory manual. Only 12% reported that they always labelled the test tubes prior to drawing blood samples. No major differences between primary healthcare centres and clinical laboratories were found, except for test tube labelling, whereby the laboratory staff reported better practices. Re-education and access to documented routines were not clearly associated with better practices. The preanalytical procedure in the surveyed primary healthcare centres was associated with a risk of errors which could affect patient safety. To improve patient safety in laboratory testing, all healthcare providers should survey their preanalytical procedures and improve the total testing process with a systems perspective.

  7. Testing Constancy of the Error Covariance Matrix in Vector Models against Parametric Alternatives using a Spectral Decomposition

    DEFF Research Database (Denmark)

    Yang, Yukay

    I consider multivariate (vector) time series models in which the error covariance matrix may be time-varying. I derive a test of constancy of the error covariance matrix against the alternative that the covariance matrix changes over time. I design a new family of Lagrange-multiplier tests against...... to consider multivariate volatility modelling....

  8. New error calibration tests for gravity models using subset solutions and independent data - Applied to GEM-T3

    Science.gov (United States)

    Lerch, F. J.; Nerem, R. S.; Chinn, D. S.; Chan, J. C.; Patel, G. B.; Klosko, S. M.

    1993-01-01

    A new method has been developed to provide a direct test of the error calibrations of gravity models based on actual satellite observations. The basic approach projects the error estimates of the gravity model parameters onto satellite observations, and the results of these projections are then compared with data residuals computed from the orbital fits. To allow specific testing of the gravity error calibrations, subset solutions are computed based on the data set and data weighting of the gravity model. The approach is demonstrated using GEM-T3 to show that the gravity error estimates are well calibrated and that reliable predictions of orbit accuracies can be achieved for independent orbits.

  9. Working memory and inhibitory control across the life span: Intrusion errors in the Reading Span Test.

    Science.gov (United States)

    Robert, Christelle; Borella, Erika; Fagot, Delphine; Lecerf, Thierry; de Ribaupierre, Anik

    2009-04-01

    The aim of this study was to examine to what extent inhibitory control and working memory capacity are related across the life span. Intrusion errors committed by children and younger and older adults were investigated in two versions of the Reading Span Test. In Experiment 1, a mixed Reading Span Test with items of various list lengths was administered. Older adults and children recalled fewer correct words and produced more intrusions than did young adults. Also, age-related differences were found in the type of intrusions committed. In Experiment 2, an adaptive Reading Span Test was administered, in which the list length of items was adapted to each individual's working memory capacity. Age groups differed neither on correct recall nor on the rate of intrusions, but they differed on the type of intrusions. Altogether, these findings indicate that the availability of attentional resources influences the efficiency of inhibition across the life span.

  10. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    International Nuclear Information System (INIS)

    Strömberg, Sten; Nistor, Mihaela; Liu, Jing

    2014-01-01

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.
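
The temperature and pressure corrections the study argues for amount to normalising each volumetrically measured gas volume to dry standard conditions with the ideal-gas relation. A sketch with illustrative readings; the water-vapour partial pressure used below is an assumed saturation value for 25 °C, not a figure from the paper:

```python
def to_standard_volume(v_ml, t_celsius, p_ambient_kpa, p_water_kpa=0.0):
    """Convert a wet gas volume measured at ambient conditions to the
    equivalent dry volume at 0 degC and 101.325 kPa (ideal-gas scaling)."""
    t_kelvin = t_celsius + 273.15
    p_dry = p_ambient_kpa - p_water_kpa   # remove water vapour partial pressure
    return v_ml * (p_dry / 101.325) * (273.15 / t_kelvin)

# 100 mL read at 25 degC and 85 kPa (high altitude), gas saturated with
# water vapour (~3.17 kPa at 25 degC, an assumed value):
print(round(to_standard_volume(100.0, 25.0, 85.0, 3.17), 1))
```

Skipping this normalisation is exactly how the same substrate can yield systematically different methane potentials at different altitudes and room temperatures.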

  11. Towards eliminating systematic errors caused by the experimental conditions in Biochemical Methane Potential (BMP) tests

    Energy Technology Data Exchange (ETDEWEB)

    Strömberg, Sten, E-mail: sten.stromberg@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Nistor, Mihaela, E-mail: mn@bioprocesscontrol.com [Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden); Liu, Jing, E-mail: jing.liu@biotek.lu.se [Department of Biotechnology, Lund University, Getingevägen 60, 221 00 Lund (Sweden); Bioprocess Control, Scheelevägen 22, 223 63 Lund (Sweden)

    2014-11-15

    Highlights: • The evaluated factors introduce significant systematic errors (10–38%) in BMP tests. • Ambient temperature (T) has the most substantial impact (∼10%) at low altitude. • Ambient pressure (p) has the most substantial impact (∼68%) at high altitude. • Continuous monitoring of T and p is not necessary for kinetic calculations. - Abstract: The Biochemical Methane Potential (BMP) test is increasingly recognised as a tool for selecting and pricing biomass material for production of biogas. However, the results for the same substrate often differ between laboratories and much work to standardise such tests is still needed. In the current study, the effects from four environmental factors (i.e. ambient temperature and pressure, water vapour content and initial gas composition of the reactor headspace) on the degradation kinetics and the determined methane potential were evaluated with a 2⁴ full factorial design. Four substrates, with different biodegradation profiles, were investigated and the ambient temperature was found to be the most significant contributor to errors in the methane potential. Concerning the kinetics of the process, the environmental factors’ impact on the calculated rate constants was negligible. The impact of the environmental factors on the kinetic parameters and methane potential from performing a BMP test at different geographical locations around the world was simulated by adjusting the data according to the ambient temperature and pressure of some chosen model sites. The largest effect on the methane potential was registered from tests performed at high altitudes due to a low ambient pressure. The results from this study illustrate the importance of considering the environmental factors’ influence on volumetric gas measurement in BMP tests. This is essential to achieve trustworthy and standardised results that can be used by researchers and end users from all over the world.

  12. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors

    Directory of Open Access Journals (Sweden)

    Spiros Pagiatakis

    2009-10-01

    Full Text Available In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times. It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF. It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at −40 °C, −20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.

  13. A Rigorous Temperature-Dependent Stochastic Modelling and Testing for MEMS-Based Inertial Sensor Errors.

    Science.gov (United States)

    El-Diasty, Mohammed; Pagiatakis, Spiros

    2009-01-01

    In this paper, we examine the effect of changing the temperature points on MEMS-based inertial sensor random error. We collect static data under different temperature points using a MEMS-based inertial sensor mounted inside a thermal chamber. Rigorous stochastic models, namely Autoregressive-based Gauss-Markov (AR-based GM) models are developed to describe the random error behaviour. The proposed AR-based GM model is initially applied to short stationary inertial data to develop the stochastic model parameters (correlation times). It is shown that the stochastic model parameters of a MEMS-based inertial unit, namely the ADIS16364, are temperature dependent. In addition, field kinematic test data collected at about 17 °C are used to test the performance of the stochastic models at different temperature points in the filtering stage using Unscented Kalman Filter (UKF). It is shown that the stochastic model developed at 20 °C provides a more accurate inertial navigation solution than the ones obtained from the stochastic models developed at -40 °C, -20 °C, 0 °C, +40 °C, and +60 °C. The temperature dependence of the stochastic model is significant and should be considered at all times to obtain optimal navigation solution for MEMS-based INS/GPS integration.
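
The AR-based Gauss-Markov modelling described above can be illustrated by fitting a first-order autoregressive coefficient to a static noise record and converting it into a correlation time via τ = −Δt/ln φ. The synthetic data below stand in for real static IMU records; the true parameters are assumptions for the sketch:

```python
import math
import random

def fit_ar1(x):
    """Least-squares AR(1) coefficient for x[k+1] ~ phi * x[k]."""
    num = sum(a * b for a, b in zip(x[:-1], x[1:]))
    den = sum(a * a for a in x[:-1])
    return num / den

random.seed(1)
phi_true, dt = 0.95, 0.01            # assumed coefficient, 100 Hz sampling
x = [0.0]
for _ in range(20000):               # simulate a static sensor noise record
    x.append(phi_true * x[-1] + random.gauss(0.0, 0.1))

phi = fit_ar1(x)
tau = -dt / math.log(phi)            # Gauss-Markov correlation time (s)
print(round(phi, 3), round(tau, 3))
```

Repeating this fit on records collected at each chamber temperature is what reveals the temperature dependence of the correlation times that the paper reports.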

  14. Application of the HWVP measurement error model and feed test algorithms to pilot scale feed testing

    International Nuclear Information System (INIS)

    Adams, T.L.

    1996-03-01

    The purpose of the feed preparation subsystem in the Hanford Waste Vitrification Plant (HWVP) is to provide for control of the properties of the slurry that is sent to the melter. The slurry properties are adjusted so that two classes of constraints are satisfied. Processability constraints guarantee that the process conditions required by the melter can be obtained; for example, there are processability constraints associated with electrical conductivity and viscosity. Acceptability constraints guarantee that the processed glass can be safely stored in a repository; an example of an acceptability constraint is the durability of the product glass. The primary control focus for satisfying both processability and acceptability constraints is the composition of the slurry. The primary mechanism for adjusting the composition of the slurry is mixing the waste slurry with frit of known composition. Spent frit from canister decontamination is also recycled by adding it to the melter feed. A number of processes in addition to mixing are used to condition the waste slurry prior to melting, including evaporation and the addition of formic acid. These processes also have an effect on the feed composition.

  15. Analogue particle identifier and test unit for automatic measuring of errors

    International Nuclear Information System (INIS)

    Boden, A.; Lauch, J.

    1979-04-01

    A high accuracy analogue particle identifier is described. The unit is used for particle identification or data correction of experimental based errors in magnetic spectrometers. Signals which are proportional to the energy, the time-of-flight or the position of absorption of the particles are supplied to an analogue computation circuit (multifunction converter). Three computation functions are available for different applications. The output of the identifier produces correction signals or pulses whose amplitudes are proportional to the mass of the particles. Particle identification and data correction can be optimized by the adjustment of variable parameters. An automatic test unit has been developed for adjustment and routine checking of particle identifiers. The computation functions can be tested by this unit with an accuracy of 1%. (orig.) [de

  16. Cognitive tests predict real-world errors: the relationship between drug name confusion rates in laboratory-based memory and perception tests and corresponding error rates in large pharmacy chains.

    Science.gov (United States)

    Schroeder, Scott R; Salomon, Meghan M; Galanter, William L; Schiff, Gordon D; Vaida, Allen J; Gaunt, Michael J; Bryson, Michelle L; Rash, Christine; Falck, Suzanne; Lambert, Bruce L

    2017-05-01

    Drug name confusion is a common type of medication error and a persistent threat to patient safety. In the USA, roughly one per thousand prescriptions results in the wrong drug being filled, and most of these errors involve drug names that look or sound alike. Prior to approval, drug names undergo a variety of tests to assess their potential for confusability, but none of these preapproval tests has been shown to predict real-world error rates. We conducted a study to assess the association between error rates in laboratory-based tests of drug name memory and perception and real-world drug name confusion error rates. Eighty participants, comprising doctors, nurses, pharmacists, technicians and lay people, completed a battery of laboratory tests assessing visual perception, auditory perception and short-term memory of look-alike and sound-alike drug name pairs (eg, hydroxyzine/hydralazine). Laboratory test error rates (and other metrics) significantly predicted real-world error rates obtained from a large, outpatient pharmacy chain, with the best-fitting model accounting for 37% of the variance in real-world error rates. Cross-validation analyses confirmed these results, showing that the laboratory tests also predicted errors from a second pharmacy chain, with 45% of the variance being explained by the laboratory test data. Across two distinct pharmacy chains, there is a strong and significant association between drug name confusion error rates observed in the real world and those observed in laboratory-based tests of memory and perception. Regulators and drug companies seeking a validated preapproval method for identifying confusing drug names ought to consider using these simple tests. By using a standard battery of memory and perception tests, it should be possible to reduce the number of confusing look-alike and sound-alike drug name pairs that reach the market, which will help protect patients from potentially harmful medication errors. Published by the BMJ

  17. Testing the algorithms for automatic identification of errors on the measured quantities of the nuclear power plant. Verification tests

    International Nuclear Information System (INIS)

    Svatek, J.

    1999-12-01

    During the development and implementation of supporting software for the control room and emergency control centre at the Dukovany nuclear power plant it appeared necessary to validate the input quantities in order to assure operating reliability of the software tools. Therefore, the development of software for validation of the measured quantities of the plant data sources was initiated, and the software had to be debugged and verified. The report contains the proposal for and description of the verification tests for testing the algorithms of automatic identification of errors on the observed quantities of the NPP by means of homemade validation software. In particular, the algorithms treated serve the validation of the hot leg temperature at primary circuit loop no. 2 or 4 at the Dukovany-2 reactor unit using data from the URAN and VK3 information systems, recorded during 3 different days. (author)

  18. Standardising the lactulose mannitol test of gut permeability to minimise error and promote comparability.

    Directory of Open Access Journals (Sweden)

    Ivana R Sequeira

    Full Text Available BACKGROUND: Lactulose mannitol ratio tests are clinically useful for assessing disorders characterised by changes in gut permeability and for assessing mixing in the intestinal lumen. Variations between currently used test protocols preclude meaningful comparisons between studies. We determined the optimal sampling period and related this to intestinal residence. METHODS: Half-hourly lactulose and mannitol urinary excretions were determined over 6 hours in 40 healthy female volunteers after administration of either 600 mg aspirin or placebo, in randomised order at weekly intervals. Gastric and small intestinal transit times were assessed by the SmartPill in 6 subjects from the same population. Half-hourly percentage recoveries of lactulose and mannitol were grouped on the basis of compartment transit time. The rate of increase or decrease of each sugar within each group was explored by simple linear regression to assess the optimal period of sampling. KEY RESULTS: The between-subject standard errors for each half-hourly lactulose and mannitol excretion were lowest, the correlation of the quantity of each sugar excreted with time was optimal, and the difference between the two sugars in this temporal relationship maximal during the period from 2½-4 h after ingestion. Half-hourly lactulose excretions were generally increased after dosage with aspirin whilst those of mannitol were unchanged, as was the temporal pattern and period of lowest between-subject standard error for both sugars. CONCLUSION: The results indicate that between-subject variation in the percentage excretion of the two sugars would be minimised and the differences in the temporal patterns of excretion would be maximised if the period of collection of urine used in clinical tests of small intestinal permeability were restricted to 2½-4 h post dosage. This period corresponds to the period when the column of digesta containing the probes is passing from the small to the large intestine.

  19. Error analysis for 1-1/2-loop semiscale system isothermal test data

    International Nuclear Information System (INIS)

    Feldman, E.M.; Naff, S.A.

    1975-05-01

    An error analysis was performed on the measurements made during the isothermal portion of the Semiscale Blowdown and Emergency Core Cooling (ECC) Project. A brief description of the measurement techniques employed, identification of potential sources of errors, and quantification of the errors associated with data is presented. (U.S.)

  20. Interpretation of Errors Made by Mandarin-Speaking Children on the Preschool Language Scales--5th Edition Screening Test

    Science.gov (United States)

    Ren, Yonggang; Rattanasone, Nan Xu; Wyver, Shirley; Hinton, Amber; Demuth, Katherine

    2016-01-01

    We investigated typical errors made by Mandarin-speaking children when measured by the Preschool Language Scales-fifth edition, Screening Test (PLS-5 Screening Test). The intention was to provide preliminary data for the development of a guideline for early childhood educators and psychologists who use the test with Mandarin-speaking children.…

  1. Properties of permutation-based gene tests and controlling type 1 error using a summary statistic based gene test.

    Science.gov (United States)

    Swanson, David M; Blacker, Deborah; Alchawa, Taofik; Ludwig, Kerstin U; Mangold, Elisabeth; Lange, Christoph

    2013-11-07

    The advent of genome-wide association studies has led to many novel disease-SNP associations, opening the door to focused study on their biological underpinnings. Because of the importance of analyzing these associations, numerous statistical methods have been devoted to them. However, fewer methods have attempted to associate entire genes or genomic regions with outcomes, which is potentially more useful knowledge from a biological perspective and those methods currently implemented are often permutation-based. One property of some permutation-based tests is that their power varies as a function of whether significant markers are in regions of linkage disequilibrium (LD) or not, which we show from a theoretical perspective. We therefore develop two methods for quantifying the degree of association between a genomic region and outcome, both of whose power does not vary as a function of LD structure. One method uses dimension reduction to "filter" redundant information when significant LD exists in the region, while the other, called the summary-statistic test, controls for LD by scaling marker Z-statistics using knowledge of the correlation matrix of markers. An advantage of this latter test is that it does not require the original data, but only their Z-statistics from univariate regressions and an estimate of the correlation structure of markers, and we show how to modify the test to protect the type 1 error rate when the correlation structure of markers is misspecified. We apply these methods to sequence data of oral cleft and compare our results to previously proposed gene tests, in particular permutation-based ones. We evaluate the versatility of the modification of the summary-statistic test since the specification of correlation structure between markers can be inaccurate. We find a significant association in the sequence data between the 8q24 region and oral cleft using our dimension reduction approach and a borderline significant association using the
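
The core of the summary-statistic test described above is combining marker Z-statistics while controlling for LD through the markers' correlation matrix R: with Z ~ N(0, R) under the null, the quadratic form T = ZᵀR⁻¹Z is chi-square distributed with one degree of freedom per marker. A two-marker sketch with illustrative numbers, using the closed-form inverse of a 2×2 correlation matrix:

```python
def region_statistic(z, r):
    """T = z' R^{-1} z for two markers whose correlation is r,
    using the analytic inverse of [[1, r], [r, 1]]."""
    z1, z2 = z
    det = 1.0 - r * r
    return (z1 * z1 - 2.0 * r * z1 * z2 + z2 * z2) / det

# Two markers in strong LD (r = 0.8) with similar marginal signals
# (illustrative Z-statistics from univariate regressions):
t = region_statistic((2.5, 2.3), 0.8)
print(round(t, 2))   # refer T to a chi-square with df = 2
```

Note how strong positive LD discounts two similar signals: the statistic treats them as largely one piece of evidence rather than two independent hits, which is exactly why the test's power does not inflate in high-LD regions.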

  2. Capital productivity in industrialised economies: Evidence from error-correction model and lagrange multiplier tests

    Directory of Open Access Journals (Sweden)

    Trofimov Ivan D.

    2017-01-01

    Full Text Available The paper re-examines the “stylized facts” of the balanced growth in developed economies, looking specifically at capital productivity variable. The economic data is obtained from European Commission AMECO database, spanning 1961-2014 period. For a sample of 22 OECD economies, the paper applies univariate LM unit root tests with one or two structural breaks, and estimates error-correction and linear trend models with breaks. It is shown that diverse statistical patterns were present across economies and overall mixed evidence is provided as to the stability of capital productivity and balanced growth in general. Specifically, both upward and downward trends in capital productivity were present, while in several economies mean reversion and random walk patterns were observed. The data and results were largely in line with major theoretical explanations pertaining to capital productivity. With regard to determinants of the capital productivity movements, the structure of capital stock and the prices of capital goods were likely most salient.

  3. Potential Functional Embedding Theory at the Correlated Wave Function Level. 2. Error Sources and Performance Tests.

    Science.gov (United States)

    Cheng, Jin; Yu, Kuang; Libisch, Florian; Dieterich, Johannes M; Carter, Emily A

    2017-03-14

    Quantum mechanical embedding theories partition a complex system into multiple spatial regions that can use different electronic structure methods within each, to optimize trade-offs between accuracy and cost. The present work incorporates accurate but expensive correlated wave function (CW) methods for a subsystem containing the phenomenon or feature of greatest interest, while self-consistently capturing quantum effects of the surroundings using fast but less accurate density functional theory (DFT) approximations. We recently proposed two embedding methods [for a review, see: Acc. Chem. Res. 2014, 47, 2768]: density functional embedding theory (DFET) and potential functional embedding theory (PFET). DFET provides a fast but non-self-consistent density-based embedding scheme, whereas PFET offers a more rigorous theoretical framework to perform fully self-consistent, variational CW/DFT calculations [as defined in part 1, CW/DFT means subsystem 1(2) is treated with CW(DFT) methods]. When originally presented, PFET was only tested at the DFT/DFT level of theory as a proof of principle within a planewave (PW) basis. Part 1 of this two-part series demonstrated that PFET can be made to work well with mixed Gaussian type orbital (GTO)/PW bases, as long as optimized GTO bases and consistent electron-ion potentials are employed throughout. Here in part 2 we conduct the first PFET calculations at the CW/DFT level and compare them to DFET and full CW benchmarks. We test the performance of PFET at the CW/DFT level for a variety of types of interactions (hydrogen bonding, metallic, and ionic). By introducing an intermediate CW/DFT embedding scheme denoted DFET/PFET, we show how PFET remedies different types of errors in DFET, serving as a more robust type of embedding theory.

  4. Flight Test Results of a GPS-Based Pitot-Static Calibration Method Using Output-Error Optimization for a Light Twin-Engine Airplane

    Science.gov (United States)

    Martos, Borja; Kiszely, Paul; Foster, John V.

    2011-01-01

    As part of the NASA Aviation Safety Program (AvSP), a novel pitot-static calibration method was developed to allow rapid in-flight calibration for subscale aircraft while flying within confined test areas. This approach uses Global Positioning System (GPS) technology coupled with modern system identification methods that rapidly compute optimal pressure error models over a range of airspeeds with defined confidence bounds. This method has been demonstrated in subscale flight tests and has shown small 2σ error bounds with significant reduction in test time compared to other methods. The current research was motivated by the desire to further evaluate and develop this method for full-scale aircraft. A goal of this research was to develop an accurate calibration method that enables reductions in test equipment and flight time, thus reducing costs. The approach involved analysis of data acquisition requirements, development of efficient flight patterns, and analysis of pressure error models based on system identification methods. Flight tests were conducted at The University of Tennessee Space Institute (UTSI) utilizing an instrumented Piper Navajo research aircraft. In addition, the UTSI engineering flight simulator was used to investigate test maneuver requirements and handling qualities issues associated with this technique. This paper provides a summary of piloted simulation and flight test results that illustrates the performance and capabilities of the NASA calibration method. Discussion of maneuver requirements and data analysis methods is included as well as recommendations for piloting technique.

  5. A multiobserver study of the effects of including point-of-care patient photographs with portable radiography: a means to detect wrong-patient errors.

    Science.gov (United States)

    Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela

    2014-08-01

    To evaluate whether the presence of facial photographs obtained at the point of care during portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches was identified, and interpretation times were recorded. Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience were recruited to participate in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There was no significant difference in interpretation time between studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the detection of wrong-patient errors without a substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  6. Type I Error Rates and Power Estimates of Selected Parametric and Nonparametric Tests of Scale.

    Science.gov (United States)

    Olejnik, Stephen F.; Algina, James

    1987-01-01

    Estimated Type I Error rates and power are reported for the Brown-Forsythe, O'Brien, Klotz, and Siegel-Tukey procedures. The effect of aligning the data using deviations from group means or group medians is investigated. (RB)

  7. Testing the inverse-square law of gravity: Error and design with the upward continuation integral

    International Nuclear Information System (INIS)

    Thomas, J.

    1989-01-01

    It has been reported that the inverse-square law of gravity is violated over a range of a few hundred meters. I present a different method for the analysis of the data from that experiment. In this method, the experimental error can be evaluated analytically and I confirm the previous analysis but show that it is a 2σ effect. The method can also be used to design new experiments that will yield minimum errors for a fixed number of data points.

  8. On a Test of Hypothesis to Verify the Operating Risk Due to Accountancy Errors

    Directory of Open Access Journals (Sweden)

    Paola Maddalena Chiodini

    2014-12-01

    According to Statement on Auditing Standards (SAS) No. 39 (AU 350.01), audit sampling is defined as “the application of an audit procedure to less than 100% of the items within an account balance or class of transactions for the purpose of evaluating some characteristic of the balance or class”. The audit process develops in different steps: some are not susceptible to sampling procedures, while others may be carried out using sampling techniques. The auditor may also be interested in two types of accounting error: the number of incorrect records in the sample that exceeds a given threshold (natural error rate), which may be indicative of possible fraud, and the mean amount of monetary error found in incorrect records. The aim of this study is to monitor both types of error jointly through an appropriate system of hypotheses, with particular attention to the Type II error, which indicates the risk of failing to report errors that exceed the upper precision limit.

  9. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
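
    A plain permutation test on mean and variance differences illustrates the same idea in miniature: a location shift between devices indicates systematic bias, while a scale shift indicates differing random error. The sketch below is a simplified stand-in for the boosted GAMLSS procedure the authors propose, with all data invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def perm_pvalues(x, y, n_perm=5000):
    """Permutation p-values for a device effect on location (mean
    difference, i.e. systematic bias) and on scale (log variance
    ratio, i.e. random measurement error)."""
    pooled = np.concatenate([x, y])
    obs_loc = abs(x.mean() - y.mean())
    obs_scl = abs(np.log(x.var(ddof=1) / y.var(ddof=1)))
    hits_loc = hits_scl = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        a, b = perm[:len(x)], perm[len(x):]
        hits_loc += abs(a.mean() - b.mean()) >= obs_loc
        hits_scl += abs(np.log(a.var(ddof=1) / b.var(ddof=1))) >= obs_scl
    return (hits_loc + 1) / (n_perm + 1), (hits_scl + 1) / (n_perm + 1)

x = rng.normal(50.0, 2.0, 80)   # device A readings (simulated)
y = rng.normal(51.0, 4.0, 80)   # device B: biased and noisier
p_loc, p_scl = perm_pvalues(x, y)
print(f"location p = {p_loc:.4f}, scale p = {p_scl:.4f}")
```

    The GAMLSS approach additionally lets both effects depend on covariates; this sketch only tests marginal differences between the two devices.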

  10. More Than Just Accuracy: A Novel Method to Incorporate Multiple Test Attributes in Evaluating Diagnostic Tests Including Point of Care Tests.

    Science.gov (United States)

    Thompson, Matthew; Weigl, Bernhard; Fitzpatrick, Annette; Ide, Nicole

    2016-01-01

    Current frameworks for evaluating diagnostic tests are constrained by a focus on diagnostic accuracy, and assume that all aspects of the testing process and test attributes are discrete and equally important. Determining the balance between the benefits and harms associated with new or existing tests has been overlooked. Yet, this is critically important information for stakeholders involved in developing, testing, and implementing tests. This is particularly important for point of care tests (POCTs), where tradeoffs exist between numerous aspects of the testing process and test attributes. We developed a new model that multiple stakeholders (e.g., clinicians, patients, researchers, test developers, industry, regulators, and health care funders) can use to visualize the multiple attributes of tests, the interactions that occur between these attributes, and their impacts on health outcomes. We use multiple examples to illustrate interactions between test attributes (test availability, test experience, and test results) and outcomes, including several POCTs. The model could be used to prioritize research and development efforts, and inform regulatory submissions for new diagnostics. It could potentially provide a way to incorporate the relative weights that various subgroups or clinical settings might place on different test attributes. Our model provides a novel way for multiple stakeholders to visualize test attributes, their interactions, and impacts on individual and population outcomes. We anticipate that this will facilitate more informed decision making around diagnostic tests.

  11. The Standard Error of a Proportion for Different Scores and Test Length.

    Directory of Open Access Journals (Sweden)

    David A. Walker

    2005-06-01

    This paper examines Smith's (2003) proposed standard error of a proportion index associated with the idea of reliability as sufficiency of information. A detailed table indexing all of the standard error values affiliated with assessments that range from 5 to 100 items, where students scored as low as 50% correct and 50% incorrect to as high as 95% correct and 5% incorrect, calculated in increments of 1 percentage point, is presented, along with distributional qualities. Examples using this measure for classroom teachers and higher education instructors of assessment are provided.
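
    If Smith's index takes the familiar binomial form SE = sqrt(p(1-p)/n) (an assumption here; the 2003 paper may define the index differently), the tabled values can be regenerated in a few lines:

```python
import math

def se_proportion(p: float, n: int) -> float:
    """Standard error of a proportion score: sqrt(p*(1-p)/n)."""
    return math.sqrt(p * (1.0 - p) / n)

# SE shrinks with test length and as scores move away from 50% correct
for n in (5, 25, 100):
    for p in (0.50, 0.75, 0.95):
        print(f"n={n:3d}  p={p:.2f}  SE={se_proportion(p, n):.3f}")
```

    The loop bounds mirror the table's range of 5 to 100 items and 50% to 95% correct scores.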

  12. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    OpenAIRE

    Watanabe, Takashi; Sugi, Yoshihiro

    2010-01-01

    A feedforward controller would be useful for a hybrid Functional Electrical Stimulation (FES) system using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) with an inverse dynamics model (IDM) to realize a feedforward FES controller. For FES application, the ISM was tested in off-line learning using training data obtained by PID control of very slow movements. Computer simulation tests ...

  13. FY 2016 Status Report: Documentation of All CIRFT Data including Hydride Reorientation Tests (Draft M2)

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Jy-An John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Wang, Hong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Jiang, Hao [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Yan, Yong [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Bevard, Bruce B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division; Scaglione, John M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Materials Science and Technology Division

    2016-09-04

    The first portion of this report provides a detailed description of fiscal year (FY) 2015 test result corrections and analysis updates based on FY 2016 updates to the Cyclic Integrated Reversible-Bending Fatigue Tester (CIRFT) program methodology, which is used to evaluate the vibration integrity of spent nuclear fuel (SNF) under normal conditions of transport (NCT). The CIRFT consists of a U-frame test setup and a real-time curvature measurement method. The three-component U-frame setup of the CIRFT has two rigid arms and linkages connecting to a universal testing machine. The curvature of SNF rod bending is obtained through a three-point deflection measurement method. Three linear variable differential transformers (LVDTs) are clamped to the side connecting plates of the U-frame and used to capture deformation of the rod. The second portion of this report provides the latest CIRFT data, including data for the hydride reorientation test. The variations in fatigue life are provided in terms of moment, equivalent stress, curvature, and equivalent strain for the tested SNFs. The equivalent stress plot collapsed the data points from all of the SNF samples into a single zone. A detailed examination revealed that, at the same stress level, fatigue lives display a descending order as follows: H. B. Robinson Nuclear Power Station (HBR), LMK, and mixed uranium-plutonium oxide (MOX). Considering strain alone, LMK fuel has a slightly longer fatigue life than HBR fuel, but the difference is subtle. The third portion of this report provides finite element analysis (FEA) dynamic deformation simulation of SNF assemblies. In a horizontal layout under NCT, the fuel assembly’s skeleton, which is formed by guide tubes and spacer grids, is the primary load bearing apparatus carrying and transferring vibration loads within an SNF assembly. These vibration loads include interaction forces between the SNF assembly and the canister basket walls. Therefore, the integrity of the guide

  14. [Measures to prevent patient identification errors in blood collection/physiological function testing utilizing a laboratory information system].

    Science.gov (United States)

    Shimazu, Chisato; Hoshino, Satoshi; Furukawa, Taiji

    2013-08-01

    We constructed an integrated personal identification workflow chart using both bar code reading and an all-in-one laboratory information system. The information system not only handles test data but also the information needed for patient guidance in the laboratory department. The reception terminals at the entrance, displays for patient guidance and patient identification tools at blood-sampling booths are all controlled by the information system. The number of patient identification errors was greatly reduced by the system. However, identification errors have not been eliminated in the ultrasound department. After re-evaluating the patient identification process in this department, we recognized that the major source of errors was an excessively complex identification workflow. Ordinarily, an ultrasound test requires patient identification 3 times, because 3 different systems are required during the entire test process, i.e. the ultrasound modality system, the laboratory information system and a system for producing reports. We are trying to connect the 3 different systems to develop a one-time identification workflow, but it is not a simple task and has not yet been completed. Utilization of the laboratory information system is effective, but not yet perfect, for patient identification. Even today, the most fundamental procedure for patient identification is to ask the person's name. Everyday checks in the ordinary workflow and everyone's participation in safety-management activities are important for the prevention of patient identification errors.

  15. The speed of memory errors shows the influence of misleading information: Testing the diffusion model and discrete-state models.

    Science.gov (United States)

    Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E

    2018-05-01

    In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete-state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-item errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.
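
    The diffusion-model intuition, that among error responses a faster RT reflects a more strongly misleading memory signal, can be reproduced with a toy simulation in which the drift rate varies across trials (all parameter values below are arbitrary choices, not fitted to the study's data):

```python
import math
import numpy as np

rng = np.random.default_rng(7)

def ddm_trial(drift, boundary=1.0, dt=0.005):
    """One diffusion-model trial: accumulated evidence random-walks
    from 0 until it crosses +boundary (correct) or -boundary (error);
    returns (correct?, response time)."""
    x, t = 0.0, 0.0
    sqdt = math.sqrt(dt)
    while abs(x) < boundary:
        x += drift * dt + sqdt * rng.standard_normal()
        t += dt
    return x > 0, t

# drift varies across trials; negative drift = misleading retrieval
errors = []
for _ in range(2000):
    drift = rng.normal(1.0, 1.5)
    correct, rt = ddm_trial(drift)
    if not correct:
        errors.append((rt, drift))

errors.sort()                       # order error trials by RT
half = len(errors) // 2
fast = float(np.mean([d for _, d in errors[:half]]))
slow = float(np.mean([d for _, d in errors[half:]]))
print(f"mean drift among fast errors: {fast:.2f}; among slow errors: {slow:.2f}")
```

    Fast errors come disproportionately from trials with strongly negative drift (compelling misleading evidence), which is the mechanism behind the forced-choice prediction described above.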

  16. Inflation of type I error rates by unequal variances associated with parametric, nonparametric, and Rank-Transformation Tests

    Directory of Open Access Journals (Sweden)

    Donald W. Zimmerman

    2004-01-01

    It is well known that the two-sample Student t test fails to maintain its significance level when the variances of treatment groups are unequal and, at the same time, sample sizes are unequal. However, introductory textbooks in psychology and education often maintain that the test is robust to variance heterogeneity when sample sizes are equal. The present study discloses that, for a wide variety of non-normal distributions, especially skewed distributions, the Type I error probabilities of both the t test and the Wilcoxon-Mann-Whitney test are substantially inflated by heterogeneous variances, even when sample sizes are equal. The Type I error rate of the t test performed on ranks replacing the scores (rank-transformed data) is inflated in the same way and always corresponds closely to that of the Wilcoxon-Mann-Whitney test. For many probability densities, the distortion of the significance level is far greater after transformation to ranks and, contrary to known asymptotic properties, the magnitude of the inflation is an increasing function of sample size. Although nonparametric tests of location also can be sensitive to differences in the shape of distributions apart from location, the Wilcoxon-Mann-Whitney test and rank-transformation tests apparently are influenced mainly by skewness that is accompanied by specious differences in the means of ranks.
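
    Effects of this kind are straightforward to probe by Monte Carlo. The sketch below is a rough illustration, not Zimmerman's actual design: it estimates Type I error rates for the pooled-variance t test and the Wilcoxon-Mann-Whitney test with equal sample sizes, equal (zero) means, and heterogeneous spreads in skewed (exponential) data; sample sizes and the spread ratio are arbitrary:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def type1_rate(test, sd_ratio, n=25, n_sims=2000, alpha=0.05):
    """Monte Carlo Type I error rate for two skewed (exponential)
    groups with equal means (zero) but spreads differing by sd_ratio."""
    hits = 0
    for _ in range(n_sims):
        a = rng.exponential(1.0, n) - 1.0               # mean 0, sd 1
        b = sd_ratio * (rng.exponential(1.0, n) - 1.0)  # mean 0, sd sd_ratio
        hits += test(a, b) < alpha
    return hits / n_sims

t_test = lambda a, b: stats.ttest_ind(a, b, equal_var=True).pvalue
wmw = lambda a, b: stats.mannwhitneyu(a, b, alternative="two-sided").pvalue

for ratio in (1.0, 4.0):
    print(f"sd ratio {ratio}: t = {type1_rate(t_test, ratio):.3f}, "
          f"WMW = {type1_rate(wmw, ratio):.3f}")
```

    With a spread ratio of 1 both tests should sit near the nominal 5% level; raising the ratio reveals how far each test drifts from it.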

  17. Human error considerations and annunciator effects in determining optimal test intervals for periodically inspected standby systems

    International Nuclear Information System (INIS)

    McWilliams, T.P.; Martz, H.F.

    1981-01-01

    This paper incorporates the effects of four types of human error in a model for determining the optimal time between periodic inspections which maximizes the steady-state availability of standby safety systems. Such safety systems are characteristic of nuclear power plant operations. The system is modeled by means of an infinite state-space Markov chain. The purpose of the paper is to demonstrate techniques for computing the steady-state availability A and the optimal periodic inspection interval tau* for the system. The model can be used to investigate the effects of human error probabilities on optimal availability, study the benefits of annunciating the standby system, and determine optimal inspection intervals. Several examples which are representative of nuclear power plant applications are presented.
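
    The trade-off that the Markov-chain model optimizes already shows up in a textbook single-component approximation (shown only as an illustration; it omits the human error and annunciator effects that are the paper's focus): failures accumulate undetected between tests, while each test itself takes the component out of service.

```python
import math

def unavailability(tau, lam, t_test):
    """Mean unavailability of a periodically tested standby component:
    undetected failures contribute lam*tau/2 on average, and testing
    removes the component for t_test out of every tau hours."""
    return lam * tau / 2.0 + t_test / tau

def optimal_interval(lam, t_test):
    """Minimizing lam*tau/2 + t_test/tau gives tau* = sqrt(2*t_test/lam)."""
    return math.sqrt(2.0 * t_test / lam)

lam, t_test = 1e-4, 2.0  # assumed failure rate (1/h) and test duration (h)
tau_star = optimal_interval(lam, t_test)
print(f"tau* = {tau_star:.0f} h, "
      f"availability = {1.0 - unavailability(tau_star, lam, t_test):.4f}")
```

    In this approximation any extra per-test cost, such as a test-caused trip or a human error requiring repair, enlarges the t_test/tau term and pushes the optimal interval upward, which is the kind of effect the paper's fuller model quantifies.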

  18. Errors on the Trail Making Test Are Associated with Right Hemispheric Frontal Lobe Damage in Stroke Patients

    Directory of Open Access Journals (Sweden)

    Bruno Kopp

    2015-01-01

    Measures of performance on the Trail Making Test (TMT) are among the most popular neuropsychological assessment techniques. Completion time on TMT-A is considered to provide a measure of processing speed, whereas completion time on TMT-B is considered to constitute a behavioral measure of the ability to shift between cognitive sets (cognitive flexibility), commonly attributed to the frontal lobes. However, empirical evidence linking performance on the TMT-B to localized frontal lesions is mostly lacking. Here, we examined the association of frontal lesions following stroke with TMT-B performance measures (i.e., completion time and completion accuracy measures) using voxel-based lesion-behavior mapping, with a focus on right hemispheric frontal lobe lesions. Our results suggest that the number of errors, but not completion time on the TMT-B, is associated with right hemispheric frontal lesions. This finding contradicts common clinical practice, namely the use of completion time on the TMT-B to measure cognitive flexibility, and it underscores the need for additional research on the association between cognitive flexibility and the frontal lobes. Further work in a larger sample, including left frontal lobe damage and with more power to detect effects of right posterior brain injury, is necessary to determine whether our observation is specific to right frontal lesions.

  19. Bit-error-rate testing of fiber optic data links for MMIC-based phased array antennas

    Science.gov (United States)

    Shalkhauser, K. A.; Kunath, R. R.; Daryoush, A. S.

    1990-01-01

    The measured bit-error-rate (BER) performance of a fiber optic data link to be used in satellite communications systems is presented and discussed. In the testing, the link was measured for its ability to carry high burst rate, serial-minimum shift keyed (SMSK) digital data similar to those used in actual space communications systems. The fiber optic data link, as part of a dual-segment injection-locked RF fiber optic link system, offers a means to distribute these signals to the many radiating elements of a phased array antenna. Test procedures, experimental arrangements, and test results are presented.
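
    A recurring practical question in BER testing is how many bits must be observed. Under the usual Poisson assumption, a run of n error-free bits supports the claim that the true BER is at most B with confidence C when n ≥ -ln(1-C)/B. A small helper (illustrative only; the paper does not state its sample sizes):

```python
import math

def bits_required(target_ber: float, confidence: float = 0.95) -> float:
    """Bits needed, with zero observed errors, to claim the true BER
    is at or below target_ber at the given confidence level."""
    return math.log(1.0 / (1.0 - confidence)) / target_ber

# verifying a 1e-9 link at 95% confidence takes roughly 3 billion bits
print(f"{bits_required(1e-9):.2e}")
```

    The -ln(1-C) factor is about 3 at 95% confidence, which is the origin of the common "3/BER bits" rule of thumb.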

  20. Cost-effectiveness analysis of chemical testing for decision-support: How to include animal welfare?

    NARCIS (Netherlands)

    Gabbert, S.G.M.; Ierland, van E.C.

    2010-01-01

    Toxicity testing for regulatory purposes raises the question of test selection for a particular endpoint. Given the public's concern for animal welfare, test selection is a multi-objective decision problem that requires balancing information outcome, animal welfare loss, and monetary testing costs.

  1. Considerations When Including Students with Disabilities in Test Security Policies. NCEO Policy Directions. Number 23

    Science.gov (United States)

    Lazarus, Sheryl; Thurlow, Martha

    2015-01-01

    Sound test security policies and procedures are needed to ensure test security and confidentiality, and to help prevent cheating. In this era when cheating on tests draws regular media attention, there is a need for thoughtful consideration of the ways in which possible test security measures may affect accessibility for some students with…

  2. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J Title: Prioritising interventions against medication errors – the importance of a definition Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  3. Visual Performance on the Small Letter Contrast Test: Effects of Aging, Low Luminance and Refractive Error

    Science.gov (United States)

    2000-08-01

    …luminance performance and… refractive error having comparable effects on… aviation; many aviators develop ametropias during their careers. We were… statistically (0.04 logMAR, p=0.01), but not clinically significant (<1/2 line)… the non-aviator group. Separate investigators at different research facilities… statistically significant (0.11 ± 0.1 logCS, t=4.0, p<0.001), yet there is significant overlap… sensitivity on the SLCT decreased for the aviator group at a…

  4. Vertical drop test of a transport fuselage center section including the wheel wells

    Science.gov (United States)

    Williams, M. S.; Hayduk, R. J.

    1983-01-01

    A Boeing 707 fuselage section was drop tested to measure structural, seat, and anthropomorphic dummy response to vertical crash loads. The specimen had nominally zero pitch, roll and yaw at impact with a sink speed of 20 ft/sec. Results from this drop test and other drop tests of different transport sections will be used to prepare for a full-scale crash test of a B-720.

  5. Reply: Birnbaum's (2012 statistical tests of independence have unknown Type-I error rates and do not replicate within participant

    Directory of Open Access Journals (Sweden)

    Yun-shil Cha

    2013-01-01

    Birnbaum (2011, 2012) questioned the iid (independent and identically distributed) sampling assumptions used by state-of-the-art statistical tests in Regenwetter, Dana and Davis-Stober's (2010, 2011) analysis of the "linear order model". Birnbaum (2012) cited, but did not use, a test of iid by Smith and Batchelder (2008) with analytically known properties. Instead, he created two new test statistics with unknown sampling distributions. Our rebuttal has five components: (1) We demonstrate that the Regenwetter et al. data pass Smith and Batchelder's test of iid with flying colors. (2) We provide evidence from Monte Carlo simulations that Birnbaum's (2012) proposed tests have unknown Type-I error rates, which depend on the actual choice probabilities and on how data are coded, as well as on the null hypothesis of iid sampling. (3) Birnbaum analyzed only a third of Regenwetter et al.'s data. We show that his two new tests fail to replicate on the other two-thirds of the data, within participants. (4) Birnbaum selectively picked data of one respondent to suggest that choice probabilities may have changed partway into the experiment. Such nonstationarity could potentially cause a seemingly good fit to be a Type-II error. We show that the linear order model fits equally well if we allow for warm-up effects. (5) Using hypothetical data, Birnbaum (2012) claimed to show that "true-and-error" models for binary pattern probabilities overcome the alleged shortcomings of Regenwetter et al.'s approach. We disprove this claim on the same data.
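
    Smith and Batchelder's diagnostics are more elaborate, but the flavor of an iid check can be seen in a crude block-stationarity test (simulated data, not the published test): split a binary choice sequence into blocks and ask whether the choice rate stays constant across them.

```python
import numpy as np
from scipy import stats

def stationarity_p(choices, n_blocks=4):
    """Chi-square test of a constant choice rate across consecutive
    blocks of a binary (0/1) choice sequence; a small p-value suggests
    the iid assumption is violated by drift in choice probability."""
    blocks = np.array_split(np.asarray(choices, dtype=int), n_blocks)
    table = np.array([[b.sum(), len(b) - b.sum()] for b in blocks])
    chi2, p, dof, expected = stats.chi2_contingency(table)
    return p

rng = np.random.default_rng(5)
stable = rng.random(400) < 0.6                           # constant rate
drifting = rng.random(400) < np.linspace(0.1, 0.9, 400)  # rate drifts up
print(f"stable p = {stationarity_p(stable):.3f}, "
      f"drifting p = {stationarity_p(drifting):.2e}")
```

    A drifting choice probability of the kind Birnbaum hypothesized would show up here as a very small p-value, while genuinely iid data would not.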

  6. Navigation strategies as revealed by error patterns on the Magic Carpet test in children with cerebral palsy

    Directory of Open Access Journals (Sweden)

    Vittorio eBelmonti

    2015-07-01

    Introduction: Short-term memory develops differently in navigation vs. manual space. The Magic Carpet (MC) is a novel navigation test derived from the Walking Corsi Test and the manual Corsi Block-tapping Task (CBT). The MC requires mental rotations and executive function. In Cerebral Palsy (CP), CBT and MC scores relate differently to clinical and lesional factors. Hypotheses of this study are: that frontal lesions specifically affect navigation in CP; that brain lesions affect MC cognitive strategies. Material and methods: Twenty-two children with spastic CP, aged 5 to 14 years, 14 with a unilateral and 8 with a bilateral form, underwent the CBT and the MC. Errors were classified into 7 patterns by a recently described algorithm. Brain lesions were quantified according to a novel semi-quantitative MRI scale. Control data were partially drawn from a previous study on 91 typically developing children. Results: Children with CP performed worse than controls on both tests. Right hemispheric impairment correlated with spatial memory. MC span was reduced less than CBT span and was more selectively related to right middle white-matter and frontal lesions. Error patterns were differently distributed in CP than in typical development and depended on right brain impairment: children with more extensive right lesions made more positional than sequential errors. Discussion: In CP, navigation is affected only by extensive lesions involving the right frontal lobe. In addition, these are associated with abnormal cognitive strategies. Whereas in typical development positional errors, preserving serial order, increase with age and performance, in CP they are associated with poorer performance and more extensive right-brain lesions. The explanation may lie in lesion side: the right brain is crucial for mental rotations, necessary for spatial updating. Left-lateralized spatial memory strategies, relying on serial order, are not efficient if not accompanied by right-brain spatial

  7. Harmonization of malaria rapid diagnostic tests: best practices in labelling including instructions for use.

    Science.gov (United States)

    Jacobs, Jan; Barbé, Barbara; Gillet, Philippe; Aidoo, Michael; Serra-Casas, Elisa; Van Erps, Jan; Daviaud, Joelle; Incardona, Sandra; Cunningham, Jane; Visser, Theodoor

    2014-12-17

    Rapid diagnostic tests (RDTs) largely account for the scale-up of malaria diagnosis in endemic settings. However, diversity in labelling including the instructions for use (IFU) limits their interchangeability and user-friendliness. Uniform, easy to follow and consistent labelling, aligned with international standards and appropriate for the level of the end user's education and training, is crucial but a consolidated resource of information regarding best practices for IFU and labelling of RDT devices, packaging and accessories is not available. The Roll Back Malaria Partnership (RBM) commissioned the compilation of international standards and regulatory documents and published literature containing specifications and/or recommendations for RDT design, packaging and labelling of in vitro diagnostics (IVD) (which includes RDTs), complemented with a questionnaire based survey of RDT manufacturers and implementers. A summary of desirable RDT labelling characteristics was compiled, which was reviewed and discussed during a RBM Stakeholder consultation meeting and subsequently amended and refined by a dedicated task force consisting of country programme implementers, experts in RDT implementation, IVD regulatory experts and manufacturers. This process led to the development of consensus documents with a list of suggested terms and abbreviations as well as specifications for labelling of box, device packaging, cassettes, buffer bottle and accessories (lancets, alcohol swabs, transfer devices, desiccants). Emphasis was placed on durability (permanent printing or water-resistant labels), legibility (font size, letter type), comprehension (use of symbols) and ease of reference (e.g. place of labelling on the box or cassette packaging allowing quick oversight). A generic IFU template was developed, comprising background information, a template for procedure and reading/interpretation, a selection of appropriate references and a symbol key of internationally recognized

  8. Screening for Specific Language Impairment in Preschool Children: Evaluating a Screening Procedure Including the Token Test

    Science.gov (United States)

    Willinger, Ulrike; Schmoeger, Michaela; Deckert, Matthias; Eisenwort, Brigitte; Loader, Benjamin; Hofmair, Annemarie; Auff, Eduard

    2017-01-01

    Specific language impairment (SLI) comprises impairments in receptive and/or expressive language. The aim of this study was to evaluate a screening for SLI. 61 children with SLI (SLI-children, age-range 4-6 years) and 61 matched typically developing controls were tested for receptive language ability (Token Test-TT) and for intelligence (Wechsler…

  9. Sensitivity of SWOT discharge algorithm to measurement errors: Testing on the Sacramento River

    Science.gov (United States)

    Durand, Michael; Andreadis, Konstantinos; Yoon, Yeosang; Rodriguez, Ernesto

    2013-04-01

    Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes, globally, as well as characterize storage change in lakes and ocean surface dynamics, with a spatial resolution ranging from 10 - 70 m and temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov Chain estimation approach, treats rivers as sets of interconnected reaches (typically 5 km - 10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1 - 35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River, first, to characterize expected discharge algorithm accuracy, and second, to explore the AirSWOT measurement accuracy required for a successful inversion with the discharge algorithm. We focus on the sensitivity of the algorithm accuracy to the uncertainty in AirSWOT measurements of height, width, and slope.
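The inversion described above rests on a forward model linking observables (width, depth, slope, roughness) to discharge; a common choice in river hydraulics is Manning's equation. The sketch below is not the cited Bayesian MCMC algorithm, only a minimal illustration, with hypothetical parameter values, of how a fractional slope error propagates into discharge, since Q is proportional to the square root of slope:

```python
import math

def manning_discharge(width, depth, slope, n):
    """Discharge from Manning's equation for a rectangular channel
    (hypothetical simplification of a real river cross-section)."""
    area = width * depth                     # flow area [m^2]
    radius = area / (width + 2.0 * depth)    # hydraulic radius [m]
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(slope)

# A 10% error in measured slope maps to only a ~5% discharge error,
# because Q scales with sqrt(slope).
q_nominal = manning_discharge(100.0, 2.0, 1e-4, 0.03)
q_biased = manning_discharge(100.0, 2.0, 1.1e-4, 0.03)
```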

  10. Lyral has been included in the patch test standard series in Germany.

    Science.gov (United States)

    Geier, Johannes; Brasch, Jochen; Schnuch, Axel; Lessmann, Holger; Pirker, Claudia; Frosch, Peter J

    2002-05-01

    Lyral 5% pet. was tested in 3245 consecutive patch test patients in 20 departments of dermatology in order (i) to check the diagnostic quality of this patch test preparation, (ii) to examine concomitant reactions to Lyral and fragrance mix (FM), and (iii) to assess the frequency of contact allergy to Lyral in an unselected patch test population of German dermatological clinics. 62 patients reacted to Lyral, i.e. 1.9%. One third of the positive reactions were + + and + + +. The reaction index was 0.27. Thus, the test preparation can be regarded as a good diagnostic tool. Lyral and FM were tested in parallel in 3185 patients. Of these, 300 (9.4%) reacted to FM, and 59 (1.9%) to Lyral. In 40 patients, positive reactions to both occurred, which is 13.3% of those reacting to FM, and 67.8% of those reacting to Lyral. So the concordance of positive test reactions to Lyral and FM was only slight. Based on these results, the German Contact Dermatitis Research Group (DKG) decided to add Lyral 5% pet. to the standard series.

  11. Scope and status of the USA Engineering Test Facility including relevant TFTR research and development

    International Nuclear Information System (INIS)

    Becraft, W.R.; Reardon, P.J.

    1980-01-01

    The vehicle by which the fusion program would move into the engineering testing phase of fusion power development is designated the Engineering Test Facility (ETF). The progress toward the design and construction of the ETF will reflect the significant achievements of past, present, and future experimental tokamak devices. Some of the features of this foundation of experimental results and relevant engineering designs and operation will derive from the Tokamak Fusion Test Reactor (TFTR) Project, now nearing the completion of its construction phase. The ETF would provide a test-bed for reactor components in the fusion environment. In order to initiate preliminary planning for the ETF decision, the Office of Fusion Energy (OFE) established the ETF Design Center activity to prepare the design of the ETF. This paper describes the design status of the ETF and discusses some highlights of the TFTR R and D work

  12. Scope and status of the USA Engineering Test Facility including relevant TFTR research and development

    International Nuclear Information System (INIS)

    Becraft, W.R.; Reardon, P.J.

    1981-01-01

    The vehicle by which the fusion programme would move into the engineering testing phase of fusion power development is designated the Engineering Test Facility (ETF). The progress toward the design and construction of the ETF will reflect the significant achievements of past, present, and future experimental tokamak devices. Some of the features of this foundation of experimental results and relevant engineering designs and operation will derive from the Tokamak Fusion Test Reactor (TFTR) Project, now nearing the completion of its construction phase. The ETF would provide a test-bed for reactor components in the fusion environment. To initiate preliminary planning for the ETF decision, the Office of Fusion Energy (OFE) established the ETF Design Center activity to prepare the design of the ETF. This paper describes the design status of the ETF and discusses some highlights of the TFTR R and D work. (author)

  13. Preliminary hazard analysis for the Brayton Isotope Ground Demonstration System (including vacuum test chamber)

    International Nuclear Information System (INIS)

    Miller, L.G.

    1975-01-01

    The Preliminary Hazard Analysis (PHA) of the BIPS-GDS is a tabular summary of hazards and undesired events which may lead to system damage or failure and/or hazard to personnel. The PHA reviews the GDS as it is envisioned to operate in the Vacuum Test Chamber (VTC) of the GDS Test Facility. The VTC and other equipment which will comprise the test facility are presently in an early stage of preliminary design and will undoubtedly undergo numerous changes before the design is frozen. The PHA and the FMECA to follow are intended to aid the design effort by identifying areas of concern which are critical to the safety and reliability of the BIPS-GDS and test facility

  14. Soft errors in modern electronic systems

    CERN Document Server

    Nicolaidis, Michael

    2010-01-01

    This book provides a comprehensive presentation of the most advanced research results and technological developments enabling understanding, qualifying and mitigating the soft errors effect in advanced electronics, including the fundamental physical mechanisms of radiation induced soft errors, the various steps that lead to a system failure, the modelling and simulation of soft error at various levels (including physical, electrical, netlist, event driven, RTL, and system level modelling and simulation), hardware fault injection, accelerated radiation testing and natural environment testing, s

  15. Screening for Specific Language Impairment in Preschool Children: Evaluating a Screening Procedure Including the Token Test.

    Science.gov (United States)

    Willinger, Ulrike; Schmoeger, Michaela; Deckert, Matthias; Eisenwort, Brigitte; Loader, Benjamin; Hofmair, Annemarie; Auff, Eduard

    2017-10-01

    Specific language impairment (SLI) comprises impairments in receptive and/or expressive language. The aim of this study was to evaluate a screening for SLI. 61 children with SLI (SLI-children, age-range 4-6 years) and 61 matched typically developing controls were tested for receptive language ability (Token Test-TT) and for intelligence (Wechsler Preschool-and-Primary-Scale-of-Intelligence-WPPSI). Group differences were analyzed using t tests, as well as direct and stepwise discriminant analyses. The predictive value of the WPPSI with respect to TT performance was analyzed using regression analyses. SLI-children performed significantly worse on both the TT and the WPPSI ([Formula: see text]). The TT alone yielded an overall classification rate of 79%; the TT and the WPPSI together yielded an overall classification rate of 80%. TT performance was significantly predicted by verbal intelligence in SLI-children and by nonverbal intelligence in controls, whilst the WPPSI subtest arithmetic was predictive in both groups. Without further research, the Token Test cannot be seen as a valid and sufficient tool for the screening of SLI in preschool children, but rather as a tool for the assessment of more general intellectual capacities. SLI-children at this age already show impairments typically associated with SLI, which indicates the necessity of early developmental support or training. Token Test performance is possibly an indicator of a more general developmental factor rather than an exclusive indicator of language difficulties.

  16. Nuclear Rocket Test Facility Decommissioning Including Controlled Explosive Demolition of a Neutron-Activated Shield Wall

    International Nuclear Information System (INIS)

    Michael Kruzic

    2007-01-01

    Located in Area 25 of the Nevada Test Site, the Test Cell A Facility was used in the 1960s for the testing of nuclear rocket engines, as part of the Nuclear Rocket Development Program. The facility was decontaminated and decommissioned (D and D) in 2005 using the Streamlined Approach For Environmental Restoration (SAFER) process, under the Federal Facilities Agreement and Consent Order (FFACO). Utilities and process piping were verified void of contents, hazardous materials were removed, concrete with removable contamination was decontaminated, large sections were mechanically demolished, and the remaining five-foot, five-inch-thick radiologically activated reinforced concrete shield wall was demolished using open-air controlled explosive demolition (CED). CED of the shield wall was closely monitored and resulted in no radiological exposure or atmospheric release

  17. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

    The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  18. Evaluation of field test kits including immunoassays for the detection of contaminants in soil and water

    International Nuclear Information System (INIS)

    Waters, L.C.; Smith, R.R.; Counts, R.W.; Stewart, J.H.; Jenkins, R.A.

    1993-01-01

    Effective field test methods are needed for hazardous waste site characterization and remediation. Useful field methods should be rapid, analyte-specific, cost-effective and accurate in the concentration range at which the analyte is regulated. In this study, field test kits for polychlorinated biphenyls (PCBs), mercury, lead and nitrate were evaluated with reference to these criteria. PCBs and mercury, in soils, were analyzed by immunoassay. Ionic lead and nitrate, in water, were measured chemically using test strips. Except for lead, each analyte was measured in both spiked and actual field samples. Twenty to 40 samples per day can be analyzed with the immunoassays and even more with the strip tests. The sensitivity of the immunoassays is in the 1-3 ppm range. Nitrate was consistently detected at ≥5 ppm; lead ions at ≥20 ppm. Results obtained using these methods compared favorably with those obtained by standard laboratory methods. In addition to being useful field screening methods, these kits can be used in the laboratory to sort out negative samples and/or to define proper dilutions for positive samples requiring further analysis

  19. The terminator "toy" chemistry test: a simple tool to assess errors in transport schemes

    Directory of Open Access Journals (Sweden)

    P. H. Lauritzen

    2015-05-01

    This test extends the evaluation of transport schemes from prescribed advection of inert scalars to reactive species. The test consists of transporting two interacting chemical species in the Nair and Lauritzen 2-D idealized flow field. The sources and sinks for these two species are given by a simple, but non-linear, "toy" chemistry that represents combination (X + X → X2) and dissociation (X2 → X + X). This chemistry mimics photolysis-driven conditions near the solar terminator, where strong gradients in the spatial distribution of the species develop near its edge. Despite the large spatial variations in each species, the weighted sum XT = X + 2X2 should always be preserved at spatial scales at which molecular diffusion is excluded. The terminator test demonstrates how well the advection–transport scheme preserves linear correlations. Chemistry–transport (physics–dynamics) coupling can also be studied with this test. Examples of the consequences of this test are shown for illustration.
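The conservation property at the heart of the test can be checked directly. With a combination rate k1 and a dissociation rate k2, the coupled source/sink updates below (a forward-Euler sketch of the "toy" chemistry alone, not of the transport schemes being evaluated; rate values are hypothetical) preserve XT = X + 2·X2 at every step:

```python
def toy_chemistry_step(x, x2, k1, k2, dt):
    """One forward-Euler step of the 'toy' chemistry:
    combination X + X -> X2 (rate k1), dissociation X2 -> X + X (rate k2).
    The weighted sum X + 2*X2 is invariant under these source/sink terms."""
    r_comb = k1 * x * x   # combination consumes two X, produces one X2
    r_diss = k2 * x2      # dissociation consumes one X2, produces two X
    x_new = x + dt * (-2.0 * r_comb + 2.0 * r_diss)
    x2_new = x2 + dt * (r_comb - r_diss)
    return x_new, x2_new
```

Integrating many steps and checking that X + 2·X2 stays at its initial value is exactly the diagnostic the terminator test applies to a full transport scheme.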

  20. Corrected RMS Error and Effective Number of Bits for Sinewave ADC Tests

    International Nuclear Information System (INIS)

    Jerome J. Blair

    2002-01-01

    A new definition is proposed for the effective number of bits of an ADC. This definition removes the variation in the calculated effective bits when the amplitude and offset of the sinewave test signal are slightly varied. This variation is most pronounced when test signals with amplitudes of a small number of code bin widths are applied to very low noise ADCs. The effectiveness of the proposed definition is compared with that of other proposed definitions over a range of signal amplitudes and noise levels
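For context, the conventional definition that the paper seeks to improve relates effective bits to the ratio of the measured sinewave-fit RMS error to the ideal quantization RMS of one LSB divided by the square root of 12. A minimal sketch of that textbook relation (function names are hypothetical; this is not the paper's corrected definition):

```python
import math

def effective_bits(nbits, measured_rms, full_scale):
    """Conventional effective-number-of-bits estimate:
    ENOB = N - log2(measured_rms / ideal_rms),
    where ideal_rms = LSB / sqrt(12) is the RMS error of an
    ideal quantizer with N bits over the given full-scale range."""
    lsb = full_scale / (2 ** nbits)
    ideal_rms = lsb / math.sqrt(12.0)
    return nbits - math.log2(measured_rms / ideal_rms)
```

Under this relation, an ADC whose measured RMS error equals the ideal quantization RMS achieves exactly its nominal bit count, and each doubling of the RMS error costs one effective bit; the paper's point is that for small-amplitude, low-noise test signals this calculation becomes sensitive to slight amplitude and offset changes.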

  1. Impact of Probe Placement Error on MIMO OTA Test Zone Performance

    DEFF Research Database (Denmark)

    Fan, Wei; Nielsen, Jesper Ødum; Carreño, Xavier

    2012-01-01

    Standardization work for MIMO OTA testing methods is currently ongoing, where a multi-probe anechoic chamber based solution is an important candidate. In this paper, the probes located on an OTA ring are used to synthesize a plane wave field in the center of the OTA ring, and the EM field for each...

  2. Solar Energy Education. Home economics: teacher's guide. Field test edition. [Includes glossary

    Energy Technology Data Exchange (ETDEWEB)

    1981-06-01

    An instructional aid is provided for home economics teachers who wish to integrate the subject of solar energy into their classroom activities. This teacher's guide was produced along with the student activities book for home economics by the US Department of Energy Solar Energy Education. A glossary of solar energy terms is included. (BCS)

  3. Testing For Seasonal Cointegration and Error Correction: The U.S. Pecan Price-Inventory Relationship

    OpenAIRE

    Ibrahim, Mohammed; Florkowski, Wojciech J.

    2005-01-01

    Using time series data, we examine the behavior of pecan prices and inventories at zero and seasonal frequencies, given the results of seasonal cointegration tests. Both seasonally unadjusted and adjusted quarterly data are used (1991-2002). Results suggest that, first, shelled and total pecan inventories and shelled pecan prices have common unit roots at both the non-seasonal and seasonal frequencies; second, there is no long-run equilibrium between pecan prices and shelled or total inventories when...

  4. Data compression/error correction digital test system. Appendix 2: Theory of operation

    Science.gov (United States)

    1972-01-01

    An overall block diagram of the DC/EC digital system test is shown. The system is divided into two major units: the transmitter and the receiver. In operation, the transmitter and receiver are connected only by a real or simulated transmission link. The system inputs consist of: (1) standard format TV video, (2) two channels of analog voice, and (3) one serial PCM bit stream.

  5. In Situ Estuarine and Marine Toxicity Testing: A Review, Including Recommendations for Future Use in Ecological Risk Assessment

    Science.gov (United States)

    2009-09-01

    field and microcosms than they do under laboratory test conditions. In the case of tributyltin (TBT) exposures in San Diego Bay, he found that... TECHNICAL REPORT 1986, September 2009: In Situ Estuarine and Marine Toxicity Testing: A Review, Including Recommendations for Future Use in Ecological Risk Assessment

  6. Linking Errors between Two Populations and Tests: A Case Study in International Surveys in Education

    Science.gov (United States)

    Hastedt, Dirk; Desa, Deana

    2015-01-01

    This simulation study was prompted by the current increased interest in linking national studies to international large-scale assessments (ILSAs) such as IEA's TIMSS, IEA's PIRLS, and OECD's PISA. Linkage in this scenario is achieved by including items from the international assessments in the national assessments on the premise that the average…

  7. Computer Simulation Tests of Feedback Error Learning Controller with IDM and ISM for Functional Electrical Stimulation in Wrist Joint Control

    Directory of Open Access Journals (Sweden)

    Takashi Watanabe

    2010-01-01

    A feedforward controller would be useful for a hybrid Functional Electrical Stimulation (FES) system using powered orthotic devices. In this paper, a Feedback Error Learning (FEL) controller for FES (FEL-FES controller) was examined using an inverse statics model (ISM) with an inverse dynamics model (IDM) to realize a feedforward FES controller. For FES application, the ISM was tested in offline learning using training data obtained by PID control of very slow movements. Computer simulation tests in controlling wrist joint movements showed that the ISM performed properly in the positioning task and that IDM learning was improved by using the ISM, showing an increase in the output power ratio of the feedforward controller. The simple ISM learning method and the FEL-FES controller using the ISM would be useful in controlling the musculoskeletal system, which has nonlinear characteristics in response to electrical stimulation, and are therefore expected to be useful in application to hybrid FES systems using powered orthotic devices.

  8. Report from LHC MDs 1391 and 1483: Tests of new methods for study of nonlinear errors in the LHC experimental insertions

    CERN Document Server

    Maclean, Ewen Hamish; Fuchsberger, Kajetan; Giovannozzi, Massimo; Persson, Tobias Hakan Bjorn; Tomas Garcia, Rogelio; CERN. Geneva. ATS Department

    2017-01-01

    Nonlinear errors in experimental insertions can pose a significant challenge to the operability of low-β∗ colliders. Previously, such errors in the LHC have been studied via their feed-down to tune and coupling under the influence of the nominal crossing angle bumps. This method has proved useful in validating various components of the magnetic model. Understanding and correcting those errors where significant discrepancies exist with the magnetic model, however, will require further development of this technique, in addition to the application of novel methods. In 2016, studies were performed to test new methods for the study of the IR nonlinear errors.

  9. 40 CFR 1039.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Science.gov (United States)

    2010-07-01

    ...-state duty cycles, including ramped-modal testing? 1039.505 Section 1039.505 Protection of Environment... duty cycles, including ramped-modal testing? This section describes how to test engines under steady-state conditions. In some cases, we allow you to choose the appropriate steady-state duty cycle for an...

  10. Impact analysis and testing of tritiated heavy water transportation packages including hydrodynamic effects

    International Nuclear Information System (INIS)

    Sauve, R.G.; Tulk, J.D.; Gavin, M.E.

    1989-01-01

    Ontario Hydro has recently designed a new Type B(M) Tritiated Heavy Water Transportation Package (THWTP) for the road transportation of tritiated heavy water from its operating nuclear stations to the Tritium Removal Facility in Ontario. These packages must demonstrate the ability to withstand severe shock and impact scenarios such as those prescribed by IAEA standards. The package, shown in figure 1, comprises an inner container filled with tritiated heavy water, and a 19 lb/ft³ polyurethane foam-filled overpack. The overpack is of sandwich construction with 304L stainless steel liners and nominal 10.5-inch-thick foam walls. The outer shell is 0.75 inch thick and the inner shell is 0.25 inch thick. The primary containment boundary consists of the overpack inner liner, the containment lid and the outer containment seals in the lid region. The total weight of the container including the 12,000 lb payload is 36,700 lb. The objective of the present study is to evaluate the hydrodynamic effect of the tritiated heavy water payload on the structural integrity of the THWTP during a flat end drop from a height of 9 m. The study consisted of three phases: (i) developing an analytical model to simulate the hydrodynamic effects of the heavy water payload during impact; (ii) performing an impact analysis for a 9 m flat end drop of the THWTP, including fluid-structure interaction; and (iii) verifying the analytical models by experiment

  11. Results of a Saxitoxin Proficiency Test Including Characterization of Reference Material and Stability Studies

    Directory of Open Access Journals (Sweden)

    Kirsi Harju

    2015-11-01

    A saxitoxin (STX) proficiency test (PT) was organized as part of the Establishment of Quality Assurance for the Detection of Biological Toxins of Potential Bioterrorism Risk (EQuATox) project. The aim of this PT was to provide an evaluation of existing methods and the European laboratories' capabilities for the analysis of STX and some of its analogues in real samples. Homogenized mussel material and algal cell materials containing paralytic shellfish poisoning (PSP) toxins were produced as reference sample matrices. The reference material was characterized using various analytical methods. Acidified algal extract samples at two concentration levels were prepared from a bulk culture of the PSP-toxin-producing dinoflagellate Alexandrium ostenfeldii. The homogeneity and stability of the prepared PT samples were studied and found to be fit-for-purpose. Thereafter, eight STX PT samples were sent to ten participating laboratories from eight countries. The PT offered the participating laboratories the possibility to assess their performance regarding the qualitative and quantitative detection of PSP toxins. Various techniques such as official Association of Official Analytical Chemists (AOAC) methods, immunoassays, and liquid chromatography-mass spectrometry were used for sample analyses.

  12. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements
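For the circular (equal-variance, uncorrelated) case named above, the radial error of a bivariate normal position estimate has a closed-form distribution, and the 50% probability circle (circular error probable, CEP) has radius sigma times the square root of 2 ln 2. A small sketch of these standard relations (function names are illustrative, not from the chapter):

```python
import math

def prob_within_radius(r, sigma):
    """P(radial error <= r) for a circular bivariate normal with
    equal per-axis standard deviation sigma (the Rayleigh CDF)."""
    return 1.0 - math.exp(-(r * r) / (2.0 * sigma * sigma))

def cep(sigma):
    """Radius of the 50% probability circle (circular error probable),
    sigma * sqrt(2 * ln 2) ~ 1.1774 * sigma."""
    return sigma * math.sqrt(2.0 * math.log(2.0))
```

Evaluating the first function at the CEP radius returns exactly 0.5, which is the defining property of the probability circle; the elliptical case replaces the single sigma with distinct per-axis standard deviations and a correlation term.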

  13. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process
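The "several measured values" case listed above is, for independent inputs, the first-order variance formula: the variance of f is the sum over inputs of (partial derivative of f with respect to x_i) squared times sigma_i squared. A generic numeric sketch (the chapter's own materials-balance formulas are not reproduced here; the helper name is hypothetical):

```python
import math

def propagate_error(f, values, sigmas, h=1e-6):
    """First-order propagation of error for independent measured values:
    sigma_f = sqrt( sum_i (df/dx_i)^2 * sigma_i^2 ),
    with each partial derivative estimated by a central difference."""
    var = 0.0
    for i, (v, s) in enumerate(zip(values, sigmas)):
        up = list(values); up[i] = v + h
        dn = list(values); dn[i] = v - h
        deriv = (f(*up) - f(*dn)) / (2.0 * h)
        var += (deriv * s) ** 2
    return math.sqrt(var)
```

For example, a product f = x * y with x = 2 ± 0.1 and y = 3 ± 0.2 propagates to sqrt((3·0.1)² + (2·0.2)²) = 0.5, the same kind of calculation a materials balance applies across many measured terms.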

  14. Correction of nutrition test errors for more accurate quantification of the link between dental health and malnutrition.

    Science.gov (United States)

    Dion, Nathalie; Cotart, Jean-Louis; Rabilloud, Muriel

    2007-04-01

    We quantified the link between tooth deterioration and malnutrition in institutionalized elderly subjects, taking into account the major risk factors for malnutrition and adjusting for the measurement error made in using the Mini Nutritional Assessment questionnaire. Data stem from a survey conducted in 2005 in 1094 subjects ≥60 y of age from a large sample of 100 institutions of the Rhône-Alpes region of France. A Bayesian approach was used to quantify the effect of tooth deterioration on malnutrition through a two-level logistic regression. This approach allowed taking into account the uncertainty on sensitivity and specificity of the Mini Nutritional Assessment questionnaire to adjust for the measurement error of that test. After adjustment for other risk factors, the risk of malnutrition increased significantly and continuously 1.15 times (odds ratio 1.15, 95% credibility interval 1.06-1.25) whenever the masticatory percentage decreased by 10 points, which is equivalent to the loss of two molars. The strongest factors that augmented the probability of malnutrition were deglutition disorders, depression, and verbal inconsistency. Dependency was also an important factor; the odds of malnutrition nearly doubled for each additional grade of dependency (graded 6 to 1). Diabetes, central neurodegenerative disease, and carcinoma tended to increase the probability of malnutrition but their effect was not statistically significant. Dental status should be considered a serious risk factor for malnutrition. Regular dental examination and care should preserve functional dental integrity to prevent malnutrition in institutionalized elderly people.
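The study corrects for imperfect test sensitivity and specificity inside a Bayesian two-level logistic regression. The underlying idea can be illustrated with the much simpler classical (Rogan-Gladen) correction of an apparent prevalence for a binary test with known sensitivity and specificity (a sketch of the general principle, not the authors' model):

```python
def rogan_gladen(apparent_prevalence, sensitivity, specificity):
    """Classical correction of an apparent (test-based) prevalence for
    imperfect test sensitivity and specificity:
    true = (apparent + specificity - 1) / (sensitivity + specificity - 1).
    Valid when sensitivity + specificity > 1."""
    return (apparent_prevalence + specificity - 1.0) / (sensitivity + specificity - 1.0)
```

With a true malnutrition prevalence of 0.30, a test with sensitivity 0.8 and specificity 0.9 would flag 0.30·0.8 + 0.70·0.1 = 0.31 of subjects; the correction recovers 0.30 from that apparent figure. The Bayesian approach in the paper additionally propagates the uncertainty in sensitivity and specificity rather than treating them as known constants.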

  15. Assessing error sources for Landsat time series analysis for tropical test sites in Viet Nam and Ethiopia

    Science.gov (United States)

    Schultz, Michael; Verbesselt, Jan; Herold, Martin; Avitabile, Valerio

    2013-10-01

    Researchers who use remotely sensed data can spend half of their total effort on preparing the data for analysis. If this data preprocessing does not match the application, the time spent on data analysis can increase considerably and inaccuracies can result. Despite the existence of a number of methods for pre-processing Landsat time series, each method has shortcomings, particularly for mapping forest changes under varying illumination, data availability and atmospheric conditions. Given the requirements for mapping forest changes defined by the United Nations (UN) Reducing Emissions from Deforestation and Forest Degradation (REDD) program, accurate reporting of the spatio-temporal properties of these changes is necessary. We compared the impact of three fundamentally different radiometric preprocessing techniques, Moderate Resolution Atmospheric TRANsmission (MODTRAN), Second Simulation of a Satellite Signal in the Solar Spectrum (6S) and simple Dark Object Subtraction (DOS), on mapping forest changes using Landsat time series data. A modification of Breaks For Additive Season and Trend (BFAST) monitor was used to jointly map the spatial and temporal agreement of forest changes at test sites in Ethiopia and Viet Nam. The suitability of the pre-processing methods for the occurring forest change drivers was assessed using recently captured ground truth and high-resolution data (1000 points). A method for creating the robust generic forest maps used for the sampling design is presented. An assessment of error sources was performed, identifying haze as a major source of commission error in the time series analysis.

  16. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  17. Standard Practice for Minimizing Dosimetry Errors in Radiation Hardness Testing of Silicon Electronic Devices Using Co-60 Sources

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This practice covers recommended procedures for the use of dosimeters, such as thermoluminescent dosimeters (TLD's), to determine the absorbed dose in a region of interest within an electronic device irradiated using a Co-60 source. Co-60 sources are commonly used for the absorbed dose testing of silicon electronic devices. Note 1—This absorbed-dose testing is sometimes called “total dose testing” to distinguish it from “dose rate testing.” Note 2—The effects of ionizing radiation on some types of electronic devices may depend on both the absorbed dose and the absorbed dose rate; that is, the effects may be different if the device is irradiated to the same absorbed-dose level at different absorbed-dose rates. Absorbed-dose rate effects are not covered in this practice but should be considered in radiation hardness testing. 1.2 The principal potential error for the measurement of absorbed dose in electronic devices arises from non-equilibrium energy deposition effects in the vicinity o...

  18. Resimulation of noise: a precision estimator for least square error curve-fitting tested for axial strain time constant imaging

    Science.gov (United States)

    Nair, S. P.; Righetti, R.

    2015-05-01

    Recent elastography techniques focus on imaging the properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. The strain-versus-time behavior of tissues undergoing creep compression is known to be non-linear, and in non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we develop and test a method to provide a reliability measure for non-linear LSE parameter estimates, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experiment realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
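The RoN idea (refit after re-adding noise at the level estimated from the fit residuals, and take the spread of the refitted parameters as the precision estimate) can be sketched as a parametric bootstrap. The mono-exponential strain model, grid ranges, and noise level below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_tau(t, y, taus):
    # Grid-search LSE fit of y ~ A*(1 - exp(-t/tau)); returns (A, tau)
    best = None
    for tau in taus:
        f = 1.0 - np.exp(-t / tau)
        A = (f @ y) / (f @ f)                 # closed-form amplitude for this tau
        sse = float(np.sum((y - A * f) ** 2))
        if best is None or sse < best[0]:
            best = (sse, A, tau)
    return best[1], best[2]

# Synthetic creep-strain record: true time constant 2.0 s, amplitude 1.0
t = np.linspace(0.05, 10.0, 200)
tau_grid = np.linspace(0.5, 5.0, 181)
y = 1.0 - np.exp(-t / 2.0) + rng.normal(0.0, 0.02, t.size)

A_hat, tau_hat = fit_tau(t, y, tau_grid)
resid_sd = float(np.std(y - A_hat * (1.0 - np.exp(-t / tau_hat))))

# RoN step: re-add noise at the estimated level to the fitted curve and refit;
# the spread of the refitted time constants estimates the LSE precision
tau_stars = [fit_tau(t, A_hat * (1.0 - np.exp(-t / tau_hat))
                     + rng.normal(0.0, resid_sd, t.size), tau_grid)[1]
             for _ in range(200)]
tau_precision = float(np.std(tau_stars))
```

The single-realization requirement is what distinguishes this from an ordinary repeated-experiment precision estimate: only the one measured record `y` is used.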

  19. A Sensitivity Study of Human Errors in Optimizing Surveillance Test Interval (STI) and Allowed Outage Time (AOT) of Standby Safety System

    International Nuclear Information System (INIS)

    Chung, Dae Wook; Shin, Won Ky; You, Young Woo; Yang, Hui Chang

    1998-01-01

    In most cases, the surveillance test intervals (STIs), allowed outage times (AOTs), and testing strategies of safety components in nuclear power plants are prescribed in plant technical specifications. In general, a standby safety system is required to be redundant (i.e., composed of multiple components), and these components are tested under either a staggered or a sequential test strategy. In this study, a linear model is presented to incorporate the effects of human errors associated with testing into the evaluation of unavailability. The average unavailabilities of 1/4 and 2/4 redundant systems are computed considering human error and testing strategy. The adverse effects of testing on system unavailability, such as component wear and test-induced transients, have been modelled. The final outcome of this study is an optimized human error domain, obtained from a 3-D human error sensitivity analysis by selecting a finely classified segment. The results of the sensitivity analysis show that the STI and AOT can be optimized provided the human error probability is maintained within an allowable range. (authors)
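A minimal sketch of a linear unavailability model of this kind, assuming a simple additive human-error term and independent components (the paper's model details are not given in the abstract, so the terms and parameter values here are illustrative):

```python
from math import comb

def component_unavailability(lam, tau, p_he):
    """Average unavailability of a periodically tested standby component.
    lam:  standby failure rate (per hour)
    tau:  surveillance test interval (hours)
    p_he: probability that a test-related human error leaves the component
          inoperable until the next test (simple additive term)."""
    return lam * tau / 2.0 + p_he

def system_unavailability(u, n=4, k=1):
    """k-out-of-n:G redundant system of independent components, each with
    average unavailability u: the system is down when more than n-k
    components are down simultaneously."""
    return sum(comb(n, m) * u**m * (1.0 - u)**(n - m)
               for m in range(n - k + 1, n + 1))

# Sensitivity of 1/4 and 2/4 systems to the human error probability
for p_he in (0.0, 1e-3, 1e-2):
    u = component_unavailability(lam=1e-5, tau=720.0, p_he=p_he)
    one_of_four = system_unavailability(u, n=4, k=1)
    two_of_four = system_unavailability(u, n=4, k=2)
```

Scanning `p_he` this way mirrors the study's sensitivity analysis: system unavailability stays near its test-interval-driven floor only while the human error probability remains small.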

  20. Semantic error patterns on the Boston Naming Test in normal aging, amnestic mild cognitive impairment, and mild Alzheimer's disease: is there semantic disruption?

    Science.gov (United States)

    Balthazar, Marcio Luiz Figueredo; Cendes, Fernando; Damasceno, Benito Pereira

    2008-11-01

    Naming difficulty is common in Alzheimer's disease (AD), but the nature of this problem is not well established. The authors investigated the presence of semantic breakdown and the pattern of general and semantic errors in patients with mild AD, patients with amnestic mild cognitive impairment (aMCI), and normal controls by examining their spontaneous answers on the Boston Naming Test (BNT) and verifying whether they needed, or benefited from, semantic and phonemic cues. The errors in spontaneous answers were classified into four mutually exclusive categories (semantic errors, visual paragnosia, phonological errors, and omission errors), and the semantic errors were further subclassified as coordinate, superordinate, and circumlocutory. Patients with aMCI performed normally on the BNT and needed fewer semantic and phonemic cues than patients with mild AD. After semantic cues, subjects with aMCI and control subjects gave more correct answers than patients with mild AD, but after phonemic cues there was no difference between the three groups, suggesting that the low performance of patients with AD cannot be completely explained by semantic breakdown. Patterns of spontaneous naming errors and subtypes of semantic errors were similar in the three groups, with error frequency decreasing from coordinate to superordinate to circumlocutory subtypes.

  1. Identifying subassemblies by ultrasound to prevent fuel handling error in sodium fast reactors: First test performed in water

    International Nuclear Information System (INIS)

    Paumel, Kevin; Lhuillier, Christian

    2015-01-01

    Identifying subassemblies by ultrasound is a method that is being considered to prevent handling errors in sodium fast reactors. It is based on the reading of a code (aligned notches) engraved on the subassembly head by an emitting/receiving ultrasonic sensor. This reading is carried out in sodium with high temperature transducers. The resulting one-dimensional C-scan can be likened to a binary code expressing the subassembly type and number. The first test performed in water investigated two parameters: width and depth of the notches. The code remained legible for notches as thin as 1.6 mm wide. The impact of the depth seems minor in the range under investigation. (authors)

  2. The opportunistic screening of refractive errors in school-going children by pediatrician using enhanced Brückner test

    Directory of Open Access Journals (Sweden)

    Piyush Jain

    2016-01-01

    Aim: The aim of this study was to compare the results of the enhanced Brückner test (EBT) performed by a pediatrician and an experienced pediatric ophthalmologist. Subjects and Methods: In this prospective double-masked cohort study, a pediatrician and a pediatric ophthalmologist performed the EBT in a classroom of a school in semi-dark lighting conditions using a direct ophthalmoscope. The results of the test were compared using a 2 × 2 Bayesian table and kappa statistics. The findings of the pediatric ophthalmologist were considered the gold standard. Results: Two hundred and thirty-six eyes of 118 subjects, mean age 6.8 ± 0.5 years (range, 5.4–7.8 years), were examined. The time taken to complete the test was <10 s per subject. The ophthalmologist identified 59 eyes as ametropic (12 hyperopic and 47 myopic) and 177 as emmetropic, compared to 61 eyes identified as ametropic and 175 as emmetropic by the pediatrician. The prevalence of test positives was 25.9%. The sensitivity of the pediatrician was 90.2%, specificity was 97.7%, predictive value of the positive test was 93.2%, and predictive value of the negative test was 96.6%. The clinical agreement (kappa) between the pediatric ophthalmologist and the pediatrician was 0.9. Conclusion: The results of the EBT performed by the pediatrician were comparable to those of an experienced pediatric ophthalmologist. Opportunistic screening of refractive errors using the EBT by a pediatrician can be an important approach in the detection of ametropia in children.
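Sensitivity, specificity, PPV, NPV, and Cohen's kappa all follow from a 2 × 2 table against the gold standard. The counts below are hypothetical (chosen to roughly reproduce the reported percentages; the study's raw table is not given in the abstract):

```python
def screening_metrics(tp, fp, fn, tn):
    """Diagnostic accuracy metrics and Cohen's kappa from a 2x2 table
    (rows: index test result, columns: gold-standard result)."""
    n = tp + fp + fn + tn
    sens = tp / (tp + fn)                      # true positive rate
    spec = tn / (tn + fp)                      # true negative rate
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    po = (tp + tn) / n                         # observed agreement
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n**2  # chance agreement
    kappa = (po - pe) / (1 - pe)
    return dict(sens=sens, spec=spec, ppv=ppv, npv=npv, kappa=kappa)

# Hypothetical counts for 236 eyes, not the study's actual cross-tabulation
m = screening_metrics(tp=55, fp=4, fn=6, tn=171)
```

With these counts, sensitivity is about 0.90 and kappa about 0.89, in line with the magnitudes the abstract reports.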

  3. 40 CFR 1048.505 - How do I test engines using steady-state duty cycles, including ramped-modal testing?

    Science.gov (United States)

    2010-07-01

    ...-state duty cycles, including ramped-modal testing? 1048.505 Section 1048.505 Protection of Environment... SPARK-IGNITION ENGINES Test Procedures § 1048.505 How do I test engines using steady-state duty cycles... some cases, we allow you to choose the appropriate steady-state duty cycle for an engine. In these...

  4. Type I error rates of rare single nucleotide variants are inflated in tests of association with non-normally distributed traits using simple linear regression methods.

    Science.gov (United States)

    Schwantes-An, Tae-Hwi; Sung, Heejong; Sabourin, Jeremy A; Justice, Cristina M; Sorant, Alexa J M; Wilson, Alexander F

    2016-01-01

    In this study, the effects of (a) the minor allele frequency of the single nucleotide variant (SNV), (b) the degree of departure from normality of the trait, and (c) the position of the SNVs on type I error rates were investigated in the Genetic Analysis Workshop (GAW) 19 whole exome sequence data. To test the distribution of the type I error rate, 5 simulated traits were considered: standard normal and gamma-distributed traits; 2 transformed versions of the gamma trait (log10 and rank-based inverse normal transformations); and trait Q1 provided by GAW 19. Each trait was tested with 313,340 SNVs. Tests of association were performed with simple linear regression, and average type I error rates were determined for minor allele frequency classes. Rare SNVs (minor allele frequency < 0.05) showed inflated type I error rates for non-normally distributed traits that increased as the minor allele frequency decreased. The inflation of average type I error rates increased as the significance threshold decreased. Normally distributed traits did not show inflated type I error rates with respect to the minor allele frequency for rare SNVs. There was no consistent effect of transformation on the uniformity of the distribution of the location of SNVs with a type I error.
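The simulation recipe described (regress a simulated null trait on rare SNV genotypes and count rejections) can be sketched as below. The sample size, MAF, and gamma shape are arbitrary choices, and note that the inflation the study reports is most pronounced at significance thresholds far stricter than the 5% level used in this small sketch:

```python
import numpy as np

rng = np.random.default_rng(42)
n, maf, n_sim = 1000, 0.01, 2000

rejections = {"normal": 0, "gamma": 0}
for _ in range(n_sim):
    g = rng.binomial(2, maf, n).astype(float)   # rare SNV genotypes; H0 is true
    x = g - g.mean()
    sxx = float(x @ x)
    if sxx == 0.0:                              # no minor alleles drawn
        continue
    for name, y in (("normal", rng.normal(size=n)),
                    ("gamma", rng.gamma(shape=0.5, scale=1.0, size=n))):
        b = float(x @ y) / sxx                  # simple linear regression slope
        resid = (y - y.mean()) - b * x
        se = np.sqrt(float(resid @ resid) / (n - 2) / sxx)
        if abs(b / se) > 1.96:                  # nominal two-sided 5% test
            rejections[name] += 1

rates = {k: v / n_sim for k, v in rejections.items()}
```

The normal trait recovers the nominal rate; replicating the reported rare-variant inflation requires sweeping MAF downward and tightening the threshold, which needs far more simulations than shown here.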

  5. Resemblance profiles as clustering decision criteria: Estimating statistical power, error, and correspondence for a hypothesis test for multivariate structure.

    Science.gov (United States)

    Kilborn, Joshua P; Jones, David L; Peebles, Ernst B; Naar, David F

    2017-04-01

    Clustering data continues to be a highly active area of data analysis, and resemblance profiles are being incorporated into ecological methodologies as a hypothesis testing-based approach to clustering multivariate data. However, these new clustering techniques have not been rigorously tested to determine the performance variability based on the algorithm's assumptions or any underlying data structures. Here, we use simulation studies to estimate the statistical error rates for the hypothesis test for multivariate structure based on dissimilarity profiles (DISPROF). We concurrently tested a widely used algorithm that employs the unweighted pair group method with arithmetic mean (UPGMA) to estimate the proficiency of clustering with DISPROF as a decision criterion. We simulated unstructured multivariate data from different probability distributions with increasing numbers of objects and descriptors, and grouped data with increasing overlap, overdispersion for ecological data, and correlation among descriptors within groups. Using simulated data, we measured the resolution and correspondence of clustering solutions achieved by DISPROF with UPGMA against the reference grouping partitions used to simulate the structured test datasets. Our results highlight the dynamic interactions between dataset dimensionality, group overlap, and the properties of the descriptors within a group (i.e., overdispersion or correlation structure) that are relevant to resemblance profiles as a clustering criterion for multivariate data. These methods are particularly useful for multivariate ecological datasets that benefit from distance-based statistical analyses. We propose guidelines for using DISPROF as a clustering decision tool that will help future users avoid potential pitfalls during the application of methods and the interpretation of results.

  6. Outlier removal, sum scores, and the inflation of the Type I error rate in independent samples t tests: the power of alternatives and recommendations.

    Science.gov (United States)

    Bakker, Marjan; Wicherts, Jelte M

    2014-09-01

    In psychology, outliers are often excluded before running an independent samples t test, and data are often nonnormal because of the use of sum scores based on tests and questionnaires. This article concerns the handling of outliers in the context of independent samples t tests applied to nonnormal sum scores. After reviewing common practice, we present results of simulations of artificial and actual psychological data, which show that the removal of outliers based on commonly used Z value thresholds severely increases the Type I error rate. We found Type I error rates of above 20% after removing outliers with a threshold value of Z = 2 in a short and difficult test. Inflations of Type I error rates are particularly severe when researchers are given the freedom to alter threshold values of Z after having seen the effects thereof on outcomes. We recommend the use of nonparametric Mann-Whitney-Wilcoxon tests or robust Yuen-Welch tests without removing outliers. These alternatives to independent samples t tests are found to have nominal Type I error rates with a minimal loss of power when no outliers are present in the data and to have nominal Type I error rates and good power when outliers are present. PsycINFO Database Record (c) 2014 APA, all rights reserved.
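A minimal simulation of the inflation mechanism described above, with assumed group sizes and an assumed skewed sum-score distribution (not the authors' exact designs). Both groups are drawn from the same distribution, so every rejection is a Type I error:

```python
import numpy as np

rng = np.random.default_rng(1)

def welch_t(a, b):
    # Welch's t statistic; 1.96 serves as a normal-approximation critical value
    va, vb = a.var(ddof=1) / a.size, b.var(ddof=1) / b.size
    return (a.mean() - b.mean()) / np.sqrt(va + vb)

def drop_outliers(x, z=2.0):
    # remove observations more than z sample SDs from the sample mean
    zz = (x - x.mean()) / x.std(ddof=1)
    return x[np.abs(zz) <= z]

n_sim, n = 2000, 25
reject_raw = reject_trim = 0
for _ in range(n_sim):
    # skewed sum scores (an "easy" 15-item test); H0 true by construction
    a = rng.binomial(15, 0.9, n).astype(float)
    b = rng.binomial(15, 0.9, n).astype(float)
    reject_raw += abs(welch_t(a, b)) > 1.96
    reject_trim += abs(welch_t(drop_outliers(a), drop_outliers(b))) > 1.96

rate_raw, rate_trim = reject_raw / n_sim, reject_trim / n_sim
```

Per-group trimming of a skewed distribution cuts only one tail, which shifts the trimmed means unpredictably between groups while shrinking the within-group variance, so the trimmed test rejects too often. The authors' recommended alternatives (Mann-Whitney-Wilcoxon or Yuen-Welch without outlier removal) avoid this.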

  7. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    Science.gov (United States)

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear, and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms used to communicate neuropsychological test scores. The review found confusing and overlapping terms in use to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system for communicating neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is offered as a means to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.
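The seven Q-Simple labels can be expressed as a simple banding function. The numeric cut points below are illustrative conventional standard-score bands (mean 100, SD 15), since the abstract does not give the proposed boundaries:

```python
def q_simple(standard_score):
    """Map a standardized score (mean 100, SD 15) to a Q-Simple descriptor.
    Cut points are illustrative conventional bands, NOT the boundaries
    proposed in the paper, which the abstract does not specify."""
    bands = [(130, 'very superior'), (120, 'superior'), (110, 'high average'),
             (90, 'average'), (80, 'low average'), (70, 'borderline')]
    for cut, label in bands:
        if standard_score >= cut:
            return label
    return 'abnormal/impaired'
```

A fixed, exhaustive mapping like this is exactly what removes the overlap and ambiguity the review found: every score falls in exactly one band with one agreed label.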

  8. A review of a radioactive material shipping container including design, testing, upgrading compliance program and shipping logistics

    International Nuclear Information System (INIS)

    Celovsky, A.; Lesco, R.; Gale, B.; Sypes, J.

    2003-01-01

    Ten years ago Atomic Energy of Canada developed a Type B(U)-85 shipping container for the global transport of highly radioactive materials. This paper reviews the development of the container, including a summary of the design requirements, a review of the selected materials and key design elements, and the results of the major qualification tests (drop testing, fire test, leak tightness testing, and shielding integrity tests). As a result of the testing, improvements to the structural, thermal and containment design were made. Such improvements, and reasons thereof, are noted. Also provided is a summary of the additional analysis work required to upgrade the package from a Type B(U) to a Type B(F), i.e. essentially upgrading the container to include fissile radioisotopes to the authorized radioactive contents list. Having a certified shipping container is only one aspect governing the global shipments of radioactive material. By necessity the shipment of radioactive material is a highly regulated environment. This paper also explores the experiences with other key aspects of radioactive shipments, including the service procedures used to maintain the container certification, the associated compliance program for radioactive material shipments, and the shipping logistics involved in the transport. (author)

  9. Test-Retest Reliability of the Adaptive Chemistry Assessment Survey for Teachers: Measurement Error and Alternatives to Correlation

    Science.gov (United States)

    Harshman, Jordan; Yezierski, Ellen

    2016-01-01

    Determining the error of measurement is a necessity for researchers engaged in bench chemistry, chemistry education research (CER), and a multitude of other fields. Discussions regarding what constructs measurement error entails and how to best measure them have occurred, but the critiques about traditional measures have yielded few alternatives.…

  10. Quantitative EEG analysis using error reduction ratio-causality test; validation on simulated and real EEG data.

    Science.gov (United States)

    Sarrigiannis, Ptolemaios G; Zhao, Yifan; Wei, Hua-Liang; Billings, Stephen A; Fotheringham, Jayne; Hadjivassiliou, Marios

    2014-01-01

    To introduce a new method of quantitative EEG analysis in the time domain, the error reduction ratio (ERR)-causality test. To compare performance against cross-correlation and coherence with phase measures. A simulation example was used as a gold standard to assess the performance of ERR-causality, against cross-correlation and coherence. The methods were then applied to real EEG data. Analysis of both simulated and real EEG data demonstrates that ERR-causality successfully detects dynamically evolving changes between two signals, with very high time resolution, dependent on the sampling rate of the data. Our method can properly detect both linear and non-linear effects, encountered during analysis of focal and generalised seizures. We introduce a new quantitative EEG method of analysis. It detects real time levels of synchronisation in the linear and non-linear domains. It computes directionality of information flow with corresponding time lags. This novel dynamic real time EEG signal analysis unveils hidden neural network interactions with a very high time resolution. These interactions cannot be adequately resolved by the traditional methods of coherence and cross-correlation, which provide limited results in the presence of non-linear effects and lack fidelity for changes appearing over small periods of time. Copyright © 2013 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
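The single-regressor error reduction ratio at the heart of the method has a compact textbook form: the fraction of a signal's energy explained by a least-squares fit on one candidate regressor. The lagged two-channel example below is a synthetic illustration of how ERR exposes directionality, not the paper's time-varying implementation:

```python
import numpy as np

def err_ratio(y, x):
    """Error reduction ratio of a single candidate regressor x for signal y:
    the fraction of y's energy explained by the least-squares fit on x."""
    return float((x @ y) ** 2 / ((x @ x) * (y @ y)))

rng = np.random.default_rng(7)
n, lag = 2000, 3
a = rng.normal(size=n)                    # channel A (the driver)
b = 0.2 * rng.normal(size=n)              # channel B = delayed copy of A + noise
b[lag:] += 0.8 * a[:-lag]

err_fwd = err_ratio(b[lag:], a[:-lag])    # does past A explain B?  (high)
err_bwd = err_ratio(a[lag:], b[:-lag])    # does past B explain A?  (near zero)
```

Scanning the lag at which the forward ERR peaks recovers the coupling delay, which is the time-lag information the abstract refers to.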

  11. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, General requirements for the competence of testing and calibration laboratories, are common measures used to reduce errors originating from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, Rule 702 of the Federal Rules of Evidence mandates that judges consider factors such as peer review to ensure the reliability of expert testimony. As scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors carry a higher risk of unfair decision-making, they should receive more attention than false-negative errors.

  12. Evaluation and Comparison of Multiple Test Methods, Including Real-time PCR, for Legionella Detection in Clinical Specimens

    Science.gov (United States)

    Peci, Adriana; Winter, Anne-Luise; Gubbay, Jonathan B.

    2016-01-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and polymerase chain reaction (PCR) test methods and to determine if sputum is an acceptable alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at Public Health Ontario Laboratories from 1st January, 2010 to 30th April, 2014, as part of routine clinical testing. We found the sensitivity of the urinary antigen test (UAT) compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of the UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yielded similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine from patients being tested for Legionella when BAL submission is not practical. PMID:27630979

  13. Single-variant and multi-variant trend tests for genetic association with next-generation sequencing that are robust to sequencing error.

    Science.gov (United States)

    Kim, Wonkuk; Londono, Douglas; Zhou, Lisheng; Xing, Jinchuan; Nato, Alejandro Q; Musolf, Anthony; Matise, Tara C; Finch, Stephen J; Gordon, Derek

    2012-01-01

    As with any new technology, next-generation sequencing (NGS) has potential advantages and potential challenges. One advantage is the identification of multiple causal variants for disease that might otherwise be missed by SNP-chip technology. One potential challenge is misclassification error (as with any emerging technology) and the issue of power loss due to multiple testing. Here, we develop an extension of the linear trend test for association that incorporates differential misclassification error and may be applied to any number of SNPs. We call the statistic the linear trend test allowing for error, applied to NGS, or LTTae,NGS. This statistic allows for differential misclassification. The observed data are phenotypes for unrelated cases and controls, coverage, and the number of putative causal variants for every individual at all SNPs. We simulate data considering multiple factors (disease mode of inheritance, genotype relative risk, causal variant frequency, sequence error rate in cases, sequence error rate in controls, number of loci, and others) and evaluate type I error rate and power for each vector of factor settings. We compare our results with two recently published NGS statistics. Also, we create a fictitious disease model based on downloaded 1000 Genomes data for 5 SNPs and 388 individuals, and apply our statistic to those data. We find that the LTTae,NGS maintains the correct type I error rate in all simulations (differential and non-differential error), while the other statistics show large inflation in type I error for lower coverage. Power for all three methods is approximately the same for all three statistics in the presence of non-differential error. Application of our statistic to the 1000 Genomes data suggests that, for the data downloaded, there is a 1.5% sequence misclassification rate over all SNPs. Finally, application of the multi-variant form of LTTae,NGS shows high power for a number of simulation settings, although it can have
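The classical Cochran-Armitage linear trend test that LTTae,NGS extends can be sketched for a 2 × 3 genotype table with additive allele weights. The misclassification-error and coverage machinery of the published statistic is not reproduced here:

```python
import numpy as np

def catt_z(case_counts, control_counts, weights=(0, 1, 2)):
    """Cochran-Armitage linear trend z-statistic for a 2x3 genotype table:
    counts of subjects carrying 0/1/2 copies of the variant allele in
    cases and controls, with additive weights (the score-test form)."""
    r = np.asarray(case_counts, float)
    s = np.asarray(control_counts, float)
    n = r + s
    N, R = n.sum(), r.sum()
    w = np.asarray(weights, float)
    T = w @ r - R / N * (w @ n)              # observed minus expected trend
    p = R / N
    var = p * (1 - p) * (w**2 @ n - (w @ n) ** 2 / N)
    return float(T / np.sqrt(var))

# More variant alleles among cases pushes the statistic positive
z = catt_z(case_counts=[50, 30, 20], control_counts=[60, 25, 15])
```

LTTae,NGS, per the abstract, augments this framework with differential misclassification parameters per group, which is what keeps the type I error rate nominal at low sequencing coverage.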

  14. Evaluation and comparison of multiple test methods, including real-time PCR, for Legionella detection in clinical specimens.

    Directory of Open Access Journals (Sweden)

    Adriana Peci

    2016-08-01

    Legionella is a Gram-negative bacterium that can cause Pontiac fever, a mild upper respiratory infection, and Legionnaires' disease, a more severe illness. We aimed to compare the performance of urine antigen, culture, and PCR test methods and to determine if sputum is an alternative to the use of more invasive bronchoalveolar lavage (BAL). Data for this study included specimens tested for Legionella at PHOL from January 1, 2010 to April 30, 2014, as part of routine clinical testing. We found the sensitivity of the UAT compared to culture to be 87%, specificity 94.7%, positive predictive value (PPV) 63.8%, and negative predictive value (NPV) 98.5%. Sensitivity of the UAT compared to PCR was 74.7%, specificity 98.3%, PPV 77.7%, and NPV 98.1%. Of 146 patients who had a Legionella-positive result by PCR, only 66 (45.2%) also had a positive result by culture. Sensitivity for culture was the same using either sputum or BAL (13.6%); sensitivity for PCR was 10.3% for sputum and 12.8% for BAL. Both sputum and BAL yielded similar results regardless of testing method (Fisher exact p-values = 1.0 for each test). In summary, all test methods have inherent weaknesses in identifying Legionella; therefore, more than one testing method should be used. Obtaining a single specimen type from patients with pneumonia limits the ability to diagnose Legionella, particularly when urine is the specimen type submitted. Given the ease of collection and similar sensitivity to BAL, clinicians are encouraged to submit sputum in addition to urine from patients being tested for Legionella when BAL submission is not practical.

  15. Audit of Trichomonas vaginalis test requesting by community referrers after a change from culture to molecular testing, including a cost analysis.

    Science.gov (United States)

    Bissessor, Liselle; Wilson, Janet; McAuliffe, Gary; Upton, Arlo

    2017-06-16

    Trichomonas vaginalis (TV) prevalence varies among different communities and peoples. The availability of robust molecular platforms for the detection of TV has advanced diagnosis; however, molecular tests are more costly than phenotypic methodologies, and testing all urogenital samples would be expensive. We recently replaced culture methods with the Aptima Trichomonas vaginalis nucleic acid amplification test, run on specific request and as reflex testing by the laboratory, and have audited this change. Data were collected from August 2015 (microbroth culture and microscopy) and August 2016 (Aptima TV assay), including referrer, testing volumes, results, and test cost estimates. In August 2015, 10,299 vaginal swabs were tested; in August 2016, 2,189 specimens (urogenital swabs and urines) were tested. The positivity rate went from 0.9% to 5.3%, and overall more TV infections were detected in 2016. The number needed to test and the cost for one positive TV result were 111 and $902.55 respectively in 2015, and 19 and $368.92 in 2016. Request volumes and positivity rates differed among referrers. The methodology change was associated with higher overall detection of TV and reductions in the number needed to test and the cost per TV diagnosis. Our audit suggests that there is room for improvement in TV test requesting in our community.
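The audit's number-needed-to-test figures follow directly from volume and positivity rate. The per-test costs below are assumptions back-derived from the reported totals for illustration, not figures stated in the audit:

```python
def cost_per_positive(n_tested, positivity, unit_cost):
    """Number needed to test for one positive result and the testing cost
    per positive, given volume, positivity rate, and per-test cost."""
    n_positive = round(n_tested * positivity)
    nnt = n_tested / n_positive
    return nnt, nnt * unit_cost

# Audit volumes and positivity rates; unit costs are illustrative assumptions
nnt_2015, cost_2015 = cost_per_positive(10299, 0.009, 8.13)    # culture era
nnt_2016, cost_2016 = cost_per_positive(2189, 0.053, 19.42)    # Aptima era
```

The arithmetic shows why a dearer assay can still cut the cost per diagnosis: restricting testing raised the positivity rate nearly sixfold, so far fewer tests are spent per positive result.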

  16. Recommendation to include fragrance mix 2 and hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral) in the European baseline patch test series.

    Science.gov (United States)

    Bruze, Magnus; Andersen, Klaus Ejner; Goossens, An

    2008-03-01

    The fragrance mix currently used in the European baseline patch test series (baseline series) fails to detect a substantial number of clinically relevant fragrance allergies. To investigate whether it is justified to include hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral) and fragrance mix 2, containing hydroxyisohexyl 3-cyclohexene carboxaldehyde, citral, farnesol, coumarin, citronellol, and alpha-hexyl cinnamal, in the European baseline patch test series, the literature was surveyed for reported frequencies of contact allergy and allergic contact dermatitis from fragrance mix 2 and hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral), as well as reported results of experimental provocation tests. Fragrance mix 2 has been demonstrated to be a useful additional marker of fragrance allergy, with contact allergy rates up to 5% when included in various national baseline patch test series. Of the fragrance substances present in fragrance mix 2, hydroxyisohexyl 3-cyclohexene carboxaldehyde is the most common sensitizer. Contact allergy rates between 1.5% and 3% have been reported from various European centres for hydroxyisohexyl 3-cyclohexene carboxaldehyde at 5% in petrolatum (pet.) when tested in consecutive dermatitis patients. From 2008, pet. preparations of fragrance mix 2 at 14% w/w (5.6 mg/cm(2)) and hydroxyisohexyl 3-cyclohexene carboxaldehyde at 5% w/w (2.0 mg/cm(2)) are recommended for inclusion in the baseline series. With the Finn Chamber technique, a dose of 20 mg of pet. preparation is recommended. Whenever there is a positive reaction to fragrance mix 2, additional patch testing with the 6 ingredients (5 if there are simultaneous positive reactions to hydroxyisohexyl 3-cyclohexene carboxaldehyde and fragrance mix 2) is recommended.

  17. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,
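The comparison of a received word with code words via error vectors can be shown with a toy binary code: XOR gives each candidate error vector, and minimum Hamming weight picks the likeliest (the standard decision rule for a binary symmetric channel with crossover probability below 1/2):

```python
def error_vectors(received, codebook):
    """XOR the received (0,1)-word with every code word to obtain the set of
    candidate error vectors, keyed by code word."""
    return {tuple(c): [x ^ y for x, y in zip(received, c)] for c in codebook}

def decode(received, codebook):
    """Minimum-Hamming-weight decoding: choose the code word whose error
    vector flips the fewest bits."""
    errs = error_vectors(received, codebook)
    best = min(errs, key=lambda c: sum(errs[c]))
    return list(best), errs[best]

codebook = [[0, 0, 0, 0, 0], [1, 1, 1, 1, 1]]   # toy length-5 repetition code
word, err = decode([0, 1, 0, 0, 1], codebook)
```

Here the received word differs from the all-zeros code word in 2 positions and from the all-ones word in 3, so the all-zeros word wins.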

  18. On setting NRC alarm thresholds for inventory differences and process unit loss estimators: Clarifying their statistical basis with hypothesis testing methods and error propagation models from Jaech, Bowen and Bennett and IAEA

    International Nuclear Information System (INIS)

    Ong, L.

    1995-01-01

    Major fuel cycle facilities in the US private sector are required to respond, at predetermined alarm levels, to various special nuclear material loss estimators in the material control and accounting (MC and A) area. This paper presents US Nuclear Regulatory Commission (NRC) policy, along with the underlying statistical rationale, for establishing and inspecting the application of thresholds to detect excessive inventory differences (ID). Accordingly, escalating responsive action must be taken to satisfy NRC's MC and A regulations for low-enriched uranium (LEU) fuel conversion/fabrication plants and LEU enrichment facilities. The establishment of appropriate ID detection thresholds depends on a site-specific goal quantity, a specified probability of detection, and the standard error of the ID. Regulatory guidelines for ID significance tests and process control tests conducted by licensees with highly enriched uranium are similarly rationalized in definitive hypothesis-testing terms, including null and alternative hypotheses; statistical errors of the first, second, third, and fourth kinds; and suitable test statistics, uncertainty estimates, prevailing assumptions, and critical values for comparisons. Conceptual approaches are described in the context of significance test considerations and measurement error models, including the treatment of so-called ''systematic error variance'' effects as observations of random variables in the statistical sense.
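Under a normal measurement-error model, the dependence of an ID alarm threshold on the standard error, the false-alarm rate, and the detection probability for a goal-quantity loss can be sketched as follows. The parameter values are illustrative, not NRC-prescribed:

```python
from statistics import NormalDist

def id_alarm_threshold(sigma_id, false_alarm_prob=0.025):
    """One-sided inventory-difference alarm threshold set so that a
    zero-loss facility alarms with probability false_alarm_prob
    (normal measurement-error model; H0: true ID = 0)."""
    return NormalDist().inv_cdf(1.0 - false_alarm_prob) * sigma_id

def detection_probability(goal_quantity, sigma_id, threshold):
    """Probability the ID estimator exceeds the threshold when a loss
    equal to the goal quantity has actually occurred (H1: true ID = G)."""
    return 1.0 - NormalDist(mu=goal_quantity, sigma=sigma_id).cdf(threshold)

thr = id_alarm_threshold(sigma_id=1.0)                  # about 1.96 * sigma
p_detect = detection_probability(goal_quantity=5.0, sigma_id=1.0, threshold=thr)
```

This makes the abstract's point concrete: the threshold is driven by the standard error of the ID, and the achievable detection probability for a fixed goal quantity degrades as that standard error grows.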

  19. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  20. A possible alternative to the error prone modified Hodge test to correctly identify the carbapenemase producing Gram-negative bacteria

    Directory of Open Access Journals (Sweden)

    S S Jeremiah

    2014-01-01

    Context: The modified Hodge test (MHT) is widely used as a screening test for the detection of carbapenemases in Gram-negative bacteria. This test has several pitfalls in terms of validity and interpretation. Also the test has a very low sensitivity in detecting the New Delhi metallo-β-lactamase (NDM). Considering the degree of dissemination of the NDM and the growing pandemic of carbapenem resistance, a more accurate alternative test is needed at the earliest. Aims: The study intends to compare the performance of the MHT with the commercially available Neo-Sensitabs - Carbapenemases/Metallo-β-Lactamase (MBL) Confirmative Identification pack to find out whether the latter could be an efficient alternative to the former. Settings and Design: A total of 105 isolates of Klebsiella pneumoniae resistant to imipenem and meropenem, collected prospectively over a period of 2 years, were included in the study. Subjects and Methods: The study isolates were tested with the MHT, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack and polymerase chain reaction (PCR) for detecting the blaNDM-1 gene. Results: Among the 105 isolates, the MHT identified 100 isolates as carbapenemase producers. In the five isolates negative for the MHT, four were found to produce MBLs by the Neo-Sensitabs. The Neo-Sensitabs did not have any false negatives when compared against the PCR. Conclusions: The MHT can give false negative results, which lead to failure in detecting the carbapenemase producers. Also considering the other pitfalls of the MHT, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack could be a more efficient alternative for detection of carbapenemase production in Gram-negative bacteria.

  1. A possible alternative to the error prone modified Hodge test to correctly identify the carbapenemase producing Gram-negative bacteria.

    Science.gov (United States)

    Jeremiah, S S; Balaji, V; Anandan, S; Sahni, R D

    2014-01-01

    The modified Hodge test (MHT) is widely used as a screening test for the detection of carbapenemases in Gram-negative bacteria. This test has several pitfalls in terms of validity and interpretation. Also the test has a very low sensitivity in detecting the New Delhi metallo-β-lactamase (NDM). Considering the degree of dissemination of the NDM and the growing pandemic of carbapenem resistance, a more accurate alternative test is needed at the earliest. The study intends to compare the performance of the MHT with the commercially available Neo-Sensitabs - Carbapenemases/Metallo-β-Lactamase (MBL) Confirmative Identification pack to find out whether the latter could be an efficient alternative to the former. A total of 105 isolates of Klebsiella pneumoniae resistant to imipenem and meropenem, collected prospectively over a period of 2 years were included in the study. The study isolates were tested with the MHT, the Neo-Sensitabs - Carbapenemases/MBL Confirmative Identification pack and polymerase chain reaction (PCR) for detecting the blaNDM-1 gene. Among the 105 isolates, the MHT identified 100 isolates as carbapenemase producers. In the five isolates negative for the MHT, four were found to produce MBLs by the Neo-Sensitabs. The Neo-Sensitabs did not have any false negatives when compared against the PCR. The MHT can give false negative results, which lead to failure in detecting the carbapenemase producers. Also considering the other pitfalls of the MHT, the Neo-Sensitabs--Carbapenemases/MBL Confirmative Identification pack could be a more efficient alternative for detection of carbapenemase production in Gram-negative bacteria.
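
Screening-test comparisons like this one reduce to a 2x2 confusion table against the reference method (here, PCR). A minimal sketch of the standard metrics; the counts are hypothetical, for illustration only, not the study's actual cross-tabulation:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Basic screening-test metrics from a 2x2 confusion table,
    where the reference method (e.g. PCR) defines the true status."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 105-isolate panel (not the study's data):
sens, spec, acc = diagnostic_metrics(tp=96, fp=4, fn=4, tn=1)
# sens == 0.96: four true carbapenemase producers missed by the screen
```

A false negative here is the clinically costly cell: it is exactly the failure mode the abstract attributes to the MHT.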

  2. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparison with similar machines using classic approaches complement the descriptions.
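
The MEE criterion replaces mean squared error with an entropy of the classifier's errors, usually Renyi's quadratic entropy estimated by a Parzen window over error pairs. A minimal sketch of that estimator (the kernel width sigma is a free choice, and the error samples are illustrative):

```python
import math

def information_potential(errors, sigma=1.0):
    """Parzen-window estimate of the information potential V of the errors.
    Minimizing Renyi's quadratic entropy H2 = -log(V) is the MEE criterion."""
    n = len(errors)
    s = sigma * math.sqrt(2)  # effective kernel width for pairwise differences
    total = sum(
        math.exp(-((ei - ej) ** 2) / (2 * s * s)) / (s * math.sqrt(2 * math.pi))
        for ei in errors for ej in errors
    )
    return total / (n * n)

def quadratic_renyi_entropy(errors, sigma=1.0):
    return -math.log(information_potential(errors, sigma))

# Concentrated errors have lower entropy than spread-out ones:
low = quadratic_renyi_entropy([0.0, 0.1, -0.1, 0.05])
high = quadratic_renyi_entropy([0.0, 2.0, -2.0, 1.0])
# low < high
```

Minimizing this entropy pushes the error distribution toward a narrow peak, which is the intuition behind MEE training of the classifiers the book covers.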

  3. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    Besides that, at spectacular events a combination of component failure and human error is often found. In particular, the Rasmussen Report and the German Risk Assessment Study show for pressurised water reactors that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. Contrary to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigations in this field appear to be a worthwhile effort. (orig.)

  4. Experimental test of a hot water storage system including a macro-encapsulated phase change material (PCM)

    Science.gov (United States)

    Mongibello, L.; Atrigna, M.; Bianco, N.; Di Somma, M.; Graditi, G.; Risi, N.

    2017-01-01

    Thermal energy storage systems (TESs) are of fundamental importance for many energy systems, essentially because they permit a certain degree of decoupling between the production of heat or cold and its use. In recent years, many works have analysed the addition of a PCM inside a hot water storage tank, as it can allow a reduction of the size of the storage tank, and consequently of its cost and footprint, thanks to the possibility of storing thermal energy as latent heat. The present work focuses on experimental tests carried out with an indoor facility in order to analyse the dynamic behaviour of a hot water storage tank including PCM modules during a charging phase. A commercial bio-based PCM with a melting temperature of 58°C has been used for the purpose. The experimental results for the hot water tank including the PCM modules are presented in terms of the temporal evolution of the axial temperature profile, heat transfer and stored energy, and are compared with the ones obtained by using only water as the energy storage material. Interesting insights, relative to the estimation of the percentage of melted PCM at the end of the experimental test, are presented and discussed.
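
The stored-energy comparison above rests on a simple balance: water stores only sensible heat, while a PCM module crossing its melting point adds a latent term. A minimal sketch; the PCM property values (specific heats, latent heat) are illustrative placeholders, not the datasheet of the commercial bio-based PCM used in the paper:

```python
def stored_energy_water(mass_kg, t_initial, t_final, cp=4.186):
    """Sensible heat stored in the water volume, in kJ (cp in kJ/(kg*K))."""
    return mass_kg * cp * (t_final - t_initial)

def stored_energy_pcm(mass_kg, t_initial, t_final, t_melt,
                      cp_solid=2.0, cp_liquid=2.2, latent=200.0):
    """Sensible + latent heat stored in a fully melted PCM module, in kJ.
    Property values are illustrative, not the paper's PCM datasheet."""
    q = mass_kg * cp_solid * (t_melt - t_initial)    # heat the solid to melting
    q += mass_kg * latent                            # melt it completely
    q += mass_kg * cp_liquid * (t_final - t_melt)    # heat the liquid further
    return q

q_water = stored_energy_water(100.0, 20.0, 70.0)            # 20930 kJ
q_pcm = stored_energy_pcm(5.0, 20.0, 70.0, t_melt=58.0)     # 1512 kJ
```

Per kilogram, the PCM here stores roughly 302 kJ versus about 209 kJ for water over the same 20-70°C charge, which is the size/cost argument the abstract makes.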

  5. Including osteoprotegerin and collagen IV in a score-based blood test for liver fibrosis increases diagnostic accuracy.

    Science.gov (United States)

    Bosselut, Nelly; Taibi, Ludmia; Guéchot, Jérôme; Zarski, Jean-Pierre; Sturm, Nathalie; Gelineau, Marie-Christine; Poggi, Bernard; Thoret, Sophie; Lasnier, Elisabeth; Baudin, Bruno; Housset, Chantal; Vaubourdolle, Michel

    2013-01-16

    Noninvasive methods for liver fibrosis evaluation in chronic liver diseases have been recently developed, i.e. transient elastography (Fibroscan™) and blood tests (Fibrometer®, Fibrotest®, and Hepascore®). In this study, we aimed to design a new score in chronic hepatitis C (CHC) by selecting blood markers in a large panel, and we compared its diagnostic performance with those of other noninvasive methods. Sixteen blood tests were performed in 306 untreated CHC patients included in a multicenter prospective study (ANRS HC EP 23 Fibrostar) using METAVIR histological fibrosis stage as reference. The new score was constructed by nonlinear regression using the most accurate biomarkers. Five markers (alpha-2-macroglobulin, apolipoprotein-A1, AST, collagen IV and osteoprotegerin) were included in the new function called Coopscore©. Using the Obuchowski Index, Coopscore© shows higher diagnostic performance than Fibrometer®, Fibrotest®, Hepascore® and Fibroscan™ in CHC. Association between Fibroscan™ and Coopscore© might avoid 68% of liver biopsies for the diagnosis of significant fibrosis. Coopscore© provides higher accuracy than other noninvasive methods for the diagnosis of liver fibrosis in CHC. The association of Coopscore© with Fibroscan™ increases its predictive value. Copyright © 2012 Elsevier B.V. All rights reserved.

  6. Comparison and clinical utility evaluation of four multiple allergen simultaneous tests including two newly introduced fully automated analyzers

    Directory of Open Access Journals (Sweden)

    John Hoon Rim

    2016-04-01

    Background: We compared the diagnostic performances of two newly introduced fully automated multiple allergen simultaneous test (MAST) analyzers with two conventional MAST assays. Methods: The serum samples from a total of 53 and 104 patients were tested for food panels and inhalant panels, respectively, in four analyzers including AdvanSure AlloScreen (LG Life Science, Korea), AdvanSure Allostation Smart II (LG Life Science), PROTIA Allergy-Q (ProteomeTech, Korea), and RIDA Allergy Screen (R-Biopharm, Germany). We compared not only the total agreement percentages but also positive propensities among the four analyzers. Results: Evaluation of AdvanSure Allostation Smart II as an upgraded version of AdvanSure AlloScreen revealed good concordance with total agreement percentages of 93.0% and 92.2% in the food and inhalant panel, respectively. Comparisons of AdvanSure Allostation Smart II or PROTIA Allergy-Q with RIDA Allergy Screen also showed good concordance performance with positive propensities of the two new analyzers for common allergens (Dermatophagoides farinae and Dermatophagoides pteronyssinus). The changes of cut-off level resulted in various total agreement percentage fluctuations among allergens by different analyzers, although the current cut-off level of class 2 appeared to be generally suitable. Conclusions: AdvanSure Allostation Smart II and PROTIA Allergy-Q presented favorable agreement performances with RIDA Allergy Screen, although positive propensities were noticed in common allergens. Keywords: Multiple allergen simultaneous test, Automated analyzer
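
The total agreement percentage used to compare analyzers here is simply the share of concordant positive/negative calls over the same sample-allergen pairs. A minimal sketch; the calls below are hypothetical, for illustration:

```python
def total_agreement(results_a, results_b):
    """Total agreement percentage between two analyzers over the same
    panel: the share of (sample, allergen) pairs where both calls
    (positive/negative at the chosen class cut-off) coincide."""
    matches = sum(a == b for a, b in zip(results_a, results_b))
    return 100.0 * matches / len(results_a)

# Hypothetical calls for ten sample-allergen pairs (True = positive):
a = [True, True, False, False, True, False, True, False, False, True]
b = [True, False, False, False, True, False, True, True, False, True]
pct = total_agreement(a, b)   # 80.0
```

A "positive propensity" in the abstract's sense would show up as the discordant pairs clustering on one analyzer's side (one device calling positive where the other calls negative) rather than splitting evenly.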

  7. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  8. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s
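
The key idea, treating the standard masses as fit parameters weighted by their own uncertainties, amounts to an extended chi-square. A minimal sketch of such an objective function; the linear response model is an illustrative stand-in for the paper's actual calibration function, and the minimizer (VA02A) is not reproduced:

```python
def chi_squared(params, counts, count_err, masses_nominal, mass_err):
    """Chi-square that fits the standard masses alongside the calibration
    coefficients, weighting detector-response residuals and deviations
    from the nominal gravimetric masses by their respective errors."""
    a, b = params[:2]          # illustrative calibration curve: response = a + b*m
    masses_fit = params[2:]    # one fitted mass per standard
    chi2 = 0.0
    for y, sy, m_fit, m_nom, sm in zip(counts, count_err, masses_fit,
                                       masses_nominal, mass_err):
        chi2 += ((y - (a + b * m_fit)) / sy) ** 2   # response residual
        chi2 += ((m_fit - m_nom) / sm) ** 2         # mass residual
    return chi2

# A perfect fit (exact linear response at the nominal masses) gives zero:
chi2 = chi_squared([0.0, 1.0, 1.0, 2.0, 3.0],
                   counts=[1.0, 2.0, 3.0], count_err=[0.1] * 3,
                   masses_nominal=[1.0, 2.0, 3.0], mass_err=[0.01] * 3)
```

Because the mass residuals carry their own weights, the fit can pull a standard's mass slightly off its nominal value when the detector data justify it, exactly the consistency property the abstract describes.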

  9. Detailed analysis of the supermarket task included on the Japanese version of the Rapid Dementia Screening Test.

    Science.gov (United States)

    Moriyama, Yasushi; Yoshino, Aihide; Muramatsu, Taro; Mimura, Masaru

    2017-05-01

    The supermarket task, which is included in the Japanese version of the Rapid Dementia Screening Test, requires the quick (1 min) generation of words for things that can be bought in a supermarket. Cluster size and switches are investigated during this task. We investigated how the severity of dementia related to cluster size and switches on the supermarket task in patients with Alzheimer's disease. We administered the Japanese version of the Rapid Dementia Screening Test to 250 patients with very mild to severe Alzheimer's disease and to 49 healthy volunteers. Patients had Mini-Mental State Examination scores from 12 to 26 and Clinical Dementia Rating scale scores from 0.5 to 3. Patients were divided into four groups based on their Clinical Dementia Rating score (0.5, 1, 2, 3). We performed statistical analyses between the four groups and control subjects based on cluster size and switch scores on the supermarket task. The score for cluster size and switches deteriorated according to the severity of dementia. Moreover, for subjects with a Clinical Dementia Rating score of 0.5, cluster size was impaired, but switches were intact. Our findings indicate that the scores for cluster size and switches on the supermarket task may be useful for detecting the severity of symptoms of dementia in patients with Alzheimer's disease. © 2016 The Authors. Psychogeriatrics © 2016 Japanese Psychogeriatric Society.
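
Cluster size and switches can be scored mechanically once each generated word is assigned a subcategory. A simplified sketch for illustration; full Troyer-style scoring counts cluster size as run length minus one, and the labels here are hypothetical:

```python
def clusters_and_switches(subcategories):
    """Score a supermarket-fluency response given a subcategory label per
    word (e.g. 'fruit', 'dairy'). A cluster is a run of consecutive words
    from the same subcategory; a switch is a transition between runs."""
    if not subcategories:
        return [], 0
    runs = [1]          # length of each cluster, in generation order
    switches = 0
    for prev, cur in zip(subcategories, subcategories[1:]):
        if cur == prev:
            runs[-1] += 1
        else:
            runs.append(1)
            switches += 1
    return runs, switches

runs, switches = clusters_and_switches(
    ["fruit", "fruit", "fruit", "dairy", "dairy", "meat"])
# runs == [3, 2, 1], switches == 2
```

In the study's terms, shrinking run lengths with preserved switching would match the pattern reported for the mildest (CDR 0.5) group.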

  10. Understanding native Russian listeners' errors on an English word recognition test: model-based analysis of phoneme confusion.

    Science.gov (United States)

    Shi, Lu-Feng; Morozova, Natalia

    2012-08-01

    Word recognition is a basic component in a comprehensive hearing evaluation, but data are lacking for listeners speaking two languages. This study obtained such data for Russian natives in the US and analysed the data using the perceptual assimilation model (PAM) and speech learning model (SLM). Listeners were randomly presented 200 NU-6 words in quiet. Listeners responded verbally and in writing. Performance was scored on words and phonemes (word-initial consonants, vowels, and word-final consonants). Seven normal-hearing, adult monolingual English natives (NM), 16 English-dominant (ED), and 15 Russian-dominant (RD) Russian natives participated. ED and RD listeners differed significantly in their language background. Consistent with the SLM, NM outperformed ED listeners and ED outperformed RD listeners, whether responses were scored on words or phonemes. NM and ED listeners shared similar phoneme error patterns, whereas RD listeners' errors had unique patterns that could be largely understood via the PAM. RD listeners had particular difficulty differentiating vowel contrasts /i-ɪ/, /æ-ɛ/, and /ɑ-ʌ/, word-initial consonant contrasts /p-h/ and /b-f/, and word-final contrasts /f-v/. Both first-language phonology and second-language learning history affect word and phoneme recognition. Current findings may help clinicians differentiate word recognition errors due to language background from hearing pathologies.

  11. Application of Barcoding to Reduce Error of Patient Identification and to Increase Patient's Information Confidentiality of Test Tube Labelling in a Psychiatric Teaching Hospital.

    Science.gov (United States)

    Liu, Hsiu-Chu; Li, Hsing; Chang, Hsin-Fei; Lu, Mei-Rou; Chen, Feng-Chuan

    2015-01-01

    Learning from the experience of another medical center in Taiwan, Kaohsiung Municipal Kai-Suan Psychiatric Hospital has changed its nursing informatics system step by step over the past year and a half. Ethical considerations shaped the original idea of implementing barcodes on the test tube labels to handle the identification of psychiatric patients. The main aims of this project are to maintain confidential information and to transport samples effectively. The primary nurses had been using different work sheets for this project to ensure the acceptance of the new barcode system. In the past two years, the errors in the blood testing process were as high as 11,000 in 14,000 events per year, resulting in wastage of resources. The actions taken by the nurses and the new barcode system implementation can improve clinical nursing care quality, patient safety, and efficiency, while decreasing the cost due to human error.

  12. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  13. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  14. Friction Reduction Tested for a Downsized Diesel Engine with Low-Viscosity Lubricants Including a Novel Polyalkylene Glycol

    Directory of Open Access Journals (Sweden)

    David E. Sander

    2017-04-01

    With the increasing pressure to reduce emissions, friction reduction is always an up-to-date topic in the automotive industry. Among the various possibilities to reduce mechanical friction, the usage of a low-viscosity lubricant in the engine is one of the most effective and most economic options. Therefore, lubricants of continuously lower viscosity are being developed and offered on the market that promise to reduce engine friction while avoiding deleterious mixed lubrication and wear. In this work, a 1.6 L downsized Diesel engine is used on a highly accurate engine friction test-rig to determine the potential for friction reduction using low-viscosity lubricants under realistic operating conditions including high engine loads. In particular, two hydrocarbon-based lubricants, 0W30 and 0W20, are investigated as well as a novel experimental lubricant, which is based on a polyalkylene glycol base stock. Total engine friction is measured for all three lubricants, which show a general 5% advantage for the 0W20 in comparison to the 0W30 lubricant. The polyalkylene glycol-based lubricant, however, shows strongly reduced friction losses, which are about 25% smaller than for the 0W20 lubricant. As the 0W20 and the polyalkylene glycol-based lubricant have the same HTHS-viscosity, the findings contradict the common understanding that the HTHS-viscosity is the dominant driver of the friction losses.

  15. Dependence of Error Level on the Number of Probes in Over-the-Air Multiprobe Test Systems

    Directory of Open Access Journals (Sweden)

    Afroza Khatun

    2012-01-01

    Development of MIMO over-the-air (OTA) test methodology is ongoing. Several test methods have been proposed. The anechoic chamber-based multiple-probe technique is one promising candidate for MIMO-OTA testing. The required number of probes for synthesizing the desired fields inside the multiprobe system is an important issue, as it has a large impact on the cost of the test system. In this paper, we review the existing investigations on this important topic and end up presenting rules for the required number of probes as a function of the test zone size in wavelengths for certain chosen uncertainty levels of the field synthesis.
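
A common rule of thumb from near-field antenna theory connects test zone size to probe count: a field in a zone of radius r needs roughly M = ceil(kr) azimuthal modes, and a 2D probe ring needs K >= 2M + 1 probes to synthesize them. This is a heuristic sketch under that assumption, not the paper's specific uncertainty-level rules:

```python
import math

def min_probes_2d(test_zone_diameter_wavelengths, margin=0):
    """Rule-of-thumb probe count for a 2D multiprobe ring.
    A test zone of radius r supports about M = ceil(k*r) azimuthal
    modes (k = 2*pi/lambda); synthesizing them needs K >= 2*M + 1
    probes. `margin` adds extra modes for a tighter field-synthesis
    error level. Heuristic only."""
    r_over_lambda = test_zone_diameter_wavelengths / 2.0
    modes = math.ceil(2 * math.pi * r_over_lambda) + margin
    return 2 * modes + 1

n = min_probes_2d(1.0)   # one-wavelength zone: ceil(pi) = 4 modes -> 9 probes
```

The linear growth of probe count with zone size in wavelengths is what makes the probe number the dominant cost driver the abstract points to.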

  16. A Proposal for a Methodology to Develop a Cyber-Attack Penetration Test Scenario Including NPPs Safety

    Energy Technology Data Exchange (ETDEWEB)

    Lee, In Hyo [KAIST, Daejeon (Korea, Republic of); Son, Han Seong [Joongbu Univ., Geumsan (Korea, Republic of); Kim, Si Won [Korea Institute of Nuclear Nonproliferation and Control, Daejeon (Korea, Republic of); Kang, Hyun Gook [Rensselaer Polytechnic Institute, Troy (United States)

    2016-10-15

    Penetration testing is a method to evaluate the cyber security of NPPs, and this approach was performed in some studies. Because those studies focused on vulnerability finding or test bed construction, a scenario-based approach was not taken. However, to test the cyber security of NPPs, a proper test scenario is needed. Ahn et al. developed cyber-attack scenarios, but those scenarios could not be applied in a penetration test because they were developed based on past incidents of NPPs induced by cyber-attack. That is, those scenarios only covered events that had happened before, so they could not cover other possible scenarios or reflect them in a penetration test. In this study, a method to develop a cyber-attack penetration test scenario for NPPs, focused especially on the safety point of view, is suggested. To evaluate the cyber security of NPPs, penetration testing can be a possible way. In this study, a method to develop a penetration test scenario was explained. In particular, the goal of the hacker was taken to be nuclear fuel integrity deterioration. Accordingly, in the methodology, Level 1 PSA results were utilized to reflect plant safety in the security assessment. From the PSA results, basic events were post-processed and possible cyber-attacks were reviewed against vulnerabilities of the digital control system.

  17. A Proposal for a Methodology to Develop a Cyber-Attack Penetration Test Scenario Including NPPs Safety

    International Nuclear Information System (INIS)

    Lee, In Hyo; Son, Han Seong; Kim, Si Won; Kang, Hyun Gook

    2016-01-01

    Penetration testing is a method to evaluate the cyber security of NPPs, and this approach was performed in some studies. Because those studies focused on vulnerability finding or test bed construction, a scenario-based approach was not taken. However, to test the cyber security of NPPs, a proper test scenario is needed. Ahn et al. developed cyber-attack scenarios, but those scenarios could not be applied in a penetration test because they were developed based on past incidents of NPPs induced by cyber-attack. That is, those scenarios only covered events that had happened before, so they could not cover other possible scenarios or reflect them in a penetration test. In this study, a method to develop a cyber-attack penetration test scenario for NPPs, focused especially on the safety point of view, is suggested. To evaluate the cyber security of NPPs, penetration testing can be a possible way. In this study, a method to develop a penetration test scenario was explained. In particular, the goal of the hacker was taken to be nuclear fuel integrity deterioration. Accordingly, in the methodology, Level 1 PSA results were utilized to reflect plant safety in the security assessment. From the PSA results, basic events were post-processed and possible cyber-attacks were reviewed against vulnerabilities of the digital control system

  18. [The effect of prison crowding on prisoners' violence in Japan: testing with cointegration regressions and error correction models].

    Science.gov (United States)

    Yuma, Yoshikazu

    2010-08-01

    This research examined the effect of prison population densities (PPD) on inmate-inmate prison violence rates (PVR) in Japan using one-year-interval time-series data (1972-2006). Cointegration regressions revealed a long-run equilibrium relationship between PPD and PVR. PPD had a significant and increasing effect on PVR in the long-term. Error correction models showed that in the short-term, the effect of PPD was significant and positive on PVR, even after controlling for the effects of the proportions of males, age younger than 30 years, less than one-year incarceration, and prisoner/staff ratio. The results were discussed in regard to (a) differences between Japanese prisons and prisons in the United States, and (b) methodological problems found in previous research.
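
The cointegration-plus-error-correction design can be sketched with the two-step Engle-Granger procedure: first estimate the long-run relationship in levels, then regress the differenced series on the lagged equilibrium error. A minimal sketch on synthetic data; the paper's full models also include the short-run difference terms and the demographic controls listed above:

```python
def ols(y, x):
    # Simple bivariate OLS: returns (intercept, slope).
    n = len(y)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
            / sum((xi - mx) ** 2 for xi in x))
    return my - beta * mx, beta

def engle_granger_ecm(y, x):
    """Two-step Engle-Granger: (1) long-run (cointegrating) regression of
    y on x; (2) regress the first difference of y on the lagged
    equilibrium error. Here y stands for the violence rate and x for the
    population density; this is a stripped-down illustration."""
    a, b = ols(y, x)
    resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    _, adjustment = ols(dy, resid[:-1])
    return b, adjustment  # long-run effect; speed of error correction

# Synthetic cointegrated series: y tracks 2*x with transient deviations.
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.0, 4.5, 5.5, 8.0, 10.5, 11.5, 14.0, 16.0]
long_run, adj = engle_granger_ecm(y, x)
# long_run near 2; adj < 0 (deviations are pulled back toward equilibrium)
```

A negative adjustment coefficient is the signature of a genuine equilibrium relationship: the larger last period's deviation, the stronger this period's correction.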

  19. Combining empirical approaches and error modelling to enhance predictive uncertainty estimation in extrapolation for operational flood forecasting. Tests on flood events on the Loire basin, France.

    Science.gov (United States)

    Berthet, Lionel; Marty, Renaud; Bourgin, François; Viatgé, Julie; Piotte, Olivier; Perrin, Charles

    2017-04-01

    An increasing number of operational flood forecasting centres assess the predictive uncertainty associated with their forecasts and communicate it to the end users. This information can match the end users' needs (i.e. prove to be useful for efficient crisis management) only if it is reliable: reliability is therefore a key quality for operational flood forecasts. In 2015, the French flood forecasting national and regional services (Vigicrues network; www.vigicrues.gouv.fr) implemented a framework to compute quantitative discharge and water level forecasts and to assess the predictive uncertainty. Among the possible technical options to achieve this goal, a statistical analysis of past forecasting errors of deterministic models has been selected (QUOIQUE method, Bourgin, 2014). It is a data-based and non-parametric approach resting on as few assumptions as possible about the mathematical structure of the forecasting error. In particular, a very simple assumption is made regarding the predictive uncertainty distributions for large events outside the range of the calibration data: the multiplicative error distribution is assumed to be constant, whatever the magnitude of the flood. Indeed, the predictive distributions may not be reliable in extrapolation. However, estimating the predictive uncertainty for these rare events is crucial when major floods are of concern. In order to improve the forecasts' reliability for major floods, an attempt at combining the operational strength of the empirical statistical analysis with a simple error model is made. Since the heteroscedasticity of forecast errors can considerably weaken the predictive reliability for large floods, this error modelling is based on the log-sinh transformation, which proved to reduce significantly the heteroscedasticity of the transformed error in a simulation context, even for flood peaks (Wang et al., 2012). Exploratory tests on some operational forecasts issued during the recent floods experienced in
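
The log-sinh transformation of Wang et al. (2012) cited above is z = (1/b) ln(sinh(a + b x)), with fitted parameters a and b; it behaves like a log transform for small flows and approaches linearity for large ones, which is what stabilizes the error variance. A minimal sketch with illustrative parameter values:

```python
import math

def log_sinh(x, a, b):
    """Log-sinh transform: z = (1/b) * ln(sinh(a + b*x)).
    Applied to flows (or errors) to reduce heteroscedasticity
    before fitting an error model; a and b are fitted parameters."""
    return math.log(math.sinh(a + b * x)) / b

def inverse_log_sinh(z, a, b):
    # Back-transform: x = (asinh(exp(b*z)) - a) / b
    return (math.asinh(math.exp(b * z)) - a) / b

a, b = 0.1, 0.01      # illustrative parameter values
x = 250.0             # e.g. a discharge value
z = log_sinh(x, a, b)
x_back = inverse_log_sinh(z, a, b)   # recovers 250.0
```

An error model fitted in z-space yields predictive quantiles that are back-transformed to discharge, so the uncertainty bounds automatically widen with flood magnitude.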

  20. The numeracy test workbook everything you need for a successful programme of self study including quick tests and full-length realistic mock-ups

    CERN Document Server

    Bryon, Mike

    2011-01-01

    One of the most common types of psychometric test used in assessment and selection procedures, The Numeracy Test Workbook provides practice questions and mock tests designed to build confidence and improve performance.

  1. A Monte Carlo Study of Levene's Test of Homogeneity of Variance: Empirical Frequencies of Type I Error in Normal Distributions.

    Science.gov (United States)

    Neel, John H.; Stallings, William M.

    An influential statistics text recommends Levene's test for homogeneity of variance. A recent note suggests that Levene's test is upwardly biased for small samples. Another report shows inflated alpha estimates and low power. Neither study utilized more than two sample sizes. This Monte Carlo study involved sampling from a normal population for…
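
The study design, drawing equal-variance normal samples repeatedly and counting how often Levene's test rejects at the nominal level, can be sketched with scipy's implementation standing in for the original procedure (sample sizes, simulation count, and seed are illustrative):

```python
import numpy as np
from scipy.stats import levene

def empirical_type1_error(n_per_group, n_groups=2, n_sims=2000,
                          alpha=0.05, seed=0):
    """Monte Carlo estimate of the type I error of Levene's test when
    sampling equal-variance normal groups: the fraction of simulations
    in which the null of equal variances is (wrongly) rejected."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        groups = [rng.standard_normal(n_per_group) for _ in range(n_groups)]
        if levene(*groups, center='mean')[1] < alpha:
            rejections += 1
    return rejections / n_sims

rate = empirical_type1_error(n_per_group=10)
# for normal data the empirical rate should sit near the nominal 0.05
```

An upward bias of the kind the note describes would show up as `rate` systematically exceeding 0.05 for small `n_per_group`.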

  2. Including Bioconcentration Kinetics for the Prioritization and Interpretation of Regulatory Aquatic Toxicity Tests of Highly Hydrophobic Chemicals

    DEFF Research Database (Denmark)

    Kwon, Jung-Hwan; Lee, So-Young; Kang, Hyun-Joong

    2016-01-01

    experiments. In this work, internal concentrations of highly hydrophobic chemicals were predicted for standard acute ecotoxicity tests at three trophic levels, algae, invertebrate, and fish. As demonstrated by comparison with maximum aqueous concentrations at water solubility, chemicals with an octanol...
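
Predicting internal concentrations over a test duration is commonly done with a one-compartment first-order model, dC/dt = k1*Cw - k2*C. For highly hydrophobic chemicals the elimination rate k2 is small, so the internal concentration at the end of a short acute test sits far below its steady state. A sketch of that model with illustrative rate constants (not values from the study):

```python
import math

def internal_concentration(c_water, k1, k2, t):
    """One-compartment bioconcentration kinetics:
    C(t) = (k1/k2) * Cw * (1 - exp(-k2*t)),
    the solution of dC/dt = k1*Cw - k2*C with C(0) = 0."""
    return (k1 / k2) * c_water * (1.0 - math.exp(-k2 * t))

k1, k2 = 500.0, 0.01   # uptake (L/kg/day) and elimination (1/day), illustrative
cw = 1e-6              # aqueous concentration, e.g. mg/L near solubility
c_48h = internal_concentration(cw, k1, k2, t=2.0)   # end of a 48-h acute test
c_steady = (k1 / k2) * cw
ratio = c_48h / c_steady   # only about 2% of steady state is reached
```

This kinetic shortfall is the reason short standard tests can understate the hazard of very hydrophobic chemicals, which motivates prioritizing or reinterpreting such tests.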

  3. Analysis of the methodical component of core power density field calculation error on the basis of Mochovce-1 commissioning tests

    International Nuclear Information System (INIS)

    Brik, A.

    2009-01-01

    In the first decade of June 2008, during the power commissioning of the reactor at the Mochovce NPP unit 1, the experiment with reducing the thermal power of core almost to the balance-of-plant (BOP) needs was performed. After the reactor had operated for seven hours at low power (about 200-220 MW (thermal)), its power was increased (at a rate of about 0.25% of N_nom/min) to the initial level, close to 107% (1471 MW). During the experiment, core parameters, which were subsequently used for comparing the measured data with the results of experiment simulation calculations, were recorded in the reactor in-core monitoring system database. Calculated and measured levels of critical concentrations of boric acid were compared, along with power density distributions by fuel elements and assemblies obtained both by the KRUIZ in-core monitoring system and on the basis of calculations simulating reactor operation in accordance with the given core power variation schedule. The final stage consisted of assessing the methodical component of power density micro- and macro-fields calculation error in the core of Mochovce-1 reactor operating with varying load. (author)

  5. Evaluation of three rapid oral fluid test devices on the screening of multiple drugs of abuse including ketamine.

    Science.gov (United States)

    Tang, Magdalene H Y; Ching, C K; Poon, Simon; Chan, Suzanne S S; Ng, W Y; Lam, M; Wong, C K; Pao, Ronnie; Lau, Angus; Mak, Tony W L

    2018-05-01

    Rapid oral fluid testing (ROFT) devices have been extensively evaluated for their ability to detect common drugs of abuse; however, the performance of such devices on simultaneous screening for ketamine has been scarcely investigated. The present study evaluated three ROFT devices (DrugWipe® 6S, Ora-Check® and SalivaScreen®) on the detection of ketamine, opiates, methamphetamine, cannabis, cocaine and MDMA. A liquid chromatography tandem mass spectrometry (LCMS) assay was first established and validated for confirmation analysis of the six types of drugs and/or their metabolites. In the field test, the three ROFT devices were tested on subjects recruited from substance abuse clinics/rehabilitation centres. Oral fluid was also collected using Quantisal® for confirmation analysis. A total of 549 samples were collected in the study. LCMS analysis on 491 samples revealed the following drugs: codeine (55%), morphine (49%), heroin (40%), methamphetamine (35%), THC (8%), ketamine (4%) and cocaine (2%). No MDMA-positive cases were observed. Results showed that the overall specificity and accuracy were satisfactory and met the DRUID standard of >80% for all 3 devices. Ora-Check® had poor sensitivities (ketamine 36%, methamphetamine 63%, opiates 53%, cocaine 60%, THC 0%). DrugWipe® 6S showed good sensitivities in the methamphetamine (83%) and opiates (93%) tests but performed relatively poorly for ketamine (41%), cocaine (43%) and THC (22%). SalivaScreen® also demonstrated good sensitivities in the methamphetamine (83%) and opiates (100%) tests, and had the highest sensitivity for ketamine (76%) and cocaine (71%); however, it failed to detect any of the 28 THC-positive cases. The test completion rate (proportion of tests completed with quality control passed) was: 52% (Ora-Check®), 78% (SalivaScreen®) and 99% (DrugWipe® 6S). Copyright © 2018 Elsevier B.V. All rights reserved.
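
    The sensitivity, specificity, and accuracy figures quoted above come from 2 × 2 tables of device result against LCMS confirmation. A minimal sketch of that computation follows; the counts are illustrative only (chosen so sensitivity comes out at 76%, the SalivaScreen ketamine figure), not the study's data:

```python
# Screening metrics from a 2x2 confusion table of rapid-device result
# vs. confirmatory LCMS analysis. Counts are illustrative, not real data.
def screening_metrics(tp, fp, fn, tn):
    """Return (sensitivity, specificity, accuracy) from confusion counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, accuracy

sens, spec, acc = screening_metrics(tp=38, fp=6, fn=12, tn=435)
print(f"sensitivity={sens:.0%} specificity={spec:.1%} accuracy={acc:.1%}")
```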

  6. Accelerated tests for the soft error rate determination of single radiation particles in components of terrestrial and avionic electronic systems

    International Nuclear Information System (INIS)

    Flament, O.; Baggio, J.

    2010-01-01

    This paper describes the main features of the accelerated test procedures used to determine reliability data of microelectronics devices used in terrestrial environment.This paper focuses on the high energy particle test that could be performed through spallation neutron source or quasi-mono-energetic neutron or proton. Improvements of standards are illustrated with respect to the state of the art of knowledge in radiation effects and scaling down of microelectronics technologies. (authors)

  8. Internal consistency, test-retest reliability and measurement error of the self-report version of the social skills rating system in a sample of Australian adolescents.

    Science.gov (United States)

    Vaz, Sharmila; Parsons, Richard; Passmore, Anne Elizabeth; Andreou, Pantelis; Falkmer, Torbjörn

    2013-01-01

    The social skills rating system (SSRS) is used to assess social skills and competence in children and adolescents. While its characteristics based on United States samples (US) are published, corresponding Australian figures are unavailable. Using a 4-week retest design, we examined the internal consistency, retest reliability and measurement error (ME) of the SSRS secondary student form (SSF) in a sample of Year 7 students (N = 187), from five randomly selected public schools in Perth, western Australia. Internal consistency (IC) of the total scale and most subscale scores (except empathy) on the frequency rating scale was adequate to permit independent use. On the importance rating scale, most IC estimates for girls fell below the benchmark. Test-retest estimates of the total scale and subscales were insufficient to permit reliable use. ME of the total scale score (frequency rating) for boys was equivalent to the US estimate, while that for girls was lower than the US error. ME of the total scale score (importance rating) was larger than the error using the frequency rating scale. The study finding supports the idea of using multiple informants (e.g. teacher and parent reports), not just student as recommended in the manual. Future research needs to substantiate the clinical meaningfulness of the MEs calculated in this study by corroborating them against the respective Minimum Clinically Important Difference (MCID).

  9. An alternative to the balance error scoring system: using a low-cost balance board to improve the validity/reliability of sports-related concussion balance testing.

    Science.gov (United States)

    Chang, Jasper O; Levy, Susan S; Seay, Seth W; Goble, Daniel J

    2014-05-01

    Recent guidelines advise that sports medicine professionals use balance tests to assess sensorimotor status in the management of concussions. The present study sought to determine whether a low-cost balance board could provide a valid, reliable, and objective means of performing this balance testing. Criterion validity testing relative to a gold standard and 7-day test-retest reliability. University biomechanics laboratory. Thirty healthy young adults. Balance ability was assessed on 2 days separated by 1 week using (1) a gold standard measure (i.e., a scientific-grade force plate), (2) a low-cost Nintendo Wii Balance Board (WBB), and (3) the Balance Error Scoring System (BESS). Validity of the WBB center of pressure path length and BESS scores was determined relative to the force plate data. Test-retest reliability was established based on intraclass correlation coefficients. Composite scores for the WBB had excellent validity (r = 0.99) and test-retest reliability (R = 0.88). Both the validity (r = 0.10-0.52) and test-retest reliability (r = 0.61-0.78) were lower for the BESS. These findings demonstrate that a low-cost balance board can provide improved balance testing accuracy/reliability compared with the BESS. This approach provides a potentially more valid/reliable, yet affordable, means of assessing sports-related concussion compared with current methods.
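
    The test-retest reliability figures in this record rest on intraclass correlation coefficients. A minimal sketch of one common choice for test-retest designs, the two-way random-effects, absolute-agreement, single-measure ICC(2,1), follows; the data are illustrative, and the record does not state which ICC form the authors computed:

```python
# Single-measure, absolute-agreement ICC(2,1) from a two-way ANOVA
# decomposition. Illustrative implementation; not the study's code.
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): scores is an (n_subjects, k_sessions) array."""
    x = np.asarray(scores, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()   # sessions
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)                  # between-subjects mean square
    ms_c = ss_cols / (k - 1)                  # between-sessions mean square
    ms_e = ss_err / ((n - 1) * (k - 1))       # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Perfectly consistent but systematically shifted retest scores are
# penalized by the absolute-agreement form:
print(round(icc_2_1([[1, 2], [2, 3], [3, 4]]), 3))
```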

  10. Recommendation to include fragrance mix 2 and hydroxyisohexyl 3-cyclohexene carboxaldehyde (Lyral) in the European baseline patch test series

    DEFF Research Database (Denmark)

    Bruze, Magnus; Andersen, Klaus Ejner; Goossens, An

    2008-01-01

    various European centres when tested in consecutive dermatitis patients. CONCLUSIONS: From 2008, pet. preparations of fragrance mix 2 at 14% w/w (5.6 mg/cm(2)) and hydroxyisohexyl 3-cyclohexene carboxaldehyde at 5% w/w (2.0 mg/cm(2)) are recommended for inclusion in the baseline series. With the Finn...

  11. The opportunistic screening of refractive errors in school-going children by pediatrician using enhanced Brückner test.

    Science.gov (United States)

    Jain, Piyush; Kothari, Mihir T; Gode, Vaibhav

    2016-10-01

    The aim of this study was to compare the results of the enhanced Brückner test (EBT) performed by a pediatrician and an experienced pediatric ophthalmologist. In this prospective double-masked cohort study, a pediatrician and a pediatric ophthalmologist performed the EBT in a classroom of a school in semi-dark lighting conditions using a direct ophthalmoscope. The results of the test were compared using a 2 × 2 Bayesian table and kappa statistics. The findings of the pediatric ophthalmologist were considered the gold standard. Two hundred and thirty-six eyes of 118 subjects, mean age 6.8 ± 0.5 years (range, 5.4-7.8 years), were examined. The time taken to complete this test was … ametropia in children.

  12. The opportunistic screening of refractive errors in school-going children by pediatrician using enhanced Brückner test

    OpenAIRE

    Jain, Piyush; Kothari, Mihir T; Gode, Vaibhav

    2016-01-01

    Aim: The aim of this study was to compare the results of enhanced Brückner test (EBT) performed by a pediatrician and an experienced pediatric ophthalmologist. Subjects and Methods: In this prospective double-masked cohort study, a pediatrician and a pediatric ophthalmologist performed the EBT in a classroom of a school in semi-dark lighting condition using a direct ophthalmoscope. The results of the test were compared using 2 × 2 Bayesian table and kappa statistics. The findings of the pedia...

  13. Cultural differences in categorical memory errors persist with age.

    Science.gov (United States)

    Gutchess, Angela; Boduroglu, Aysecan

    2018-01-02

    This cross-sectional experiment examined the influence of aging on cross-cultural differences in memory errors. Previous research revealed that Americans committed more categorical memory errors than Turks; we tested whether the cognitive constraints associated with aging impacted the pattern of memory errors across cultures. Furthermore, older adults are vulnerable to memory errors for semantically-related information, and we assessed whether this tendency occurs across cultures. Younger and older adults from the US and Turkey studied word pairs, with some pairs sharing a categorical relationship and some unrelated. Participants then completed a cued recall test, generating the second word of each pair when cued with the first. These responses were scored for correct responses or different types of errors, including categorical and semantic. The tendency for Americans to commit more categorical memory errors emerged for both younger and older adults. In addition, older adults across cultures committed more memory errors, and these were for semantically-related information (including both categorical and other types of semantic errors). Heightened vulnerability to memory errors with age extends across cultural groups, and Americans' proneness to commit categorical memory errors occurs across ages. The findings indicate some robustness in the ways that age and culture influence memory errors.

  14. An Asset Pricing Approach to Testing General Term Structure Models including Heath-Jarrow-Morton Specifications and Affine Subclasses

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; van der Wel, Michel

    of the risk premium is associated with the slope factor, and individual risk prices depend on own past values, factor realizations, and past values of other risk prices, and are significantly related to the output gap, consumption, and the equity risk price. The absence of arbitrage opportunities is strongly...... is tested, but in addition to the standard bilinear term in factor loadings and market prices of risk, the relevant mean restriction in the term structure case involves an additional nonlinear (quadratic) term in factor loadings. We estimate our general model using likelihood-based dynamic factor model...... techniques for a variety of volatility factors, and implement the relevant likelihood ratio tests. Our factor model estimates are similar across a general state space implementation and an alternative robust two-step principal components approach. The evidence favors time-varying market prices of risk. Most...

  15. Probabilistic seismic safety assessment of a CANDU 6 nuclear power plant including ambient vibration tests: Case study

    Energy Technology Data Exchange (ETDEWEB)

    Nour, Ali [Hydro Québec, Montréal, Québec H2L4P5 (Canada); École Polytechnique de Montréal, Montréal, Québec H3C3A7 (Canada); Cherfaoui, Abdelhalim; Gocevski, Vladimir [Hydro Québec, Montréal, Québec H2L4P5 (Canada); Léger, Pierre [École Polytechnique de Montréal, Montréal, Québec H3C3A7 (Canada)

    2016-08-01

    Highlights: • In this case study, the seismic PSA methodology adopted for a CANDU 6 is presented. • Ambient vibrations testing to calibrate a 3D FEM and to reduce uncertainties is performed. • Procedure for the development of FRS for the RB considering wave incoherency effect is proposed. • Seismic fragility analysis for the RB is presented. - Abstract: Following the 2011 Fukushima Daiichi nuclear accident in Japan there is a worldwide interest in reducing uncertainties in seismic safety assessment of existing nuclear power plant (NPP). Within the scope of a Canadian refurbishment project of a CANDU 6 (NPP) put in service in 1983, structures and equipment must sustain a new seismic demand characterised by the uniform hazard spectrum (UHS) obtained from a site specific study defined for a return period of 1/10,000 years. This UHS exhibits larger spectral ordinates in the high-frequency range than those used in design. To reduce modeling uncertainties as part of a seismic probabilistic safety assessment (PSA), Hydro-Québec developed a procedure using ambient vibrations testing to calibrate a detailed 3D finite element model (FEM) of the containment and reactor building (RB). This calibrated FE model is then used for generating floor response spectra (FRS) based on ground motion time histories compatible with the UHS. Seismic fragility analyses of the reactor building (RB) and structural components are also performed in the context of a case study. Because the RB is founded on a large circular raft, it is possible to consider the effect of the seismic wave incoherency to filter out the high-frequency content, mainly above 10 Hz, using the incoherency transfer function (ITF) method. This allows reducing significantly the non-necessary conservatism in resulting FRS, an important issue for an existing NPP. The proposed case study, and related methodology using ambient vibration testing, is particularly useful to engineers involved in seismic re-evaluation of

  16. The preliminary development and testing of a global trigger tool to detect error and patient harm in primary-care records.

    Science.gov (United States)

    de Wet, C; Bowie, P

    2009-04-01

    A multi-method strategy has been proposed to understand and improve the safety of primary care. The trigger tool is a relatively new method that has shown promise in American and secondary healthcare settings. It involves the focused review of a random sample of patient records using a series of "triggers" that alert reviewers to potential errors and previously undetected adverse events. To develop and test a global trigger tool to detect errors and adverse events in primary-care records. Trigger tool development was informed by previous research and content validated by expert opinion. The tool was applied by trained reviewers who worked in pairs to conduct focused audits of 100 randomly selected electronic patient records in each of five urban general practices in central Scotland. Review of 500 records revealed 2251 consultations and 730 triggers. An adverse event was found in 47 records (9.4%), indicating that harm occurred at a rate of one event per 48 consultations. Of these, 27 were judged to be preventable (42%). A further 17 records (3.4%) contained evidence of a potential adverse event. Harm severity was low to moderate for most patients (82.9%). Error and harm rates were higher in those aged ≥60 years, and most were medication-related (59%). The trigger tool was successful in identifying undetected patient harm in primary-care records and may be the most reliable method for achieving this. However, the feasibility of its routine application is open to question. The tool may have greater utility as a research rather than an audit technique. Further testing in larger, representative study samples is required.
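
    The headline rates in this abstract follow directly from the reported counts; a quick arithmetic check:

```python
# Checking the rates quoted in the abstract: 47 adverse events found in
# 500 records covering 2251 consultations.
records, consultations, events = 500, 2251, 47

harm_rate = 100 * events / records        # percent of records with an event
per_consultation = consultations / events # consultations per adverse event

print(round(harm_rate, 1))       # 9.4, as reported
print(round(per_consultation))   # 48, i.e. about one event per 48 consultations
```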

  17. Leggett-Garg tests of macrorealism for bosonic systems including double-well Bose-Einstein condensates and atom interferometers

    Science.gov (United States)

    Rosales-Zárate, L.; Opanchuk, B.; He, Q. Y.; Reid, M. D.

    2018-04-01

    We construct quantifiable generalizations of Leggett-Garg tests for macro- and mesoscopic realism and noninvasive measurability that apply when not all outcomes of measurement can be identified as arising from one of two macroscopically distinguishable states. We show how quantum mechanics predicts a negation of the Leggett-Garg premises for strategies involving ideal negative-result, weak, and minimally invasive ("nonclumsy") projective measurements on dynamical entangled systems, as might be realized with Bose-Einstein condensates in a double-well potential, path-entangled NOON states, and atom interferometers. Potential loopholes associated with each strategy are discussed.
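
    For intuition, the simplest Leggett-Garg setting (a textbook example, not the paper's bosonic construction) is a two-level system precessing at angular frequency omega and measured dichotomically at three equally spaced times. Quantum mechanics gives two-time correlators C_ij = cos(omega * (t_j - t_i)), so the three-time parameter K3 = C21 + C32 - C31 can exceed the macrorealist bound K3 <= 1:

```python
# Three-time Leggett-Garg parameter for an idealized precessing
# two-level system with equal measurement spacing tau.
import math

def k3(omega_tau):
    """K3 = C21 + C32 - C31; macrorealism requires K3 <= 1."""
    return 2 * math.cos(omega_tau) - math.cos(2 * omega_tau)

print(k3(math.pi / 3))  # ~1.5, the maximal quantum violation of K3 <= 1
```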

  18. Simultaneous estimation of cross-validation errors in least squares collocation applied for statistical testing and evaluation of the noise variance components

    Science.gov (United States)

    Behnabian, Behzad; Mashhadi Hossainali, Masoud; Malekzadeh, Ahad

    2018-02-01

    The cross-validation technique is a popular method to assess and improve the quality of prediction by least squares collocation (LSC). We present a formula for direct estimation of the vector of cross-validation errors (CVEs) in LSC which is much faster than element-wise CVE computation. We show that a quadratic form of CVEs follows Chi-squared distribution. Furthermore, a posteriori noise variance factor is derived by the quadratic form of CVEs. In order to detect blunders in the observations, estimated standardized CVE is proposed as the test statistic which can be applied when noise variances are known or unknown. We use LSC together with the methods proposed in this research for interpolation of crustal subsidence in the northern coast of the Gulf of Mexico. The results show that after detection and removing outliers, the root mean square (RMS) of CVEs and estimated noise standard deviation are reduced about 51 and 59%, respectively. In addition, RMS of LSC prediction error at data points and RMS of estimated noise of observations are decreased by 39 and 67%, respectively. However, RMS of LSC prediction error on a regular grid of interpolation points covering the area is only reduced about 4% which is a consequence of sparse distribution of data points for this case study. The influence of gross errors on LSC prediction results is also investigated by lower cutoff CVEs. It is indicated that after elimination of outliers, RMS of this type of errors is also reduced by 19.5% for a 5 km radius of vicinity. We propose a method using standardized CVEs for classification of dataset into three groups with presumed different noise variances. The noise variance components for each of the groups are estimated using restricted maximum-likelihood method via Fisher scoring technique. Finally, LSC assessment measures were computed for the estimated heterogeneous noise variance model and compared with those of the homogeneous model. 
The advantage of the proposed method is the
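
    The direct-CVE idea in this record has a well-known analogue in ordinary least squares: every leave-one-out residual follows from a single fit via the hat matrix (the PRESS identity), with no element-wise refitting. A minimal sketch on synthetic data; this is the classical least-squares identity, not the authors' LSC-specific formula:

```python
# All leave-one-out (cross-validation) errors from one least-squares fit:
# e_cv[i] = r[i] / (1 - H[i, i]), the PRESS identity. Synthetic data.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 3))                          # design matrix
y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=30)

H = A @ np.linalg.solve(A.T @ A, A.T)                 # hat matrix
r = y - H @ y                                         # ordinary residuals
e_cv = r / (1.0 - np.diag(H))                         # all LOO errors at once

# Element-wise check: refit with observation i actually held out.
i = 7
mask = np.arange(len(y)) != i
coef, *_ = np.linalg.lstsq(A[mask], y[mask], rcond=None)
print(np.isclose(e_cv[i], y[i] - A[i] @ coef))        # True
```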

  19. Experimental validation of control strategies for a microgrid test facility including a storage system and renewable generation sets

    DEFF Research Database (Denmark)

    Baccino, Francesco; Marinelli, Mattia; Silvestro, Federico

    2012-01-01

    The paper is aimed at describing and validating some control strategies in the SYSLAB experimental test facility characterized by the presence of a low voltage network with a 15 kW-190 kWh Vanadium Redox Flow battery system and a 11 kW wind turbine. The generation set is connected to the local...... network and is fully controllable by the SCADA system. The control strategies, implemented on a local pc interfaced to the SCADA, are realized in Matlab-Simulink. The main purpose is to control the charge/discharge action of the storage system in order to present at the point of common coupling...... the desired power or energy profiles....

  20. Design of a Channel Error Simulator using Virtual Instrument Techniques for the Initial Testing of TCP/IP and SCPS Protocols

    Science.gov (United States)

    Horan, Stephen; Wang, Ru-Hai

    1999-01-01

    There exists a need for designers and developers to have a method to conveniently test a variety of communications parameters for an overall system design. This is no different when testing network protocols as when testing modulation formats. In this report, we discuss a means of providing a networking test device specifically designed to be used for space communications. This test device is a PC-based Virtual Instrument (VI) programmed using the LabVIEW(TM) version 5 software suite developed by National Instruments(TM). This instrument was designed to be portable and usable by others without special, additional equipment. The programming was designed to replicate a VME-based hardware module developed earlier at New Mexico State University (NMSU) and to provide expanded capabilities exceeding the baseline configuration existing in that module. This report describes the design goals for the VI module in the next section and follows that with a description of the design of the VI instrument. This is followed with a description of the validation tests run on the VI. An application of the error-generating VI to networking protocols is then given.
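
    The core of any channel error simulator of this kind is an error-injection model. A minimal sketch of a binary symmetric channel, offered as a toy software stand-in for the VI's error generator (parameters are illustrative, not taken from the report):

```python
# Binary symmetric channel: each bit is flipped independently with
# probability error_rate. A toy model of channel error injection.
import random

def bsc(bits, error_rate, seed=42):
    """Pass a bit sequence through a memoryless bit-flip channel."""
    rng = random.Random(seed)
    return [b ^ (rng.random() < error_rate) for b in bits]

frame = [1, 0, 1, 1, 0, 0, 1, 0] * 1000               # 8000-bit test frame
received = bsc(frame, error_rate=0.01)
observed = sum(a != b for a, b in zip(frame, received)) / len(frame)
print(f"observed bit error rate: {observed:.4f}")     # close to 0.0100
```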

  1. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  2. Corrective Action Decision Document for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada: Revision 0, Including Errata Sheet

    Energy Technology Data Exchange (ETDEWEB)

    U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office

    2004-04-01

    This Corrective Action Decision Document identifies the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's corrective action alternative recommendation for each of the corrective action sites (CASs) within Corrective Action Unit (CAU) 204: Storage Bunkers, Nevada Test Site (NTS), Nevada, under the Federal Facility Agreement and Consent Order. An evaluation of analytical data from the corrective action investigation, review of current and future operations at each CAS, and a detailed comparative analysis of potential corrective action alternatives were used to determine the appropriate corrective action for each CAS. There are six CASs in CAU 204, which are all located between Areas 1, 2, 3, and 5 on the NTS. The No Further Action alternative was recommended for CASs 01-34-01, 02-34-01, 03-34-01, and 05-99-02; and a Closure in Place with Administrative Controls recommendation was the preferred corrective action for CASs 05-18-02 and 05-33-01. These alternatives were judged to meet all requirements for the technical components evaluated as well as applicable state and federal regulations for closure of the sites and will eliminate potential future exposure pathways to the contaminated media at CAU 204.

  3. Inventory of forest and rangeland resources, including forest stress. [Atlanta, Georgia, Black Hills, and Manitou, Colorado test sites

    Science.gov (United States)

    Heller, R. C.; Aldrich, R. C.; Weber, F. P.; Driscoll, R. S. (Principal Investigator)

    1974-01-01

    The author has identified the following significant results. Some current beetle-killed ponderosa pine can be detected on S190-B photography imaged over the Bear Lodge mountains in the Black Hills National Forest. Detections were made on SL-3 imagery (September 13, 1973) using a zoom lens microscope to view the photography. At this time correlations have not been made to all of the known infestation spots in the Bear Lodge mountains; rather, known infestations have been located on the SL-3 imagery. It was determined that the beetle-killed trees were current kills by stereo viewing of SL-3 imagery on one side and SL-2 on the other. A successful technique was developed for mapping current beetle-killed pine using MSS imagery from mission 247 flown by the C-130 over the Black Hills test site in September 1973. Color enhancement processing on the NASA/JSC, DAS system using three MSS channels produced an excellent quality detection map for current kill pine. More importantly it provides a way to inventory the dead trees by relating PCM counts to actual numbers of dead trees.

  4. Effect of yoga practices on pulmonary function tests including transfer factor of lung for carbon monoxide (TLCO) in asthma patients.

    Science.gov (United States)

    Singh, Savita; Soni, Ritu; Singh, K P; Tandon, O P

    2012-01-01

    Prana is the energy; when the self-energizing force embraces the body with extension, expansion, and control, it is pranayama. It may affect the milieu at the bronchioles and the alveoli, particularly at the alveolo-capillary membrane, to facilitate diffusion and transport of gases. It may also increase oxygenation at the tissue level. The aim of our study was to compare pulmonary functions and diffusion capacity in patients with bronchial asthma before and after a yogic intervention of 2 months. Sixty stable asthmatic patients were randomized into two groups, i.e., group 1 (yoga training group) and group 2 (control group). Each group included thirty patients. Lung functions were recorded on all patients at baseline, and then after two months. Group 1 subjects showed a statistically significant improvement (P < 0.05) in lung functions, and TLCO also increased significantly. It was concluded that pranayama & yoga breathing and stretching postures are used to increase respiratory stamina, relax the chest muscles, expand the lungs, raise energy levels, and calm the body.

  5. Reliability, standard error, and minimum detectable change of clinical pressure pain threshold testing in people with and without acute neck pain.

    Science.gov (United States)

    Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren

    2011-09-01

    Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.
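
    The standard error of measurement and minimum detectable change reported in this record follow from the test-retest ICC and the sample standard deviation via the standard formulas SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. A minimal sketch; the input values below are illustrative, not the study's:

```python
# SEM and MDC95 from a test-retest ICC and sample standard deviation.
# Input values are illustrative placeholders, not this study's data.
import math

def sem_and_mdc95(sd, icc):
    """SEM = SD * sqrt(1 - ICC); MDC95 = 1.96 * sqrt(2) * SEM."""
    sem = sd * math.sqrt(1.0 - icc)
    mdc95 = 1.96 * math.sqrt(2.0) * sem
    return sem, mdc95

sem, mdc = sem_and_mdc95(sd=150.0, icc=0.78)   # e.g. kPa-scale PPT scores
print(f"SEM = {sem:.1f}, MDC95 = {mdc:.1f}")
```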

  6. Evaluation of the effect of noise on the rate of errors and speed of work by the ergonomic test of two-hand co-ordination

    Directory of Open Access Journals (Sweden)

    Ehsanollah Habibi

    2013-01-01

    Background: Among the most important and effective factors affecting the efficiency of the human workforce are accuracy, promptness, and ability. In the context of promoting levels and quality of productivity, the aim of this study was to investigate the effects of exposure to noise on the rate of errors, speed of work, and capability in performing manual activities. Methods: This experimental study was conducted on 96 students (52 female and 44 male) of the Isfahan Medical Science University, with means (standard deviations) of age, height, and weight of 22.81 (3.04) years, 171.67 (8.51) cm, and 65.05 (13.13) kg, respectively. Sampling was conducted with a randomized block design. Along with controlling for intervening factors, a combination of sound pressure levels [65 dB(A), 85 dB(A), and 95 dB(A)] and exposure times (0, 20, and 40) was used for evaluation of the precision and speed of action of the participants in the ergonomic test of two-hand coordination. Data were analyzed in SPSS 18 using descriptive and analytical statistical methods, with repeated-measures analysis of covariance (ANCOVA). Results: The results of this study showed that increasing the sound pressure level from 65 to 95 dB(A) increased the speed of work (P < 0.05) … (P > 0.05). Male participants were annoyed by the noise more than females. Also, an increase in sound pressure level increased the rate of error (P < 0.05). Conclusions: According to the results of this research, increasing the sound pressure level decreased efficiency and increased errors; with exposure to sounds of less than 85 dB, efficiency decreased initially and then increased with a mild slope.

  7. Investigating Medication Errors in Educational Health Centers of Kermanshah

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-08-01

Background and objectives: Medication errors can threaten the safety of patients. Preventing medication errors requires reporting and investigating such errors. The present study was conducted to investigate medication errors in the educational health centers of Kermanshah. Material and Methods: The present research is an applied, descriptive-analytical study conducted as a survey. The error report form of the Ministry of Health and Medical Education was used for data collection. The population of the study included all personnel (nurses, doctors, paramedics) of the educational health centers of Kermanshah; among them, those who reported the errors they had committed were selected as the sample. Data were analyzed with descriptive statistics and the chi-squared test in SPSS version 18. Results: The findings showed that most errors were related to not administering medication properly, the fewest errors were related to improper dose, and the majority of errors occurred in the morning. The most frequent cause of errors was staff negligence and the least frequent was lack of knowledge. Conclusion: The health care system should create an environment in which personnel can detect and report errors, identify the factors that cause errors, train personnel, and provide a good working environment with a standard workload.

  8. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. I. Formulation and testing

    Science.gov (United States)

    Fehl, D. L.; Chandler, G. A.; Stygar, W. A.; Olson, R. E.; Ruiz, C. L.; Hohlfelder, J. J.; Mix, L. P.; Biggs, F.; Berninger, M.; Frederickson, P. O.; Frederickson, R.

    2010-12-01

test and unfolded spectra increasingly diverged as larger fractions of Sbb(E,T) fell below the detection threshold (~137 eV) of the diagnostic. (c) Comparison with other analyses and diagnostics.—The results of the histogram algorithm are compared with other analyses, including a test with data acquired by the DANTE filtered-XRD array at the NOVA laser facility. Overall, the histogram algorithm is found to be most useful for x-ray flux estimates, as opposed to spectral details. The following companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120403 (2010)] considers (a) uncertainties in Sunfold and Funfold induced by both data noise and calibrational errors in the response functions; and (b) generalization of the algorithm to arbitrary spectra. These techniques apply to other diagnostics with analogous channel responses and supported by unfold algorithms of invertible matrix form.

  9. Characterization and error analysis of an N×N unfolding procedure applied to filtered, photoelectric x-ray detector arrays. I. Formulation and testing

    Directory of Open Access Journals (Sweden)

    D. L. Fehl

    2010-12-01

x-ray flux over the wider range, 75 ≤ T ≤ 250 eV. For lower T, the test and unfolded spectra increasingly diverged as larger fractions of S_{bb}(E,T) fell below the detection threshold (∼137 eV) of the diagnostic. (c) Comparison with other analyses and diagnostics.—The results of the histogram algorithm are compared with other analyses, including a test with data acquired by the DANTE filtered-XRD array at the NOVA laser facility. Overall, the histogram algorithm is found to be most useful for x-ray flux estimates, as opposed to spectral details. The following companion paper [D. L. Fehl et al., Phys. Rev. ST Accel. Beams 13, 120403 (2010)] considers (a) uncertainties in S_{unfold} and F_{unfold} induced by both data noise and calibrational errors in the response functions; and (b) generalization of the algorithm to arbitrary spectra. These techniques apply to other diagnostics with analogous channel responses and supported by unfold algorithms of invertible matrix form.

  10. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

This review article explains the definition of medication errors, the scope of the medication error problem, the types of medication errors, their common causes, the monitoring and consequences of medication errors, and the prevention and management of medication errors, with clear tables that are easy to understand.

  11. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. 
In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  12. Use of coal fly ash and other waste products in soil stabilization and road construction-including non-destructive testing of roadways.

    Science.gov (United States)

    2012-02-01

    An extensive laboratory testing program was performed on subgrade soils stabilized using fly ash and lime kiln dust. The laboratory : program included measurements of: compaction curves, small strain elastic moduli, resilient modulus (Mr), Briaud Com...

  13. Use of coal fly ash and other waste products in soil stabilization and road construction including non-destructive testing of roadways.

    Science.gov (United States)

    2012-06-01

    An extensive laboratory testing program was performed on subgrade soils stabilized using fly ash and : lime kiln dust. The laboratory program included measurements of: compaction curves, small strain elastic moduli, : resilient modulus (Mr), Briaud C...

  14. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  15. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phase and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed a total of 14 errors (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent of total errors, respectively), while errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patient health, being detected before test reports were submitted to the patients. On the other hand, test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have had an impact on patient diagnosis. The findings of this study were concordant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  16. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

We calculate opacity from k(hν) = −ln[T(hν)]/(pL), where T(hν) is the transmission for photon energy hν, p is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. Δk = (∂k/∂T)ΔT + (∂k/∂(pL))Δ(pL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/ln(T) + Δ(pL)/(pL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/ln T)(ΔB/B + ΔB0/B0) + Δ(pL)/(pL). Transmission is measured in the range of 0.2
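The fractional-error budget above can be checked numerically. A minimal sketch using the same symbols (T, B, B0, pL); the input values are illustrative, not figures from the report:

```python
import math

def opacity(T: float, pL: float) -> float:
    """Opacity k = -ln(T) / (pL) from transmission T and areal density pL."""
    return -math.log(T) / pL

def fractional_error(T: float, dB_over_B: float, dB0_over_B0: float,
                     d_pL_over_pL: float) -> float:
    """Fractional opacity error: dk/k = (dB/B + dB0/B0)/|ln T| + d(pL)/(pL)."""
    return (dB_over_B + dB0_over_B0) / abs(math.log(T)) + d_pL_over_pL

# Illustrative budget: T = 0.5, 2% error on each backlighter signal, 3% on pL
print(opacity(0.5, 0.01))                          # opacity for pL = 0.01 g/cm^2
print(fractional_error(0.5, 0.02, 0.02, 0.03))     # total fractional error in k
```

Note that the signal terms are amplified by 1/|ln T|, so the error budget degrades rapidly as T approaches 1 (weakly absorbing samples).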

  17. A comparison between the original and Tablet-based Symbol Digit Modalities Test in patients with schizophrenia: Test-retest agreement, random measurement error, practice effect, and ecological validity.

    Science.gov (United States)

    Tang, Shih-Fen; Chen, I-Hui; Chiang, Hsin-Yu; Wu, Chien-Te; Hsueh, I-Ping; Yu, Wan-Hui; Hsieh, Ching-Lin

    2017-11-27

We aimed to compare the test-retest agreement, random measurement error, practice effect, and ecological validity of the original and Tablet-based Symbol Digit Modalities Test (T-SDMT) over five serial assessments, and to examine the concurrent validity of the T-SDMT in patients with schizophrenia. Sixty patients with chronic schizophrenia completed five serial assessments (one week apart) of the SDMT and T-SDMT and one assessment of the Activities of Daily Living Rating Scale III at the first time point. Both measures showed high test-retest agreement and similar levels of random measurement error over the five serial assessments. Moreover, the practice effects of the two measures did not reach a plateau after five serial assessments in young and middle-aged participants. Nevertheless, only the practice effect of the T-SDMT became trivial after the first assessment. Like the SDMT, the T-SDMT had good ecological validity. The T-SDMT also had good concurrent validity with the SDMT. In addition, only the T-SDMT had discriminative validity for processing speed between young and middle-aged participants. Compared to the SDMT, the T-SDMT had overall slightly better psychometric properties, so it can be an alternative measure to the SDMT for assessing processing speed in patients with schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. Error Control for Network-on-Chip Links

    CERN Document Server

    Fu, Bo

    2012-01-01

    As technology scales into nanoscale regime, it is impossible to guarantee the perfect hardware design. Moreover, if the requirement of 100% correctness in hardware can be relaxed, the cost of manufacturing, verification, and testing will be significantly reduced. Many approaches have been proposed to address the reliability problem of on-chip communications. This book focuses on the use of error control codes (ECCs) to improve on-chip interconnect reliability. Coverage includes detailed description of key issues in NOC error control faced by circuit and system designers, as well as practical error control techniques to minimize the impact of these errors on system performance. Provides a detailed background on the state of error control methods for on-chip interconnects; Describes the use of more complex concatenated codes such as Hamming Product Codes with Type-II HARQ, while emphasizing integration techniques for on-chip interconnect links; Examines energy-efficient techniques for integrating multiple error...

  19. Simulation testing the robustness of stock assessment models to error: some results from the ICES strategic initiative on stock assessment methods

    DEFF Research Database (Denmark)

    Deroba, J. J.; Butterworth, D. S.; Methot, R. D.

    2015-01-01

    The World Conference on Stock Assessment Methods (July 2013) included a workshop on testing assessment methods through simulations. The exercise was made up of two steps applied to datasets from 14 representative fish stocks from around the world. Step 1 involved applying stock assessments to dat...

  20. Evaluation of the Repeatability of the Delta Q Duct Leakage Testing Technique, Including Investigation of Robust Analysis Techniques and Estimates of Weather-Induced Uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Dickerhoff, Darryl; Walker, Iain

    2008-08-01

found in for the pressure station approach. Walker and Dickerhoff also included estimates of DeltaQ test repeatability based on the results of field tests where two houses were tested multiple times. The two houses were quite leaky (20-25 Air Changes per Hour at 50 Pa (0.2 in. water) (ACH50)) and were located in the San Francisco Bay area. One house was tested on a calm day and the other on a very windy day. Results were also presented for two additional houses that were tested by other researchers in Minneapolis, MN and Madison, WI, that had very tight envelopes (1.8 and 2.5 ACH50). These tight houses had internal duct systems and were tested without operating the central blower--sometimes referred to as control tests. The standard deviations between the multiple tests for all four houses were found to be about 1% of the envelope air flow at 50 Pa (0.2 in. water) (Q50), which led to the suggestion of this as a rule of thumb for estimating DeltaQ uncertainty. Because DeltaQ is based on measuring envelope air flows, it makes sense for uncertainty to scale with envelope leakage. However, these tests were on a limited data set, and one of the objectives of the current study is to increase the number of tested houses. This study focuses on answering two questions: (1) What is the uncertainty associated with changes in weather (primarily wind) conditions during DeltaQ testing? (2) How can these uncertainties be reduced? The first question addresses issues of repeatability. To study this, five houses were tested as many times as possible over a day. Weather data was recorded on-site--including the local windspeed. The results from these five houses were combined with the two Bay Area homes from the previous studies. The variability of the tests (represented by the standard deviation) is the repeatability of the test method for that house under the prevailing weather conditions.
Because the testing was performed over a day, a wide range of wind speeds was achieved following

  1. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

Introduction: Danger and errors are inherent in human activities. In medical practice, errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients' size, prematurity, vulnerability, and underlying disease conditions, but also multidisciplinary teams, working conditions that produce fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion, national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy) · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  2. Idiographic duo-trio tests using a constant-reference based on preference of each consumer: Sample presentation sequence in difference test can be customized for individual consumers to reduce error.

    Science.gov (United States)

    Kim, Min-A; Sim, Hye-Min; Lee, Hye-Seong

    2016-11-01

As reformulations and processing changes are increasingly needed in the food industry to produce healthier, more sustainable, and cost effective products while maintaining superior quality, reliable measurements of consumers' sensory perception and discrimination are becoming more critical. Consumer discrimination methods using a preferred-reference duo-trio test design have been shown to be effective in improving the discrimination performance by customizing sample presentation sequences. However, this design can add complexity to the discrimination task for some consumers, resulting in more errors in sensory discrimination. The objective of the present study was to investigate the effects of different types of test instructions using the preferred-reference duo-trio test design, where a paired-preference test is followed by 6 repeated preferred-reference duo-trio tests, in comparison to the analytical method using the balanced-reference duo-trio. Analyses of d' estimates (product-related measure) and probabilistic sensory discriminators in momentary numbers of subjects showing statistical significance (subject-related measure) revealed that only the preferred-reference duo-trio test using affective reference-framing, either by providing no information about the reference or information on a previously preferred sample, improved the sensory discrimination more than the analytical method. No decrease in discrimination performance was observed with any type of instruction, confirming that consumers could handle the test methods. These results suggest that when repeated tests are feasible, using the affective discrimination method would be operationally more efficient as well as ecologically more reliable for measuring consumers' sensory discrimination ability. Copyright © 2016 Elsevier Ltd. All rights reserved.
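For context, d′ in a duo-trio test is usually obtained by inverting a Thurstonian psychometric function; for the standard duo-trio model, the proportion correct is Pc = 1 − Φ(d′/√2) − Φ(d′/√6) + 2Φ(d′/√2)Φ(d′/√6), where Φ is the standard normal CDF. A hedged sketch of that numerical inversion (the model form is the textbook one for the balanced duo-trio, not necessarily the exact analysis used in this study; the target proportion is illustrative):

```python
import math

def phi(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def duo_trio_pc(d_prime: float) -> float:
    """Thurstonian proportion correct for the duo-trio method."""
    a = phi(d_prime / math.sqrt(2.0))
    b = phi(d_prime / math.sqrt(6.0))
    return 1.0 - a - b + 2.0 * a * b

def d_prime_from_pc(pc: float, tol: float = 1e-9) -> float:
    """Invert the psychometric function by bisection (valid for pc in (0.5, 1))."""
    lo, hi = 0.0, 20.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if duo_trio_pc(mid) < pc:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(duo_trio_pc(0.0))             # chance performance: 0.5
print(d_prime_from_pc(0.75))        # illustrative: d' for 75% correct
```

At d′ = 0 the expression reduces to exactly the 1/2 guessing rate, which is a quick sanity check on the model.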

  3. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

Sixty-nine patients are presented in whom a substantial error was made in the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex: in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients) and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  4. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^{-(d^n - 1)} error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
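The qualitative point here, that coherent errors accumulate in amplitude while stochastic Pauli errors accumulate in probability, can be illustrated with a single-qubit toy calculation. This is only an illustration of the general idea, not the repetition-code derivation of the paper:

```python
import math

def coherent_failure(n: int, eps: float) -> float:
    """Bit-flip probability after n identical coherent X-rotations by angle eps:
    the amplitudes add coherently, so the net rotation angle is n*eps."""
    return math.sin(n * eps / 2.0) ** 2

def pauli_failure(n: int, eps: float) -> float:
    """Same per-step error treated as a stochastic Pauli (bit-flip) channel:
    each step flips with p = sin^2(eps/2); an odd number of flips is an error."""
    p = math.sin(eps / 2.0) ** 2
    return 0.5 * (1.0 - (1.0 - 2.0 * p) ** n)

eps, n = 0.01, 50
print(coherent_failure(n, eps))  # ~ (n*eps/2)^2: quadratic growth in n
print(pauli_failure(n, eps))     # ~ n*(eps/2)^2: linear growth in n
```

For small ε the coherent channel fails roughly n times more often than the Pauli model predicts, which is the intuition behind the Pauli approximation breaking down after enough cycles.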

  5. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  6. Cleanup procedures at the Nevada Test Site and at other radioactively contaminated sites including representative costs of cleanup and treatment of contaminated areas

    International Nuclear Information System (INIS)

    Talmage, S.S.; Chilton, B.D.

    1987-09-01

    This review summarizes available information on cleanup procedures at the Nevada Test Site and at other radioactively contaminated sites. Radionuclide distribution and inventory, size of the contaminated areas, equipment, and cleanup procedures and results are included. Information about the cost of cleanup and treatment for contaminated land is presented. Selected measures that could be useful in estimating the costs of cleaning up radioactively contaminated areas are described. 76 refs., 16 tabs

  7. Error calculations statistics in radioactive measurements

    International Nuclear Information System (INIS)

    Verdera, Silvia

    1994-01-01

Basic approaches and procedures frequently used in the practice of radioactive measurements. The statistical principles applied are part of Good Radiopharmaceutical Practices and quality assurance. Concept of error and its classification into systematic and random errors. Statistical fundamentals: probability theory, population distributions, the Bernoulli, Poisson, and Gauss distributions, the Student's t distribution, the χ² test, and error propagation based on analysis of variance. Bibliography. z table, t-test table, Poisson index, χ² test.
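The Poisson statistics mentioned above lead to the textbook counting-error rule σ_N = √N for N observed counts, with independent contributions added in quadrature. A minimal sketch for a background-corrected count rate (the counts and times are illustrative):

```python
import math

def net_rate_and_error(gross: int, t_g: float, bkg: int, t_b: float):
    """Net count rate and its 1-sigma error, assuming Poisson statistics:
    sigma_N = sqrt(N) for each raw count, propagated in quadrature."""
    rate = gross / t_g - bkg / t_b
    sigma = math.sqrt(gross / t_g**2 + bkg / t_b**2)
    return rate, sigma

# Illustrative: 10,000 gross counts in 100 s, 400 background counts in 100 s
rate, sigma = net_rate_and_error(10_000, 100.0, 400, 100.0)
print(f"net rate = {rate:.1f} +/- {sigma:.2f} counts/s")
```

Because the relative error of a single count is 1/√N, quadrupling the counting time halves the relative uncertainty, which is the usual rationale for longer acquisitions of weak sources.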

  8. Errors in laboratory medicine: practical lessons to improve patient safety.

    Science.gov (United States)

    Howanitz, Peter J

    2005-10-01

    Patient safety is influenced by the frequency and seriousness of errors that occur in the health care system. Error rates in laboratory practices are collected routinely for a variety of performance measures in all clinical pathology laboratories in the United States, but a list of critical performance measures has not yet been recommended. The most extensive databases describing error rates in pathology were developed and are maintained by the College of American Pathologists (CAP). These databases include the CAP's Q-Probes and Q-Tracks programs, which provide information on error rates from more than 130 interlaboratory studies. To define critical performance measures in laboratory medicine, describe error rates of these measures, and provide suggestions to decrease these errors, thereby ultimately improving patient safety. A review of experiences from Q-Probes and Q-Tracks studies supplemented with other studies cited in the literature. Q-Probes studies are carried out as time-limited studies lasting 1 to 4 months and have been conducted since 1989. In contrast, Q-Tracks investigations are ongoing studies performed on a yearly basis and have been conducted only since 1998. Participants from institutions throughout the world simultaneously conducted these studies according to specified scientific designs. The CAP has collected and summarized data for participants about these performance measures, including the significance of errors, the magnitude of error rates, tactics for error reduction, and willingness to implement each of these performance measures. A list of recommended performance measures, the frequency of errors when these performance measures were studied, and suggestions to improve patient safety by reducing these errors. Error rates for preanalytic and postanalytic performance measures were higher than for analytic measures. Eight performance measures were identified, including customer satisfaction, test turnaround times, patient identification

  9. Estimating Implementation and Operational Costs of an Integrated Tiered CD4 Service including Laboratory and Point of Care Testing in a Remote Health District in South Africa

    Science.gov (United States)

    Cassim, Naseem; Coetzee, Lindi M.; Schnippel, Kathryn; Glencross, Deborah K.

    2014-01-01

    Background An integrated tiered service delivery model (ITSDM) has been proposed to provide ‘full-coverage’ of CD4 services throughout South Africa. Five tiers are described, defined by testing volumes and number of referring health-facilities. These include: (1) Tier-1/decentralized point-of-care service (POC) in a single site; Tier-2/POC-hub servicing processing 600 samples/day and serving >100 or >200 health-clinics, respectively. The objective of this study was to establish costs of existing and ITSDM-tiers 1, 2 and 3 in a remote, under-serviced district in South Africa. Methods Historical health-facility workload volumes from the Pixley-ka-Seme district, and the total volumes of CD4 tests performed by the adjacent district referral CD4 laboratories, linked to locations of all referring clinics and related laboratory-to-result turn-around time (LTR-TAT) data, were extracted from the NHLS Corporate-Data-Warehouse for the period April-2012 to March-2013. Tiers were costed separately (as a cost-per-result) including equipment, staffing, reagents and test consumable costs. A one-way sensitivity analyses provided for changes in reagent price, test volumes and personnel time. Results The lowest cost-per-result was noted for the existing laboratory-based Tiers- 4 and 5 ($6.24 and $5.37 respectively), but with related increased LTR-TAT of >24–48 hours. Full service coverage with TAT cost-per-result of $32.32 and $15.88 respectively. A single district Tier-3 laboratory also ensured ‘full service coverage’ and Implementing a single Tier-3/community laboratory to extend and improve delivery of services in Pixley-ka-Seme, with an estimated local ∼12–24-hour LTR-TAT, is ∼$2 more than existing referred services per-test, but 2–4 fold cheaper than implementing eight Tier-2/POC-hubs or providing twenty-seven Tier-1/POCT CD4 services. PMID:25517412
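The per-tier costing described above (a cost-per-result combining equipment, staffing, and reagent costs, probed with a one-way sensitivity analysis) can be sketched generically. All figures below are hypothetical placeholders, not data from the study:

```python
def cost_per_result(annual_equipment: float, annual_staff: float,
                    reagent_per_test: float, tests_per_year: int) -> float:
    """Cost-per-result: annual fixed costs spread over test volume,
    plus the per-test reagent and consumable cost."""
    return (annual_equipment + annual_staff) / tests_per_year + reagent_per_test

# Hypothetical baseline tier: $20k equipment, $60k staff, $4/test reagents, 20k tests/yr
base = cost_per_result(20_000, 60_000, 4.0, 20_000)
print(base)

# One-way sensitivity on volume: vary one input while holding the others at baseline
for volume in (10_000, 20_000, 40_000):
    print(volume, round(cost_per_result(20_000, 60_000, 4.0, volume), 2))
```

The sketch makes the study's finding plausible in form: high-volume laboratory tiers dilute fixed costs over many tests, which is why the centralized tiers show the lowest cost-per-result.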

  10. Non-invasive coronary angiography for patients with acute atypical chest pain discharged after negative screening including maximal negative treadmill stress test. A prospective study.

    Science.gov (United States)

    Bonello, L; Armero, S; Jacquier, A; Com, O; Sarran, A; Sbragia, P; Panuel, M; Arques, S; Paganelli, F

    2009-05-01

    Among patients admitted to the emergency department for acute atypical chest pain, those with an acute coronary syndrome (ACS) who are mistakenly discharged home have high mortality. A recent retrospective study has demonstrated that multislice computed tomography (MSCT) coronary angiography could improve triage of these patients. We aimed to prospectively confirm these data on patients with a negative screening including a maximal treadmill stress test. Thirty patients discharged from the emergency department after negative screening for an ACS were included. All patients underwent MSCT angiography of the coronary arteries. Patients with coronary atheroma on MSCT had an invasive coronary angiography to confirm these findings. Seven patients (23%) had obstructive coronary artery disease on MSCT. Invasive coronary angiography (ICA) confirmed the diagnosis in all patients. In patients with no previously known coronary artery disease admitted to the emergency department with atypical acute chest pain and discharged after negative screening, including maximal treadmill stress test, MSCT coronary angiography is useful for the diagnosis of obstructive coronary artery disease.

  11. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, by keeping a memory of his errors, the agent asymptotically reaches an acceptable solution under mild assumptions. Moreover, one can take advantage of big errors for faster learning.

  12. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures, which scrutinize the consequences of random errors alone, have turned out to be obsolete. As a matter of course, the error calculus to come, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are required to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  13. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  14. Data Analysis & Statistical Methods for Command File Errors

    Science.gov (United States)

    Meshkat, Leila; Waggoner, Bruce; Bryant, Larry

    2014-01-01

    This paper explains current work on modeling for managing the risk of command file errors. It is focused on analyzing actual data from a JPL spaceflight mission to build models for evaluating and predicting error rates as a function of several key variables. We constructed a rich dataset by considering the number of errors, the number of files radiated, the number of commands and blocks in each file, as well as subjective estimates of workload and operational novelty. We have assessed these data using different curve-fitting and distribution-fitting techniques, such as multiple regression analysis and maximum likelihood estimation, to see how much of the variability in the error rates can be explained by these. We have also used goodness-of-fit testing strategies and principal component analysis to further assess our data. Finally, we constructed a model of expected error rates based on what these statistics bore out as critical drivers of the error rate. This model allows project management to evaluate the error rate against a theoretically expected rate as well as anticipate future error rates.
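
    The core modeling idea the abstract describes, regressing error counts on workload variables, can be sketched in a few lines. The data and the single-predictor least-squares fit below are hypothetical stand-ins, not the mission's actual dataset or multi-variable model:

```python
# Hypothetical data and a single-predictor least-squares fit -- a
# simplified stand-in for the mission's multi-variable error-rate model.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical weekly counts: files radiated vs. command file errors.
files_radiated = [10, 20, 30, 40, 50]
errors = [1, 2, 3, 4, 5]

a, b = fit_line(files_radiated, errors)
predict = lambda x: a + b * x
print(round(predict(25), 2))  # expected error count at 25 files/week -> 2.5
```

    In the paper itself, several predictors (commands, blocks, workload, novelty) and distribution-fitting techniques are combined; this sketch only illustrates the rate-versus-workload idea.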

  15. Calibration, field-testing, and error analysis of a gamma-ray probe for in situ measurement of dry bulk density

    International Nuclear Information System (INIS)

    Bertuzzi, P.; Bruckler, L.; Gabilly, Y.; Gaudu, J.C.

    1987-01-01

    This paper describes a new gamma-ray probe for measuring dry bulk density in the field. This equipment can be used with three different tube spacings (15, 20 and 30 cm). Calibration procedures and local error analyses are proposed for two cases: (1) for the case where the access tubes are parallel, calibration equations are given for three tube spacings. The linear correlation coefficient obtained in the laboratory is satisfactory (0.999), and a local error analysis shows that the standard deviation in the measured dry bulk density is small (±0.02 g/cm³); (2) when the access tubes are not parallel, a new calibration procedure is presented that accounts for and corrects measurement bias due to the deviating probe spacing. The standard deviation associated with the measured dry bulk density is greater (±0.05 g/cm³), but the measurements themselves are regarded as unbiased. After comparisons of core samplings and gamma-ray probe measurements, a field validation of the gamma-ray measurements is presented. Field validation was carried out on a variety of soils (clay, clay loam, loam, and silty clay loam), using gravimetric water contents that varied from 0.11 to 0.27 and dry bulk densities ranging from 1.30–1.80 g/cm³. Finally, an example of dry bulk density field variability is shown, and the spatial variability is analyzed in regard to the measurement errors

  16. Corrective Action Investigation Plan for Corrective Action Unit 410: Waste Disposal Trenches, Tonopah Test Range, Nevada, Revision 0 (includes ROTCs 1, 2, and 3)

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NV

    2002-07-16

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 410 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 410 is located on the Tonopah Test Range (TTR), which is included in the Nevada Test and Training Range (formerly the Nellis Air Force Range) approximately 140 miles northwest of Las Vegas, Nevada. This CAU is comprised of five Corrective Action Sites (CASs): TA-19-002-TAB2, Debris Mound; TA-21-003-TANL, Disposal Trench; TA-21-002-TAAL, Disposal Trench; 09-21-001-TA09, Disposal Trenches; 03-19-001, Waste Disposal Site. This CAU is being investigated because contaminants may be present in concentrations that could potentially pose a threat to human health and/or the environment, and waste may have been disposed of without appropriate controls. Four of the five CASs are the result of weapons testing and disposal activities at the TTR, and they are grouped together for site closure based on the similarity of the sites (waste disposal sites and trenches). The fifth CAS, CAS 03-19-001, is a hydrocarbon spill related to activities in the area. This site is grouped with this CAU because of the location (TTR). Based on historical documentation and process knowledge, vertical and lateral migration routes are possible for all CASs. Migration of contaminants may have occurred through transport by infiltration of precipitation through surface soil which serves as a driving force for downward migration of contaminants. Land-use scenarios limit future use of these CASs to industrial activities. The suspected contaminants of potential concern which have been identified are volatile organic compounds; semivolatile organic compounds; high explosives; radiological constituents including depleted

  17. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future tests. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Comparison of Glycomacropeptide with Phenylalanine Free-Synthetic Amino Acids in Test Meals to PKU Patients: No Significant Differences in Biomarkers, Including Plasma Phe Levels

    Directory of Open Access Journals (Sweden)

    Kirsten K. Ahring

    2018-01-01

    Introduction. Management of phenylketonuria (PKU) is achieved through a low-phenylalanine (Phe) diet, supplemented with low-protein food and a mixture of free-synthetic (FS) amino acids (AA). Casein glycomacropeptide (CGMP) is a natural peptide released in whey during cheese-making and does not contain Phe. Lacprodan® CGMP-20 used in this study contained a small amount of Phe due to minor presence of other proteins/peptides. Objective. The purpose of this study was to compare absorption of CGMP-20 to FSAA with the aim of evaluating short-term effects on plasma AAs as well as biomarkers related to food intake. Methods. This study included 8 patients, who had four visits and tested four drink mixtures (DM1–4), consisting of CGMP, FSAA, or a combination. Plasma blood samples were collected at baseline, 15, 30, 60, 120, and 240 minutes (min) after the meal. AA profiles and ghrelin were determined 6 times, while surrogate biomarkers were determined at baseline and 240 min. A visual analogue scale (VAS) was used for evaluation of taste and satiety. Results. The surrogate biomarker concentrations and VAS scores for satiety and taste did not differ significantly between the four DMs, and there were only a few significant results for AA profiles (not Phe). Conclusion. CGMP and FSAA had the overall same nonsignificant short-term effect on biomarkers, including Phe. This combination of FSAA and CGMP is a suitable supplement for PKU patients.

  19. The computation of equating errors in international surveys in education.

    Science.gov (United States)

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
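
    One common formulation of the linking error, sketched below under the assumption that it is computed as the standard error of the mean item-difficulty shift across the m common items (the operational PISA estimator includes further refinements), is straightforward to implement:

```python
import math

def linking_error(deltas):
    """Standard error of the mean item-difficulty shift across m
    common items: sqrt(sum((d - mean)^2) / (m * (m - 1)))."""
    m = len(deltas)
    mean = sum(deltas) / m
    ss = sum((d - mean) ** 2 for d in deltas)
    return math.sqrt(ss / (m * (m - 1)))

# Hypothetical difficulty shifts (logits) of five common items between cycles.
shifts = [0.05, -0.10, 0.20, 0.00, -0.15]
print(round(linking_error(shifts), 4))  # -> 0.0612
```

    A replication-based alternative, as the paper proposes, would resample the common items rather than rely on this closed form.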

  20. Corrective Action Investigation Plan for Corrective Action Unit 204: Storage Bunkers, Nevada Test Site, Nevada (December 2002, Revision No.: 0), Including Record of Technical Change No. 1

    Energy Technology Data Exchange (ETDEWEB)

    NNSA/NSO

    2002-12-12

    The Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Operations Office's approach to collect the data necessary to evaluate corrective action alternatives appropriate for the closure of Corrective Action Unit (CAU) 204 under the Federal Facility Agreement and Consent Order. Corrective Action Unit 204 is located on the Nevada Test Site approximately 65 miles northwest of Las Vegas, Nevada. This CAU is comprised of six Corrective Action Sites (CASs) which include: 01-34-01, Underground Instrument House Bunker; 02-34-01, Instrument Bunker; 03-34-01, Underground Bunker; 05-18-02, Chemical Explosives Storage; 05-33-01, Kay Blockhouse; 05-99-02, Explosive Storage Bunker. Based on site history, process knowledge, and previous field efforts, contaminants of potential concern for Corrective Action Unit 204 collectively include radionuclides, beryllium, high explosives, lead, polychlorinated biphenyls, total petroleum hydrocarbons, silver, warfarin, and zinc phosphide. The primary question for the investigation is: ''Are existing data sufficient to evaluate appropriate corrective actions?'' To address this question, resolution of two decision statements is required. Decision I is to ''Define the nature of contamination'' by identifying any contamination above preliminary action levels (PALs); Decision II is to ''Determine the extent of contamination identified above PALs''. If PALs are not exceeded, the investigation is completed. If PALs are exceeded, then Decision II must be resolved. In addition, data will be obtained to support waste management decisions. Field activities will include radiological land area surveys, geophysical surveys to identify any subsurface metallic and nonmetallic debris, field screening for applicable contaminants of potential concern, collection and analysis of surface and subsurface soil samples from biased locations

  1. Administration and Scoring Errors of Graduate Students Learning the WISC-IV: Issues and Controversies

    Science.gov (United States)

    Mrazik, Martin; Janzen, Troy M.; Dombrowski, Stefan C.; Barford, Sean W.; Krawchuk, Lindsey L.

    2012-01-01

    A total of 19 graduate students enrolled in a graduate course conducted 6 consecutive administrations of the Wechsler Intelligence Scale for Children, 4th edition (WISC-IV, Canadian version). Test protocols were examined to obtain data describing the frequency of examiner errors, including administration and scoring errors. Results identified 511…

  2. Human Errors and Bridge Management Systems

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle; Nowak, A. S.

    on basis of reliability profiles for bridges without human errors are extended to include bridges with human errors. The first rehabilitation distributions for bridges without and with human errors are combined into a joint first rehabilitation distribution. The methodology presented is illustrated...... for reinforced concrete bridges....

  3. Political violence and child adjustment in Northern Ireland: Testing pathways in a social-ecological model including single-and two-parent families.

    Science.gov (United States)

    Cummings, E Mark; Schermerhorn, Alice C; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter; Cairns, Ed

    2010-07-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children's reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems, and child outcomes were less consistent for nonsectarian community violence. Support was found for a social-ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children's functioning are discussed.

  4. Political violence and child adjustment in Northern Ireland: Testing pathways in a social ecological model including single and two-parent families

    Science.gov (United States)

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2013-01-01

    Moving beyond simply documenting that political violence negatively impacts children, a social ecological hypothesis for relations between political violence and child outcomes was tested. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including single- and two-parent families. Sectarian community violence was associated with elevated family conflict and children’s reduced security about multiple aspects of their social environment (i.e., family, parent-child relations, and community), with links to child adjustment problems and reductions in prosocial behavior. By comparison, and consistent with expectations, links with negative family processes, child regulatory problems and child outcomes were less consistent for nonsectarian community violence. Support was found for a social ecological model for relations between political violence and child outcomes among both single- and two-parent families, with evidence that emotional security and adjustment problems were more negatively affected in single-parent families. The implications for understanding social ecologies of political violence and children’s functioning are discussed. PMID:20604605

  5. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system...... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  6. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution, if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in the optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld's method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...

  7. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  8. A New Extension of the Binomial Error Model for Responses to Items of Varying Difficulty in Educational Testing and Attitude Surveys.

    Directory of Open Access Journals (Sweden)

    James A Wiley

    We put forward a new item response model which is an extension of the binomial error model first introduced by Keats and Lord. Like the binomial error model, the basic latent variable can be interpreted as a probability of responding in a certain way to an arbitrarily specified item. For a set of dichotomous items, this model gives predictions that are similar to other single parameter IRT models (such as the Rasch model) but has certain advantages in more complex cases. The first is that in specifying a flexible two-parameter Beta distribution for the latent variable, it is easy to formulate models for randomized experiments in which there is no reason to believe that either the latent variable or its distribution vary over randomly composed experimental groups. Second, the elementary response function is such that extensions to more complex cases (e.g., polychotomous responses, unfolding scales) are straightforward. Third, the probability metric of the latent trait allows tractable extensions to cover a wide variety of stochastic response processes.
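
    The Beta-compound binomial ("beta-binomial") distribution underlying such models can be sketched directly; the parameters below are illustrative, not taken from the paper:

```python
import math

def beta_binomial_pmf(k, n, a, b):
    """P(K = k) when p ~ Beta(a, b) and K | p ~ Binomial(n, p).
    Closed form via Beta functions, computed in log space."""
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    log_p = (math.log(math.comb(n, k))
             + log_beta(k + a, n - k + b) - log_beta(a, b))
    return math.exp(log_p)

# Illustrative case: 10 dichotomous items, latent probability p ~ Beta(2, 3).
n, a, b = 10, 2.0, 3.0
total = sum(beta_binomial_pmf(k, n, a, b) for k in range(n + 1))
print(round(total, 10))  # probabilities over all outcomes sum to 1
```

    With a = b = 1 (a uniform latent distribution) the score distribution is uniform over 0..n, a standard sanity check for this family.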

  9. An adaptive orienting theory of error processing.

    Science.gov (United States)

    Wessel, Jan R

    2018-03-01

    The ability to detect and correct action errors is paramount to safe and efficient goal-directed behaviors. Existing work on the neural underpinnings of error processing and post-error behavioral adaptations has led to the development of several mechanistic theories of error processing. These theories can be roughly grouped into adaptive and maladaptive theories. While adaptive theories propose that errors trigger a cascade of processes that will result in improved behavior after error commission, maladaptive theories hold that error commission momentarily impairs behavior. Neither group of theories can account for all available data, as different empirical studies find both impaired and improved post-error behavior. This article attempts a synthesis between the predictions made by prominent adaptive and maladaptive theories. Specifically, it is proposed that errors invoke a nonspecific cascade of processing that will rapidly interrupt and inhibit ongoing behavior and cognition, as well as orient attention toward the source of the error. It is proposed that this cascade follows all unexpected action outcomes, not just errors. In the case of errors, this cascade is followed by error-specific, controlled processing, which is specifically aimed at (re)tuning the existing task set. This theory combines existing predictions from maladaptive orienting and bottleneck theories with specific neural mechanisms from the wider field of cognitive control, including from error-specific theories of adaptive post-error processing. The article aims to describe the proposed framework and its implications for post-error slowing and post-error accuracy, propose mechanistic neural circuitry for post-error processing, and derive specific hypotheses for future empirical investigations. © 2017 Society for Psychophysiological Research.

  10. Measurement error models with interactions

    Science.gov (United States)

    Midthune, Douglas; Carroll, Raymond J.; Freedman, Laurence S.; Kipnis, Victor

    2016-01-01

    … X given W and Z, and use it to extend the method of regression calibration to this class of measurement error models. We apply the model to dietary data and test whether self-reported dietary intake includes an interaction between true intake and body mass index. We also perform simulations to compare the model to simpler approximate calibration models. PMID:26530858
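
    Regression calibration itself can be sketched as follows: in a calibration substudy where the true covariate X is observed, regress X on the error-prone measure W and covariate Z, then substitute the predicted E[X | W, Z] into the outcome model. The data and plain linear form below are hypothetical and simpler than the paper's interaction model:

```python
def solve3(A, y):
    """Gaussian elimination with partial pivoting for a 3x3 system A x = y."""
    A = [row[:] for row in A]
    y = y[:]
    for i in range(3):
        p = max(range(i, 3), key=lambda r: abs(A[r][i]))
        A[i], A[p], y[i], y[p] = A[p], A[i], y[p], y[i]
        for r in range(i + 1, 3):
            f = A[r][i] / A[i][i]
            for c in range(i, 3):
                A[r][c] -= f * A[i][c]
            y[r] -= f * y[i]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (y[i] - sum(A[i][c] * x[c] for c in range(i + 1, 3))) / A[i][i]
    return x

def calibrate(W, Z, X):
    """Least squares for E[X | W, Z] = b0 + b1*W + b2*Z via normal equations."""
    n = len(W)
    cols = [[1.0] * n, W, Z]
    A = [[sum(u * v for u, v in zip(cols[i], cols[j])) for j in range(3)]
         for i in range(3)]
    y = [sum(u * x for u, x in zip(cols[i], X)) for i in range(3)]
    return solve3(A, y)

# Hypothetical calibration data generated as X = 0.5*W + 0.25*Z exactly.
W = [1.0, 2.0, 3.0, 4.0, 5.0]
Z = [0.0, 1.0, 0.0, 1.0, 0.0]
X = [0.5 * w + 0.25 * z for w, z in zip(W, Z)]
b0, b1, b2 = calibrate(W, Z, X)
print(round(b1, 3), round(b2, 3))
```

    The fitted coefficients then give the predicted true intake to plug into the outcome regression; the paper's contribution is handling W-models that include X-by-Z interactions, which this linear sketch omits.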

  11. Diagnosis of Cognitive Errors by Statistical Pattern Recognition Methods.

    Science.gov (United States)

    Tatsuoka, Kikumi K.; Tatsuoka, Maurice M.

    The rule space model permits measurement of cognitive skill acquisition, diagnosis of cognitive errors, and detection of the strengths and weaknesses of knowledge possessed by individuals. Two ways to classify an individual into his or her most plausible latent state of knowledge include: (1) hypothesis testing--Bayes' decision rules for minimum…

  12. Analysis of Students' Error in Learning of Quadratic Equations

    Science.gov (United States)

    Zakaria, Effandi; Ibrahim; Maat, Siti Mistima

    2010-01-01

    The purpose of the study was to determine the students' errors in learning quadratic equations. The sample comprised 30 Form Three students from a secondary school in Jambi, Indonesia. A diagnostic test covering three components (factorization, completing the square, and the quadratic formula) was used as the instrument of this study. Diagnostic interview…

  13. Relating Complexity and Error Rates of Ontology Concepts. More Complex NCIt Concepts Have More Errors.

    Science.gov (United States)

    Min, Hua; Zheng, Ling; Perl, Yehoshua; Halper, Michael; De Coronado, Sherri; Ochs, Christopher

    2017-05-18

    Ontologies are knowledge structures that lend support to many health-information systems. A study is carried out to assess the quality of ontological concepts based on a measure of their complexity. The results show a relation between complexity of concepts and error rates of concepts. A measure of lateral complexity defined as the number of exhibited role types is used to distinguish between more complex and simpler concepts. Using a framework called an area taxonomy, a kind of abstraction network that summarizes the structural organization of an ontology, concepts are divided into two groups along these lines. Various concepts from each group are then subjected to a two-phase QA analysis to uncover and verify errors and inconsistencies in their modeling. A hierarchy of the National Cancer Institute thesaurus (NCIt) is used as our test-bed. A hypothesis pertaining to the expected error rates of the complex and simple concepts is tested. Our study was done on the NCIt's Biological Process hierarchy. Various errors, including missing roles, incorrect role targets, and incorrectly assigned roles, were discovered and verified in the two phases of our QA analysis. The overall findings confirmed our hypothesis by showing a statistically significant difference between the amounts of errors exhibited by more laterally complex concepts vis-à-vis simpler concepts. QA is an essential part of any ontology's maintenance regimen. In this paper, we reported on the results of a QA study targeting two groups of ontology concepts distinguished by their level of complexity, defined in terms of the number of exhibited role types. The study was carried out on a major component of an important ontology, the NCIt. The findings suggest that more complex concepts tend to have a higher error rate than simpler concepts. These findings can be utilized to guide ongoing efforts in ontology QA.
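
    Comparing error rates between the complex and simple concept groups is, at its core, a two-proportion hypothesis test. The pooled z statistic below is one standard way to run it; the counts are hypothetical, not the NCIt study's figures:

```python
import math

def two_proportion_z(err1, n1, err2, n2):
    """z statistic for H0: equal error rates in two groups of concepts."""
    p1, p2 = err1 / n1, err2 / n2
    p = (err1 + err2) / (n1 + n2)  # pooled error rate under H0
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical QA counts: 30/100 complex vs. 15/100 simple concepts in error.
z = two_proportion_z(30, 100, 15, 100)
print(round(z, 2))  # -> 2.54, beyond the 1.96 threshold at alpha = 0.05
```

    With these illustrative counts the difference would be declared significant, mirroring the direction of the study's finding.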

  14. Rectifying calibration error of Goldmann applanation tonometer is easy!

    Directory of Open Access Journals (Sweden)

    Nikhil S Choudhari

    2014-01-01

    Full Text Available Purpose: Goldmann applanation tonometer (GAT is the current Gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics. Its company repair has limitations. The purpose of this report is to describe a self-taught technique of rectifying calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland were included in this cross-sectional interventional pilot study. The technique of rectification of calibration error of the tonometer involved cleaning and lubrication of the instrument followed by alignment of weights when lubrication alone didn′t suffice. We followed the South East Asia Glaucoma Interest Group′s definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively. Results: Twelve out of 29 (41.3% GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify calibration error of 11 (91.6% faulty instruments. Only one (8.3% faulty GAT required alignment of the counter-weight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the Gold standard tonometer.

  15. A causal link between prediction errors, dopamine neurons and learning.

    Science.gov (United States)

    Steinberg, Elizabeth E; Keiflin, Ronald; Boivin, Josiah R; Witten, Ilana B; Deisseroth, Karl; Janak, Patricia H

    2013-07-01

    Situations in which rewards are unexpectedly obtained or withheld represent opportunities for new learning. Often, this learning includes identifying cues that predict reward availability. Unexpected rewards strongly activate midbrain dopamine neurons. This phasic signal is proposed to support learning about antecedent cues by signaling discrepancies between actual and expected outcomes, termed a reward prediction error. However, it is unknown whether dopamine neuron prediction error signaling and cue-reward learning are causally linked. To test this hypothesis, we manipulated dopamine neuron activity in rats in two behavioral procedures, associative blocking and extinction, that illustrate the essential function of prediction errors in learning. We observed that optogenetic activation of dopamine neurons concurrent with reward delivery, mimicking a prediction error, was sufficient to cause long-lasting increases in cue-elicited reward-seeking behavior. Our findings establish a causal role for temporally precise dopamine neuron signaling in cue-reward learning, bridging a critical gap between experimental evidence and influential theoretical frameworks.

  16. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, two-dimensional errors and includes a bibliography. Appendices are included which address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve and the computer and calculator solution of problems

  17. Rheumatoid factor testing in Spanish primary care: A population-based cohort study including 4.8 million subjects and almost half a million measurements.

    Science.gov (United States)

    Morsley, Klara; Miller, Anne; Luqmani, Raashid; Fina-Aviles, Francesc; Javaid, Muhammad Kassim; Edwards, Christopher J; Pinedo-Villanueva, Rafael; Medina, Manuel; Calero, Sebastian; Cooper, Cyrus; Arden, Nigel; Prieto-Alhambra, Daniel

    2018-02-26

    Rheumatoid factor (RF) testing is used in primary care in the diagnosis of rheumatoid arthritis (RA); however a positive RF may occur without RA. Incorrect use of RF testing may lead to increased costs and delayed diagnoses. The aim was to assess the performance of RF as a test for RA and to estimate the costs associated with its use in a primary care setting. A retrospective cohort study using the Information System for the Development of Research in Primary Care database (contains primary care records and laboratory results of >80% of the Catalonian population, Spain). Participants were patients ≥18 years with ≥1 RF test performed between 01/01/2006 and 31/12/2011, without a pre-existing diagnosis of RA. Outcome measures were an incident diagnosis of RA within 1 year of testing, and the cost of testing per case of RA. 495,434/4,796,498 (10.3%) patients were tested at least once. 107,362 (21.7%) of those tested were sero-positive of which 2768 (2.6%) were diagnosed with RA within 1 year as were 1141/388,072 (0.3%) sero-negative participants. The sensitivity of RF was 70.8% (95% CI 69.4-72.2), specificity 78.7% (78.6-78.8), and positive and negative predictive values 2.6% (2.5-2.7) and 99.7% (99.6-99.7) respectively. Approximately €3,963,472 was spent, with a cost of €1432 per true positive case. Although 10% of patients were tested for RF, most did not have RA. Limiting testing to patients with a higher pre-test probability would significantly reduce the cost of testing. Copyright © 2017 Elsevier España, S.L.U. and Sociedad Española de Reumatología y Colegio Mexicano de Reumatología. All rights reserved.

  18. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  19. Blockage and flow studies of a generalized test apparatus including various wing configurations in the Langley 7-inch Mach 7 Pilot Tunnel

    Science.gov (United States)

    Albertson, C. W.

    1982-03-01

    A 1/12th scale model of the Curved Surface Test Apparatus (CSTA), which will be used to study aerothermal loads and evaluate Thermal Protection Systems (TPS) on a fuselage-type configuration in the Langley 8-Foot High Temperature Structures Tunnel (8 ft HTST), was tested in the Langley 7-Inch Mach 7 Pilot Tunnel. The purpose of the tests was to study the overall flow characteristics and define an envelope for testing the CSTA in the 8 ft HTST. Wings were tested on the scaled CSTA model to select a wing configuration with the most favorable characteristics for conducting TPS evaluations for curved and intersecting surfaces. The results indicate that the CSTA and selected wing configuration can be tested at angles of attack up to 15.5 and 10.5 degrees, respectively. The base pressure for both models was at the expected low level for most test conditions. Results generally indicate that the CSTA and wing configuration will provide a useful test bed for aerothermal pads and thermal structural concept evaluation over a broad range of flow conditions in the 8 ft HTST.

  20. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs advanced error correcting techniques.

  1. Approximate error conjugation gradient minimization methods

    Science.gov (United States)

    Kallman, Jeffrey S

    2013-05-21

    In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.

  2. Propagation of angular errors in two-axis rotation systems

    Science.gov (United States)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.

  3. Error-related anterior cingulate cortex activity and the prediction of conscious error awareness

    Directory of Open Access Journals (Sweden)

    Catherine eOrr

    2012-06-01

    Full Text Available Research examining the neural mechanisms associated with error awareness has consistently identified dorsal anterior cingulate activity (ACC as necessary but not predictive of conscious error detection. Two recent studies (Steinhauser and Yeung, 2010; Wessel et al. 2011 have found a contrary pattern of greater dorsal ACC activity (in the form of the error-related negativity during detected errors, but suggested that the greater activity may instead reflect task influences (e.g., response conflict, error probability and or individual variability (e.g., statistical power. We re-analyzed fMRI BOLD data from 56 healthy participants who had previously been administered the Error Awareness Task, a motor Go/No-go response inhibition task in which subjects make errors of commission of which they are aware (Aware errors, or unaware (Unaware errors. Consistent with previous data, the activity in a number of cortical regions was predictive of error awareness, including bilateral inferior parietal and insula cortices, however in contrast to previous studies, including our own smaller sample studies using the same task, error-related dorsal ACC activity was significantly greater during aware errors when compared to unaware errors. While the significantly faster RT for aware errors (compared to unaware was consistent with the hypothesis of higher response conflict increasing ACC activity, we could find no relationship between dorsal ACC activity and the error RT difference. The data suggests that individual variability in error awareness is associated with error-related dorsal ACC activity, and therefore this region may be important to conscious error detection, but it remains unclear what task and individual factors influence error awareness.

  4. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave ''significantly'' better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  5. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  6. Reflex test reminders in required cancer synoptic templates decrease order entry error: An analysis of mismatch repair immunohistochemical orders to screen for Lynch syndrome

    Directory of Open Access Journals (Sweden)

    Mark R Kilgore

    2016-01-01

    Full Text Available Background: Endometrial carcinoma (EC is the most common extracolonic malignant neoplasm associated with Lynch syndrome (LS. LS is caused by autosomal dominant germline mutations in DNA mismatch repair (MMR genes. Screening for LS in EC is often evaluated by loss of immunohistochemical (IHC expression of DNA MMR enzymes MLH1, MSH2, MSH6, and PMS2 (MMR IHC. In July 2013, our clinicians asked that we screen all EC in patients ≤60 for loss of MMR IHC expression. Despite this policy, several cases were not screened or screening was delayed. We implemented an informatics-based approach to ensure that all women who met criteria would have timely screening. Subjects and Methods: Reports are created in PowerPath (Sunquest Information Systems, Tucson, AZ with custom synoptic templates. We implemented an algorithm on March 6, 2014 requiring pathologists to address MMR IHC in patients ≤60 with EC before sign out (S/O. Pathologists must answer these questions: is patient ≤60 (yes/no, if yes, follow-up questions (IHC done previously, ordered with addendum to follow, results included in report, N/A, or not ordered, if not ordered, one must explain. We analyzed cases from July 18, 2013 to August 31, 2016 preimplementation (PreImp and postimplementation (PostImp that met criteria. Data analysis was performed using the standard data package included with GraphPad Prism® 7.00 (GraphPad Software, Inc., La Jolla, CA, USA. Results: There were 147 patients who met criteria (29 PreImp and 118 PostImp. IHC was ordered in a more complete and timely fashion PostImp than PreImp. PreImp, 4/29 (13.8% cases did not get any IHC, but PostImp, only 4/118 (3.39% were missed (P = 0.0448. Of cases with IHC ordered, 60.0% (15/25 were ordered before or at S/O PreImp versus 91.2% (104/114 PostImp (P = 0.0004. Relative to day of S/O, the mean days of order delay were longer and more variable PreImp versus PostImp (12.9 ± 40.7 vs. -0.660 ± 1.15; P = 0.0227, with the average

  7. Reflex test reminders in required cancer synoptic templates decrease order entry error: An analysis of mismatch repair immunohistochemical orders to screen for Lynch syndrome.

    Science.gov (United States)

    Kilgore, Mark R; McIlwain, Carrie A; Schmidt, Rodney A; Norquist, Barbara M; Swisher, Elizabeth M; Garcia, Rochelle L; Rendi, Mara H

    2016-01-01

    Endometrial carcinoma (EC) is the most common extracolonic malignant neoplasm associated with Lynch syndrome (LS). LS is caused by autosomal dominant germline mutations in DNA mismatch repair (MMR) genes. Screening for LS in EC is often evaluated by loss of immunohistochemical (IHC) expression of DNA MMR enzymes MLH1, MSH2, MSH6, and PMS2 (MMR IHC). In July 2013, our clinicians asked that we screen all EC in patients ≤60 for loss of MMR IHC expression. Despite this policy, several cases were not screened or screening was delayed. We implemented an informatics-based approach to ensure that all women who met criteria would have timely screening. Reports are created in PowerPath (Sunquest Information Systems, Tucson, AZ) with custom synoptic templates. We implemented an algorithm on March 6, 2014 requiring pathologists to address MMR IHC in patients ≤60 with EC before sign out (S/O). Pathologists must answer these questions: is patient ≤60 (yes/no), if yes, follow-up questions (IHC done previously, ordered with addendum to follow, results included in report, N/A, or not ordered), if not ordered, one must explain. We analyzed cases from July 18, 2013 to August 31, 2016 preimplementation (PreImp) and postimplementation (PostImp) that met criteria. Data analysis was performed using the standard data package included with GraphPad Prism ® 7.00 (GraphPad Software, Inc., La Jolla, CA, USA). There were 147 patients who met criteria (29 PreImp and 118 PostImp). IHC was ordered in a more complete and timely fashion PostImp than PreImp. PreImp, 4/29 (13.8%) cases did not get any IHC, but PostImp, only 4/118 (3.39%) were missed ( P = 0.0448). Of cases with IHC ordered, 60.0% (15/25) were ordered before or at S/O PreImp versus 91.2% (104/114) PostImp ( P = 0.0004). Relative to day of S/O, the mean days of order delay were longer and more variable PreImp versus PostImp (12.9 ± 40.7 vs. -0.660 ± 1.15; P = 0.0227), with the average being before S/O PostImp. This algorithm

  8. [Analysis of intrusion errors in free recall].

    Science.gov (United States)

    Diesfeldt, H F A

    2017-06-01

    Extra-list intrusion errors during five trials of the eight-word list-learning task of the Amsterdam Dementia Screening Test (ADST) were investigated in 823 consecutive psychogeriatric patients (87.1% suffering from major neurocognitive disorder). Almost half of the participants (45.9%) produced one or more intrusion errors on the verbal recall test. Correct responses were lower when subjects made intrusion errors, but learning slopes did not differ between subjects who committed intrusion errors and those who did not so. Bivariate regression analyses revealed that participants who committed intrusion errors were more deficient on measures of eight-word recognition memory, delayed visual recognition and tests of executive control (the Behavioral Dyscontrol Scale and the ADST-Graphical Sequences as measures of response inhibition). Using hierarchical multiple regression, only free recall and delayed visual recognition retained an independent effect in the association with intrusion errors, such that deficient scores on tests of episodic memory were sufficient to explain the occurrence of intrusion errors. Measures of inhibitory control did not add significantly to the explanation of intrusion errors in free recall, which makes insufficient strength of memory traces rather than a primary deficit in inhibition the preferred account for intrusion errors in free recall.

  9. Wavefront error sensing for LDR

    Science.gov (United States)

    Tubbs, Eldred F.; Glavich, T. A.

    1988-01-01

    Wavefront sensing is a significant aspect of the LDR control problem and requires attention at an early stage of the control system definition and design. A combination of a Hartmann test for wavefront slope measurement and an interference test for piston errors of the segments was examined and is presented as a point of departure for further discussion. The assumption is made that the wavefront sensor will be used for initial alignment and periodic alignment checks but that it will not be used during scientific observations. The Hartmann test and the interferometric test are briefly examined.

  10. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Biomechanical evaluation of bending strength of spinal pedicle screws, including cylindrical, conical, dual core and double dual core designs using numerical simulations and mechanical tests.

    Science.gov (United States)

    Amaritsakul, Yongyut; Chao, Ching-Kong; Lin, Jinn

    2014-09-01

    Pedicle screws are used for treating several types of spinal injuries. Although several commercial versions are presently available, they are mostly either fully cylindrical or fully conical. In this study, the bending strengths of seven types of commercial pedicle screws and a newly designed double dual core screw were evaluated by finite element analyses and biomechanical tests. All the screws had an outer diameter of 7 mm, and the biomechanical test consisted of a cantilever bending test in which a vertical point load was applied using a level arm of 45 mm. The boundary and loading conditions of the biomechanical tests were applied to the model used for the finite element analyses. The results showed that only the conical screws with fixed outer diameter and the new double dual core screw could withstand 1,000,000 cycles of a 50-500 N cyclic load. The new screw, however, exhibited lower stiffness than the conical screw, indicating that it could afford patients more flexible movements. Moreover, the new screw produced a level of stability comparable to that of the conical screw, and it was also significantly stronger than the other screws. The finite element analysis further revealed that the point of maximum tensile stress in the screw model was comparable to the point at which fracture occurred during the fatigue test. Copyright © 2014 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lean to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  13. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  14. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors have a major contribution to the risks for industrial accidents. Accidents have provided important lesson making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is however increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research in human error and it concludes with suggestions for further work. (orig.)

  15. Medication errors in pediatric inpatients

    DEFF Research Database (Denmark)

    Rishoej, Rikke Mie; Almarsdóttir, Anna Birna; Christesen, Henrik Thybo

    2017-01-01

    The aim was to describe medication errors (MEs) in hospitalized children reported to the national mandatory reporting and learning system, the Danish Patient Safety Database (DPSD). MEs were extracted from DPSD from the 5-year period of 2010–2014. We included reports from public hospitals on pati...... safety in pediatric inpatients.(Table presented.)...

  16. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    Science.gov (United States)

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner. (c) 2016 APA, all rights reserved).

  17. Analysis of technological innovation in Danish wind turbine industry - including the Test Station for Windturbines dual roll as research institution and certification authority

    International Nuclear Information System (INIS)

    Dannemand Andersen, P.

    1993-01-01

    The overall aim of this thesis is to examine the interactions between the Danish wind turbine industry and the Test Station for Wind Turbines. Because these interactions are concerning technological innovation, it follows that the innovation processes within the enterprises must be analyzed and modelled. The study is carried out as an iterative model-developing process using case study methods. The findings from some less structured interviews are discussed with literature and forms a basis for models and new interviews. The thesis is based on interviews with 20 R and D engineers in the Danish wind turbine industry, 7 engineers at The Test Station and 7 people involved in wind power abroad (American and British). The theoretical frame for this thesis is sociology/organizational theory and industrial engineering. The thesis consists of five main sections, dealing with technology and knowledge, innovation processes, organizational culture, innovation and interaction between the Test Station's research activities and the companies' innovation processes, and finally interaction through the Test Stations certification activity. First a taxonomy for technology and knowledge is established in order to clarify what kind of technology the interactions are all about, and what kind of knowledge is transferred during the interactions. This part of the thesis also contains an analysis of the patents drawn by the Danish wind turbine industry. The analysis shows that the Danish wind turbine industry do not use patents. Instead the nature of the technology and the speed of innovation are used to protect the industry's knowledge. (EG) (192 refs.)

  18. Political Violence and Child Adjustment in Northern Ireland: Testing Pathways in a Social-Ecological Model Including Single- and Two-Parent Families

    Science.gov (United States)

    Cummings, E. Mark; Schermerhorn, Alice C.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter; Cairns, Ed

    2010-01-01

    Moving beyond simply documenting that political violence negatively impacts children, we tested a social-ecological hypothesis for relations between political violence and child outcomes. Participants were 700 mother-child (M = 12.1 years, SD = 1.8) dyads from 18 working-class, socially deprived areas in Belfast, Northern Ireland, including…

  19. Medication errors as malpractice - a qualitative content analysis of 585 medication errors by nurses in Sweden.

    Science.gov (United States)

    Björkstén, Karin Sparring; Bergqvist, Monica; Andersén-Karlsson, Eva; Benson, Lina; Ulfvarson, Johanna

    2016-08-24

    Many studies address the prevalence of medication errors, but few address medication errors serious enough to be regarded as malpractice. Other studies have analyzed the individual and system contributory factors leading to a medication error. Nurses have a key role in medication administration, and there are contradictory reports on nurses' work experience in relation to the risk and type of medication errors. All medication errors for which a nurse was held responsible for malpractice (n = 585) during 11 years in Sweden were included. A qualitative content analysis was performed, and the errors were classified according to type and to individual and system contributory factors. In order to test for possible differences related to nurses' work experience, and for associations within and between the errors and contributory factors, Fisher's exact test was used, and Cohen's kappa (k) was calculated to estimate the magnitude and direction of the associations. There were a total of 613 medication errors in the 585 cases, the most common being "Wrong dose" (41 %), "Wrong patient" (13 %) and "Omission of drug" (12 %). In 95 % of the cases, an average of 1.4 individual contributory factors was found; the most common being "Negligence, forgetfulness or lack of attentiveness" (68 %), "Proper protocol not followed" (25 %), "Lack of knowledge" (13 %) and "Practice beyond scope" (12 %). In 78 % of the cases, an average of 1.7 system contributory factors was found; the most common being "Role overload" (36 %), "Unclear communication or orders" (30 %) and "Lack of adequate access to guidelines or unclear organisational routines" (30 %). The errors "Wrong patient due to mix-up of patients" and "Wrong route" and the contributory factors "Lack of knowledge" and "Negligence, forgetfulness or lack of attentiveness" were more common among less experienced nurses. The experienced nurses were more prone to "Practice beyond scope of practice" and to make errors in spite of "Lack of adequate
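
    The two statistics named in this abstract can be computed from first principles. The sketch below is a minimal stdlib-only illustration, not the study's code: `fisher_exact_2x2` assumes a 2x2 table (the original analysis involved more categories and 585 real cases), and the function names and toy data are our own.

```python
from math import comb
from collections import Counter

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test p-value for the 2x2 table
    [[a, b], [c, d]]: sum the hypergeometric probabilities of all
    tables (with the same margins) at most as likely as the observed one."""
    n = a + b + c + d
    r1, c1 = a + b, a + c
    denom = comb(n, c1)

    def p_of(k):  # probability that cell (0, 0) equals k
        return comb(r1, k) * comb(n - r1, c1 - k) / denom

    p_obs = p_of(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    return sum(p_of(k) for k in range(lo, hi + 1)
               if p_of(k) <= p_obs * (1 + 1e-9))

def cohens_kappa(labels1, labels2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(labels1)
    po = sum(x == y for x, y in zip(labels1, labels2)) / n  # observed
    c1, c2 = Counter(labels1), Counter(labels2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)           # by chance
    return (po - pe) / (1 - pe)
```

    For a symmetric toy table [[3, 1], [1, 3]] the two-sided p-value is 34/70, and two raters agreeing on 3 of 4 binary labels yield a kappa of 0.5.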

  20. Impact of Measurement Error on Synchrophasor Applications

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Yilu [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gracia, Jose R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ewing, Paul D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Zhao, Jiecheng [Univ. of Tennessee, Knoxville, TN (United States); Tan, Jin [Univ. of Tennessee, Knoxville, TN (United States); Wu, Ling [Univ. of Tennessee, Knoxville, TN (United States); Zhan, Lingwei [Univ. of Tennessee, Knoxville, TN (United States)

    2015-07-01

    Phasor measurement units (PMUs), which produce synchrophasor measurements, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is the application most likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as a result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.

  1. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by the global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors, and standard deviations have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
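
    The sphere-intersection step described here can be sketched in a few lines. The code below is a hedged illustration of Newton-Raphson GPS positioning under simplified assumptions (unitless coordinates, a shared clock-bias range term b, and made-up satellite geometry); it is not the authors' implementation, and all names are our own.

```python
import math

def solve4(A, b):
    """Gaussian elimination with partial pivoting for a 4x4 linear system."""
    n = 4
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= k * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def newton_gps(sats, rhos, start=(0.0, 0.0, 0.0, 0.0), iters=50):
    """Newton-Raphson solution of rho_i = ||p - s_i|| + b for the receiver
    position p = (x, y, z) and clock-bias range b, i.e. the common
    intersection point of four spheres."""
    p = list(start)
    for _ in range(iters):
        J, f = [], []
        for (sx, sy, sz), rho in zip(sats, rhos):
            dx, dy, dz = p[0] - sx, p[1] - sy, p[2] - sz
            r = math.sqrt(dx * dx + dy * dy + dz * dz)
            f.append(r + p[3] - rho)                 # residual of sphere i
            J.append([dx / r, dy / r, dz / r, 1.0])  # Jacobian row
        step = solve4(J, f)
        p = [pi - si for pi, si in zip(p, step)]
        if max(abs(s) for s in step) < 1e-12:
            break
    return p
```

    Feeding the solver exact pseudoranges generated from a known position drives all four sphere residuals to (numerical) zero within a handful of iterations.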

  2. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available. David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment in more rational oversight systems, such as health courts or no-fault systems, may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no-fault compensation, alternative dispute resolution, system errors

  3. A mode of error: Immunoglobulin binding protein (a subset of anti-citrullinated proteins) can cause false-positive tuberculosis test results in rheumatoid arthritis

    Directory of Open Access Journals (Sweden)

    Maria Greenwald

    2017-12-01

    Full Text Available. Citrullinated Immunoglobulin Binding Protein (BiP) is a newly described autoimmune target in rheumatoid arthritis (RA), one of many cyclic citrullinated peptides (CCP) or ACPA. BiP is over-expressed in RA patients, causing T cell expansion and increased interferon levels during incubation for the QuantiFERON-Gold tuberculosis test (QFT-G TB). The QFT-G TB test has never been validated where interferon is increased by underlying disease, as for example in RA. Of ACPA-positive RA patients (n = 126), we found a 13% false-positive TB test rate by QFT-G TB. Despite subsequent biologic therapy for 3 years in all 126 RA patients, none showed evidence of TB without INH. Most of the false-positive RA patients reverted to a negative QFT-G test after treatment with biologic therapy. False TB tests correlated with ACPA level (p < 0.02). Three healthy women without arthritis or TB exposure had negative QFT-G TB tests. In vitro, all three tested positive for TB every time, in correlation with the dose of BiP or anti-BiP added, at 2 ug/ml, 5 ug/ml, 10 ug/ml, and 20 ug/ml. BiP, naturally found in the majority of ACPA-positive RA patients, can result in a false-positive QFT-G TB test. Subsequent undertreatment of RA, if biologic therapy is withheld, and overtreatment of presumed latent TB may harm patients. Keywords: Tuberculosis, IGRA, Rheumatoid arthritis, Interferon, Anti-citrullinated peptide antibody (ACPA), Immunoglobulin binding protein (BiP)

  4. Clinical errors and medical negligence.

    Science.gov (United States)

    Oyebode, Femi

    2013-01-01

    This paper discusses the definition, nature and origins of clinical errors including their prevention. The relationship between clinical errors and medical negligence is examined as are the characteristics of litigants and events that are the source of litigation. The pattern of malpractice claims in different specialties and settings is examined. Among hospitalized patients worldwide, 3-16% suffer injury as a result of medical intervention, the most common being the adverse effects of drugs. The frequency of adverse drug effects appears superficially to be higher in intensive care units and emergency departments but once rates have been corrected for volume of patients, comorbidity of conditions and number of drugs prescribed, the difference is not significant. It is concluded that probably no more than 1 in 7 adverse events in medicine result in a malpractice claim and the factors that predict that a patient will resort to litigation include a prior poor relationship with the clinician and the feeling that the patient is not being kept informed. Methods for preventing clinical errors are still in their infancy. The most promising include new technologies such as electronic prescribing systems, diagnostic and clinical decision-making aids and error-resistant systems. Copyright © 2013 S. Karger AG, Basel.

  5. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  6. Medication errors: an overview for clinicians.

    Science.gov (United States)

    Wittich, Christopher M; Burkle, Christopher M; Lanier, William L

    2014-08-01

    Medication error is an important cause of patient morbidity and mortality, yet it can be a confusing and underappreciated concept. This article provides a review for practicing physicians that focuses on medication error (1) terminology and definitions, (2) incidence, (3) risk factors, (4) avoidance strategies, and (5) disclosure and legal consequences. A medication error is any error that occurs at any point in the medication use process. It has been estimated by the Institute of Medicine that medication errors cause 1 of 131 outpatient and 1 of 854 inpatient deaths. Medication factors (eg, similar sounding names, low therapeutic index), patient factors (eg, poor renal or hepatic function, impaired cognition, polypharmacy), and health care professional factors (eg, use of abbreviations in prescriptions and other communications, cognitive biases) can precipitate medication errors. Consequences faced by physicians after medication errors can include loss of patient trust, civil actions, criminal charges, and medical board discipline. Methods to prevent medication errors from occurring (eg, use of information technology, better drug labeling, and medication reconciliation) have been used with varying success. When an error is discovered, patients expect disclosure that is timely, given in person, and accompanied with an apology and communication of efforts to prevent future errors. Learning more about medication errors may enhance health care professionals' ability to provide safe care to their patients. Copyright © 2014 Mayo Foundation for Medical Education and Research. Published by Elsevier Inc. All rights reserved.

  7. Output Error Method for Tiltrotor Unstable in Hover

    Directory of Open Access Journals (Sweden)

    Lichota Piotr

    2017-03-01

    Full Text Available. This article investigates system identification of an unstable tiltrotor in hover from flight test data. The aircraft dynamics were described by a linear model defined in a body-fixed coordinate system. The Output Error Method was selected in order to obtain stability and control derivatives in lateral motion. Both time-domain and frequency-domain formulations were applied to estimate the model parameters. To improve the system identification performed in the time domain, a stabilization matrix was included when evaluating the states. Finally, estimates obtained from the various Output Error Method formulations were compared in terms of parameter accuracy and time histories. Evaluations were performed in the MATLAB R2009b environment.
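
    The idea of the Output Error Method - iterate the model parameters so that the simulated output matches the measured output in a least-squares sense - can be shown on a toy scalar system. The sketch below is our own minimal Gauss-Newton illustration for a stable discrete model x[k+1] = a*x[k] + b*u[k], y = x; it omits the stabilization matrix and the full lateral state-space model used in the article, and all names are illustrative.

```python
def simulate(a, b, u, x0=0.0):
    """Propagate x[k+1] = a*x[k] + b*u[k]; the output is y[k] = x[k]."""
    x, ys = x0, []
    for uk in u:
        ys.append(x)
        x = a * x + b * uk
    return ys

def output_error_fit(u, y_meas, a0, b0, iters=100):
    """Gauss-Newton Output Error Method: propagate the model and its
    parameter sensitivities, then take normal-equation steps that
    minimize the summed squared output residuals."""
    a, b = a0, b0
    for _ in range(iters):
        x, xa, xb = 0.0, 0.0, 0.0   # state and its sensitivities dx/da, dx/db
        J, res = [], []
        for uk, yk in zip(u, y_meas):
            res.append(x - yk)      # output residual at step k
            J.append((xa, xb))      # output sensitivities at step k
            xa = x + a * xa         # d x[k+1] / da
            xb = uk + a * xb        # d x[k+1] / db
            x = a * x + b * uk
        # solve the 2x2 normal equations (J^T J) d = J^T res
        jaa = sum(ja * ja for ja, _ in J)
        jbb = sum(jb * jb for _, jb in J)
        jab = sum(ja * jb for ja, jb in J)
        ga = sum(ja * ri for (ja, _), ri in zip(J, res))
        gb = sum(jb * ri for (_, jb), ri in zip(J, res))
        det = jaa * jbb - jab * jab
        if abs(det) < 1e-12:
            break
        da = (jbb * ga - jab * gb) / det
        db = (jaa * gb - jab * ga) / det
        a, b = a - da, b - db
        if max(abs(da), abs(db)) < 1e-12:
            break
    return a, b
```

    With noise-free data generated from known parameters and a reasonable starting guess, the iteration recovers the true (a, b) to machine precision.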

  8. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  9. A method of multi-crack shape identification from eddy current testing signals of steam generator tubes including support plates as noise sources

    International Nuclear Information System (INIS)

    Nagaya, Yoshiaki; Endo, Hisashi; Takagi, Toshiyuki; Uchimoto, Tetsuya

    2005-01-01

    This paper deals with identifying multiple cracks from eddy current testing (ECT) signals obtained in a steam generator tube with a support plate and deposits. Treating the two-dimensionally scanned ECT signals as a picture image, signal processing by a multi-frequency technique eliminates the noise caused by the support plate and deposits. Template matching aided by genetic algorithms then detects the number and positions of cracks in the image after the signal processing. Finally, inverse analysis estimates the crack profile based on the predicted crack positions. The number and positions of the cracks are sufficiently well predicted, and crack shape reconstructions are achieved with a satisfactory degree of accuracy. (author)
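
    The detection step can be illustrated with a toy one-dimensional version. The sketch below scores candidate positions by normalized cross-correlation and keeps local maxima above a threshold; an exhaustive scan stands in for the paper's genetic-algorithm search, and the signal, template, threshold, and names are our own illustrative assumptions.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length windows (in [-1, 1])."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def find_cracks(signal, template, thresh=0.9):
    """Return the offsets where the template correlates above `thresh`,
    keeping only local maxima: the number and positions of candidate
    cracks (exhaustive search in place of a genetic algorithm)."""
    w = len(template)
    scores = [ncc(signal[i:i + w], template)
              for i in range(len(signal) - w + 1)]
    last = len(scores) - 1
    return [i for i, s in enumerate(scores)
            if s >= thresh
            and s >= max(scores[max(0, i - 1)], scores[min(last, i + 1)])]
```

    Planting two copies of a triangular template in an otherwise flat signal, the scan reports exactly those two offsets.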

  10. A general tank test of a model of the hull of the Pem-1 flying boat including a special working chart for the determination of hull performance

    Science.gov (United States)

    Dawson, John R

    1938-01-01

    The results of a general tank test of a 1/6 full-size model of the hull of the Pem-1 flying boat (N.A.C.A. model 18) are given in non-dimensional form. In addition to the usual curves, the results are presented in a new form that makes it possible to apply them more conveniently than in the forms previously used. The resistance was compared with that of N.A.C.A. models 11-C and 26 (Sikorsky S-40) and was found to be generally less than the resistance of either.

  11. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  12. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  13. An aerial radiological survey of the Tonopah Test Range including Clean Slate 1, 2, 3, Roller Coaster, the decontamination area, and Cactus Springs Ranch target areas, Central Nevada

    International Nuclear Information System (INIS)

    Proctor, A.E.; Hendricks, T.J.

    1995-08-01

    An aerial radiological survey was conducted of major sections of the Tonopah Test Range (TTR) in central Nevada from August through October 1993. The survey consisted of aerial measurements of both natural and man-made gamma radiation emanating from the terrestrial surface. The initial purpose of the survey was to locate depleted uranium (by detecting 238U) from projectiles which had impacted on the TTR. The examination of areas near Cactus Springs Ranch (located near the western boundary of the TTR) and an animal burial area near the Double Track site were secondary objectives. When more widespread than expected 241Am contamination was found around the Clean Slate sites, the survey was expanded to cover the area surrounding the Clean Slates and also the Double Track site. Results are reported as radiation isopleths superimposed on aerial photographs of the area.

  14. A prospective three-step intervention study to prevent medication errors in drug handling in paediatric care.

    Science.gov (United States)

    Niemann, Dorothee; Bertsche, Astrid; Meyrath, David; Koepf, Ellen D; Traiser, Carolin; Seebald, Katja; Schmitt, Claus P; Hoffmann, Georg F; Haefeli, Walter E; Bertsche, Thilo

    2015-01-01

    To prevent medication errors in drug handling in a paediatric ward. One in five preventable adverse drug events in hospitalised children is caused by medication errors. Errors in drug prescription have been studied frequently, but data regarding drug handling, including drug preparation and administration, are scarce. A three-step intervention study including a monitoring procedure was used to detect and prevent medication errors in drug handling. After approval by the ethics committee, pharmacists monitored drug handling by nurses on an 18-bed paediatric ward in a university hospital prior to and following each intervention step. They also conducted a questionnaire survey aimed at identifying knowledge deficits. Each intervention step targeted different causes of errors: the handout mainly addressed knowledge deficits, the training course addressed errors caused by rule violations and slips, and the reference book addressed knowledge-, memory- and rule-based errors. The number of patients who were subjected to at least one medication error in drug handling decreased from 38/43 (88%) to 25/51 (49%) following the third intervention, and the overall frequency of errors decreased from 527 errors in 581 processes (91%) to 116/441 (26%). The issue of the handout reduced medication errors caused by knowledge deficits regarding, for instance, the correct 'volume of solvent for IV drugs' from 49% to 25%. Paediatric drug handling is prone to errors. A three-step intervention effectively decreased the high frequency of medication errors by addressing the diversity of their causes. Worldwide, nurses are in charge of drug handling, which constitutes an error-prone but often-neglected step in drug therapy. Detection and prevention of errors in daily routine are necessary for a safe and effective drug therapy. Our three-step intervention reduced errors and is suitable to be tested in other wards and settings. © 2014 John Wiley & Sons Ltd.

  15. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  16. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Full Text Available. Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of whom 670 million are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as the Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low- and middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting the educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  17. Measurement Error, Reliability, and Minimum Detectable Change in the Mini-Mental State Examination, Montreal Cognitive Assessment, and Color Trails Test among Community Living Middle-Aged and Older Adults.

    Science.gov (United States)

    Feeney, Joanne; Savva, George M; O'Regan, Claire; King-Kallimanis, Bellinda; Cronin, Hilary; Kenny, Rose Anne

    2016-05-31

    Knowing the reliability of cognitive tests, particularly those commonly used in clinical practice, is important in order to interpret the clinical significance of a change in performance or a low score on a single test. To report the intra-class correlation (ICC), standard error of measurement (SEM) and minimum detectable change (MDC) for the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), and Color Trails Test (CTT) among community dwelling older adults. 130 participants aged 55 and older without severe cognitive impairment underwent two cognitive assessments between two and four months apart. Half the group changed rater between assessments and half changed time of day. Mean (standard deviation) MMSE was 28.1 (2.1) at baseline and 28.4 (2.1) at repeat. Mean (SD) MoCA increased from 24.8 (3.6) to 25.2 (3.6). There was a rater effect on CTT, but not on the MMSE or MoCA. The SEM of the MMSE was 1.0, leading to an MDC (based on a 95% confidence interval) of 3 points. The SEM of the MoCA was 1.5, implying an MDC95 of 4 points. MoCA (ICC = 0.81) was more reliable than MMSE (ICC = 0.75), but all tests examined showed substantial within-patient variation. An individual's score would have to change by greater than or equal to 3 points on the MMSE and 4 points on the MoCA for the rater to be confident that the change was not due to measurement error. This has important implications for epidemiologists and clinicians in dementia screening and diagnosis.
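
    The quantities reported in this abstract follow from two standard psychometric formulas: SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. The sketch below reproduces the abstract's figures from its own SDs and ICCs; the rounding conventions are our assumption, and the function names are ours.

```python
import math

def sem(sd, icc):
    """Standard error of measurement: spread of observed scores around a
    person's true score, from the test SD and its reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimum detectable change at 95% confidence: the smallest change
    that exceeds the measurement noise of two test administrations."""
    return 1.96 * math.sqrt(2.0) * sem_value
```

    With the abstract's values, the MMSE (SD 2.1, ICC 0.75) gives SEM ~ 1.05 and MDC95 ~ 2.9, matching the reported 1.0 and 3 points; the MoCA (SD 3.6, ICC 0.81) gives SEM ~ 1.57 and MDC95 ~ 4.3, consistent with the reported 1.5 and 4 points.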

  18. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  19. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  20. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  1. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  2. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between errors and violations, and between active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated.

  3. Corrective Action Investigation Plan for Corrective Action Unit 536: Area 3 Release Site, Nevada Test Site, Nevada (Rev. 0 / June 2003), Including Record of Technical Change No. 1

    Energy Technology Data Exchange (ETDEWEB)

    None

    2003-06-27

    This Corrective Action Investigation Plan contains the U.S. Department of Energy, National Nuclear Security Administration Nevada Site Office's approach to collect the data necessary to evaluate corrective action alternatives (CAAs) appropriate for the closure of Corrective Action Unit (CAU) 536: Area 3 Release Site, Nevada Test Site, Nevada, under the Federal Facility Agreement and Consent Order. Corrective Action Unit 536 consists of a single Corrective Action Site (CAS): 03-44-02, Steam Jenny Discharge. The CAU 536 site is being investigated because existing information on the nature and extent of possible contamination is insufficient to evaluate and recommend corrective action alternatives for CAS 03-44-02. The additional information will be obtained by conducting a corrective action investigation (CAI) prior to evaluating CAAs and selecting the appropriate corrective action for this CAS. The results of this field investigation are to be used to support a defensible evaluation of corrective action alternatives in the corrective action decision document. Record of Technical Change No. 1 is dated 3-2004.

  4. Human errors related to maintenance and modifications

    International Nuclear Information System (INIS)

    Laakso, K.; Pyy, P.; Reiman, L.

    1998-01-01

    about weakness in audits made by the operating organisation and in tests relating to plant operation. The number of plant-specific maintenance records used as input material was high, and the findings were discussed thoroughly with the plant maintenance personnel. The results indicated that instrumentation is more prone to human error than the rest of maintenance. Most errors stem from refuelling outage periods, and about half of them were identified during the same outage in which they were committed. Plant modifications are a significant source of common cause failures. The number of dependent errors could be reduced by improved co-ordination and auditing, post-installation checking, training, and start-up testing programmes. (orig.)

  5. Practical application of the theory of errors in measurement

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made

  6. Modelling and mitigation of soft-errors in CMOS processors

    NARCIS (Netherlands)

    Rohani, A.

    2014-01-01

    The topic of this thesis is about soft-errors in digital systems. Different aspects of soft-errors have been addressed here, including an accurate simulation model to emulate soft-errors in a gate-level net list, a simulation framework to study the impact of soft-errors in a VHDL design and an

  7. Mapping of Schistosomiasis and Soil-Transmitted Helminths in Namibia: The First Large-Scale Protocol to Formally Include Rapid Diagnostic Tests.

    Directory of Open Access Journals (Sweden)

    José Carlos Sousa-Figueiredo

    Full Text Available Namibia is now ready to begin mass drug administration of praziquantel and albendazole against schistosomiasis and soil-transmitted helminths, respectively. Although historical data identifies areas of transmission of these neglected tropical diseases (NTDs), there is a need to update epidemiological data. For this reason, Namibia adopted a new protocol for mapping of schistosomiasis and geohelminths, formally integrating rapid diagnostic tests (RDTs) for infections and morbidity. In this article, we explain the protocol in detail, and introduce the concept of 'mapping resolution', as well as present results and treatment recommendations for northern Namibia. This new protocol allowed a large sample to be surveyed (N = 17,896 children from 299 schools) at relatively low cost (7 USD per person mapped) and very quickly (28 working days). All children were analysed by RDTs, but only a sub-sample was also diagnosed by light microscopy. Overall prevalence of schistosomiasis in the surveyed areas was 9.0%, highly associated with poorer access to potable water (OR = 1.5, P<0.001) and defective (OR = 1.2, P<0.001) or absent sanitation infrastructure (OR = 2.0, P<0.001). Overall prevalence of geohelminths, more particularly hookworm infection, was 12.2%, highly associated with presence of faecal occult blood (OR = 1.9, P<0.001). Prevalence maps were produced and hot spots identified to better guide the national programme in drug administration, as well as targeted improvements in water, sanitation and hygiene. The RDTs employed (circulating cathodic antigen and microhaematuria for Schistosoma mansoni and S. haematobium, respectively) performed well, with sensitivities above 80% and specificities above 95%. This protocol is cost-effective and sensitive to budget limitations and the potential economic and logistical strains placed on the national Ministries of Health. Here we present a high resolution map of disease prevalence levels, and treatment regimens are ...

  8. Help prevent hospital errors

    Science.gov (United States)

    MedlinePlus patient instructions (medlineplus.gov/ency/patientinstructions/000618.htm) on how to help prevent hospital errors, including steps to keep yourself safe if you are having surgery ...

  9. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  10. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables
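    A quick simulation illustrates the effect; this is a generic sketch, not the MERDA moment-estimation method itself. When the data fall into many cells (scale step small relative to the weighing error), rounding inflates the observed variance by roughly step²/12, which a precision estimate must account for; with coarse grouping, as the abstract notes, this simple correction breaks down and the moment method is needed:

```python
import random
import statistics

def simulated_readings(true_mass, sigma, step, n, seed=0):
    """Weighings with Gaussian weighing error sigma, rounded to the
    nearest scale step (the data grouping discussed in the abstract)."""
    rng = random.Random(seed)
    return [round((true_mass + rng.gauss(0.0, sigma)) / step) * step
            for _ in range(n)]

sigma, step = 0.10, 0.05   # fine grouping: step is small relative to sigma
fine = simulated_readings(100.0, sigma, step, 20000)
var_fine = statistics.pvariance(fine)

# Observed variance is close to sigma**2 plus the rounding term step**2/12.
print(var_fine, sigma ** 2 + step ** 2 / 12)
```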

  11. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills of its customers. It appeared that in the year 2000 many errors, both small and large, were discovered in the bills of 42 businesses.

  12. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  13. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  14. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  15. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  16. Coherent error study in a retarding field energy analyzer

    International Nuclear Information System (INIS)

    Cui, Y.; Zou, Y.; Reiser, M.; Kishek, R.A.; Haber, I.; Bernal, S.; O'Shea, P.G.

    2005-01-01

    A novel cylindrical retarding electrostatic field energy analyzer for low-energy beams has been designed, simulated, and tested with electron beams of several keV, in which space charge effects play an important role. A cylindrical focusing electrode is used to overcome the beam expansion inside the device due to space-charge forces, beam emittance, etc. In this paper, we present the coherent error analysis for this energy analyzer with the beam envelope equation, including space charge and emittance effects. The study shows that this energy analyzer can achieve very high resolution (with a relative error of around 10^-5) if the coherent errors are removed by applying proper focusing voltages. The theoretical analysis is compared with experimental results.

  17. Evolutionary modeling-based approach for model errors correction

    Directory of Open Access Journals (Sweden)

    S. Q. Wan

    2012-08-01

    Full Text Available The inverse problem of using the information of historical data to estimate model errors is one of the science frontier research topics. In this study, we investigate such a problem using the classic Lorenz (1963) equation as a prediction model and the Lorenz equation with a periodic evolutionary function as an accurate representation of reality to generate "observational data."

    On the basis of the intelligent features of evolutionary modeling (EM, including self-organization, self-adaptive and self-learning, the dynamic information contained in the historical data can be identified and extracted by computer automatically. Thereby, a new approach is proposed to estimate model errors based on EM in the present paper. Numerical tests demonstrate the ability of the new approach to correct model structural errors. In fact, it can actualize the combination of the statistics and dynamics to certain extent.
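    The twin-experiment setup described above can be sketched as follows: the classic Lorenz (1963) system serves as the forecast model, while the same system with a small periodic term added to the x-equation stands in for "reality". The forcing amplitude and frequency here are illustrative choices, not the authors' values; the model error is the tendency difference the study tries to recover from historical data:

```python
import math

def lorenz_rhs(state, t, forcing=0.0):
    """Lorenz-63 tendencies, with an optional periodic forcing on x."""
    x, y, z = state
    dx = 10.0 * (y - x) + forcing * math.sin(0.5 * t)
    dy = x * (28.0 - z) - y
    dz = x * y - (8.0 / 3.0) * z
    return dx, dy, dz

def step_rk4(state, t, dt, forcing=0.0):
    """One fourth-order Runge-Kutta step."""
    def add(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz_rhs(state, t, forcing)
    k2 = lorenz_rhs(add(state, k1, dt / 2), t + dt / 2, forcing)
    k3 = lorenz_rhs(add(state, k2, dt / 2), t + dt / 2, forcing)
    k4 = lorenz_rhs(add(state, k3, dt), t + dt, forcing)
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# "Observations" come from the forced system; the model omits the forcing.
truth = model = (1.0, 1.0, 1.0)
t, dt = 0.0, 0.01
errors = []
for _ in range(1000):
    truth = step_rk4(truth, t, dt, forcing=2.0)
    model = step_rk4(model, t, dt, forcing=0.0)
    t += dt
    errors.append(abs(truth[0] - model[0]))
print(errors[-1])  # forecast error in x after 10 time units
```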

  18. Medication errors in anesthesia: unacceptable or unavoidable?

    Directory of Open Access Journals (Sweden)

    Ira Dhawan

    Full Text Available Abstract Medication errors are common causes of patient morbidity and mortality, and they add a financial burden to the institution as well. Though the impact varies from no harm to serious adverse effects including death, medication errors need attention on a priority basis, since they are preventable. In today's world, where people are aware and medical claims are on the rise, it is of utmost priority that we curb this issue. Individual effort to decrease medication errors alone might not be successful until a change in the existing protocols and system is incorporated. Often drug errors that occur cannot be reversed. The best way to 'treat' drug errors is to prevent them. Wrong medication (due to syringe swap), overdose (due to misunderstanding or preconception of the dose, pump misuse and dilution error), incorrect administration route, underdosing and omission are common causes of medication error that occur perioperatively. Drug omission and calculation mistakes occur commonly in the ICU. Medication errors can occur perioperatively during preparation, administration or record keeping. Numerous human and system errors can be blamed for the occurrence of medication errors. The need of the hour is to stop the blame game, accept mistakes and develop a safe and 'just' culture in order to prevent medication errors. Newly devised systems like VEINROM, a fluid delivery system, are a novel approach to preventing drug errors due to the most commonly used medications in anesthesia. Such developments, along with vigilant doctors, a safe workplace culture and organizational support, can together help prevent these errors.

  19. Medication Errors: New EU Good Practice Guide on Risk Minimisation and Error Prevention.

    Science.gov (United States)

    Goedecke, Thomas; Ord, Kathryn; Newbould, Victoria; Brosch, Sabine; Arlett, Peter

    2016-06-01

    A medication error is an unintended failure in the drug treatment process that leads to, or has the potential to lead to, harm to the patient. Reducing the risk of medication errors is a shared responsibility between patients, healthcare professionals, regulators and the pharmaceutical industry at all levels of healthcare delivery. In 2015, the EU regulatory network released a two-part good practice guide on medication errors to support both the pharmaceutical industry and regulators in the implementation of the changes introduced with the EU pharmacovigilance legislation. These changes included a modification of the 'adverse reaction' definition to include events associated with medication errors, and the requirement for national competent authorities responsible for pharmacovigilance in EU Member States to collaborate and exchange information on medication errors resulting in harm with national patient safety organisations. To facilitate reporting and learning from medication errors, a clear distinction has been made in the guidance between medication errors resulting in adverse reactions, medication errors without harm, intercepted medication errors and potential errors. This distinction is supported by an enhanced MedDRA® terminology that allows for coding all stages of the medication use process where the error occurred in addition to any clinical consequences. To better understand the causes and contributing factors, individual case safety reports involving an error should be followed up with the primary reporter to gather information relevant for the conduct of root cause analysis where this may be appropriate. Such reports should also be summarised in periodic safety update reports and addressed in risk management plans. Any risk minimisation and prevention strategy for medication errors should consider all stages of a medicinal product's life-cycle, particularly the main sources and types of medication errors during product development. This article

  20. Screening for Inborn Errors of Metabolism

    Directory of Open Access Journals (Sweden)

    F.A. Elshaari

    2013-09-01

    Full Text Available Inborn errors of metabolism (IEM) are a heterogeneous group of monogenic diseases that affect the metabolic pathways. The detection of IEM relies on a high index of clinical suspicion and co-ordinated access to specialized laboratory services. Biochemical analysis forms the basis of the final confirmed diagnosis in several of these disorders. The investigations fall into four main categories: (1) general metabolic screening tests; (2) specific metabolite assays; (3) enzyme studies; (4) DNA analysis. The first approach to the diagnosis is by a multi-component analysis of body fluids in clinically selected patients, referred to as metabolic screening tests. These include simple chemical tests in the urine, blood glucose, acid-base profile, lactate, ammonia and liver function tests. The results of these tests can help to suggest known groups of metabolic disorders so that specific metabolites such as amino acids, organic acids, etc. can be estimated. However, not all IEM need the approach of general screening. Lysosomal, peroxisomal, thyroid and adrenal disorders are suspected mainly on clinical grounds and pertinent diagnostic tests can be performed. The final diagnosis relies on the demonstration of the specific enzyme defect, which can be further confirmed by DNA studies.

  1. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available “Errare humanum est” is a well-known and widespread Latin proverb stating that to err is human: people make mistakes all the time. What counts, however, is that people learn from their mistakes. On these grounds Steve Jobs stated: “Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations.” Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reasons for making them, improve and move on. The significance of studying errors is described by Corder: “There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition” (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  2. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
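    The burst and gap statistics described above reduce to run lengths over a per-byte error flag stream. A minimal sketch of that measurement (generic, not the project's actual hardware interface):

```python
from itertools import groupby

def burst_gap_stats(flags):
    """Run-length statistics of a read channel: `flags` is a per-byte
    sequence where True marks an errored byte. Returns the lists of
    error-burst lengths and good-data gap lengths."""
    bursts, gaps = [], []
    for errored, run in groupby(flags):
        (bursts if errored else gaps).append(sum(1 for _ in run))
    return bursts, gaps

flags = [False] * 5 + [True] * 3 + [False] * 2 + [True] * 1 + [False] * 4
print(burst_gap_stats(flags))  # ([3, 1], [5, 2, 4])
```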

  3. Automatic error compensation in dc amplifiers

    International Nuclear Information System (INIS)

    Longden, L.L.

    1976-01-01

    When operational amplifiers are exposed to high levels of neutron fluence or total ionizing dose, significant changes may be observed in input voltages and currents. These changes may produce large errors at the output of direct-coupled amplifier stages. Therefore, the need exists for automatic compensation techniques. However, previously introduced techniques compensate only for errors in the main amplifier and neglect the errors induced by the compensating circuitry. In this paper, the techniques introduced compensate not only for errors in the main operational amplifier, but also for errors induced by the compensation circuitry. Included in the paper is a theoretical analysis of each compensation technique, along with advantages and disadvantages of each. Important design criteria and information necessary for proper selection of semiconductor switches will also be included. Introduced in this paper will be compensation circuitry for both resistive and capacitive feedback networks

  4. The benefit of generating errors during learning.

    Science.gov (United States)

    Potts, Rosalind; Shanks, David R

    2014-04-01

    Testing has been found to be a powerful learning tool, but educators might be reluctant to make full use of its benefits for fear that any errors made would be harmful to learning. We asked whether testing could be beneficial to memory even during novel learning, when nearly all responses were errors, and where errors were unlikely to be related to either cues or targets. In 4 experiments, participants learned definitions for unfamiliar English words, or translations for foreign vocabulary, by generating a response and being given corrective feedback, by reading the word and its definition or translation, or by selecting from a choice of definitions or translations followed by feedback. In a final test of all words, generating errors followed by feedback led to significantly better memory for the correct definition or translation than either reading or making incorrect choices, suggesting that the benefits of generation are not restricted to correctly generated items. Even when information to be learned is novel, errorful generation may play a powerful role in potentiating encoding of corrective feedback. Experiments 2A, 2B, and 3 revealed, via metacognitive judgments of learning, that participants are strikingly unaware of this benefit, judging errorful generation to be a less effective encoding method than reading or incorrect choosing, when in fact it was better. Predictions reflected participants' subjective experience during learning. If subjective difficulty leads to more effort at encoding, this could at least partly explain the errorful generation advantage.

  5. Identifying afterloading PDR and HDR brachytherapy errors using real-time fiber-coupled Al2O3:C dosimetry and a novel statistical error decision criterion

    International Nuclear Information System (INIS)

    Kertzscher, Gustavo; Andersen, Claus E.; Siebert, Frank-Andre; Nielsen, Soren Kynde; Lindegaard, Jacob C.; Tanderup, Kari

    2011-01-01

    Background and purpose: The feasibility of a real-time in vivo dosimeter to detect errors has previously been demonstrated. The purpose of this study was to: (1) quantify the sensitivity of the dosimeter to detect imposed treatment errors under well controlled and clinically relevant experimental conditions, and (2) test a new statistical error decision concept based on full uncertainty analysis. Materials and methods: Phantom studies of two gynecological cancer PDR and one prostate cancer HDR patient treatment plans were performed using tandem ring applicators or interstitial needles. Imposed treatment errors, including interchanged pairs of afterloader guide tubes and 2-20 mm source displacements, were monitored using a real-time fiber-coupled carbon doped aluminum oxide (Al2O3:C) crystal dosimeter that was positioned in the reconstructed tumor region. The error detection capacity was evaluated at three dose levels: dwell position, source channel, and fraction. The error criterion incorporated the correlated source position uncertainties and other sources of uncertainty, and it was applied both for the specific phantom patient plans and for a general case (source-detector distance 5-90 mm and position uncertainty 1-4 mm). Results: Out of 20 interchanged guide tube errors, time-resolved analysis identified 17 while fraction level analysis identified two. Channel and fraction level comparisons could leave 10 mm dosimeter displacement errors unidentified. Dwell position dose rate comparisons correctly identified displacements ≥5 mm. Conclusion: This phantom study demonstrates that Al2O3:C real-time dosimetry can identify applicator displacements ≥5 mm and interchanged guide tube errors during PDR and HDR brachytherapy. The study demonstrates the shortcoming of a constant error criterion and the advantage of a statistical error criterion.
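    A statistical decision criterion of this general kind can be sketched as a k-sigma test on combined relative uncertainties. This is a generic illustration only; the paper's criterion additionally propagates correlated source-position uncertainty, which is omitted here, and all numbers are made up:

```python
import math

def error_flagged(measured, expected, rel_u_measured, rel_u_expected, k=3.0):
    """Flag a dwell position if measured and expected dose rate disagree
    by more than k combined standard uncertainties."""
    u = math.hypot(rel_u_measured * measured, rel_u_expected * expected)
    return abs(measured - expected) > k * u

# A large source displacement near the detector changes the dose rate
# strongly (inverse-square falloff), so the deviation exceeds the band:
print(error_flagged(measured=0.72, expected=1.00,
                    rel_u_measured=0.05, rel_u_expected=0.03))  # True
```

A constant threshold, by contrast, either over-flags distant dwell positions (where the relative uncertainty is large) or misses errors close to the detector, which is the shortcoming the study demonstrates.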

  6. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.
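    The SS7 level-2 signal unit error rate monitor analyzed here is essentially a leaky-bucket counter. A sketch with the commonly cited Q.703 parameters (threshold 64, leak every 256 signal units) follows; the parameter values are assumptions to be checked against the standard:

```python
def suerm_fails(signal_units, threshold=64, decrement_interval=256):
    """Signal unit error rate monitor in the style of SS7 level 2 (Q.703):
    a leaky-bucket counter that increments on each errored signal unit,
    leaks by one every `decrement_interval` received signal units, and
    declares the link failed when it reaches `threshold`.
    `signal_units` is an iterable of booleans (True = errored)."""
    counter = 0
    since_leak = 0
    for errored in signal_units:
        if errored:
            counter += 1
            if counter >= threshold:
                return True  # changeover: take the link out of service
        since_leak += 1
        if since_leak == decrement_interval:
            since_leak = 0
            counter = max(0, counter - 1)
    return False

# A sustained error rate just above the leak rate of 1/256 eventually
# trips the monitor, which is the oscillation region the paper examines.
print(suerm_fails([True] * 64))  # True: a 64-SU burst trips it at once
```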

  7. Common patterns in 558 diagnostic radiology errors.

    Science.gov (United States)

    Donald, Jennifer J; Barnard, Stuart A

    2012-04-01

    As a Quality Improvement initiative our department has held regular discrepancy meetings since 2003. We performed a retrospective analysis of the cases presented and identified the most common pattern of error. A total of 558 cases were referred for discussion over 92 months, and errors were classified as perceptual or interpretative. The most common patterns of error for each imaging modality were analysed, and the misses were scored by consensus as subtle or non-subtle. Of 558 diagnostic errors, 447 (80%) were perceptual and 111 (20%) were interpretative errors. Plain radiography and computed tomography (CT) scans were the most frequent imaging modalities accounting for 246 (44%) and 241 (43%) of the total number of errors, respectively. In the plain radiography group 120 (49%) of the errors occurred in chest X-ray reports with perceptual miss of a lung nodule occurring in 40% of this subgroup. In the axial and appendicular skeleton missed fractures occurred most frequently, and metastatic bone disease was overlooked in 12 of 50 plain X-rays of the pelvis or spine. The majority of errors within the CT group were in reports of body scans with the commonest perceptual errors identified including 16 missed significant bone lesions, 14 cases of thromboembolic disease and 14 gastrointestinal tumours. Of the 558 errors, 312 (56%) were considered subtle and 246 (44%) non-subtle. Diagnostic errors are not uncommon and are most frequently perceptual in nature. Identification of the most common patterns of error has the potential to improve the quality of reporting by improving the search behaviour of radiologists. © 2012 The Authors. Journal of Medical Imaging and Radiation Oncology © 2012 The Royal Australian and New Zealand College of Radiologists.

  8. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    This article offers a defense of libertarianism against two accusations that it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis of the possibility of human freedom in physicalist indeterminism cannot necessarily be accused of committing them.

  9. Libertarismo & Error Categorial

    OpenAIRE

    PATARROYO G, CARLOS G

    2009-01-01

    This article offers a defense of libertarianism against two accusations that it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, although certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks in physicalist indeterminism the basis of the possibi...

  10. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  11. Math Error Types and Correlates in Adolescents with and without Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Capodieci, Agnese; Martinussen, Rhonda

    2017-01-01

    Objective: The aim of this study was to examine the types of errors made by youth with and without a parent-reported diagnosis of attention deficit and hyperactivity disorder (ADHD) on a math fluency task and investigate the association between error types and youths' performance on measures of processing speed and working memory. Method: Participants included 30 adolescents with ADHD and 39 typically developing peers between 14 and 17 years old, matched in age and IQ. All youth completed standardized measures of math calculation and fluency as well as two tests of working memory and processing speed. Math fluency error patterns were examined. Results: Adolescents with ADHD showed less proficient math fluency despite having similar math calculation scores as their peers. Group differences were also observed in error types, with youth with ADHD making more switch errors than their peers. Conclusion: This research has important clinical applications for the assessment of, and intervention in, math ability in students with ADHD.

  12. The Relationship of Error Rate and Comprehension in Second and Third Grade Oral Reading Fluency.

    Science.gov (United States)

    Abbott, Mary; Wills, Howard; Miller, Angela; Kaufman, Journ

    2012-01-01

    This study explored the relationships of oral reading speed and error rate on comprehension with second and third grade students with identified reading risk. The study included 920 2nd graders and 974 3rd graders. Participants were assessed using Dynamic Indicators of Basic Early Literacy Skills (DIBELS) and the Woodcock Reading Mastery Test (WRMT) Passage Comprehension subtest. Results from this study further illuminate the significant relationships between error rate, oral reading fluency, and reading comprehension performance, and suggest grade-specific guidelines for appropriate error rate levels. Low oral reading fluency and high error rates predict the level of passage comprehension performance. For second grade students below benchmark, a fall assessment error rate of 28% predicts that student comprehension performance will be below average. For third grade students below benchmark, the fall assessment cut point is 14%. Instructional implications of the findings are discussed.

  13. Technical errors in MR arthrography

    International Nuclear Information System (INIS)

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  14. Technical errors in MR arthrography

    Energy Technology Data Exchange (ETDEWEB)

    Hodler, Juerg [Orthopaedic University Hospital of Balgrist, Radiology, Zurich (Switzerland)

    2008-01-15

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  15. Negligence, genuine error, and litigation

    Science.gov (United States)

    Sohn, David H

    2013-01-01

    Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. PMID:23426783

  16. Repeated speech errors: evidence for learning.

    Science.gov (United States)

    Humphreys, Karin R; Menzies, Heather; Lake, Johanna K

    2010-11-01

    Three experiments elicited phonological speech errors using the SLIP procedure to investigate whether there is a tendency for speech errors on specific words to reoccur, and whether this effect can be attributed to implicit learning of an incorrect mapping from lemma to phonology for that word. In Experiment 1, when speakers made a phonological speech error in the study phase of the experiment (e.g. saying "beg pet" in place of "peg bet") they were over four times as likely to make an error on that same item several minutes later at test. A pseudo-error condition demonstrated that the effect is not simply due to a propensity for speakers to repeat phonological forms, regardless of whether or not they have been made in error. That is, saying "beg pet" correctly at study did not induce speakers to say "beg pet" in error instead of "peg bet" at test. Instead, the effect appeared to be due to learning of the error pathway. Experiment 2 replicated this finding, but also showed that after 48 h, errors made at study were no longer more likely to reoccur. As well as providing constraints on the longevity of the effect, this provides strong evidence that the error reoccurrences observed are not due to item-specific difficulty that leads individual speakers to make habitual mistakes on certain items. Experiment 3 showed that the diminishment of the effect 48 h later is not due to specific extra practice at the task. We discuss how these results fit in with a larger view of language as a dynamic system that is constantly adapting in response to experience.

  17. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    The study of errors in the health system today is a topic of considerable interest, aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually "near errors," but these could be a future indicator of therapeutic reality and of the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communication, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  18. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  19. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ... practical codes, storing such a table is infeasible, as it is generally too large.

  20. Error Correcting Codes

    Indian Academy of Sciences (India)

    Home; Journals; Resonance – Journal of Science Education; Volume 2; Issue 3. Error Correcting Codes - Reed Solomon Codes. Priti Shankar. Series Article Volume 2 Issue 3 March ... Author Affiliations. Priti Shankar1. Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India ...

  1. What we can learn from naming errors of children with language impairment at preschool age.

    Science.gov (United States)

    Biran, Michal; Novogrodsky, Rama; Harel-Nov, Efrat; Gil, Mali; Mimouni-Bloch, Aviva

    2018-01-01

    Naming is a complex, multi-level process. It is composed of distinct semantic and phonological levels. Children with naming deficits produce different error types when failing to retrieve the target word. This study explored the error characteristics of children with language impairment compared to those with typical language development. 46 preschool children were tested on a naming test: 16 with language impairment and a naming deficit and 30 with typical language development. The analysis compared types of error in both groups. At the group level, children with language impairment produced different error patterns compared to the control group. Based on naming error analysis and performance on other language tests, two case studies of contrasting profiles suggest different sources for lexical retrieval difficulties in children. The findings reveal differences between the two groups in naming scores and naming errors, and support a qualitative impairment in early development of children with naming deficits. The differing profiles of naming deficits emphasise the importance of including error analysis in the diagnosis.

  2. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention-lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  3. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and their taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication and resource/task management, an excessive authority gradient, and excessive professional courtesy can cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  4. Validation of Metrics as Error Predictors

    Science.gov (United States)

    Mendling, Jan

    In this chapter, we test the validity of metrics that were defined in the previous chapter for predicting errors in EPC business process models. In Section 5.1, we provide an overview of how the analysis data is generated. Section 5.2 describes the sample of EPCs from practice that we use for the analysis. Here we discuss a disaggregation by the EPC model group and by error as well as a correlation analysis between metrics and error. Based on this sample, we calculate a logistic regression model for predicting error probability with the metrics as input variables in Section 5.3. In Section 5.4, we then test the regression function for an independent sample of EPC models from textbooks as a cross-validation. Section 5.5 summarizes the findings.
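    The regression step described above maps metric values to an error probability through the logistic function. A minimal sketch follows; the metric names and coefficient values are invented placeholders for illustration, not Mendling's actual estimates:

```python
import math

def error_probability(metrics, intercept, coefficients):
    """Predicted probability that a process model contains an error,
    given a dict of metric values and fitted logistic-regression
    coefficients (both hypothetical here)."""
    z = intercept + sum(coefficients[name] * value
                        for name, value in metrics.items())
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fit: larger, more densely connected models
# are assumed to be more error-prone.
coef = {"size": 0.05, "connectivity": 0.8}
p_small = error_probability({"size": 10, "connectivity": 0.3},
                            intercept=-3.0, coefficients=coef)
p_large = error_probability({"size": 80, "connectivity": 0.9},
                            intercept=-3.0, coefficients=coef)
```

Cross-validation on an independent sample, as in Section 5.4, then checks whether such a fitted function generalizes beyond the models it was estimated on.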

  5. Notes on human error analysis and prediction

    International Nuclear Information System (INIS)

    Rasmussen, J.

    1978-11-01

    The notes comprise an introductory discussion of the role of human error analysis and prediction in industrial risk analysis. Following this introduction, different classes of human errors and their roles in industrial systems are described. Problems related to the prediction of human behaviour in reliability and safety analysis are formulated, and "criteria for analyzability" which must be met by industrial systems so that a systematic analysis can be performed are suggested. The appendices contain illustrative case stories and a review of human error reports for the task of equipment calibration and testing as found in the US Licensee Event Reports. (author)

  6. Medication Error, What Is the Reason?

    Directory of Open Access Journals (Sweden)

    Ali Banaozar Mohammadi

    2015-09-01

    Background: Medication errors arising for different reasons may alter the outcome of all patients, especially patients with drug poisoning. We introduce one of the most common types of medication error in the present article. Case: A 48-year-old woman with suspected organophosphate poisoning died as the result of a lethal medication error. Unfortunately, these types of errors are not rare and have preventable causes, including a lack of suitable and sufficient training and practice for medical students and failures in the medical curriculum. Conclusion: Some important causes are discussed here because their consequences can be tremendous. We found that most of them are easily preventable. If clinicians are aware of the method of use, complications, dosage, and contraindications of drugs, most of these fatal errors can be minimized.

  7. A qualitative description of human error

    International Nuclear Information System (INIS)

    Li Zhaohuan

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation. Insight and an analytical model are the main parts of human reliability analysis, which consists of the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. On an operating reactor, human error is defined as a task-human-machine mismatch. A human error event focuses on the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operation is divided into time-limited and time-open. The HCR (human cognitive reliability) model is suited only for the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action. A human erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational and environmental factors, are also listed

  8. A qualitative description of human error

    Energy Technology Data Exchange (ETDEWEB)

    Zhaohuan, Li [Academia Sinica, Beijing, BJ (China). Inst. of Atomic Energy

    1992-11-01

    Human error makes an important contribution to the risk of reactor operation. Insight and an analytical model are the main parts of human reliability analysis, which consists of the concept of human error, its nature, the mechanism of its generation, its classification, and human performance influence factors. On an operating reactor, human error is defined as a task-human-machine mismatch. A human error event focuses on the erroneous action and its unfavorable result. Based on the time limitation for performing a task, operation is divided into time-limited and time-open. The HCR (human cognitive reliability) model is suited only for the time-limited case. The basic cognitive process consists of information gathering, cognition/thinking, decision making and action. A human erroneous action may be generated at any stage of this process. More natural ways to classify human errors are presented. Human performance influence factors, including personal, organizational and environmental factors, are also listed.

  9. A memory of errors in sensorimotor learning.

    Science.gov (United States)

    Herzfeld, David J; Vaswani, Pavan A; Marko, Mollie K; Shadmehr, Reza

    2014-09-12

    The current view of motor learning suggests that when we revisit a task, the brain recalls the motor commands it previously learned. In this view, motor memory is a memory of motor commands, acquired through trial-and-error and reinforcement. Here we show that the brain controls how much it is willing to learn from the current error through a principled mechanism that depends on the history of past errors. This suggests that the brain stores a previously unknown form of memory, a memory of errors. A mathematical formulation of this idea provides insights into a host of puzzling experimental data, including savings and meta-learning, demonstrating that when we are better at a motor task, it is partly because the brain recognizes the errors it experienced before.

  10. Passport officers' errors in face matching.

    Science.gov (United States)

    White, David; Kemp, Richard I; Jenkins, Rob; Matheson, Michael; Burton, A Mike

    2014-01-01

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  11. Passport officers' errors in face matching.

    Directory of Open Access Journals (Sweden)

    David White

    Photo-ID is widely used in security settings, despite research showing that viewers find it very difficult to match unfamiliar faces. Here we test participants with specialist experience and training in the task: passport-issuing officers. First, we ask officers to compare photos to live ID-card bearers, and observe high error rates, including 14% false acceptance of 'fraudulent' photos. Second, we compare passport officers with a set of student participants, and find equally poor levels of accuracy in both groups. Finally, we observe that passport officers show no performance advantage over the general population on a standardised face-matching task. Across all tasks, we observe very large individual differences: while average performance of passport staff was poor, some officers performed very accurately--though this was not related to length of experience or training. We propose that improvements in security could be made by emphasising personnel selection.

  12. Soft error mechanisms, modeling and mitigation

    CERN Document Server

    Sayil, Selahattin

    2016-01-01

    This book introduces readers to various radiation soft-error mechanisms such as soft delays, radiation induced clock jitter and pulses, and single event (SE) coupling induced effects. In addition to discussing various radiation hardening techniques for combinational logic, the author also describes new mitigation strategies targeting commercial designs. Coverage includes novel soft error mitigation techniques such as the Dynamic Threshold Technique and Soft Error Filtering based on Transmission gate with varied gate and body bias. The discussion also includes modeling of SE crosstalk noise, delay and speed-up effects. Various mitigation strategies to eliminate SE coupling effects are also introduced. Coverage also includes the reliability of low power energy-efficient designs and the impact of leakage power consumption optimizations on soft error robustness. The author presents an analysis of various power optimization techniques, enabling readers to make design choices that reduce static power consumption an...

  13. FMEA: a model for reducing medical errors.

    Science.gov (United States)

    Chiozza, Maria Laura; Ponzetti, Clemente

    2009-06-01

    Patient safety is a management issue, in view of the fact that clinical risk management has become an important part of hospital management. Failure Mode and Effect Analysis (FMEA) is a proactive technique for error detection and reduction, firstly introduced within the aerospace industry in the 1960s. Early applications in the health care industry dating back to the 1990s included critical systems in the development and manufacture of drugs and in the prevention of medication errors in hospitals. In 2008, the Technical Committee of the International Organization for Standardization (ISO), licensed a technical specification for medical laboratories suggesting FMEA as a method for prospective risk analysis of high-risk processes. Here we describe the main steps of the FMEA process and review data available on the application of this technique to laboratory medicine. A significant reduction of the risk priority number (RPN) was obtained when applying FMEA to blood cross-matching, to clinical chemistry analytes, as well as to point-of-care testing (POCT).
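    The risk priority number (RPN) reduced in the studies above is conventionally the product of three ratings, each scored on a 1-10 scale: severity of the effect, occurrence of the cause, and (lack of) detectability. A minimal sketch, with made-up failure modes and scores standing in for what a real review team would assign:

```python
def rpn(severity, occurrence, detection):
    """Risk Priority Number: the product of three 1-10 FMEA ratings."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("each rating must be in 1..10")
    return severity * occurrence * detection

# Hypothetical failure modes in a laboratory workflow,
# with illustrative (not real) team scores.
modes = {
    "specimen mislabelled":       rpn(9, 3, 4),
    "wrong tube type collected":  rpn(5, 4, 2),
    "result transcription error": rpn(7, 2, 6),
}
ranked = sorted(modes, key=modes.get, reverse=True)  # highest risk first
```

Ranking failure modes by RPN tells the team where corrective actions should go first; re-scoring after an intervention is what produces the "significant reduction of the RPN" the abstract reports.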

  14. A theory of cross-validation error

    OpenAIRE

    Turney, Peter D.

    1994-01-01

    This paper presents a theory of error in cross-validation testing of algorithms for predicting real-valued attributes. The theory justifies the claim that predicting real-valued attributes requires balancing the conflicting demands of simplicity and accuracy. Furthermore, the theory indicates precisely how these conflicting demands must be balanced, in order to minimize cross-validation error. A general theory is presented, then it is developed in detail for linear regression and instance-bas...
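    The quantity the theory analyzes, cross-validation error, can be estimated generically with a k-fold procedure: hold out each fold in turn, fit on the rest, and average the held-out squared error. This sketch is a standard formulation, not Turney's specific one:

```python
def kfold_cv_error(xs, ys, fit, predict, k=5):
    """Mean squared cross-validation error of a model over k folds."""
    n = len(xs)
    fold_errors = []
    for fold in range(k):
        test_idx = set(range(fold, n, k))  # every k-th point held out
        train_x = [x for i, x in enumerate(xs) if i not in test_idx]
        train_y = [y for i, y in enumerate(ys) if i not in test_idx]
        model = fit(train_x, train_y)
        errs = [(predict(model, xs[i]) - ys[i]) ** 2
                for i in sorted(test_idx)]
        fold_errors.append(sum(errs) / len(errs))
    return sum(fold_errors) / k

# Usage with the simplest possible model: predict the training mean.
fit_mean = lambda xs, ys: sum(ys) / len(ys)
predict_mean = lambda model, x: model
```

Comparing this estimate across models of increasing complexity exposes the simplicity/accuracy trade-off the abstract describes: too simple underfits every fold, too complex fits the training folds but not the held-out one.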

  15. Spectrum of diagnostic errors in radiology.

    Science.gov (United States)

    Pinto, Antonio; Brunese, Luca

    2010-10-28

    Diagnostic errors are important in all branches of medicine because they are an indication of poor patient care. Since the early 1970s, physicians have been subjected to an increasing number of medical malpractice claims. Radiology is one of the specialties most liable to claims of medical negligence. Most often, a plaintiff's complaint against a radiologist will focus on a failure to diagnose. The etiology of radiological error is multi-factorial. Errors fall into recurrent patterns. Errors arise from poor technique, failures of perception, lack of knowledge and misjudgments. The work of diagnostic radiology consists of the complete detection of all abnormalities in an imaging examination and their accurate diagnosis. Every radiologist should understand the sources of error in diagnostic radiology as well as the elements of negligence that form the basis of malpractice litigation. Error traps need to be uncovered and highlighted, in order to prevent repetition of the same mistakes. This article focuses on the spectrum of diagnostic errors in radiology, including a classification of the errors, and stresses the malpractice issues in mammography, chest radiology and obstetric sonography. Missed fractures in emergency and communication issues between radiologists and physicians are also discussed.

  16. Compensating for Type-I Errors in Video Quality Assessment

    DEFF Research Database (Denmark)

    Brunnström, Kjell; Tavakoli, Samira; Søgaard, Jacob

    2015-01-01

    This paper analyzes the impact on compensating for Type-I errors in video quality assessment. A Type-I error is to incorrectly conclude that there is an effect. The risk increases with the number of comparisons that are performed in statistical tests. Type-I errors are an issue often neglected...
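    Compensating for the inflated Type-I risk of many pairwise comparisons is typically done with a family-wise correction such as Bonferroni or the less conservative Holm step-down procedure. A sketch of Holm's method follows; the p-values in the test are illustrative, not taken from the paper:

```python
def holm_reject(p_values, alpha=0.05):
    """Holm step-down procedure: returns, for each hypothesis, whether
    it can be rejected while controlling the family-wise Type-I error
    rate at alpha."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])  # ascending p
    reject = [False] * m
    for rank, i in enumerate(order):
        if p_values[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # once one test fails, all larger p-values fail too
    return reject
```

With four comparisons, the smallest p-value is tested against alpha/4, the next against alpha/3, and so on, which is why the number of comparisons in a subjective quality experiment directly determines how strong each individual effect must be.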

  17. Apparently conclusive meta-analyses may be inconclusive--Trial sequential analysis adjustment of random error risk due to repetitive testing of accumulating data in apparently conclusive neonatal meta-analyses

    DEFF Research Database (Denmark)

    Brok, Jesper; Thorlund, Kristian; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Random error may cause misleading evidence in meta-analyses. The required number of participants in a meta-analysis (i.e. information size) should be at least as large as an adequately powered single trial. Trial sequential analysis (TSA) may reduce the risk of random errors due to repetitive testing of accumulating data.

  18. Reduction in pediatric identification band errors: a quality collaborative.

    Science.gov (United States)

    Phillips, Shannon Connor; Saysana, Michele; Worley, Sarah; Hain, Paul D

    2012-06-01

    Accurate and consistent placement of a patient identification (ID) band is used in health care to reduce errors associated with patient misidentification. Multiple safety organizations have devoted time and energy to improving patient ID, but no multicenter improvement collaboratives have shown scalability of previously successful interventions. We hoped to reduce by half the pediatric patient ID band error rate, defined as absent, illegible, or inaccurate ID band, across a quality improvement learning collaborative of hospitals in 1 year. On the basis of a previously successful single-site intervention, we conducted a self-selected 6-site collaborative to reduce ID band errors in heterogeneous pediatric hospital settings. The collaborative had 3 phases: preparatory work and employee survey of current practice and barriers, data collection (ID band failure rate), and intervention driven by data and collaborative learning to accelerate change. The collaborative audited 11377 patients for ID band errors between September 2009 and September 2010. The ID band failure rate decreased from 17% to 4.1% (77% relative reduction). Interventions including education of frontline staff regarding correct ID bands as a safety strategy; a change to softer ID bands, including "luggage tag" type ID bands for some patients; and partnering with families and patients through education were applied at all institutions. Over 13 months, a collaborative of pediatric institutions significantly reduced the ID band failure rate. This quality improvement learning collaborative demonstrates that safety improvements tested in a single institution can be disseminated to improve quality of care across large populations of children.

  19. (How) do we learn from errors? A prospective study of the link between the ward's learning practices and medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, A; Somech, A; Admi, H; Peterfreund, I; Peker, H; Priente, O

    2014-03-01

    Attention in the ward should shift from preventing medication administration errors to managing them. Nevertheless, little is known about the practices nursing wards apply to learn from medication administration errors as a means of limiting them. To test the effectiveness of four types of learning practices, namely non-integrated, integrated, supervisory and patchy learning practices, in limiting medication administration errors. Data were collected from a convenience sample of 4 hospitals in Israel by multiple methods (observations and self-report questionnaires) at two time points. The sample included 76 wards (360 nurses). Medication administration error was defined as any deviation from prescribed medication processes and measured by a validated structured observation sheet. Wards' use of medication administration technologies, location of the medication station, and workload were observed; learning practices and demographics were measured by validated questionnaires. Results of the mixed linear model analysis indicated that the use of technology and a quiet location of the medication cabinet were significantly associated with reduced medication administration errors (estimate = .03 and estimate = .04, respectively, p < .05). Of the learning practices, supervisory learning was the only practice significantly linked to reduced medication administration errors (estimate = -.04, p < .05); integrated and non-integrated learning were significantly linked to higher levels of medication administration errors (estimate = -.03, p < .05), whereas patchy learning was not associated with them (p > .05). How wards manage errors might have implications for medication administration errors beyond the effects of typical individual, organizational and technology risk factors. The head nurse can facilitate learning from errors by "management by walking around" and monitoring nurses' medication administration behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  1. Double positivity to bee and wasp venom: improved diagnostic procedure by recombinant allergen-based IgE testing and basophil activation test including data about cross-reactive carbohydrate determinants.

    Science.gov (United States)

    Eberlein, Bernadette; Krischan, Lilian; Darsow, Ulf; Ollert, Markus; Ring, Johannes

    2012-07-01

    Specific IgE (sIgE) antibodies to both bee and wasp venom can be due to a sensitivity to both insect venoms or to cross-reactive carbohydrate determinants (CCDs). We investigated whether a basophil activation test (BAT) with both venoms, as well as with bromelain and horseradish peroxidase (HRP), or recombinant allergen-based IgE testing can improve the diagnostic procedure. Twenty-two Hymenoptera-venom-allergic patients with sIgE antibodies to both bee and wasp venom were studied. sIgE antibodies to MUXF3 CCD, bromelain, HRP, rApi m 1, and rVes v 5 were determined, and a BAT (Flow2 CAST) with venom extracts, bromelain, and HRP was performed. Further recombinant allergen-based IgE testing was done by using an ELISA, if required. The reactivity of basophils was calculated from the insect venom concentration at half-maximum stimulation. Double positivity/double negativity/single positivity to rApi m 1 and rVes v 5 was seen in 12/1/9 patients. Further recombinant allergen-based IgE testing in the latter group revealed positive results to the other venom in all cases except one. BAT was double positive/double negative/single positive in 6/2/14 patients. Four patients with negative results for sIgE antibodies to CCDs had positive results in BAT. BAT with bromelain/HRP showed a sensitivity of 50%/81% and a specificity of 91%/90%. Component-resolved IgE testing elucidates the pattern of double positivity, showing a majority of true double sensitizations independent of CCD sensitization. BAT seems to add more information about the culprit insect, even if the true clinical relevance of BAT is not completely determined because of ethical limitations on diagnostic sting challenges. BAT with HRP is a good method to determine sensitivity to CCDs. Copyright © 2012 American Academy of Allergy, Asthma & Immunology. Published by Mosby, Inc. All rights reserved.

  2. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  3. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  4. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...
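The tracking error volatility (TEV) that this abstract builds on is simply the standard deviation of the active returns, portfolio minus benchmark; a minimal sketch with invented return series:

```python
# Tracking error volatility: standard deviation of active returns.
# The two return series are invented for illustration.
import statistics

def tracking_error_volatility(portfolio_returns, benchmark_returns):
    active = [p - b for p, b in zip(portfolio_returns, benchmark_returns)]
    return statistics.stdev(active)  # sample standard deviation

port = [0.02, 0.01, -0.005, 0.015]
bench = [0.018, 0.012, -0.004, 0.010]
tev = tracking_error_volatility(port, bench)  # small TEV = near-passive manager
```

A manager capped at a very small TEV is, by construction, hugging the benchmark, which is exactly the activeness question the paper raises.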

  5. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
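Concatenated systems of the kind described here usually interleave symbols between the inner and outer code so that a burst of channel errors is spread across many outer codewords. The benefit can be sketched as follows; the 3x4 block size is illustrative, not the interleaver of the RS (255,223) system:

```python
# Block interleaver sketch: write row-by-row, read column-by-column,
# so a burst of channel errors lands in different rows (codewords).
# Sizes and the burst position are illustrative assumptions.

def interleave(symbols, rows, cols):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    out = [None] * (rows * cols)
    i = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = symbols[i]
            i += 1
    return out

data = list(range(12))                      # 3 codewords of 4 symbols each
sent = interleave(data, rows=3, cols=4)
corrupted = sent[:4] + [-1, -1, -1] + sent[7:]   # burst of 3 channel errors
received = deinterleave(corrupted, rows=3, cols=4)
bad_rows = {i // 4 for i, s in enumerate(received) if s == -1}
```

After deinterleaving, the 3-symbol burst is spread over 3 different codewords, so each sees only one error, well within a Reed-Solomon code's correction capability.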

  6. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Tamara E. Payne, Philip J. Castro, Stephen A. Gregory (Applied Optimization). The authors advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly... filter systems will likely be supplanted by the Sloan-based filter systems. The Johnson photometric system is a set of filters in the optical.

  7. Equating error in observed-score equating

    NARCIS (Netherlands)

    van der Linden, Willem J.

    2006-01-01

    Traditionally, error in equating observed scores on two versions of a test is defined as the difference between the transformations that equate the quantiles of their distributions in the sample and population of test takers. But it is argued that if the goal of equating is to adjust the scores of
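Observed-score (equipercentile) equating of the kind this abstract discusses maps a score on form X to the form-Y score with the same percentile rank; a minimal sketch with invented score samples:

```python
# Equipercentile equating sketch: match percentile ranks across two
# test forms. Score distributions are invented for illustration.

def percentile_rank(scores, x):
    below = sum(s < x for s in scores)
    equal = sum(s == x for s in scores)
    return (below + 0.5 * equal) / len(scores)

def equate(x, scores_x, scores_y):
    """Form-Y score whose percentile rank is closest to x's rank on form X."""
    target = percentile_rank(scores_x, x)
    return min(sorted(set(scores_y)),
               key=lambda y: abs(percentile_rank(scores_y, y) - target))

scores_x = [10, 12, 12, 14, 15, 17, 18, 20]
scores_y = [8, 9, 11, 11, 13, 14, 16, 19]
y_equiv = equate(15, scores_x, scores_y)   # form-Y equivalent of X-score 15
```

Equating error, in the traditional sense the abstract starts from, is how far this sample-based mapping departs from the mapping in the full population.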

  8. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error-correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  9. Video Error Correction Using Steganography

    Directory of Open Access Journals (Sweden)

    Robie David L

    2002-01-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real-time nature, must deal with these errors without retransmission of the corrupted data. The errors can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2-compliant codec uses data hiding to transmit error-correction information and applies several error concealment techniques in the decoder. The decoder resynchronizes more quickly, with fewer errors, than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  10. Error Correction of Loudspeakers

    DEFF Research Database (Denmark)

    Pedersen, Bo Rohde

    Throughout this thesis, the topic of electrodynamic loudspeaker unit design and modelling is reviewed. The research behind this project has been to study loudspeaker design based on the new possibilities introduced by including digital signal processing, thereby achieving more freedom in loudspeaker unit design. This freedom can be used for efficiency improvements, where different loudspeaker design cases show design opportunities. Optimization by size and efficiency, instead of flat frequency response and linearity, is the basis of the loudspeaker efficiency designs studied. The project also covers the design of a nonlinear feed-forward controller. System identification is used for tracking the loudspeaker parameters; different system identification methods are reviewed, and the investigation ends with a simple FIR-based algorithm. Finally, the parameter-tracking system is tested with music signals on a 6½ inch...

  11. WACC: Definition, misconceptions and errors

    OpenAIRE

    Fernandez, Pablo

    2011-01-01

    The WACC is just the rate at which the Free Cash Flows must be discounted to obtain the same result as in the valuation using Equity Cash Flows discounted at the required return to equity (Ke). The WACC is neither a cost nor a required return: it is a weighted average of a cost and a required return. To refer to the WACC as the "cost of capital" may be misleading because it is not a cost. The paper includes 7 errors due to not remembering the definition of WACC and shows the relationship betwe...
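The weighted-average definition restated in this abstract, a blend of the required return to equity and the after-tax cost of debt, reduces to one line; the figures below are invented for illustration:

```python
# WACC as a weighted average of Ke (a required return) and the
# after-tax cost of debt. E, D and the rates are invented values.

def wacc(E, D, Ke, Kd, tax_rate):
    """E, D: market values of equity and debt; Ke, Kd: required return / cost."""
    return (E * Ke + D * Kd * (1 - tax_rate)) / (E + D)

w = wacc(E=600.0, D=400.0, Ke=0.10, Kd=0.05, tax_rate=0.30)
```

The paper's point survives the arithmetic: the result is a discount rate for Free Cash Flows, not a "cost" in any accounting sense.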

  12. Deductive Error Diagnosis and Inductive Error Generalization for Intelligent Tutoring Systems.

    Science.gov (United States)

    Hoppe, H. Ulrich

    1994-01-01

    Examines the deductive approach to error diagnosis for intelligent tutoring systems. Topics covered include the principles of the deductive approach to diagnosis; domain-specific heuristics to solve the problem of generalizing error patterns; and deductive diagnosis and the hypertext-based learning environment. (Contains 26 references.) (JLB)

  13. KMRR thermal power measurement error estimation

    International Nuclear Information System (INIS)

    Rhee, B.W.; Sim, B.S.; Lim, I.C.; Oh, S.K.

    1990-01-01

    The thermal power measurement error of the Korea Multi-purpose Research Reactor has been estimated by a statistical Monte Carlo method, and compared with those obtained by the other methods including deterministic and statistical approaches. The results show that the specified thermal power measurement error of 5% cannot be achieved if the commercial RTDs are used to measure the coolant temperatures of the secondary cooling system and the error can be reduced below the requirement if the commercial RTDs are replaced by the precision RTDs. The possible range of the thermal power control operation has been identified to be from 100% to 20% of full power
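A statistical Monte Carlo estimate of thermal power measurement error, of the general kind described above, can be sketched by sampling RTD temperature errors and propagating them through P = m_dot * cp * (T_hot - T_cold); all numeric values below are illustrative, not KMRR data:

```python
# Monte Carlo propagation of RTD temperature errors into thermal power.
# Flow rate, temperatures and the RTD sigma are invented assumptions.
import random
import statistics

random.seed(42)

m_dot, cp = 80.0, 4.18          # kg/s, kJ/(kg K)
t_hot, t_cold = 45.0, 35.0      # deg C, true coolant temperatures
sigma_rtd = 0.3                 # deg C, 1-sigma error of a commercial RTD

samples = []
for _ in range(20000):
    th = random.gauss(t_hot, sigma_rtd)
    tc = random.gauss(t_cold, sigma_rtd)
    samples.append(m_dot * cp * (th - tc))

nominal = m_dot * cp * (t_hot - t_cold)
rel_error = statistics.stdev(samples) / nominal   # ~ sqrt(2)*sigma_rtd/dT
```

With these toy numbers the relative power error is about 4%, dominated by the temperature-difference measurement, which mirrors the paper's conclusion that RTD precision drives whether the 5% target is met.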

  14. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing partially aware errors, (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Common Errors in Ecological Data Sharing

    Directory of Open Access Journals (Sweden)

    Robert B. Cook

    2013-04-01

    Objectives: (1) to identify common errors in data organization and metadata completeness that would preclude a "reader" from being able to interpret and re-use the data for a new purpose; and (2) to develop a set of best practices derived from these common errors that would guide researchers in creating more usable data products that could be readily shared, interpreted, and used. Methods: We used directed qualitative content analysis to assess and categorize data and metadata errors identified by peer reviewers of data papers published in the Ecological Society of America's (ESA) Ecological Archives. Descriptive statistics provided the relative frequency of the errors identified during the peer review process. Results: There were seven overarching error categories: Collection & Organization, Assure, Description, Preserve, Discover, Integrate, and Analyze/Visualize. These categories represent errors researchers regularly make at each stage of the Data Life Cycle. Collection & Organization and Description errors were some of the most common errors, both of which occurred in over 90% of the papers. Conclusions: Publishing data for sharing and reuse is error prone, and each stage of the Data Life Cycle presents opportunities for mistakes. The most common errors occurred when the researcher did not provide adequate metadata to enable others to interpret and potentially re-use the data. Fortunately, there are ways to minimize these mistakes through carefully recording all details about study context, data collection, QA/QC, and analytical procedures from the beginning of a research project and then including this descriptive information in the metadata.

  16. Analyzing temozolomide medication errors: potentially fatal.

    Science.gov (United States)

    Letarte, Nathalie; Gabay, Michael P; Bressler, Linda R; Long, Katie E; Stachnik, Joan M; Villano, J Lee

    2014-10-01

    The EORTC-NCIC regimen for glioblastoma requires different dosing of temozolomide (TMZ) during radiation and maintenance therapy. This complexity is exacerbated by the availability of multiple TMZ capsule strengths. TMZ is an alkylating agent and the major toxicity of this class is dose-related myelosuppression. Inadvertent overdose can be fatal. The websites of the Institute for Safe Medication Practices (ISMP), and the Food and Drug Administration (FDA) MedWatch database were reviewed. We searched the MedWatch database for adverse events associated with TMZ and obtained all reports including hematologic toxicity submitted from 1st November 1997 to 30th May 2012. The ISMP describes errors with TMZ resulting from the positioning of information on the label of the commercial product. The strength and quantity of capsules on the label were in close proximity to each other, and this has been changed by the manufacturer. MedWatch identified 45 medication errors. Patient errors were the most common, accounting for 21 or 47% of errors, followed by dispensing errors, which accounted for 13 or 29%. Seven reports or 16% were errors in the prescribing of TMZ. Reported outcomes ranged from reversible hematological adverse events (13%), to hospitalization for other adverse events (13%) or death (18%). Four error reports lacked detail and could not be categorized. Although the FDA issued a warning in 2003 regarding fatal medication errors and the product label warns of overdosing, errors in TMZ dosing occur for various reasons and involve both healthcare professionals and patients. Overdosing errors can be fatal.

  17. An Enhanced Error Model for EKF-Based Tightly-Coupled Integration of GPS and Land Vehicle's Motion Sensors.

    Science.gov (United States)

    Karamat, Tashfeen B; Atia, Mohamed M; Noureldin, Aboelmagd

    2015-09-22

    Reduced inertial sensor systems (RISS) have been introduced by many researchers as a low-cost, low-complexity sensor assembly that can be integrated with GPS to provide a robust integrated navigation system for land vehicles. In earlier works, the developed error models were simplified based on the assumption that the vehicle is mostly moving on a flat horizontal plane. Another limitation is the simplified estimation of the horizontal tilt angles, which is based on simple averaging of the accelerometers' measurements without modelling their errors or tilt angle errors. In this paper, a new error model is developed for RISS that accounts for the effect of tilt angle errors and the accelerometer's errors. Additionally, it also includes important terms in the system dynamic error model, which were ignored during the linearization process in earlier works. An augmented extended Kalman filter (EKF) is designed to incorporate tilt angle errors and transversal accelerometer errors. The new error model and the augmented EKF design are developed in a tightly-coupled RISS/GPS integrated navigation system. The proposed system was tested on real trajectories' data under degraded GPS environments, and the results were compared to earlier works on RISS/GPS systems. The findings demonstrated that the proposed enhanced system introduced significant improvements in navigational performance.
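The extended Kalman filter machinery behind such an integration is too large to reproduce here, but its predict/update cycle can be sketched on a deliberately tiny 1-D constant-velocity model; all matrices and measurements below are invented, and the real RISS/GPS filter carries many more states (including the tilt-angle and accelerometer error terms this paper adds):

```python
# Minimal Kalman predict/update cycle: state [position, velocity],
# corrected by noisy position fixes. All values are illustrative.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (constant velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 0.01 * np.eye(2)                    # process noise (assumed)
R = np.array([[4.0]])                   # measurement noise (assumed)

x = np.array([[0.0], [1.0]])            # initial state: position 0, velocity 1
P = np.eye(2)

for z in [1.2, 1.9, 3.1, 4.0, 5.2]:     # noisy position fixes, one per second
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update
    y = np.array([[z]]) - H @ x         # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

position, velocity = float(x[0, 0]), float(x[1, 0])
```

An *extended* KF linearizes a nonlinear motion/measurement model around the current estimate each step; augmenting the state with additional error terms, as the paper does, is what lets the filter estimate and remove them.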

  18. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  19. Audit of medication errors by anesthetists in North Western Nigeria ...

    African Journals Online (AJOL)

    ... errors do occur in the everyday practice of anesthetists in Nigeria as in other countries and can lead to morbidity and mortality in our patients. Routine audit and reporting of critical incidents including errors in drug administration should be encouraged. Reduction of medication errors is an important aspect of patient safety, ...

  20. Iatrogenic medication errors in a paediatric intensive care unit in ...

    African Journals Online (AJOL)

    Errors most frequently encountered included failure to calculate rates of infusion and the conversion of mL to mEq or mL to mg for potassium, phenobarbitone and digoxin. Of the 117 children admitted, 111 (94.9%) were exposed to at least one medication error. Two or more medication errors occurred in 34.1% of cases.

  1. Human Factors Process Task Analysis: Liquid Oxygen Pump Acceptance Test Procedure at the Advanced Technology Development Center

    Science.gov (United States)

    Diorio, Kimberly A.; Voska, Ned (Technical Monitor)

    2002-01-01

    This viewgraph presentation provides information on Human Factors Process Failure Modes and Effects Analysis (HF PFMEA). HF PFMEA includes the following 10 steps: describe the mission; define the system; identify human-machine interactions; list human actions; identify potential errors; identify factors that affect error; determine the likelihood of error; determine the potential effects of errors; evaluate risk; generate solutions (manage error). The presentation also describes how this analysis was applied to a liquid oxygen pump acceptance test.

  2. Evaluation of analytical errors in a clinical chemistry laboratory: a 3 year experience.

    Science.gov (United States)

    Sakyi, As; Laing, Ef; Ephraim, Rk; Asibey, Of; Sadique, Ok

    2015-01-01

    Proficient laboratory service is the cornerstone of modern healthcare systems and has an impact on over 70% of medical decisions on admission, discharge, and medications. In recent years, there has been an increasing awareness of the importance of errors in laboratory practice and their possible negative impact on patient outcomes. We retrospectively analyzed data spanning a period of 3 years on analytical errors observed in our laboratory. The data covered errors over the whole testing cycle, including the pre-, intra-, and post-analytical phases, and we discuss strategies pertinent to our setting to minimize their occurrence. We describe the pre-analytical, analytical and post-analytical errors observed at the Komfo Anokye Teaching Hospital clinical biochemistry laboratory during a 3-year period from January, 2010 to December, 2012. Data were analyzed with GraphPad Prism 5 (GraphPad Software Inc., CA, USA). A total of 589,510 tests was performed on 188,503 outpatients and hospitalized patients. The overall error rate for the 3 years was 4.7% (27,520/58,950). Pre-analytical, analytical and post-analytical errors contributed 3.7% (2210/58,950), 0.1% (108/58,950), and 0.9% (512/58,950), respectively. The number of tests decreased significantly over the 3-year period, but this did not correspond with a reduction in the overall error rate (P = 0.90) over the years. Analytical errors are embedded within our total process setup, especially the pre-analytical and post-analytical phases. Strategic measures, including quality assessment programs for staff involved in pre-analytical processes, should be intensified.

  3. VOLUMETRIC ERROR COMPENSATION IN FIVE-AXIS CNC MACHINING CENTER THROUGH KINEMATICS MODELING OF GEOMETRIC ERROR

    Directory of Open Access Journals (Sweden)

    Pooyan Vahidi Pashsaki

    2016-06-01

    Accuracy of a five-axis CNC machine tool is affected by a vast number of error sources. This paper investigates volumetric error modeling and its compensation as the basis for creating new tool paths that improve workpiece accuracy. The volumetric error model of a five-axis machine tool with the configuration RTTTR (tilting head B-axis and rotary table A′-axis on the workpiece side) was set up taking into consideration rigid-body kinematics and homogeneous transformation matrices, in which 43 error components are included. These 43 error components can separately reduce the geometrical and dimensional accuracy of workpieces. The machining accuracy of the workpiece is governed by the position of the cutting tool center point (TCP) relative to the workpiece; when the cutting tool deviates from its ideal position relative to the workpiece, a machining error results. The compensation process comprises detecting the present tool path, analyzing the geometric errors of the RTTTR five-axis CNC machine tool, translating current positions into compensated positions using the kinematic error model, converting the newly created components into new tool paths using the compensation algorithms, and finally editing the old G-codes using a G-code generator algorithm.
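The rigid-body-kinematics idea, error-perturbed homogeneous transforms deviating the TCP from its ideal position, can be sketched on a two-transform chain; the error values below are invented, and a real five-axis model carries the full 43 components:

```python
# Volumetric error sketch: compose 4x4 homogeneous transforms for an
# ideal and an error-perturbed kinematic chain, then compare the TCP.
# The chain (one rotation, one translation) and errors are illustrative.
import numpy as np

def translation(tx, ty, tz):
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    return T

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[0, 0], T[0, 1], T[1, 0], T[1, 1] = c, -s, s, c
    return T

# ideal chain: rotate the table 30 degrees, then move 100 mm in x
ideal = rot_z(np.pi / 6) @ translation(100.0, 0.0, 0.0)
# actual chain: small angular error plus linear positioning errors (mm)
actual = rot_z(np.pi / 6 + 1e-4) @ translation(100.01, 0.005, -0.002)

tcp = np.array([0.0, 0.0, 0.0, 1.0])            # TCP at the chain end origin
error_vec = (actual @ tcp - ideal @ tcp)[:3]    # volumetric error at the TCP
error_norm = float(np.linalg.norm(error_vec))
```

Compensation then inverts this mapping: given the identified error components, the commanded positions are shifted so the *actual* chain places the TCP where the *ideal* chain would have.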

  4. Critical slowing down and error analysis in lattice QCD simulations

    International Nuclear Information System (INIS)

    Virotta, Francesco

    2012-01-01

    In this work we investigate the critical slowing down of lattice QCD simulations. We perform a preliminary study in the quenched approximation, where we find that our estimate of the exponential autocorrelation time scales as τ_exp(a) ∝ a^(-5), where a is the lattice spacing. In unquenched simulations with O(a)-improved Wilson fermions we do not obtain a scaling law but find results compatible with the behavior that we find in the pure gauge theory. The discussion is supported by a large set of ensembles, both in pure gauge theory and in the theory with two degenerate sea quarks. We have moreover investigated the effect of slow algorithmic modes in the error analysis of the expectation values of typical lattice QCD observables (hadronic matrix elements and masses). In the context of simulations affected by slow modes we propose and test a method to obtain reliable estimates of statistical errors. The method is meant to help in the typical algorithmic setup of lattice QCD, namely when the total statistics collected is of O(10) τ_exp. This is the typical case when simulating close to the continuum limit, where the computational cost of producing two independent data points can be extremely large. We finally discuss the scale setting in N_f = 2 simulations using the kaon decay constant f_K as physical input. The method is explained together with a thorough discussion of the error analysis employed. A description of the publicly available code used for the error analysis is included.
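The effect of slow modes on error estimates can be sketched on a synthetic AR(1) chain, for which the integrated autocorrelation time is known exactly; this toy stands in for the Gamma-method-style analysis such lattice studies actually use:

```python
# Autocorrelated Monte Carlo data inflates the true error of the mean:
# for an AR(1) chain with coefficient rho, tau_int = (1+rho)/(2(1-rho))
# and the naive error must be inflated by sqrt(2*tau_int).
# rho and the chain length are illustrative choices.
import random
import statistics

random.seed(1)
rho, n = 0.9, 200000
x, data = 0.0, []
for _ in range(n):
    x = rho * x + random.gauss(0.0, 1.0)   # AR(1) "Markov chain" sample
    data.append(x)

naive_err = statistics.stdev(data) / n ** 0.5   # pretends samples are independent
tau_int = (1 + rho) / (2 * (1 - rho))           # exact for AR(1): 9.5 here
true_err = naive_err * (2 * tau_int) ** 0.5     # inflated by sqrt(19) ~ 4.4
```

When the run length is only O(10) autocorrelation times, even estimating tau_int itself becomes unreliable, which is precisely the regime the thesis addresses.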

  5. MEDICAL ERROR: CIVIL AND LEGAL ASPECT.

    Science.gov (United States)

    Buletsa, S; Drozd, O; Yunin, O; Mohilevskyi, L

    2018-03-01

    The scientific article is focused on the research of the notion of medical error, medical and legal aspects of this notion have been considered. The necessity of the legislative consolidation of the notion of «medical error» and criteria of its legal estimation have been grounded. In the process of writing a scientific article, we used the empirical method, general scientific and comparative legal methods. A comparison of the concept of medical error in civil and legal aspects was made from the point of view of Ukrainian, European and American scientists. It has been marked that the problem of medical errors is known since ancient times and in the whole world, in fact without regard to the level of development of medicine, there is no country, where doctors never make errors. According to the statistics, medical errors in the world are included in the first five reasons of death rate. At the same time the grant of medical services practically concerns all people. As a man and his life, health in Ukraine are acknowledged by a higher social value, medical services must be of high-quality and effective. The grant of not quality medical services causes harm to the health, and sometimes the lives of people; it may result in injury or even death. The right to the health protection is one of the fundamental human rights assured by the Constitution of Ukraine; therefore the issue of medical errors and liability for them is extremely relevant. The authors make conclusions, that the definition of the notion of «medical error» must get the legal consolidation. Besides, the legal estimation of medical errors must be based on the single principles enshrined in the legislation and confirmed by judicial practice.

  6. Error propagation analysis for a sensor system

    International Nuclear Information System (INIS)

    Yeater, M.L.; Hockenbury, R.W.; Hawkins, J.; Wilkinson, J.

    1976-01-01

    As part of a program to develop reliability methods for operational use with reactor sensors and protective systems, error propagation analyses are being made for each model. An example is a sensor system computer simulation model, in which the sensor system signature is convolved with a reactor signature to show the effect of each in revealing or obscuring information contained in the other. The error propagation analysis models the system and signature uncertainties and sensitivities, whereas the simulation models the signatures and, by extensive repetition, reveals the effect of errors in various reactor input or sensor response data. In the approach for the example presented, the errors accumulated by the signature (a set of ''noise'' frequencies) are successively calculated as it is propagated stepwise through a system composed of sensor and signal processing components. Additional modeling steps include a Fourier transform calculation to produce the usual power spectral density representation of the product signature, and some form of pattern recognition algorithm.
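    The power-spectral-density step mentioned in the abstract can be sketched with a plain periodogram; the 50 Hz "signature", sampling rate, and noise level below are invented for illustration:

    ```python
    import numpy as np

    def power_spectral_density(signal, fs):
        """Single-sided periodogram estimate of the PSD (illustrative)."""
        n = len(signal)
        spec = np.fft.rfft(signal - np.mean(signal))
        psd = (np.abs(spec) ** 2) / (fs * n)
        psd[1:-1] *= 2.0          # fold negative frequencies into the positive half
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        return freqs, psd

    # a 50 Hz "reactor signature" buried in sensor noise
    fs = 1000.0
    t = np.arange(0, 2.0, 1.0 / fs)
    x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.default_rng(1).normal(size=t.size)
    f, p = power_spectral_density(x, fs)
    print(f[np.argmax(p)])   # peak near 50 Hz
    ```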

  7. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)
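    The record derives asymptotic standard errors for matrix correlations; as a simpler scalar analogue (not the paper's method), the standard error of a single Pearson correlation under bivariate normality follows from the Fisher z-transform, where SE(z) = 1/sqrt(n - 3):

    ```python
    import math

    def fisher_ci(r, n, z_crit=1.96):
        """95% confidence interval for a Pearson correlation via the Fisher
        z-transform; assumes bivariate normality (scalar analogue only)."""
        z = math.atanh(r)
        se = 1.0 / math.sqrt(n - 3)
        return math.tanh(z - z_crit * se), math.tanh(z + z_crit * se)

    # r = 0.5 observed in a sample of n = 100
    lo, hi = fisher_ci(0.5, 100)
    ```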

  8. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat errors in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty proposed a simple technique called the packet combining scheme, in which errors are corrected at the receiver from the erroneous copies. The packet combining (PC) scheme fails (i) when the bit error locations in the erroneous copies are the same and (ii) when multiple bit errors occur. Both cases have recently been addressed by two schemes known as the Packet Reversed Packet Combining (PRPC) scheme and the Modified Packet Combining (MPC) scheme, respectively. In this letter, two error forecasting correction schemes are reported which, in combination with PRPC, offer higher throughput. (author)
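    The basic packet-combining idea can be sketched as follows: bit positions where two erroneous copies disagree are candidate error locations, and flipping subsets of them until a checksum matches recovers the packet. This is a simplified illustration using CRC32 as the check, not the PRPC or MPC schemes themselves; like plain PC, it fails when both copies err in the same position:

    ```python
    import itertools
    import zlib

    def correct_from_copies(copy1: bytes, copy2: bytes, crc: int):
        """Packet-combining sketch: try flipping subsets of the disagreeing
        bits of copy1 until the CRC of the candidate matches."""
        diff = [(i, 1 << b) for i in range(len(copy1))
                for b in range(8) if (copy1[i] ^ copy2[i]) & (1 << b)]
        for k in range(len(diff) + 1):
            for subset in itertools.combinations(diff, k):
                candidate = bytearray(copy1)
                for i, mask in subset:
                    candidate[i] ^= mask
                if zlib.crc32(bytes(candidate)) == crc:
                    return bytes(candidate)
        return None

    packet = b"hello world"
    crc = zlib.crc32(packet)
    c1 = bytearray(packet); c1[2] ^= 0x01    # bit error in copy 1
    c2 = bytearray(packet); c2[7] ^= 0x40    # different bit error in copy 2
    print(correct_from_copies(bytes(c1), bytes(c2), crc))  # b'hello world'
    ```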

  9. MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors

    International Nuclear Information System (INIS)

    Passarge, M; Fix, M K; Manser, P; Stampanoni, M F M; Siebers, J V

    2016-01-01

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source.
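    The gamma evaluation (3%, 3 mm) used as one of the checks can be sketched in one dimension: a brute-force global gamma index on a toy dose profile, with all numbers illustrative:

    ```python
    import numpy as np

    def gamma_1d(ref, meas, spacing, dose_tol=0.03, dist_tol=3.0):
        """Brute-force 1D global gamma index (3%, 3 mm by default).
        ref and meas are sampled on the same grid with `spacing` in mm."""
        x = np.arange(len(ref)) * spacing
        dmax = ref.max()
        gammas = np.empty(len(meas))
        for i, (xi, mi) in enumerate(zip(x, meas)):
            dose_term = ((mi - ref) / (dose_tol * dmax)) ** 2
            dist_term = ((xi - x) / dist_tol) ** 2
            gammas[i] = np.sqrt((dose_term + dist_term).min())
        return gammas

    ref = np.exp(-((np.arange(100) - 50.0) ** 2) / 200.0)   # toy dose profile
    meas = ref * 1.01                                        # 1% dose scaling error
    g = gamma_1d(ref, meas, spacing=1.0)
    print((g <= 1.0).mean())   # pass rate: 1.0, since 1% is within the 3% tolerance
    ```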

  10. MO-FG-202-07: Real-Time EPID-Based Detection Metric For VMAT Delivery Errors

    Energy Technology Data Exchange (ETDEWEB)

    Passarge, M; Fix, M K; Manser, P [Division of Medical Radiation Physics and Department of Radiation Oncology, Inselspital, Bern University Hospital, and University of Bern, Bern (Switzerland); Stampanoni, M F M [Institute for Biomedical Engineering, ETH Zurich, and PSI, Villigen (Switzerland); Siebers, J V [Department of Radiation Oncology, University of Virginia, Charlottesville, VA (United States)

    2016-06-15

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies in-field radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source.

  11. The role of comprehensive check at the blood bank reception on blood requisitions in detecting potential transfusion errors.

    Science.gov (United States)

    Jain, Ashish; Kumari, Sonam; Marwaha, Neelam; Sharma, Ratti Ram

    2015-06-01

    Pre-transfusion testing includes proper requisitions, compatibility testing and pre-release checks. Proper labelling of samples and blood units and an accurate check of patient details help to minimize the risk of errors in transfusion. This study aimed to identify requisition errors before compatibility testing. The study was conducted in the blood bank of a tertiary care hospital in north India over a period of 3 months. The requisitions were screened for errors at the reception counter and inside the pre-transfusion testing laboratory. This included checking the Central Registration number (C.R. No.) and name of the patient on the requisition form and the sample label; the appropriateness of the sample container and sample label; incomplete requisitions; and blood group discrepancy. Out of the 17,148 blood requisitions, 474 (2.76%) requisition errors were detected before compatibility testing. There were 192 (1.11%) requisitions where the C.R. No. on the form and the sample did not tally, and in 70 (0.40%) requisitions the patient's name on the requisition form and the sample differed. The highest number of requisition errors was observed in requisitions received from the Emergency and Trauma services (27.38%), followed by the Medical wards (15.82%), and the lowest number (3.16%) from the Hematology and Oncology wards. A C.R. No. error was the most common error observed in our study. Thus a careful check of the blood requisitions at the blood bank reception counter helps in identifying potential transfusion errors.
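    The reception-counter check reduces to comparing identifiers on the requisition form against the sample label; a minimal sketch with invented field names (not the hospital's actual form layout):

    ```python
    def screen_requisition(form, sample_label):
        """Return the mismatches a reception-counter check would flag.
        The dict field names here are illustrative assumptions."""
        errors = []
        if form["cr_no"] != sample_label["cr_no"]:
            errors.append("C.R. No. mismatch")
        if form["name"].strip().lower() != sample_label["name"].strip().lower():
            errors.append("patient name mismatch")
        return errors

    print(screen_requisition({"cr_no": "123", "name": "Jane Doe"},
                             {"cr_no": "124", "name": "Jane Doe"}))  # ['C.R. No. mismatch']
    ```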

  12. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  13. The use of error and uncertainty methods in the medical laboratory.

    Science.gov (United States)

    Oosterhuis, Wytze P; Bayat, Hassan; Armbruster, David; Coskun, Abdurrahman; Freeman, Kathleen P; Kallner, Anders; Koch, David; Mackenzie, Finlay; Migliarino, Gabriel; Orth, Matthias; Sandberg, Sverre; Sylte, Marit S; Westgard, Sten; Theodorsson, Elvar

    2018-01-26

    Error methods, compared with uncertainty methods, offer simpler, more intuitive and practical procedures for calculating measurement uncertainty and conducting quality assurance in laboratory medicine. However, uncertainty methods are preferred in other fields of science, as reflected by the Guide to the Expression of Uncertainty in Measurement (GUM). When laboratory results are used to support medical diagnoses, the total uncertainty consists only partially of analytical variation; biological variation and pre- and postanalytical variation all need to be included. Furthermore, all components of the measuring procedure need to be taken into account. Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process. Uncertainty methods may be particularly useful for this purpose but have yet to show their strength in laboratory medicine. The purpose of this paper is to elucidate the pros and cons of error and uncertainty methods as groundwork for future consensus on their use in practical performance specifications. Error and uncertainty methods are complementary when evaluating measurement data.
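    The uncertainty-method bookkeeping described above amounts to combining independent components of variation in quadrature, GUM-style; the component values below are illustrative, not from the paper:

    ```python
    import math

    def total_cv(cv_analytical, cv_biological, cv_preanalytical=0.0):
        """Combine independent coefficients of variation (in %) in quadrature,
        as uncertainty methods prescribe for independent components."""
        return math.sqrt(cv_analytical ** 2 + cv_biological ** 2 + cv_preanalytical ** 2)

    # e.g. 3% analytical, 6% biological, 1% preanalytical variation
    print(round(total_cv(3.0, 6.0, 1.0), 2))  # 6.78
    ```

    Note how the largest component dominates: halving the analytical CV here would change the total only slightly, which is one practical argument for whole-process performance specifications.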

  14. The specificity of the Stroop interference score of errors to ADHD in boys

    DEFF Research Database (Denmark)

    Sørensen, L; Plessen, K J; Adolfsdottir, S

    2014-01-01

    The Stroop Interference Test is widely used to assess the inhibition function; however, divergent results have emerged from meta-analyses in children with ADHD. This has led to conflicting results as to whether the Stroop test detects the level of inhibition in these children. We hypothesized ... that the general approach to include interference scores depending on response time causes conflicting results, whereas recordings of errors may prove a superior measure of the inhibition function in children with ADHD. In the present study, 39 children with an ADHD diagnosis, two subgroups with and without ... scores on the Inhibit scale from the Behavior Rating Inventory of Executive Function. These findings support that a Stroop interference score of errors is sensitive for inhibition problems in children with ADHD and encourages the use of Stroop versions including error recordings independent of response ...

  15. Analysis of the interface tracking errors

    International Nuclear Information System (INIS)

    Cerne, G.; Tiselj, I.; Petelin, S.

    2001-01-01

    An important limitation of the interface-tracking algorithm is the grid density, which determines the spatial scale of the surface tracking. In this paper the analysis of the interface tracking errors, which occur in a dispersed flow, is performed for the VOF interface tracking method. A few simple two-fluid tests are proposed for the investigation of the interface tracking errors and their grid dependence. When the grid density becomes too coarse to follow the interface changes, the errors can be reduced either by using a denser nodalization or by switching to the two-fluid model during the simulation. Both solutions are analyzed and compared on a simple vortex-flow test. (author)

  16. Effects of OCR Errors on Ranking and Feedback Using the Vector Space Model.

    Science.gov (United States)

    Taghva, Kazem; And Others

    1996-01-01

    Reports on the performance of the vector space model in the presence of OCR (optical character recognition) errors in information retrieval. Highlights include precision and recall, a full-text test collection, smart vector representation, impact of weighting parameters, ranking variability, and the effect of relevance feedback. (Author/LRW)
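    The vector space model's sensitivity to OCR errors can be illustrated with raw term-frequency vectors and cosine similarity; this toy sketch omits the idf weighting and smart representation the study actually evaluates:

    ```python
    from collections import Counter
    import math

    def cosine(a: str, b: str) -> float:
        """Cosine similarity of raw term-frequency vectors (no idf weighting)."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[t] * vb[t] for t in va)
        na = math.sqrt(sum(v * v for v in va.values()))
        nb = math.sqrt(sum(v * v for v in vb.values()))
        return dot / (na * nb) if na and nb else 0.0

    query = "optical character recognition errors"
    clean = "errors introduced by optical character recognition"
    ocr   = "errors introduced by opticaI charactcr rccognition"   # simulated OCR damage
    print(cosine(query, clean) > cosine(query, ocr))  # True: OCR noise hurts ranking
    ```

    The corrupted terms no longer match the query vocabulary, so the document's score drops: exactly the ranking variability the study measures.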

  17. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  18. Medication administration errors in Eastern Saudi Arabia

    International Nuclear Information System (INIS)

    Mir Sadat-Ali

    2010-01-01

    To assess the prevalence and characteristics of medication errors (ME) in patients admitted to King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia. Medication errors are documented by the nurses and physicians on standard reporting forms (Hospital Based Incident Report). The study was carried out in King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia, and all incident reports were collected during the period from January 2008 to December 2009. The incident reports were analyzed for age, gender, nationality, nursing unit, and the time when the ME was reported. The data were analyzed, and statistically significant differences between groups were determined by Student's t-test; p-values <0.05 at a 95% confidence level were considered significant. There were 38 ME reported for the study period. The youngest patient was 5 days old and the oldest 70 years. There were 31 Saudi and 7 non-Saudi patients involved. The most common error was missed medication, which was seen in 15 (39.5%) patients. Over 15 (39.5%) of the errors occurred in 2 units (pediatric medicine, and obstetrics and gynecology). Nineteen (50%) of the errors occurred during the 3-11 pm shift. Our study shows that the prevalence of ME in our institution is low in comparison with the world literature. This could be due to under-reporting of errors, and we believe that ME reporting should be made less punitive so that ME can be studied and preventive measures implemented. (Author)

  19. The developmental clock of dental enamel: a test for the periodicity of prism cross-striations in modern humans and an evaluation of the most likely sources of error in histological studies of this kind

    Science.gov (United States)

    Antoine, Daniel; Hillson, Simon; Dean, M Christopher

    2009-01-01

    Dental tissues contain regular microscopic structures believed to result from periodic variations in the secretion of matrix by enamel- and dentine-forming cells. Counts of these structures are an important tool for reconstructing the chronology of dental development in both modern and fossil hominids. Most studies rely on the periodicity of the regular cross-banding that occurs along the long axis of enamel prisms. These prism cross-striations are widely thought to reflect a circadian rhythm of enamel matrix secretion and are generally regarded as representing daily increments of tissue. Previously, some researchers have argued against the circadian periodicity of these structures and questioned their use in reconstructing dental development. Here we tested the periodicity of enamel cross-striations – and the accuracy with which they can be used – in the developing permanent dentition of five children, excavated from a 19th century crypt in London, whose age-at-death was independently known. The interruption of crown formation by death was used to calibrate cross-striation counts. All five individuals produced counts that were strongly consistent with those expected from the independently known ages, taking into account the position of the neonatal line and factors of preservation. These results confirm that cross-striations do indeed reflect a circadian rhythm in enamel matrix secretion. They further validate their use in reconstructing dental development and in determining the age-at-death of the remains of children whose dentitions are still forming at the time of death. Significantly, they identify the most likely sources of error and the common difficulties encountered in histological studies of this kind. PMID:19166472

  20. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Errorful and errorless learning: The impact of cue-target constraint in learning from errors.

    Science.gov (United States)

    Bridger, Emma K; Mecklinger, Axel

    2014-08-01

    The benefits of testing on learning are well described, and attention has recently turned to what happens when errors are elicited during learning: Is testing nonetheless beneficial, or can errors hinder learning? Whilst recent findings have indicated that tests boost learning even if errors are made on every trial, other reports, emphasizing the benefits of errorless learning, have indicated that errors lead to poorer later memory performance. The possibility that this discrepancy is a function of the materials that must be learned, in particular the relationship between the cues and targets, was addressed here. Cued recall after either a study-only errorless condition or an errorful learning condition was contrasted across cue-target associations for which the extent to which the target was constrained by the cue was either high or low. Experiment 1 showed that whereas errorful learning led to greater recall for low-constraint stimuli, it led to a significant decrease in recall for high-constraint stimuli. This interaction is thought to reflect the extent to which retrieval is constrained by the cue-target association, as well as by the presence of preexisting semantic associations. The advantage of errorful retrieval for low-constraint stimuli was replicated in Experiment 2, and the interaction with stimulus type was replicated in Experiment 3, even when guesses were randomly designated as being either correct or incorrect. This pattern provides support for inferences derived from reports in which participants made errors on all learning trials, whilst highlighting the impact of material characteristics on the benefits and disadvantages that accrue from errorful learning in episodic memory.

  2. Reduction in Chemotherapy Mixing Errors Using Six Sigma: Illinois CancerCare Experience.

    Science.gov (United States)

    Heard, Bridgette; Miller, Laura; Kumar, Pankaj

    2012-03-01

    Chemotherapy mixing errors (CTMRs), although rare, have serious consequences. Illinois CancerCare is a large practice with multiple satellite offices. The goal of this study was to reduce the number of CTMRs using Six Sigma methods. A Six Sigma team consisting of five participants (registered nurses and pharmacy technicians [PTs]) was formed. The team had 10 hours of Six Sigma training in the DMAIC (i.e., Define, Measure, Analyze, Improve, Control) process. Measurement of errors started from the time the CT order was verified by the PT to the time of CT administration by the nurse. Data collection included retrospective error-tracking software, system audits, and staff surveys. Root causes of CTMRs included inadequate knowledge of the CT mixing protocol, inconsistencies in checking methods, and frequent changes in clinic staffing. Initial CTMRs (n = 33,259) constituted 0.050%, with 77% of these errors affecting patients. The action plan included checklists, education, and competency testing. The post-implementation error rate (n = 33,376, annualized) over a 3-month period was reduced to 0.019%, with only 15% of errors affecting patients. Initial Sigma was calculated at 4.2; the process improved Sigma to 5.2, representing a 100-fold reduction. Financial analysis demonstrated a reduction in annualized loss of revenue (administration charges and drug wastage) from $11,537.95 (Medicare Average Sales Price) before the start of the project to $1,262.40. The Six Sigma process is a powerful technique for the reduction of CTMRs.
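    The conversion from a defect rate to a sigma level conventionally applies the inverse normal CDF plus a 1.5-sigma long-term shift. The sketch below shows that convention only; the study's own opportunity counting evidently differs, since it does not reproduce the reported 4.2 and 5.2 figures exactly:

    ```python
    from statistics import NormalDist

    def sigma_level(defect_rate):
        """Short-term sigma level from a long-term defect rate, using the
        conventional 1.5-sigma shift of Six Sigma reporting."""
        return NormalDist().inv_cdf(1.0 - defect_rate) + 1.5

    # 0.050% defects comes out at roughly 4.8 sigma under this convention
    print(round(sigma_level(0.00050), 2))
    ```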

  3. Error-finding and error-correcting methods for the start-up of the SLC

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.; Selig, L.J.

    1987-02-01

    During the commissioning of an accelerator, storage ring, or beam transfer line, one of the important tasks of an accelerator physicist is to check the first-order optics of the beam line and to look for errors in the system. Conceptually, it is important to distinguish between techniques for finding the machine errors that are the cause of the problem and techniques for correcting the beam errors that are the result of the machine errors. In this paper we limit our presentation to certain applications of these two methods for finding or correcting beam-focus errors and beam-kick errors, which affect the profile and trajectory of the beam, respectively. Many of these methods have been used successfully in the commissioning of SLC systems. In order not to waste expensive beam time we have developed and used a beam-line simulator to test the ideas that have not been tested experimentally. To save valuable physicists' time we have further automated the beam-kick error-finding procedures by adopting methods from the field of artificial intelligence to develop a prototype expert system. Our experience with this prototype has demonstrated the usefulness of expert systems in solving accelerator control problems. The expert system is able to find the same solutions as an expert physicist, but in a more systematic fashion. The methods used in these procedures and some of the recent applications are described in this paper.

  4. Teacher knowledge of error analysis in differential calculus

    Directory of Open Access Journals (Sweden)

    Eunice K. Moru

    2014-12-01

    The study investigated teacher knowledge of error analysis in differential calculus. Two teachers were the sample of the study: one a subject specialist and the other a mathematics education specialist. Questionnaires and interviews were used for data collection. The findings of the study reflect that the teachers’ knowledge of error analysis was characterised by the following assertions, which are backed up with some evidence: (1) teachers identified the errors correctly, (2) the generalised error identification resulted in opaque analysis, (3) some of the identified errors were not interpreted from multiple perspectives, (4) teachers’ evaluation of errors was either local or global and (5) in remedying errors, accuracy and efficiency were emphasised more than conceptual understanding. The implications of the findings of the study for teaching include engaging in error analysis continuously, as this is one way of improving knowledge for teaching.

  5. Creating illusions of knowledge: learning errors that contradict prior knowledge.

    Science.gov (United States)

    Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J

    2013-02-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors. 2013 APA, all rights reserved

  6. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Science.gov (United States)

    Calvo, Roque; D’Amato, Roberto; Gómez, Emilio; Domingo, Rosario

    2016-01-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included. PMID:27690052
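    One way to read "vectorial composition of length error by axis" is to project per-axis accumulated length errors onto the measured displacement. This sketch is an assumption about the model's general form, not the paper's actual equations:

    ```python
    import math

    def length_error(dx, dy, dz, ex, ey, ez):
        """Project per-axis accumulated length errors (ex, ey, ez, in mm)
        onto the measured displacement (dx, dy, dz, in mm): an assumed
        vectorial-composition sketch, not the published model."""
        L = math.sqrt(dx * dx + dy * dy + dz * dz)
        return (dx * ex + dy * ey + dz * ez) / L

    # error of a 5 mm diagonal move with a 1 um/mm scale error on X and Y
    print(round(length_error(3, 4, 0, 0.003, 0.004, 0), 6))  # 0.005
    ```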

  7. Integration of Error Compensation of Coordinate Measuring Machines into Feature Measurement: Part I—Model Development

    Directory of Open Access Journals (Sweden)

    Roque Calvo

    2016-09-01

    The development of an error compensation model for coordinate measuring machines (CMMs) and its integration into feature measurement is presented. CMMs are widespread and dependable instruments in industry and laboratories for dimensional measurement. From the tip probe sensor to the machine display, there is a complex transformation of probed point coordinates through the geometrical feature model that makes the assessment of accuracy and uncertainty of measurement results difficult. Therefore, error compensation is not standardized, conversely to other simpler instruments. Detailed coordinate error compensation models are generally based on the CMM as a rigid body and require a detailed mapping of the CMM’s behavior. In this paper a new type of error compensation model is proposed. It evaluates the error from the vectorial composition of length error by axis and its integration into the geometrical measurement model. The variability not explained by the model is incorporated into the uncertainty budget. Model parameters are analyzed and linked to the geometrical errors and uncertainty of the CMM response. Next, the outstanding measurement models of flatness, angle, and roundness are developed. The proposed models are useful for measurement improvement with easy integration into CMM signal processing, in particular in industrial environments where built-in solutions are sought. A battery of implementation tests is presented in Part II, where the experimental endorsement of the model is included.

  8. Modeling the Error of the Medtronic Paradigm Veo Enlite Glucose Sensor.

    Science.gov (United States)

    Biagi, Lyvia; Ramkissoon, Charrise M; Facchinetti, Andrea; Leal, Yenny; Vehi, Josep

    2017-06-12

    Continuous glucose monitors (CGMs) are prone to inaccuracy due to time lags, sensor drift, calibration errors, and measurement noise. The aim of this study is to derive the model of the error of the second-generation Medtronic Paradigm Veo Enlite (ENL) sensor and compare it with the Dexcom SEVEN PLUS (7P), G4 PLATINUM (G4P), and advanced G4 for Artificial Pancreas studies (G4AP) systems. An enhanced version of a previously employed technique was utilized to dissect the sensor error into several components. The dataset used included 37 inpatient sessions in 10 subjects with type 1 diabetes (T1D), in which CGMs were worn in parallel and blood glucose (BG) samples were analyzed every 15 ± 5 min. Calibration error and sensor drift of the ENL sensor were best described by a linear relationship related to the gain and offset. The mean time lag estimated by the model is 9.4 ± 6.5 min. The overall average mean absolute relative difference (MARD) of the ENL sensor was 11.68 ± 5.07%. Calibration error had the highest contribution to total error in the ENL sensor; this was also reported for the 7P, G4P, and G4AP. The model of the ENL sensor error will be useful to test the in silico performance of CGM-based applications, i.e., the artificial pancreas, employing this kind of sensor.
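    The MARD figure quoted above is computed from paired CGM and reference readings as the mean of |CGM − ref| / ref; the readings below are invented for illustration:

    ```python
    def mard(cgm, ref):
        """Mean absolute relative difference (%) between CGM readings and
        paired blood-glucose reference samples."""
        return 100.0 * sum(abs(c - r) / r for c, r in zip(cgm, ref)) / len(cgm)

    # three invented CGM/reference pairs (mg/dL)
    print(round(mard([110, 95, 160], [100, 100, 150]), 2))  # 7.22
    ```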

  9. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    Science.gov (United States)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.

  10. Error management for musicians: an interdisciplinary conceptual framework.

    Science.gov (United States)

    Kruse-Weber, Silke; Parncutt, Richard

    2014-01-01

    Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians' generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative, and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of such an approach. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further music education and

  11. Error management for musicians: an interdisciplinary conceptual framework

    Directory of Open Access Journals (Sweden)

    Silke eKruse-Weber

    2014-07-01

    Full Text Available Musicians tend to strive for flawless performance and perfection, avoiding errors at all costs. Dealing with errors while practicing or performing is often frustrating and can lead to anger and despair, which can explain musicians’ generally negative attitude toward errors and the tendency to aim for errorless learning in instrumental music education. But even the best performances are rarely error-free, and research in general pedagogy and psychology has shown that errors provide useful information for the learning process. Research in instrumental pedagogy is still neglecting error issues; the benefits of risk management (before the error) and error management (during and after the error) are still underestimated. It follows that dealing with errors is a key aspect of music practice at home, teaching, and performance in public. And yet, to be innovative, or to make their performance extraordinary, musicians need to risk errors. Currently, most music students only acquire the ability to manage errors implicitly - or not at all. A more constructive, creative and differentiated culture of errors would balance error tolerance and risk-taking against error prevention in ways that enhance music practice and music performance. The teaching environment should lay the foundation for the development of these abilities. In this contribution, we survey recent research in aviation, medicine, economics, psychology, and interdisciplinary decision theory that has demonstrated that specific error-management training can promote metacognitive skills that lead to better adaptive transfer and better performance skills. We summarize how this research can be applied to music, and survey relevant research that is specifically tailored to the needs of musicians, including generic guidelines for risk and error management in music teaching and performance. On this basis, we develop a conceptual framework for risk management that can provide orientation for further

  12. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service controlled medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed 0.9% medication errors (264) versus 0.6% (154) in unidosis carts that had previously been revised. In carts not revised, 70.83% of the errors arose when setting up the unidosis carts; the rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%) or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: Unidosis carts need to be revised, and a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are revised before being sent to hospitalization units, the error rate diminishes to 0.3%.

  13. Test

    DEFF Research Database (Denmark)

    Bendixen, Carsten

    2014-01-01

    A contribution giving a brief, introductory, perspective-setting and concept-clarifying account of the concept of the test in the educational universe.

  14. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error-correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with a Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of uncorrectable errors were calculated for each data set before testing.
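
    An error-injection scheme of the kind described, exponentially distributed error-free gaps (a Poisson arrival process) separating bursts with Gaussian lengths, can be sketched as follows. The function name, parameters, and seed are illustrative assumptions, not a description of the original test apparatus:

```python
import random

def make_error_bursts(n_bits, mean_gap, burst_mean, burst_sd, seed=0):
    """Build a 0/1 error mask of length n_bits: gaps between bursts are
    exponentially distributed (Poisson process); burst lengths are
    Gaussian, clipped to at least one bit."""
    rng = random.Random(seed)
    mask = [0] * n_bits
    pos = int(rng.expovariate(1.0 / mean_gap))
    while pos < n_bits:
        burst = max(1, round(rng.gauss(burst_mean, burst_sd)))
        for i in range(pos, min(pos + burst, n_bits)):
            mask[i] = 1          # flip these bits in the test data
        pos += burst + int(rng.expovariate(1.0 / mean_gap))
    return mask

mask = make_error_bursts(10_000, mean_gap=200, burst_mean=4, burst_sd=1.5)
print(sum(mask), "errored bits out of", len(mask))
```

XOR-ing such a mask onto encoded data lets one check whether the decoder corrects every burst up to its designed correction capability.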

  15. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  16. At least some errors are randomly generated (Freud was wrong)

    Science.gov (United States)

    Sellen, A. J.; Senders, J. W.

    1986-01-01

    An experiment was carried out to expose something about human error generating mechanisms. In the context of the experiment, an error was made when a subject pressed the wrong key on a computer keyboard or pressed no key at all in the time allotted. These might be considered, respectively, errors of substitution and errors of omission. Each of seven subjects saw a sequence of three digital numbers, made an easily learned binary judgement about each, and was to press the appropriate one of two keys. Each session consisted of 1,000 presentations of randomly permuted, fixed numbers broken into 10 blocks of 100. One of two keys should have been pressed within one second of the onset of each stimulus. These data were subjected to statistical analyses in order to probe the nature of the error generating mechanisms. Goodness of fit tests for a Poisson distribution for the number of errors per 50 trial interval and for an exponential distribution of the length of the intervals between errors were carried out. There is evidence for an endogenous mechanism that may best be described as a random error generator. Furthermore, an item analysis of the number of errors produced per stimulus suggests the existence of a second mechanism operating on task driven factors producing exogenous errors. Some errors, at least, are the result of constant probability generating mechanisms with error rate idiosyncratically determined for each subject.

  17. Errors and mistakes in breast ultrasound diagnostics

    Directory of Open Access Journals (Sweden)

    Wiesław Jakubowski

    2012-09-01

    Full Text Available Sonomammography is often the first additional examination performed in the diagnostics of breast diseases. The development of ultrasound imaging techniques, particularly the introduction of high-frequency transducers, matrix transducers, harmonic imaging and, finally, elastography, has improved the diagnostics of breast disease. Nevertheless, as in every imaging method, there are errors and mistakes resulting from the technical limitations of the method, breast anatomy (fibrous remodeling), and insufficient sensitivity and, in particular, specificity. Errors in breast ultrasound diagnostics can be divided into those that are impossible to avoid and those that can potentially be reduced. In this article the most frequent errors in ultrasound are presented, including those caused by the presence of artifacts resulting from volumetric averaging in the near and far field, artifacts in cysts or in dilated lactiferous ducts (reverberations, comet tail artifacts, lateral beam artifacts), and improper setting of general enhancement, time gain curve or range. Errors dependent on the examiner, resulting in a wrong BIRADS-usg classification, are divided into negative and positive errors, and the sources of these errors are listed. Methods of minimizing the number of errors are discussed, including those related to appropriate examination technique, taking into account data from the case history, and using the greatest possible number of additional options such as harmonic imaging, color and power Doppler, and elastography. Examples are presented of errors resulting from the technical conditions of the method, and of examiner-dependent errors related to the great diversity and variation of ultrasound images of pathological breast lesions.

  18. Beyond hypercorrection: remembering corrective feedback for low-confidence errors.

    Science.gov (United States)

    Griffiths, Lauren; Higham, Philip A

    2018-02-01

    Correcting errors based on corrective feedback is essential to successful learning. Previous studies have found that corrections to high-confidence errors are better remembered than low-confidence errors (the hypercorrection effect). The aim of this study was to investigate whether corrections to low-confidence errors can also be successfully retained in some cases. Participants completed an initial multiple-choice test consisting of control, trick and easy general-knowledge questions, rated their confidence after answering each question, and then received immediate corrective feedback. After a short delay, they were given a cued-recall test consisting of the same questions. In two experiments, we found high-confidence errors to control questions were better corrected on the second test compared to low-confidence errors - the typical hypercorrection effect. However, low-confidence errors to trick questions were just as likely to be corrected as high-confidence errors. Most surprisingly, we found that memory for the feedback and original responses, not confidence or surprise, were significant predictors of error correction. We conclude that for some types of material, there is an effortful process of elaboration and problem solving prior to making low-confidence errors that facilitates memory of corrective feedback.

  19. Test Review: Wilkinson, G. S., & Robertson, G. J. (2006). Wide Range Achievement Test--Fourth Edition. Lutz, FL: Psychological Assessment Resources. WRAT4 Introductory Kit (Includes Manual, 25 Test/Response Forms [Blue and Green], and Accompanying Test Materials): $243.00

    Science.gov (United States)

    Dell, Cindy Ann; Harrold, Barbara; Dell, Thomas

    2008-01-01

    The Wide Range Achievement Test-Fourth Edition (WRAT4) is designed to provide "a quick, simple, psychometrically sound assessment of academic skills". The test was first published in 1946 by Joseph F. Jastak, with the purpose of augmenting the cognitive performance measures of the Wechsler-Bellevue Scales, developed by David Wechsler.…

  20. Report from LHC MD 1391: First tests of the variation of amplitude detuning with crossing angle as an observable for high-order errors in low-β∗ colliders

    CERN Document Server

    Maclean, Ewen Hamish; Fuchsberger, Kajetan; Giovannozzi, Massimo; Persson, Tobias Hakan Bjorn; Tomas Garcia, Rogelio; CERN. Geneva. ATS Department

    2017-01-01

    Nonlinear errors in experimental insertions can pose a significant challenge to the operability of low-β∗ colliders. When crossing schemes are applied high-order errors, such as decapole and dodecapole multipole components in triplets and separation dipoles, can feed-down to give a normal octupole perturbation. Such fields may contribute to distortion of the assumed tune footprint, influencing lifetime and the Landau damping of instabilities. Conversely, comparison of amplitude detuning coefficients with and without crossing schemes applied should allow for the beam-based study of such high-order errors. In this note first measurements of amplitude detuning with crossing bumps in the experimental insertions are reported.

  1. Errors in radiographic recognition in the emergency room

    International Nuclear Information System (INIS)

    Britton, C.A.; Cooperstein, L.A.

    1986-01-01

    For 6 months we monitored the frequency and type of errors in radiographic recognition made by radiology residents on call in our emergency room. A relatively low error rate was observed, probably because we evaluated cognitive errors only, rather than including errors of interpretation. The most common missed finding was a small fracture, particularly of the hands or feet. First-year residents were the most likely to make an error but, interestingly, our survey revealed a small subset of upper-level residents who made a disproportionate number of errors

  2. The effect of errors in charged particle beams

    International Nuclear Information System (INIS)

    Carey, D.C.

    1987-01-01

    Residual errors in a charged particle optical system determine how well the performance of the system conforms to the theory on which it is based. Mathematically possible optical modes can sometimes be eliminated as requiring precisions not attainable. Other plans may require introduction of means of correction for the occurrence of various errors. Error types include misalignments, magnet fabrication precision limitations, and magnet current regulation errors. A thorough analysis of a beam optical system requires computer simulation of all these effects. A unified scheme for the simulation of errors and their correction is discussed

  3. The error analysis of the determination of the activity coefficients via the isopiestic method

    International Nuclear Information System (INIS)

    Zhou Jun; Chen Qiyuan; Fang Zheng; Liang Yizeng; Liu Shijun; Zhou Yong

    2005-01-01

    Error analysis is very important to experimental design. The error analysis of the determination of activity coefficients for a binary system via the isopiestic method shows that the error sources include not only the experimental errors of the analyzed molalities and the measured osmotic coefficients, but also the deviation of the regressed values from the experimental data when a regression function is used. It also shows that accurate chemical analysis of the molality of the test solution is important, and that it is preferable to keep the error of the measured osmotic coefficients constant in all isopiestic experiments, including those on very dilute solutions. The isopiestic experiments on dilute solutions are very important, and the lowest molality should be low enough that a theoretical method can be used below it. The isopiestic experiment should therefore include test solutions below 0.1 mol·kg⁻¹; for most electrolyte solutions it is usually preferable to require the lowest molality to be less than 0.05 mol·kg⁻¹. Moreover, the experimental molalities of the test solutions should first be arranged by keeping the interval of the logarithms of the molalities nearly constant, and second, more high molalities should be included; we propose to arrange the experimental molalities greater than 1 mol·kg⁻¹ according to an arithmetic progression of molality intervals. After the experiments, the error of the calculated activity coefficients of the solutes can be calculated from the actual errors of the measured isopiestic molalities and the deviations of the regressed values from the experimental values with the equations we obtained.
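
    The recommended spacing, a nearly constant interval between the logarithms of successive molalities, can be generated as follows. This is a minimal sketch; the function name and the 0.05 to 1 mol·kg⁻¹ range are illustrative choices taken from the recommendations above:

```python
import math

def log_spaced_molalities(m_lo, m_hi, n):
    """Return n molalities (mol/kg) arranged so that the interval
    between the logarithms of successive values is constant."""
    step = (math.log10(m_hi) - math.log10(m_lo)) / (n - 1)
    return [round(10 ** (math.log10(m_lo) + i * step), 4) for i in range(n)]

# An illustrative plan: six points from 0.05 up to 1 mol/kg.
print(log_spaced_molalities(0.05, 1.0, 6))
```

Because the log-intervals are equal, each molality is a fixed multiple of the previous one, which concentrates the experimental points where the activity coefficient changes fastest, in dilute solution.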

  4. Passive quantum error correction of linear optics networks through error averaging

    Science.gov (United States)

    Marshman, Ryan J.; Lund, Austin P.; Rohde, Peter P.; Ralph, Timothy C.

    2018-02-01

    We propose and investigate a method of error detection and noise correction for bosonic linear networks using a method of unitary averaging. The proposed error averaging does not rely on ancillary photons or control and feedforward correction circuits, remaining entirely passive in its operation. We construct a general mathematical framework for this technique and then give a series of proof-of-principle examples, including numerical analysis. Two methods for the construction of averaging are then compared to determine the most effective manner of implementation and to probe the related error thresholds. Finally, we discuss some of the potential uses of this scheme.

  5. Bifurcated states of the error-field-induced magnetic islands

    International Nuclear Information System (INIS)

    Zheng, L.-J.; Li, B.; Hazeltine, R.D.

    2008-01-01

    We find that the formation of the magnetic islands due to error fields shows bifurcation when neoclassical effects are included. The bifurcation, which follows from including bootstrap current terms in a description of island growth in the presence of error fields, provides a path to avoid the island-width pole in the classical description. The theory offers possible theoretical explanations for the recent DIII-D and JT-60 experimental observations concerning confinement deterioration with increasing error field

  6. Diagnostic errors related to acute abdominal pain in the emergency department.

    Science.gov (United States)

    Medford-Davis, Laura; Park, Elizabeth; Shlamovitz, Gil; Suliburk, James; Meyer, Ashley N D; Singh, Hardeep

    2016-04-01

    Diagnostic errors in the emergency department (ED) are harmful and costly. We reviewed a selected high-risk cohort of patients presenting to the ED with abdominal pain to evaluate for possible diagnostic errors and associated process breakdowns. We conducted a retrospective chart review of ED patients >18 years at an urban academic hospital. A computerised 'trigger' algorithm identified patients possibly at high risk for diagnostic errors to facilitate selective record reviews. The trigger determined patients to be at high risk because they: (1) presented to the ED with abdominal pain, and were discharged home and (2) had a return ED visit within 10 days that led to a hospitalisation. Diagnostic errors were defined as missed opportunities to make a correct or timely diagnosis based on the evidence available during the first ED visit, regardless of patient harm, and included errors that involved both ED and non-ED providers. Errors were determined by two independent record reviewers followed by team consensus in cases of disagreement. Diagnostic errors occurred in 35 of 100 high-risk cases. Over two-thirds had breakdowns involving the patient-provider encounter (most commonly history-taking or ordering additional tests) and/or follow-up and tracking of diagnostic information (most commonly follow-up of abnormal test results). The most frequently missed diagnoses were gallbladder pathology (n=10) and urinary infections (n=5). Diagnostic process breakdowns in ED patients with abdominal pain most commonly involved history-taking, ordering insufficient tests in the patient-provider encounter and problems with follow-up of abnormal test results.

  7. Dose error analysis for a scanned proton beam delivery system

    International Nuclear Information System (INIS)

    Coutrakon, G; Wang, N; Miller, D W; Yang, Y

    2010-01-01

    All particle beam scanning systems are subject to dose delivery errors due to errors in position, energy and intensity of the delivered beam. In addition, finite scan speeds, beam spill non-uniformities, and delays in detector, detector electronics and magnet responses will all contribute errors in delivery. In this paper, we present dose errors for an 8 × 10 × 8 cm³ target of uniform water equivalent density with an 8 cm spread-out Bragg peak and a prescribed dose of 2 Gy. Lower doses are also analyzed and presented later in the paper. Beam energy errors and errors due to limitations of scanning system hardware have been included in the analysis. By using Gaussian shaped pencil beams derived from measurements in the research room of the James M Slater Proton Treatment and Research Center at Loma Linda, CA and executing treatment simulations multiple times, statistical dose errors have been calculated in each 2.5 mm cubic voxel in the target. These errors were calculated by delivering multiple treatments to the same volume and calculating the rms variation in delivered dose at each voxel in the target. The variations in dose were the result of random beam delivery errors such as proton energy, spot position and intensity fluctuations. The results show that with reasonable assumptions of random beam delivery errors, the spot scanning technique yielded an rms dose error in each voxel less than 2% or 3% of the 2 Gy prescribed dose. These calculated errors are within acceptable clinical limits for radiation therapy.
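
    The repeated-delivery method, simulate many treatments with random per-delivery fluctuations and take the rms variation of the delivered dose, can be sketched for a single voxel. This is a toy illustration, not the paper's simulation: the function name, the 1% lumped fluctuation, and the seed are assumptions:

```python
import random
import statistics

def rms_dose_error(n_deliveries, d_presc=2.0, rel_sd=0.01, seed=1):
    """Simulate repeated deliveries of the prescribed dose with random
    fluctuations (energy, spot position, intensity lumped into one
    relative sd) and return the rms deviation as % of prescription."""
    rng = random.Random(seed)
    doses = [rng.gauss(d_presc, rel_sd * d_presc) for _ in range(n_deliveries)]
    return 100.0 * statistics.pstdev(doses) / d_presc

print(f"{rms_dose_error(10_000):.2f}% rms error")
```

In the paper this computation is repeated independently for every 2.5 mm voxel; the toy single-voxel version simply recovers the assumed relative fluctuation.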

  8. Error framing effects on performance: cognitive, motivational, and affective pathways.

    Science.gov (United States)

    Steele-Johnson, Debra; Kalinoski, Zachary T

    2014-01-01

    Our purpose was to examine whether positive error framing, that is, making errors salient and cuing individuals to see errors as useful, can benefit learning when task exploration is constrained. Recent research has demonstrated the benefits of a newer approach to training, that is, error management training, that includes the opportunity to actively explore the task and framing errors as beneficial to learning complex tasks (Keith & Frese, 2008). Other research has highlighted the important role of errors in on-the-job learning in complex domains (Hutchins, 1995). Participants (N = 168) from a large undergraduate university performed a class scheduling task. Results provided support for a hypothesized path model in which error framing influenced cognitive, motivational, and affective factors which in turn differentially affected performance quantity and quality. Within this model, error framing had significant direct effects on metacognition and self-efficacy. Our results suggest that positive error framing can have beneficial effects even when tasks cannot be structured to support extensive exploration. Whereas future research can expand our understanding of error framing effects on outcomes, results from the current study suggest that positive error framing can facilitate learning from errors in real-time performance of tasks.

  9. Learning from errors in radiology to improve patient safety.

    Science.gov (United States)

    Saeed, Shaista Afzal; Masroor, Imrana; Shafqat, Gulnaz

    2013-10-01

    To determine the views and practices of trainees and consultant radiologists about error reporting. Cross-sectional survey. Radiology trainees and consultant radiologists in four tertiary care hospitals in Karachi were approached in the second quarter of 2011. Participants were asked about their grade, sub-specialty interest, whether they kept a record/log of their errors (defined as a mistake that has management implications for the patient), the number of errors they made in the last 12 months, and the predominant type of error. They were also asked about the details of their department's error meetings. All duly completed questionnaires were included in the study, while those with incomplete information were excluded. A total of 100 radiologists participated in the survey; 34 were consultants and 66 were trainees, with a wide range of sub-specialty interests such as CT, ultrasound, etc. Of the 100 responders, 49 kept a personal record/log of their errors. When asked to recall the approximate number of errors made in the last 12 months, 73 (73%) of participants gave a varied response, with 1-5 errors mentioned by the majority, i.e. 47 (64.5%). Most of the radiologists (97%) claimed to receive information about their errors through multiple sources, such as morbidity/mortality meetings, patients' follow-up, and through colleagues and consultants. Perceptual errors (66, 66%) were the predominant error type reported. Regular occurrence of error meetings and attendance at three or more error meetings in the last 12 months were reported by 35% of participants; the majority of these described the atmosphere of the error meetings as informative and comfortable (n = 22, 62.8%). It is of utmost importance to develop a culture of learning from mistakes by conducting error meetings and improving the process of recording and addressing errors to enhance patient safety.

  10. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards-an evolutionary beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.
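
    The signed prediction error and its role in learning can be illustrated with a Rescorla-Wagner-style update, a standard textbook learning rule, not code from this paper; the learning rate of 0.1 is an arbitrary choice:

```python
def rpe(received, predicted):
    """Reward prediction error: positive when reward exceeds the
    prediction, zero when fully predicted, negative when it falls short."""
    return received - predicted

def update_prediction(predicted, received, alpha=0.1):
    """Move the prediction a fraction alpha of the error toward the
    received reward (Rescorla-Wagner-style learning)."""
    return predicted + alpha * rpe(received, predicted)

# A fully predicted reward produces no error and hence no learning.
p = 1.0
assert rpe(1.0, p) == 0.0
# A surprising reward produces a positive error that shrinks as the
# prediction converges on the new reward.
for _ in range(50):
    p = update_prediction(p, 2.0)
print(round(p, 3))
```

This mirrors the dopamine response pattern described above: activation for positive errors, baseline for fully predicted rewards, depression for negative errors.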

  11. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
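
    The multisim idea can be sketched for a single observable: every MC run draws all systematic parameters at once from their assumed normal distributions, and the spread of the results estimates the combined variance. The function name, parameter values, and the single-number observable are illustrative assumptions:

```python
import random
import statistics

def multisim_variance(n_runs, sys_sds, stat_sd, seed=2):
    """Multisim-style estimate: each MC run varies every systematic
    parameter simultaneously (normal draws), plus statistical noise;
    the variance of the results estimates the total variance."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_runs):
        shift = sum(rng.gauss(0.0, sd) for sd in sys_sds)  # all systematics at once
        results.append(shift + rng.gauss(0.0, stat_sd))    # statistical noise
    return statistics.pvariance(results)

# Three systematic sources; independent normals add in quadrature,
# so the expected total variance is 1² + 2² + 2² + 0.5² = 9.25.
est = multisim_variance(20_000, sys_sds=[1.0, 2.0, 2.0], stat_sd=0.5)
print(round(est, 1))
```

A unisim-style estimate would instead run each parameter separately at one standard deviation and sum the individual variance contributions.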

  12. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  13. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim method was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  14. Error Modelling for Multi-Sensor Measurements in Infrastructure-Free Indoor Navigation

    Directory of Open Access Journals (Sweden)

    Laura Ruotsalainen

    2018-02-01

    Full Text Available The long-term objective of our research is to develop a method for infrastructure-free simultaneous localization and mapping (SLAM) and context recognition for tactical situational awareness. Localization will be realized by propagating motion measurements obtained using a monocular camera, a foot-mounted Inertial Measurement Unit (IMU), sonar, and a barometer. Due to the size and weight requirements set by tactical applications, Micro-Electro-Mechanical (MEMS) sensors will be used. However, MEMS sensors suffer from biases and drift errors that may substantially decrease the position accuracy. Therefore, sophisticated error modelling and implementation of integration algorithms are key for providing a viable result. Algorithms used for multi-sensor fusion have traditionally been different versions of Kalman filters. However, Kalman filters are based on the assumptions that the state propagation and measurement models are linear with additive Gaussian noise. Neither of the assumptions is correct for tactical applications, especially for dismounted soldiers or rescue personnel. Therefore, error modelling and implementation of advanced fusion algorithms are essential for providing a viable result. Our approach is to use particle filtering (PF), which is a sophisticated option for integrating measurements emerging from pedestrian motion having non-Gaussian error characteristics. This paper discusses the statistical modelling of the measurement errors from inertial sensors and vision-based heading and translation measurements to include the correct error probability density functions (pdf) in the particle filter implementation. Then, model fitting is used to verify the pdfs of the measurement errors. Based on the deduced error models of the measurements, a particle filtering method is developed to fuse all this information, where the weights of each particle are computed based on the specific models derived. The performance of the developed method is...
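
    As a concrete sketch of the weighting step described above, here is a minimal 1-D particle filter in which the particle weights come from a heavy-tailed measurement-error pdf. The Cauchy density, the motion model, and all numbers are invented stand-ins for illustration, not the error models deduced in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def cauchy_pdf(x, scale=0.5):
    """Heavy-tailed measurement-error pdf (a stand-in for a deduced,
    non-Gaussian sensor error model)."""
    z = x / scale
    return 1.0 / (np.pi * scale * (1.0 + z * z))

# Simulate 1-D pedestrian positions and heavy-tailed range measurements.
n_steps, n_particles = 50, 500
true_pos = np.cumsum(rng.normal(1.0, 0.1, n_steps))      # ground truth
meas = true_pos + rng.standard_cauchy(n_steps) * 0.5     # noisy measurements

particles = np.zeros(n_particles)
estimates = []
for z in meas:
    # Propagate with the motion model (step length ~ N(1, 0.2^2)).
    particles += rng.normal(1.0, 0.2, n_particles)
    # Weight each particle by the measurement-error pdf, then normalize.
    w = cauchy_pdf(z - particles)
    w /= w.sum()
    estimates.append(np.sum(w * particles))
    # Systematic resampling to avoid weight degeneracy.
    pts = (rng.random() + np.arange(n_particles)) / n_particles
    idx = np.minimum(np.searchsorted(np.cumsum(w), pts), n_particles - 1)
    particles = particles[idx]

rmse = np.sqrt(np.mean((np.array(estimates) - true_pos) ** 2))
print(f"particle-filter RMSE: {rmse:.2f}")
```

    Swapping `cauchy_pdf` for a density fitted to real sensor residuals is exactly where the statistical error modelling described in the abstract would plug in.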

  15. Errors in ADAS-cog administration and scoring may undermine clinical trials results.

    Science.gov (United States)

    Schafer, K; De Santi, S; Schneider, L S

    2011-06-01

    The Alzheimer's Disease Assessment Scale - cognitive subscale (ADAS-cog) is the most widely used cognitive outcome measure in AD trials. Although errors in administration and scoring have been suggested as factors masking accurate estimates and potential effects of treatments, there have been few formal examinations of errors with the ADAS-cog. We provided ADAS-cog administration training using standard methods to raters who were designated as experienced, potential raters by sponsors or contract research organizations for two clinical trials. Training included 1-hour sessions on test administration, scoring, and question periods, and required that raters individually view and score a model ADAS-cog administration. Raters' scores were compared to the criterion scores established for the model administration. A total of 108 errors were made by 80.6% of the 72 raters; 37.5% made 1 error, 25.0% made 2 errors and 18.0% made 3 or more. Errors were made in all ADAS-cog subsections. The most common were in word finding difficulty (67% of the raters), word recognition (22%), and orientation (22%). For the raters who made 1, 2, or ≥ 3 errors the ADAS-cog score was 17.5 (95% CI, 17.3 - 17.8), 17.8 (17.0 - 18.5), and 18.8 (17.6 - 20.0), respectively, compared to the criterion score of 18.3. ADAS-cog means differed significantly, and the variances were more than twice as large, between those who made errors on word finding and those who did not, 17.6 (SD=1.4) vs. 18.8 (SD=0.9), respectively (χ² = 37.2), with implications for ADAS-cog scores and clinical trials outcomes. These errors may undermine detection of medication effects by contributing both to a biased point estimate and increased variance of the outcome.
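
    The practical cost of the inflated variance reported above (SD 1.4 for raters making word-finding errors vs. 0.9 for those who did not) can be sketched with the standard two-sample size calculation. The 1-point treatment effect `delta` is hypothetical, and the formula is the usual normal-approximation one, not anything taken from the study itself.

```python
import math

def n_per_arm(sigma, delta, z_alpha=1.959964, z_power=0.841621):
    """Per-arm sample size for a two-sample comparison (normal
    approximation): n = 2 * (z_alpha + z_power)^2 * sigma^2 / delta^2,
    with defaults for two-sided alpha = 0.05 and 80% power."""
    return math.ceil(2 * (z_alpha + z_power) ** 2 * sigma ** 2 / delta ** 2)

delta = 1.0  # hypothetical treatment effect in ADAS-cog points
print(n_per_arm(0.9, delta))   # SD of error-free raters
print(n_per_arm(1.4, delta))   # SD of raters making word-finding errors
```

    Under these assumptions the rater errors more than double the number of subjects needed per arm for the same power, which is one way the abstract's warning about undermined trial outcomes cashes out.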

  16. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  17. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  18. Comparison of computer workstation with film for detecting setup errors

    International Nuclear Information System (INIS)

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

    Purpose/Objective: Workstations designed for portal image interpretation by radiation oncologists provide image displays and image processing and analysis tools that differ significantly compared with the standard clinical practice of inspecting portal films on a light box. An implied but unproved assumption associated with the clinical implementation of workstation technology is that patient care is improved, or at least not adversely affected. The purpose of this investigation was to conduct observer studies to test the hypothesis that radiation oncologists can detect setup errors using a workstation at least as accurately as when following standard clinical practice. Materials and Methods: A workstation, PortFolio, was designed for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools to enhance images; align cross-hairs, field edges, and anatomic structures on reference and acquired images; measure distances and angles; and view registered images superimposed on one another. In a well-designed and carefully controlled observer study, nine radiation oncologists, including attendings and residents, used PortFolio to detect setup errors in realistic digitally reconstructed portal (DRPR) images computed from the NLM visible human data using a previously described approach. Compared with actual portal images, where absolute truth is ill defined or unknown, the DRPRs contained known translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Twenty DRPRs with randomly induced errors were computed for each site. The induced errors were constrained to a plane at the isocenter of the target volume and perpendicular to the central axis of the treatment beam. Images used in the study were also printed on film. Observers interpreted the film-based images using standard clinical practice. The images were reviewed in eight sessions. During each session five images were...

  19. Error analysis of mathematical problems on TIMSS: A case of Indonesian secondary students

    Science.gov (United States)

    Priyani, H. A.; Ekawati, R.

    2018-01-01

    Indonesian students’ competence in solving mathematical problems is still considered weak, as indicated by the results of international assessments such as TIMSS. This might be caused by the various types of errors made. Hence, this study aimed at identifying students’ errors in solving mathematical problems in TIMSS in the topic of numbers, which is considered a fundamental concept in mathematics. This study applied descriptive qualitative analysis. The subjects were the three students who made the most errors on the test indicators, drawn from 34 eighth-grade students. Data were obtained through a paper-and-pencil test and student interviews. The error analysis indicated that in solving the Applying-level problem, students made operational errors. For the Reasoning-level problem, three types of errors were made: conceptual errors, operational errors and principal errors. Meanwhile, analysis of the causes of students’ errors showed that students did not comprehend the mathematical problems given.

  20. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  1. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision-making process. The study focused on the research question: what human errors can lead to decision failure during the evaluation of alternatives in the decision-making process? Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the alternative-evaluation step. The results o...

  2. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described. The steps involved in finding the correction factors with COMFORT-PLUS are outlined, and the procedure has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is intended to be used off-line to analyze actual measured data for any SLC system. One limitation on the application of this procedure is that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  3. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  4. Device including a contact detector

    DEFF Research Database (Denmark)

    2011-01-01

    The present invention relates to a probe for determining an electrical property of an area of a surface of a test sample, the probe being intended to be in a specific orientation relative to the test sample. The probe may comprise a supporting body defining a first surface. A plurality of cantilever arms (12) may extend from the supporting body in co-planar relationship with the first surface. The plurality of cantilever arms (12) may extend substantially parallel to each other, and each of the plurality of cantilever arms (12) may include an electrically conductive tip for contacting the area of the test sample by movement of the probe relative to the surface of the test sample into the specific orientation. The probe may further comprise a contact detector (14) extending from the supporting body, arranged so as to contact the surface of the test sample prior to any one of the plurality...

  5. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  6. Bootstrap-Based Improvements for Inference with Clustered Errors

    OpenAIRE

    Doug Miller; A. Colin Cameron; Jonah B. Gelbach

    2006-01-01

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject consid...
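
    A minimal illustration of the problem and one bootstrap remedy: the data below are simulated with a cluster-level regressor and strong within-cluster error correlation, and the resampling shown is the simple pairs cluster bootstrap (the paper's own proposals center on wild cluster bootstrap refinements, named here but not implemented). All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: 10 clusters ("few"), a cluster-level regressor, and
# errors that are strongly correlated within each cluster.
n_clusters, per_cluster = 10, 30
cluster = np.repeat(np.arange(n_clusters), per_cluster)
x = rng.normal(size=n_clusters)[cluster]                 # cluster-level x
u = (2.0 * rng.normal(size=n_clusters)[cluster]          # cluster shock
     + 0.5 * rng.normal(size=cluster.size))              # idiosyncratic noise
y = 1.0 + 0.5 * x + u

def ols_slope(xv, yv):
    X = np.column_stack([np.ones_like(xv), xv])
    return np.linalg.lstsq(X, yv, rcond=None)[0][1]

# Conventional iid OLS standard error (ignores the clustering).
X = np.column_stack([np.ones_like(x), x])
resid = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
se_iid = np.sqrt(resid @ resid / (x.size - 2) * np.linalg.inv(X.T @ X)[1, 1])

# Pairs cluster bootstrap: resample whole clusters with replacement so
# the within-cluster dependence is preserved in every draw.
boot = []
for _ in range(500):
    picks = rng.integers(0, n_clusters, n_clusters)
    idx = np.concatenate([np.flatnonzero(cluster == c) for c in picks])
    boot.append(ols_slope(x[idx], y[idx]))
se_cluster = np.std(boot, ddof=1)

print(f"iid SE: {se_iid:.3f}   cluster-bootstrap SE: {se_cluster:.3f}")
```

    The gap between the two standard errors is the over-rejection the abstract warns about: the iid SE treats 300 observations as independent when there are effectively only 10 clusters of information.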

  7. Electronic prescribing reduces prescribing error in public hospitals.

    Science.gov (United States)

    Shawahna, Ramzi; Rahman, Nisar-Ur; Ahmad, Mahmood; Debray, Marcel; Yliperttula, Marjo; Declèves, Xavier

    2011-11-01

    To examine the incidence of prescribing errors in a main public hospital in Pakistan and to assess the impact of introducing an electronic prescribing system on the reduction of their incidence. Medication errors are persistent in today's healthcare system. The impact of electronic prescribing on reducing errors has not been tested in the developing world. Prospective review of medication and discharge medication charts before and after the introduction of an electronic inpatient record and prescribing system. Inpatient records (n = 3300) and 1100 discharge medication sheets were reviewed for prescribing errors before and after the installation of the electronic prescribing system in 11 wards. Medications (13,328 and 14,064) were prescribed for inpatients, among which 3008 and 1147 prescribing errors were identified, giving an overall error rate of 22.6% and 8.2% during paper-based and electronic prescribing, respectively. Medications (2480 and 2790) were prescribed for discharge patients, among which 418 and 123 errors were detected, giving an overall error rate of 16.9% and 4.4% during paper-based and electronic prescribing, respectively. Electronic prescribing has a significant effect on the reduction of prescribing errors. Prescribing errors are commonplace in Pakistan public hospitals. The study evaluated the impact of introducing electronic inpatient records and electronic prescribing in the reduction of prescribing errors in a public hospital in Pakistan. © 2011 Blackwell Publishing Ltd.
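
    The reported figures can be recomputed directly from the counts in the abstract; the two-proportion z statistic below is our own back-of-the-envelope check, not necessarily the test used in the study.

```python
import math

def rate_and_z(err1, n1, err2, n2):
    """Error rates for two groups plus a pooled two-proportion
    z statistic (normal approximation)."""
    p1, p2 = err1 / n1, err2 / n2
    pooled = (err1 + err2) / (n1 + n2)
    z = (p1 - p2) / math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1, p2, z

# Counts reported in the abstract: paper-based vs. electronic prescribing.
p_paper, p_elec, z_inpt = rate_and_z(3008, 13328, 1147, 14064)
print(f"inpatient: {p_paper:.1%} vs {p_elec:.1%}, z = {z_inpt:.1f}")

p_paper_d, p_elec_d, z_disc = rate_and_z(418, 2480, 123, 2790)
print(f"discharge: {p_paper_d:.1%} vs {p_elec_d:.1%}, z = {z_disc:.1f}")
```

    The recomputed rates match the abstract's 22.6% vs. 8.2% (inpatient) and 16.9% vs. 4.4% (discharge), and the differences are far beyond chance under the normal approximation.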

  8. Reducing diagnostic errors in medicine: what's the goal?

    Science.gov (United States)

    Graber, Mark; Gordon, Ruthanna; Franklin, Nancy

    2002-10-01

    This review considers the feasibility of reducing or eliminating the three major categories of diagnostic errors in medicine: "No-fault errors" occur when the disease is silent, presents atypically, or mimics something more common. These errors will inevitably decline as medical science advances, new syndromes are identified, and diseases can be detected more accurately or at earlier stages. These errors can never be eradicated, unfortunately, because new diseases emerge, tests are never perfect, patients are sometimes noncompliant, and physicians will inevitably, at times, choose the most likely diagnosis over the correct one, illustrating the concept of necessary fallibility and the probabilistic nature of choosing a diagnosis. "System errors" play a role when diagnosis is delayed or missed because of latent imperfections in the health care system. These errors can be reduced by system improvements, but can never be eliminated because these improvements lag behind and degrade over time, and each new fix creates the opportunity for novel errors. Tradeoffs also guarantee that system errors will persist when resources are merely shifted. "Cognitive errors" reflect misdiagnosis from faulty data collection or interpretation, flawed reasoning, or incomplete knowledge. The limitations of human processing and the inherent biases in using heuristics guarantee that these errors will persist. Opportunities exist, however, for improving the cognitive aspect of diagnosis by adopting system-level changes (e.g., second opinions, decision-support systems, enhanced access to specialists) and by training designed to improve cognition or cognitive awareness. Diagnostic error can be substantially reduced, but never eradicated.

  9. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  10. Internal Error Propagation in Explicit Runge--Kutta Methods

    KAUST Repository

    Ketcheson, David I.

    2014-09-11

    In practical computation with Runge--Kutta methods, the stage equations are not satisfied exactly, due to roundoff errors, algebraic solver errors, and so forth. We show by example that propagation of such errors within a single step can have catastrophic effects for otherwise practical and well-known methods. We perform a general analysis of internal error propagation, emphasizing that it depends significantly on how the method is implemented. We show that for a fixed method, essentially any set of internal stability polynomials can be obtained by modifying the implementation details. We provide bounds on the internal error amplification constants for some classes of methods with many stages, including strong stability preserving methods and extrapolation methods. These results are used to prove error bounds in the presence of roundoff or other internal errors.
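
    A toy version of the internal-error experiment described above: inject a perturbation into each stage of classical RK4 on the linear test problem y' = λy and measure how it surfaces in the step result. This illustrates the amplification idea only; the paper's analysis covers general methods and implementation choices, not just this example.

```python
# One step of classical RK4 on y' = lam * y, with an optional perturbation
# injected into each stage value (modelling roundoff or solver error).
def rk4_step(y, h, lam, stage_pert=(0.0, 0.0, 0.0, 0.0)):
    k1 = lam * (y + stage_pert[0])
    k2 = lam * (y + 0.5 * h * k1 + stage_pert[1])
    k3 = lam * (y + 0.5 * h * k2 + stage_pert[2])
    k4 = lam * (y + h * k3 + stage_pert[3])
    return y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

y0, h, lam = 1.0, 0.1, -2.0
clean = rk4_step(y0, h, lam)

# Internal amplification: how a small error in stage j shows up in y_{n+1}.
eps = 1e-8
for j in range(4):
    pert = [0.0] * 4
    pert[j] = eps
    amp = (rk4_step(y0, h, lam, tuple(pert)) - clean) / eps
    print(f"stage {j + 1} error amplification: {amp:+.4f}")
```

    Because the test problem is linear, these amplification factors are exactly the internal stability coefficients of the method at z = hλ; the catastrophic cases in the paper arise when such coefficients become enormous for methods with many stages.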

  11. New seismograph includes filters

    Energy Technology Data Exchange (ETDEWEB)

    1979-11-02

    The new Nimbus ES-1210 multichannel signal enhancement seismograph from EG&G Geometrics has recently been redesigned to include multimode signal filters on each amplifier. The ES-1210F is a shallow exploration seismograph for near-subsurface exploration such as depth-to-bedrock, geological hazard location, mineral exploration, and landslide investigations.

  12. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research addressing error causation in construction projects, design errors remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is propagated and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in congruence to prevent design errors from occurring and so ensure that safety and project performance are ameliorated. Copyright © 2011. Published by Elsevier Ltd.

  13. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration -Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  14. Comparing different error conditions in filmdosemeter evaluation

    International Nuclear Information System (INIS)

    Roed, H.; Figel, M.

    2005-01-01

    Full text: In the evaluation of a film used as a personal dosemeter it may be necessary to mark the dosemeters when possible error conditions are recognized. These are errors that might have an influence on the ability to make a correct evaluation of the dose value, and include broken, contaminated or improperly handled dosemeters. In this project we have examined how two services (NIRH, GSF), from two different countries within the EU, mark their dosemeters. The services differ greatly in size, customer composition and issuing period, but both use film as their primary dosemeter. The possible error conditions examined here include dosemeters that were contaminated, dosemeters exposed to moisture or light, and missing filters in the dosemeter badges, among others. The data were collected for the year 2003, in which NIRH evaluated approximately 50 thousand filmdosemeters and GSF about one million. For each error condition, the percentage of filmdosemeters affected is calculated, as well as the distribution among different employee categories, i.e. industry, medicine, research, veterinary and other. For some error conditions we see a common pattern, while for others there is a large discrepancy between the services. The differences and possible explanations are discussed. The results of the investigation may motivate further comparisons between the different monitoring services in Europe. (author)

  15. Technological Advancements and Error Rates in Radiation Therapy Delivery

    Energy Technology Data Exchange (ETDEWEB)

    Margalit, Danielle N., E-mail: dmargalit@partners.org [Harvard Radiation Oncology Program, Boston, MA (United States); Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States); Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K. [Harvard Cancer Consortium and Brigham and Women's Hospital/Dana Farber Cancer Institute, Boston, MA (United States)]

    2011-11-15

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)-conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women's Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher's exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01-0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08-0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique
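
    The effect estimate in the abstract can be reproduced in form with a Wald confidence interval for the odds ratio. Reading "nine of 19 errors" as 19 IMRT errors in total is our interpretation, and since the split of the 241,546 fractions between techniques is not reported, the fraction counts below are an assumed split chosen to match the quoted rates (~0.03% vs. ~0.07%).

```python
import math

# Assumed split (not reported in the abstract) matching the quoted rates.
imrt_err, imrt_frac = 19, 60_000
conv_err, conv_frac = 136, 181_546

a, b = imrt_err, imrt_frac - imrt_err      # IMRT: errors / error-free
c, d = conv_err, conv_frac - conv_err      # 3D/conventional

or_hat = (a * d) / (b * c)                       # odds ratio, IMRT vs. 3D
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)     # Wald SE of log(OR)
lo = math.exp(math.log(or_hat) - 1.96 * se_log_or)
hi = math.exp(math.log(or_hat) + 1.96 * se_log_or)
print(f"OR = {or_hat:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    Under this assumed split the odds ratio is about 0.42 with a confidence interval excluding 1, qualitatively consistent with the abstract's finding that IMRT had a lower reported error rate than 3D/conventional RT.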

  16. Technological Advancements and Error Rates in Radiation Therapy Delivery

    International Nuclear Information System (INIS)

    Margalit, Danielle N.; Chen, Yu-Hui; Catalano, Paul J.; Heckman, Kenneth; Vivenzio, Todd; Nissen, Kristopher; Wolfsberger, Luciant D.; Cormack, Robert A.; Mauch, Peter; Ng, Andrea K.

    2011-01-01

    Purpose: Technological advances in radiation therapy (RT) delivery have the potential to reduce errors via increased automation and built-in quality assurance (QA) safeguards, yet may also introduce new types of errors. Intensity-modulated RT (IMRT) is an increasingly used technology that is more technically complex than three-dimensional (3D)–conformal RT and conventional RT. We determined the rate of reported errors in RT delivery among IMRT and 3D/conventional RT treatments and characterized the errors associated with the respective techniques to improve existing QA processes. Methods and Materials: All errors in external beam RT delivery were prospectively recorded via a nonpunitive error-reporting system at Brigham and Women’s Hospital/Dana Farber Cancer Institute. Errors are defined as any unplanned deviation from the intended RT treatment and are reviewed during monthly departmental quality improvement meetings. We analyzed all reported errors since the routine use of IMRT in our department, from January 2004 to July 2009. Fisher’s exact test was used to determine the association between treatment technique (IMRT vs. 3D/conventional) and specific error types. Effect estimates were computed using logistic regression. Results: There were 155 errors in RT delivery among 241,546 fractions (0.06%), and none were clinically significant. IMRT was commonly associated with errors in machine parameters (nine of 19 errors) and data entry and interpretation (six of 19 errors). IMRT was associated with a lower rate of reported errors compared with 3D/conventional RT (0.03% vs. 0.07%, p = 0.001) and specifically fewer accessory errors (odds ratio, 0.11; 95% confidence interval, 0.01–0.78) and setup errors (odds ratio, 0.24; 95% confidence interval, 0.08–0.79). Conclusions: The rate of errors in RT delivery is low. The types of errors differ significantly between IMRT and 3D/conventional RT, suggesting that QA processes must be uniquely adapted for each technique

  17. Analytic device including nanostructures

    KAUST Repository

    Di Fabrizio, Enzo M.; Fratalocchi, Andrea; Totero Gongora, Juan Sebastian; Coluccio, Maria Laura; Candeloro, Patrizio; Cuda, Gianni

    2015-01-01

    A device for detecting an analyte in a sample comprising: an array including a plurality of pixels, each pixel including a nanochain comprising a first nanostructure, a second nanostructure, and a third nanostructure, wherein the size of the first nanostructure is larger than that of the second nanostructure, and the size of the second nanostructure is larger than that of the third nanostructure, and wherein the first nanostructure, the second nanostructure, and the third nanostructure are positioned on a substrate such that when the nanochain is excited by an energy, an optical field between the second nanostructure and the third nanostructure is stronger than an optical field between the first nanostructure and the second nanostructure, wherein the array is configured to receive a sample; and a detector arranged to collect spectral data from a plurality of pixels of the array.

  18. Saskatchewan resources. [including uranium]

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    The production of chemicals and minerals for the chemical industry in Saskatchewan is featured, with some discussion of resource taxation. The commodities mentioned include potash, fatty amines, uranium, heavy oil, sodium sulfate, chlorine, sodium hydroxide, sodium chlorate, and bentonite. Following the successful outcome of the Cluff Lake inquiry, the uranium industry is booming. Some developments and production figures for Gulf Minerals, Amok, Cenex, and Eldorado are mentioned.

  19. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  20. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Background: This study was conducted to explore the prevalence of medical error underreporting and its associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals affiliated with Hamadan University of Medical Sciences in Hamedan, Iran were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were the lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher among men (71.4%), those aged 40-50 years (67.6%), less-experienced personnel (58.7%), those with an educational level of MSc (87.5%), and staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and the associated factors, which may help healthcare organizations improve medical error reporting as an essential component of patient safety enhancement.