WorldWideScience

Sample records for means standard errors

  1. Standard deviation and standard error of the mean.

    Science.gov (United States)

    Lee, Dong Kyu; In, Junyong; Lee, Sangseok

    2015-06-01

    In most clinical and experimental studies, the standard deviation (SD) and the estimated standard error of the mean (SEM) are used to present the characteristics of sample data and to explain statistical analysis results. However, some authors occasionally confuse the SD and the SEM in the medical literature. Because the calculations behind the SD and the SEM involve different statistical inferences, each has its own meaning. The SD is the dispersion of data in a normal distribution; in other words, the SD indicates how accurately the mean represents the sample data. The meaning of the SEM, by contrast, involves statistical inference based on the sampling distribution: the SEM is the SD of the theoretical distribution of the sample means (the sampling distribution). Although either the SD or the SEM can be used to describe data and statistical results, one should know which is appropriate in a given setting. We aim to elucidate the distinctions between the SD and the SEM and to provide proper guidelines for using each to summarize data and describe statistical results.
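
    To make the distinction concrete, here is a minimal Python sketch (with hypothetical sample values) that computes the SD of a sample and the SEM as the SD divided by the square root of the sample size:

```python
import numpy as np

# Hypothetical sample of 20 measurements (e.g., systolic blood pressure, mmHg)
sample = np.array([118, 125, 130, 122, 119, 127, 133, 121, 124, 129,
                   126, 120, 131, 123, 128, 117, 135, 122, 126, 124], dtype=float)

n = sample.size
sd = sample.std(ddof=1)          # sample standard deviation: spread of the observations
sem = sd / np.sqrt(n)            # standard error of the mean: spread of the sample mean

print(f"n = {n}, mean = {sample.mean():.2f}")
print(f"SD  = {sd:.2f}  (describes variability of individual observations)")
print(f"SEM = {sem:.2f}  (describes precision of the estimated mean)")
```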

  2. What to use to express the variability of data: Standard deviation or standard error of mean?

    OpenAIRE

    Barde, Mohini P.; Barde, Prajakt J.

    2012-01-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As reade...

  3. What to use to express the variability of data: Standard deviation or standard error of mean?

    Science.gov (United States)

    Barde, Mohini P; Barde, Prajakt J

    2012-07-01

    Statistics plays a vital role in biomedical research. It helps present data precisely and draw meaningful conclusions. While presenting data, one should be aware of using adequate statistical measures. In biomedical journals, the Standard Error of the Mean (SEM) and the Standard Deviation (SD) are used interchangeably to express variability, though they measure different parameters. The SEM quantifies uncertainty in the estimate of the mean, whereas the SD indicates the dispersion of the data from the mean. As readers are generally interested in the variability within the sample, descriptive data should be summarized with the SD. Use of the SEM should be limited to computing confidence intervals (CIs), which measure the precision of the population estimate. Journals can avoid such errors by requiring authors to adhere to their guidelines.
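
    As a rough illustration of the recommendation above (hypothetical data; normal-theory t interval), the SD is reported to describe the sample while the SEM is used only to build a 95% confidence interval for the mean:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=100.0, scale=15.0, size=30)   # hypothetical data

n = sample.size
mean = sample.mean()
sd = sample.std(ddof=1)
sem = sd / np.sqrt(n)

# 95% CI for the population mean using the t distribution
t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (mean - t_crit * sem, mean + t_crit * sem)

print(f"Describe the data:  mean = {mean:.1f}, SD = {sd:.1f}")
print(f"Precision of mean:  95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```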

  4. Climatologies from satellite measurements: the impact of orbital sampling on the standard error of the mean

    Directory of Open Access Journals (Sweden)

    M. Toohey

    2013-04-01

    Climatologies of atmospheric observations are often produced by binning measurements according to latitude and calculating zonal means. The uncertainty in these climatological means is characterised by the standard error of the mean (SEM). However, the usual estimator of the SEM, i.e., the sample standard deviation divided by the square root of the sample size, holds only for uncorrelated, randomly sampled measurements. Measurements of the atmospheric state along a satellite orbit cannot always be considered independent because (a) the time-space interval between two nearest observations is often smaller than the typical scale of variations in the atmospheric state, and (b) the regular time-space sampling pattern of a satellite instrument deviates strongly from random sampling. We have developed a numerical experiment in which global chemical fields from a chemistry climate model are sampled according to real sampling patterns of satellite-borne instruments. As case studies, the model fields are sampled using the sampling patterns of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) and the Atmospheric Chemistry Experiment Fourier-Transform Spectrometer (ACE-FTS) satellite instruments. Through an iterative subsampling technique, and by incorporating information on the random errors of the MIPAS and ACE-FTS measurements, we produce empirical estimates of the standard error of monthly mean zonal mean model O3 in 5° latitude bins. We find that the classic SEM estimator is generally a conservative estimate of the SEM, i.e., the empirical SEM is often less than or approximately equal to the classic estimate. Exceptions occur only when natural variability is larger than the random measurement error, and specifically in instances where the zonal sampling distribution shows non-uniformity with a zonal structure similar to variations in the sampled field, leading to maximum sensitivity to arbitrary phase shifts between the sample distribution and
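
    The contrast between the classic SEM estimator and an empirically estimated SEM can be illustrated with a toy experiment (an AR(1) series standing in for correlated along-orbit sampling; this is not the authors' MIPAS/ACE-FTS setup and all parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

def ar1_series(n, phi, sigma):
    """Generate an AR(1) series to mimic correlated along-orbit sampling."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

n_samples, n_realizations, phi = 200, 2000, 0.8

means, classic_sems = [], []
for _ in range(n_realizations):
    x = ar1_series(n_samples, phi, sigma=1.0)
    means.append(x.mean())
    classic_sems.append(x.std(ddof=1) / np.sqrt(n_samples))

empirical_sem = np.std(means, ddof=1)       # spread of the mean across realizations
mean_classic_sem = np.mean(classic_sems)    # average classic s/sqrt(n) estimate

print(f"empirical SEM    = {empirical_sem:.3f}")
print(f"mean classic SEM = {mean_classic_sem:.3f}")
# For positively correlated samples without measurement noise the classic estimate
# is typically too small; the paper finds that, once random measurement error
# dominates, the classic estimator instead tends to be conservative.
```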

  5. Standard Errors for Matrix Correlations.

    Science.gov (United States)

    Ogasawara, Haruhiko

    1999-01-01

    Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)

  6. A Note on Standard Deviation and Standard Error

    Science.gov (United States)

    Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth

    2010-01-01

    Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.

  7. Error and its meaning in forensic science.

    Science.gov (United States)

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  8. Characterization of XR-RV3 GafChromic® films in standard laboratory and in clinical conditions and means to evaluate uncertainties and reduce errors

    Energy Technology Data Exchange (ETDEWEB)

    Farah, J., E-mail: jad.farah@irsn.fr; Clairand, I.; Huet, C. [External Dosimetry Department, Institut de Radioprotection et de Sûreté Nucléaire (IRSN), BP-17, 92260 Fontenay-aux-Roses (France); Trianni, A. [Medical Physics Department, Udine University Hospital S. Maria della Misericordia (AOUD), p.le S. Maria della Misericordia, 15, 33100 Udine (Italy); Ciraj-Bjelac, O. [Vinca Institute of Nuclear Sciences (VINCA), P.O. Box 522, 11001 Belgrade (Serbia); De Angelis, C. [Department of Technology and Health, Istituto Superiore di Sanità (ISS), Viale Regina Elena 299, 00161 Rome (Italy); Delle Canne, S. [Fatebenefratelli San Giovanni Calibita Hospital (FBF), UOC Medical Physics - Isola Tiberina, 00186 Rome (Italy); Hadid, L.; Waryn, M. J. [Radiology Department, Hôpital Jean Verdier (HJV), Avenue du 14 Juillet, 93140 Bondy Cedex (France); Jarvinen, H.; Siiskonen, T. [Radiation and Nuclear Safety Authority (STUK), P.O. Box 14, 00881 Helsinki (Finland); Negri, A. [Veneto Institute of Oncology (IOV), Via Gattamelata 64, 35124 Padova (Italy); Novák, L. [National Radiation Protection Institute (NRPI), Bartoškova 28, 140 00 Prague 4 (Czech Republic); Pinto, M. [Istituto Nazionale di Metrologia delle Radiazioni Ionizzanti (ENEA-INMRI), C.R. Casaccia, Via Anguillarese 301, I-00123 Santa Maria di Galeria (RM) (Italy); Knežević, Ž. [Ruđer Bošković Institute (RBI), Bijenička c. 54, 10000 Zagreb (Croatia)

    2015-07-15

    Purpose: To investigate the optimal use of XR-RV3 GafChromic® films to assess patient skin dose in interventional radiology while addressing the means to reduce uncertainties in dose assessment. Methods: XR-Type R GafChromic films have been shown to represent the most efficient and suitable solution to determine patient skin dose in interventional procedures. As film dosimetry can be associated with high uncertainty, this paper presents the EURADOS WG 12 initiative to carry out a comprehensive study of film characteristics with a multisite approach. The considered sources of uncertainties include scanner, film, and fitting-related errors. The work focused on studying film behavior with clinical high-dose-rate pulsed beams (previously unavailable in the literature) together with reference standard laboratory beams. Results: First, the performance analysis of six different scanner models has shown that scan uniformity perpendicular to the lamp motion axis and long-term stability are the main sources of scanner-related uncertainties. These could induce errors of up to 7% on the film readings unless regularly checked and corrected. Typically, scan uniformity correction matrices and reading normalization to the scanner-specific and daily background reading should be done. In addition, the analysis on multiple film batches has shown that XR-RV3 films have generally good uniformity within one batch (<1.5%), require 24 h to stabilize after the irradiation and their response is roughly independent of dose rate (<5%). However, XR-RV3 films showed large variations (up to 15%) with radiation quality both in standard laboratory and in clinical conditions. As such, and prior to conducting patient skin dose measurements, it is mandatory to choose the appropriate calibration beam quality depending on the characteristics of the x-ray systems that will be used clinically. In addition, yellow side film irradiations should be preferentially used since they showed a lower

  9. High incorrect use of the standard error of the mean (SEM) in original articles in three cardiovascular journals evaluated for 2012.

    Science.gov (United States)

    Wullschleger, Marcel; Aghlmandi, Soheila; Egger, Marcel; Zwahlen, Marcel

    2014-01-01

    In biomedical journals authors sometimes use the standard error of the mean (SEM) for data description, which has been called inappropriate or incorrect. To assess the frequency of incorrect use of SEM in articles in three selected cardiovascular journals. All original journal articles published in 2012 in Cardiovascular Research, Circulation: Heart Failure and Circulation Research were assessed by two assessors for inappropriate use of SEM when providing descriptive information of empirical data. We also assessed whether the authors state in the methods section that the SEM will be used for data description. Of 441 articles included in this survey, 64% (282 articles) contained at least one instance of incorrect use of the SEM, with two journals having a prevalence above 70% and "Circulation: Heart Failure" having the lowest value (27%). In 81% of articles with incorrect use of SEM, the authors had explicitly stated that they use the SEM for data description and in 89% SEM bars were also used instead of 95% confidence intervals. Basic science studies had a 7.4-fold higher level of inappropriate SEM use (74%) than clinical studies (10%). The selection of the three cardiovascular journals was based on a subjective initial impression of observing inappropriate SEM use. The observed results are not representative for all cardiovascular journals. In three selected cardiovascular journals we found a high level of inappropriate SEM use and explicit methods statements to use it for data description, especially in basic science studies. To improve on this situation, these and other journals should provide clear instructions to authors on how to report descriptive information of empirical data.

  10. Forecasting Error Calculation with Mean Absolute Deviation and Mean Absolute Percentage Error

    Science.gov (United States)

    Khair, Ummul; Fahmi, Hasanul; Hakim, Sarudin Al; Rahim, Robbi

    2017-12-01

    Prediction using a forecasting method is one of the most important activities for an organization, and the selection of an appropriate forecasting method is also important; the percentage error of a method matters most, however, if decision makers are to act on the forecasts. Using the Mean Absolute Deviation and the Mean Absolute Percentage Error to calculate the error of the least-squares method resulted in a percentage error of 9.77%, and it was decided that the least-squares method is suitable for time series and trend data.
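
    For reference, the two error measures in the title can be computed as follows (hypothetical actual and forecast values; the least-squares trend fit itself is not reproduced here):

```python
import numpy as np

actual   = np.array([120, 135, 128, 150, 162, 158], dtype=float)  # hypothetical series
forecast = np.array([118, 130, 133, 146, 165, 155], dtype=float)  # e.g., a least-squares trend

errors = actual - forecast
mad  = np.mean(np.abs(errors))                   # Mean Absolute Deviation
mape = np.mean(np.abs(errors / actual)) * 100.0  # Mean Absolute Percentage Error (%)

print(f"MAD  = {mad:.2f}")
print(f"MAPE = {mape:.2f}%")
```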

  11. Interpolation Error Estimates for Mean Value Coordinates over Convex Polygons.

    Science.gov (United States)

    Rand, Alexander; Gillette, Andrew; Bajaj, Chandrajit

    2013-08-01

    In a similar fashion to estimates shown for Harmonic, Wachspress, and Sibson coordinates in [Gillette et al., AiCM, to appear], we prove interpolation error estimates for the mean value coordinates on convex polygons suitable for standard finite element analysis. Our analysis is based on providing a uniform bound on the gradient of the mean value functions for all convex polygons of diameter one satisfying certain simple geometric restrictions. This work makes rigorous an observed practical advantage of the mean value coordinates: unlike Wachspress coordinates, the gradient of the mean value coordinates does not become large as interior angles of the polygon approach π.

  12. Mean value estimates of the error terms of Lehmer problem

    Indian Academy of Sciences (India)

    Mean value estimates of the error terms of the Lehmer problem. DONGMEI REN and YAMING ... For further properties of N(a,p) in [6], he studied the mean square value of the error term E(a,p) = N(a,p) − (1/2)(p − 1) ... [1] Apostol Tom M, Introduction to Analytic Number Theory (New York: Springer-Verlag) (1976). [2] Guy R K ...

  13. Some Results on Mean Square Error for Factor Score Prediction

    Science.gov (United States)

    Krijnen, Wim P.

    2006-01-01

    For the confirmatory factor model a series of inequalities is given with respect to the mean square error (MSE) of three main factor score predictors. The eigenvalues of these MSE matrices are a monotonic function of the eigenvalues of the matrix γ_ρ = θ^(1/2) λ_ρ′ ψ_ρ^(…

  14. On the Correspondence between Mean Forecast Errors and Climate Errors in CMIP5 Models

    Energy Technology Data Exchange (ETDEWEB)

    Ma, H. -Y.; Xie, S.; Klein, S. A.; Williams, K. D.; Boyle, J. S.; Bony, S.; Douville, H.; Fermepin, S.; Medeiros, B.; Tyteca, S.; Watanabe, M.; Williamson, D.

    2014-02-01

    The present study examines the correspondence between short- and long-term systematic errors in five atmospheric models by comparing the 16 five-day hindcast ensembles from the Transpose Atmospheric Model Intercomparison Project II (Transpose-AMIP II) for July–August 2009 (short term) to the climate simulations from phase 5 of the Coupled Model Intercomparison Project (CMIP5) and AMIP for the June–August mean conditions of the years 1979–2008 (long term). Because the short-term hindcasts were conducted with identical climate models used in the CMIP5/AMIP simulations, one can diagnose over what time scale systematic errors in these climate simulations develop, thus yielding insights into their origin through a seamless modeling approach. The analysis suggests that most systematic errors of precipitation, clouds, and radiation processes in the long-term climate runs are present by day 5 in ensemble average hindcasts in all models. Errors typically saturate after a few days of hindcasts with amplitudes comparable to the climate errors, and the impacts of initial conditions on the simulated ensemble mean errors are relatively small. This robust bias correspondence suggests that these systematic errors across different models likely are initiated by model parameterizations since the atmospheric large-scale states remain close to observations in the first 2–3 days. However, biases associated with model physics can have impacts on the large-scale states by day 5, such as zonal winds, 2-m temperature, and sea level pressure, and the analysis further indicates a good correspondence between short- and long-term biases for these large-scale states. Therefore, improving individual model parameterizations in the hindcast mode could lead to the improvement of most climate models in simulating their climate mean state and potentially their future projections.

  15. Estimating the Standard Error of the Judging in a modified-Angoff Standards Setting Procedure

    Directory of Open Access Journals (Sweden)

    Robert G. MacCann

    2004-03-01

    For a modified Angoff standards setting procedure, two methods of calculating the standard error of the judging were compared. The Central Limit Theorem (CLT) method is easy to calculate and uses readily available data. It estimates the variance of mean cut scores as a function of the variance of cut scores within a judging group, based on the independent judgements at Stage 1 of the process. Its theoretical drawback is that it is unable to take account of the effects of collaboration among the judges at Stages 2 and 3. The second method, an application of equipercentile (EQP) equating, relies on the selection of very large stable candidatures and the standardisation of the raw score distributions to remove effects associated with test difficulty. The standard error estimates were then empirically obtained from the mean cut score variation observed over a five year period. For practical purposes, the two methods gave reasonable agreement, with the CLT method working well for the top band, the band that attracts most public attention. For some bands in English and Mathematics, the CLT standard error was smaller than the EQP estimate, suggesting the CLT method be used with caution as an approximate guide only.
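
    The CLT method amounts to treating the Stage 1 judgements as independent, so the standard error of the mean cut score is the SD of the judges' cut scores divided by the square root of the number of judges. A minimal sketch with hypothetical Stage 1 cut scores:

```python
import numpy as np

# Hypothetical Stage 1 cut scores proposed independently by 12 judges
cut_scores = np.array([61, 64, 58, 66, 63, 60, 65, 62, 59, 67, 61, 64], dtype=float)

n_judges = cut_scores.size
mean_cut = cut_scores.mean()
se_judging = cut_scores.std(ddof=1) / np.sqrt(n_judges)   # CLT-based standard error

print(f"mean cut score = {mean_cut:.1f}")
print(f"SE of judging  = {se_judging:.2f}")
# Note: this ignores the collaboration at Stages 2 and 3, which is exactly the
# theoretical drawback the authors point out.
```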

  16. Semiparametric Bernstein–von Mises for the error standard deviation

    OpenAIRE

    Jonge, de, R.; Zanten, van, J.H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a hierarchical spline-based prior on the regression function and an independent prior on the error variance, can simultaneously achieve adaptive, rate-optimal estimation of a smooth, multivariate regr...

  17. Semiparametric Bernstein–von Mises for the error standard deviation

    NARCIS (Netherlands)

    Jonge, de R.; Zanten, van J.H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein–von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a

  18. Semiparametric Bernstein-von Mises for the error standard deviation

    NARCIS (Netherlands)

    de Jonge, R.; van Zanten, H.

    2013-01-01

    We study Bayes procedures for nonparametric regression problems with Gaussian errors, giving conditions under which a Bernstein-von Mises result holds for the marginal posterior distribution of the error standard deviation. We apply our general results to show that a single Bayes procedure using a

  19. Augmented GNSS Differential Corrections Minimum Mean Square Error Estimation Sensitivity to Spatial Correlation Modeling Errors

    Directory of Open Access Journals (Sweden)

    Nazelie Kassabian

    2014-06-01

    Railway signaling is a safety system that has evolved over the last couple of centuries towards autonomous functionality. Recently, great effort has been devoted in this field to the use and exploitation of Global Navigation Satellite System (GNSS) signals and GNSS augmentation systems, in view of lower railway track equipment and maintenance costs; this is a priority for sustaining the investments needed to modernize the local and regional lines, most of which lack automatic train protection systems and are still manually operated. The objective of this paper is to assess the sensitivity of the Linear Minimum Mean Square Error (LMMSE) algorithm to modeling errors in the spatial correlation function that characterizes true pseudorange Differential Corrections (DCs). This study is inspired by the railway application; however, it applies to all transportation systems, including the road sector, that need to be complemented by an augmentation system in order to deliver accurate and reliable positioning with integrity specifications. A vector of noisy pseudorange DC measurements is simulated, assuming a Gauss-Markov model with a decay rate parameter inversely proportional to the correlation distance that exists between two points of a certain environment. The LMMSE algorithm is applied to this vector to estimate the true DCs, and the estimation error is compared to the noise added during simulation. The results show that for large enough ratios of the correlation distance to the Reference Station (RS) separation distance, the LMMSE brings considerable advantage in terms of estimation error accuracy and precision. Conversely, the LMMSE algorithm may deteriorate the quality of the DC measurements whenever the ratio falls below a certain threshold.
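
    A toy version of the estimation problem (not the paper's simulator; all distances, variances, and the correlation distance below are hypothetical) can be written in a few lines: build a Gauss-Markov covariance for the true DCs, add white measurement noise, and apply the LMMSE estimator x̂ = C_x (C_x + C_n)⁻¹ y.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D geometry: reference stations every 50 km along a line
positions = np.arange(0.0, 500.0, 50.0)             # km
d = np.abs(positions[:, None] - positions[None, :])

sigma_dc, corr_dist, sigma_noise = 1.0, 150.0, 0.5   # assumed parameters
C_x = sigma_dc**2 * np.exp(-d / corr_dist)           # Gauss-Markov spatial correlation
C_n = sigma_noise**2 * np.eye(len(positions))        # white measurement noise

# Simulate true DCs and their noisy measurements
x = rng.multivariate_normal(np.zeros(len(positions)), C_x)
y = x + rng.multivariate_normal(np.zeros(len(positions)), C_n)

# LMMSE estimate (zero-mean case): x_hat = C_x (C_x + C_n)^{-1} y
x_hat = C_x @ np.linalg.solve(C_x + C_n, y)

print("raw measurement RMSE:", np.sqrt(np.mean((y - x) ** 2)))
print("LMMSE estimate RMSE :", np.sqrt(np.mean((x_hat - x) ** 2)))
# Mis-specifying corr_dist in C_x (relative to the value used to generate x)
# is the kind of modeling error whose impact the paper quantifies.
```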

  20. Mean Bias in Seasonal Forecast Model and ENSO Prediction Error.

    Science.gov (United States)

    Kim, Seon Tae; Jeong, Hye-In; Jin, Fei-Fei

    2017-07-20

    This study uses retrospective forecasts made using an APEC Climate Center seasonal forecast model to investigate the cause of errors in predicting the amplitude of El Niño Southern Oscillation (ENSO)-driven sea surface temperature variability. When utilizing Bjerknes coupled stability (BJ) index analysis, enhanced errors in ENSO amplitude with forecast lead times are found to be well represented by those in the growth rate estimated by the BJ index. ENSO amplitude forecast errors are most strongly associated with the errors in both the thermocline slope response and surface wind response to forcing over the tropical Pacific, leading to errors in thermocline feedback. This study concludes that upper ocean temperature bias in the equatorial Pacific, which becomes more intense with increasing lead times, is a possible cause of forecast errors in the thermocline feedback and thus in ENSO amplitude.

  1. Quantitative autoradiography of semiconductor materials by means of diffused phosphorus standards

    International Nuclear Information System (INIS)

    Treutler, H.C.; Freyer, K.

    1983-01-01

    A suitable standard sample based on phosphorus was developed and tested for the quantitative autoradiography of elements of interest in semiconductor technology. With the aid of silicon disks with a phosphorus concentration of 6×10^17 atoms·cm^-2, the error of the quantitative autoradiographic method is determined. The relative mean error of the density measurement is at best ±4%; the relative mean error of the determination of phosphorus concentration by use of an error-free standard sample is about ±15%. The method will be extended to other elements by use of this standard sample of phosphorus. (author)

  2. Decreasing patient identification band errors by standardizing processes.

    Science.gov (United States)

    Walley, Susan Chu; Berger, Stephanie; Harris, Yolanda; Gallizzi, Gina; Hayes, Leslie

    2013-04-01

    Patient identification (ID) bands are an essential component of patient ID. Quality improvement methodology has been applied as a model to reduce ID band errors, although previous studies have not addressed standardization of ID bands. Our specific aim was to decrease ID band errors by 50% in a 12-month period. The Six Sigma DMAIC (define, measure, analyze, improve, and control) quality improvement model was the framework for this study. ID bands at a tertiary care pediatric hospital were audited from January 2011 to January 2012, with continued audits to June 2012 to confirm the new process was in control. After analysis, the major improvement strategy implemented was standardization of the styles of ID bands and labels. Additional interventions included educational initiatives regarding the new ID band processes and dissemination of institutional and nursing unit data. A total of 4556 ID bands were audited, with a preimprovement ID band error average rate of 9.2%. Significant variation in the ID band process was observed, including styles of ID bands. Interventions were focused on standardization of the ID band and labels. The ID band error rate improved to 5.2% in 9 months (95% confidence interval: 2.5-5.5), a statistically significant reduction in ID band error rates. This decrease in ID band error rates was maintained over the subsequent 8 months.

  3. Errors as a Means of Reducing Impulsive Food Choice.

    Science.gov (United States)

    Sellitto, Manuela; di Pellegrino, Giuseppe

    2016-06-05

    Nowadays, the increasing incidence of eating disorders due to poor self-control has given rise to increased obesity and other chronic weight problems, and ultimately, to reduced life expectancy. The capacity to refrain from automatic responses is usually high in situations in which making errors is highly likely. The protocol described here aims at reducing imprudent preference in women during hypothetical intertemporal choices about appetitive food by associating it with errors. First, participants undergo an error task where two different edible stimuli are associated with two different error likelihoods (high and low). Second, they make intertemporal choices about the two edible stimuli, separately. As a result, this method decreases the discount rate for future amounts of the edible reward that cued higher error likelihood, selectively. This effect is under the influence of the self-reported hunger level. The present protocol demonstrates that errors, well known as motivationally salient events, can induce the recruitment of cognitive control, thus being ultimately useful in reducing impatient choices for edible commodities.

  4. Conditional Standard Errors of Measurement for Scale Scores.

    Science.gov (United States)

    Kolen, Michael J.; And Others

    1992-01-01

    A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)

  5. Radiological error: analysis, standard setting, targeted instruction and teamworking

    International Nuclear Information System (INIS)

    FitzGerald, Richard

    2005-01-01

    Diagnostic radiology does not have objective benchmarks for acceptable levels of missed diagnoses [1]. Until now, data collection of radiological discrepancies has been very time consuming. The culture within the specialty did not encourage it. However, public concern about patient safety is increasing. There have been recent innovations in compiling radiological interpretive discrepancy rates which may facilitate radiological standard setting. However standard setting alone will not optimise radiologists' performance or patient safety. We must use these new techniques in radiological discrepancy detection to stimulate greater knowledge sharing, targeted instruction and teamworking among radiologists. Not all radiological discrepancies are errors. Radiological discrepancy programmes must not be abused as an instrument for discrediting individual radiologists. Discrepancy rates must not be distorted as a weapon in turf battles. Radiological errors may be due to many causes and are often multifactorial. A systems approach to radiological error is required. Meaningful analysis of radiological discrepancies and errors is challenging. Valid standard setting will take time. Meanwhile, we need to develop top-up training, mentoring and rehabilitation programmes. (orig.)

  6. Error detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3

    Science.gov (United States)

    Fujiwara, Toru; Kasami, Tadao; Lin, Shu

    1989-09-01

    The error-detecting capabilities of the shortened Hamming codes adopted for error detection in IEEE Standard 802.3 are investigated. These codes are also used for error detection in the data link layer of the Ethernet, a local area network. The weight distributions for various code lengths are calculated to obtain the probability of undetectable error and that of detectable error for a binary symmetric channel with bit-error rate between 0.00001 and 1/2.
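
    For context, once a code's weight distribution A_i is known, the probability of undetected error on a binary symmetric channel with bit-error rate p is P_ud(p) = Σ_{i≥1} A_i p^i (1−p)^(n−i). The sketch below applies this formula to the (7,4) Hamming code's known weight distribution as a small stand-in; the paper's codes are the much longer shortened Hamming (CRC) codes of IEEE 802.3:

```python
def undetected_error_prob(weights, p):
    """P_ud(p) = sum_i A_i * p^i * (1-p)^(n-i) on a binary symmetric channel.

    `weights` maps Hamming weight i -> number of codewords A_i (i >= 1).
    """
    n = max(weights)
    return sum(A * p**i * (1 - p)**(n - i) for i, A in weights.items())

# Toy example: the (7,4) Hamming code, whose nonzero weight distribution is known
hamming_7_4 = {3: 7, 4: 7, 7: 1}

for p in (1e-5, 1e-3, 1e-1, 0.5):
    print(f"p = {p:<7g}  P_ud = {undetected_error_prob(hamming_7_4, p):.3e}")
```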

  7. [Roaming through methodology. XXXVIII. Common misconceptions involving standard deviation and standard error]

    NARCIS (Netherlands)

    Mokkink, H.G.A.

    2002-01-01

    Standard deviation and standard error have a clear mutual relationship, but at the same time they differ strongly in the type of information they supply. This can lead to confusion and misunderstandings. Standard deviation describes the variability in a sample of measures of a variable, for instance

  8. A Generalizability Theory Approach to Standard Error Estimates for Bookmark Standard Settings

    Science.gov (United States)

    Lee, Guemin; Lewis, Daniel M.

    2008-01-01

    The bookmark standard-setting procedure is an item response theory-based method that is widely implemented in state testing programs. This study estimates standard errors for cut scores resulting from bookmark standard settings under a generalizability theory model and investigates the effects of different universes of generalization and error…

  9. Accounting Standards: What Do They Mean?

    Science.gov (United States)

    Farley, Jerry B.

    1992-01-01

    Four recent and proposed changes in national school accounting standards have significant policy implications for colleges and universities. These changes address (1) standards regarding postemployment benefits other than pensions, (2) depreciation, (3) financial report format, and (4) contributions and pledges made to the school. Governing boards…

  10. Analysis of S-box in Image Encryption Using Root Mean Square Error Method

    Science.gov (United States)

    Hussain, Iqtadar; Shah, Tariq; Gondal, Muhammad Asif; Mahmood, Hasan

    2012-07-01

    The use of substitution boxes (S-boxes) in encryption applications has proven to be an effective nonlinear component in creating confusion and randomness. The S-box is evolving and many variants appear in literature, which include advanced encryption standard (AES) S-box, affine power affine (APA) S-box, Skipjack S-box, Gray S-box, Lui J S-box, residue prime number S-box, Xyi S-box, and S8 S-box. These S-boxes have algebraic and statistical properties which distinguish them from each other in terms of encryption strength. In some circumstances, the parameters from algebraic and statistical analysis yield results which do not provide clear evidence in distinguishing an S-box for an application to a particular set of data. In image encryption applications, the use of S-boxes needs special care because the visual analysis and perception of a viewer can sometimes identify artifacts embedded in the image. In addition to existing algebraic and statistical analysis already used for image encryption applications, we propose an application of root mean square error technique, which further elaborates the results and enables the analyst to vividly distinguish between the performances of various S-boxes. While the use of the root mean square error analysis in statistics has proven to be effective in determining the difference in original data and the processed data, its use in image encryption has shown promising results in estimating the strength of the encryption method. In this paper, we show the application of the root mean square error analysis to S-box image encryption. The parameters from this analysis are used in determining the strength of S-boxes
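
    The root-mean-square error analysis referred to above reduces, for images, to an element-wise comparison of the plaintext and encrypted pixel arrays. A minimal sketch (random arrays standing in for a real image and its S-box-encrypted counterpart):

```python
import numpy as np

rng = np.random.default_rng(7)

original  = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in plaintext image
encrypted = rng.integers(0, 256, size=(256, 256)).astype(float)  # stand-in ciphertext image

rmse = np.sqrt(np.mean((original - encrypted) ** 2))
print(f"RMSE between original and encrypted image: {rmse:.2f}")
# For 8-bit images, an RMSE approaching the value expected for two independent
# uniform images (roughly 104) indicates the ciphertext deviates strongly from
# the plaintext.
```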

  11. Error modelling of quantum Hall array resistance standards

    Science.gov (United States)

    Marzano, Martina; Oe, Takehiko; Ortolano, Massimo; Callegaro, Luca; Kaneko, Nobu-Hisa

    2018-04-01

    Quantum Hall array resistance standards (QHARSs) are integrated circuits composed of interconnected quantum Hall effect elements that allow the realization of virtually arbitrary resistance values. In recent years, techniques were presented to efficiently design QHARS networks. An open problem is that of the evaluation of the accuracy of a QHARS, which is affected by contact and wire resistances. In this work, we present a general and systematic procedure for the error modelling of QHARSs, which is based on modern circuit analysis techniques and Monte Carlo evaluation of the uncertainty. As a practical example, this method of analysis is applied to the characterization of a 1 MΩ QHARS developed by the National Metrology Institute of Japan. Software tools are provided to apply the procedure to other arrays.

  12. Gas measuring apparatus with standardization means, and method therefor

    International Nuclear Information System (INIS)

    Typpo, P.M.

    1980-01-01

    An apparatus and a method for standardizing a gas measuring device has a source capable of emitting a beam of radiation aligned to impinge a detector. A housing means encloses the beam. The housing means has a plurality of apertures permitting the gas to enter the housing means, to intercept the beam, and to exit from the housing means. The device further comprises means for closing the apertures and a means for purging said gas from the housing means

  13. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of a TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  14. The impact of a sustainability constraint on the mean-tracking error efficient frontier

    NARCIS (Netherlands)

    Boudt, K.M.R.; Cornelissen, J.; Croux, C.

    2013-01-01

    Most socially responsible investment funds combine a sustainability objective with a tracking error constraint. We characterize the impact of a sustainability constraint on the mean-tracking error efficient frontier and illustrate this on a universe of US stocks for the period 2003-2010. © 2013

  15. On the mean squared error of the ridge estimator of the covariance and precision matrix

    NARCIS (Netherlands)

    van Wieringen, Wessel N.

    2017-01-01

    For a suitably chosen ridge penalty parameter, the ridge regression estimator uniformly dominates the maximum likelihood regression estimator in terms of the mean squared error. Analogous results for the ridge maximum likelihood estimators of covariance and precision matrix are presented.

  16. Physical Activity Stories: Assessing the "Meaning Standard" in Physical Education

    Science.gov (United States)

    Johnson, Tyler G.

    2016-01-01

    The presence of the "meaning standard" in both national and state content standards suggests that professionals consider it an important outcome of a quality physical education program. However, only 10 percent of states require an assessment to examine whether students achieve this standard. The purpose of this article is to introduce…

  17. Correcting the Standard Errors of 2-Stage Residual Inclusion Estimators for Mendelian Randomization Studies.

    Science.gov (United States)

    Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A

    2017-11-01

    Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.

  18. The two errors of using the within-subject standard deviation (WSD) as the standard error of a reliable change index.

    Science.gov (United States)

    Maassen, Gerard H

    2010-08-01

    In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment makes WSD too small when practice effects are absent. Then, too many individuals will be designated reliably changed. Second, WSD can grow unlimitedly to the extent that differential practice effects occur. This can even make RCI(WSD) unable to detect any reliable change.

  19. Systematic error in the precision measurement of the mean wavelength of a nearly monochromatic neutron beam due to geometric errors

    Energy Technology Data Exchange (ETDEWEB)

    Coakley, K.J., E-mail: kevin.coakley@nist.go [National Institute of Standards and Technology, 325 Broadway, Boulder, CO 80305 (United States); Dewey, M.S. [National Institute of Standards and Technology, Gaithersburg, MD (United States); Yue, A.T. [University of Tennessee, Knoxville, TN (United States); Laptev, A.B. [Tulane University, New Orleans, LA (United States)

    2009-12-11

    Many experiments at neutron scattering facilities require nearly monochromatic neutron beams. In such experiments, one must accurately measure the mean wavelength of the beam. We seek to reduce the systematic uncertainty of this measurement to approximately 0.1%. This work is motivated mainly by an effort to improve the measurement of the neutron lifetime determined from data collected in a 2003 in-beam experiment performed at NIST. More specifically, we seek to reduce systematic uncertainty by calibrating the neutron detector used in this lifetime experiment. This calibration requires simultaneous measurement of the responses of both the neutron detector used in the lifetime experiment and an absolute black neutron detector to a highly collimated nearly monochromatic beam of cold neutrons, as well as a separate measurement of the mean wavelength of the neutron beam. The calibration uncertainty will depend on the uncertainty of the measured efficiency of the black neutron detector and the uncertainty of the measured mean wavelength. The mean wavelength of the beam is measured by Bragg diffracting the beam from a nearly perfect silicon analyzer crystal. Given the rocking curve data and knowledge of the directions of the rocking axis and the normal to the scattering planes in the silicon crystal, one determines the mean wavelength of the beam. In practice, the direction of the rocking axis and the normal to the silicon scattering planes are not known exactly. Based on Monte Carlo simulation studies, we quantify systematic uncertainties in the mean wavelength measurement due to these geometric errors. Both theoretical and empirical results are presented and compared.

  20. Robust Least-Squares Support Vector Machine With Minimization of Mean and Variance of Modeling Error.

    Science.gov (United States)

    Lu, Xinjiang; Liu, Wenbo; Zhou, Chuang; Huang, Minghui

    2017-06-13

    The least-squares support vector machine (LS-SVM) is a popular data-driven modeling method and has been successfully applied to a wide range of applications. However, it has some disadvantages, including being ineffective at handling non-Gaussian noise as well as being sensitive to outliers. In this paper, a robust LS-SVM method is proposed and is shown to have more reliable performance when modeling a nonlinear system under conditions where Gaussian or non-Gaussian noise is present. The construction of a new objective function allows for a reduction of the mean of the modeling error as well as the minimization of its variance, and it does not constrain the mean of the modeling error to zero. This differs from the traditional LS-SVM, which uses a worst-case scenario approach in order to minimize the modeling error and constrains the mean of the modeling error to zero. In doing so, the proposed method takes the modeling error distribution information into consideration and is thus less conservative and more robust in regards to random noise. A solving method is then developed in order to determine the optimal parameters for the proposed robust LS-SVM. An additional analysis indicates that the proposed LS-SVM gives a smaller weight to a large-error training sample and a larger weight to a small-error training sample, and is thus more robust than the traditional LS-SVM. The effectiveness of the proposed robust LS-SVM is demonstrated using both artificial and real life cases.

  1. Estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean.

    Science.gov (United States)

    Schillaci, Michael A; Schillaci, Mario E

    2009-02-01

    The use of small sample sizes in human and primate evolutionary research is commonplace. Estimating how well small samples represent the underlying population, however, is not commonplace. Because the accuracy of determinations of taxonomy, phylogeny, and evolutionary process is dependent upon how well the study sample represents the population of interest, characterizing the uncertainty, or potential error, associated with analyses of small sample sizes is essential. We present a method for estimating the probability that the sample mean is within a desired fraction of the standard deviation of the true mean using small samples, allowing researchers to determine post hoc the probability that their sample is a meaningful approximation of the population parameter. We tested the method using a large craniometric data set commonly used by researchers in the field. Given our results, we suggest that sample estimates of the population mean can be reasonable and meaningful even when based on small, and perhaps even very small, sample sizes.
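
    Under a normal model, the probability that the sample mean lies within a fraction k of the population standard deviation of the true mean is 2Φ(k√n) − 1, because the sample mean has standard deviation σ/√n. The sketch below computes this quantity for a few illustrative n and k; whether this matches the authors' exact formulation is an assumption.

```python
import numpy as np
from scipy.stats import norm

def prob_within_fraction(n, k):
    """P(|sample mean - true mean| <= k * sigma) under a normal model, sample size n."""
    return 2.0 * norm.cdf(k * np.sqrt(n)) - 1.0

for n in (5, 10, 20):
    for k in (0.25, 0.5, 1.0):
        print(f"n = {n:2d}, k = {k:.2f}: P = {prob_within_fraction(n, k):.3f}")
```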

  2. Towards reporting standards for neuropsychological study results: A proposal to minimize communication errors with standardized qualitative descriptors for normalized test scores.

    Science.gov (United States)

    Schoenberg, Mike R; Rum, Ruba S

    2017-11-01

    Rapid, clear and efficient communication of neuropsychological results is essential to benefit patient care. Errors in communication are a leading cause of medical errors; nevertheless, there remains a lack of consistency in how neuropsychological scores are communicated. A major limitation in the communication of neuropsychological results is the inconsistent use of qualitative descriptors for standardized test scores and the use of vague terminology. A PubMed search from 1 Jan 2007 to 1 Aug 2016 was conducted to identify guidelines or consensus statements for the description and reporting of qualitative terms to communicate neuropsychological test scores. The review found the use of confusing and overlapping terms to describe various ranges of percentile standardized test scores. In response, we propose a simplified set of qualitative descriptors for normalized test scores (Q-Simple) as a means to reduce errors in communicating test results. The Q-Simple qualitative terms are: 'very superior', 'superior', 'high average', 'average', 'low average', 'borderline' and 'abnormal/impaired'. A case example illustrates the proposed Q-Simple qualitative classification system for communicating neuropsychological results for neurosurgical planning. The Q-Simple qualitative descriptor system is intended as a means to improve and standardize communication of standardized neuropsychological test scores. Further research is needed to evaluate neuropsychological communication errors. Conveying the clinical implications of neuropsychological results in a manner that minimizes the risk of communication errors is a quintessential component of evidence-based practice. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2015-01-01

    In this work we consider the problem of feature enhancement for noise-robust automatic speech recognition (ASR). We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features, which is based on a minimum number of well-established, theoretically consistent … state-of-the-art MFCC feature enhancement algorithms within this class of algorithms, while theoretically suboptimal or based on theoretically inconsistent assumptions, perform close to optimally in the MMSE sense.

  4. Development of an analysis rule of diagnosis error for standard method of human reliability analysis

    International Nuclear Information System (INIS)

    Jeong, W. D.; Kang, D. I.; Jeong, K. S.

    2003-01-01

    This paper presents the status of development of the Korean standard method for Human Reliability Analysis (HRA) and proposes a standard procedure and rules for the evaluation of diagnosis error probability. The quality of the KSNP HRA was evaluated using the requirements of the ASME PRA standard guideline, and the design requirements for the standard HRA method were defined. The analysis procedure and rules developed so far for analyzing diagnosis error probability are suggested as a part of the standard method. A study of comprehensive application was also performed to evaluate the suitability of the proposed rules.

  5. Standard Error Computations for Uncertainty Quantification in Inverse Problems: Asymptotic Theory vs. Bootstrapping.

    Science.gov (United States)

    Banks, H T; Holm, Kathleen; Robbins, Danielle

    2010-11-01

    We computationally investigate two approaches for uncertainty quantification in inverse problems for nonlinear parameter dependent dynamical systems. We compare the bootstrapping and asymptotic theory approaches for problems involving data with several noise forms and levels. We consider both constant variance absolute error data and relative error which produces non-constant variance data in our parameter estimation formulations. We compare and contrast parameter estimates, standard errors, confidence intervals, and computational times for both bootstrapping and asymptotic theory methods.
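
    As a compact illustration of the comparison (not the authors' dynamical-systems examples; the model, noise level, and sample size below are made up), one can fit a simple exponential decay and compare asymptotic-theory standard errors from the estimated covariance matrix with bootstrap standard errors obtained by resampling residuals:

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)

def model(t, a, k):
    return a * np.exp(-k * t)

# Synthetic data with constant-variance absolute error
t = np.linspace(0, 5, 40)
true_a, true_k, sigma = 2.0, 0.7, 0.05
y = model(t, true_a, true_k) + rng.normal(0, sigma, t.size)

# Asymptotic-theory standard errors from the estimated covariance matrix
popt, pcov = curve_fit(model, t, y, p0=[1.0, 1.0])
se_asymptotic = np.sqrt(np.diag(pcov))

# Bootstrap standard errors (resampling residuals)
residuals = y - model(t, *popt)
boot_estimates = []
for _ in range(1000):
    y_boot = model(t, *popt) + rng.choice(residuals, size=t.size, replace=True)
    p_boot, _ = curve_fit(model, t, y_boot, p0=popt)
    boot_estimates.append(p_boot)
se_bootstrap = np.std(boot_estimates, axis=0, ddof=1)

print("estimates            :", popt)
print("asymptotic std errors:", se_asymptotic)
print("bootstrap std errors :", se_bootstrap)
```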

  6. Standardizing Medication Error Event Reporting in the U.S. Department of Defense

    National Research Council Canada - National Science Library

    Nosek, Ronald A., Jr; McMeekin, Judy; Rake, Geoffrey W

    2005-01-01

    ...) began an aggressive examination of medical errors and the strategies for minimizing them. A primary goal was the creation of a standardized medication event reporting system, including a central registry for the compilation of reported data...

  7. Mean-value identities as an opportunity for Monte Carlo error reduction.

    Science.gov (United States)

    Fernandez, L A; Martin-Mayor, V

    2009-05-01

    In the Monte Carlo simulation of both lattice field theories and of models of statistical mechanics, identities verified by exact mean values, such as Schwinger-Dyson equations, Guerra relations, Callen identities, etc., provide well-known and sensitive tests of thermalization bias as well as checks of pseudo-random-number generators. We point out that they can be further exploited as control variates to reduce statistical errors. The strategy is general, very simple, and almost costless in CPU time. The method is demonstrated in the two-dimensional Ising model at criticality, where the CPU gain factor lies between 2 and 4.
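
    The control-variate idea can be demonstrated outside the lattice setting with a generic Monte Carlo toy problem (all quantities below are illustrative): estimate E[f(U)] while exploiting a correlated quantity g(U) whose exact mean is known, standing in for an identity with an exactly known mean value.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 100_000

u = rng.random(N)
f = np.exp(u)          # target: E[exp(U)] = e - 1 (treated as unknown by the estimator)
g = u                  # control variate with exactly known mean E[U] = 0.5

# Plain Monte Carlo estimate
plain = f.mean()

# Control-variate estimate: subtract c * (mean(g) - E[g]) with the (estimated) optimal c
c = np.cov(f, g)[0, 1] / np.var(g, ddof=1)
controlled = f.mean() - c * (g.mean() - 0.5)

print(f"plain MC     : {plain:.5f}")
print(f"with control : {controlled:.5f}")
print(f"exact value  : {np.e - 1:.5f}")
# The controlled estimator has the same mean but a smaller variance, at almost no
# extra cost -- the error-reduction mechanism described in the abstract.
```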

  8. Standard error propagation in R-matrix model fitting for light elements

    International Nuclear Information System (INIS)

    Chen Zhenpeng; Zhang Rui; Sun Yeying; Liu Tingjin

    2003-01-01

    The error propagation features of R-matrix model fitting for the 7Li, 11B and 17O systems were researched systematically. Some laws of error propagation were revealed, an empirical formula P_j = U_j^c / U_j^d = K_j · S̄ · √m / √N for describing standard error propagation was established, and the most likely error ranges for the standard cross sections of 6Li(n,t), 10B(n,α0) and 10B(n,α1) were estimated. The problem that the standard errors of light-nuclei standard cross sections may be too small results mainly from the R-matrix model fitting, which is not perfect; yet R-matrix model fitting is the most reliable evaluation method for such data. The error propagation features of R-matrix model fitting for the compound nucleus systems of 7Li, 11B and 17O have been studied systematically, some laws of error propagation are revealed, and these findings are important in solving the problem mentioned above. Furthermore, these conclusions are suitable for similar model fitting in other scientific fields. (author)

  9. Improvement of least-squares collocation error estimates using local GOCE Tzz signal standard deviations

    DEFF Research Database (Denmark)

    Tscherning, Carl Christian

    2015-01-01

    outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second-order vertical derivative, Tzz, in the area covered by the satellite. The ratio between the nearly constant standard deviations of a predicted quantity (e.g., in a 25° × 25° area) and the standard deviations of Tzz in smaller cells (e.g., 1° × 1°) has been used as a scale factor in order to obtain more realistic error estimates. This procedure has been applied ...

  10. Optical pattern recognition architecture implementing the mean-square error correlation algorithm

    Science.gov (United States)

    Molley, Perry A.

    1991-01-01

    An optical architecture implementing the mean-square error correlation algorithm, MSE = Σ[I − R]², for discriminating the presence of a reference image R in an input image scene I by computing the mean-square error between a time-varying reference image signal s1(t) and a time-varying input image signal s2(t) includes a laser diode light source which is temporally modulated by a double-sideband suppressed-carrier source modulation signal I1(t) having the form I1(t) = A1[1 + √2·m1·s1(t)·cos(2πf0t)], and the modulated light output from the laser diode source is diffracted by an acousto-optic deflector. The resultant intensity of the +1 diffracted order from the acousto-optic device is given by I2(t) = A2[… + 2m2²·s2²(t) − 2√2·m2·s2(t)·cos(2πf0t)]. The time integration of the two signals I1(t) and I2(t) on the CCD detector plane produces the result R(τ) of the mean-square error, having the form R(τ) = A1·A2·{[T] + [2m2²·∫s2²(t − τ)dt] − [2m1m2·cos(2πf0τ)·∫s1(t)s2(t − τ)dt]}, where: s1(t) is the signal input to the diode modulation source; s2(t) is the signal input to the AOD modulation source; A1 is the light intensity; A2 is the diffraction efficiency; m1 and m2 are constants that determine the signal-to-bias ratio; f0 is the frequency offset between the oscillator at fc and the modulation at fc + f0; and a0 and a1 are constants chosen to bias the diode source and the acousto-optic deflector into their respective linear operating regions so that the diode source exhibits a linear intensity characteristic and the AOD exhibits a linear amplitude characteristic.
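
    The quantity being formed is, at heart, a sliding mean-square error, which decomposes as Σ(I − R)² = ΣI² + ΣR² − 2ΣI·R, i.e., energy terms plus a cross-correlation that the optical hardware evaluates. The following purely digital 1-D sketch (hypothetical signals, no relation to the optical components above) illustrates that decomposition:

```python
import numpy as np

rng = np.random.default_rng(11)

reference = rng.normal(size=64)                   # reference signal, plays the role of R
scene = np.concatenate([rng.normal(size=100), reference, rng.normal(size=100)])

def mse_by_correlation(scene, reference):
    """MSE at every shift via sum((I - R)^2) = sum I^2 + sum R^2 - 2 * cross-correlation."""
    n = reference.size
    ref_energy = np.sum(reference**2)
    window_energy = np.convolve(scene**2, np.ones(n), mode="valid")  # sliding sum of I^2
    cross_corr = np.correlate(scene, reference, mode="valid")        # sliding sum of I*R
    return window_energy + ref_energy - 2.0 * cross_corr

mse = mse_by_correlation(scene, reference)
print("best match at shift", int(np.argmin(mse)), "(expected 100)")
```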

  11. The Standard Error of a Proportion for Different Scores and Test Length.

    Directory of Open Access Journals (Sweden)

    David A. Walker

    2005-06-01

    This paper examines Smith's (2003) proposed standard error of a proportion index associated with the idea of reliability as sufficiency of information. A detailed table indexing all of the standard error values affiliated with assessments that range from 5 to 100 items, where students scored as low as 50% correct and 50% incorrect to as high as 95% correct and 5% incorrect, calculated in increments of 1 percentage point, is presented, along with distributional qualities. Examples using this measure for classroom teachers and higher education instructors of assessment are provided.
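
    The standard error of a proportion is SE = √(p(1−p)/n). A short sketch computing it for a few of the (items, percent-correct) combinations such a table spans (assumed formula for illustration; Smith's index may differ in detail):

```python
import math

def se_proportion(p, n):
    """Standard error of a proportion p observed on a test of n items."""
    return math.sqrt(p * (1.0 - p) / n)

for n in (5, 25, 50, 100):
    for p in (0.50, 0.75, 0.95):
        print(f"n = {n:3d}, p = {p:.2f}: SE = {se_proportion(p, n):.3f}")
```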

  12. Uncertainty on PIV mean and fluctuating velocity due to bias and random errors

    International Nuclear Information System (INIS)

    Wilson, Brandon M; Smith, Barton L

    2013-01-01

    Particle image velocimetry is a powerful and flexible fluid velocity measurement tool. In spite of its widespread use, the uncertainty of PIV measurements has not been sufficiently addressed to date. The calculation and propagation of local, instantaneous uncertainties on PIV results into the measured mean and Reynolds stresses are demonstrated for four PIV error sources that impact uncertainty through the vector computation: particle image density, diameter, displacement and velocity gradients. For the purpose of this demonstration, velocity data are acquired in a rectangular jet. Hot-wire measurements are compared to PIV measurements with velocity fields computed using two PIV algorithms. Local uncertainty on the velocity mean and Reynolds stress for these algorithms are automatically estimated using a previously published method. Previous work has shown that PIV measurements can become ‘noisy’ in regions of high shear as well as regions of small displacement. This paper also demonstrates the impact of these effects by comparing PIV data to data acquired using hot-wire anemometry, which does not suffer from the same issues. It is confirmed that flow gradients, large particle images and insufficient particle image displacements can result in elevated measurements of turbulence levels. The uncertainty surface method accurately estimates the difference between hot-wire and PIV measurements for most cases. The uncertainty based on each algorithm is found to be unique, motivating the use of algorithm-specific uncertainty estimates. (paper)

  13. Scatter-Reducing Sounding Filtration Using a Genetic Algorithm and Mean Monthly Standard Deviation

    Science.gov (United States)

    Mandrake, Lukas

    2013-01-01

    Retrieval algorithms like that used by the Orbiting Carbon Observatory (OCO)-2 mission generate massive quantities of data of varying quality and reliability. A computationally efficient, simple method of labeling problematic datapoints or predicting soundings that will fail is required for basic operation, given that only 6% of the retrieved data may be operationally processed. This method automatically obtains a filter designed to reduce scatter based on a small number of input features. Most machine-learning filter construction algorithms attempt to predict error in the CO2 value. By using a surrogate goal of Mean Monthly STDEV, the goal is to reduce the retrieved CO2 scatter rather than solving the harder problem of reducing CO2 error. This lends itself to improved interpretability and performance. This software reduces the scatter of retrieved CO2 values globally based on a minimum number of input features. It can be used as a prefilter to reduce the number of soundings requested, or as a post-filter to label data quality. The use of the MMS (Mean Monthly Standard deviation) provides a much cleaner, clearer filter than the standard ABS(CO2-truth) metrics previously employed by competitor methods. The software's main strength lies in a clearer (i.e., fewer features required) filter that more efficiently reduces scatter in retrieved CO2 rather than focusing on the more complex (and easily removed) bias issues.
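    Not the mission filtering software, but a toy sketch of the surrogate metric itself: group the retrieved values by calendar month, take the standard deviation within each month, and average those monthly standard deviations.

        # Toy Mean Monthly Standard Deviation (MMS) style scatter metric: group
        # retrieved values by month, take each month's standard deviation, and
        # average them. A sketch of the surrogate goal only, not the OCO-2 code.
        import numpy as np

        def mean_monthly_stdev(months: np.ndarray, values: np.ndarray) -> float:
            """months: integer month label per sounding; values: retrieved CO2 (ppm)."""
            stdevs = [values[months == m].std(ddof=1)
                      for m in np.unique(months)
                      if (months == m).sum() > 1]
            return float(np.mean(stdevs))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            months = rng.integers(1, 13, size=5000)                   # hypothetical soundings
            co2 = 400.0 + 0.1 * months + rng.normal(0.0, 1.5, 5000)   # synthetic retrievals
            print(f"MMS of synthetic retrievals: {mean_monthly_stdev(months, co2):.3f} ppm")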

  14. Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters

    Science.gov (United States)

    Hoshino, Takahiro; Shigemasu, Kazuo

    2008-01-01

    The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…

  15. A Sandwich-Type Standard Error Estimator of SEM Models with Multivariate Time Series

    Science.gov (United States)

    Zhang, Guangjian; Chow, Sy-Miin; Ong, Anthony D.

    2011-01-01

    Structural equation models are increasingly used as a modeling tool for multivariate time series data in the social and behavioral sciences. Standard error estimators of SEM models, originally developed for independent data, require modifications to accommodate the fact that time series data are inherently dependent. In this article, we extend a…

  16. Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items

    Science.gov (United States)

    Cher Wong, Cheow

    2015-01-01

    Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…

  17. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.

  18. WASP (Write a Scientific Paper) using Excel - 6: Standard error and confidence interval.

    Science.gov (United States)

    Grech, Victor

    2018-03-01

    The calculation of descriptive statistics includes the calculation of standard error and confidence interval, an inevitable component of data analysis in inferential statistics. This paper provides pointers as to how to do this in Microsoft Excel™. Copyright © 2018 Elsevier B.V. All rights reserved.
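    The paper's computation is done in Excel; a minimal Python equivalent of the same descriptive statistics (standard error of the mean and a 95% confidence interval, here with a t multiplier) is sketched below.

        # Standard error of the mean and a 95% confidence interval for a sample,
        # mirroring the descriptive statistics discussed in the paper (which uses
        # Excel). The t-distribution multiplier is used; 1.96 is the large-sample case.
        import statistics
        from scipy import stats

        def mean_sem_ci(sample, confidence=0.95):
            n = len(sample)
            mean = statistics.fmean(sample)
            sem = statistics.stdev(sample) / n ** 0.5
            t_crit = stats.t.ppf(0.5 + confidence / 2.0, df=n - 1)
            return mean, sem, (mean - t_crit * sem, mean + t_crit * sem)

        if __name__ == "__main__":
            data = [4.8, 5.1, 5.3, 4.9, 5.6, 5.0, 5.2]
            m, sem, (lo, hi) = mean_sem_ci(data)
            print(f"mean = {m:.3f}, SEM = {sem:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")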

  19. Error analysis of terrestrial laser scanning data by means of spherical statistics and 3D graphs.

    Science.gov (United States)

    Cuartero, Aurora; Armesto, Julia; Rodríguez, Pablo G; Arias, Pedro

    2010-01-01

    This paper presents a complete analysis of the positional errors of terrestrial laser scanning (TLS) data based on spherical statistics and 3D graphs. Spherical statistics are preferred because of the 3D vectorial nature of the spatial error. Error vectors have three metric elements (one module and two angles) that were analyzed by spherical statistics. A case study is presented and discussed in detail. Errors were calculated using 53 check points (CPs), whose coordinates were measured by a digitizer with submillimetre accuracy. The positional accuracy was analyzed by both the conventional method (modular error analysis) and the proposed method (angular error analysis) using 3D graphics and numerical spherical statistics. Two packages in the R programming language were written to obtain the graphics automatically. The results indicate that the proposed method is advantageous as it offers a more complete analysis of the positional accuracy, including the angular error component, the uniformity of the vector distribution and error isotropy, in addition to the modular error component given by linear statistics.

  20. Decomposition of the Mean Squared Error and NSE Performance Criteria: Implications for Improving Hydrological Modelling

    Science.gov (United States)

    Gupta, Hoshin V.; Kling, Harald; Yilmaz, Koray K.; Martinez-Baquero, Guillermo F.

    2009-01-01

    The mean squared error (MSE) and the related normalization, the Nash-Sutcliffe efficiency (NSE), are the two criteria most widely used for calibration and evaluation of hydrological models with observed data. Here, we present a diagnostically interesting decomposition of NSE (and hence MSE), which facilitates analysis of the relative importance of its different components in the context of hydrological modelling, and show how model calibration problems can arise due to interactions among these components. The analysis is illustrated by calibrating a simple conceptual precipitation-runoff model to daily data for a number of Austrian basins having a broad range of hydro-meteorological characteristics. Evaluation of the results clearly demonstrates the problems that can be associated with any calibration based on the NSE (or MSE) criterion. While we propose and test an alternative criterion that can help to reduce model calibration problems, the primary purpose of this study is not to present an improved measure of model performance. Instead, we seek to show that there are systematic problems inherent with any optimization based on formulations related to the MSE. The analysis and results have implications to the manner in which we calibrate and evaluate environmental models; we discuss these and suggest possible ways forward that may move us towards an improved and diagnostically meaningful approach to model performance evaluation and identification.
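    A small sketch of the decomposition discussed here, assuming the form NSE = 2·α·r − α² − β_n² with α the ratio of simulated to observed standard deviation, r the linear correlation and β_n the bias normalised by the observed standard deviation; the data are synthetic and purely illustrative.

        # NSE computed directly and via the decomposition
        # NSE = 2*alpha*r - alpha**2 - beta_n**2
        # (alpha = sigma_sim/sigma_obs, r = correlation, beta_n = bias/sigma_obs).
        # Population (ddof=0) moments are used so the identity holds exactly.
        import numpy as np

        def nse_direct(sim, obs):
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def nse_decomposed(sim, obs):
            sim, obs = np.asarray(sim, float), np.asarray(obs, float)
            alpha = sim.std() / obs.std()                    # variability ratio
            r = np.corrcoef(sim, obs)[0, 1]                  # linear correlation
            beta_n = (sim.mean() - obs.mean()) / obs.std()   # normalised bias
            return 2.0 * alpha * r - alpha ** 2 - beta_n ** 2

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            obs = 5.0 + np.sin(np.linspace(0.0, 12.0, 365)) + rng.normal(0.0, 0.3, 365)
            sim = 0.9 * obs + 0.4 + rng.normal(0.0, 0.2, 365)   # hypothetical model run
            print(f"direct NSE     = {nse_direct(sim, obs):.4f}")
            print(f"decomposed NSE = {nse_decomposed(sim, obs):.4f}")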

  1. Optimal design of minimum mean-square error noise reduction algorithms using the simulated annealing technique.

    Science.gov (United States)

    Bai, Mingsian R; Hsieh, Ping-Ju; Hur, Kur-Nan

    2009-02-01

    The performance of the minimum mean-square error noise reduction (MMSE-NR) algorithm in conjunction with time-recursive averaging (TRA) for noise estimation is found to be very sensitive to the choice of two recursion parameters. To address this problem in a more systematic manner, this paper proposes an optimization method to efficiently search for the optimal parameters of the MMSE-TRA-NR algorithms. The objective function is based on a regression model, whereas the optimization process is carried out with the simulated annealing algorithm, which is well suited for problems with many local optima. Another NR algorithm proposed in the paper employs linear prediction coding as a preprocessor for extracting the correlated portion of human speech. Objective and subjective tests were undertaken to compare the optimized MMSE-TRA-NR algorithm with several conventional NR algorithms. The results of the subjective tests were processed using analysis of variance to assess statistical significance. A post hoc test, Tukey's Honestly Significant Difference, was conducted to further assess the pairwise differences between the NR algorithms.

  2. Standardizing electrophoresis conditions: how to eliminate a major source of error in the comet assay.

    Directory of Open Access Journals (Sweden)

    Gunnar Brunborg

    2015-06-01

    Full Text Available In the alkaline comet assay, cells are embedded in agarose, lysed, and then subjected to further processing including electrophoresis at high pH (>13). We observed very large variations of mean comet tail lengths of cell samples from the same population when spread on a glass or plastic substrate and subjected to electrophoresis. These variations might be cancelled out if comets are scored randomly over a large surface, or if all the comets are scored. The mean tail length may then be representative of the population, although its standard error is large. However, the scoring process often involves selection of 50-100 comets in areas selected in an unsystematic way from a large gel on a glass slide. When using our 96-sample minigel format (1), neighbouring sample variations are easily detected. We have used this system to study the cause of the comet assay variations during electrophoresis and we have defined experimental conditions which reduce the variations to a minimum. We studied the importance of various physical parameters during electrophoresis: (i) voltage; (ii) duration of electrophoresis; (iii) electric current; (iv) temperature; and (v) agarose concentration. We observed that the voltage (V/cm) varied substantially during electrophoresis, even within a few millimetres of distance between gel samples. Not unexpectedly, both the potential (V/cm) and the time were linearly related to the mean comet tail, whereas the current was not. By measuring the local voltage with microelectrodes a few millimetres apart, we observed substantial local variations in V/cm, and they increased with time. This explains the large variations in neighbouring sample comet tails of 25% or more. By introducing simple technology (circulation of the solution during electrophoresis, and temperature control), these variations in mean comet tail were largely abolished, as were the V/cm variations. Circulation was shown to be particularly important and optimal conditions

  3. Error analysis of isotope dilution mass spectrometry method with internal standard

    International Nuclear Information System (INIS)

    Rizhinskii, M.W.; Vitinskii, M.Y.

    1989-02-01

    The computation algorithms of the normalized isotopic ratios and element concentration by isotope dilution mass spectrometry with internal standard are presented. A procedure based on the Monte-Carlo calculation is proposed for predicting the magnitude of the errors to be expected. The estimation of systematic and random errors is carried out in the case of the certification of uranium and plutonium reference materials as well as for the use of those reference materials in the analysis of irradiated nuclear fuels. 4 refs, 11 figs, 2 tabs
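    The record describes predicting expected error magnitudes by Monte-Carlo simulation; the sketch below propagates assumed measurement uncertainties through a simplified isotope-dilution expression in the same spirit (the expression and the uncertainty values are illustrative placeholders, not those of the cited work).

        # Generic Monte-Carlo error propagation: draw each measured input from an
        # assumed normal distribution and inspect the spread of the computed result.
        # The simplified isotope-dilution expression cx = cs*(ms/mx)*(Rs-Rb)/(Rb-Rx)
        # and all numbers are illustrative placeholders, not the cited formulation.
        import numpy as np

        def idms_concentration(cs, ms, mx, Rs, Rb, Rx):
            return cs * (ms / mx) * (Rs - Rb) / (Rb - Rx)

        if __name__ == "__main__":
            rng = np.random.default_rng(42)
            n = 100_000
            cs = rng.normal(10.0, 10.0 * 0.002, n)    # spike concentration
            ms = rng.normal(1.00, 1.00 * 0.001, n)    # spike mass
            mx = rng.normal(1.00, 1.00 * 0.001, n)    # sample mass
            Rs = rng.normal(100.0, 100.0 * 0.003, n)  # spike isotope ratio
            Rb = rng.normal(5.0, 5.0 * 0.003, n)      # blend (mixture) isotope ratio
            Rx = rng.normal(0.01, 0.01 * 0.010, n)    # sample isotope ratio
            cx = idms_concentration(cs, ms, mx, Rs, Rb, Rx)
            print(f"mean = {cx.mean():.4f}, relative std = {cx.std() / cx.mean():.4%}")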

  4. Estimates and Standard Errors for Ratios of Normalizing Constants from Multiple Markov Chains via Regeneration.

    Science.gov (United States)

    Doss, Hani; Tan, Aixin

    2014-09-01

    In the classical biased sampling problem, we have k densities π_1(·), …, π_k(·), each known up to a normalizing constant, i.e. for l = 1, …, k, π_l(·) = ν_l(·)/m_l, where ν_l(·) is a known function and m_l is an unknown constant. For each l, we have an iid sample from π_l, and the problem is to estimate the ratios m_l/m_s for all l and all s. This problem arises frequently in several situations in both frequentist and Bayesian inference. An estimate of the ratios was developed and studied by Vardi and his co-workers over two decades ago, and there has been much subsequent work on this problem from many different perspectives. In spite of this, there are no rigorous results in the literature on how to estimate the standard error of the estimate. We present a class of estimates of the ratios of normalizing constants that are appropriate for the case where the samples from the π_l's are not necessarily iid sequences, but are Markov chains. We also develop an approach based on regenerative simulation for obtaining standard errors for the estimates of ratios of normalizing constants. These standard error estimates are valid for both the iid case and the Markov chain case.

  5. Improvement and standardization of communication means for control room personnel

    International Nuclear Information System (INIS)

    Preuss, W.; Eggerdinger, C.; Sieber, R.

    1983-01-01

    The subjects under investigation were the ''Shift book'', ''Simulation book'', and ''Technical and organisational changes and their records''. It was intended to analyse both the communication processes and the associated written documentation in order to determine areas for potential improvement and possibilities for standardization. Information was obtained by interviewing shift members and their supervisors, by general observation, and by compilation and evaluation of the extensive documentation. Assessment criteria were developed on a scientific basis and in the course of the investigation, in particular from ergonomic findings, as well as from standards and regulations and comparison between the plants. General practical suggestions were developed for the improvement of the communication forms and the formal design of the documents and their contents. The transfer of the recommendations to practical use in the plants presupposes the consideration of plant-specific frames of reference. The report includes a compilation and listing of suggestions for improvement in topical subdivisions. (orig.) [de

  6. Improved ensemble-mean forecast skills of ENSO events by a zero-mean stochastic model-error model of an intermediate coupled model

    Science.gov (United States)

    Zheng, F.; Zhu, J.

    2015-12-01

    To perform an ensemble-based ENSO probabilistic forecast, the crucial issue is to design a reliable ensemble prediction strategy that should include the major uncertainties of a forecast system. In this study, we developed a new general ensemble perturbation technique to improve the ensemble-mean predictive skill of forecasting ENSO using an intermediate coupled model (ICM). The model uncertainties are first estimated and analyzed from EnKF analysis results through assimilating observed SST. Then, based on the pre-analyzed properties of the model errors, a zero-mean stochastic model-error model is developed to mainly represent the model uncertainties induced by some important physical processes missed in the coupled model (i.e., stochastic atmospheric forcing/MJO, extra-tropical cooling and warming, Indian Ocean Dipole mode, etc.). Each member of an ensemble forecast is perturbed by the stochastic model-error model at each step during the 12-month forecast process, and the stochastic perturbations are added into the modeled physical fields to mimic the presence of these high-frequency stochastic noises and model biases and their effect on the predictability of the coupled system. The impacts of stochastic model-error perturbations on ENSO deterministic predictions are examined by performing two sets of 21-yr retrospective forecast experiments. The two forecast schemes are differentiated by whether they considered the model stochastic perturbations, with both initialized by the ensemble-mean analysis states from EnKF. The comparison results suggest that the stochastic model-error perturbations have significant and positive impacts on improving the ensemble-mean prediction skills during the entire 12-month forecast process. Because the nonlinear feature of the coupled model can induce the nonlinear growth of the added stochastic model errors with model integration, especially through the nonlinear heating mechanism with the vertical advection term of the model, the

  7. A multiobserver study of the effects of including point-of-care patient photographs with portable radiography: a means to detect wrong-patient errors.

    Science.gov (United States)

    Tridandapani, Srini; Ramamurthy, Senthil; Provenzale, James; Obuchowski, Nancy A; Evanoff, Michael G; Bhatti, Pamela

    2014-08-01

    To evaluate whether the presence of facial photographs obtained at the point-of-care of portable radiography leads to increased detection of wrong-patient errors. In this institutional review board-approved study, 166 radiograph-photograph combinations were obtained from 30 patients. Consecutive radiographs from the same patients resulted in 83 unique pairs (ie, a new radiograph and prior, comparison radiograph) for interpretation. To simulate wrong-patient errors, mismatched pairs were generated by pairing radiographs from different patients chosen randomly from the sample. Ninety radiologists each interpreted a unique randomly chosen set of 10 radiographic pairs, containing up to 10% mismatches (ie, error pairs). Radiologists were randomly assigned to interpret radiographs with or without photographs. The number of mismatches was identified, and interpretation times were recorded. Ninety radiologists with 21 ± 10 (mean ± standard deviation) years of experience were recruited to participate in this observer study. With the introduction of photographs, the proportion of errors detected increased from 31% (9 of 29) to 77% (23 of 30; P = .006). The odds ratio for detection of error with photographs to detection without photographs was 7.3 (95% confidence interval: 2.29-23.18). Observer qualifications, training, or practice in cardiothoracic radiology did not influence sensitivity for error detection. There is no significant difference in interpretation time for studies without photographs and those with photographs (60 ± 22 vs. 61 ± 25 seconds; P = .77). In this observer study, facial photographs obtained simultaneously with portable chest radiographs increased the identification of any wrong-patient errors, without substantial increase in interpretation time. This technique offers a potential means to increase patient safety through correct patient identification. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.

  8. Sparse reconstruction by means of the standard Tikhonov regularization

    International Nuclear Information System (INIS)

    Lu Shuai; Pereverzev, Sergei V

    2008-01-01

    It is a common belief that the Tikhonov scheme with the ‖·‖_L2 penalty fails in sparse reconstruction. We are going to show, however, that this standard regularization can help if the stability measured in the L1 norm will be properly taken into account in the choice of the regularization parameter. The crucial point is that now a stability bound may depend on the bases with respect to which the solution of the problem is assumed to be sparse. We discuss how such a stability can be estimated numerically and present the results of computational experiments giving the evidence of the reliability of our approach.
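    A minimal numerical sketch of the standard scheme referred to here: the L2-penalised (Tikhonov) solution x_λ = argmin ‖Ax − b‖² + λ‖x‖², computed on a small, mildly ill-posed system with a sparse true solution. The paper's L1-based rule for choosing λ is not reproduced; λ is simply scanned over a grid.

        # Standard Tikhonov (L2-penalty) regularisation on a small ill-posed system.
        # The regularization parameter is scanned over a grid here; the paper's point
        # is about a smarter, L1-stability-based choice of that parameter.
        import numpy as np

        def tikhonov(A, b, lam):
            n = A.shape[1]
            return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

        if __name__ == "__main__":
            rng = np.random.default_rng(4)
            n = 50
            A = np.tril(np.ones((n, n))) / n           # smoothing (ill-conditioned) operator
            x_true = np.zeros(n)
            x_true[[10, 30]] = [1.0, -0.5]             # sparse true solution
            b = A @ x_true + rng.normal(0.0, 1e-3, n)  # noisy data
            for lam in (1e-8, 1e-6, 1e-4, 1e-2):
                err = np.linalg.norm(tikhonov(A, b, lam) - x_true)
                print(f"lambda = {lam:.0e}  ->  ||x_lambda - x_true|| = {err:.3f}")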

  9. Standardized mean differences cause funnel plot distortion in publication bias assessments.

    Science.gov (United States)

    Zwetsloot, Peter-Paul; Van Der Naald, Mira; Sena, Emily S; Howells, David W; IntHout, Joanna; De Groot, Joris Ah; Chamuleau, Steven Aj; MacLeod, Malcolm R; Wever, Kimberley E

    2017-09-08

    Meta-analyses are increasingly used for synthesis of evidence from biomedical research, and often include an assessment of publication bias based on visual or analytical detection of asymmetry in funnel plots. We studied the influence of different normalisation approaches, sample size and intervention effects on funnel plot asymmetry, using empirical datasets and illustrative simulations. We found that funnel plots of the Standardized Mean Difference (SMD) plotted against the standard error (SE) are susceptible to distortion, leading to overestimation of the existence and extent of publication bias. Distortion was more severe when the primary studies had a small sample size and when an intervention effect was present. We show that using the Normalised Mean Difference measure as effect size (when possible), or plotting the SMD against a sample size-based precision estimate, are more reliable alternatives. We conclude that funnel plots using the SMD in combination with the SE are unsuitable for publication bias assessments and can lead to false-positive results.
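    The coupling that drives the distortion can be seen in the usual large-sample variance approximation for the SMD, where the d²/(2(n1 + n2)) term ties the standard error to the effect size itself; the sketch below simulates small studies with a non-zero true effect and reports the resulting correlation between the SMD and its SE (illustrative only, not the empirical datasets of the paper).

        # Why SMD-vs-SE funnel plots distort: the SE of the standardized mean
        # difference depends on the SMD through d**2/(2*(n1+n2)), so effect size and
        # SE are correlated even without publication bias. Purely simulated data.
        import numpy as np

        def cohens_d(x, y):
            nx, ny = len(x), len(y)
            sp = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                         / (nx + ny - 2))
            return (np.mean(x) - np.mean(y)) / sp

        def se_smd(d, n1, n2):
            return np.sqrt((n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2)))

        if __name__ == "__main__":
            rng = np.random.default_rng(7)
            n = 10                                    # small primary studies
            effects, ses = [], []
            for _ in range(500):                      # 500 simulated studies, true SMD = 1
                x = rng.normal(1.0, 1.0, n)
                y = rng.normal(0.0, 1.0, n)
                d = cohens_d(x, y)
                effects.append(d)
                ses.append(se_smd(d, n, n))
            corr = float(np.corrcoef(effects, ses)[0, 1])
            print(f"correlation between SMD and its SE: {corr:.3f}")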

  10. Errors of Mean Dynamic Topography and Geostrophic Current Estimates in China's Marginal Seas from GOCE and Satellite Altimetry

    DEFF Research Database (Denmark)

    Jin, Shuanggen; Feng, Guiping; Andersen, Ole Baltazar

    2014-01-01

    The Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) and satellite altimetry can provide very detailed and accurate estimates of the mean dynamic topography (MDT) and geostrophic currents in China's marginal seas, for example by using the newest high-resolution GOCE gravity field model GO-CONS-GCF-2-TIM-R4 and the new Centre National d'Etudes Spatiales mean sea surface model MSS_CNES_CLS_11 from satellite altimetry. However, errors and uncertainties of MDT and geostrophic current estimates from satellite observations are not generally quantified. In this paper, errors and uncertainties of MDT and geostrophic current estimates from satellite gravimetry and altimetry are investigated and evaluated in China's marginal seas. The cumulative error in MDT from GOCE is reduced from 22.75 to 9.89 cm when compared to the Gravity Recovery and Climate Experiment (GRACE) gravity field model ITG-Grace2010 results.

  11. Error detection in GPS observations by means of Multi-process models

    DEFF Research Database (Denmark)

    Thomsen, Henrik F.

    2001-01-01

    The main purpose of this article is to present the idea of using Multi-process models as a method of detecting errors in GPS observations. The theory behind Multi-process models and double-differenced phase observations in GPS is presented briefly. It is shown how to model cycle slips in the Mul...

  12. Enforcing environmental standards: Economic mechanisms as viable means?

    International Nuclear Information System (INIS)

    Wolfrum, R.; Heidelberg Univ.

    1996-01-01

    The papers presented at the symposium organised by the Heidelberg Max-Planck-Institute for international law touch upon two major aspects of developments in international law, relating to international environmental law for protection of the global atmosphere and environment, and to international and national means of enforcing existing laws. The situation is shown against the background of conflicts of interests arising from the different perspectives and objectives involved, i.e. those of protection of the environment or economic development. The 21 contributions, all in English, present an outline picture of developments and activities as well as legal regimes and instruments and address details of agreements and their implementation and enforcement. Individual subject analyses of 17 papers are available in the database. (CB)

  13. Construction of a Mean Square Error Adaptive Euler–Maruyama Method With Applications in Multilevel Monte Carlo

    KAUST Repository

    Hoel, Hakon

    2016-06-13

    A formal mean square error expansion (MSE) is derived for Euler-Maruyama numerical solutions of stochastic differential equations (SDE). The error expansion is used to construct a pathwise, a posteriori, adaptive time-stepping Euler-Maruyama algorithm for numerical solutions of SDE, and the resulting algorithm is incorporated into a multilevel Monte Carlo (MLMC) algorithm for weak approximations of SDE. This gives an efficient MSE adaptive MLMC algorithm for handling a number of low-regularity approximation problems. In low-regularity numerical example problems, the developed adaptive MLMC algorithm is shown to outperform the uniform time-stepping MLMC algorithm by orders of magnitude, producing output whose error with high probability is bounded by TOL > 0 at the near-optimal MLMC cost rate O(TOL^(-2) log(TOL)^4) that is achieved when the cost of sample generation is O(1).
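    For orientation, the sketch below is a plain uniform-time-step MLMC estimator with Euler-Maruyama for E[X_T] of geometric Brownian motion, showing the coarse/fine path coupling that MLMC relies on; it is not the MSE-adaptive algorithm of the paper, and the per-level sample sizes are fixed by hand rather than optimised.

        # Uniform-time-step MLMC with Euler-Maruyama for E[X_T] of geometric Brownian
        # motion dX = mu*X dt + sigma*X dW. Illustrates the level coupling only; the
        # cited work replaces the uniform steps with MSE-adaptive time stepping.
        import numpy as np

        MU, SIGMA, X0, T = 0.05, 0.2, 1.0, 1.0

        def level_samples(level: int, n_paths: int, rng) -> np.ndarray:
            """Return P_0 samples (level 0) or P_l - P_{l-1} samples (level > 0)."""
            n_fine = 2 ** level
            dt = T / n_fine
            xf = np.full(n_paths, X0)
            if level == 0:
                dw = np.sqrt(dt) * rng.standard_normal(n_paths)
                return xf + MU * xf * dt + SIGMA * xf * dw
            xc = np.full(n_paths, X0)
            for _ in range(n_fine // 2):
                dw1 = np.sqrt(dt) * rng.standard_normal(n_paths)
                dw2 = np.sqrt(dt) * rng.standard_normal(n_paths)
                xf = xf + MU * xf * dt + SIGMA * xf * dw1                # two fine steps
                xf = xf + MU * xf * dt + SIGMA * xf * dw2
                xc = xc + MU * xc * (2 * dt) + SIGMA * xc * (dw1 + dw2)  # one coarse step
            return xf - xc

        if __name__ == "__main__":
            rng = np.random.default_rng(3)
            samples_per_level = [200_000, 50_000, 20_000, 8_000, 4_000]  # chosen by hand
            estimate = sum(level_samples(l, n, rng).mean()
                           for l, n in enumerate(samples_per_level))
            exact = X0 * np.exp(MU * T)
            print(f"MLMC estimate of E[X_T]: {estimate:.4f}   (exact: {exact:.4f})")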

  14. Mitigating voltage lead errors of an AC Josephson voltage standard by impedance matching

    Science.gov (United States)

    Zhao, Dongsheng; van den Brom, Helko E.; Houtzager, Ernest

    2017-09-01

    A pulse-driven AC Josephson voltage standard (ACJVS) generates calculable AC voltage signals at low temperatures, whereas measurements are performed with a device under test (DUT) at room temperature. The voltage leads cause the output voltage to show deviations that scale with the frequency squared. Error correction mechanisms investigated so far allow the ACJVS to be operational for frequencies up to 100 kHz. In this paper, calculations are presented to deal with these errors in terms of reflected waves. Impedance matching at the source side of the system, which is loaded with a high-impedance DUT, is proposed as an accurate method to mitigate these errors for frequencies up to 1 MHz. Simulations show that the influence of non-ideal component characteristics, such as the tolerance of the matching resistor, the capacitance of the load input impedance, losses in the voltage leads, non-homogeneity in the voltage leads, a non-ideal on-chip connection and inductors between the Josephson junction array and the voltage leads, can be corrected for using the proposed procedures. The results show that an expanded uncertainty of 12 parts in 10⁶ (k = 2) at 1 MHz and 0.5 part in 10⁶ (k = 2) at 100 kHz is within reach.

  15. Preparing Emergency Medicine Residents to Disclose Medical Error Using Standardized Patients

    Directory of Open Access Journals (Sweden)

    Carmen N. Spalding

    2017-12-01

    Full Text Available Introduction: Emergency Medicine (EM) is a unique clinical learning environment. The American College of Graduate Medical Education Clinical Learning Environment Review Pathways to Excellence calls for "hands-on training" of disclosure of medical error (DME) during residency. Training and practicing key elements of DME using standardized patients (SP) may enhance preparedness among EM residents in performing this crucial skill in a clinical setting. Methods: This training was developed to improve resident preparedness in DME in the clinical setting. Objectives included the following: the residents will be able to define a medical error; discuss ethical and professional standards of DME; recognize common barriers to DME; describe key elements in effective DME to patients and families; and apply key elements during a SP encounter. The four-hour course included didactic and experiential learning methods, and was created collaboratively by core EM faculty and subject matter experts in conflict resolution and healthcare simulation. Educational media included lecture, video exemplars of DME communication with discussion, small group case-study discussion, and SP encounters. We administered a survey assessing for preparedness in DME pre- and post-training. A critical action checklist was administered to assess individual performance of key elements of DME during the evaluated SP case. A total of 15 postgraduate-year 1 and 2 EM residents completed the training. Results: After the course, residents reported increased comfort with and preparedness in performing several key elements in DME. They were able to demonstrate these elements in a simulated setting using SP. Residents valued the training, rating the didactic, SP sessions, and overall educational experience very high. Conclusion: Experiential learning using SP is effective in improving resident knowledge of and preparedness in performing medical error disclosure. This educational module can be adapted

  16. Unreliability and error in the military's "gold standard" measure of sexual harassment by education and gender.

    Science.gov (United States)

    Murdoch, Maureen; Pryor, John B; Griffin, Joan M; Ripley, Diane Cowper; Gackstetter, Gary D; Polusny, Melissa A; Hodges, James S

    2011-01-01

    The Department of Defense's "gold standard" sexual harassment measure, the Sexual Harassment Core Measure (SHCore), is based on an earlier measure that was developed primarily in college women. Furthermore, the SHCore requires a reading grade level of 9.1. This may be higher than some troops' reading abilities and could generate unreliable estimates of their sexual harassment experiences. Results from 108 male and 96 female soldiers showed that the SHCore's temporal stability and alternate-forms reliability was significantly worse (a) in soldiers without college experience compared to soldiers with college experience and (b) in men compared to women. For men without college experience, almost 80% of the temporal variance in SHCore scores was attributable to error. A plain language version of the SHCore had mixed effects on temporal stability depending on education and gender. The SHCore may be particularly ill suited for evaluating population trends of sexual harassment in military men without college experience.

  17. An emerging network storage management standard: Media error monitoring and reporting information (MEMRI) - to determine optical tape data integrity

    Science.gov (United States)

    Podio, Fernando; Vollrath, William; Williams, Joel; Kobler, Ben; Crouse, Don

    1998-01-01

    Sophisticated network storage management applications are rapidly evolving to satisfy a market demand for highly reliable data storage systems with large data storage capacities and performance requirements. To preserve a high degree of data integrity, these applications must rely on intelligent data storage devices that can provide reliable indicators of data degradation. Error correction activity generally occurs within storage devices without notification to the host. Early indicators of degradation and media error monitoring and reporting (MEMR) techniques implemented in data storage devices allow network storage management applications to notify system administrators of these events and to take appropriate corrective actions before catastrophic errors occur. Although MEMR techniques have been implemented in data storage devices for many years, until 1996 no MEMR standards existed. In 1996 the American National Standards Institute (ANSI) approved the only known (world-wide) industry standard specifying MEMR techniques to verify stored data on optical disks. This industry standard was developed under the auspices of the Association for Information and Image Management (AIIM). A recently formed AIIM Optical Tape Subcommittee initiated the development of another data integrity standard specifying a set of media error monitoring tools and media error monitoring information (MEMRI) to verify stored data on optical tape media. This paper discusses the need for intelligent storage devices that can provide data integrity metadata, the content of the existing data integrity standard for optical disks, and the content of the MEMRI standard being developed by the AIIM Optical Tape Subcommittee.

  18. A measurement error approach to assess the association between dietary diversity, nutrient intake, and mean probability of adequacy.

    Science.gov (United States)

    Joseph, Maria L; Carriquiry, Alicia

    2010-11-01

    Collection of dietary intake information requires time-consuming and expensive methods, making it inaccessible to many resource-poor countries. Quantifying the association between simple measures of usual dietary diversity and usual nutrient intake/adequacy would allow inferences to be made about the adequacy of micronutrient intake at the population level for a fraction of the cost. In this study, we used secondary data from a dietary intake study carried out in Bangladesh to assess the association between 3 food group diversity indicators (FGI) and calcium intake; and the association between these same 3 FGI and a composite measure of nutrient adequacy, mean probability of adequacy (MPA). By implementing Fuller's error-in-the-equation measurement error model (EEM) and simple linear regression (SLR) models, we assessed these associations while accounting for the error in the observed quantities. Significant associations were detected between usual FGI and usual calcium intakes, when the more complex EEM was used. The SLR model detected significant associations between FGI and MPA as well as for variations of these measures, including the best linear unbiased predictor. Through simulation, we support the use of the EEM. In contrast to the EEM, the SLR model does not account for the possible correlation between the measurement errors in the response and predictor. The EEM performs best when the model variables are not complex functions of other variables observed with error (e.g. MPA). When observation days are limited and poor estimates of the within-person variances are obtained, the SLR model tends to be more appropriate.

  19. Self-interaction error in density functional theory: a mean-field correction for molecules and large systems

    International Nuclear Information System (INIS)

    Ciofini, Ilaria; Adamo, Carlo; Chermette, Henry

    2005-01-01

    Corrections to the self-interaction error which is rooted in all standard exchange-correlation functionals in the density functional theory (DFT) have become the object of an increasing interest. After an introduction recalling the origin of the self-interaction error in the DFT formalism, and a brief review of the self-interaction free approximations, we present a simple, yet effective, self-consistent method to correct this error. The model is based on an average density self-interaction correction (ADSIC), where both exchange and Coulomb contributions are screened by a fraction of the electron density. The ansatz on which the method is built makes it particularly appealing, due to its simplicity and its favorable scaling with the size of the system. We have tested the ADSIC approach on one of the classical pathological problems for density functional theory: the direct estimation of the ionization potential from orbital eigenvalues. A large set of different chemical systems, ranging from simple atoms to large fullerenes, has been considered as test cases. Our results show that the ADSIC approach provides good numerical values for all the molecular systems, the agreement with the experimental values increasing, due to its average ansatz, with the size (conjugation) of the systems

  20. Simulation-based estimation of mean and standard deviation for meta-analysis via Approximate Bayesian Computation (ABC).

    Science.gov (United States)

    Kwon, Deukwoo; Reis, Isildinha M

    2015-08-12

    When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
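    A toy rejection-ABC sketch of the simulate-and-compare idea: propose (mean, SD) pairs, simulate normal samples of the reported size, and keep the proposals whose simulated median, minimum and maximum come closest to the reported summary statistics. The priors, distance measure and acceptance rule below are placeholder choices, not those of the published method.

        # Toy rejection ABC: recover a plausible mean and SD for a study reporting
        # only (median, min, max, n), assuming underlying normal data. Priors,
        # distance measure and acceptance rule are illustrative placeholders.
        import numpy as np

        def abc_mean_sd(median_obs, min_obs, max_obs, n, n_draws=50_000, keep=500, seed=0):
            rng = np.random.default_rng(seed)
            mu_prior = rng.uniform(min_obs, max_obs, n_draws)          # assumed flat priors
            sd_prior = rng.uniform(1e-3, max_obs - min_obs, n_draws)
            dist = np.empty(n_draws)
            for i in range(n_draws):
                sim = rng.normal(mu_prior[i], sd_prior[i], n)
                dist[i] = (abs(np.median(sim) - median_obs)
                           + abs(sim.min() - min_obs)
                           + abs(sim.max() - max_obs))
            idx = np.argsort(dist)[:keep]                              # closest simulations
            return mu_prior[idx].mean(), sd_prior[idx].mean()

        if __name__ == "__main__":
            mu_hat, sd_hat = abc_mean_sd(median_obs=10.0, min_obs=4.0, max_obs=17.0, n=40)
            print(f"ABC estimates: mean ~ {mu_hat:.2f}, sd ~ {sd_hat:.2f}")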

  1. A Theoretically Consistent Method for Minimum Mean-Square Error Estimation of Mel-Frequency Cepstral Features

    DEFF Research Database (Denmark)

    Jensen, Jesper; Tan, Zheng-Hua

    2014-01-01

    We propose a method for minimum mean-square error (MMSE) estimation of mel-frequency cepstral features for noise robust automatic speech recognition (ASR). The method is based on a minimum number of well-established statistical assumptions; no assumptions are made which are inconsistent with others. The strength of the proposed method is that it allows MMSE estimation of mel-frequency cepstral coefficients (MFCC's), cepstral mean-subtracted MFCC's (CMS-MFCC's), velocity, and acceleration coefficients. Furthermore, the method is easily modified to take into account other compressive non-linearities than the logarithmic one usually used for MFCC computation. The proposed method shows estimation performance which is identical to or better than state-of-the-art methods. It further shows comparable ASR performance, where the advantage of being able to use mel-frequency speech features based on a power non-linearity...

  2. Underwater Acoustic Channel Estimation for Shallow Water Using the Least Square (LS) and Minimum Mean Square Error (MMSE) Methods

    Directory of Open Access Journals (Sweden)

    Mardawia M Panrereng

    2015-06-01

    Full Text Available In recent years, underwater acoustic communication systems have been developed by many researchers. The scale of the challenges involved has made researchers increasingly interested in pursuing work in this field. The underwater channel is a difficult communication medium because of attenuation, absorption, and multipath caused by the constant motion of the water. In shallow water, multipath is caused by reflections from the sea surface and the seabed. The need for fast data transmission over a limited bandwidth makes Orthogonal Frequency Division Multiplexing (OFDM) a solution for high-rate communication, here with Binary Phase-Shift Keying (BPSK) modulation. Channel estimation aims to characterize the impulse response of the propagation channel by transmitting pilot symbols. With Least Square (LS) channel estimation, the resulting Mean Square Error (MSE) tends to be larger than with Minimum Mean Square Error (MMSE) channel estimation. In terms of Bit Error Rate (BER), the performance of LS and MMSE channel estimation does not differ significantly, with a difference of about one SNR step between the two estimation methods.
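    A simplified sketch of the two estimators compared above for a pilot-based OFDM system: the per-tone least-squares estimate H_LS = Y/X and an MMSE-filtered estimate that uses a channel correlation matrix and the noise variance (both assumed known here). Parameters, pilot design and channel model are illustrative and not the authors' simulation setup.

        # Pilot-based LS vs MMSE channel estimation for one OFDM symbol with BPSK
        # pilots. Assumes unit-power pilots and a known channel correlation matrix
        # and noise variance, so it only illustrates the two estimators.
        import numpy as np

        rng = np.random.default_rng(5)
        K, L, TRIALS, SNR_DB = 64, 8, 2000, 10        # subcarriers, taps, runs, SNR (dB)

        F = np.exp(-2j * np.pi * np.outer(np.arange(K), np.arange(L)) / K)  # taps -> tones
        noise_var = 10.0 ** (-SNR_DB / 10.0)
        R_hh = F @ np.diag(np.full(L, 1.0 / L)) @ F.conj().T   # channel correlation (known)
        W = R_hh @ np.linalg.inv(R_hh + noise_var * np.eye(K)) # MMSE smoothing matrix

        mse_ls = mse_mmse = 0.0
        for _ in range(TRIALS):
            h = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2 * L)
            H = F @ h                                            # true frequency response
            x = rng.choice([-1.0, 1.0], K)                       # BPSK pilot symbols
            noise = np.sqrt(noise_var / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
            y = x * H + noise
            H_ls = y / x                                         # least-squares estimate
            H_mmse = W @ H_ls                                    # MMSE-filtered estimate
            mse_ls += np.mean(np.abs(H_ls - H) ** 2) / TRIALS
            mse_mmse += np.mean(np.abs(H_mmse - H) ** 2) / TRIALS

        print(f"LS   channel-estimation MSE: {mse_ls:.4f}")
        print(f"MMSE channel-estimation MSE: {mse_mmse:.4f}")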

  3. First among Others? Cohen's "d" vs. Alternative Standardized Mean Group Difference Measures

    Science.gov (United States)

    Cahan, Sorel; Gamliel, Eyal

    2011-01-01

    Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., [eta][superscript 2], f[superscript 2]) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation--that is, measures of dispersion about the mean. In…

  4. Standard practice for construction of a stepped block and its use to estimate errors produced by speed-of-sound measurement systems for use on solids

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1999-01-01

    1.1 This practice provides a means for evaluating both systematic and random errors for ultrasonic speed-of-sound measurement systems which are used for evaluating material characteristics associated with residual stress and which may also be used for nondestructive measurements of the dynamic elastic moduli of materials. Important features and construction details of a reference block crucial to these error evaluations are described. This practice can be used whenever the precision and bias of sound speed values are in question. 1.2 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

  5. Qubits in phase space: Wigner-function approach to quantum-error correction and the mean-king problem

    International Nuclear Information System (INIS)

    Paz, Juan Pablo; Roncaglia, Augusto Jose; Saraceno, Marcos

    2005-01-01

    We analyze and further develop a method to represent the quantum state of a system of n qubits in a phase-space grid of N×N points (where N = 2^n). The method, which was recently proposed by Wootters and co-workers (Gibbons et al., Phys. Rev. A 70, 062101 (2004)), is based on the use of the elements of the finite field GF(2^n) to label the phase-space axes. We present a self-contained overview of the method, we give insights into some of its features, and we apply it to investigate problems which are of interest for quantum-information theory: We analyze the phase-space representation of stabilizer states and quantum error-correction codes and present a phase-space solution to the so-called mean king problem

  6. Penalized linear regression for discrete ill-posed problems: A hybrid least-squares and mean-squared error approach

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-12-19

    This paper proposes a new approach to find the regularization parameter for linear least-squares discrete ill-posed problems. In the proposed approach, an artificial perturbation matrix with a bounded norm is forced into the discrete ill-posed model matrix. This perturbation is introduced to enhance the singular-value (SV) structure of the matrix and hence to provide a better solution. The proposed approach is derived to select the regularization parameter in a way that minimizes the mean-squared error (MSE) of the estimator. Numerical results demonstrate that the proposed approach outperforms a set of benchmark methods in most cases when applied to different scenarios of discrete ill-posed problems. Jointly, the proposed approach enjoys the lowest run-time and offers the highest level of robustness amongst all the tested methods.

  7. On the Linear Relation between the Mean and the Standard Deviation of a Response Time Distribution

    Science.gov (United States)

    Wagenmakers, Eric-Jan; Brown, Scott

    2007-01-01

    Although it is generally accepted that the spread of a response time (RT) distribution increases with the mean, the precise nature of this relation remains relatively unexplored. The authors show that in several descriptive RT distributions, the standard deviation increases linearly with the mean. Results from a wide range of tasks from different…

  8. Nonlinear method for including the mass uncertainty of standards and the system measurement errors in the fitting of calibration curves

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-01-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights both the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s. 5 figures

  9. Conditional standard errors of measurement for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

    Science.gov (United States)

    Price, Larry R; Raju, Nambury; Lurie, Anna; Wilkins, Charles; Zhu, Jianjun

    2006-02-01

    A specific recommendation of the 1999 Standards for Educational and Psychological Testing by the American Educational Research Association, the American Psychological Association, and the National Council on Measurement in Education is that test publishers report estimates of the conditional standard error of measurement (SEM). Procedures for calculating the conditional (score-level) SEM based on raw scores are well documented; however, few procedures have been developed for estimating the conditional SEM of subtest or composite scale scores resulting from a nonlinear transformation. Item response theory provided the psychometric foundation to derive the conditional standard errors of measurement and confidence intervals for composite scores on the Wechsler Preschool and Primary Scale of Intelligence-Third Edition.

  10. Reducing matrix effect error in EDXRF: Comparative study of using standard and standard less methods for stainless steel samples

    International Nuclear Information System (INIS)

    Meor Yusoff Meor Sulaiman; Masliana Muhammad; Wilfred, P.

    2013-01-01

    Even though EDXRF analysis has major advantages in the analysis of stainless steel samples, such as simultaneous determination of the minor elements, analysis without sample preparation and non-destructive analysis, the matrix effect arising from inter-element interactions can make the final quantitative result inaccurate. The paper presents a comparative quantitative analysis using standard and standardless methods for the determination of these elements. The standard method was carried out by plotting regression calibration graphs of the elements of interest using BCS certified stainless steel standards. Different calibration plots were developed based on the available certified standards; these stainless steel grades include low alloy steel, austenitic, ferritic and high speed. The standardless method, on the other hand, uses mathematical modelling with a matrix effect correction derived from the Lucas-Tooth and Price model. Further improvement in the accuracy of the standardless method was achieved by including pure elements in the development of the model. Discrepancy tests were then carried out for these quantitative methods on different certified samples, and the results show that the high speed method is most reliable for determining Ni and the standardless method for Mn. (Author)

  11. Breakup of inverse golden mean shearless tori in the two-frequency standard nontwist map

    International Nuclear Information System (INIS)

    Wurm, A.; Martini, K.M.

    2013-01-01

    The breakup of shearless invariant tori with winding number ω=(√(5)−1)/2 (inverse golden mean) is studied using Greene's residue criterion in the recently derived two-frequency or extended standard nontwist map (ESNM). Depending on the frequency ratio, the ESNM has or does not have a particular spatial symmetry. If the symmetry is present, the breakup is shown to be the same as in the standard nontwist map; if not, the results are very different.

  12. Closed-form confidence intervals for functions of the normal mean and standard deviation.

    Science.gov (United States)

    Donner, Allan; Zou, G Y

    2012-08-01

    Confidence interval methods for a normal mean and standard deviation are well known and simple to apply. However, the same cannot be said for important functions of these parameters. These functions include the normal distribution percentiles, the Bland-Altman limits of agreement, the coefficient of variation and Cohen's effect size. We present a simple approach to this problem by using variance estimates recovered from confidence limits computed for the mean and standard deviation separately. All resulting confidence intervals have closed forms. Simulation results demonstrate that this approach performs very well for limits of agreement, coefficients of variation and their differences.
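    The separate closed-form intervals that such an approach starts from are the familiar t-based interval for a normal mean and the chi-square-based interval for the standard deviation; the sketch below computes both. The subsequent step of recombining the recovered variance estimates into an interval for a derived quantity such as the coefficient of variation is described in the paper and not reproduced here.

        # Closed-form confidence intervals for a normal mean (t-based) and standard
        # deviation (chi-square based). These are the per-parameter intervals that the
        # recovery-based combination described in the paper builds on.
        import numpy as np
        from scipy import stats

        def ci_mean_sd(sample, confidence=0.95):
            x = np.asarray(sample, float)
            n, mean, sd = len(x), x.mean(), x.std(ddof=1)
            alpha = 1.0 - confidence
            t = stats.t.ppf(1 - alpha / 2, df=n - 1)
            mean_ci = (mean - t * sd / np.sqrt(n), mean + t * sd / np.sqrt(n))
            chi2_hi = stats.chi2.ppf(1 - alpha / 2, df=n - 1)   # upper chi-square quantile
            chi2_lo = stats.chi2.ppf(alpha / 2, df=n - 1)       # lower chi-square quantile
            sd_ci = (sd * np.sqrt((n - 1) / chi2_hi), sd * np.sqrt((n - 1) / chi2_lo))
            return mean_ci, sd_ci

        if __name__ == "__main__":
            data = np.random.default_rng(11).normal(50.0, 8.0, 30)
            (m_lo, m_hi), (s_lo, s_hi) = ci_mean_sd(data)
            print(f"95% CI for the mean: ({m_lo:.2f}, {m_hi:.2f})")
            print(f"95% CI for the SD:   ({s_lo:.2f}, {s_hi:.2f})")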

  13. More recent robust methods for the estimation of mean and standard deviation of data

    International Nuclear Information System (INIS)

    Kanisch, G.

    2003-01-01

    Outliers in a data set result in biased values of the mean and standard deviation. One way to improve the estimation of a mean is to apply tests to identify outliers and to exclude them from the calculations. Tests according to Grubbs or to Dixon, which are frequently used in practice, especially within laboratory intercomparisons, are not very efficient in identifying outliers. For more than ten years now, so-called robust methods have been used more and more; these determine the mean and standard deviation by iteration, down-weighting values far from the mean and thereby diminishing the impact of outliers. In 1989 the Analytical Methods Committee of the (British) Royal Society of Chemistry published such a robust method. Since 1993 the US Environmental Protection Agency has published a more efficient and quite versatile method. Mean and standard deviation are calculated by iteration and application of a special weight function for down-weighting outlier candidates. In 2000, W. Cofino et al. published a very efficient robust method which works quite differently from the others. It applies methods taken from the basics of quantum mechanics, such as ''wave functions'' associated with each laboratory mean value and matrix algebra (solving eigenvalue problems). In contrast to the other ones, this method includes the individual measurement uncertainties. (orig.)
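    A generic sketch of the iterative down-weighting idea (a Huber-type M-estimator with a MAD-based starting scale); it is in the spirit of the robust methods described here, not a reproduction of the AMC, EPA or Cofino et al. algorithms.

        # Huber-type robust mean and scale: start from median/MAD, then iterate,
        # down-weighting observations far from the current mean. A generic sketch,
        # not the specific AMC, EPA or Cofino et al. procedures.
        import numpy as np

        def robust_mean_sd(values, k=1.5, tol=1e-8, max_iter=100):
            x = np.asarray(values, float)
            mu = np.median(x)                                  # robust starting point
            s = 1.4826 * np.median(np.abs(x - mu))             # MAD-based scale
            if s == 0.0:
                s = x.std(ddof=1)
            for _ in range(max_iter):
                z = (x - mu) / s
                w = np.minimum(1.0, k / np.maximum(np.abs(z), 1e-12))   # Huber weights
                mu_new = np.sum(w * x) / np.sum(w)
                if abs(mu_new - mu) < tol:
                    mu = mu_new
                    break
                mu = mu_new
            sd = np.sqrt(np.sum(w * (x - mu) ** 2) / (np.sum(w) - 1))   # rough weighted scale
            return mu, sd

        if __name__ == "__main__":
            lab_means = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 14.7]        # one outlying lab
            mu, sd = robust_mean_sd(lab_means)
            print(f"robust mean = {mu:.3f}, robust sd = {sd:.3f}")
            print(f"plain  mean = {np.mean(lab_means):.3f}, plain sd = {np.std(lab_means, ddof=1):.3f}")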

  14. Random and correlated errors in gold standards used in nutritional epidemiology: implications for validation studies

    Science.gov (United States)

    The measurement error correction de-attenuation factor was estimated from two studies using recovery biomarkers. One study, the Observing Protein and Energy Nutrition (OPEN), was unable to adequately account for within-person variation in protein and energy intake estimated by recovery biomarkers, ...

  15. 'When measurements mean action' decision models for portal image review to eliminate systematic set-up errors

    International Nuclear Information System (INIS)

    Wratten, C.R.; Denham, J.W.; O'Brien, P.; Hamilton, C.S.; Kron, T.; London Regional Cancer Centre, London, Ontario

    2004-01-01

    The aim of the present paper is to evaluate how the use of decision models in the review of portal images can eliminate systematic set-up errors during conformal therapy. Sixteen patients undergoing four-field irradiation of prostate cancer have had daily portal images obtained during the first two treatment weeks and weekly thereafter. The magnitude of random and systematic variations has been calculated by comparison of the portal image with the reference simulator images using the two-dimensional decision model embodied in the Hotelling's evaluation process (HEP). Random day-to-day set-up variation was small in this group of patients. Systematic errors were, however, common. In 15 of 16 patients, one or more errors of >2 mm were diagnosed at some stage during treatment. Sixteen of the 23 errors were between 2 and 4 mm. Although there were examples of oversensitivity of the HEP in three cases, and one instance of undersensitivity, the HEP proved highly sensitive to the small (2-4 mm) systematic errors that must be eliminated during high precision radiotherapy. The HEP has proven valuable in diagnosing very small (<4 mm) systematic errors. Using one-dimensional decision models, HEP can eliminate the majority of systematic errors during the first 2 treatment weeks. Copyright (2004) Blackwell Science Pty Ltd

  16. "First among Others? Cohen's ""d"" vs. Alternative Standardized Mean Group Difference Measures"

    Directory of Open Access Journals (Sweden)

    Sorel Cahan

    2011-06-01

    Full Text Available Standardized effect size measures typically employed in behavioral and social sciences research in the multi-group case (e.g., η², f²) evaluate between-group variability in terms of either total or within-group variability, such as variance or standard deviation, that is, measures of dispersion about the mean. In contrast, the definition of Cohen's d, the effect size measure typically computed in the two-group case, is incongruent due to a conceptual difference between the numerator, which measures between-group variability by the intuitive and straightforward raw difference between the two group means, and the denominator, which measures within-group variability in terms of the difference between all observations and the group mean (i.e., the pooled within-groups standard deviation, SW). Two congruent alternatives to d, in which the root square or absolute mean difference between all observation pairs is substituted for SW as the variability measure in the denominator of d, are suggested and their conceptual and statistical advantages and disadvantages are discussed.
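    A small sketch contrasting the two constructions discussed: Cohen's d with the pooled within-group standard deviation in the denominator, versus variants that divide the same raw mean difference by the root-mean-square or the mean absolute difference over all observation pairs. The exact definitions of the proposed alternatives may differ in detail from those of the authors.

        # Cohen's d (pooled within-group SD denominator) vs. pairwise-difference
        # denominators (RMS and mean absolute difference over all observation pairs),
        # in the spirit of the congruent alternatives discussed. Synthetic data.
        import numpy as np
        from itertools import combinations

        def cohens_d(x, y):
            nx, ny = len(x), len(y)
            s_pooled = np.sqrt(((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1))
                               / (nx + ny - 2))
            return (np.mean(x) - np.mean(y)) / s_pooled

        def pairwise_denominators(x, y):
            pooled = np.concatenate([x, y])
            diffs = np.array([abs(a - b) for a, b in combinations(pooled, 2)])
            return np.sqrt(np.mean(diffs ** 2)), np.mean(diffs)   # RMS, mean |difference|

        if __name__ == "__main__":
            rng = np.random.default_rng(2)
            g1 = rng.normal(0.5, 1.0, 40)
            g2 = rng.normal(0.0, 1.0, 40)
            raw = np.mean(g1) - np.mean(g2)
            rms_diff, mean_abs_diff = pairwise_denominators(g1, g2)
            print(f"Cohen's d                  : {cohens_d(g1, g2):.3f}")
            print(f"raw diff / RMS pair diff   : {raw / rms_diff:.3f}")
            print(f"raw diff / mean |pair diff|: {raw / mean_abs_diff:.3f}")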

  17. Breakup of inverse golden mean shearless tori in the two-frequency standard nontwist map

    Energy Technology Data Exchange (ETDEWEB)

    Wurm, A., E-mail: awurm@wne.edu [Department of Physical and Biological Sciences, Western New England University, Springfield, MA 01119 (United States); Martini, K.M. [Department of Physics, Rochester Institute of Technology, Rochester, NY 14623 (United States)

    2013-03-01

    The breakup of shearless invariant tori with winding number ω=(√(5)−1)/2 (inverse golden mean) is studied using Greene's residue criterion in the recently derived two-frequency or extended standard nontwist map (ESNM). Depending on the frequency ratio, the ESNM has or does not have a particular spatial symmetry. If the symmetry is present, the breakup is shown to be the same as in the standard nontwist map; if not, the results are very different.

  18. The impact of statistical adjustment on conditional standard errors of measurement in the assessment of physician communication skills.

    Science.gov (United States)

    Raymond, Mark R; Clauser, Brian E; Furman, Gail E

    2010-10-01

    The use of standardized patients to assess communication skills is now an essential part of assessing a physician's readiness for practice. To improve the reliability of communication scores, it has become increasingly common in recent years to use statistical models to adjust ratings provided by standardized patients. This study employed ordinary least squares regression to adjust ratings, and then used generalizability theory to evaluate the impact of these adjustments on score reliability and the overall standard error of measurement. In addition, conditional standard errors of measurement were computed for both observed and adjusted scores to determine whether the improvements in measurement precision were uniform across the score distribution. Results indicated that measurement was generally less precise for communication ratings toward the lower end of the score distribution; and the improvement in measurement precision afforded by statistical modeling varied slightly across the score distribution such that the most improvement occurred in the upper-middle range of the score scale. Possible reasons for these patterns in measurement precision are discussed, as are the limitations of the statistical models used for adjusting performance ratings.

  19. Standardized error severity score (ESS) ratings to quantify risk associated with child restraint system (CRS) and booster seat misuse.

    Science.gov (United States)

    Rudin-Brown, Christina M; Kramer, Chelsea; Langerak, Robin; Scipione, Andrea; Kelsey, Shelley

    2017-11-17

    Although numerous research studies have reported high levels of error and misuse of child restraint systems (CRS) and booster seats in experimental and real-world scenarios, conclusions are limited because they provide little information regarding which installation issues pose the highest risk and thus should be targeted for change. Beneficial to legislating bodies and researchers alike would be a standardized, globally relevant assessment of the potential injury risk associated with more common forms of CRS and booster seat misuse, which could be applied with observed error frequency (for example, in car seat clinics or during prototype user testing) to better identify and characterize the installation issues of greatest risk to safety. A group of 8 leading world experts in CRS and injury biomechanics, who were members of an international child safety project, estimated the potential injury severity associated with common forms of CRS and booster seat misuse. These injury risk error severity score (ESS) ratings were compiled and compared to scores from previous research that had used a similar procedure but with fewer respondents. To illustrate their application, and as part of a larger study examining CRS and booster seat labeling requirements, the new standardized ESS ratings were applied to objective installation performance data from 26 adult participants who installed a convertible (rear- vs. forward-facing) CRS and booster seat in a vehicle, and a child test dummy in the CRS and booster seat, using labels that only just met minimal regulatory requirements. The outcome measure, the risk priority number (RPN), represented the composite scores of injury risk and observed installation error frequency. Variability within the sample of ESS ratings in the present study was smaller than that generated in previous studies, indicating better agreement among experts on what constituted injury risk. Application of the new standardized ESS ratings to installation
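
    As a rough illustration of how severity ratings and observed misuse frequencies combine into a risk priority number, the sketch below multiplies an expert rating by an observed error proportion; the error types, ratings, counts, and the exact combination rule are hypothetical, not taken from the study.

        # Hypothetical illustration of combining expert severity ratings (ESS) with
        # observed installation-error frequencies into a risk priority number (RPN).
        errors = {
            # error type: (expert severity rating, number of participants who made it)
            "loose harness":      (4.0, 12),
            "wrong belt path":    (3.5, 7),
            "chest clip too low": (2.0, 15),
        }
        n_participants = 26  # hypothetical sample size

        for name, (ess, count) in sorted(errors.items(),
                                         key=lambda kv: kv[1][0] * kv[1][1] / n_participants,
                                         reverse=True):
            rpn = ess * count / n_participants   # severity weighted by observed frequency
            print(f"{name:20s} RPN = {rpn:.2f}")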

  20. Meaning

    Science.gov (United States)

    Harteveld, Casper

    The second world to be considered concerns Meaning. In contrast to Reality and Play, this world relates to the people, disciplines, and domains that are focused on creating a certain value. For example, if this value is about providing students knowledge about physics, it involves teachers, the learning sciences, and the domains education and physics. This level goes into the aspects and criteria that designers need to take into account from this perspective. The first aspect seems obvious when we talk of “games with a serious purpose.” They have a purpose and this needs to be elaborated on, for example in terms of what “learning objectives” it attempts to achieve. The subsequent aspect is not about what is being pursued but how. To attain a value, designers have to think about a strategy that they employ. In my case this concerned looking at the learning paradigms that have come into existence in the past century and see what they have to tell us about learning. This way, their principles can be translated into a game environment. This translation involves making the strategy concrete. Or, in other words, operationalizing the plan. This is the third aspect. In this level, I will further specifically explain how I derived requirements from each of the learning paradigms, like reflection and exploration, and how they can possibly be related to games. The fourth and final aspect is the context in which the game is going to be used. It matters who uses the game and when, where, and how the game is going to be used. When designers have looked at these aspects, they have developed a “value proposal” and the worth of it may be judged by criteria, like motivation, relevance, and transfer. But before I get to this, I first go into how we human beings are meaning creators and what role assumptions, knowledge, and ambiguity have in this. I will illustrate this with some silly jokes about doctors and Mickey Mouse, and with an illusion.

  1. Sampling errors associated with soil composites used to estimate mean Ra-226 concentrations at an UMTRA remedial-action site

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Baker, K.R.; Nelson, R.A.; Miller, R.H.; Miller, M.L.

    1987-07-01

    The decision whether to take additional remedial action (removal of soil) from regions contaminated by uranium mill tailings involves collecting 20 plugs of soil from each 10-m by 10-m plot in the region and analyzing a 500-g portion of the mixed soil for 226Ra. A soil sampling study was conducted in the windblown mill-tailings flood plain area at Shiprock, New Mexico, to evaluate whether reducing the number of soil plugs to 9 would have any appreciable impact on remedial-action decisions. The results of the Shiprock study are described and used in this paper to develop a simple model of the standard deviation of 226Ra measurements on composite samples formed from 21 or fewer plugs. This model is used to predict, as a function of the number of soil plugs per composite, the percent accuracy with which the mean 226Ra concentration in surface soil can be estimated, and the probability of making incorrect remedial-action decisions on the basis of statistical tests. 8 refs., 15 figs., 9 tabs
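
    One simple way to see why fewer plugs per composite degrade accuracy is the classical variance model sketched below, in which plug-to-plug (spatial) variance is averaged down by compositing while analytical error is not; the variance components used here are placeholders, not the model or estimates from the Shiprock study.

        import math

        def composite_sd(n_plugs, sigma_spatial, sigma_meas):
            """SD of a single 226Ra measurement on a composite of n_plugs soil plugs.
            Assumes plug-to-plug (spatial) variability is averaged by mixing, while
            the analytical measurement error applies once to the composite sample."""
            return math.sqrt(sigma_spatial ** 2 / n_plugs + sigma_meas ** 2)

        # Hypothetical variance components (pCi/g), for illustration only.
        for n in (21, 9, 4, 1):
            print(n, round(composite_sd(n, sigma_spatial=3.0, sigma_meas=0.8), 2))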

  2. The Applicability of Standard Error of Measurement and Minimal Detectable Change to Motor Learning Research-A Behavioral Study.

    Science.gov (United States)

    Furlan, Leonardo; Sterr, Annette

    2018-01-01

    Motor learning studies face the challenge of differentiating between real changes in performance and random measurement error. While the traditional p-value-based analyses of difference (e.g., t-tests, ANOVAs) provide information on the statistical significance of a reported change in performance scores, they do not inform as to the likely cause or origin of that change, that is, the contribution of both real modifications in performance and random measurement error to the reported change. One way of differentiating between real change and random measurement error is through the utilization of the statistics of standard error of measurement (SEM) and minimal detectable change (MDC). SEM is estimated from the standard deviation of a sample of scores at baseline and a test-retest reliability index of the measurement instrument or test employed. MDC, in turn, is estimated from SEM and a degree of confidence, usually 95%. The MDC value might be regarded as the minimum amount of change that needs to be observed for it to be considered a real change, or a change to which the contribution of real modifications in performance is likely to be greater than that of random measurement error. A computer-based motor task was designed to illustrate the applicability of SEM and MDC to motor learning research. Two studies were conducted with healthy participants. Study 1 assessed the test-retest reliability of the task and Study 2 consisted of a typical motor learning study, where participants practiced the task for five consecutive days. In Study 2, the data were analyzed with a traditional p-value-based analysis of difference (ANOVA) and also with SEM and MDC. The findings showed good test-retest reliability for the task and that the p-value-based analysis alone identified statistically significant improvements in performance over time even when the observed changes could in fact have been smaller than the MDC and thereby caused mostly by random measurement error, as opposed
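
    The commonly cited formulas behind these two statistics are short; the sketch below computes SEM from a baseline SD and a test-retest reliability index and MDC at 95% confidence, using made-up numbers rather than the study's data.

        import math

        def sem(sd_baseline, reliability):
            """Standard error of measurement from the baseline SD and a test-retest reliability index (e.g., ICC)."""
            return sd_baseline * math.sqrt(1.0 - reliability)

        def mdc(sem_value, confidence_z=1.96):
            """Minimal detectable change: accounts for measurement error at both test and retest."""
            return confidence_z * math.sqrt(2.0) * sem_value

        # Illustrative numbers, not the study's data.
        s = sem(sd_baseline=120.0, reliability=0.90)   # e.g., movement time in ms
        print(f"SEM = {s:.1f} ms, MDC95 = {mdc(s):.1f} ms")
        # An observed improvement smaller than MDC95 may be mostly measurement error.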

  3. A parallel row-based algorithm with error control for standard-cell replacement on a hypercube multiprocessor

    Science.gov (United States)

    Sargent, Jeff Scott

    1988-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to controlling error in parallel cell-placement algorithms: Heuristic Cell-Coloring and Adaptive (Parallel Move) Sequence Control. Heuristic Cell-Coloring identifies sets of noninteracting cells that can be moved repeatedly, and in parallel, with no buildup of error in the placement cost. Adaptive Sequence Control allows multiple parallel cell moves to take place between global cell-position updates. This feedback mechanism is based on an error bound derived analytically from the traditional annealing move-acceptance profile. Placement results are presented for real industry circuits, and the performance of an implementation on the Intel iPSC/2 Hypercube is summarized. The runtime of this algorithm is 5 to 16 times faster than a previous program developed for the Hypercube, while producing placement of equivalent quality. An integrated place and route program for the Intel iPSC/2 Hypercube is currently being developed.

  4. The error and covariance structures of the mean approach model of pooled cross-section and time series data

    International Nuclear Information System (INIS)

    Nuamah, N.N.N.N.

    1991-01-01

    This paper postulates the assumptions underlying the Mean Approach model and recasts the re-expressions of the normal equations of this model in partitioned matrices of covariances. These covariance structures have been analysed. (author). 16 refs

  5. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients

    DEFF Research Database (Denmark)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte

    2017-01-01

    The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21 ± 3.12% with a confidence interval of (−4.03; 8.45), which highlights the problem of the biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients.

  6. Developing Psychological Culture of Schoolchildren as a Means of Supporting Implementation of Basic Education Standards

    Directory of Open Access Journals (Sweden)

    Dubrovina I.V.

    2018-01-01

    Full Text Available The paper reviews the social situation of development of children and adolescents in the modern society marked by rapid changes. The development of children and adolescents is described as ‘embedding into the culture’ through the education and is closely associated with the formation of their psychological culture. The paper analyses the conditions of personality development in modern children and adolescents, the factors which impede the communication and understanding of other people; it highlights the risks of escaping into the virtual reality or joining asocial groups. The paper also suggests important measures aimed at the formation of psychological culture in children in relation to age-specific tasks of development in primary school and adolescent ages. The development of psychological culture is regarded as the key means of supporting the implementation of modern educational standards as well as the foundation of psychological health in schoolchildren.

  7. Data Citation Standard: A Means to Support Data Sharing, Attribution, and Traceability

    Science.gov (United States)

    McCallum, I.; Plag, H. P.; Fritz, S.

    2012-04-01

    Geo-referenced data are crucial for addressing many of the burning societal problems and to support related interdisciplinary research. Data sharing is hampered by the lack of a widely accepted method for giving credit to those who make their data freely available and for tracking the use of data throughout its life-cycle. Particularly in the scientific community, recognition and renown are important currencies. Providing means for data citation would be a strong incentive for data sharing. Recently, a number of organizations and projects have started to address the concept of data citation (e.g., PANGAEA, NASA DAACS, USGS, NOAA National Data Centers, ESIP, US National Academy of Sciences, and EGIDA). A number of proposals for data citation guidelines have emerged and a better understanding of the many issues at hand is evolving, but to date, no standard has been accepted. This is not surprising, as data citation is far more complicated than the citation of scientific publications. Data sets differ in many aspects from standard scientific publications. For example, data sets generally are not locatable and attributable in the same way as scientific publications. Data sets often are not static (introducing versioning), and they are mostly not peer-reviewed (requiring quality control). There is a consensus that the implementation of a standard would reveal new issues that are not obvious today. With the Global Earth Observation System of Systems (GEOSS), the Group on Earth Observations (GEO) is in a unique position to provide the testbed for the implementation of a draft standard. The GEO Plenary supports the implementation of a draft standard developed by the Science and Technology Committee (STC) of GEO with support of the EGIDA Project. This draft is based on guidelines developed by international groups. Currently, users of the GEO-Portal are not obliged or encouraged to cite data accessed through GEOSS - if at all, citation requirements come from the individual data

  8. Improvements and standardization of communication means for control room personnel in nuclear power plants

    International Nuclear Information System (INIS)

    Preuss, W.; Eggerdinger, C.; Sieber, R.

    1982-01-01

    This report describes the findings of an investigation into selected communication means for control room personnel in nuclear power stations. The study can be seen as a contribution to the systematic analysis of major problem areas which were identified in the general study 'Human factors in the nuclear power plant'. The subjects under investigation were the 'Shift book', 'Simulation book', and 'Technical and organisational changes and their records'. It was intended to analyse both the communication processes and the associated written documentation in order to determine areas for potential improvement and possibilities for standardization. Information was obtained by interviewing shift members and their supervisors, by general observation, and by compilation and evaluation of the extensive documentation. Assessment criteria were developed on a scientific basis and in the course of the investigation, in particular from ergonomic findings, as well as from standards and regulations and comparison between the plants. General practical suggestions were developed for the improvement of the communication forms and the formal design of the documents and their contents. The transfer of the recommendations to practical use in the plants presupposes the consideration of plant-specific frames of reference. The report includes a compilation and listing of suggestions for improvement in topical subdivisions. (orig.) [de

  9. Validation by simulation of a clinical trial model using the standardized mean and variance criteria.

    Science.gov (United States)

    Abbas, Ismail; Rovira, Joan; Casanovas, Josep

    2006-12-01

    To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with Protease Inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from the baseline, the average difference from the baseline, and the level evolution are the considered endpoints. Specific validation criteria based on a standardized distance in means and variances of plus or minus 10% were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered. The model based on the assumption that within-subject variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance equal to or less than plus or minus 1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
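
    A hedged sketch of this kind of validity check is shown below, comparing simulated and observed endpoint samples through standardized distances in means and variances against a ±10% threshold; the paper does not spell out its exact standardization here, so the scaling used is one plausible reading.

        import numpy as np

        def standardized_distances(real, simulated):
            """Standardized distance in means and in variances between two samples.
            One plausible formulation: differences scaled by the real-data statistics."""
            real, simulated = np.asarray(real, float), np.asarray(simulated, float)
            d_mean = (simulated.mean() - real.mean()) / real.std(ddof=1)
            d_var = (simulated.var(ddof=1) - real.var(ddof=1)) / real.var(ddof=1)
            return d_mean, d_var

        def is_valid(real, simulated, tol=0.10):
            d_mean, d_var = standardized_distances(real, simulated)
            return abs(d_mean) <= tol and abs(d_var) <= tol

        rng = np.random.default_rng(1)
        observed = rng.normal(200, 40, 300)    # e.g., cholesterol levels (mg/dL), invented
        model_out = rng.normal(203, 41, 300)   # simulated trial output, invented
        print(is_valid(observed, model_out))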

  10. An Investigation into the Psychometric Properties of the Proportional Reduction of Mean Squared Error and Augmented Scores

    Science.gov (United States)

    Stephens, Christopher Neil

    2012-01-01

    Augmentation procedures are designed to provide better estimates for a given test or subtest through the use of collateral information. The main purpose of this dissertation was to use Haberman's and Wainer's augmentation procedures on a large-scale, standardized achievement test to understand the relationship between reliability and…

  11. Use of a non-linear method for including the mass uncertainty of gravimetric standards and system measurement errors in the fitting of calibration curves for XRFA freeze-dried UNO3 standards

    International Nuclear Information System (INIS)

    Pickles, W.L.; McClure, J.W.; Howell, R.H.

    1978-05-01

    A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2% accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program considers the mass values of the gravimetric standards as parameters to be fit along with the normal calibration curve parameters. The fitting procedure weights with the system errors and the mass errors in a consistent way. The resulting best-fit calibration curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration curve parameters can be obtained from the curvature of the ''Chi-Squared Matrix'' or from error relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg freeze-dried UNO3 can have an accuracy of 0.2% in 1000 s

  12. Composite Reliability and Standard Errors of Measurement for a Seven-Subtest Short Form of the Wechsler Adult Intelligence Scale-Revised.

    Science.gov (United States)

    Schretlen, David; And Others

    1994-01-01

    Composite reliability and standard errors of measurement were computed for prorated Verbal, Performance, and Full-Scale intelligence quotient (IQ) scores from a seven-subtest short form of the Wechsler Adult Intelligence Scale-Revised. Results with 1,880 adults (standardization sample) indicate that this form is as reliable as the complete test.…

  13. Evaluation of errors in prior mean and variance in the estimation of integrated circuit failure rates using Bayesian methods

    Science.gov (United States)

    Fletcher, B. C.

    1972-01-01

    The critical point of any Bayesian analysis concerns the choice and quantification of the prior information. The effects of prior data on a Bayesian analysis are studied. Comparisons of the maximum likelihood estimator, the Bayesian estimator, and the known failure rate are presented. The results of the many simulated trials are then analyzed to show the region of criticality for prior information being supplied to the Bayesian estimator. In particular, effects of prior mean and variance are determined as a function of the amount of test data available.

  14. Statistical analysis of solid waste composition data: Arithmetic mean, standard deviation and correlation coefficients.

    Science.gov (United States)

    Edjabou, Maklawe Essonanawe; Martín-Fernández, Josep Antoni; Scheutz, Charlotte; Astrup, Thomas Fruergaard

    2017-11-01

    Data for fractional solid waste composition provide relative magnitudes of individual waste fractions, the percentages of which always sum to 100, thereby connecting them intrinsically. Due to this sum constraint, waste composition data represent closed data, and their interpretation and analysis require statistical methods, other than classical statistics that are suitable only for non-constrained data such as absolute values. However, the closed characteristics of waste composition data are often ignored when analysed. The results of this study showed, for example, that unavoidable animal-derived food waste amounted to 2.21±3.12% with a confidence interval of (-4.03; 8.45), which highlights the problem of the biased negative proportions. A Pearson's correlation test, applied to waste fraction generation (kg mass), indicated a positive correlation between avoidable vegetable food waste and plastic packaging. However, correlation tests applied to waste fraction compositions (percentage values) showed a negative association in this regard, thus demonstrating that statistical analyses applied to compositional waste fraction data, without addressing the closed characteristics of these data, have the potential to generate spurious or misleading results. Therefore, compositional data should be transformed adequately prior to any statistical analysis, such as computing mean, standard deviation and correlation coefficients. Copyright © 2017 Elsevier Ltd. All rights reserved.
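
    One standard way to "transform compositional data adequately" is the centered log-ratio (clr) transform; the sketch below (with invented waste fractions) applies it before computing means and correlations, which is in the spirit of, though not necessarily identical to, the transformation used by the authors.

        import numpy as np

        def clr(compositions):
            """Centered log-ratio transform for compositional data.
            Rows are samples; columns are waste fractions summing to 100 (or 1)."""
            x = np.asarray(compositions, dtype=float)
            logx = np.log(x)
            return logx - logx.mean(axis=1, keepdims=True)

        # Hypothetical waste compositions (% food, plastic, paper, other); rows sum to 100.
        comp = np.array([[30.0, 20.0, 25.0, 25.0],
                         [45.0, 15.0, 20.0, 20.0],
                         [25.0, 30.0, 25.0, 20.0]])
        z = clr(comp)
        print(z.mean(axis=0))                 # clr means instead of raw percentage means
        print(np.corrcoef(z, rowvar=False))   # correlations free of the closure constraint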

  15. Over-Sampling Codebook-Based Hybrid Minimum Sum-Mean-Square-Error Precoding for Millimeter-Wave 3D-MIMO

    KAUST Repository

    Mao, Jiening; Gao, Zhen; Wu, Yongpeng; Alouini, Mohamed-Slim

    2018-01-01

    Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigating the bit-error rate (BER). Therefore, this letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding to optimize the BER. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on the min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme can achieve better performance than its conventional counterparts.

  16. Over-Sampling Codebook-Based Hybrid Minimum Sum-Mean-Square-Error Precoding for Millimeter-Wave 3D-MIMO

    KAUST Repository

    Mao, Jiening

    2018-05-23

    Abstract: Hybrid precoding design is challenging for millimeter-wave (mmWave) massive MIMO. Most prior hybrid precoding schemes are designed to maximize the sum spectral efficiency (SSE), while seldom investigating the bit-error rate (BER). Therefore, this letter designs an over-sampling codebook (OSC)-based hybrid minimum sum-mean-square-error (min-SMSE) precoding to optimize the BER. Specifically, given the effective baseband channel consisting of the real channel and analog precoding, we first design the digital precoder/combiner based on the min-SMSE criterion to optimize the BER. To further reduce the SMSE between the transmit and receive signals, we propose an OSC-based joint analog precoder/combiner (JAPC) design. Simulation results show that the proposed scheme can achieve better performance than its conventional counterparts.

  17. Understanding Problem-Solving Errors by Students with Learning Disabilities in Standards-Based and Traditional Curricula

    Science.gov (United States)

    Bouck, Emily C.; Bouck, Mary K.; Joshi, Gauri S.; Johnson, Linley

    2016-01-01

    Students with learning disabilities struggle with word problems in mathematics classes. Understanding the type of errors students make when working through such mathematical problems can further describe student performance and highlight student difficulties. Through the use of error codes, researchers analyzed the type of errors made by 14 sixth…

  18. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison of the data rates achieved by two well-known algorithms with simulated and real measured data is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm could be used in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.

  19. U.S. Navy Marine Climatic Atlas of the World. Volume IX. World-Wide Means and Standard Deviations

    Science.gov (United States)

    1981-10-01

    Only fragments of this record's abstract are legible on the scanned report documentation page: the atlas presents world-wide means and standard deviations, the tabulated standard deviations give the best estimate of the population standard deviations, the means are computed from the observations, and the mean ice limit approximates the minus-two-degree temperature isopleth, which was used as the analyzed lower limit; a further fragment concerns wave heights.

  20. The standard deviation method: data analysis by classical means and by neural networks

    International Nuclear Information System (INIS)

    Bugmann, G.; Stockar, U. von; Lister, J.B.

    1989-08-01

    The Standard Deviation Method is a method for determining particle size which can be used, for instance, to determine air-bubble sizes in a fermentation bio-reactor. The transmission coefficient of an ultrasound beam through a gassy liquid is measured repetitively. Due to the displacements and random positions of the bubbles, the measurements show a scatter whose standard deviation is dependent on the bubble-size. The precise relationship between the measured standard deviation, the transmission and the particle size has been obtained from a set of computer-simulated data. (author) 9 figs., 5 refs

  1. The radiological examination standards of East Germany as a means of quality assurance and radiation protection

    International Nuclear Information System (INIS)

    Angerstein, W.

    1988-01-01

    The standards (technical standards) define the minimum expenditure required for an X-ray examination. They must be reconfirmed about every 5 years. The regulations refer to basic care and do not restrict the possibility of carrying out additional examinations. Their observance ensures an optimum quality of the diagnosis and an optimum comparability of the pictures and a radiation burden that is as low as possible under realizable conditions (expenditure). The standards regulate the extent of the examinations (minimal number of the pictures to be taken) and also the periodic sequence of the pictures in the case of angiocardiographies. Some types of examinations require special standards for examinations of adults and children. (orig./DG) [de

  2. How Much Confidence Can We Have in EU-SILC? Complex Sample Designs and the Standard Error of the Europe 2020 Poverty Indicators

    Science.gov (United States)

    Goedeme, Tim

    2013-01-01

    If estimates are based on samples, they should be accompanied by appropriate standard errors and confidence intervals. This is true for scientific research in general, and is even more important if estimates are used to inform and evaluate policy measures such as those aimed at attaining the Europe 2020 poverty reduction target. In this article I…

  3. Standard Errors for National Trends in International Large-Scale Assessments in the Case of Cross-National Differential Item Functioning

    Science.gov (United States)

    Sachse, Karoline A.; Haag, Nicole

    2017-01-01

    Standard errors computed according to the operational practices of international large-scale assessment studies such as the Programme for International Student Assessment's (PISA) or the Trends in International Mathematics and Science Study (TIMSS) may be biased when cross-national differential item functioning (DIF) and item parameter drift are…

  4. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    Science.gov (United States)

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  5. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range.

    Science.gov (United States)

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-12-19

    In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. In this paper, we propose to improve the existing literature in several directions. First, we show that the sample standard deviation estimation in Hozo et al.'s method (BMC Med Res Methodol 5:13, 2005) has some serious limitations and is always less satisfactory in practice. Inspired by this, we propose a new estimation method by incorporating the sample size. Second, we systematically study the sample mean and standard deviation estimation problem under several other interesting settings where the interquartile range is also available for the trials. We demonstrate the performance of the proposed methods through simulation studies for the three frequently encountered scenarios, respectively. For the first two scenarios, our method greatly improves existing methods and provides a nearly unbiased estimate of the true sample standard deviation for normal data and a slightly biased estimate for skewed data. For the third scenario, our method still performs very well for both normal data and skewed data. Furthermore, we compare the estimators of the sample mean and standard deviation under all three scenarios and present some suggestions on which scenario is preferred in real-world applications. In this paper, we discuss different approximation methods in the estimation of the sample mean and standard deviation and propose some new estimation methods to improve the existing literature. We conclude our work with a summary table (an Excel spread sheet including all formulas) that serves as a comprehensive guidance for performing meta-analysis in different
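
    For reference, a sketch of two of the approximations commonly attributed to this paper is given below (minimum/median/maximum and quartile scenarios); the exact formulas and the full set of scenarios should be taken from the paper's summary table rather than from this illustration.

        from statistics import NormalDist

        def mean_sd_from_min_med_max(a, m, b, n):
            """Approximate sample mean and SD from minimum, median, maximum and n
            (one of the scenarios discussed in the paper; formulas as commonly cited)."""
            mean = (a + 2 * m + b) / 4.0
            z = NormalDist().inv_cdf((n - 0.375) / (n + 0.25))
            sd = (b - a) / (2 * z)
            return mean, sd

        def mean_sd_from_quartiles(q1, m, q3, n):
            """Approximate sample mean and SD from the first/third quartiles, median and n."""
            mean = (q1 + m + q3) / 3.0
            z = NormalDist().inv_cdf((0.75 * n - 0.125) / (n + 0.25))
            sd = (q3 - q1) / (2 * z)
            return mean, sd

        # Invented summary statistics for illustration.
        print(mean_sd_from_min_med_max(a=10, m=25, b=60, n=80))
        print(mean_sd_from_quartiles(q1=18, m=25, q3=34, n=80))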

  6. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range

    OpenAIRE

    Wan, Xiang; Wang, Wenqian; Liu, Jiming; Tong, Tiejun

    2014-01-01

    Background In systematic reviews and meta-analysis, researchers often pool the results of the sample mean and standard deviation from a set of similar clinical trials. A number of the trials, however, reported the study using the median, the minimum and maximum values, and/or the first and third quartiles. Hence, in order to combine results, one may have to estimate the sample mean and standard deviation for such trials. Methods In this paper, we propose to improve the existing literature in ...

  7. Zero-Forcing and Minimum Mean-Square Error Multiuser Detection in Generalized Multicarrier DS-CDMA Systems for Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Lie-Liang Yang

    2008-01-01

    Full Text Available In wireless communications, multicarrier direct-sequence code-division multiple access (MC DS-CDMA) constitutes one of the highly flexible multiple access schemes. MC DS-CDMA employs a high number of degrees-of-freedom, which are beneficial to design and reconfiguration for communications in dynamic communications environments, such as in the cognitive radios. In this contribution, we consider the multiuser detection (MUD) in MC DS-CDMA, which motivates low complexity, high flexibility, and robustness so that the MUD schemes are suitable for deployment in dynamic communications environments. Specifically, a range of low-complexity MUDs are derived based on the zero-forcing (ZF), minimum mean-square error (MMSE), and interference cancellation (IC) principles. The bit-error rate (BER) performance of the MC DS-CDMA aided by the proposed MUDs is investigated by simulation approaches. Our study shows that, in addition to the advantages provided by a general ZF, MMSE, or IC-assisted MUD, the proposed MUD schemes can be implemented using modular structures, where most modules are independent of each other. Due to the independent modular structure, in the proposed MUDs one module may be reconfigured without yielding impact on the others. Therefore, the MC DS-CDMA, in conjunction with the proposed MUDs, constitutes one of the promising multiple access schemes for communications in the dynamic communications environments such as in the cognitive radios.

  8. Zero-Forcing and Minimum Mean-Square Error Multiuser Detection in Generalized Multicarrier DS-CDMA Systems for Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Wang Li-Chun

    2008-01-01

    Full Text Available Abstract In wireless communications, multicarrier direct-sequence code-division multiple access (MC DS-CDMA) constitutes one of the highly flexible multiple access schemes. MC DS-CDMA employs a high number of degrees-of-freedom, which are beneficial to design and reconfiguration for communications in dynamic communications environments, such as in the cognitive radios. In this contribution, we consider the multiuser detection (MUD) in MC DS-CDMA, which motivates low complexity, high flexibility, and robustness so that the MUD schemes are suitable for deployment in dynamic communications environments. Specifically, a range of low-complexity MUDs are derived based on the zero-forcing (ZF), minimum mean-square error (MMSE), and interference cancellation (IC) principles. The bit-error rate (BER) performance of the MC DS-CDMA aided by the proposed MUDs is investigated by simulation approaches. Our study shows that, in addition to the advantages provided by a general ZF, MMSE, or IC-assisted MUD, the proposed MUD schemes can be implemented using modular structures, where most modules are independent of each other. Due to the independent modular structure, in the proposed MUDs one module may be reconfigured without yielding impact on the others. Therefore, the MC DS-CDMA, in conjunction with the proposed MUDs, constitutes one of the promising multiple access schemes for communications in the dynamic communications environments such as in the cognitive radios.
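
    The ZF and MMSE principles named here reduce, in their simplest linear form, to two matrix filters; the sketch below shows them for a generic linear multiple-access model y = Hx + n, which is a textbook formulation rather than the paper's MC DS-CDMA-specific derivation.

        import numpy as np

        def zf_detect(H, y):
            """Zero-forcing multiuser detection: pseudo-inverse of the signature/channel matrix."""
            return np.linalg.pinv(H) @ y

        def mmse_detect(H, y, noise_var):
            """Linear MMSE detection for y = H x + n with unit-power symbols and given noise variance."""
            K = H.shape[1]
            W = np.linalg.inv(H.conj().T @ H + noise_var * np.eye(K)) @ H.conj().T
            return W @ y

        rng = np.random.default_rng(2)
        K, N = 4, 16                                   # users, chips/observations (invented)
        H = rng.standard_normal((N, K)) / np.sqrt(N)   # hypothetical signatures/channel
        x = rng.choice([-1.0, 1.0], size=K)            # BPSK symbols
        y = H @ x + 0.1 * rng.standard_normal(N)
        print(np.sign(zf_detect(H, y)), np.sign(mmse_detect(H, y, noise_var=0.01)))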

  9. Establishment and application of medication error classification standards in nursing care based on the International Classification of Patient Safety

    Directory of Open Access Journals (Sweden)

    Xiao-Ping Zhu

    2014-09-01

    Conclusion: Application of this classification system will help nursing administrators to accurately detect system- and process-related defects leading to medication errors, and enable the factors to be targeted to improve the level of patient safety management.

  10. Flexibility in hospital building and application by means of standardized medical room types

    NARCIS (Netherlands)

    Kamp, Pieter; Kooistra, Rien; Ankersmid, H.A.H.G.; Bonnema, Gerrit Maarten

    2014-01-01

    This paper presents an approach to standardization of hospital rooms. As hospitals are becoming more complex, the need for quality assurance and validation increases as well. Several sources mention the responsibility of the medical personnel for the quality and safety of the equipment with which

  11. Sustainability standards for bioenergy-A means to reduce climate change risks?

    International Nuclear Information System (INIS)

    Schubert, Renate; Blasch, Julia

    2010-01-01

    The paper discusses the importance of standards for sustainable bioenergy production. Sustainability of bioenergy production is crucial if bioenergy is supposed to contribute effectively to climate change mitigation. First, a brief overview of current bioenergy policies and of initiatives and legislation for bioenergy sustainability are given. Then, the authors show that under free market conditions undersupply of sustainable bioenergy will prevail. Two types of market failures are identified: information asymmetry and externalities in bioenergy production. Due to these market failures bioenergy is less sustainable than it could be. It is shown that mandatory certification and subsequent labeling can help to overcome the information asymmetry and lead to a more efficient market outcome since consumers can choose products according to their preferences. The authors conclude, however, that the existence of production externalities asks for stronger market intervention, for example in the form of binding minimum standards or taxes. The paper discusses the efficiency and feasibility of such policy measures and shows that mandatory certification combined with binding minimum standards can be an adequate policy choice to regulate the bioenergy market.

  12. Utilizing the Six Realms of Meaning in Improving Campus Standardized Test Scores through Team Teaching and Strategic Planning

    Science.gov (United States)

    Stevenson, Rosnisha D.; Kritsonis, William Allan

    2009-01-01

    This article will seek to utilize Dr. William Allan Kritsonis' book "Ways of Knowing Through the Realms of Meaning" (2007) as a framework to improve a campus's standardized test scores, more specifically, their TAKS (Texas Assessment of Knowledge and Skills) scores. Many campuses have an improvement plan, also known as a Campus…

  13. Interval sampling of end-expiratory hydrogen (H2) concentrations to quantify carbohydrate malabsorption by means of lactulose standards

    DEFF Research Database (Denmark)

    Rumessen, J J; Hamberg, O; Gudmand-Høyer, E

    1990-01-01

    …and the accuracy with which 5 g and 20 g doses of lactulose could be calculated from the H2 excretion after their ingestion by means of a 10 g lactulose standard. The influence of different lengths of the test period, different definitions of the baseline and the significance of standard meals and peak H2 concentrations was also studied. Regardless of baseline definition, estimates of malabsorption were most precise if areas under the H2 concentration vs. time curves for four hours or more from the start of the excess H2 excretion were used. The median deviations from the expected values were 20-30% (5-60%, interquartile range). This corresponded to the deviation in reproducibility of the standard dose. We suggest that individual estimates of carbohydrate malabsorption by means of H2 breath tests should be interpreted with caution if tests of reproducibility are not incorporated. Both areas under curves and peak H…

  14. Standard Test Method for Impact Resistance of Monolithic Polycarbonate Sheet by Means of a Falling Weight

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    1995-01-01

    1.1 This test method covers the determination of the energy required to initiate failure in monolithic polycarbonate sheet material under specified conditions of impact using a free falling weight. 1.2 Two specimen types are defined as follows: 1.2.1 Type A consists of a flat plate test specimen and employs a clamped ring support. 1.2.2 Type B consists of a simply supported three-point loaded beam specimen (Fig. 1) and is recommended for use with material which can not be failed using the Type A specimen. For a maximum drop height of 6.096 m (20 ft) and a maximum drop weight of 22.68 kg (50 lb), virgin polycarbonate greater than 12.70 mm (1/2 in.) thick will probably require use of the Type B specimen. Note 1 - See also ASTM Methods: D 1709, D 2444 and D 3029. This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of reg...

  15. Comparing biomarker measurements to a normal range: when to use standard error of the mean (SEM) or standard deviation (SD) confidence intervals tests

    Science.gov (United States)

    This commentary is the second of a series outlining one specific concept in interpreting biomarkers data. In the first, an observational method was presented for assessing the distribution of measurements before making parametric calculations. Here, the discussion revolves around...
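
    Whatever decision rule one adopts, the two intervals answer different questions; the sketch below, with arbitrary data, contrasts mean ± 1.96·SD (approximate spread of individual values under normality) with mean ± 1.96·SEM (precision of the estimated mean).

        import numpy as np

        rng = np.random.default_rng(3)
        biomarker = rng.normal(loc=50.0, scale=8.0, size=40)   # arbitrary measurements

        m = biomarker.mean()
        sd = biomarker.std(ddof=1)
        sem = sd / np.sqrt(len(biomarker))

        print(f"mean = {m:.1f}")
        print(f"SD interval  (spread of individuals): {m - 1.96*sd:.1f} .. {m + 1.96*sd:.1f}")
        print(f"SEM interval (precision of the mean): {m - 1.96*sem:.1f} .. {m + 1.96*sem:.1f}")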

  16. Damage evolution in TWIP and standard austenitic steel by means of 3D X ray tomography

    Energy Technology Data Exchange (ETDEWEB)

    Fabrègue, D., E-mail: damien.fabregue@insa-lyon.fr [Université de Lyon, CNRS, F-69621 Villeurbanne (France); INSA-Lyon, MATEIS UMR5510, F-69621 Villeurbanne (France); Landron, C. [Université de Lyon, CNRS, F-69621 Villeurbanne (France); INSA-Lyon, MATEIS UMR5510, F-69621 Villeurbanne (France); Bouaziz, O. [ArcelorMittal Research, Voie Romaine-BP30320, F-57283 Maizières les Metz (France); Maire, E. [Université de Lyon, CNRS, F-69621 Villeurbanne (France); INSA-Lyon, MATEIS UMR5510, F-69621 Villeurbanne (France)

    2013-09-01

    The evolution of ductile damage in Fe–22Mn–0.6C austenitic TWIP steel is reported for the first time by means of in-situ tensile tests under 3D X-ray tomography. The comparison with another fully austenitic steel (316 stainless steel) is also carried out. The damage process of TWIP steel involves intense nucleation of small voids combined with significant growth of the biggest cavities, whereas the macroscopic triaxiality remains constant. Due to this high nucleation rate, the average cavity diameter remains constant, unlike in the 316 stainless steel.

  17. A report on the investigation of aging qualification by means of industrial standards

    International Nuclear Information System (INIS)

    Gradin, L.P.; Farina, T.C.

    1985-01-01

    It is a requirement to show that age-sensitive safety-related electrical equipment in a nuclear power plant will perform its safety functions satisfactorily at any time during the life of the plant. Naturally or artificially aged equipment is usually used in type test programs to demonstrate performance, or an analysis supported by partial test data is performed. These methods often become costly and burdensome tasks, or cannot be repeated on test sample equipment to duplicate installed equipment that is no longer manufactured. This report describes an alternative approach to determining the expected life of electrical equipment by use of general (non-nuclear) industry standards. The commercial electrical industry has developed a technical basis for using the generally accepted Arrhenius aging methodology. This methodology is not used as an absolute, but as a tool for comparison with successful past history, using methods which have evolved over decades
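
    The Arrhenius methodology referred to above is typically applied through an acceleration factor between service and accelerated-aging temperatures; the sketch below uses the standard expression with placeholder activation energy and temperatures, not values from any particular standard.

        import math

        BOLTZMANN_EV = 8.617e-5  # Boltzmann constant in eV/K

        def arrhenius_acceleration(e_a_ev, t_use_c, t_test_c):
            """Acceleration factor between service and accelerated-aging temperatures."""
            t_use = t_use_c + 273.15
            t_test = t_test_c + 273.15
            return math.exp((e_a_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_test))

        # Placeholder values: 1.0 eV activation energy, 50 C service, 121 C oven aging.
        af = arrhenius_acceleration(1.0, 50.0, 121.0)
        years_per_oven_week = af * 7.0 / 365.25
        print(f"acceleration factor = {af:.0f}, ~{years_per_oven_week:.1f} service years per oven week")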

  18. Analysis of ultrafiltration failure in peritoneal dialysis patients by means of standard peritoneal permeability analysis.

    Science.gov (United States)

    Ho-dac-Pannekeet, M M; Atasever, B; Struijk, D G; Krediet, R T

    1997-01-01

    Ultrafiltration failure (UFF) is a complication of peritoneal dialysis (PD) treatment that occurs especially in long-term patients. Etiological factors include a large effective peritoneal surface area [measured as a high mass transfer area coefficient (MTAC) of creatinine], a high effective lymphatic absorption rate (ELAR), a large residual volume, or combinations of these. The prevalence and etiology of UFF were studied and the contribution of transcellular water transport (TCWT) was analyzed. A new definition of UFF and guidelines for the analysis of its etiology were derived from the results. Peritoneal dialysis unit in the Academic Medical Center in Amsterdam. Cross-sectional study of standard peritoneal permeability analyses (4-hr dwells, dextran 70 as volume marker) with 1.36% glucose in 68 PD patients. In patients with negative net UF (change in intraperitoneal volume, dIPV), the residual volume was lower (p = 0.03) and the transcapillary ultrafiltration rate (TCUFR) was lower (p = 0.01). Ultrafiltration failure was associated with a high MTAC creatinine in 3 patients, a high ELAR in 4 patients, and a combination of factors in one. As an additional possible cause, TCWT was studied, using the sodium gradient in the first hour of the dwell, corrected for diffusion (dNA). Five patients had dNA > 5 mmol/L, indicating normal TCWT. The 3 patients with lower dNA had a lower TCUFR (p = 0.04). A smaller difference was found between dIPV with 3.86% and 1.36% glucose (p = 0.04) compared to the dNA > 5 mmol/L group, but no differences were present for MTAC creatinine, ELAR, residual volume, or glucose absorption. In addition to known factors, impairment of TCWT can be a cause of UFF. A standardized dwell with 1.36% glucose overestimates UFF. Therefore, 3.86% glucose should be used for identification of patients with UFF, especially because it provides additional information on TCWT. Ultrafiltration failure can be defined in terms of net UF during a standardized exchange.

  19. Does non-standard work mean non-standard health? Exploring links between non-standard work schedules, health behavior, and well-being

    Directory of Open Access Journals (Sweden)

    Megan R. Winkler

    2018-04-01

    Full Text Available The last century has seen dramatic shifts in population work circumstances, leading to an increasing normalization of non-standard work schedules (NSWSs), defined as non-daytime, irregular hours. An ever-growing body of evidence links NSWSs to a host of non-communicable chronic conditions; yet, these associations primarily concentrate on the physiologic mechanisms created by circadian disruption and insufficient sleep. While important, not all NSWSs create such chronobiologic disruption, and other aspects of working time and synchronization could be important to the relationships between work schedules and chronic disease. Leveraging survey data from Project EAT, a population-based study with health-related behavioral and psychological data from U.S. adults aged 25–36 years, this study explored the risks for a broad range of less healthful behavioral and well-being outcomes among NSWS workers compared to standard schedule workers (n = 1402). Variations across different NSWSs (evening, night/rotating, and irregular schedules) were also explored. Results indicated that, relative to standard schedule workers, workers with NSWSs are at increased risk for non-optimal sleep, substance use, greater recreational screen time, worse dietary practices, obesity, and depression. There was minimal evidence to support differences in relative risks across workers with different types of NSWSs. The findings provide insight into the potential links between NSWSs and chronic disease and indicate the relevance that social disruption and daily health practices may have in the production of health and well-being outcomes among working populations. Keywords: United States, Work schedule tolerance, Health behavior, Mental health, Substance abuse, Obesity

  20. Data Citation Standard: A Means to Support Data Sharing, Attribution, and Traceability

    Directory of Open Access Journals (Sweden)

    McCallum I.

    2013-04-01

    Full Text Available An important incentive for scientists and researchers is the recognition and renown given to them in citations of their work. While citation rules are well developed for the use of papers published by others, very few rules are available for the citation of data made available by others. Increasingly, citation of the source of data is also requested in the context of socially relevant topics, such as climate change and its potential impacts. Providing means for data citation would be a strong incentive for data sharing. Georeferenced data are crucial for addressing many of the burning societal problems and to support related interdisciplinary research. The lack of a widely accepted method for giving credit to those who make their data freely available and for tracking the use of data throughout their life-cycle hampers data sharing. Furthermore, only clear and transparent data citation allows other scientists to obtain the identical data to replicate findings or for further research.

  1. Novel surgical performance evaluation approximates Standardized Incidence Ratio with high accuracy at simple means.

    Science.gov (United States)

    Gabbay, Itay E; Gabbay, Uri

    2013-01-01

    Excess adverse events may be attributable to poor surgical performance but also to case-mix, which is controlled for through the Standardized Incidence Ratio (SIR). SIR calculations can be complicated, resource-consuming, and unfeasible in some settings. This article suggests a novel method for SIR approximation. In order to evaluate a potential SIR surrogate measure we predefined acceptance criteria. We developed a new measure, the Approximate Risk Index (ARI). The "Number Needed for Event" (NNE) is the theoretical number of patients needed "to produce" one adverse event. ARI is defined as the quotient of the group of patients needed for no observed events (Ge) by the total patients treated (Ga). Our evaluation compared 2500 surgical units and over 3 million heterogeneous-risk surgical patients that were induced through a computerized simulation. Surgical units' data were computed for SIR and ARI to evaluate compliance with the predefined criteria. Approximation was evaluated by correlation analysis and performance-prediction capability by Receiver Operating Characteristic (ROC) analysis. ARI strongly correlates with SIR (r² = 0.87) and shows excellent prediction capability (area under the ROC curve above 0.9), with 87% sensitivity and 91% specificity. ARI provides good approximation of SIR and excellent prediction capability. ARI is simple and cost-effective as it requires thorough risk evaluation of only the adverse-event patients. ARI can provide a crucial screening and performance evaluation quality control tool. The ARI method may suit other clinical and epidemiological settings where a relatively small fraction of the entire population is affected. Copyright © 2013 Surgical Associates Ltd. Published by Elsevier Ltd. All rights reserved.

  2. ACHIEVEMENT MOTIVATION AS A MEANS OF PROFESSIONAL DEVELOPMENT OF TEACHERS AND ADMINISTRATORS IN THE IMPLEMENTATION OF FEDERAL STATE EDUCATIONAL STANDARDS

    Directory of Open Access Journals (Sweden)

    Е. А. Сиденко

    2014-01-01

    Full Text Available Dissatisfaction with the results of public school education, and with their inadequacy to modern requirements and expectations, is gradually increasing. Until recently, the real benchmarks of general education in the traditional school remained specific knowledge and skills in individual school subjects. Behind these subject results the personality of the child is lost, although the child's development should be the meaning and purpose of education. The second-generation Federal State Educational Standard was created to solve these problems. In this article the author discusses the difficulties faced by educational institutions in connection with the transition to the federal state general education standard. The author developed and validated a model of training based on the formation of learners' achievement motivation through the acquisition of personal meaning.

  3. Ge well detector calibration by means of a trial and error procedure using the dead layers as a unique parameter in a Monte Carlo simulation

    International Nuclear Information System (INIS)

    Courtine, Fabien; Pilleyre, Thierry; Sanzelle, Serge; Miallier, Didier

    2008-01-01

    The project aimed at modelling an HPGe well detector in order to predict its photon-counting efficiency by means of the Monte Carlo simulation code GEANT4. Although a qualitative and quantitative description of the crystal and housing was available, uncertainties were associated with the parameters controlling the detector response. This led to poor agreement between the efficiency calculated on the basis of nominal data and the actual efficiency experimentally measured with a 137Cs point source. It was then decided to improve the model by parameterizing it within a trial-and-error method. The distribution of the dead layers was adopted as the unique parameter, in order to explore the possibilities and pertinence of this parameter. In the course of the work, it appeared necessary to introduce the possibility that the thickness of the dead layers was not uniform over a given surface. At the end of the process, the results allowed the conclusion that the approach was able to give a model adapted to practical application, with satisfactory precision in the calculated efficiency. The pattern of the 'dead layers' that was obtained is characterized by a variable thickness which seems to be physically relevant. It implicitly and partly accounts for effects that do not originate from actual dead layers, such as incomplete charge collection. Such effects, which are not easily accounted for, can in a first approximation be represented by 'dead layers'; this is an advantage of the parameterization that was adopted.
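
    The calibration loop described, adjusting the dead-layer parameter until the simulated efficiency matches the measured one, can be sketched generically; simulate_efficiency below is a purely hypothetical stand-in for a GEANT4 run, and the single-thickness bisection is a simplification of the variable-thickness pattern the authors actually fitted.

        import math

        def simulate_efficiency(dead_layer_mm):
            """Hypothetical stand-in for a GEANT4 run of the well-detector model:
            thicker dead layers absorb more photons and lower the counting efficiency."""
            return 0.35 * math.exp(-1.8 * dead_layer_mm)

        def fit_dead_layer(measured_eff, lo=0.0, hi=2.0, tol=1e-4):
            """Trial-and-error (bisection) on a single dead-layer thickness parameter
            until the simulated efficiency reproduces the 137Cs point-source measurement."""
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if simulate_efficiency(mid) > measured_eff:
                    lo = mid   # simulated efficiency too high -> need a thicker dead layer
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        print(fit_dead_layer(measured_eff=0.20))   # illustrative measured value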

  4. A Simulation Analysis of Errors in the Measurement of Standard Electrochemical Rate Constants from Phase-Selective Impedance Data.

    Science.gov (United States)

    1987-09-30

    Only fragments of this report's abstract are legible on the scanned documentation page: the simulation considers the AC current, including its time dependence at a growing DME, at a given fixed potential either in the presence or the absence of …; the relative error in k_ob(app) is relatively small for k_s(true) of up to 0.5 cm s⁻¹ and increases rapidly for larger rate constants as k_ob approaches …

  5. A Comparison of AOP Classification Based on Difficulty, Importance, and Frequency by Cluster Analysis and Standardized Mean

    International Nuclear Information System (INIS)

    Choi, Sun Yeong; Jung, Wondea

    2014-01-01

    In Korea, there are plants that have more than one hundred kinds of abnormal operation procedures (AOPs). Therefore, operators have started to recognize the importance of classifying the AOPs. They should pay attention to those AOPs required to take emergency measures against an abnormal status that has a more serious effect on plant safety and/or occurs more often. We suggested a measure for prioritizing AOPs for training purposes based on difficulty, importance, and frequency. A DIF analysis, based on how difficult a task is, how important it is, and how frequently it occurs, is a well-known method of assessing performance and prioritizing training needs and planning. We used an SDIF-mean (Standardized DIF-mean) to prioritize AOPs in the previous paper. For the SDIF-mean, we standardized the three kinds of data respectively. The results of this research will be utilized not only to understand the AOP characteristics at a job analysis level but also to develop an effective AOP training program. The purpose of this paper is to perform a cluster analysis for an AOP classification and to compare the results of the cluster analysis with those obtained by a standardized mean based on difficulty, importance, and frequency. In this paper, we categorized AOPs into three groups by a cluster analysis based on D, I, and F. Clustering is the classification of similar objects into groups so that each group shares some common characteristics. In addition, we compared the result of the cluster analysis in this paper with the classification result by the SDIF-mean in the previous paper. From the comparison, we found that a reevaluation may be required to assign a training interval for the AOPs of group C' in the previous paper, which have a lower SDIF-mean. The reason for this is that some of the AOPs of group C' have quite high D and I values while they have the lowest frequencies. From an educational point of view, AOPs in the group which have the highest difficulty and importance, but

  6. A Comparison of AOP Classification Based on Difficulty, Importance, and Frequency by Cluster Analysis and Standardized Mean

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Sun Yeong; Jung, Wondea [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2014-05-15

    In Korea, some plants have more than one hundred kinds of abnormal operation procedures (AOPs). Operators have therefore started to recognize the importance of classifying AOPs: they should pay particular attention to those AOPs that require emergency measures against an abnormal status with a more serious effect on plant safety and/or that occur more often. We previously suggested a measure for prioritizing AOPs for training purposes based on difficulty, importance, and frequency. A DIF analysis, based on how difficult a task is, how important it is, and how frequently it occurs, is a well-known method of assessing performance, prioritizing training needs, and planning. In the previous paper we used an SDIF-mean (Standardized DIF-mean) to prioritize AOPs, standardizing the three kinds of data separately. The results of this research will be used not only to understand AOP characteristics at the job-analysis level but also to develop an effective AOP training program. The purpose of this paper is to perform a cluster analysis for AOP classification and to compare its results with those obtained from the standardized mean based on difficulty, importance, and frequency. Here we categorized AOPs into three groups by cluster analysis based on D, I, and F; clustering is the classification of similar objects into groups such that each group shares some common characteristics. In addition, we compared the result of the cluster analysis with the classification by the SDIF-mean in the previous paper. From the comparison, we found that a reevaluation may be required when assigning a training interval for the AOPs of group C' in the previous paper, which have a lower SDIF-mean. The reason is that some of the AOPs of group C' have quite high D and I values while having the lowest frequencies. From an educational point of view, AOPs in the group which have the highest difficulty and importance, but

  7. Combining Mean and Standard Deviation of Hounsfield Unit Measurements from Preoperative CT Allows More Accurate Prediction of Urinary Stone Composition Than Mean Hounsfield Units Alone.

    Science.gov (United States)

    Tailly, Thomas; Larish, Yaniv; Nadeau, Brandon; Violette, Philippe; Glickman, Leonard; Olvera-Posada, Daniel; Alenezi, Husain; Amann, Justin; Denstedt, John; Razvi, Hassan

    2016-04-01

    The mineral composition of a urinary stone may influence its surgical and medical treatment. Previous attempts at identifying stone composition based on mean Hounsfield Units (HUm) have had varied success. We aimed to evaluate the additional use of the standard deviation of HU (HUsd) to more accurately predict stone composition. We identified patients from two centers who had undergone urinary stone treatment between 2006 and 2013 and had mineral stone analysis and a computed tomography (CT) available. HUm and HUsd of the stones were compared with ANOVA. Receiver operating characteristic analysis with area under the curve (AUC), Youden index, and likelihood ratio calculations were performed. Data were available for 466 patients. The major components were calcium oxalate monohydrate (COM), uric acid, hydroxyapatite, struvite, brushite, cystine, and calcium oxalate dihydrate (COD) in 41.4%, 19.3%, 12.4%, 7.5%, 5.8%, 5.4%, and 4.7% of patients, respectively. The HUm of uric acid and brushite stones was significantly lower and higher, respectively, than the HUm of any other stone type. HUm and HUsd were most accurate in predicting uric acid, with an AUC of 0.969 and 0.851, respectively. The combined use of HUm and HUsd resulted in an increased positive predictive value and higher likelihood ratios for identifying a stone's mineral composition for all stone types but COM. To the best of our knowledge, this is the first report of CT data aiding in the prediction of brushite stone composition. Both HUm and HUsd can help predict stone composition, and their combined use results in higher likelihood ratios influencing probability.

  8. Time-order errors and standard-position effects in duration discrimination: An experimental study and an analysis by the sensation-weighting model.

    Science.gov (United States)

    Hellström, Åke; Rammsayer, Thomas H

    2015-10-01

    Studies have shown that the discriminability of successive time intervals depends on the presentation order of the standard (St) and the comparison (Co) stimuli. Also, this order affects the point of subjective equality. The first effect is here called the standard-position effect (SPE); the latter is known as the time-order error. In the present study, we investigated how these two effects vary across interval types and standard durations, using Hellström's sensation-weighting model to describe the results and relate them to stimulus comparison mechanisms. In Experiment 1, four modes of interval presentation were used, factorially combining interval type (filled, empty) and sensory modality (auditory, visual). For each mode, two presentation orders (St-Co, Co-St) and two standard durations (100 ms, 1,000 ms) were used; half of the participants received correctness feedback, and half of them did not. The interstimulus interval was 900 ms. The SPEs were negative (i.e., a smaller difference limen for St-Co than for Co-St), except for the filled-auditory and empty-visual 100-ms standards, for which a positive effect was obtained. In Experiment 2, duration discrimination was investigated for filled auditory intervals with four standards between 100 and 1,000 ms, an interstimulus interval of 900 ms, and no feedback. Standard duration interacted with presentation order, here yielding SPEs that were negative for standards of 100 and 1,000 ms, but positive for 215 and 464 ms. Our findings indicate that the SPE can be positive as well as negative, depending on the interval type and standard duration, reflecting the relative weighting of the stimulus information, as is described by the sensation-weighting model.

  9. Dealing with missing standard deviation and mean values in meta-analysis of continuous outcomes: a systematic review.

    Science.gov (United States)

    Weir, Christopher J; Butcher, Isabella; Assi, Valentina; Lewis, Stephanie C; Murray, Gordon D; Langhorne, Peter; Brady, Marian C

    2018-03-07

    Rigorous, informative meta-analyses rely on availability of appropriate summary statistics or individual participant data. For continuous outcomes, especially those with naturally skewed distributions, summary information on the mean or variability often goes unreported. While full reporting of original trial data is the ideal, we sought to identify methods for handling unreported mean or variability summary statistics in meta-analysis. We undertook two systematic literature reviews to identify methodological approaches used to deal with missing mean or variability summary statistics. Five electronic databases were searched, in addition to the Cochrane Colloquium abstract books and the Cochrane Statistics Methods Group mailing list archive. We also conducted cited reference searching and emailed topic experts to identify recent methodological developments. Details recorded included the description of the method, the information required to implement the method, any underlying assumptions and whether the method could be readily applied in standard statistical software. We provided a summary description of the methods identified, illustrating selected methods in example meta-analysis scenarios. For missing standard deviations (SDs), following screening of 503 articles, fifteen methods were identified in addition to those reported in a previous review. These included Bayesian hierarchical modelling at the meta-analysis level; summary statistic level imputation based on observed SD values from other trials in the meta-analysis; a practical approximation based on the range; and algebraic estimation of the SD based on other summary statistics. Following screening of 1124 articles for methods estimating the mean, one approximate Bayesian computation approach and three papers based on alternative summary statistics were identified. Illustrative meta-analyses showed that when replacing a missing SD the approximation using the range minimised loss of precision and generally
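    A few of the method families named above (algebraic back-calculation from other summary statistics and a practical range-based approximation) can be written down directly. The formulas below are commonly cited approximations under a normal assumption and are given for illustration; they are not necessarily the exact variants evaluated in the review.

```python
import math

def sd_from_se(se, n):
    """Algebraic back-calculation: SE = SD / sqrt(n)."""
    return se * math.sqrt(n)

def sd_from_ci(lower, upper, n, z=1.96):
    """SD from a 95% confidence interval of the mean (normal approximation)."""
    return math.sqrt(n) * (upper - lower) / (2 * z)

def sd_from_range(minimum, maximum):
    """Rough range-based approximation (range/4), one of the practical rules."""
    return (maximum - minimum) / 4.0

# Illustrative numbers only.
print(sd_from_se(1.2, n=40))          # ~7.6
print(sd_from_ci(10.5, 14.1, n=40))   # ~5.8
print(sd_from_range(3.0, 27.0))       # 6.0
```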

  10. Standard Practice for Minimizing Dosimetry Errors in Radiation Hardness Testing of Silicon Electronic Devices Using Co-60 Sources

    CERN Document Server

    American Society for Testing and Materials. Philadelphia

    2010-01-01

    1.1 This practice covers recommended procedures for the use of dosimeters, such as thermoluminescent dosimeters (TLD's), to determine the absorbed dose in a region of interest within an electronic device irradiated using a Co-60 source. Co-60 sources are commonly used for the absorbed dose testing of silicon electronic devices. Note 1—This absorbed-dose testing is sometimes called “total dose testing” to distinguish it from “dose rate testing.” Note 2—The effects of ionizing radiation on some types of electronic devices may depend on both the absorbed dose and the absorbed dose rate; that is, the effects may be different if the device is irradiated to the same absorbed-dose level at different absorbed-dose rates. Absorbed-dose rate effects are not covered in this practice but should be considered in radiation hardness testing. 1.2 The principal potential error for the measurement of absorbed dose in electronic devices arises from non-equilibrium energy deposition effects in the vicinity o...

  11. Composite Gauss-Legendre Quadrature with Error Control

    Science.gov (United States)

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
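    The abstract does not spell out the error-control scheme, but a common way to control the approximation error in composite Gauss-Legendre quadrature is to double the number of subintervals until two successive estimates agree within a tolerance. A sketch under that assumption:

```python
import numpy as np

def composite_gauss_legendre(f, a, b, n_sub, n_points=4):
    """Composite Gauss-Legendre rule: an n_points-point rule on each of n_sub subintervals."""
    nodes, weights = np.polynomial.legendre.leggauss(n_points)  # nodes/weights on [-1, 1]
    edges = np.linspace(a, b, n_sub + 1)
    total = 0.0
    for left, right in zip(edges[:-1], edges[1:]):
        half = 0.5 * (right - left)
        mid = 0.5 * (right + left)
        total += half * np.sum(weights * f(mid + half * nodes))
    return total

def integrate_with_error_control(f, a, b, tol=1e-10, n_points=4):
    """Double the number of subintervals until successive estimates agree within tol."""
    n_sub, previous = 1, composite_gauss_legendre(f, a, b, 1, n_points)
    while True:
        n_sub *= 2
        current = composite_gauss_legendre(f, a, b, n_sub, n_points)
        if abs(current - previous) < tol:
            return current, abs(current - previous)
        previous = current

value, err_est = integrate_with_error_control(np.sin, 0.0, np.pi)
print(value, err_est)   # ~2.0, with the change between refinements as the error estimate
```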

  12. Phase transitions in scale-free neural networks: Departure from the standard mean-field universality class

    International Nuclear Information System (INIS)

    Aldana, Maximino; Larralde, Hernan

    2004-01-01

    We investigate the nature of the phase transition from an ordered to a disordered state that occurs in a family of neural network models with noise. These models are closely related to the majority voter model, where a ferromagneticlike interaction between the elements prevails. Each member of the family is distinguished by the network topology, which is determined by the probability distribution of the number of incoming links. We show that for homogeneous random topologies, the phase transition belongs to the standard mean-field universality class, characterized by the order parameter exponent β=1/2. However, for scale-free networks we obtain phase transition exponents ranging from 1/2 to infinity. Furthermore, we show the existence of a phase transition even for values of the scale-free exponent in the interval (1.5,2], where the average network connectivity diverges

  13. Risk management and errors in the surgical clinic of Serres hospital compared with the requirements of standard OHSAS 18001: 1999

    Directory of Open Access Journals (Sweden)

    Maria Eleni Megalomystaka

    2016-12-01

    Full Text Available The purpose of this study was to investigate the measures implemented to manage risks at work in the surgical clinic of a public hospital in Northern Greece, in relation to the requirements of the standard OHSAS 18001:1999, and to outline an integrated program for managing those risks. The right to safe, high-quality patient care and to the management of adverse events is part of the quality system and must be pursued by every health organization. In recent years Greece has taken measures to align with European Union directives on workplace safety, and in this direction the hospital has taken the initiative to reduce accidents and improve working conditions. ELOT 1801 is a model for the management of health and safety; it is compatible with, and technically equivalent to, the corresponding BSI-OHSAS 18001:1999. The investigation found that the implementation of health and safety policy in the surgical clinic under study reflects a willingness by the authorities to adopt and implement procedures that contribute to the proper management and reduction of adverse events. However, improvement actions related to staff training can still be made in the provision of health services, and staffing the department adequately and equipping it with sufficient consumables are considered necessary.

  14. Standard error of measurement of 5 health utility indexes across the range of health for use in estimating reliability and responsiveness.

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M; Feeny, David; Cherepanov, Dasha; Fryback, Dennis G

    2011-01-01

    Standard errors of measurement (SEMs) of health-related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics, and is a component of reliability. To estimate the SEM of 5 HRQoL indexes. The National Health Measurement Study (NHMS) was a population-based survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures. A total of 3844 randomly selected adults from the noninstitutionalized population aged 35 to 89 y in the contiguous United States and 265 cataract patients. The SF6-36v2™, QWB-SA, EQ-5D, HUI2, and HUI3 were included. An item-response theory approach captured joint variation in indexes into a composite construct of health (theta). The authors estimated 1) the test-retest standard deviation (SEM-TR) from COMHS, 2) the structural standard deviation (SEM-S) around theta from NHMS, and 3) reliability coefficients. SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2), and 0.134 (HUI3), whereas SEM-S was 0.071, 0.094, 0.084, 0.074, and 0.117, respectively. These yield reliability coefficients 0.66 (COMHS) and 0.71 (NHMS) for SF-6D, 0.59 and 0.64 for QWB-SA, 0.61 and 0.70 for EQ-5D, 0.64 and 0.80 for HUI2, and 0.75 and 0.77 for HUI3, respectively. The SEM varied across levels of health, especially for HUI2, HUI3, and EQ-5D, and was influenced by ceiling effects. Limitations. Repeated measures were 5 mo apart, and estimated theta contained measurement error. The 2 types of SEM are similar and substantial for all the indexes and vary across health.
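    For readers who want to connect the two quantities, SEM and reliability are linked in classical test theory by SEM = SD_total * sqrt(1 - reliability), equivalently reliability = 1 - (SEM/SD_total)^2. The snippet below illustrates the relation with placeholder numbers; it does not reproduce the NHMS/COMHS estimates, which were obtained on an IRT-based theta scale.

```python
def reliability_from_sem(sem, sd_total):
    """Classical test theory: reliability = 1 - SEM^2 / SD_total^2."""
    return 1.0 - (sem / sd_total) ** 2

def sem_from_reliability(reliability, sd_total):
    """Inverse relation: SEM = SD_total * sqrt(1 - reliability)."""
    return sd_total * (1.0 - reliability) ** 0.5

# Illustrative values only (not taken from the NHMS/COMHS data).
sd_index = 0.150      # assumed SD of an HRQoL index in the population
sem_index = 0.084     # assumed standard error of measurement
print(f"reliability ~ {reliability_from_sem(sem_index, sd_index):.2f}")
print(f"SEM at reliability 0.70 ~ {sem_from_reliability(0.70, sd_index):.3f}")
```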

  15. Use of a pressure sensing sheath: comparison with standard means of blood pressure monitoring in catheterization procedures.

    Science.gov (United States)

    Purdy, Phillip D; South, Charles; Klucznik, Richard P; Liu, Kenneth C; Novakovic, Robin L; Puri, Ajit S; Pride, G Lee; Aagaard-Kienitz, Beverly; Ray, Abishek; Elliott, Alan C

    2017-08-01

    Monitoring of blood pressure (BP) during procedures is variable, depending on multiple factors. Common methods include sphygmomanometer (BP cuff), separate radial artery catheterization, and side port monitoring of an indwelling sheath. Each means of monitoring has disadvantages, including time consumption, added risk, and signal dampening due to multiple factors. We sought an alternative approach to monitoring during procedures in the catheterization laboratory. A new technology involving a 330 µm fiberoptic sensor embedded in the wall of a sheath structure was tested against both radial artery catheter and sphygmomanometer readings obtained simultaneous with readings recorded from the pressure sensing system (PSS). Correlations and Bland-Altman analysis were used to determine whether use of the PSS could substitute for these standard techniques. The results indicated highly significant correlations in systolic, diastolic, and mean arterial pressures (MAP) when compared against radial artery catheterization (p<0.0001), and MAP means differed by <4%. Bland-Altman analysis of the data suggested that the sheath measurements can replace a separate radial artery catheter. While less striking, significant correlations were seen when PSS readings were compared against BP cuff readings. The PSS has competitive functionality to that seen with a dedicated radial artery catheter for BP monitoring and is available immediately on sheath insertion without the added risk of radial catheterization. The sensor is structurally separated from the primary sheath lumen and readings are unaffected by device introduction through the primary lumen. Time delays and potential complications from radial artery catheterization are avoided.

  16. Adaptive algorithm of selecting optimal variant of errors detection system for digital means of automation facility of oil and gas complex

    Science.gov (United States)

    Poluyan, A. Y.; Fugarov, D. D.; Purchina, O. A.; Nesterchuk, V. V.; Smirnova, O. V.; Petrenkova, S. B.

    2018-05-01

    To date, the problems associated with the detection of errors in digital equipment (DE) systems for the automation of explosive facilities of the oil and gas complex remain highly relevant. The problem is especially pressing for facilities where a loss of DE accuracy would inevitably lead to man-made disasters and substantial material damage; at such facilities, diagnostics of the accuracy of DE operation is one of the main elements of the industrial safety management system. This work addresses the problem of selecting the optimal variant of the error detection system according to a validation criterion. Known methods for solving such problems have exponential computational complexity. Thus, to reduce the time needed to solve the problem, the validation criterion is implemented as an adaptive bionic algorithm. Bionic algorithms (BA) have proven effective in solving optimization problems; the advantages of bionic search include adaptability, learning ability, parallelism, and the ability to build hybrid systems by combination [1].

  17. Primary standardization of C-14 by means of CIEMAT/NIST, TDCR and 4πβ-γ methods

    International Nuclear Information System (INIS)

    Kuznetsova, Maria

    2016-01-01

    In this work, the primary standardization of a 14C solution, which emits beta particles with a maximum energy of 156 keV, was carried out by means of three different methods: the CIEMAT/NIST and TDCR (Triple to Double Coincidence Ratio) methods in liquid scintillation systems, and the tracing method in the 4πβ-γ coincidence system. A TRICARB LSC (Liquid Scintillation Counting) system, equipped with two photomultiplier tubes, was used for the CIEMAT/NIST method, with a 3H standard emitting beta particles with a maximum energy of 18.7 keV as the efficiency tracer. A HIDEX 300SL LSC system, equipped with three photomultiplier tubes, was used for the TDCR method. Samples of 14C and 3H for the liquid scintillation systems were prepared using three commercial scintillation cocktails, UltimaGold, Optiphase Hisafe3 and InstaGel-Plus, in order to compare their measurement performance. All samples were prepared with 15 mL of scintillator, in glass vials with low potassium concentration. Known aliquots of the radioactive solution were dropped onto the scintillation cocktails. In order to obtain the quenching parameter curve, a nitromethane carrier solution and 1 mL of distilled water were used. For measurements in the 4πβ-γ system, 60Co was used as the beta-gamma emitter. An SCS (software coincidence system) was applied and the beta efficiency was varied by electronic discrimination. The behavior of the extrapolation curve was predicted with the code ESQUEMA, using the Monte Carlo technique. The 14C activity obtained by the three methods applied in this work was compared and the results were found to be in agreement within the experimental uncertainty. (author)

  18. WAIS-IV administration errors: effects of altered response requirements on Symbol Search and violation of standard surface-variety patterns on Block Design.

    Science.gov (United States)

    Ryan, Joseph J; Swopes-Willhite, Nicole; Franklin, Cassi; Kreiner, David S

    2015-01-01

    This study utilized a sample of 50 college students to assess the possibility that responding to the Wechsler Adult Intelligence Scale-Fourth Edition (WAIS-IV) Symbol Search subtest items with an "x" instead of a "single slash mark" would affect performance. A second sample of 50 college students was used to assess the impact on WAIS-IV Block Design performance of presenting all the items with only red surfaces facing up. The modified Symbol Search and Block Design administrations yielded mean scaled scores and raw scores that did not differ significantly from mean scores obtained with standard administrations. Findings should not be generalized beyond healthy, well-educated young adults.

  19. Standard error of measurement of five health utility indexes across the range of health for use in estimating reliability and responsiveness

    Science.gov (United States)

    Palta, Mari; Chen, Han-Yang; Kaplan, Robert M.; Feeny, David; Cherepanov, Dasha; Fryback, Dennis

    2011-01-01

    Background Standard errors of measurement (SEMs) of health related quality of life (HRQoL) indexes are not well characterized. SEM is needed to estimate responsiveness statistics and provides guidance on using indexes on the individual and group level. SEM is also a component of reliability. Purpose To estimate SEM of five HRQoL indexes. Design The National Health Measurement Study (NHMS) was a population based telephone survey. The Clinical Outcomes and Measurement of Health Study (COMHS) provided repeated measures 1 and 6 months post cataract surgery. Subjects 3844 randomly selected adults from the non-institutionalized population 35 to 89 years old in the contiguous United States and 265 cataract patients. Measurements The SF6-36v2™, QWB-SA, EQ-5D, HUI2 and HUI3 were included. An item-response theory (IRT) approach captured joint variation in indexes into a composite construct of health (theta). We estimated: (1) the test-retest standard deviation (SEM-TR) from COMHS, (2) the structural standard deviation (SEM-S) around the composite construct from NHMS and (3) corresponding reliability coefficients. Results SEM-TR was 0.068 (SF-6D), 0.087 (QWB-SA), 0.093 (EQ-5D), 0.100 (HUI2) and 0.134 (HUI3), while SEM-S was 0.071, 0.094, 0.084, 0.074 and 0.117, respectively. These translate into reliability coefficients for SF-6D: 0.66 (COMHS) and 0.71 (NHMS), for QWB: 0.59 and 0.64, for EQ-5D: 0.61 and 0.70 for HUI2: 0.64 and 0.80, and for HUI3: 0.75 and 0.77, respectively. The SEM varied considerably across levels of health, especially for HUI2, HUI3 and EQ-5D, and was strongly influenced by ceiling effects. Limitations Repeated measures were five months apart and estimated theta contain measurement error. Conclusions The two types of SEM are similar and substantial for all the indexes, and vary across the range of health. PMID:20935280

  20. Scaling prediction errors to reward variability benefits error-driven learning in humans.

    Science.gov (United States)

    Diederen, Kelly M J; Schultz, Wolfram

    2015-09-01

    Effective error-driven learning requires individuals to adapt learning to environmental reward variability. The adaptive mechanism may involve decays in learning rate across subsequent trials, as shown previously, and rescaling of reward prediction errors. The present study investigated the influence of prediction error scaling and, in particular, the consequences for learning performance. Participants explicitly predicted reward magnitudes that were drawn from different probability distributions with specific standard deviations. By fitting the data with reinforcement learning models, we found scaling of prediction errors, in addition to the learning rate decay shown previously. Importantly, the prediction error scaling was closely related to learning performance, defined as accuracy in predicting the mean of reward distributions, across individual participants. In addition, participants who scaled prediction errors relative to standard deviation also presented with more similar performance for different standard deviations, indicating that increases in standard deviation did not substantially decrease "adapters'" accuracy in predicting the means of reward distributions. However, exaggerated scaling beyond the standard deviation resulted in impaired performance. Thus efficient adaptation makes learning more robust to changing variability. Copyright © 2015 the American Physiological Society.
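    The general idea of scaling prediction errors to reward variability can be sketched with a simple delta-rule learner in which the prediction error is divided by the standard deviation of the reward distribution. This toy simulation (all values assumed) only illustrates that scaling makes tracking accuracy more similar across reward variabilities; it is not the reinforcement learning model fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

def track_mean(rewards, true_mean, alpha=0.1, scale_sd=None):
    """Delta-rule tracking of the reward mean; optionally rescale prediction errors by SD.

    Returns the root-mean-square error of the running estimate versus the true mean.
    The estimate starts at the true mean so only steady-state fluctuations are measured.
    """
    estimate, sq_err = true_mean, []
    for r in rewards:
        delta = r - estimate
        if scale_sd is not None:
            delta /= scale_sd                 # prediction error in units of reward SD
        estimate += alpha * delta
        sq_err.append((estimate - true_mean) ** 2)
    return float(np.sqrt(np.mean(sq_err)))

for sd in (5.0, 15.0):
    rewards = rng.normal(50.0, sd, size=5000)
    plain = track_mean(rewards, 50.0)
    scaled = track_mean(rewards, 50.0, scale_sd=sd)
    print(f"reward SD={sd:>4}: RMSE unscaled={plain:.2f}, RMSE scaled={scaled:.2f}")
```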

  1. Estimation of the limit of detection with a bootstrap-derived standard error by a partly non-parametric approach. Application to HPLC drug assays

    DEFF Research Database (Denmark)

    Linnet, Kristian

    2005-01-01

    Bootstrap, HPLC, limit of blank, limit of detection, non-parametric statistics, type I and II errors
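    The record above lists only keywords, but the basic ingredient, a bootstrap-derived standard error for a partly non-parametric detection-limit statistic, can be sketched generically. The blank data, the percentile-based limit-of-blank definition, and all numbers below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical blank-sample responses from an HPLC assay (arbitrary units).
blanks = rng.normal(loc=0.8, scale=0.25, size=30)

def limit_of_blank(x):
    """Non-parametric limit of blank: 95th percentile of the blank responses."""
    return np.percentile(x, 95)

# Bootstrap: resample the blanks with replacement and recompute the statistic.
boot = np.array([
    limit_of_blank(rng.choice(blanks, size=blanks.size, replace=True))
    for _ in range(2000)
])

print(f"LoB estimate: {limit_of_blank(blanks):.3f}")
print(f"bootstrap standard error: {boot.std(ddof=1):.3f}")
```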

  2. Error Patterns

    NARCIS (Netherlands)

    Hoede, C.; Li, Z.

    2001-01-01

    In coding theory the problem of decoding focuses on error vectors. In the simplest situation code words are $(0,1)$-vectors, as are the received messages and the error vectors. Comparison of a received word with the code words yields a set of error vectors. In deciding on the original code word,

  3. Reliability, standard error, and minimum detectable change of clinical pressure pain threshold testing in people with and without acute neck pain.

    Science.gov (United States)

    Walton, David M; Macdermid, Joy C; Nielson, Warren; Teasell, Robert W; Chiasson, Marco; Brown, Lauren

    2011-09-01

    Clinical measurement. To evaluate the intrarater, interrater, and test-retest reliability of an accessible digital algometer, and to determine the minimum detectable change in normal healthy individuals and a clinical population with neck pain. Pressure pain threshold testing may be a valuable assessment and prognostic indicator for people with neck pain. To date, most of this research has been completed using algometers that are too resource intensive for routine clinical use. Novice raters (physiotherapy students or clinical physiotherapists) were trained to perform algometry testing over 2 clinically relevant sites: the angle of the upper trapezius and the belly of the tibialis anterior. A convenience sample of normal healthy individuals and a clinical sample of people with neck pain were tested by 2 different raters (all participants) and on 2 different days (healthy participants only). Intraclass correlation coefficient (ICC), standard error of measurement, and minimum detectable change were calculated. A total of 60 healthy volunteers and 40 people with neck pain were recruited. Intrarater reliability was almost perfect (ICC = 0.94-0.97), interrater reliability was substantial to near perfect (ICC = 0.79-0.90), and test-retest reliability was substantial (ICC = 0.76-0.79). Smaller change was detectable in the trapezius compared to the tibialis anterior. This study provides evidence that novice raters can perform digital algometry with adequate reliability for research and clinical use in people with and without neck pain.
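    SEM and the minimum detectable change are commonly derived from the ICC and the between-subject SD as SEM = SD * sqrt(1 - ICC) and MDC95 = 1.96 * sqrt(2) * SEM. The values below are placeholders, not the study's data; the sketch only shows how the reported quantities relate.

```python
import math

def sem(sd, icc):
    """Standard error of measurement from the sample SD and reliability (ICC)."""
    return sd * math.sqrt(1.0 - icc)

def mdc95(sem_value):
    """Minimum detectable change at 95% confidence for a test-retest design."""
    return 1.96 * math.sqrt(2.0) * sem_value

# Placeholder values for a pressure pain threshold site (kPa).
sd_site, icc_site = 85.0, 0.78
s = sem(sd_site, icc_site)
print(f"SEM ~ {s:.1f} kPa, MDC95 ~ {mdc95(s):.1f} kPa")
```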

  4. Operator errors

    International Nuclear Information System (INIS)

    Knuefer; Lindauer

    1980-01-01

    In addition, spectacular events often involve a combination of component failure and human error. The Rasmussen Report and the German Risk Assessment Study in particular show, for pressurised water reactors, that human error must not be underestimated. Although operator errors, as a form of human error, can never be eliminated entirely, they can be minimized and their effects kept within acceptable limits if thorough training of personnel is combined with an adequate design of the plant against accidents. In contrast to the investigation of engineering errors, the investigation of human errors has so far been carried out with relatively small budgets. Intensified investigation in this field appears to be a worthwhile effort. (orig.)

  5. Standardization of heparins by means of high performance liquid chromatography equipped with a low angle laser light scattering detector

    NARCIS (Netherlands)

    Hennink, W.E.; van den Berg, J.W.A.; Feijen, Jan

    1987-01-01

    This study shows that HPLC-LALLS (high performance liquid chromatography with a light-scattering detector) is a convenient and reliable method for the characterization of standard heparin samples, provided that polyelectrolyte artefacts are suppressed by a suitable dialysis procedure. The method has

  6. The meaning of fear. Emotional standards for children in the Netherlands, 1850-1950 : Was there a western transformation?

    NARCIS (Netherlands)

    Bakker, Nelleke

    2000-01-01

    This essay considers the changes in standards for children's fear and for ways of handling it in Dutch parental guidance literature between 1850 and 1950. Stearns and Haggerty's hypothesis about a Western transformation of fear from an avoidable and relatively unimportant emotion to fear as a normal

  7. Fostering Self-Monitoring of University Students by Means of a Standardized Learning Journal--A Longitudinal Study with Process Analyses

    Science.gov (United States)

    Fabriz, Sabine; Dignath-van Ewijk, Charlotte; Poarch, Gregory; Büttner, Gerhard

    2014-01-01

    The self-regulation of learning behavior is an important key competence for university students. In this presented study, we aimed at fostering students' self-regulation of learning by means of a standardized learning journal. In two of four courses that were included in the study, students had to keep a structured learning diary and/or…

  8. Variability of standard artificial soils: Physico-chemical properties and phenanthrene desorption measured by means of supercritical fluid extraction

    International Nuclear Information System (INIS)

    Bielská, Lucie; Hovorková, Ivana; Komprdová, Klára; Hofman, Jakub

    2012-01-01

    The study is focused on artificial soil which is supposed to be a standardized "soil like" medium. We compared physico-chemical properties and extractability of Phenanthrene from 25 artificial soils prepared according to OECD standardized procedures at different laboratories. A substantial range of soil properties was found, also for parameters which should be standardized because they have an important influence on the bioavailability of pollutants (e.g. total organic carbon ranged from 1.4 to 6.1%). The extractability of Phe was measured by supercritical fluid extraction (SFE) at harsh and mild conditions. Highly variable Phe extractability from different soils (3–89%) was observed. The extractability was strongly related (R² = 0.87) to total organic carbon content, 0.1–2 mm particle size, and humic/fulvic acid ratio in the following multiple regression model: SFE (%) = 1.35 * sand (%) − 0.77 * TOC (%)² + 0.27 * HA/FA. - Highlights: ► We compared properties and extractability of Phe from 25 different artificial soils. ► Substantial range of soil properties was found, also for important parameters. ► Phe extractability was measured by supercritical fluid extraction (SFE) at 2 modes. ► Phe extractability was highly variable from different soils (3–89%). ► Extractability was strongly related to TOC, 0.1–2 mm particles, and HA/FA. - Significant variability in physico-chemical properties exists between artificial soils prepared at different laboratories and affects behavior of contaminants in these soils.
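    The reported regression can be applied directly, as in the sketch below. The squared TOC term follows the formula as printed above (the superscript is reconstructed from the extraction), and the input values are hypothetical; check the original paper before using the model quantitatively.

```python
def sfe_extractability(sand_pct, toc_pct, ha_fa_ratio):
    """Phenanthrene SFE extractability (%) from the regression reported above.

    The squared TOC term follows the printed formula as reconstructed here;
    verify against the original paper before quantitative use.
    """
    return 1.35 * sand_pct - 0.77 * toc_pct ** 2 + 0.27 * ha_fa_ratio

# Two hypothetical artificial soils spanning the reported TOC range (1.4-6.1%).
print(f"{sfe_extractability(70.0, 1.4, 2.0):.1f} %")
print(f"{sfe_extractability(65.0, 6.1, 1.0):.1f} %")
```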

  9. Means to verify the accuracy of CT systems for metrology applications (In the Absence of Established International Standards)

    International Nuclear Information System (INIS)

    Lettenbauer, H.; Georgi, B.; Weib, D.

    2007-01-01

    X-ray computed tomography (CT) reconstructs an unknown object from X-ray projections and has long been used for the qualitative investigation of internal structures in industrial applications. Recently there has been increased interest in applying X-ray cone-beam CT to the task of high-precision dimensional measurement of machined parts, since it is a relatively fast method of measuring both inner and outer geometries of arbitrary complexity. The important information for the user in dimensional metrology is whether the measured elements of a machined part are within the defined tolerances or not. In order to qualify cone-beam CT as an established measurement technology, it must be qualified in the same manner as established measurement technologies such as coordinate measuring machines (CMMs) with tactile or optical sensors. International standards define artefacts, characterized by certain geometrical elements, that are calibrated by certified institutions. CT measurements are performed on the reconstructed object volume, either directly or using an intermediate surface-extraction step. The results of these measurements have to be compared to the values of the calibrated elements; the level of agreement defines the accuracy of the measurements. By using established methods to define measurement uncertainty, a very high level of acceptance in dimensional metrology can be reached. Only if results are comparable to the standards of the established technologies will the barriers to entry into metrology be removed and all the benefits of this technology become available to the user. (authors)

  10. Error analysis by means of acoustic holography

    International Nuclear Information System (INIS)

    Kutzner, J.; Wuestenberg, H.

    1976-01-01

    The possibilities of using acoustical holography in nondestructive testing are discussed. Although the image quality of acoustical holography is reduced compared to optical holography, this technique can give important information about the shape of defects. Especially in the nondestructive testing of thick-walled components, no alternative has existed until now. (orig.) [de

  11. Diagnostic errors in pediatric radiology

    International Nuclear Information System (INIS)

    Taylor, George A.; Voss, Stephan D.; Melvin, Patrice R.; Graham, Dionne A.

    2011-01-01

    Little information is known about the frequency, types and causes of diagnostic errors in imaging children. Our goals were to describe the patterns and potential etiologies of diagnostic error in our subspecialty. We reviewed 265 cases with clinically significant diagnostic errors identified during a 10-year period. Errors were defined as a diagnosis that was delayed, wrong or missed; they were classified as perceptual, cognitive, system-related or unavoidable; and they were evaluated by imaging modality and level of training of the physician involved. We identified 484 specific errors in the 265 cases reviewed (mean:1.8 errors/case). Most discrepancies involved staff (45.5%). Two hundred fifty-eight individual cognitive errors were identified in 151 cases (mean = 1.7 errors/case). Of these, 83 cases (55%) had additional perceptual or system-related errors. One hundred sixty-five perceptual errors were identified in 165 cases. Of these, 68 cases (41%) also had cognitive or system-related errors. Fifty-four system-related errors were identified in 46 cases (mean = 1.2 errors/case) of which all were multi-factorial. Seven cases were unavoidable. Our study defines a taxonomy of diagnostic errors in a large academic pediatric radiology practice and suggests that most are multi-factorial in etiology. Further study is needed to define effective strategies for improvement. (orig.)

  12. New internal standard method for activation analysis and its application. Determination of Co, Ni, Rb, Sr in pepperbush by means of photon activation

    Energy Technology Data Exchange (ETDEWEB)

    Yagi, M.; Masumoto, K. (Tohoku Univ., Sendai (Japan). Lab. of Nuclear Science)

    1984-08-01

    A new internal standard method for activation analysis has been developed. In this method a suitable element present originally in the sample is used as an internal standard and the comparative standard is prepared by applying the standard addition method to the duplicated sample. The present method has the great advantages that the comparative standard spiked with the element of interest has the same matrix as the sample, and then the amount of the element to be determined in the sample can be evaluated easily by using a very simple equation even though the sample and comparative standard are irradiated separately by particles with different flux. Neither correction of the inhomogeneities of flux between the sample and comparative standard, nor that of the self-shielding effects are necessary for the present method. The usefulness of the method was examined through the determination of Co, Ni, Rb and Sr in pepperbush by means of photon activation, and the precision and accuracy of the method were proved to be valid. 29 refs.

  13. Standard-Fractionated Radiotherapy for Optic Nerve Sheath Meningioma: Visual Outcome Is Predicted by Mean Eye Dose

    Energy Technology Data Exchange (ETDEWEB)

    Abouaf, Lucie [Neuro-Ophthalmology Unit, Pierre-Wertheimer Hospital, Hospices Civils de Lyon, Lyon (France); Girard, Nicolas [Radiotherapy-Oncology Department, Lyon Sud Hospital, Hospices Civils de Lyon, Lyon (France); Claude Bernard University, Lyon (France); Lefort, Thibaud [Neuro-Radiology Department, Pierre-Wertheimer Hospital, Hospices Civils de Lyon, Lyon (France); D' hombres, Anne [Claude Bernard University, Lyon (France); Tilikete, Caroline; Vighetto, Alain [Neuro-Ophthalmology Unit, Pierre-Wertheimer Hospital, Hospices Civils de Lyon, Lyon (France); Claude Bernard University, Lyon (France); Mornex, Francoise, E-mail: francoise.mornex@chu-lyon.fr [Claude Bernard University, Lyon (France)

    2012-03-01

    Purpose: Radiotherapy has shown its efficacy in controlling optic nerve sheath meningiomas (ONSM) tumor growth while allowing visual acuity to improve or stabilize. However, radiation-induced toxicity may ultimately jeopardize the functional benefit. The purpose of this study was to identify predictive factors of poor visual outcome in patients receiving radiotherapy for ONSM. Methods and Materials: We conducted an extensive analysis of 10 patients with ONSM with regard to clinical, radiologic, and dosimetric aspects. All patients were treated with conformal radiotherapy and subsequently underwent biannual neuroophthalmologic and imaging assessments. Pretreatment and posttreatment values of visual acuity and visual field were compared with Wilcoxon's signed rank test. Results: Visual acuity values significantly improved after radiotherapy. After a median follow-up time of 51 months, 6 patients had improved visual acuity, 4 patients had improved visual field, 1 patient was in stable condition, and 1 patient had deteriorated visual acuity and visual field. Tumor control rate was 100% at magnetic resonance imaging assessment. Visual acuity deterioration after radiotherapy was related to radiation-induced retinopathy in 2 patients and radiation-induced mature cataract in 1 patient. Study of radiotherapy parameters showed that the mean eye dose was significantly higher in those 3 patients who had deteriorated vision. Conclusions: Our study confirms that radiotherapy is efficient in treating ONSM. Long-term visual outcome may be compromised by radiation-induced side effects. Mean eye dose has to be considered as a limiting constraint in treatment planning.

  14. Decree 2210: by means of which technical standards and allowed proceedings for radioactive material handling are issued

    International Nuclear Information System (INIS)

    1992-01-01

    The object of this Decree is the regulation and handling of radioactive material, in order to protect human health as well as the environment. These regulations apply to: every natural or legal person, public or private, that imports, manufactures, transports, stores, trades, transfers or uses, for industrial, commercial, scientific, medical or any other purpose, apparatus capable of generating ionizing radiation with quantum energy above 5 kiloelectronvolts (keV), or materials containing radionuclides whose activities exceed the exemption maxima; registration, notification and licence granting. It includes: definitions; signposting by means of the basic symbol, which must appear on every object, material and mixture thereof that emits ionizing radiation; control, production, import and export, trade, use and transport in activities involving materials and apparatus capable of generating ionizing radiation; the categorization and labelling of packages; the activity limits for excepted packages; the corresponding values for the different radionuclides; the activity limits for the means of transport; storage; and the handling of radioactive waste

  15. Einstein's error

    International Nuclear Information System (INIS)

    Winterflood, A.H.

    1980-01-01

    In discussing Einstein's Special Relativity theory it is claimed that it violates the principle of relativity itself and that an anomalous sign in the mathematics is found in the factor which transforms one inertial observer's measurements into those of another inertial observer. The apparent source of this error is discussed. Having corrected the error a new theory, called Observational Kinematics, is introduced to replace Einstein's Special Relativity. (U.K.)

  16. Multi-slice spiral CT of the coronary arteries: improved vessel presentation by means of a standard software

    International Nuclear Information System (INIS)

    Schmitt, R.; Froehner, S.; Coblenz, G.; Christopoulos, G.; Brunn, J.; Mueller, M.; Kerber, S.; Fellner, F.

    2001-01-01

    Material and methods: Image data of 151 patients with coronary artery disease were calculated by means of retrospective triggering at four different diastolic delay times in contrast-enhanced CT. The large coronary segments were subsequently reconstructed in two planes with multiplanar volume reconstruction (MPVR). Results: Provided that data sets were acquired in sinus rhythm and at a heart rate below 65/min, the coronary arteries could be depicted over a long distance in singly or doubly angulated reconstruction planes with the help of MPVR. The time required for image reconstruction was reasonable. In addition to the anatomy of the coronary arteries in two different planes, typical CT findings in occlusive coronary artery disease are presented. (orig.) [de

  17. Primary 4πβ-γ coincidence system for standardization of radionuclides by means of plastic scintillators

    International Nuclear Information System (INIS)

    Baccarelli, Aida Maria

    2003-01-01

    The present work describes a 4π(α,β)-γ coincidence system for the absolute measurement of radionuclide activity, using a plastic scintillator in 4π geometry for charged-particle detection and a NaI(Tl) crystal for gamma-ray detection. Several shapes and dimensions of the plastic scintillator were tried in order to obtain the best system configuration. Radionuclides which decay by alpha emission, β-, β+ and electron capture have been standardized. The results showed excellent agreement with a conventional primary system which makes use of a 4π proportional counter for X-ray and charged-particle detection. The system developed in the present work has some advantages compared with conventional systems: it does not need a metal coating on the films used as radioactive source holders. Compared to liquid scintillators, it has the advantage of not needing to be kept in the dark for more than 24 h to allow the phosphorescence induced by ambient light to decay; therefore it can be set to count immediately after the sources are placed inside it. (author)

  18. Rounding errors in weighing

    International Nuclear Information System (INIS)

    Jeach, J.L.

    1976-01-01

    When rounding error is large relative to weighing error, it cannot be ignored when estimating scale precision and bias from calibration data. Further, if the data grouping is coarse, rounding error is correlated with weighing error and may also have a mean quite different from zero. These facts are taken into account in a moment estimation method. A copy of the program listing for the MERDA program that provides moment estimates is available from the author. Experience suggests that if the data fall into four or more cells or groups, it is not necessary to apply the moment estimation method. Rather, the estimate given by equation (3) is valid in this instance. 5 tables

  19. Probabilistic performance estimators for computational chemistry methods: The empirical cumulative distribution function of absolute errors

    Science.gov (United States)

    Pernot, Pascal; Savin, Andreas

    2018-06-01

    Benchmarking studies in computational chemistry use reference datasets to assess the accuracy of a method through error statistics. The commonly used error statistics, such as the mean signed and mean unsigned errors, do not inform end-users on the expected amplitude of prediction errors attached to these methods. We show that, the distributions of model errors being neither normal nor zero-centered, these error statistics cannot be used to infer prediction error probabilities. To overcome this limitation, we advocate for the use of more informative statistics, based on the empirical cumulative distribution function of unsigned errors, namely, (1) the probability for a new calculation to have an absolute error below a chosen threshold and (2) the maximal amplitude of errors one can expect with a chosen high confidence level. Those statistics are also shown to be well suited for benchmarking and ranking studies. Moreover, the standard error on all benchmarking statistics depends on the size of the reference dataset. Systematic publication of these standard errors would be very helpful to assess the statistical reliability of benchmarking conclusions.
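    The two statistics advocated above can be computed directly from a benchmark's absolute errors via the empirical cumulative distribution function. A minimal sketch with synthetic, deliberately skewed errors (all values assumed):

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic benchmark errors (e.g., kcal/mol); deliberately skewed and not zero-centered.
errors = rng.lognormal(mean=0.0, sigma=0.8, size=500) - 0.5
abs_errors = np.abs(errors)

def prob_below(abs_err, threshold):
    """P(|error| < threshold): value of the empirical CDF at the threshold."""
    return np.mean(abs_err < threshold)

def error_at_confidence(abs_err, confidence=0.95):
    """Largest error expected with the chosen confidence: empirical quantile of |error|."""
    return np.quantile(abs_err, confidence)

print(f"P(|error| < 1.0) = {prob_below(abs_errors, 1.0):.2f}")
print(f"95% of calculations have |error| below {error_at_confidence(abs_errors):.2f}")
```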

  20. Bandwagon effects and error bars in particle physics

    Science.gov (United States)

    Jeng, Monwhea

    2007-02-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit "bandwagon effects": reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations.

  1. Bandwagon effects and error bars in particle physics

    International Nuclear Information System (INIS)

    Jeng, Monwhea

    2007-01-01

    We study historical records of experiments on particle masses, lifetimes, and widths, both for signs of expectation bias, and to compare actual errors with reported error bars. We show that significant numbers of particle properties exhibit 'bandwagon effects': reported values show trends and clustering as a function of the year of publication, rather than random scatter about the mean. While the total amount of clustering is significant, it is also fairly small; most individual particle properties do not display obvious clustering. When differences between experiments are compared with the reported error bars, the deviations do not follow a normal distribution, but instead follow an exponential distribution for up to ten standard deviations

  2. Use of precision measurements for the limitation of effects beyond the standard model by means of an effective-field-theoretical approach

    International Nuclear Information System (INIS)

    Bauer, A.

    2006-01-01

    The standard model of elementary particle physics (SM) is perhaps the most significant theory in physics. It describes interacting matter and gauge fields with high precision. Nevertheless, there are a few requirements that are not fulfilled by the SM, for example the incorporation of gravity and neutrino oscillations, among further open questions. On the way to a more comprehensive theory, one can make use of an effective power-series ansatz which describes SM physics as well as new phenomena. We exploit this ansatz to parameterize new effects with the help of a new mass scale and a set of new coupling constants. At lowest order one retrieves the SM; higher-order effects describe the new physics. Requiring certain properties under symmetry transformations yields a definite set of effective operators of mass dimension six. These operators are the starting point of our considerations. First, we calculate decay rates and cross sections for selected processes under the assumption that only one new operator contributes at a time. Assuming that the additional contribution to an observable is smaller than the experimental error, we give upper limits on the new coupling constants as a function of the new mass scale. For this purpose we use leptonic and certain semileptonic precision data. On the one hand, the results presented in this thesis give physicists the opportunity to decide which experiments are good candidates for increased precision; on the other hand, they show which experiments have the most promising potential for discoveries. (orig.)

  3. A randomized controlled trial comparing customized versus standard headrests for head and neck radiotherapy immobilization in terms of set-up errors, patient comfort and staff satisfaction (ICORG 08-09)

    International Nuclear Information System (INIS)

    Howlin, C.; O'Shea, E.; Dunne, M.; Mullaney, L.; McGarry, M.; Clayton-Lea, A.; Finn, M.; Carter, P.; Garret, B.; Thirion, P.

    2015-01-01

    Purpose: To recommend a specific headrest, customized or standard, for head and neck radiotherapy patients in our institution based primarily on an evaluation of set-up accuracy, taking into account a comparison of patient comfort, staff and patient satisfaction, and resource implications. Methods and materials: Between 2008 and 2009, 40 head and neck patients were randomized to either a standard (Arm A, n = 21) or customized (Arm B, n = 19) headrest, and immobilized with a customized thermoplastic mask. Set-up accuracy was assessed using electronic portal images (EPI). Random and systematic set-up errors for each arm were determined from 668 EPIs, which were analyzed by one Radiation Therapist. Patient comfort was assessed using a visual analogue scale (VAS) and staff satisfaction was measured using an in-house questionnaire. Resource implications were also evaluated. Results: The difference in set-up errors between arms was not significant in any direction. However, in this study the standard headrest (SH) arm performed well, with set-up errors comparative to customized headrests (CHs) in previous studies. CHs require regular monitoring and 47% were re-vacuumed making them more resource intensive. Patient comfort and staff satisfaction were comparable in both arms. Conclusion: The SH provided similar treatment accuracy and patient comfort compared with the CH. The large number of CHs that needed to be re-vacuumed undermines their reliability for radiotherapy schedules that extend beyond 34 days from the initial CT scan. Accordingly the CH was more resource intensive without improving the accuracy of positioning, thus the standard headrest is recommended for continued use at our institution

  4. Fast motion-including dose error reconstruction for VMAT with and without MLC tracking

    DEFF Research Database (Denmark)

    Ravkilde, Thomas; Keall, Paul J.; Grau, Cai

    2014-01-01

    ...validate a simple model for fast motion-including dose error reconstruction applicable to intrafractional QA of MLC tracking treatments of moving targets. MLC tracking experiments were performed on a standard linear accelerator with prototype MLC tracking software guided by an electromagnetic transponder... ...of the algorithm for reconstruction of dose and motion-induced dose errors throughout the tracking and non-tracking beam deliveries was quantified. Doses were reconstructed with a mean dose difference relative to the measurements of -0.5% (5.5% standard deviation) for cumulative dose. More importantly, the root-mean-square deviation between reconstructed and measured motion-induced 3%/3 mm γ failure rates (dose error) was 2.6%. The mean computation time for each calculation of dose and dose error was 295 ms. The motion-including dose reconstruction allows accurate temporal and spatial pinpointing of errors in absorbed dose...

  5. Medication Errors - A Review

    OpenAIRE

    Vinay BC; Nikhitha MK; Patel Sunil B

    2015-01-01

    This review article explains the definition of medication errors, the scope of the medication error problem, the types and common causes of medication errors, the monitoring and consequences of medication errors, and the prevention and management of medication errors, supported by tables that are easy to understand.

  6. Auto-calibration of Systematic Odometry Errors in Mobile Robots

    DEFF Research Database (Denmark)

    Bak, Martin; Larsen, Thomas Dall; Andersen, Nils Axel

    1999-01-01

    This paper describes the phenomenon of systematic errors in odometry models in mobile robots and looks at various ways of avoiding it by means of auto-calibration. The systematic errors considered are incorrect knowledge of the wheel base and the gains from encoder readings to wheel displacement....... By auto-calibration we mean a standardized procedure which estimates the uncertainties using only on-board equipment such as encoders, an absolute measurement system and filters; no intervention by operator or off-line data processing is necessary. Results are illustrated by a number of simulations...... and experiments on a mobile robot....
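    A minimal differential-drive odometry model shows how the systematic errors considered above (an incorrect wheel base and an encoder-gain mismatch) bias the dead-reckoned pose. The tick counts and parameter values below are made up, and the on-line auto-calibration itself is beyond this sketch.

```python
import math

def integrate_odometry(ticks, gain_left, gain_right, wheel_base):
    """Dead-reckon a differential-drive pose from per-wheel encoder ticks."""
    x = y = theta = 0.0
    for left_ticks, right_ticks in ticks:
        d_left = gain_left * left_ticks          # wheel displacements [m]
        d_right = gain_right * right_ticks
        d_center = 0.5 * (d_left + d_right)
        d_theta = (d_right - d_left) / wheel_base
        x += d_center * math.cos(theta + 0.5 * d_theta)
        y += d_center * math.sin(theta + 0.5 * d_theta)
        theta += d_theta
    return x, y, theta

# Drive a hypothetical path (encoder tick counts per sampling interval).
path = [(1000, 1000)] * 4 + [(300, 500)] * 3
true_pose = integrate_odometry(path, 1e-3, 1e-3, 0.40)
# Same data, but with a 1% gain mismatch and a 2% wheel-base error.
biased_pose = integrate_odometry(path, 1.01e-3, 1e-3, 0.408)
print("true pose   :", tuple(round(v, 3) for v in true_pose))
print("biased pose :", tuple(round(v, 3) for v in biased_pose))
```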

  7. Medical Error Types and Causes Made by Nurses in Turkey

    Directory of Open Access Journals (Sweden)

    Dilek Kucuk Alemdar

    2013-06-01

    Full Text Available AIM: This study was carried out as a descriptive study in order to determine the types, causes and prevalence of medical errors made by nurses in Turkey. METHOD: Seventy-eight (78) nurses who worked in a hospital randomly selected from five hospitals in Giresun city centre were enrolled in the study. The data were collected by the researchers using the 'Information Form for Nurses' and the 'Medical Error Form'. The Medical Error Form consists of 2 parts and 40 items covering types and causes of medical errors. Nurses' socio-demographic variables and medical error types and causes were evaluated using percentage distributions and means. RESULTS: The mean age of the nurses was 25.5 years, with a standard deviation of 6.03 years. 50% of the nurses had graduated from a health professional high school. 53.8% of the nurses were single, 63.1% had worked for between 1 and 5 years, 71.8% worked day and night shifts and 42.3% worked in medical clinics. The most common types of medical errors were hospital infection at a rate of 15.4%, diagnostic errors at 12.8%, and needle or cutting tool injuries and problems related to drugs with side effects at 10.3%. In the study, 38.5% of the nurses reported tiredness as a major cause of medical error, 36.4% increased workload and 34.6% long working hours. CONCLUSION: As a result of the present study, nurses mentioned hospital infection, diagnostic errors, and needle or cutting tool injuries as the most common medical errors, and fatigue, work overload and long working hours as the most common reasons for medical errors. [TAF Prev Med Bull 2013; 12(3): 307-314

  8. Learning from Errors

    Directory of Open Access Journals (Sweden)

    MA. Lendita Kryeziu

    2015-06-01

    Full Text Available "Errare humanum est" is a well-known and widespread Latin proverb which states that to err is human, and that people make mistakes all the time. However, what counts is that people must learn from mistakes. On these grounds Steve Jobs stated: "Sometimes when you innovate, you make mistakes. It is best to admit them quickly, and get on with improving your other innovations." Similarly, in learning a new language, learners make mistakes; thus it is important to accept them, learn from them, discover the reason why they are made, improve and move on. The significance of studying errors is described by Corder as: "There have always been two justifications proposed for the study of learners' errors: the pedagogical justification, namely that a good understanding of the nature of error is necessary before a systematic means of eradicating them could be found, and the theoretical justification, which claims that a study of learners' errors is part of the systematic study of the learners' language which is itself necessary to an understanding of the process of second language acquisition" (Corder, 1982: 1). Thus the aim of this paper is to analyze errors in the process of second language acquisition and the ways we teachers can benefit from mistakes to help students improve themselves while giving proper feedback.

  9. Error Budgeting

    Energy Technology Data Exchange (ETDEWEB)

    Vinyard, Natalia Sergeevna [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Perry, Theodore Sonne [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Usov, Igor Olegovich [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-10-04

    We calculate opacity from k(hν) = −ln[T(hν)]/(ρL), where T(hν) is the transmission for photon energy hν, ρ is sample density, and L is path length through the sample. The density and path length are measured together by Rutherford backscatter. The error is Δk = (∂k/∂T)ΔT + (∂k/∂(ρL))Δ(ρL). We can re-write this in terms of fractional error as Δk/k = Δln(T)/T + Δ(ρL)/(ρL). Transmission itself is calculated from T = (U−E)/(V−E) = B/B0, where B is the transmitted backlighter (BL) signal and B0 is the unattenuated backlighter signal. Then ΔT/T = Δln(T) = ΔB/B + ΔB0/B0, and consequently Δk/k = (1/T)(ΔB/B + ΔB0/B0) + Δ(ρL)/(ρL). Transmission is measured in the range of 0.2
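
    A minimal numerical check of the propagation described above, with illustrative values that are not from the report: the opacity k = −ln(T)/(ρL) is evaluated together with its fractional error, here using numerical partial derivatives and adding the contributions in quadrature rather than linearly.

        import numpy as np

        def opacity(B, B0, rhoL):
            """k = -ln(T)/(rho*L) with T = B/B0."""
            return -np.log(B / B0) / rhoL

        # Illustrative measured values and 1-sigma uncertainties (not from the report).
        B, dB       = 0.30, 0.30 * 0.02      # transmitted backlighter signal, 2% error
        B0, dB0     = 1.00, 1.00 * 0.02      # unattenuated backlighter signal, 2% error
        rhoL, drhoL = 5.0e-3, 5.0e-3 * 0.03  # areal density rho*L [g/cm^2], 3% error

        k = opacity(B, B0, rhoL)

        # First-order propagation: Var(k) = sum_i (dk/dx_i)^2 * Dx_i^2,
        # with the derivatives taken numerically.
        def partial(f, args, i, h=1e-6):
            bumped = list(args)
            bumped[i] += h * args[i]
            return (f(*bumped) - f(*args)) / (h * args[i])

        args, errs = (B, B0, rhoL), (dB, dB0, drhoL)
        dk = np.sqrt(sum((partial(opacity, args, i) * errs[i]) ** 2 for i in range(3)))
        print(f"k = {k:.1f} cm^2/g,  dk/k = {dk / k:.3f}")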

  10. How to Avoid Errors in Error Propagation: Prediction Intervals and Confidence Intervals in Forest Biomass

    Science.gov (United States)

    Lilly, P.; Yanai, R. D.; Buckley, H. L.; Case, B. S.; Woollons, R. C.; Holdaway, R. J.; Johnson, J.

    2016-12-01

    Calculations of forest biomass and elemental content require many measurements and models, each contributing uncertainty to the final estimates. While sampling error is commonly reported, based on replicate plots, error due to uncertainty in the regression used to estimate biomass from tree diameter is usually not quantified. Some published estimates of uncertainty due to the regression models have used the uncertainty in the prediction of individuals, ignoring uncertainty in the mean, while others have propagated uncertainty in the mean while ignoring individual variation. Using the simple case of the calcium concentration of sugar maple leaves, we compare the variation among individuals (the standard deviation) to the uncertainty in the mean (the standard error) and illustrate the declining importance in the prediction of individual concentrations as the number of individuals increases. For allometric models, the analogous statistics are the prediction interval (or the residual variation in the model fit) and the confidence interval (describing the uncertainty in the best fit model). The effect of propagating these two sources of error is illustrated using the mass of sugar maple foliage. The uncertainty in individual tree predictions was large for plots with few trees; for plots with 30 trees or more, the uncertainty in individuals was less important than the uncertainty in the mean. Authors of previously published analyses have reanalyzed their data to show the magnitude of these two sources of uncertainty in scales ranging from experimental plots to entire countries. The most correct analysis will take both sources of uncertainty into account, but for practical purposes, country-level reports of uncertainty in carbon stocks, as required by the IPCC, can ignore the uncertainty in individuals. Ignoring the uncertainty in the mean will lead to exaggerated estimates of confidence in estimates of forest biomass and carbon and nutrient contents.
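
    The contrast between uncertainty in an individual prediction and uncertainty in the mean can be reproduced with a toy allometric regression; the sketch below uses simulated data (not the sugar maple measurements) and shows the confidence interval on the fitted mean shrinking with n while the prediction interval stays wide.

        import numpy as np

        rng = np.random.default_rng(0)

        def interval_widths(n):
            """Fit ln(mass) = a + b*ln(diameter) + error and return half-widths of the
            95% confidence interval (mean response) and prediction interval (individual)
            at the average diameter."""
            x = rng.uniform(np.log(5), np.log(50), n)        # log tree diameter
            y = -2.0 + 2.4 * x + rng.normal(0, 0.3, n)       # log foliage mass + noise
            X = np.column_stack([np.ones(n), x])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            s = np.sqrt(resid @ resid / (n - 2))             # residual standard error
            x0 = np.array([1.0, x.mean()])
            leverage = x0 @ np.linalg.inv(X.T @ X) @ x0
            ci = 1.96 * s * np.sqrt(leverage)                # uncertainty in the mean
            pi = 1.96 * s * np.sqrt(1.0 + leverage)          # uncertainty for an individual
            return ci, pi

        for n in (10, 30, 100, 1000):
            ci, pi = interval_widths(n)
            print(f"n={n:5d}:  CI half-width={ci:.3f}   PI half-width={pi:.3f}")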

  11. Laboratory errors and patient safety.

    Science.gov (United States)

    Miligy, Dawlat A

    2015-01-01

    Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as specific strategies, are required to minimize these errors and improve patient safety. The purpose of this paper is to identify some of the laboratory errors commonly encountered throughout our practice in laboratory work, their hazards for patient health care, and some measures and recommendations to minimize or eliminate these errors. The laboratory errors encountered during May 2008 were recorded and statistically evaluated (using simple percent distribution) in the laboratory department of one of the private hospitals in Egypt. Errors were classified according to the laboratory phases and according to their implication for patient health. Data obtained from 1,600 testing procedures revealed that the total number of encountered errors was 14 (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of the testing cycle (representing 35.7 and 50 percent, respectively, of total errors), while the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors had no significant implication for patients' health, being detected before test reports had been submitted to the patients. On the other hand, the number of test errors that had already been submitted to patients and reached the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were consistent with those published from the USA and other countries. This shows that laboratory problems are universal and need general standardization and benchmarking measures. Original being the first data published from Arabic countries that

  12. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
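
    The two schemes can be contrasted on a toy observable. In this hedged sketch, each of five systematic parameters shifts a predicted count linearly, a single MC run carries its own statistical noise, and the total systematic variance is estimated either one parameter at a time (unisim) or with all parameters drawn at random in every run (multisim); everything here is illustrative, not the paper's formulas.

        import numpy as np

        rng = np.random.default_rng(1)

        # Toy model: the predicted count in one bin depends linearly on 5 systematic parameters.
        slopes = np.array([3.0, -2.0, 1.5, 0.5, -1.0])   # effect of a 1-sigma shift of each parameter
        mc_stat_sigma = 2.0                              # statistical noise of a single MC run

        def mc_run(params):
            """One Monte Carlo prediction with the given systematic shifts (in sigma units)."""
            return 100.0 + slopes @ params + rng.normal(0.0, mc_stat_sigma)

        nominal = mc_run(np.zeros(5))

        # Unisim: vary one parameter by +1 sigma per MC run, add the shifts in quadrature.
        unisim_var = sum((mc_run(np.eye(5)[i]) - nominal) ** 2 for i in range(5))

        # Multisim: every run draws all parameters from their (unit normal) distributions.
        n_multi = 1000
        multi = np.array([mc_run(rng.normal(size=5)) for _ in range(n_multi)])
        multisim_var = multi.var(ddof=1)

        true_var = (slopes ** 2).sum()
        print(f"true systematic variance : {true_var:.1f}")
        print(f"unisim estimate          : {unisim_var:.1f}")
        print(f"multisim estimate        : {multisim_var:.1f}")

    Both estimates are contaminated by the MC statistical noise, which is exactly the effect whose variance the note quantifies.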

  13. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  14. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For simple models presented here the multisim model was far better if the statistical error in the MC samples was larger than an individual systematic error, while for the reverse case, the unisim model was better. Exact formulas and formulas for the simple toy models are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  15. Otimização do processo de soldagem FCAW usando o erro quadrático médio multivariado FCAW welding process optimization using the multivariate mean square error

    Directory of Open Access Journals (Sweden)

    Emerson José de Paiva

    2010-03-01

    Full Text Available Finding an optimal set of parameters for a welding process is not a trivial task, given the multiple required or desirable characteristics that must be analyzed. Moreover, neglecting the variance-covariance structure of these characteristics during optimization may lead to inadequate optima. To help in the search for these parameters, a multiobjective optimization method, developed for the study of the FCAW (Flux Cored Arc Welding) process using tubular wires and based on the concept of the Multivariate Mean Square Error, is presented. It is a combined approach based on Response Surface Methodology, Design of Experiments and Principal Component Analysis, which attempts to locate values close to specified targets for each of the characteristics studied (penetration, deposition rate, yield, convexity index and dilution), with the process variables expressed as functions of voltage (V), wire feed speed (Va) and contact tip-to-workpiece distance (d). The results obtained indicate that this proposal is well suited to the problem.

  16. Errors in Neonatology

    Directory of Open Access Journals (Sweden)

    Antonio Boldrini

    2013-06-01

    Full Text Available Introduction: Danger and errors are inherent in human activities. In medical practice errors can lead to adverse events for patients, and the mass media echo the whole scenario. Methods: We reviewed recently published papers in the PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in the Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main error domains are: medication and total parenteral nutrition, resuscitation and respiratory care, invasive procedures, nosocomial infections, patient identification, and diagnostics. Risk factors include patients’ size, prematurity, vulnerability and underlying disease conditions, but also multidisciplinary teams, working conditions that promote fatigue, and the large variety of treatment and investigative modalities needed. Discussion and Conclusions: In our opinion, it is hardly possible to change human beings, but it is possible to change the conditions under which they work. Voluntary error-reporting systems can help in preventing adverse events. Education and re-training by means of simulation can be an effective strategy too. In Pisa (Italy), Nina (ceNtro di FormazIone e SimulazioNe NeonAtale) is a simulation center that offers the possibility of continuous retraining in technical and non-technical skills to optimize neonatological care strategies. Furthermore, we have been working on a novel skill trainer for mechanical ventilation (MEchatronic REspiratory System SImulator for Neonatal Applications, MERESSINA). Finally, in our opinion national health policy indirectly influences the risk for errors. Proceedings of the 9th International Workshop on Neonatology · Cagliari (Italy · October 23rd-26th, 2013 · Learned lessons, changing practice and cutting-edge research

  17. Errors in clinical laboratories or errors in laboratory medicine?

    Science.gov (United States)

    Plebani, Mario

    2006-01-01

    Laboratory testing is a highly complex process and, although laboratory services are relatively safe, they are not as safe as they could or should be. Clinical laboratories have long focused their attention on quality control methods and quality assessment programs dealing with analytical aspects of testing. However, a growing body of evidence accumulated in recent decades demonstrates that quality in clinical laboratories cannot be assured by merely focusing on purely analytical aspects. The more recent surveys on errors in laboratory medicine conclude that in the delivery of laboratory testing, mistakes occur more frequently before (pre-analytical) and after (post-analytical) the test has been performed. Most errors are due to pre-analytical factors (46-68.2% of total errors), while a high error rate (18.5-47% of total errors) has also been found in the post-analytical phase. Errors due to analytical problems have been significantly reduced over time, but there is evidence that, particularly for immunoassays, interference may have a serious impact on patients. A description of the most frequent and risky pre-, intra- and post-analytical errors and advice on practical steps for measuring and reducing the risk of errors is therefore given in the present paper. Many mistakes in the Total Testing Process are called "laboratory errors", although these may be due to poor communication, action taken by others involved in the testing process (e.g., physicians, nurses and phlebotomists), or poorly designed processes, all of which are beyond the laboratory's control. Likewise, there is evidence that laboratory information is only partially utilized. A recent document from the International Organization for Standardization (ISO) recommends a new, broader definition of the term "laboratory error" and a classification of errors according to different criteria. In a modern approach to total quality, centered on patients' needs and satisfaction, the risk of errors and mistakes

  18. Sources of variability and systematic error in mouse timing behavior.

    Science.gov (United States)

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  19. Crystalline lens power and refractive error.

    Science.gov (United States)

    Iribarren, Rafael; Morgan, Ian G; Nangia, Vinay; Jonas, Jost B

    2012-02-01

    To study the relationships between the refractive power of the crystalline lens, overall refractive error of the eye, and degree of nuclear cataract. All phakic participants of the population-based Central India Eye and Medical Study with an age of 50+ years were included. Calculation of the refractive lens power was based on distance noncycloplegic refractive error, corneal refractive power, anterior chamber depth, lens thickness, and axial length according to Bennett's formula. The study included 1885 subjects. Mean refractive lens power was 25.5 ± 3.0 D (range, 13.9-36.6). After adjustment for age and sex, the standardized correlation coefficients (β) of the association with the ocular refractive error were highest for crystalline lens power (β = -0.41; P lens opacity grade (β = -0.42; P lens power (β = -0.95), lower corneal refractive power (β = -0.76), higher lens thickness (β = 0.30), deeper anterior chamber (β = 0.28), and less marked nuclear lens opacity (β = -0.05). Lens thickness was significantly lower in eyes with greater nuclear opacity. Variations in refractive error in adults aged 50+ years were mostly influenced by variations in axial length and in crystalline lens refractive power, followed by variations in corneal refractive power, and, to a minor degree, by variations in lens thickness and anterior chamber depth.

  20. Correction of refractive errors

    Directory of Open Access Journals (Sweden)

    Vladimir Pfeifer

    2005-10-01

    Full Text Available Background: Spectacles and contact lenses are the most frequently used, the safest and the cheapest way to correct refractive errors. The development of keratorefractive surgery has brought new opportunities for the correction of refractive errors in patients who need to be less dependent on spectacles or contact lenses. Until recently, RK was the most commonly performed refractive procedure for nearsighted patients. Conclusions: The introduction of the excimer laser in refractive surgery has given new opportunities for remodelling the cornea. The laser energy can be delivered on the stromal surface, as in PRK, or deeper in the corneal stroma by means of lamellar surgery. In LASIK the flap is created with a microkeratome, in LASEK with ethanol, and in epi-LASIK the ultra-thin flap is created mechanically.

  1. An Empirical State Error Covariance Matrix Orbit Determination Example

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless as to the source of the uncertainty and whether the source is anticipated or not. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance

  2. Practical application of the theory of errors in measurement

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the practical application of the theory of errors in measurement. The topics of the chapter include fixing on a maximum desired error, selecting a maximum error, the procedure for limiting the error, utilizing a standard procedure, setting specifications for a standard procedure, and selecting the number of measurements to be made

  3. Modeling coherent errors in quantum error correction

    Science.gov (United States)

    Greenbaum, Daniel; Dutton, Zachary

    2018-01-01

    Analysis of quantum error correcting codes is typically done using a stochastic, Pauli channel error model for describing the noise on physical qubits. However, it was recently found that coherent errors (systematic rotations) on physical data qubits result in both physical and logical error rates that differ significantly from those predicted by a Pauli model. Here we examine the accuracy of the Pauli approximation for noise containing coherent errors (characterized by a rotation angle ε) under the repetition code. We derive an analytic expression for the logical error channel as a function of arbitrary code distance d and concatenation level n, in the small error limit. We find that coherent physical errors result in logical errors that are partially coherent and therefore non-Pauli. However, the coherent part of the logical error is negligible at fewer than ε^−(dn−1) error correction cycles when the decoder is optimized for independent Pauli errors, thus providing a regime of validity for the Pauli approximation. Above this number of correction cycles, the persistent coherent logical error will cause logical failure more quickly than the Pauli model would predict, and this may need to be combated with coherent suppression methods at the physical level or larger codes.
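
    The qualitative difference between coherent and stochastic (Pauli) errors is already visible on a single unencoded qubit: n coherent over-rotations by ε add in amplitude, so the error grows roughly as sin²(nε/2), whereas the Pauli-twirled version of the same channel accumulates only linearly in n. The sketch below compares the two growth laws; it is not the paper's repetition-code calculation.

        import numpy as np

        eps = 0.02                        # small systematic over-rotation angle (radians)
        p_pauli = np.sin(eps / 2) ** 2    # X-error probability of the Pauli-twirled channel

        def coherent_error(n):
            """Probability of ending in |1> after n coherent X-rotations of angle eps on |0>."""
            Rx = np.array([[np.cos(eps / 2), -1j * np.sin(eps / 2)],
                           [-1j * np.sin(eps / 2), np.cos(eps / 2)]])
            psi = np.linalg.matrix_power(Rx, n) @ np.array([1.0, 0.0])
            return abs(psi[1]) ** 2

        def pauli_error(n):
            """Same quantity if each step is an independent X flip with probability p_pauli."""
            return 0.5 * (1 - (1 - 2 * p_pauli) ** n)

        for n in (1, 10, 100, 1000):
            print(f"n={n:5d}:  coherent={coherent_error(n):.4f}   Pauli model={pauli_error(n):.4f}")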

  4. Error estimation of deformable image registration of pulmonary CT scans using convolutional neural networks.

    Science.gov (United States)

    Eppenhof, Koen A J; Pluim, Josien P W

    2018-04-01

    Error estimation in nonlinear medical image registration is a nontrivial problem that is important for validation of registration methods. We propose a supervised method for estimation of registration errors in nonlinear registration of three-dimensional (3-D) images. The method is based on a 3-D convolutional neural network that learns to estimate registration errors from a pair of image patches. By applying the network to patches centered around every voxel, we construct registration error maps. The network is trained using a set of representative images that have been synthetically transformed to construct a set of image pairs with known deformations. The method is evaluated on deformable registrations of inhale-exhale pairs of thoracic CT scans. Using ground truth target registration errors on manually annotated landmarks, we evaluate the method's ability to estimate local registration errors. Estimation of full domain error maps is evaluated using a gold standard approach. The two evaluation approaches show that we can train the network to robustly estimate registration errors in a predetermined range, with subvoxel accuracy. We achieved a root-mean-square deviation of 0.51 mm from gold standard registration errors and of 0.66 mm from ground truth landmark registration errors.

  5. An Empirical State Error Covariance Matrix for Batch State Estimation

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it directly follows as to how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the
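
    One hedged reading of the construction described above, reduced to an ordinary weighted least-squares fit: the theoretical covariance (HᵀWH)⁻¹ is rescaled by the average weighted residual variance, so unmodeled error sources inflate the reported covariance. Variable names and noise levels are illustrative.

        import numpy as np

        rng = np.random.default_rng(2)

        # Toy batch estimation: observations y = H x + noise, with the noise actually
        # larger than the assumed measurement sigma (an unmodeled error source).
        n_obs, n_state = 200, 3
        H = rng.normal(size=(n_obs, n_state))
        x_true = np.array([1.0, -2.0, 0.5])
        sigma_assumed, sigma_actual = 1.0, 3.0
        y = H @ x_true + rng.normal(0.0, sigma_actual, n_obs)

        W = np.eye(n_obs) / sigma_assumed**2          # weights from the assumed noise model
        P_theory = np.linalg.inv(H.T @ W @ H)         # traditional state error covariance
        x_hat = P_theory @ H.T @ W @ y                # weighted least-squares estimate

        # Empirical covariance: scale by the average weighted residual variance.
        r = y - H @ x_hat
        scale = (r @ W @ r) / (n_obs - n_state)       # ~1 only if the noise model were right
        P_empirical = scale * P_theory

        print("theoretical  sigma(x0)   :", np.sqrt(P_theory[0, 0]))
        print("empirical    sigma(x0)   :", np.sqrt(P_empirical[0, 0]))
        print("actual error |x0 - true| :", abs(x_hat[0] - x_true[0]))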

  6. Measuring worst-case errors in a robot workcell

    International Nuclear Information System (INIS)

    Simon, R.W.; Brost, R.C.; Kholwadwala, D.K.

    1997-10-01

    Errors in model parameters, sensing, and control are inevitably present in real robot systems. These errors must be considered in order to automatically plan robust solutions to many manipulation tasks. Lozano-Perez, Mason, and Taylor proposed a formal method for synthesizing robust actions in the presence of uncertainty; this method has been extended by several subsequent researchers. All of these results presume the existence of worst-case error bounds that describe the maximum possible deviation between the robot's model of the world and reality. This paper examines the problem of measuring these error bounds for a real robot workcell. These measurements are difficult, because of the desire to completely contain all possible deviations while avoiding bounds that are overly conservative. The authors present a detailed description of a series of experiments that characterize and quantify the possible errors in visual sensing and motion control for a robot workcell equipped with standard industrial robot hardware. In addition to providing a means for measuring these specific errors, these experiments shed light on the general problem of measuring worst-case errors

  7. Intervention strategies for the management of human error

    Science.gov (United States)

    Wiener, Earl L.

    1993-01-01

    This report examines the management of human error in the cockpit. The principles probably apply as well to other applications in the aviation realm (e.g. air traffic control, dispatch, weather, etc.) as well as other high-risk systems outside of aviation (e.g. shipping, high-technology medical procedures, military operations, nuclear power production). Management of human error is distinguished from error prevention. It is a more encompassing term, which includes not only the prevention of error, but also a means of disallowing an error, once made, from adversely affecting system output. Such techniques include: traditional human factors engineering, improvement of feedback and feedforward of information from system to crew, 'error-evident' displays which make erroneous input more obvious to the crew, trapping of errors within a system, goal-sharing between humans and machines (also called 'intent-driven' systems), paperwork management, and behaviorally based approaches, including procedures, standardization, checklist design, training, cockpit resource management, etc. Fifteen guidelines for the design and implementation of intervention strategies are included.

  8. Approximation errors during variance propagation

    International Nuclear Information System (INIS)

    Dinsmore, Stephen

    1986-01-01

    Risk and reliability analyses are often performed by constructing and quantifying large fault trees. The inputs to these models are component failure events whose probabilities of occurring are best represented as random variables. This paper examines the errors inherent in two approximation techniques used to calculate the top event's variance from the inputs' variances. Two sample fault trees are evaluated and several three-dimensional plots illustrating the magnitude of the error over a wide range of input means and variances are given.
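
    The kind of approximation being assessed can be reproduced on a two-input toy: for a top event given by an AND gate, P_top = p1·p2, a first-order (delta-method) variance is compared against a Monte Carlo estimate; the input distributions and moments below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        # Basic-event probabilities modeled as independent lognormal random variables.
        mu1, sd1 = 1e-3, 5e-4
        mu2, sd2 = 2e-3, 1e-3

        def lognormal(mean, sd, size):
            """Lognormal samples with the requested arithmetic mean and standard deviation."""
            s2 = np.log(1 + (sd / mean) ** 2)
            return rng.lognormal(np.log(mean) - 0.5 * s2, np.sqrt(s2), size)

        # First-order propagation for the AND gate P_top = p1 * p2:
        # Var ~ (dP/dp1)^2 Var(p1) + (dP/dp2)^2 Var(p2) = mu2^2 sd1^2 + mu1^2 sd2^2.
        var_first_order = mu2**2 * sd1**2 + mu1**2 * sd2**2

        # The exact variance of a product of independent inputs is larger; check by Monte Carlo.
        p_top = lognormal(mu1, sd1, 200_000) * lognormal(mu2, sd2, 200_000)
        var_mc = p_top.var(ddof=1)

        print(f"first-order variance : {var_first_order:.3e}")
        print(f"Monte Carlo variance : {var_mc:.3e}")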

  9. The Role of Standardized Tests as a Means of Assessment of Young Children: A Review of Related Literature and Recommendations of Alternative Assessments for Administrators and Teachers.

    Science.gov (United States)

    Meadows, Stacie; Karr-Kidwell, P. J.

    An extensive review of literature related to the role of standardized tests in the assessment of young children was conducted, and recommendations were made for alternative approaches more appropriate to the assessment of young children. The first section of the paper contains a literature review that provides a brief history of standardized tests…

  10. How Do Simulated Error Experiences Impact Attitudes Related to Error Prevention?

    Science.gov (United States)

    Breitkreuz, Karen R; Dougal, Renae L; Wright, Melanie C

    2016-10-01

    The objective of this project was to determine whether simulated exposure to error situations changes attitudes in a way that may have a positive impact on error prevention behaviors. Using a stratified quasi-randomized experiment design, we compared risk perception attitudes of a control group of nursing students who received standard error education (reviewed medication error content and watched movies about error experiences) to an experimental group of students who reviewed medication error content and participated in simulated error experiences. Dependent measures included perceived memorability of the educational experience, perceived frequency of errors, and perceived caution with respect to preventing errors. Experienced nursing students perceived the simulated error experiences to be more memorable than movies. Less experienced students perceived both simulated error experiences and movies to be highly memorable. After the intervention, compared with movie participants, simulation participants believed errors occurred more frequently. Both types of education increased the participants' intentions to be more cautious and reported caution remained higher than baseline for medication errors 6 months after the intervention. This study provides limited evidence of an advantage of simulation over watching movies describing actual errors with respect to manipulating attitudes related to error prevention. Both interventions resulted in long-term impacts on perceived caution in medication administration. Simulated error experiences made participants more aware of how easily errors can occur, and the movie education made participants more aware of the devastating consequences of errors.

  11. Robust topology optimization accounting for spatially varying manufacturing errors

    DEFF Research Database (Denmark)

    Schevenels, M.; Lazarov, Boyan Stefanov; Sigmund, Ole

    2011-01-01

    This paper presents a robust approach for the design of macro-, micro-, or nano-structures by means of topology optimization, accounting for spatially varying manufacturing errors. The focus is on structures produced by milling or etching; in this case over- or under-etching may cause parts...... optimization problem is formulated in a probabilistic way: the objective function is defined as a weighted sum of the mean value and the standard deviation of the structural performance. The optimization problem is solved by means of a Monte Carlo method: in each iteration of the optimization scheme, a Monte...
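
    A hedged sketch of the probabilistic objective described above, reduced to a single scalar design variable: each Monte Carlo realization of the manufacturing error (a normally distributed over/under-etch here) is evaluated with a placeholder performance function, and the objective is a weighted sum of the sample mean and standard deviation. Nothing below is the paper's finite-element model.

        import numpy as np

        rng = np.random.default_rng(4)

        def performance(design, etch_error):
            """Placeholder structural performance: compliance-like penalty that worsens
            when the manufactured feature size (design + etch_error) drifts from 1.0."""
            manufactured = design + etch_error
            return (manufactured - 1.0) ** 2 + 0.1 / max(manufactured, 1e-3)

        def robust_objective(design, n_mc=500, weight=1.0, etch_sd=0.05):
            samples = np.array([performance(design, e)
                                for e in rng.normal(0.0, etch_sd, n_mc)])
            return samples.mean() + weight * samples.std(ddof=1)

        # Crude search over the design variable (a real implementation would run a
        # Monte Carlo evaluation inside each iteration of a gradient-based scheme).
        designs = np.linspace(0.8, 1.2, 41)
        best = min(designs, key=robust_objective)
        print(f"robust optimum near design = {best:.3f}, "
              f"objective = {robust_objective(best):.4f}")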

  12. Errors due to random noise in velocity measurement using incoherent-scatter radar

    Directory of Open Access Journals (Sweden)

    P. J. S. Williams

    1996-12-01

    Full Text Available The random-noise errors involved in measuring the Doppler shift of an 'incoherent-scatter' spectrum are predicted theoretically for all values of Te/Ti from 1.0 to 3.0. After correction has been made for the effects of convolution during transmission and reception and the additional errors introduced by subtracting the average of the background gates, the rms errors can be expressed by a simple semi-empirical formula. The observed errors are determined from a comparison of simultaneous EISCAT measurements using an identical pulse code on several adjacent frequencies. The plot of observed versus predicted error has a slope of 0.991 and a correlation coefficient of 99.3%. The prediction also agrees well with the mean of the error distribution reported by the standard EISCAT analysis programme.

  13. Evaluating the prevalence and impact of examiner errors on the Wechsler scales of intelligence: A meta-analysis.

    Science.gov (United States)

    Styck, Kara M; Walsh, Shana M

    2016-01-01

    The purpose of the present investigation was to conduct a meta-analysis of the literature on examiner errors for the Wechsler scales of intelligence. Results indicate that a mean of 99.7% of protocols contained at least 1 examiner error when studies that included a failure to record examinee responses as an error were combined and a mean of 41.2% of protocols contained at least 1 examiner error when studies that ignored errors of omission were combined. Furthermore, graduate student examiners were significantly more likely to make at least 1 error on Wechsler intelligence test protocols than psychologists. However, psychologists made significantly more errors per protocol than graduate student examiners regardless of the inclusion or exclusion of failure to record examinee responses as errors. On average, 73.1% of Full-Scale IQ (FSIQ) scores changed as a result of examiner errors, whereas 15.8%-77.3% of scores on the Verbal Comprehension Index (VCI), Perceptual Reasoning Index (PRI), Working Memory Index (WMI), and Processing Speed Index changed as a result of examiner errors. In addition, results suggest that examiners tend to overestimate FSIQ scores and underestimate VCI scores. However, no strong pattern emerged for the PRI and WMI. It can be concluded that examiner errors occur frequently and impact index and FSIQ scores. Consequently, current estimates for the standard error of measurement of popular IQ tests may not adequately capture the variance due to the examiner.

  14. Missing data and the accuracy of magnetic-observatory hour means

    Directory of Open Access Journals (Sweden)

    J. J. Love

    2009-09-01

    Full Text Available Analysis is made of the accuracy of magnetic-observatory hourly means constructed from definitive minute data having missing values (gaps). Bootstrap sampling from different data-gap distributions is used to estimate average errors on hourly means as a function of the number of missing data. Absolute and relative error results are calculated for horizontal-intensity, declination, and vertical-component data collected at high, medium, and low magnetic latitudes. For 90% complete coverage (10% missing data), average (RMS) absolute errors on hourly means are generally less than errors permitted by Intermagnet for minute data. As a rule of thumb, the average relative error for hourly means with 10% missing minute data is approximately equal to 10% of the hourly standard deviation of the source minute data.
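
    The bootstrap idea in the analysis above can be sketched as follows: starting from a complete hour of one-minute values, a chosen number of minutes is removed at random, the hourly mean is recomputed, and the RMS deviation from the complete-data mean estimates the error due to missing data. Synthetic minute values stand in for real observatory data, so the printed percentages are only indicative.

        import numpy as np

        rng = np.random.default_rng(5)

        def gap_error(minute_values, n_missing, n_boot=5000):
            """RMS error of the hourly mean when n_missing of the 60 minute values are lost."""
            full_mean = minute_values.mean()
            errors = np.empty(n_boot)
            for b in range(n_boot):
                keep = rng.choice(60, size=60 - n_missing, replace=False)
                errors[b] = minute_values[keep].mean() - full_mean
            return np.sqrt(np.mean(errors ** 2))

        # Synthetic one-minute horizontal-intensity values for one hour (nT), with some
        # autocorrelated variation around a baseline.
        minutes = 20000 + np.cumsum(rng.normal(0, 0.5, 60))
        sd_minutes = minutes.std(ddof=1)

        for n_missing in (3, 6, 12):
            rms = gap_error(minutes, n_missing)
            print(f"{n_missing:2d} missing minutes: RMS error of hourly mean = {rms:.2f} nT "
                  f"({100 * rms / sd_minutes:.0f}% of the hourly standard deviation)")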

  15. Measurement error models with uncertainty about the error variance

    NARCIS (Netherlands)

    Oberski, D.L.; Satorra, A.

    2013-01-01

    It is well known that measurement error in observable variables induces bias in estimates in standard regression analysis and that structural equation models are a typical solution to this problem. Often, multiple indicator equations are subsumed as part of the structural equation model, allowing

  16. Learning from prescribing errors

    OpenAIRE

    Dean, B

    2002-01-01

    

 The importance of learning from medical error has recently received increasing emphasis. This paper focuses on prescribing errors and argues that, while learning from prescribing errors is a laudable goal, there are currently barriers that can prevent this occurring. Learning from errors can take place on an individual level, at a team level, and across an organisation. Barriers to learning from prescribing errors include the non-discovery of many prescribing errors, lack of feedback to th...

  17. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    Science.gov (United States)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a
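
    The selection-and-spread step of the procedure can be expressed compactly; in the sketch below (invented numbers, not GPCP data), products whose zonal-mean precipitation lies within ±50% of the base estimate are retained and the standard deviation of the retained estimates is reported as the bias error.

        import numpy as np

        # Zonal-mean precipitation (mm/day) for one latitude band over the ocean:
        # a base (GPCP-like) estimate plus several alternative products. Values are invented.
        gpcp = 4.0
        products = {"A": 4.6, "B": 3.5, "C": 5.1, "D": 7.2, "E": 4.2}

        # Keep the base value and every product within +/-50% of it.
        included = [gpcp] + [v for v in products.values() if 0.5 * gpcp <= v <= 1.5 * gpcp]

        bias_error = np.std(included, ddof=1)       # spread of the accepted estimates
        mean_included = np.mean(included)

        print(f"included estimates      : {included}")
        print(f"estimated bias error s  : {bias_error:.2f} mm/day")
        print(f"relative bias error s/m : {100 * bias_error / mean_included:.0f}%")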

  18. Quantile Regression With Measurement Error

    KAUST Repository

    Wei, Ying

    2009-08-27

    Regression quantiles can be substantially biased when the covariates are measured with error. In this paper we propose a new method that produces consistent linear quantile estimation in the presence of covariate measurement error. The method corrects the measurement error induced bias by constructing joint estimating equations that simultaneously hold for all the quantile levels. An iterative EM-type estimation algorithm to obtain the solutions to such joint estimation equations is provided. The finite sample performance of the proposed method is investigated in a simulation study, and compared to the standard regression calibration approach. Finally, we apply our methodology to part of the National Collaborative Perinatal Project growth data, a longitudinal study with an unusual measurement error structure.

  19. Parts of the Whole: Error Estimation for Science Students

    Directory of Open Access Journals (Sweden)

    Dorothy Wallace

    2017-01-01

    Full Text Available It is important for science students to understand not only how to estimate error sizes in measurement data, but also to see how these errors contribute to errors in conclusions they may make about the data. Relatively small errors in measurement, errors in assumptions, and roundoff errors in computation may result in large error bounds on computed quantities of interest. In this column, we look closely at a standard method for measuring the volume of cancer tumor xenografts to see how small errors in each of these three factors may contribute to relatively large observed errors in recorded tumor volumes.
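
    A worked example of the kind of propagation the column discusses, using a commonly quoted caliper formula for xenograft volume, V ≈ L·W²/2 (an assumption here, since the column's exact formula is not given): because the width enters squared, its relative error counts double in the relative error of the volume.

        import math

        # Caliper measurements (mm) with assumed absolute measurement errors.
        L, dL = 12.0, 0.5     # tumor length and its uncertainty
        W, dW = 8.0, 0.5      # tumor width and its uncertainty

        V = L * W**2 / 2.0    # common ellipsoid-like approximation for xenograft volume (mm^3)

        # First-order relative error: dV/V = dL/L + 2*dW/W (worst case, errors added linearly),
        # or added in quadrature if the errors are independent and random.
        rel_linear = dL / L + 2 * dW / W
        rel_quad = math.hypot(dL / L, 2 * dW / W)

        print(f"V = {V:.0f} mm^3")
        print(f"relative error (linear)    : {100 * rel_linear:.1f}%")
        print(f"relative error (quadrature): {100 * rel_quad:.1f}%")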

  20. Neck Flexor and Extensor Muscle Endurance in Subclinical Neck Pain: Intrarater Reliability, Standard Error of Measurement, Minimal Detectable Change, and Comparison With Asymptomatic Participants in a University Student Population.

    Science.gov (United States)

    Lourenço, Ana S; Lameiras, Carina; Silva, Anabela G

    2016-01-01

    The aims of this study were to assess intrarater reliability and to calculate the standard error of measurement (SEM) and minimal detectable change (MDC) for deep neck flexor and neck extensor muscle endurance tests, and compare the results between individuals with and without subclinical neck pain. Participants were students of the University of Aveiro reporting subclinical neck pain and asymptomatic participants matched for sex and age to the neck pain group. Data on endurance capacity of the deep neck flexors and neck extensors were collected by a blinded assessor using the deep neck flexor endurance test and the extensor endurance test, respectively. Intraclass correlation coefficients (ICCs), SEM, and MDC were calculated for measurements taken within a session by the same assessor. Differences between groups for endurance capacity were investigated using a Mann-Whitney U test. The deep neck flexor endurance test (ICC = 0.71; SEM = 6.91 seconds; MDC = 19.15 seconds) and neck extensor endurance test (ICC = 0.73; SEM = 0.84 minutes; MDC = 2.34 minutes) are reliable. No significant differences were found between participants with and without neck pain for both tests of muscle endurance (P > .05). The endurance capacity of the deep neck flexors and neck extensors can be reliably measured in participants with subclinical neck pain. However, the wide SEM and MDC might limit the sensitivity of these tests.
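
    The SEM and MDC values reported above are consistent with the usual reliability formulas, SEM = SD·√(1 − ICC) and MDC95 = 1.96·√2·SEM; the same relation links the extensor values (2.77 × 0.84 min ≈ 2.34 min). The sketch below reproduces the flexor numbers, with the between-subject SD treated as a placeholder input that in practice comes from the sample.

        import math

        def sem_from_icc(sd, icc):
            """Standard error of measurement from test-retest reliability."""
            return sd * math.sqrt(1.0 - icc)

        def mdc95(sem):
            """Minimal detectable change at the 95% confidence level."""
            return 1.96 * math.sqrt(2.0) * sem

        # Example: deep neck flexor endurance test, assuming a between-subject SD of ~12.8 s
        # (chosen so that ICC = 0.71 reproduces the reported SEM of about 6.9 s).
        sd, icc = 12.8, 0.71
        sem = sem_from_icc(sd, icc)
        print(f"SEM   = {sem:.2f} s")         # ~6.9 s
        print(f"MDC95 = {mdc95(sem):.2f} s")  # ~19.1 s, matching the reported value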

  1. Comparison between calorimeter and HLNC errors

    International Nuclear Information System (INIS)

    Goldman, A.S.; De Ridder, P.; Laszlo, G.

    1991-01-01

    This paper summarizes an error analysis that compares systematic and random errors of total plutonium mass estimated for high-level neutron coincidence counter (HLNC) and calorimeter measurements. This task was part of an International Atomic Energy Agency (IAEA) study on the comparison of the two instruments to determine if HLNC measurement errors met IAEA standards and if the calorimeter gave "significantly" better precision. Our analysis was based on propagation of error models that contained all known sources of errors including uncertainties associated with plutonium isotopic measurements. 5 refs., 2 tabs

  2. Stochastic and sensitivity analysis of shape error of inflatable antenna reflectors

    Science.gov (United States)

    San, Bingbing; Yang, Qingshan; Yin, Liwei

    2017-03-01

    Inflatable antennas are promising candidates to realize future satellite communications and space observations since they are lightweight, low-cost and small-packaged-volume. However, due to their high flexibility, inflatable reflectors are difficult to manufacture accurately, which may result in undesirable shape errors, and thus affect their performance negatively. In this paper, the stochastic characteristics of shape errors induced during the manufacturing process are investigated using Latin hypercube sampling coupled with manufacture simulations. Four main random error sources are involved, including errors in membrane thickness, errors in elastic modulus of membrane, boundary deviations and pressure variations. Using regression and correlation analysis, a global sensitivity study is conducted to rank the importance of these error sources. This global sensitivity analysis is novel in that it can take into account the random variation and the interaction between error sources. Analyses are parametrically carried out with various focal-length-to-diameter ratios (F/D) and aperture sizes (D) of reflectors to investigate their effects on significance ranking of error sources. The research reveals that RMS (Root Mean Square) of shape error is a random quantity with an exponential probability distribution and features great dispersion; with the increase of F/D and D, both mean value and standard deviation of shape errors are increased; in the proposed range, the significance ranking of error sources is independent of F/D and D; boundary deviation imposes the greatest effect with a much higher weight than the others; pressure variation ranks the second; error in thickness and elastic modulus of membrane ranks the last with very close sensitivities to pressure variation. Finally, suggestions are given for the control of the shape accuracy of reflectors and allowable values of error sources are proposed from the perspective of reliability.

  3. Analysis of translational errors in frame-based and frameless cranial radiosurgery using an anthropomorphic phantom

    Energy Technology Data Exchange (ETDEWEB)

    Almeida, Taynna Vernalha Rocha [Faculdades Pequeno Principe (FPP), Curitiba, PR (Brazil); Cordova Junior, Arno Lotar; Almeida, Cristiane Maria; Piedade, Pedro Argolo; Silva, Cintia Mara da, E-mail: taynnavra@gmail.com [Centro de Radioterapia Sao Sebastiao, Florianopolis, SC (Brazil); Brincas, Gabriela R. Baseggio [Centro de Diagnostico Medico Imagem, Florianopolis, SC (Brazil); Marins, Priscila; Soboll, Danyel Scheidegger [Universidade Tecnologica Federal do Parana (UTFPR), Curitiba, PR (Brazil)

    2016-03-15

    Objective: To evaluate three-dimensional translational setup errors and residual errors in image-guided radiosurgery, comparing frameless and frame-based techniques, using an anthropomorphic phantom. Materials and Methods: We initially used specific phantoms for the calibration and quality control of the image-guided system. For the hidden target test, we used an Alderson Radiation Therapy (ART)-210 anthropomorphic head phantom, into which we inserted four 5-mm metal balls to simulate target treatment volumes. Computed tomography images were then taken with the head phantom properly positioned for frameless and frame-based radiosurgery. Results: For the frameless technique, the mean error magnitude was 0.22 ± 0.04 mm for setup errors and 0.14 ± 0.02 mm for residual errors, the combined uncertainty being 0.28 mm and 0.16 mm, respectively. For the frame-based technique, the mean error magnitude was 0.73 ± 0.14 mm for setup errors and 0.31 ± 0.04 mm for residual errors, the combined uncertainty being 1.15 mm and 0.63 mm, respectively. Conclusion: The mean values, standard deviations, and combined uncertainties showed no evidence of significant differences between the two techniques when the head phantom ART-210 was used. (author)

  4. Explorations in Statistics: Standard Deviations and Standard Errors

    Science.gov (United States)

    Curran-Everett, Douglas

    2008-01-01

    Learning about statistics is a lot like learning about science: the learning is more meaningful if you can actively explore. This series in "Advances in Physiology Education" provides an opportunity to do just that: we will investigate basic concepts in statistics using the free software package R. Because this series uses R solely as a vehicle…

  5. Phase Error Modeling and Its Impact on Precise Orbit Determination of GRACE Satellites

    Directory of Open Access Journals (Sweden)

    Jia Tu

    2012-01-01

    Full Text Available Limiting factors for the precise orbit determination (POD) of low-earth orbit (LEO) satellites using dual-frequency GPS are nowadays mainly encountered with the in-flight phase error modeling. The phase error is modeled as a systematic and a random component, each depending on the direction of GPS signal reception. The systematic part and the standard deviation of the random part in the phase error model are, respectively, estimated by bin-wise mean and standard deviation values of phase postfit residuals computed by orbit determination. By removing the systematic component and adjusting the weight of the phase observation data according to the standard deviation of the random component, the orbit can be further improved by the POD approach. The GRACE data of 1–31 January 2006 are processed, and three types of orbit solutions, POD without phase error model correction, POD with mean value correction of the phase error model, and POD with phase error model correction, are obtained. The three-dimensional (3D) orbit improvements derived from phase error model correction are 0.0153 m for GRACE A and 0.0131 m for GRACE B, and the 3D influences arising from the random part of the phase error model are 0.0068 m and 0.0075 m for GRACE A and GRACE B, respectively. Thus the random part of the phase error model cannot be neglected for POD. It is also demonstrated by phase postfit residual analysis, orbit comparison with the JPL precise science orbit, and orbit validation with KBR data that the results derived from POD with phase error model correction are better than the other two types of orbit solutions generated in this paper.
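
    The bin-wise construction of the phase error model can be sketched with NumPy: postfit phase residuals are grouped by signal direction (here only by elevation angle for simplicity), the per-bin mean gives the systematic correction, and the per-bin standard deviation drives the reweighting of the observations. Residuals, bin sizes and names below are synthetic and illustrative.

        import numpy as np

        rng = np.random.default_rng(6)

        # Synthetic postfit phase residuals (metres) with an elevation-dependent systematic
        # part and elevation-dependent noise, mimicking in-flight phase error behaviour.
        elevation = rng.uniform(0.0, 90.0, 50_000)                    # degrees
        residual = (0.004 * np.cos(np.radians(elevation))             # systematic component
                    + rng.normal(0.0, 0.002 + 0.004 * (elevation < 20), elevation.size))

        bins = np.arange(0.0, 91.0, 10.0)                             # 10-degree elevation bins
        idx = np.digitize(elevation, bins) - 1

        systematic = np.array([residual[idx == b].mean() for b in range(len(bins) - 1)])
        random_sd = np.array([residual[idx == b].std(ddof=1) for b in range(len(bins) - 1)])

        # Correction and reweighting for a new observation at a given elevation:
        def corrected_and_weight(phase_obs, elev):
            b = min(np.digitize(elev, bins) - 1, len(bins) - 2)
            return phase_obs - systematic[b], 1.0 / random_sd[b] ** 2  # (corrected obs, weight)

        print("bin systematic part (m):", np.round(systematic, 4))
        print("bin random sigma    (m):", np.round(random_sd, 4))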

  6. Two-dimensional errors

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter addresses the extension of previous work in one-dimensional (linear) error theory to two-dimensional error analysis. The topics of the chapter include the definition of two-dimensional error, the probability ellipse, the probability circle, elliptical (circular) error evaluation, the application to position accuracy, and the use of control systems (points) in measurements

  7. Part two: Error propagation

    International Nuclear Information System (INIS)

    Picard, R.R.

    1989-01-01

    Topics covered in this chapter include a discussion of exact results as related to nuclear materials management and accounting in nuclear facilities; propagation of error for a single measured value; propagation of error for several measured values; error propagation for materials balances; and an application of error propagation to an example of uranium hexafluoride conversion process

  8. Learning from Errors

    OpenAIRE

    Martínez-Legaz, Juan Enrique; Soubeyran, Antoine

    2003-01-01

    We present a model of learning in which agents learn from errors. If an action turns out to be an error, the agent rejects not only that action but also neighboring actions. We find that, keeping memory of his errors, under mild assumptions an acceptable solution is asymptotically reached. Moreover, one can take advantage of big errors for a faster learning.

  9. Performance, postmodernity and errors

    DEFF Research Database (Denmark)

    Harder, Peter

    2013-01-01

    speaker’s competency (note the –y ending!) reflects adaptation to the community langue, including variations. This reversal of perspective also reverses our understanding of the relationship between structure and deviation. In the heyday of structuralism, it was tempting to confuse the invariant system...... with the prestige variety, and conflate non-standard variation with parole/performance and class both as erroneous. Nowadays the anti-structural sentiment of present-day linguistics makes it tempting to confuse the rejection of ideal abstract structure with a rejection of any distinction between grammatical...... as deviant from the perspective of function-based structure and discuss to what extent the recognition of a community langue as a source of adaptive pressure may throw light on different types of deviation, including language handicaps and learner errors....

  10. Generalized Gaussian Error Calculus

    CERN Document Server

    Grabe, Michael

    2010-01-01

    For the first time in 200 years Generalized Gaussian Error Calculus addresses a rigorous, complete and self-consistent revision of the Gaussian error calculus. Since experimentalists realized that measurements in general are burdened by unknown systematic errors, the classical, widely used evaluation procedures scrutinizing the consequences of random errors alone turned out to be obsolete. As a matter of course, the error calculus to-be, treating random and unknown systematic errors side by side, should ensure the consistency and traceability of physical units, physical constants and physical quantities at large. The generalized Gaussian error calculus considers unknown systematic errors to spawn biased estimators. Beyond that, random errors are asked to conform to the idea of what the author calls well-defined measuring conditions. The approach features the properties of a building kit: any overall uncertainty turns out to be the sum of a contribution due to random errors, to be taken from a confidence inter...

  11. A criticism of the paper entitled 'A practical method of estimating standard error of the age in the Fission Track Dating method' by Johnson, McGee and Naeser

    International Nuclear Information System (INIS)

    Green, P.F.

    1981-01-01

    It is stated that the common use of Poissonian errors to assign uncertainties in fission-track dating studies has led Johnson, McGee and Naeser (1979) to the mistaken assumption that such errors could be used to measure the spatial variation of track densities. The analysis proposed by JMN 79, employing this assumption, therefore leads to erroneous assessment of the error in an age determination. The basis for the statement is discussed. (U.K.)
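
    The point of the criticism can be illustrated with a small simulation: when the track density itself varies spatially, the observed scatter of counts exceeds the purely Poissonian standard error. The gamma-mixed Poisson model and all numbers below are assumptions made only for illustration and are not taken from the paper.

```python
# Overdispersed track counts show more scatter than the Poisson error predicts.
import numpy as np

rng = np.random.default_rng(0)
mean_tracks = 100.0
# Spatially varying track density: Poisson counts whose mean itself varies.
true_means = rng.gamma(shape=10.0, scale=mean_tracks / 10.0, size=5000)
counts = rng.poisson(true_means)

poisson_se = np.sqrt(counts.mean())     # what a pure Poisson model would assign
empirical_sd = counts.std(ddof=1)       # actual spatial variation of the counts
print(f"Poisson SE   ~ {poisson_se:.1f}")
print(f"Empirical SD ~ {empirical_sd:.1f}  (larger -> Poisson errors are too small)")
```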

  12. Medication errors: prescribing faults and prescription errors.

    Science.gov (United States)

    Velo, Giampaolo P; Minuz, Pietro

    2009-06-01

    1. Medication errors are common in general practice and in hospitals. Both errors in the act of writing (prescription errors) and prescribing faults due to erroneous medical decisions can result in harm to patients. 2. Any step in the prescribing process can generate errors. Slips, lapses, or mistakes are sources of errors, as in unintended omissions in the transcription of drugs. Faults in dose selection, omitted transcription, and poor handwriting are common. 3. Inadequate knowledge or competence and incomplete information about clinical characteristics and previous treatment of individual patients can result in prescribing faults, including the use of potentially inappropriate medications. 4. An unsafe working environment, complex or undefined procedures, and inadequate communication among health-care personnel, particularly between doctors and nurses, have been identified as important underlying factors that contribute to prescription errors and prescribing faults. 5. Active interventions aimed at reducing prescription errors and prescribing faults are strongly recommended. These should be focused on the education and training of prescribers and the use of on-line aids. The complexity of the prescribing procedure should be reduced by introducing automated systems or uniform prescribing charts, in order to avoid transcription and omission errors. Feedback control systems and immediate review of prescriptions, which can be performed with the assistance of a hospital pharmacist, are also helpful. Audits should be performed periodically.

  13. Analysis of errors in forensic science

    Directory of Open Access Journals (Sweden)

    Mingxiao Du

    2017-01-01

    Reliability of expert testimony is one of the foundations of judicial justice. Both expert bias and scientific errors affect the reliability of expert opinion, which in turn affects the trustworthiness of the findings of fact in legal proceedings. Expert bias can be eliminated by replacing experts; however, it may be more difficult to eliminate scientific errors. From the perspective of statistics, errors in the operation of forensic science include systematic errors, random errors, and gross errors. In general, process repetition and abiding by the standard ISO/IEC 17025:2005, general requirements for the competence of testing and calibration laboratories, during operation are common measures used to reduce errors that originate from experts and equipment, respectively. For example, to reduce gross errors, the laboratory can ensure that a test is repeated several times by different experts. In applying forensic principles and methods, the Federal Rules of Evidence 702 mandate that judges consider factors such as peer review, to ensure the reliability of the expert testimony. As the scientific principles and methods may not undergo professional review by specialists in a certain field, peer review serves as an exclusive standard. This study also examines two types of statistical errors. As false-positive errors involve a higher possibility of unfair decision-making, they should receive more attention than false-negative errors.

  14. Calculations of standard-Higgs-boson production cross sections in e+e- collisions by means of a reasonable set of parameters

    International Nuclear Information System (INIS)

    Biyajima, M.; Shirane, K.; Terazawa, O.

    1987-01-01

    We calculate cross sections for production of the standard Higgs boson in e+e- collisions and compare our results with those of several authors. It is found that there are appreciable differences among them which can be attributed to the coupling constants used, α(0) (= 1/137) and G_F. We also observe that cross sections depend on the magnitudes of the total width of the Z particle. The use of a reasonable set of parameters in calculations is emphasized

  15. Bridging the gap in complementary and alternative medicine research: manualization as a means of promoting standardization and flexibility of treatment in clinical trials of acupuncture.

    Science.gov (United States)

    Schnyer, Rosa N; Allen, John J B

    2002-10-01

    An important methodological challenge encountered in acupuncture clinical research involves the design of treatment protocols that help ensure standardization and replicability while allowing for the necessary flexibility to tailor treatments to each individual. Manualization of protocols used in clinical trials of acupuncture and other traditionally-based complementary and alternative medicine (CAM) systems facilitates the systematic delivery of replicable and standardized, yet individually-tailored treatments. To facilitate high-quality CAM acupuncture research by outlining a method for the systematic design and implementation of protocols used in CAM clinical trials based on the concept of treatment manualization. A series of treatment manuals was developed to systematically articulate the Chinese medical theoretical and clinical framework for a given Western-defined illness, to increase the quality and consistency of treatment, and to standardize the technical aspects of the protocol. In all, three manuals were developed for National Institutes of Health (NIH)-funded clinical trials of acupuncture for depression, spasticity in cerebral palsy, and repetitive stress injury. In Part I, the rationale underlying these manuals and the challenges encountered in creating them are discussed, and qualitative assessments of their utility are provided. In Part II, a methodology to develop treatment manuals for use in clinical trials is detailed, and examples are given. A treatment manual provides a precise way to train and supervise practitioners, enable evaluation of conformity and competence, facilitate the training process, and increase the ability to identify the active therapeutic ingredients in clinical trials of acupuncture.

  16. Method for decoupling error correction from privacy amplification

    Energy Technology Data Exchange (ETDEWEB)

    Lo, Hoi-Kwong [Department of Electrical and Computer Engineering and Department of Physics, University of Toronto, 10 King's College Road, Toronto, Ontario, Canada, M5S 3G4 (Canada)]

    2003-04-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof.

  17. Method for decoupling error correction from privacy amplification

    International Nuclear Information System (INIS)

    Lo, Hoi-Kwong

    2003-01-01

    In a standard quantum key distribution (QKD) scheme such as BB84, two procedures, error correction and privacy amplification, are applied to extract a final secure key from a raw key generated from quantum transmission. To simplify the study of protocols, it is commonly assumed that the two procedures can be decoupled from each other. While such a decoupling assumption may be valid for individual attacks, it is actually unproven in the context of ultimate or unconditional security, which is the Holy Grail of quantum cryptography. In particular, this means that the application of standard efficient two-way error-correction protocols like Cascade is not proven to be unconditionally secure. Here, I provide the first proof of such a decoupling principle in the context of unconditional security. The method requires Alice and Bob to share some initial secret string and use it to encrypt their communications in the error correction stage using one-time-pad encryption. Consequently, I prove the unconditional security of the interactive Cascade protocol proposed by Brassard and Salvail for error correction and modified by one-time-pad encryption of the error syndrome, followed by the random matrix protocol for privacy amplification. This is an efficient protocol in terms of both computational power and key generation rate. My proof uses the entanglement purification approach to security proofs of QKD. The proof applies to all adaptive symmetric methods for error correction, which cover all existing methods proposed for BB84. In terms of the net key generation rate, the new method is as efficient as the standard Shor-Preskill proof
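
    A toy sketch of the central ingredient described above, namely sending the error-correction information one-time-pad encrypted with pre-shared secret bits. The Hamming-style parity checks below stand in for the Cascade protocol and are purely illustrative assumptions.

```python
# Bob computes a parity-check syndrome of his raw key and sends it
# one-time-pad encrypted (XOR) with pre-shared secret bits.
import numpy as np

H = np.array([[1, 1, 0, 1, 1, 0, 0],      # toy parity-check matrix (Hamming(7,4))
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

rng = np.random.default_rng(1)
bob_key = rng.integers(0, 2, size=7)       # Bob's raw key bits
pad = rng.integers(0, 2, size=3)           # pre-shared secret bits (one-time pad)

syndrome = (H @ bob_key) % 2               # error-correction information
encrypted = syndrome ^ pad                 # one-time-pad encryption of the syndrome
# Alice, holding the same pad, recovers the syndrome without it leaking publicly:
assert np.array_equal(encrypted ^ pad, syndrome)
```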

  18. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit.

    Science.gov (United States)

    Arnold, Denis; Tomaschek, Fabian; Sering, Konstantin; Lopez, Florence; Baayen, R Harald

    2017-01-01

    Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20-44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a 'wide' yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.

  19. Correcting quantum errors with entanglement.

    Science.gov (United States)

    Brun, Todd; Devetak, Igor; Hsieh, Min-Hsiu

    2006-10-20

    We show how entanglement shared between encoder and decoder can simplify the theory of quantum error correction. The entanglement-assisted quantum codes we describe do not require the dual-containing constraint necessary for standard quantum error-correcting codes, thus allowing us to "quantize" all of classical linear coding theory. In particular, efficient modern classical codes that attain the Shannon capacity can be made into entanglement-assisted quantum codes attaining the hashing bound (closely related to the quantum capacity). For systems without large amounts of shared entanglement, these codes can also be used as catalytic codes, in which a small amount of initial entanglement enables quantum communication.

  20. Errors in second moments estimated from monostatic Doppler sodar winds. II. Application to field measurements

    DEFF Research Database (Denmark)

    Gaynor, J. E.; Kristensen, Leif

    1986-01-01

    Observatory tower. The approximate magnitude of the error due to spatial and temporal pulse volume separation is presented as a function of mean wind angle relative to the sodar configuration and for several antenna pulsing orders. Sodar-derived standard deviations of the lateral wind component, before...

  1. Error Analysis of Determining Airplane Location by Global Positioning System

    OpenAIRE

    Hajiyev, Chingiz; Burat, Alper

    1999-01-01

    This paper studies the error analysis of determining airplane location by global positioning system (GPS) using a statistical testing method. The Newton-Raphson method positions the airplane at the intersection point of four spheres. Absolute errors, relative errors and standard deviation have been calculated. The results show that the positioning error of the airplane varies with the coordinates of the GPS satellites and the airplane.
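
    The Newton-Raphson positioning step mentioned above can be sketched as follows, solving for receiver position and clock bias from four pseudorange "spheres". The clock-bias formulation is one common choice and is assumed here; the satellite coordinates, true position and bias are invented and do not reproduce the paper's data.

```python
import numpy as np

# Hypothetical satellite positions (m) and a "true" receiver state used to
# synthesize the measured pseudoranges.
sats = np.array([[15600e3,  7540e3, 20140e3],
                 [18760e3,  2750e3, 18610e3],
                 [17610e3, 14630e3, 13480e3],
                 [19170e3,   610e3, 18390e3]])
p_true = np.array([1.0e6, 2.0e6, 3.0e6])
b_true = 3.0e4                                      # receiver clock bias (m)
rho = np.linalg.norm(sats - p_true, axis=1) + b_true

x = np.zeros(4)                                     # unknowns: x, y, z, clock bias
for _ in range(10):                                 # Newton-Raphson iterations
    d = np.linalg.norm(sats - x[:3], axis=1)        # geometric ranges at current guess
    f = d + x[3] - rho                              # residuals of the four sphere equations
    J = np.hstack([(x[:3] - sats) / d[:, None],     # Jacobian: unit line-of-sight vectors
                   np.ones((4, 1))])                # and d(rho)/d(bias) = 1
    x -= np.linalg.solve(J, f)                      # Newton update

print(np.round(x[:3], 1), round(x[3], 2))           # recovers p_true and b_true
```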

  2. Advanced hardware design for error correcting codes

    CERN Document Server

    Coussy, Philippe

    2015-01-01

    This book provides thorough coverage of error correcting techniques. It includes essential basic concepts and the latest advances on key topics in design, implementation, and optimization of hardware/software systems for error correction. The book’s chapters are written by internationally recognized experts in this field. Topics include evolution of error correction techniques, industrial user needs, architectures, and design approaches for the most advanced error correcting codes (Polar Codes, Non-Binary LDPC, Product Codes, etc). This book provides access to recent results, and is suitable for graduate students and researchers of mathematics, computer science, and engineering. • Examines how to optimize the architecture of hardware design for error correcting codes; • Presents error correction codes from theory to optimized architecture for the current and the next generation standards; • Provides coverage of industrial user needs and advanced error correcting techniques.

  3. Errors in practical measurement in surveying, engineering, and technology

    International Nuclear Information System (INIS)

    Barry, B.A.; Morris, M.D.

    1991-01-01

    This book discusses statistical measurement, error theory, and statistical error analysis. The topics of the book include an introduction to measurement, measurement errors, the reliability of measurements, probability theory of errors, measures of reliability, reliability of repeated measurements, propagation of errors in computing, errors and weights, practical application of the theory of errors in measurement, two-dimensional errors and includes a bibliography. Appendices are included which address significant figures in measurement, basic concepts of probability and the normal probability curve, writing a sample specification for a procedure, classification, standards of accuracy, and general specifications of geodetic control surveys, the geoid, the frequency distribution curve and the computer and calculator solution of problems

  4. Field error lottery

    Energy Technology Data Exchange (ETDEWEB)

    Elliott, C.J.; McVey, B. (Los Alamos National Lab., NM (USA)); Quimby, D.C. (Spectra Technology, Inc., Bellevue, WA (USA))

    1990-01-01

    The level of field errors in an FEL is an important determinant of its performance. We have computed 3D performance of a large laser subsystem subjected to field errors of various types. These calculations have been guided by simple models such as SWOOP. The technique of choice is utilization of the FELEX free electron laser code that now possesses extensive engineering capabilities. Modeling includes the ability to establish tolerances of various types: fast and slow scale field bowing, field error level, beam position monitor error level, gap errors, defocusing errors, energy slew, displacement and pointing errors. Many effects of these errors on relative gain and relative power extraction are displayed and are the essential elements of determining an error budget. The random errors also depend on the particular random number seed used in the calculation. The simultaneous display of the performance versus error level of cases with multiple seeds illustrates the variations attributable to stochasticity of this model. All these errors are evaluated numerically for comprehensive engineering of the system. In particular, gap errors are found to place requirements beyond mechanical tolerances of ±25 μm, and amelioration of these may occur by a procedure utilizing direct measurement of the magnetic fields at assembly time. 4 refs., 12 figs.

  5. Measurement Error in Education and Growth Regressions

    NARCIS (Netherlands)

    Portela, Miguel; Alessie, Rob; Teulings, Coen

    2010-01-01

    The use of the perpetual inventory method for the construction of education data per country leads to systematic measurement error. This paper analyzes its effect on growth regressions. We suggest a methodology for correcting this error. The standard attenuation bias suggests that using these
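
    The attenuation bias referred to in the (truncated) abstract can be demonstrated with a short simulation: measurement error in the education variable shrinks the OLS slope toward zero by the reliability ratio. The data-generating numbers below are invented and unrelated to the paper's country data.

```python
import numpy as np

rng = np.random.default_rng(42)
n, beta = 100_000, 0.8
educ = rng.normal(10.0, 2.0, n)                 # "true" education
growth = beta * educ + rng.normal(0.0, 1.0, n)  # outcome
educ_obs = educ + rng.normal(0.0, 1.5, n)       # education measured with error

slope = np.cov(educ_obs, growth)[0, 1] / np.var(educ_obs, ddof=1)
reliability = np.var(educ, ddof=1) / (np.var(educ, ddof=1) + 1.5 ** 2)
print(f"OLS slope {slope:.3f}  ~  beta * reliability = {beta * reliability:.3f}")
```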

  6. Random error in cardiovascular meta-analyses

    DEFF Research Database (Denmark)

    Albalawi, Zaina; McAlister, Finlay A; Thorlund, Kristian

    2013-01-01

    BACKGROUND: Cochrane reviews are viewed as the gold standard in meta-analyses given their efforts to identify and limit systematic error which could cause spurious conclusions. The potential for random error to cause spurious conclusions in meta-analyses is less well appreciated. METHODS: We exam...

  7. Error tracking in a clinical biochemistry laboratory

    DEFF Research Database (Denmark)

    Szecsi, Pal Bela; Ødum, Lars

    2009-01-01

    BACKGROUND: We report our results for the systematic recording of all errors in a standard clinical laboratory over a 1-year period. METHODS: Recording was performed using a commercial database program. All individuals in the laboratory were allowed to report errors. The testing processes were cl...

  8. Spacecraft and propulsion technician error

    Science.gov (United States)

    Schultz, Daniel Clyde

    Commercial aviation and commercial space similarly launch, fly, and land passenger vehicles. Unlike aviation, the U.S. government has not established maintenance policies for commercial space. This study conducted a mixed methods review of 610 U.S. space launches from 1984 through 2011, which included 31 failures. An analysis of the failure causal factors showed that human error accounted for 76% of those failures, which included workmanship error accounting for 29% of the failures. With the imminent future of commercial space travel, the increased potential for the loss of human life demands that changes be made to the standardized procedures, training, and certification to reduce human error and failure rates. Several recommendations were made by this study to the FAA's Office of Commercial Space Transportation, space launch vehicle operators, and maintenance technician schools in an effort to increase the safety of the space transportation passengers.

  9. Prescription Errors in Psychiatry

    African Journals Online (AJOL)

    Arun Kumar Agnihotri

    clinical pharmacists in detecting errors before they have a (sometimes serious) clinical impact should not be underestimated. Research on medication error in mental health care is limited. .... participation in ward rounds and adverse drug.

  10. Operator quantum error-correcting subsystems for self-correcting quantum memories

    International Nuclear Information System (INIS)

    Bacon, Dave

    2006-01-01

    The most general method for encoding quantum information is not to encode the information into a subspace of a Hilbert space, but to encode information into a subsystem of a Hilbert space. Recently this notion has led to a more general notion of quantum error correction known as operator quantum error correction. In standard quantum error-correcting codes, one requires the ability to apply a procedure which exactly reverses on the error-correcting subspace any correctable error. In contrast, for operator error-correcting subsystems, the correction procedure need not undo the error which has occurred, but instead one must perform corrections only modulo the subsystem structure. This does not lead to codes which differ from subspace codes, but does lead to recovery routines which explicitly make use of the subsystem structure. Here we present two examples of such operator error-correcting subsystems. These examples are motivated by simple spatially local Hamiltonians on square and cubic lattices. In three dimensions we provide evidence, in the form of a simple mean field theory, that our Hamiltonian gives rise to a system which is self-correcting. Such a system will be a natural high-temperature quantum memory, robust to noise without external intervening quantum error-correction procedures

  11. Diagnosis of cerebral metastases by means of standard doses of Gadobutrol versus a high-dose protocol. Intraindividual evaluation of a phase-II high-dose study

    International Nuclear Information System (INIS)

    Vogl, T.J.; Friebe, C.E.; Balzer, T.; Mack, M.G.; Steiner, S.; Schedel, H.; Pegios, W.; Lanksch, W.; Banzer, D.; Felix, R.

    1995-01-01

    In a clinical phase-II study 20 patients who had been diagnosed as having brain metastases with CT or MRT were studied prospectively with Gadobutrol, a new nonionic, low osmolality contrast agent. Each patient received an initial injection of 0.1 mmol/kg body weight and an additional dose of 0.2 mmol/kg Gadobutrol 10 min later. Spin-echo images were obtained before and after the two applications of Gadobutrol. Dynamic scanning (Turbo-FLASH) was performed for 3 min after each injection of the contrast agent. Both quantitative and qualitative data were intraindividually evaluated. The primary tumor was a bronchial carcinoma in 11 cases; in 9 other cases there were different primary tumors. Forty-eight hours after the use of Gadobutrol there were no adverse signs in the clinical examination, vital signs or blood and urine chemistry. Statistical analysis (Friedman test and Wilcoxon test) of the C/N ratios between tumor and white matter, percentage enhancement, and visual assessment rating revealed statistically significant superiority of high-dose Gadobutrol injection in comparison to the standard dose. The percentage enhancement increased on average from 104% after 0.1 mmol/kg to 162% after 0.3 mmol/kg Gadobutrol. Qualitative delineation and contrast of the lesions increased significantly. The use of high-dose Gadobutrol improved the detection of 36 additional lesions in 6 patients. (orig./VHE) [de]

  12. Large errors and severe conditions

    CERN Document Server

    Smith, D L; Van Wormer, L A

    2002-01-01

    Physical parameters that can assume real-number values over a continuous range are generally represented by inherently positive random variables. However, if the uncertainties in these parameters are significant (large errors), conventional means of representing and manipulating the associated variables can lead to erroneous results. Instead, all analyses involving them must be conducted in a probabilistic framework. Several issues must be considered: First, non-linear functional relations between primary and derived variables may lead to significant 'error amplification' (severe conditions). Second, the commonly used normal (Gaussian) probability distribution must be replaced by a more appropriate function that avoids the occurrence of negative sampling results. Third, both primary random variables and those derived through well-defined functions must be dealt with entirely in terms of their probability distributions. Parameter 'values' and 'errors' should be interpreted as specific moments of these probabil...

  13. Optimized universal color palette design for error diffusion

    Science.gov (United States)

    Kolpatzik, Bernd W.; Bouman, Charles A.

    1995-04-01

    Currently, many low-cost computers can only simultaneously display a palette of 256 colors. However, this palette is usually selectable from a very large gamut of available colors. For many applications, this limited palette size imposes a significant constraint on the achievable image quality. We propose a method for designing an optimized universal color palette for use with halftoning methods such as error diffusion. The advantage of a universal color palette is that it is fixed and therefore allows multiple images to be displayed simultaneously. To design the palette, we employ a new vector quantization method known as sequential scalar quantization (SSQ) to allocate the colors in a visually uniform color space. The SSQ method achieves near-optimal allocation, but may be efficiently implemented using a series of lookup tables. When used with error diffusion, SSQ adds little computational overhead and may be used to minimize the visual error in an opponent color coordinate system. We compare the performance of the optimized algorithm to standard error diffusion by evaluating a visually weighted mean-squared-error measure. Our metric is based on the color difference in CIE L*a*b*, but also accounts for the lowpass characteristic of human contrast sensitivity.
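
    For orientation, the snippet below shows plain Floyd-Steinberg error diffusion onto a fixed palette. It is only the generic halftoning step; the paper's actual contributions (the SSQ palette design and the opponent-color error weighting) are not reproduced, and the palette and test image are made up.

```python
import numpy as np

def error_diffuse(img, palette):
    """img: HxWx3 float array in [0, 1]; palette: Kx3 float array.
    Returns an HxW array of palette indices."""
    out = np.zeros(img.shape[:2], dtype=np.intp)
    work = img.astype(float).copy()
    h, w, _ = work.shape
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            idx = np.argmin(((palette - old) ** 2).sum(axis=1))  # nearest palette color
            out[y, x] = idx
            err = old - palette[idx]                              # quantization error
            # Floyd-Steinberg weights: diffuse the error to unprocessed neighbors.
            if x + 1 < w:               work[y, x + 1]     += err * 7 / 16
            if y + 1 < h and x > 0:     work[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:               work[y + 1, x]     += err * 5 / 16
            if y + 1 < h and x + 1 < w: work[y + 1, x + 1] += err * 1 / 16
    return out

# Tiny demo: 8-color RGB-cube palette and a random 32x32 "image".
palette = np.array([[r, g, b] for r in (0, 1) for g in (0, 1) for b in (0, 1)], float)
indices = error_diffuse(np.random.default_rng(0).random((32, 32, 3)), palette)
```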

  14. Errors in otology.

    Science.gov (United States)

    Kartush, J M

    1996-11-01

    Practicing medicine successfully requires that errors in diagnosis and treatment be minimized. Malpractice laws encourage litigators to ascribe all medical errors to incompetence and negligence. There are, however, many other causes of unintended outcomes. This article describes common causes of errors and suggests ways to minimize mistakes in otologic practice. Widespread dissemination of knowledge about common errors and their precursors can reduce the incidence of their occurrence. Consequently, laws should be passed to allow for a system of non-punitive, confidential reporting of errors and "near misses" that can be shared by physicians nationwide.

  15. Random and Systematic Errors Share in Total Error of Probes for CNC Machine Tools

    Directory of Open Access Journals (Sweden)

    Adam Wozniak

    2018-03-01

    Probes for CNC machine tools, like every measurement device, have accuracy limited by random errors and by systematic errors. Random errors of these probes are described by a parameter called unidirectional repeatability. Manufacturers of probes for CNC machine tools usually specify only this parameter, while parameters describing systematic errors of the probes, such as pre-travel variation or triggering radius variation, are used rarely. Systematic errors of the probes, linked to the differences in pre-travel values for different measurement directions, can be corrected or compensated, but it is not a widely used procedure. In this paper, the share of systematic errors and random errors in the total error of exemplary probes is determined. In the case of simple, kinematic probes, systematic errors are much greater than random errors, so compensation would significantly reduce the probing error. Moreover, it shows that in the case of kinematic probes the commonly specified unidirectional repeatability is significantly better than 2D performance. However, in the case of a more precise strain-gauge probe, systematic errors are of the same order as random errors, which means that error correction or compensation would not, in this case, yield any significant benefits.

  16. The Meaning of Meaning, Etc.

    Science.gov (United States)

    Nilsen, Don L. F.

    This paper attempts to dispel a number of misconceptions about the nature of meaning, namely that: (1) synonyms are words that have the same meanings, (2) antonyms are words that have opposite meanings, (3) homonyms are words that sound the same but have different spellings and meanings, (4) converses are antonyms rather than synonyms, (5)…

  17. Performance evaluation of emerging JPEGXR compression standard for medical images

    International Nuclear Information System (INIS)

    Basit, M.A.

    2012-01-01

    Medical images require lossless compression as a small error due to lossy compression may be considered as a diagnostic error. JPEG XR is the latest image compression standard designed for a variety of applications and supports both lossy and lossless modes. This paper provides an in-depth performance evaluation of the latest JPEGXR against existing image coding standards for medical images using lossless compression. Various medical images are used for evaluation and ten images of each organ are tested. Performance of JPEGXR is compared with JPEG2000 and JPEGLS using mean square error, peak signal to noise ratio, mean absolute error and structural similarity index. JPEGXR shows improvement of 20.73 dB and 5.98 dB over JPEGLS and JPEG2000 respectively for various test images used in experimentation. (author)
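
    A small sketch of the fidelity metrics named in the abstract (mean square error, mean absolute error, peak signal to noise ratio); the structural similarity index is omitted since it needs a dedicated implementation or library. The 8-bit peak value and the synthetic test images are assumptions for illustration.

```python
import numpy as np

def fidelity(original, compressed, peak=255.0):
    original = original.astype(float)
    compressed = compressed.astype(float)
    mse = np.mean((original - compressed) ** 2)        # mean square error
    mae = np.mean(np.abs(original - compressed))       # mean absolute error
    psnr = float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)
    return mse, mae, psnr

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, (256, 256))                          # stand-in "original"
degraded = np.clip(ref + rng.normal(0, 2, ref.shape), 0, 255)   # stand-in "compressed"
print("MSE %.2f  MAE %.2f  PSNR %.1f dB" % fidelity(ref, degraded))
```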

  18. The error in total error reduction.

    Science.gov (United States)

    Witnauer, James E; Urcelay, Gonzalo P; Miller, Ralph R

    2014-02-01

    Most models of human and animal learning assume that learning is proportional to the discrepancy between a delivered outcome and the outcome predicted by all cues present during that trial (i.e., total error across a stimulus compound). This total error reduction (TER) view has been implemented in connectionist and artificial neural network models to describe the conditions under which weights between units change. Electrophysiological work has revealed that the activity of dopamine neurons is correlated with the total error signal in models of reward learning. Similar neural mechanisms presumably support fear conditioning, human contingency learning, and other types of learning. Using a computational modeling approach, we compared several TER models of associative learning to an alternative model that rejects the TER assumption in favor of local error reduction (LER), which assumes that learning about each cue is proportional to the discrepancy between the delivered outcome and the outcome predicted by that specific cue on that trial. The LER model provided a better fit to the reviewed data than the TER models. Given the superiority of the LER model with the present data sets, acceptance of TER should be tempered. Copyright © 2013 Elsevier Inc. All rights reserved.
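
    The contrast between the two update rules can be made concrete with a Rescorla-Wagner-style sketch: under total error reduction every present cue learns from the summed prediction, while under local error reduction each cue learns only from its own prediction. The learning rate and the compound-cue trial structure are invented for illustration.

```python
def ter_update(weights, cues, outcome, lr=0.1):
    """Total error reduction: each present cue learns from the summed prediction."""
    prediction = sum(weights[c] for c in cues)
    for c in cues:
        weights[c] += lr * (outcome - prediction)

def ler_update(weights, cues, outcome, lr=0.1):
    """Local error reduction: each cue learns from its own prediction only."""
    for c in cues:
        weights[c] += lr * (outcome - weights[c])

w_ter, w_ler = {"A": 0.0, "B": 0.0}, {"A": 0.0, "B": 0.0}
for _ in range(200):                       # compound AB always followed by outcome 1
    ter_update(w_ter, ["A", "B"], 1.0)
    ler_update(w_ler, ["A", "B"], 1.0)
print(w_ter)   # TER: the cues share the outcome, roughly 0.5 each
print(w_ler)   # LER: each weight approaches 1.0
```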

  19. Errors in Neonatology

    OpenAIRE

    Antonio Boldrini; Rosa T. Scaramuzzo; Armando Cuttano

    2013-01-01

    Introduction: Danger and errors are inherent in human activities. In medical practice errors can lean to adverse events for patients. Mass media echo the whole scenario. Methods: We reviewed recent published papers in PubMed database to focus on the evidence and management of errors in medical practice in general and in Neonatology in particular. We compared the results of the literature with our specific experience in Nina Simulation Centre (Pisa, Italy). Results: In Neonatology the main err...

  20. Learning from errors in super-resolution.

    Science.gov (United States)

    Tang, Yi; Yuan, Yuan

    2014-11-01

    A novel framework of learning-based super-resolution is proposed by employing the process of learning from the estimation errors. The estimation errors generated by different learning-based super-resolution algorithms are statistically shown to be sparse and uncertain. The sparsity of the estimation errors means that most of the estimation errors are small. The uncertainty of the estimation errors means that the location of a pixel with a larger estimation error is random. Noticing the prior information about the estimation errors, a nonlinear boosting process of learning from these estimation errors is introduced into the general framework of the learning-based super-resolution. Within the novel framework of super-resolution, a low-rank decomposition technique is used to share the information of different super-resolution estimations and to remove the sparse estimation errors from different learning algorithms or training samples. The experimental results show the effectiveness and the efficiency of the proposed framework in enhancing the performance of different learning-based algorithms.

  1. Bootstrap-Based Improvements for Inference with Clustered Errors

    OpenAIRE

    Doug Miller; A. Colin Cameron; Jonah B. Gelbach

    2006-01-01

    Microeconometrics researchers have increasingly realized the essential need to account for any within-group dependence in estimating standard errors of regression parameter estimates. The typical preferred solution is to calculate cluster-robust or sandwich standard errors that permit quite general heteroskedasticity and within-cluster error correlation, but presume that the number of clusters is large. In applications with few (5-30) clusters, standard asymptotic tests can over-reject consid...
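
    As a baseline illustration of cluster-aware inference, the sketch below computes a pairs-cluster bootstrap standard error for an OLS slope on simulated clustered data. It does not implement the paper's refinements for few clusters (such as the wild cluster bootstrap-t); all data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)
G, n_per = 20, 30                                  # clusters and observations per cluster
cluster = np.repeat(np.arange(G), n_per)
x = rng.normal(size=G * n_per)
u = rng.normal(size=G)[cluster] + rng.normal(size=G * n_per)   # within-cluster dependence
y = 1.0 + 0.5 * x + u

def ols_slope(x, y):
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

boot = []
for _ in range(999):                               # resample whole clusters with replacement
    picked = rng.integers(0, G, size=G)
    idx = np.concatenate([np.flatnonzero(cluster == g) for g in picked])
    boot.append(ols_slope(x[idx], y[idx]))
print("cluster-bootstrap SE of slope:", np.std(boot, ddof=1))
```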

  2. Systematic Procedural Error

    National Research Council Canada - National Science Library

    Byrne, Michael D

    2006-01-01

    .... This problem has received surprisingly little attention from cognitive psychologists. The research summarized here examines such errors in some detail both empirically and through computational cognitive modeling...

  3. Human errors and mistakes

    International Nuclear Information System (INIS)

    Wahlstroem, B.

    1993-01-01

    Human errors make a major contribution to the risks of industrial accidents. Accidents have provided important lessons, making it possible to build safer systems. In avoiding human errors it is necessary to adapt the systems to their operators. The complexity of modern industrial systems is however increasing the danger of system accidents. Models of the human operator have been proposed, but the models are not able to give accurate predictions of human performance. Human errors can never be eliminated, but their frequency can be decreased by systematic efforts. The paper gives a brief summary of research on human error and concludes with suggestions for further work. (orig.)

  4. Technical errors in MR arthrography

    International Nuclear Information System (INIS)

    Hodler, Juerg

    2008-01-01

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  5. Technical errors in MR arthrography

    Energy Technology Data Exchange (ETDEWEB)

    Hodler, Juerg [Orthopaedic University Hospital of Balgrist, Radiology, Zurich (Switzerland)

    2008-01-15

    This article discusses potential technical problems of MR arthrography. It starts with contraindications, followed by problems relating to injection technique, contrast material and MR imaging technique. For some of the aspects discussed, there is only little published evidence. Therefore, the article is based on the personal experience of the author and on local standards of procedures. Such standards, as well as medico-legal considerations, may vary from country to country. Contraindications for MR arthrography include pre-existing infection, reflex sympathetic dystrophy and possibly bleeding disorders, avascular necrosis and known allergy to contrast media. Errors in injection technique may lead to extra-articular collection of contrast agent or to contrast agent leaking from the joint space, which may cause diagnostic difficulties. Incorrect concentrations of contrast material influence image quality and may also lead to non-diagnostic examinations. Errors relating to MR imaging include delays between injection and imaging and inadequate choice of sequences. Potential solutions to the various possible errors are presented. (orig.)

  6. Minimum Mean-Square Error Single-Channel Signal Estimation

    DEFF Research Database (Denmark)

    Beierholm, Thomas

    2008-01-01

    The topic of this thesis is MMSE signal estimation for hearing aids when only one microphone is available. The research is relevant for noise reduction systems in hearing aids. To fully benefit from the amplification provided by a hearing aid, noise reduction functionality is important, as hearing-impaired persons in some noisy situations need a higher signal-to-noise ratio for speech to be intelligible when compared to normal-hearing persons. In this thesis two different methods to approach the MMSE signal estimation problem are examined. The methods differ in the way that models for the signal and noise ... inference is performed by particle filtering. The speech model is a time-varying auto-regressive model reparameterized by formant frequencies and bandwidths. The noise is assumed non-stationary and white. Compared to the case of using the AR coefficients directly, it is found very beneficial to perform ... algorithm. Although the performance of the two algorithms is found comparable, the particle filter algorithm is doing a much better job tracking the noise.

  7. Mean-Square Error Due to Gradiometer Field Measuring Devices

    Science.gov (United States)

    1991-06-01

    … convolving the gradiometer data with the inverse transform of 1/T(α, β), applying an appropriate … Hence (2) may be expressed in the transform domain as … the inverse transform of 1/T(α, β) will not be possible because its inverse does not exist, and because it is a high-pass function its use in an inverse transform technique … "…quency measurements," Superconductor Applications: SQUIDs and Machines, B. B. Schwartz and S. Foner, Eds. New York: Plenum Press.

  8. Learning from Errors

    Science.gov (United States)

    Metcalfe, Janet

    2017-01-01

    Although error avoidance during learning appears to be the rule in American classrooms, laboratory studies suggest that it may be a counterproductive strategy, at least for neurologically typical students. Experimental investigations indicate that errorful learning followed by corrective feedback is beneficial to learning. Interestingly, the…

  9. Action errors, error management, and learning in organizations.

    Science.gov (United States)

    Frese, Michael; Keith, Nina

    2015-01-03

    Every organization is confronted with errors. Most errors are corrected easily, but some may lead to negative consequences. Organizations often focus on error prevention as a single strategy for dealing with errors. Our review suggests that error prevention needs to be supplemented by error management--an approach directed at effectively dealing with errors after they have occurred, with the goal of minimizing negative and maximizing positive error consequences (examples of the latter are learning and innovations). After defining errors and related concepts, we review research on error-related processes affected by error management (error detection, damage control). Empirical evidence on positive effects of error management in individuals and organizations is then discussed, along with emotional, motivational, cognitive, and behavioral pathways of these effects. Learning from errors is central, but like other positive consequences, learning occurs under certain circumstances--one being the development of a mind-set of acceptance of human error.

  10. Computable Error Estimates for Finite Element Approximations of Elliptic Partial Differential Equations with Rough Stochastic Data

    KAUST Repository

    Hall, Eric Joseph; Hoel, Håkon; Sandberg, Mattias; Szepessy, Anders; Tempone, Raul

    2016-01-01

    posteriori error estimates fail to capture. We propose goal-oriented estimates, based on local error indicators, for the pathwise Galerkin and expected quadrature errors committed in standard, continuous, piecewise linear finite element approximations

  11. Error monitoring issues for common channel signaling

    Science.gov (United States)

    Hou, Victor T.; Kant, Krishna; Ramaswami, V.; Wang, Jonathan L.

    1994-04-01

    Motivated by field data which showed a large number of link changeovers and incidences of link oscillations between in-service and out-of-service states in common channel signaling (CCS) networks, a number of analyses of the link error monitoring procedures in the SS7 protocol were performed by the authors. This paper summarizes the results obtained thus far and includes the following: (1) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (2) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (3) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (4) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; (5) some recommendations on a customizable/adaptable scheme of error monitoring with a discussion on their implementability. These issues may be particularly relevant in the presence of anticipated increases in SS7 traffic due to widespread deployment of Advanced Intelligent Network (AIN) and Personal Communications Service (PCS) as well as for developing procedures for high-speed SS7 links currently under consideration by standards bodies.

  12. Analysis of the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) in Assessing Rounding Model

    Science.gov (United States)

    Wang, Weijie; Lu, Yanmin

    2018-03-01

    Most existing Collaborative Filtering (CF) algorithms predict a rating as the preference of an active user toward a given item, which is always a decimal fraction. Meanwhile, the actual ratings in most data sets are integers. In this paper, we discuss and demonstrate why rounding influences these two metrics differently, and prove that rounding is necessary in post-processing of the predicted ratings, eliminating model prediction bias and improving the accuracy of the prediction. In addition, we also propose two new rounding approaches based on the predicted rating probability distribution, which can be used to round the predicted rating to an optimal integer rating, and get better prediction accuracy compared to the Basic Rounding approach. Extensive experiments on different data sets validate the correctness of our analysis and the effectiveness of our proposed rounding approaches.
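
    A quick illustration of the two metrics evaluated on raw versus integer-rounded predictions; the synthetic ratings and the simple nearest-integer rounding are assumptions and do not reproduce the paper's probability-based rounding approaches.

```python
import numpy as np

rng = np.random.default_rng(0)
true = rng.integers(1, 6, size=10_000)                         # integer ratings 1..5
pred = np.clip(true + rng.normal(0, 0.6, true.size), 1, 5)     # decimal predictions

def mae(a, b):
    return np.mean(np.abs(a - b))

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

for name, p in [("raw", pred), ("rounded", np.rint(pred))]:
    print(f"{name:8s} MAE={mae(true, p):.3f}  RMSE={rmse(true, p):.3f}")
```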

  13. Refractive error magnitude and variability: Relation to age.

    Science.gov (United States)

    Irving, Elizabeth L; Machan, Carolyn M; Lam, Sharon; Hrynchak, Patricia K; Lillakas, Linda

    2018-03-19

    To investigate mean ocular refraction (MOR) and astigmatism over the human age range and to compare the severity of refractive error to earlier studies from clinical populations having large age ranges. For this descriptive study patient age, refractive error and history of surgery affecting refraction were abstracted from the Waterloo Eye Study database (WatES). Average MOR, standard deviation of MOR and astigmatism were assessed in relation to age. Refractive distributions for developmental age groups were determined. MOR standard deviation relative to average MOR was evaluated. Data from earlier clinically based studies with similar age ranges were compared to WatES. Right eye refractive errors were available for 5933 patients with no history of surgery affecting refraction. Average MOR varied with age. Children <1 yr of age were the most hyperopic (+1.79D) and the highest magnitude of myopia was found at 27 yrs (-2.86D). MOR distributions were leptokurtic, and negatively skewed. The mode varied with age group. MOR variability increased with increasing myopia. Average astigmatism increased gradually to age 60 after which it increased at a faster rate. By 85+ years it was 1.25D. The J0 power vector became increasingly negative with age. J45 power vector values remained close to zero but variability increased at approximately 70 years. In relation to comparable earlier studies, WatES data were the most myopic. Mean ocular refraction and refractive error distribution vary with age. The highest magnitude of myopia is found in young adults. Similar to prevalence, the severity of myopia also appears to have increased since 1931. Copyright © 2018 Spanish General Council of Optometry. Published by Elsevier España, S.L.U. All rights reserved.

  14. Uncorrected refractive errors.

    Science.gov (United States)

    Naidoo, Kovin S; Jaggernath, Jyoti

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  15. Uncorrected refractive errors

    Directory of Open Access Journals (Sweden)

    Kovin S Naidoo

    2012-01-01

    Global estimates indicate that more than 2.3 billion people in the world suffer from poor vision due to refractive error, of which 670 million people are considered visually impaired because they do not have access to corrective treatment. Refractive errors, if uncorrected, result in an impaired quality of life for millions of people worldwide, irrespective of their age, sex and ethnicity. Over the past decade, a series of studies using a survey methodology, referred to as Refractive Error Study in Children (RESC), were performed in populations with different ethnic origins and cultural settings. These studies confirmed that the prevalence of uncorrected refractive errors is considerably high for children in low-and-middle-income countries. Furthermore, uncorrected refractive error has been noted to have extensive social and economic impacts, such as limiting educational and employment opportunities of economically active persons, healthy individuals and communities. The key public health challenges presented by uncorrected refractive errors, the leading cause of vision impairment across the world, require urgent attention. To address these issues, it is critical to focus on the development of human resources and sustainable methods of service delivery. This paper discusses three core pillars to addressing the challenges posed by uncorrected refractive errors: Human Resource (HR) Development, Service Development and Social Entrepreneurship.

  16. Use of precision measurements for the limitation of effects beyond the standard model by means of an effective-field-theoretical approach; Verwendung von Praezisionsmessungen zur Eingrenzung von Effekten jenseits des Standardmodells mittels eines effektiven feldtheoretischen Zugangs

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, A.

    2006-09-25

    The standard model of elementary particle physics (SM) is perhaps the most significant theory in physics. It describes the interacting matter and gauge fields at high precision. Nevertheless, there are a few requirements, which are not fulfilled by the SM, for example the incorporation of gravity, neutrino oscillations and further open questions. On the way to a more comprehensive theory, one can make use of an effective power series ansatz, which describes the SM physics as well as new phenomena. We exploit this ansatz to parameterize new effects with the help of a new mass scale and a set of new coupling constants. In the lowest order, one retrieves the SM. Higher order effects describe the new physics. Requiring certain properties under symmetry transformations gives a proper number of effective operators with mass dimension six. These operators are the starting point of our considerations. First, we calculate decay rates and cross sections, respectively, for selected processes under the assumption that only one new operator contributes at a time. Assuming that the observable's additional contribution is smaller than the experimental error, we give upper limits to the new coupling constant depending on the new mass scale. For this purpose we use leptonic and certain semileptonic precision data. On the one hand, the results presented in this thesis give physicists the opportunity to decide which experiments are good candidates to increase precision. On the other hand, they show which experiment has the most promising potential for discoveries. (orig.)

  17. Use of precision measurements for the limitation of effects beyond the standard model by means of an effective-field-theoretical approach; Verwendung von Praezisionsmessungen zur Eingrenzung von Effekten jenseits des Standardmodells mittels eines effektiven feldtheoretischen Zugangs

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, A

    2006-09-25

    The standard model of elementary particle physics (SM) is perhaps the most significant theory in physics. It describes the interacting matter and gauge fields at high precision. Nevertheless, there are a few requirements, which are not fulfilled by the SM, for example the incorporation of gravity, neutrino oscillations and further open questions. On the way to a more comprehensive theory, one can make use of an effective power series ansatz, which describes the SM physics as well as new phenomena. We exploit this ansatz to parameterize new effects with the help of a new mass scale and a set of new coupling constants. In the lowest order, one retrieves the SM. Higher order effects describe the new physics. Requiring certain properties under symmetry transformations gives a proper number of effective operators with mass dimension six. These operators are the starting point of our considerations. First, we calculate decay rates and cross sections, respectively, for selected processes under the assumption that only one new operator contributes at a time. Assuming that the observable's additional contribution is smaller than the experimental error, we give upper limits to the new coupling constant depending on the new mass scale. For this purpose we use leptonic and certain semileptonic precision data. On the one hand, the results presented in this thesis give physicists the opportunity to decide which experiments are good candidates to increase precision. On the other hand, they show which experiment has the most promising potential for discoveries. (orig.)

  18. Preventing Errors in Laterality

    OpenAIRE

    Landau, Elliot; Hirschorn, David; Koutras, Iakovos; Malek, Alexander; Demissie, Seleshie

    2014-01-01

    An error in laterality is the reporting of a finding that is present on the right side as on the left or vice versa. While different medical and surgical specialties have implemented protocols to help prevent such errors, very few studies have been published that describe these errors in radiology reports and ways to prevent them. We devised a system that allows the radiologist to view reports in a separate window, displayed in a simple font and with all terms of laterality highlighted in sep...

  19. Errors and violations

    International Nuclear Information System (INIS)

    Reason, J.

    1988-01-01

    This paper is in three parts. The first part summarizes the human failures responsible for the Chernobyl disaster and argues that, in considering the human contribution to power plant emergencies, it is necessary to distinguish between: errors and violations; and active and latent failures. The second part presents empirical evidence, drawn from driver behavior, which suggests that errors and violations have different psychological origins. The concluding part outlines a resident pathogen view of accident causation, and seeks to identify the various system pathways along which errors and violations may be propagated

  20. Parameters and error of a theoretical model

    International Nuclear Information System (INIS)

    Moeller, P.; Nix, J.R.; Swiatecki, W.

    1986-09-01

    We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs
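
    A toy version of the procedure described above: fit model parameters to simulated "experimental" data by maximum likelihood (ordinary least squares for Gaussian errors) and take the model error as the RMS deviation between theory and data. The linear model and all numbers are invented; the paper derives its own closed-form expressions.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.linspace(0, 10, 40)
data = 2.0 + 0.7 * x + rng.normal(0, 0.5, x.size)    # simulated "experiment"

A = np.column_stack([np.ones_like(x), x])            # toy linear theoretical model
params, *_ = np.linalg.lstsq(A, data, rcond=None)    # maximum-likelihood parameters
residuals = data - A @ params
model_error = np.sqrt(np.mean(residuals ** 2))       # RMS theory-experiment deviation
print("parameters:", np.round(params, 3), " model error:", round(model_error, 3))
```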

  1. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed, respectively, diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated) intensity measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with

  2. Error analysis for mesospheric temperature profiling by absorptive occultation sensors

    Directory of Open Access Journals (Sweden)

    M. J. Rieder

    2001-01-01

    Full Text Available An error analysis for mesospheric profiles retrieved from absorptive occultation data has been performed, starting with realistic error assumptions as would apply to intensity data collected by available high-precision UV photodiode sensors. Propagation of statistical errors was investigated through the complete retrieval chain from measured intensity profiles to atmospheric density, pressure, and temperature profiles. We assumed unbiased errors as the occultation method is essentially self-calibrating and straight-line propagation of occulted signals as we focus on heights of 50–100 km, where refractive bending of the sensed radiation is negligible. Throughout the analysis the errors were characterized at each retrieval step by their mean profile, their covariance matrix and their probability density function (pdf). This furnishes, compared to a variance-only estimation, a much improved insight into the error propagation mechanism. We applied the procedure to a baseline analysis of the performance of a recently proposed solar UV occultation sensor (SMAS – Sun Monitor and Atmospheric Sounder) and provide, using a reasonable exponential atmospheric model as background, results on error standard deviations and error correlation functions of density, pressure, and temperature profiles. Two different sensor photodiode assumptions are discussed, respectively, diamond diodes (DD) with 0.03% and silicon diodes (SD) with 0.1% (unattenuated intensity) measurement noise at 10 Hz sampling rate. A factor-of-2 margin was applied to these noise values in order to roughly account for unmodeled cross section uncertainties. Within the entire height domain (50–100 km) we find temperature to be retrieved to better than 0.3 K (DD) / 1 K (SD) accuracy, respectively, at 2 km height resolution. The results indicate that absorptive occultations acquired by a SMAS-type sensor could provide mesospheric profiles of fundamental variables such as temperature with
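    The step-by-step error characterization described here (mean profile, covariance matrix and empirical pdf at every stage of the retrieval) is straightforward to emulate with a Monte Carlo propagation; the sketch below does so for an invented two-step "retrieval", so the transforms and numbers are placeholders rather than the SMAS processing chain.

```python
# Illustrative Monte Carlo error propagation through a toy retrieval chain,
# characterizing the error at the final step by its mean, covariance and pdf.
import numpy as np

rng = np.random.default_rng(1)
n_heights, n_samples = 50, 5000

true_intensity = np.linspace(1.0, 0.2, n_heights)       # idealized intensity profile
noise_sigma = 0.001                                      # 0.1% unattenuated-intensity noise

# step 1: noisy intensity -> optical depth; step 2: optical depth -> "density"
intensity = true_intensity + rng.normal(0.0, noise_sigma, (n_samples, n_heights))
tau = -np.log(intensity)                                 # toy inversion step
density = np.gradient(tau, axis=1)                       # toy differentiation step

err = density - np.gradient(-np.log(true_intensity))
print("mean error profile (first 5 levels):", err.mean(axis=0)[:5])
print("error covariance (first 3x3 block):\n", np.cov(err, rowvar=False)[:3, :3])
# empirical pdf of the error at one height level
hist, edges = np.histogram(err[:, 25], bins=40, density=True)
```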

  3. Valuation Biases, Error Measures, and the Conglomerate Discount

    NARCIS (Netherlands)

    I. Dittmann (Ingolf); E.G. Maug (Ernst)

    2006-01-01

    We document the importance of the choice of error measure (percentage vs. logarithmic errors) for the comparison of alternative valuation procedures. We demonstrate for several multiple valuation methods (averaging with the arithmetic mean, harmonic mean, median, geometric mean) that the

  4. Help prevent hospital errors

    Science.gov (United States)

    //medlineplus.gov/ency/patientinstructions/000618.htm — Help prevent hospital errors: patient instructions on staying safe in the hospital; if you are having surgery, help keep yourself safe: go to a hospital you ...

  5. Pedal Application Errors

    Science.gov (United States)

    2012-03-01

    This project examined the prevalence of pedal application errors and the driver, vehicle, roadway and/or environmental characteristics associated with pedal misapplication crashes based on a literature review, analysis of news media reports, a panel ...

  6. Spotting software errors sooner

    International Nuclear Information System (INIS)

    Munro, D.

    1989-01-01

    Static analysis is helping to identify software errors at an earlier stage and more cheaply than conventional methods of testing. RTP Software's MALPAS system also has the ability to check that a code conforms to its original specification. (author)

  7. Errors in energy bills

    International Nuclear Information System (INIS)

    Kop, L.

    2001-01-01

    On request, the Dutch Association for Energy, Environment and Water (VEMW) checks the energy bills for her customers. It appeared that in the year 2000 many small, but also big errors were discovered in the bills of 42 businesses

  8. Medical Errors Reduction Initiative

    National Research Council Canada - National Science Library

    Mutter, Michael L

    2005-01-01

    The Valley Hospital of Ridgewood, New Jersey, is proposing to extend a limited but highly successful specimen management and medication administration medical errors reduction initiative on a hospital-wide basis...

  9. The surveillance error grid.

    Science.gov (United States)

    Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris

    2014-07-01

    Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. Modeled glucose monitor data with realistic self-monitoring of blood glucose errors derived from meter testing experiments plotted on the SEG when compared to

  10. Design for Error Tolerance

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1983-01-01

    An important aspect of the optimal design of computer-based operator support systems is the sensitivity of such systems to operator errors. The author discusses how a system might allow for human variability with the use of reversibility and observability.

  11. Understanding and Confronting Our Mistakes: The Epidemiology of Error in Radiology and Strategies for Error Reduction.

    Science.gov (United States)

    Bruno, Michael A; Walker, Eric A; Abujudeh, Hani H

    2015-10-01

    Arriving at a medical diagnosis is a highly complex process that is extremely error prone. Missed or delayed diagnoses often lead to patient harm and missed opportunities for treatment. Since medical imaging is a major contributor to the overall diagnostic process, it is also a major potential source of diagnostic error. Although some diagnoses may be missed because of the technical or physical limitations of the imaging modality, including image resolution, intrinsic or extrinsic contrast, and signal-to-noise ratio, most missed radiologic diagnoses are attributable to image interpretation errors by radiologists. Radiologic interpretation cannot be mechanized or automated; it is a human enterprise based on complex psychophysiologic and cognitive processes and is itself subject to a wide variety of error types, including perceptual errors (those in which an important abnormality is simply not seen on the images) and cognitive errors (those in which the abnormality is visually detected but the meaning or importance of the finding is not correctly understood or appreciated). The overall prevalence of radiologists' errors in practice does not appear to have changed since it was first estimated in the 1960s. The authors review the epidemiology of errors in diagnostic radiology, including a recently proposed taxonomy of radiologists' errors, as well as research findings, in an attempt to elucidate possible underlying causes of these errors. The authors also propose strategies for error reduction in radiology. On the basis of current understanding, specific suggestions are offered as to how radiologists can improve their performance in practice. © RSNA, 2015.

  12. Comparison of computer workstation with film for detecting setup errors

    International Nuclear Information System (INIS)

    Fritsch, D.S.; Boxwala, A.A.; Raghavan, S.; Coffee, C.; Major, S.A.; Muller, K.E.; Chaney, E.L.

    1997-01-01

    reviewed using each display modality. Images were randomly assigned for each session, and observers viewed a different subset of images on films than the images viewed with PortFolio in the same session. The same image never appeared in both the workstation and view box portions of each session. A total of 360 observations were made. The number of cases in which the observers could detect the induced field placement errors correctly and the mean accuracy with which the errors could be detected for each approach were measured and compared using repeated measure Analysis of Variance (ANOVA). Results: Results of the ANOVA analysis show that radiation oncologists participating in this study could detect and quantitate in-plane rotation and translation errors significantly more accurately with PortFolio compared with standard clinical practice. No significant difference was found between the performances of experienced attendings and residents. Conclusion: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice. Results strongly suggest that setup errors can be detected more accurately using workstation technology

  13. Apologies and Medical Error

    Science.gov (United States)

    2008-01-01

    One way in which physicians can respond to a medical error is to apologize. Apologies—statements that acknowledge an error and its consequences, take responsibility, and communicate regret for having caused harm—can decrease blame, decrease anger, increase trust, and improve relationships. Importantly, apologies also have the potential to decrease the risk of a medical malpractice lawsuit and can help settle claims by patients. Patients indicate they want and expect explanations and apologies after medical errors and physicians indicate they want to apologize. However, in practice, physicians tend to provide minimal information to patients after medical errors and infrequently offer complete apologies. Although fears about potential litigation are the most commonly cited barrier to apologizing after medical error, the link between litigation risk and the practice of disclosure and apology is tenuous. Other barriers might include the culture of medicine and the inherent psychological difficulties in facing one’s mistakes and apologizing for them. Despite these barriers, incorporating apology into conversations between physicians and patients can address the needs of both parties and can play a role in the effective resolution of disputes related to medical error. PMID:18972177

  14. Thermodynamics of Error Correction

    Directory of Open Access Journals (Sweden)

    Pablo Sartori

    2015-12-01

    Full Text Available Information processing at the molecular scale is limited by thermal fluctuations. This can cause undesired consequences in copying information since thermal noise can lead to errors that can compromise the functionality of the copy. For example, a high error rate during DNA duplication can lead to cell death. Given the importance of accurate copying at the molecular scale, it is fundamental to understand its thermodynamic features. In this paper, we derive a universal expression for the copy error as a function of entropy production and work dissipated by the system during wrong incorporations. Its derivation is based on the second law of thermodynamics; hence, its validity is independent of the details of the molecular machinery, be it any polymerase or artificial copying device. Using this expression, we find that information can be copied in three different regimes. In two of them, work is dissipated to either increase or decrease the error. In the third regime, the protocol extracts work while correcting errors, reminiscent of a Maxwell demon. As a case study, we apply our framework to study a copy protocol assisted by kinetic proofreading, and show that it can operate in any of these three regimes. We finally show that, for any effective proofreading scheme, error reduction is limited by the chemical driving of the proofreading reaction.

  15. Alpha-particle-induced soft errors in high speed bipolar RAM

    International Nuclear Information System (INIS)

    Mitsusada, Kazumichi; Kato, Yukio; Yamaguchi, Kunihiko; Inadachi, Masaaki

    1980-01-01

    As bipolar RAM (Random Access Memory) has been improved to a fast acting and highly integrated device, the problems negligible in the past have become the ones that can not be ignored. The problem of α-particles emitted from the radioactive substances in semiconductor package materials should be specifically noticed, which cause soft errors. The authors have produced experimentally the special 1 kbit bipolar RAM to investigate its soft errors. The package used was the standard 16 pin dual in-line type, with which the practical system mounting test and α-particle irradiation test have been performed. The results showed the occurrence of soft errors at the average rate of about 1 bit/700 device-hours. It is concluded that the cause was due to the α-particles emitted from the package materials, and at the same time, it was found that the rate of soft error occurrence was able to be greatly reduced by shielding against α-particles. The error rate significantly increased with the decrease of the stand-by current of memory cells and with the accumulated charge determined by time constant. The mechanism of soft error was also investigated, for which an approximate model to estimate the error rate by means of the effective noise charge due to α-particles and of the amount of reversible charges of memory cells is shown to compare it with the experimental results. (Wakatsuki, Y.)

  16. Radiative flux and forcing parameterization error in aerosol-free clear skies.

    Science.gov (United States)

    Pincus, Robert; Mlawer, Eli J; Oreopoulos, Lazaros; Ackerman, Andrew S; Baek, Sunghye; Brath, Manfred; Buehler, Stefan A; Cady-Pereira, Karen E; Cole, Jason N S; Dufresne, Jean-Louis; Kelley, Maxwell; Li, Jiangnan; Manners, James; Paynter, David J; Roehrig, Romain; Sekiguchi, Miho; Schwarzkopf, Daniel M

    2015-07-16

    Radiation parameterizations in GCMs are more accurate than their predecessors. Errors in estimates of 4×CO2 forcing are large, especially for solar radiation. Errors depend on atmospheric state, so the global mean error is unknown.

  17. FREIGHT CONTAINER LIFTING STANDARD

    Energy Technology Data Exchange (ETDEWEB)

    POWERS DJ; SCOTT MA; MACKEY TC

    2010-01-13

    This standard details the correct methods of lifting and handling Series 1 freight containers following ISO-3874 and ISO-1496. The changes within RPP-40736 will allow better reading comprehension, as well as correcting editorial errors.

  18. Study of Errors among Nursing Students

    Directory of Open Access Journals (Sweden)

    Ella Koren

    2007-09-01

    Full Text Available The study of errors in the health system today is a topic of considerable interest aimed at reducing errors through analysis of the phenomenon and the conclusions reached. Errors that occur frequently among health professionals have also been observed among nursing students. True, in most cases they are actually “near errors,” but these could be a future indicator of therapeutic reality and the effect of nurses' work environment on their personal performance. There are two different approaches to such errors: (a) The EPP (error-prone person) approach lays full responsibility at the door of the individual involved in the error, whether a student, nurse, doctor, or pharmacist. According to this approach, handling consists purely in identifying and penalizing the guilty party. (b) The EPE (error-prone environment) approach emphasizes the environment as a primary contributory factor to errors. The environment as an abstract concept includes components and processes of interpersonal communications, work relations, human engineering, workload, pressures, technical apparatus, and new technologies. The objective of the present study was to examine the role played by factors in and components of personal performance as compared to elements and features of the environment. The study was based on both of the aforementioned approaches, which, when combined, enable a comprehensive understanding of the phenomenon of errors among the student population as well as a comparison of factors contributing to human error and to error deriving from the environment. The theoretical basis of the study was a model that combined both approaches: one focusing on the individual and his or her personal performance and the other focusing on the work environment. The findings emphasize the work environment of health professionals as an EPE. However, errors could have been avoided by means of strict adherence to practical procedures. The authors examined error events in the

  19. Temperature error in digital bathythermograph data

    Digital Repository Service at National Institute of Oceanography (India)

    Pankajakshan, T.; Reddy, G.V.; Ratnakaran, L.; Sarupria, J.S.; RameshBabu, V.

    Mean difference between DBT and Nansen temperature (hereafter referred to as 'error') from surface to 800 m depth and for the two cruises is given in Fig. 3. Error bars are provided...

  20. Determination of potassium concentration in organic samples by means of x-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Soto Moran, R.L.; Szgedi, S.

    1993-01-01

    By means of X-ray fluorescence analysis and the internal standard method, using KH2PO4 as the added chemical compound, the potassium concentration of roots, stems, leaves, flowers and grains from quinoa (Chenopodium quinoa Willd.), previously treated with nitrogen fertilizers, has been determined, taking into account the increase in the average atomic number due to the standard used. Experimental errors are lower than 10%.

  1. Learning mechanisms to limit medication administration errors.

    Science.gov (United States)

    Drach-Zahavy, Anat; Pud, Dorit

    2010-04-01

    This paper is a report of a study conducted to identify and test the effectiveness of learning mechanisms applied by the nursing staff of hospital wards as a means of limiting medication administration errors. Since the influential report 'To Err Is Human', research has emphasized the role of team learning in reducing medication administration errors. Nevertheless, little is known about the mechanisms underlying team learning. Thirty-two hospital wards were randomly recruited. Data were collected during 2006 in Israel by a multi-method (observations, interviews and administrative data), multi-source (head nurses, bedside nurses) approach. Medication administration error was defined as any deviation from procedures, policies and/or best practices for medication administration, and was identified using semi-structured observations of nurses administering medication. Organizational learning was measured using semi-structured interviews with head nurses, and the previous year's reported medication administration errors were assessed using administrative data. The interview data revealed four learning mechanism patterns employed in an attempt to learn from medication administration errors: integrated, non-integrated, supervisory and patchy learning. Regression analysis results demonstrated that whereas the integrated pattern of learning mechanisms was associated with decreased errors, the non-integrated pattern was associated with increased errors. Supervisory and patchy learning mechanisms were not associated with errors. Superior learning mechanisms are those that represent the whole cycle of team learning, are enacted by nurses who administer medications to patients, and emphasize a system approach to data analysis instead of analysis of individual cases.

  2. Compact disk error measurements

    Science.gov (United States)

    Howe, D.; Harriman, K.; Tehranchi, B.

    1993-01-01

    The objectives of this project are as follows: provide hardware and software that will perform simple, real-time, high resolution (single-byte) measurement of the error burst and good data gap statistics seen by a photoCD player read channel when recorded CD write-once discs of variable quality (i.e., condition) are being read; extend the above system to enable measurement of the hard decision (i.e., 1-bit error flags) and soft decision (i.e., 2-bit error flags) decoding information that is produced/used by the Cross-Interleaved Reed-Solomon Code (CIRC) block decoder employed in the photoCD player read channel; construct a model that uses data obtained via the systems described above to produce meaningful estimates of output error rates (due to both uncorrected ECC words and misdecoded ECC words) when a CD disc having specific (measured) error statistics is read (completion date to be determined); and check the hypothesis that current adaptive CIRC block decoders are optimized for pressed (DAD/ROM) CD discs. If warranted, do a conceptual design of an adaptive CIRC decoder that is optimized for write-once CD discs.
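    For readers who want a concrete picture of the "error burst and good data gap statistics" mentioned above, here is a minimal, purely illustrative sketch that tallies burst and gap lengths from a stream of per-byte error flags; the flag values are made up.

```python
# Collect error-burst lengths and good-data gap lengths from a flag stream.
from itertools import groupby

def burst_gap_statistics(error_flags):
    """error_flags: iterable of 0/1, one flag per byte read."""
    bursts, gaps = [], []
    for flag, run in groupby(error_flags):
        length = sum(1 for _ in run)
        (bursts if flag else gaps).append(length)
    return bursts, gaps

flags = [0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 0, 0, 1, 1, 0]
bursts, gaps = burst_gap_statistics(flags)
print("burst lengths:", bursts)   # [3, 1, 2]
print("gap lengths:", gaps)       # [2, 4, 2, 1]
```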

  3. Dataset on the mean, standard deviation, broad-sense heritability and stability of wheat quality bred in three different ways and grown under organic and low-input conventional systems.

    Science.gov (United States)

    Rakszegi, Marianna; Löschenberger, Franziska; Hiltbrunner, Jürg; Vida, Gyula; Mikó, Péter

    2016-06-01

    An assessment was previously made of the effects of organic and low-input field management systems on the physical, grain compositional and processing quality of wheat and on the performance of varieties developed using different breeding methods ("Comparison of quality parameters of wheat varieties with different breeding origin under organic and low-input conventional conditions" [1]). Here, accompanying data are provided on the performance and stability analysis of the genotypes using the coefficient of variation and the 'ranking' and 'which-won-where' plots of GGE biplot analysis for the most important quality traits. Broad-sense heritability was also evaluated and is given for the most important physical and quality properties of the seed in organic and low-input management systems, while mean values and standard deviation of the studied properties are presented separately for organic and low-input fields.
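    A hedged sketch of the generic machinery behind two of the quantities in this dataset: broad-sense heritability (here the common entry-mean estimator H² = σ²G / (σ²G + σ²E / r) from a one-way ANOVA over genotypes with r replicates) and the coefficient of variation used for stability. This is textbook formula territory, not the authors' exact pipeline, and the data below are random.

```python
# Broad-sense heritability and CV on synthetic genotype-by-replicate data.
import numpy as np

def broad_sense_heritability(values):
    """values: 2-D array, rows = genotypes, columns = replicates."""
    g, r = values.shape
    grand_mean = values.mean()
    ms_genotype = r * ((values.mean(axis=1) - grand_mean) ** 2).sum() / (g - 1)
    ms_error = ((values - values.mean(axis=1, keepdims=True)) ** 2).sum() / (g * (r - 1))
    var_g = max((ms_genotype - ms_error) / r, 0.0)      # genotypic variance component
    return var_g / (var_g + ms_error / r)               # H^2 on an entry-mean basis

def coefficient_of_variation(x):
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

rng = np.random.default_rng(2)
data = rng.normal(loc=np.arange(10)[:, None] + 5.0, scale=1.0, size=(10, 4))  # 10 genotypes, 4 reps
print("H^2 =", round(broad_sense_heritability(data), 3))
print("CV% =", round(coefficient_of_variation(data.mean(axis=1)), 2))
```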

  4. Error analysis for determination of accuracy of an ultrasound navigation system for head and neck surgery.

    Science.gov (United States)

    Kozak, J; Krysztoforski, K; Kroll, T; Helbig, S; Helbig, M

    2009-01-01

    The use of conventional CT- or MRI-based navigation systems for head and neck surgery is unsatisfactory due to tissue shift. Moreover, changes occurring during surgical procedures cannot be visualized. To overcome these drawbacks, we developed a novel ultrasound-guided navigation system for head and neck surgery. A comprehensive error analysis was undertaken to determine the accuracy of this new system. The evaluation of the system accuracy was essentially based on the method of error definition for well-established fiducial marker registration methods (point-pair matching) as used in, for example, CT- or MRI-based navigation. This method was modified in accordance with the specific requirements of ultrasound-guided navigation. The Fiducial Localization Error (FLE), Fiducial Registration Error (FRE) and Target Registration Error (TRE) were determined. In our navigation system, the real error (the TRE actually measured) did not exceed a volume of 1.58 mm³ with a probability of 0.9. A mean value of 0.8 mm (standard deviation: 0.25 mm) was found for the FRE. The quality of the coordinate tracking system (Polaris localizer) could be defined with an FLE of 0.4 ± 0.11 mm (mean ± standard deviation). The quality of the coordinates of the crosshairs of the phantom was determined with a deviation of 0.5 mm (standard deviation: 0.07 mm). The results demonstrate that our newly developed ultrasound-guided navigation system shows only very small system deviations and therefore provides very accurate data for practical applications.
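    For orientation, the FLE/FRE/TRE quantities discussed here can be reproduced on synthetic data with a standard SVD-based (Kabsch) rigid point-pair registration; the sketch below does exactly that with invented marker coordinates and is not the authors' implementation.

```python
# Rigid point-pair registration and FRE/TRE on synthetic fiducials.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst (Kabsch)."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(0) - r @ src.mean(0)
    return r, t

rng = np.random.default_rng(3)
fiducials = rng.uniform(0, 100, (6, 3))                       # marker positions (mm)
true_r, true_t = np.eye(3), np.array([5.0, -2.0, 1.0])
measured = (fiducials @ true_r.T + true_t) + rng.normal(0, 0.4, fiducials.shape)  # FLE ~ 0.4 mm

r, t = rigid_register(fiducials, measured)
fre = np.linalg.norm(fiducials @ r.T + t - measured, axis=1)
target = np.array([[50.0, 50.0, 20.0]])                       # a target away from the markers
tre = np.linalg.norm(target @ r.T + t - (target @ true_r.T + true_t), axis=1)
print(f"FRE mean = {fre.mean():.2f} mm, TRE = {tre[0]:.2f} mm")
```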

  5. LIBERTARISMO & ERROR CATEGORIAL

    Directory of Open Access Journals (Sweden)

    Carlos G. Patarroyo G.

    2009-01-01

    Full Text Available This article offers a defense of libertarianism against two accusations according to which it commits a category error. To this end, Gilbert Ryle's philosophy is used as a tool to explain the reasons behind these accusations and to show why, even though certain versions of libertarianism that appeal to agent causation or to Cartesian dualism do commit these errors, a libertarianism that seeks the basis for the possibility of human freedom in physicalist indeterminism need not be accused of committing them.

  7. Error Free Software

    Science.gov (United States)

    1985-01-01

    A mathematical theory for development of "higher order" software to catch computer mistakes resulted from a Johnson Space Center contract for Apollo spacecraft navigation. Two women who were involved in the project formed Higher Order Software, Inc. to develop and market the system of error analysis and correction. They designed software which is logically error-free, which, in one instance, was found to increase productivity by 600%. USE.IT defines its objectives using AXES -- a user can write in English and the system converts to computer languages. It is employed by several large corporations.

  8. Regression away from the mean: Theory and examples.

    Science.gov (United States)

    Schwarz, Wolf; Reike, Dennis

    2018-02-01

    Using a standard repeated measures model with arbitrary true score distribution and normal error variables, we present some fundamental closed-form results which explicitly indicate the conditions under which regression effects towards the mean (RTM) and away from the mean are expected. Specifically, we show that for skewed and bimodal distributions many or even most cases will show a regression effect that is in expectation away from the mean, or that is not just towards but actually beyond the mean. We illustrate our results in quantitative detail with typical examples from experimental and biometric applications, which exhibit a clear regression away from the mean ('egression from the mean') signature. We aim not to repeal cautionary advice against potential RTM effects, but to present a balanced view of regression effects, based on a clear identification of the conditions governing the form that regression effects take in repeated measures designs. © 2017 The British Psychological Society.
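    The repeated-measures setting described here is easy to explore numerically. The sketch below, with arbitrary illustrative parameters, draws skewed true scores plus normal measurement error and compares the retest mean of an extreme group with the grand mean; swapping in normal, skewed or bimodal true-score distributions gives the kind of comparisons behind the towards/beyond/away-from-the-mean distinctions of the paper.

```python
# Simulate test-retest data: skewed true scores + independent normal errors.
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
true_scores = rng.exponential(scale=1.0, size=n)     # strongly skewed true-score distribution
t1 = true_scores + rng.normal(0.0, 0.5, n)           # measurement 1 = true score + error
t2 = true_scores + rng.normal(0.0, 0.5, n)           # measurement 2, independent error

selected = t1 > np.quantile(t1, 0.95)                # select extreme cases on measurement 1
print(f"grand mean          : {t1.mean():.3f}")
print(f"mean t1 | selected  : {t1[selected].mean():.3f}")
print(f"mean t2 | selected  : {t2[selected].mean():.3f}")
# Whether the selected group's retest mean moves towards, beyond, or away from
# the grand mean depends on the true-score distribution, as the paper shows.
```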

  9. Chernobyl - system accident or human error?

    International Nuclear Information System (INIS)

    Stang, E.

    1996-01-01

    Did human error cause the Chernobyl disaster? The standard point of view is that operator error was the root cause of the disaster. This was also the view of the Soviet Accident Commission. The paper analyses the operator errors at Chernobyl in a system context. The reactor operators committed errors that depended upon a lot of other failures that made up a complex accident scenario. The analysis is based on Charles Perrow's analysis of technological disasters. Failure possibility is an inherent property of high-risk industrial installations. The Chernobyl accident consisted of a chain of events that were both extremely improbable and difficult to predict. It is not reasonable to put the blame for the disaster on the operators. (author)

  10. Analysis of Wind Speed Forecasting Error Effects on Automatic Generation Control Performance

    Directory of Open Access Journals (Sweden)

    H. Rajabi Mashhadi

    2014-09-01

    Full Text Available The main goal of this paper is to study statistical indices and to evaluate automatic generation control (AGC) indices in a power system with large penetration of wind turbine generators (WTGs). The increasing penetration of wind turbine generation calls for closer study of its impact on power system frequency control. Frequency control is affected by real-time imbalances between system generation and load, and wind turbine generation adds fluctuations that make the system more unbalanced. The AGC loop helps to adjust the system frequency and the scheduled tie-line powers, and its quality is measured by several indices; a good index is one that reflects AGC performance as the power system actually operates. One well-known measure in the literature, introduced by NERC, is the Control Performance Standard (CPS). It has previously been claimed that a key factor in the CPS index is related to the standard deviation of the generation error, the installed power and the frequency response. This paper focuses on the impact of a several-hours-ahead wind speed forecast error on this factor. Furthermore, the evaluation of conventional control performance in power systems with large-scale wind turbine penetration is studied. The effects of wind speed standard deviation and of the degree of wind farm penetration are analyzed, and the importance of the mentioned factor is critically examined. In addition, the influence of the mean wind speed forecast error on this factor is investigated. The study system is a two-area system with a significant wind farm in one of the areas. The results show that the mean wind speed forecast error has a considerable effect on AGC performance, while the mentioned key factor is insensitive to this mean error.
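    For context, the NERC Control Performance Standard mentioned here is usually summarized by the CPS1 statistic. The sketch below uses the commonly cited form CPS1 = (2 − CF1) · 100 with CF1 = mean[(ACE/(−10B)) · ΔF] / ε1²; treat the formula, constants and signal values as illustrative assumptions and consult the NERC standard for the authoritative definition.

```python
# Hedged CPS1-style compliance calculation on invented one-minute data.
import numpy as np

rng = np.random.default_rng(5)
minutes = 60 * 24
bias_B = -500.0          # control-area frequency bias (MW / 0.1 Hz), sign per convention
eps1 = 0.018             # Hz, interconnection frequency bound (illustrative value)

# one-minute averages of area control error (MW) and frequency deviation (Hz);
# wind-forecast error would enter through the ACE term in a fuller model
ace = rng.normal(0.0, 30.0, minutes)
delta_f = rng.normal(0.0, 0.01, minutes)

cf1 = np.mean((ace / (-10.0 * bias_B)) * delta_f) / eps1**2
cps1 = (2.0 - cf1) * 100.0
print(f"CF1 = {cf1:.3f}, CPS1 = {cps1:.1f}%  (compliant if >= 100%)")
```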

  11. Error Correcting Codes

    Indian Academy of Sciences (India)

    Science and Automation at ... the Reed-Solomon code contained 223 bytes of data, (a byte ... then you have a data storage system with error correction, that ..... practical codes, storing such a table is infeasible, as it is generally too large.

  12. Error Correcting Codes

    Indian Academy of Sciences (India)

    Error Correcting Codes – Reed Solomon Codes. Priti Shankar, Department of Computer Science and Automation, Indian Institute of Science, Bangalore 560 012, India. Resonance – Journal of Science Education, Volume 2, Issue 3, March ...
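    The Resonance series above deals with Reed-Solomon codes; as a much simpler stand-in for the same principle (add redundancy, then locate and correct an error from a syndrome), the following self-contained sketch implements the classic Hamming(7,4) code, which corrects any single flipped bit.

```python
# Hamming(7,4): encode 4 data bits into 7, correct any single bit flip.
import numpy as np

G = np.array([[1, 0, 0, 0, 1, 1, 0],      # generator matrix (data bits then parity)
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],      # parity-check matrix; columns are syndromes
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(data4):
    return data4 @ G % 2

def decode(word7):
    syndrome = H @ word7 % 2
    if syndrome.any():                              # non-zero syndrome -> locate the flipped bit
        error_pos = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
        word7 = word7.copy()
        word7[error_pos] ^= 1
    return word7[:4]                                # first four positions carry the data

msg = np.array([1, 0, 1, 1])
codeword = encode(msg)
corrupted = codeword.copy(); corrupted[5] ^= 1      # flip one bit in transit
assert np.array_equal(decode(corrupted), msg)
print("recovered:", decode(corrupted))
```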

  13. Improved synthesis of glycine, taurine and sulfate conjugated bile acids as reference compounds and internal standards for ESI-MS/MS urinary profiling of inborn errors of bile acid synthesis.

    Science.gov (United States)

    Donazzolo, Elena; Gucciardi, Antonina; Mazzier, Daniela; Peggion, Cristina; Pirillo, Paola; Naturale, Mauro; Moretto, Alessandro; Giordano, Giuseppe

    2017-04-01

    Bile acid synthesis defects are rare genetic disorders characterized by a failure to produce normal bile acids (BAs), and by an accumulation of unusual and intermediary cholanoids. Measurements of cholanoids in urine samples by mass spectrometry are a gold standard for the diagnosis of these diseases. In this work improved methods for the chemical synthesis of 30 BAs conjugated with glycine, taurine and sulfate were developed. Diethyl phosphorocyanidate (DEPC) and diphenyl phosphoryl azide (DPPA) were used as coupling reagents for glycine and taurine conjugation. Sulfated BAs were obtained by sulfur trioxide-triethylamine complex (SO3-TEA) as sulfating agent and thereafter conjugated with glycine and taurine. All products were characterized by NMR, IR spectroscopy and high resolution mass spectrometry (HRMS). The use of these compounds as internal standards allows an improved accuracy of both identification and quantification of urinary bile acids. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Challenge and Error: Critical Events and Attention-Related Errors

    Science.gov (United States)

    Cheyne, James Allan; Carriere, Jonathan S. A.; Solman, Grayden J. F.; Smilek, Daniel

    2011-01-01

    Attention lapses resulting from reactivity to task challenges and their consequences constitute a pervasive factor affecting everyday performance errors and accidents. A bidirectional model of attention lapses (error ↔ attention lapse: Cheyne, Solman, Carriere, & Smilek, 2009) argues that errors beget errors by generating attention…

  15. Textbook Error: Short Circuiting on Electrochemical Cell

    Science.gov (United States)

    Bonicamp, Judith M.; Clark, Roy W.

    2007-01-01

    Short circuiting an electrochemical cell is an unreported but persistent error in the electrochemistry textbooks. It is suggested that diagrams depicting a cell delivering usable current to a load be postponed, that the theory of open-circuit galvanic cells be explained, that the voltages be calculated from the tables of standard reduction potentials, and…

  16. Team errors: definition and taxonomy

    International Nuclear Information System (INIS)

    Sasou, Kunihide; Reason, James

    1999-01-01

    In error analysis or error management, the focus is usually upon individuals who have made errors. In large complex systems, however, most people work in teams or groups. Considering this working environment, insufficient emphasis has been given to 'team errors'. This paper discusses the definition of team errors and its taxonomy. These notions are also applied to events that have occurred in the nuclear power industry, aviation industry and shipping industry. The paper also discusses the relations between team errors and Performance Shaping Factors (PSFs). As a result, the proposed definition and taxonomy are found to be useful in categorizing team errors. The analysis also reveals that deficiencies in communication, resource/task management, excessive authority gradient, excessive professional courtesy will cause team errors. Handling human errors as team errors provides an opportunity to reduce human errors

  17. Putting a face on medical errors: a patient perspective.

    Science.gov (United States)

    Kooienga, Sarah; Stewart, Valerie T

    2011-01-01

    Knowledge of the patient's perspective on medical error is limited. Research efforts have centered on how best to disclose error and how patients desire to have medical error disclosed. On the basis of a qualitative descriptive component of a mixed method study, a purposive sample of 30 community members told their stories of medical error. Their experiences focused on lack of communication, missed communication, or provider's poor interpersonal style of communication, greatly contrasting with the formal definition of error as failure to follow a set standard of care. For these participants, being a patient was more important than error or how an error is disclosed. The patient's understanding of error must be a key aspect of any quality improvement strategy. © 2010 National Association for Healthcare Quality.

  18. Errors in ADAS-cog administration and scoring may undermine clinical trials results.

    Science.gov (United States)

    Schafer, K; De Santi, S; Schneider, L S

    2011-06-01

    The Alzheimer's Disease Assessment Scale - cognitive subscale (ADAS-cog) is the most widely used cognitive outcome measure in AD trials. Although errors in administration and scoring have been suggested as factors masking accurate estimates and potential effects of treatments, there have been few formal examinations of errors with the ADAS-cog. We provided ADAS-cog administration training using standard methods to raters who were designated as experienced, potential raters by sponsors or contract research organizations for two clinical trials. Training included 1 hour sessions on test administration, scoring, question periods, and required that raters individually view and score a model ADAS-cog administration. Raters' scores were compared to the criterion scores established for the model administration. A total of 108 errors were made by 80.6% of the 72 raters; 37.5% made 1 error, 25.0% made 2 errors and 18.0% made 3 or more. Errors were made in all ADAS-cog subsections. The most common were in word finding difficulty (67% of the raters), word recognition (22%), and orientation (22%). For the raters who made 1, 2, or ≥ 3 errors the ADAS-cog score was 17.5 (95% CI, 17.3 - 17.8), 17.8 (17.0 - 18.5), and 18.8 (17.6 - 20.0), respectively, and compared to the criterion score, 18.3. ADAS-cog means differed significantly and the variances were more than twice as large between those who made errors on word finding and those who did not, 17.6 (SD=1.4) vs. 18.8 (SD=0.9), respectively (χ² = 37.2, P ADAS-cog scores and clinical trials outcomes. These errors may undermine detection of medication effects by contributing both to a biased point estimate and increased variance of the outcome.

  19. Some problems and errors in cytogenetic biodosimetry

    International Nuclear Information System (INIS)

    Mosse, Irma; Kilchevsky, Alexander; Nikolova, Nevena; Zhelev, Nikolai

    2017-01-01

    Human radiosensitivity is a quantitative trait that is generally subject to binomial distribution. Individual radiosensitivity, however, may deviate significantly from the mean (by 2–3 standard deviations). Thus, the same dose of radiation may result in different levels of genotoxic damage (commonly measured as chromosome aberration rates) in different individuals. There is a significant genetic component in individual radiosensitivity. It is related to carriership of variant alleles of various single-nucleotide polymorphisms (most of these in genes coding for proteins functioning in DNA damage identification and repair); carriership of a different number of alleles producing cumulative effects; amplification of gene copies coding for proteins responsible for radioresistance, mobile genetic elements and others. Among the other factors influencing individual radioresistance are: the radioadaptive response; the bystander effect; the levels of endogenous substances with radioprotective and antimutagenic properties; and environmental factors such as lifestyle and diet, physical activity, psycho-emotional state, hormonal state, certain drugs, infections and others. These factors may have radioprotective or sensitizing effects. Apparently, there are too many factors that may significantly modulate the biological effects of ionizing radiation. Thus, conventional methodologies for biodosimetry (specifically, cytogenetic methods) may produce significant errors if personal traits that may affect radioresistance are not accounted for

  20. Prescription Writing Errors of Midwifery Students in Common Gynecological problems

    Directory of Open Access Journals (Sweden)

    Serveh Parang

    2014-04-01

    Full Text Available Background and aim: Giving improper prescriptions is common among medical practitioners, mostly graduates, in most communities, even developed countries. So far, to our knowledge, no study has been conducted on the prescription writing of graduate midwifery students. Therefore, this study aimed to detect prescription writing errors of midwifery students in common gynecological problems. Methods: In this descriptive cross-sectional study, 56 bachelor midwifery students, who had passed the theoretical and clinical courses of gynecology, were evaluated by Objective Structured Clinical Examination (OSCE). A demographic questionnaire and a standard checklist for writing the prescriptions and medications were used for data collection. SPSS Version 16 was used to carry out descriptive statistics. Findings: Most of the students were single, with a mean age of 23.0±1.7 years. Most errors were related to not recording the patients' age and sex, diagnosis, chief complaint, and the prescriber's name (observed in less than 10% of the prescriptions). The complete dosage schedule and the drug name were stated in only 1.8±4.8% and 14±18.6% of prescriptions, respectively. In more than 93% of the cases, the route of use and treatment duration were not recorded. Conclusion: According to the results, the number of prescription errors made by midwifery students was high. Therefore, it is recommended to run educational courses on prescription writing skills (e.g. writing prescriptions based on World Health Organization (WHO) guidelines) for midwifery students.

  1. Error analysis of the crystal orientations obtained by the dictionary approach to EBSD indexing.

    Science.gov (United States)

    Ram, Farangis; Wright, Stuart; Singh, Saransh; De Graef, Marc

    2017-10-01

    The efficacy of the dictionary approach to Electron Back-Scatter Diffraction (EBSD) indexing was evaluated through the analysis of the error in the retrieved crystal orientations. EBSPs simulated by the Callahan-De Graef forward model were used for this purpose. Patterns were noised, distorted, and binned prior to dictionary indexing. Patterns with a high level of noise, with optical distortions, and with a 25 × 25 pixel size, when the error in projection center was 0.7% of the pattern width and the error in specimen tilt was 0.8°, were indexed with a 0.8° mean error in orientation. The same patterns, but 60 × 60 pixel in size, were indexed by the standard 2D Hough transform based approach with almost the same orientation accuracy. Optimal detection parameters in the Hough space were obtained by minimizing the orientation error. It was shown that if the error in detector geometry can be reduced to 0.1% in projection center and 0.1° in specimen tilt, the dictionary approach can retrieve a crystal orientation with a 0.2° accuracy. Copyright © 2017 Elsevier B.V. All rights reserved.
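    The orientation accuracies quoted above are misorientation angles between retrieved and reference orientations; the sketch below computes that angle from unit quaternions for two invented orientations, ignoring crystal symmetry operators for brevity (so it gives the generic angle rather than the cubic-symmetry-reduced one).

```python
# Misorientation angle between two orientations represented as unit quaternions.
import numpy as np

def misorientation_angle_deg(q1, q2):
    """q1, q2: unit quaternions (w, x, y, z). Returns the rotation angle between them."""
    dot = abs(float(np.dot(q1, q2)))            # |cos(theta/2)|, removes the sign ambiguity
    return 2.0 * np.degrees(np.arccos(np.clip(dot, -1.0, 1.0)))

def quat_from_axis_angle(axis, angle_deg):
    axis = np.asarray(axis, dtype=float)
    axis /= np.linalg.norm(axis)
    half = np.radians(angle_deg) / 2.0
    return np.concatenate(([np.cos(half)], np.sin(half) * axis))

q_ref = quat_from_axis_angle([1, 1, 0], 30.0)
q_fit = quat_from_axis_angle([1, 1, 0], 30.8)   # retrieved orientation, 0.8 degrees off
print(f"orientation error = {misorientation_angle_deg(q_ref, q_fit):.2f} deg")
```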

  2. On a Test of Hypothesis to Verify the Operating Risk Due to Accountancy Errors

    Directory of Open Access Journals (Sweden)

    Paola Maddalena Chiodini

    2014-12-01

    Full Text Available According to the Statement on Auditing Standards (SAS) No. 39 (AU 350.01), audit sampling is defined as “the application of an audit procedure to less than 100% of the items within an account balance or class of transactions for the purpose of evaluating some characteristic of the balance or class”. The audit system develops in different steps: some are not susceptible to sampling procedures, while others may be held using sampling techniques. The auditor may also be interested in two types of accounting error: the number of incorrect records in the sample that overcome a given threshold (natural error rate), which may be indicative of possible fraud, and the mean amount of monetary errors found in incorrect records. The aim of this study is to monitor jointly both types of errors through an appropriate system of hypotheses, with particular attention to the second type of error, which indicates the risk of failing to report errors that overcome the upper precision limits.
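    As a rough illustration of monitoring both error types jointly (not the authors' hypothesis system), the sketch below runs a one-sided binomial test of the sample error rate against a tolerable threshold and puts a one-sided t-based upper bound on the mean monetary error; all thresholds and sample values are invented.

```python
# Joint check of error rate (binomial test) and mean monetary error (t bound).
import numpy as np
from scipy import stats

n_sampled = 200                    # records examined
k_errors = 9                       # records found incorrect
tolerable_rate = 0.03              # threshold error rate from the audit plan

# P(X >= k) under H0: error rate == tolerable_rate
p_value_rate = stats.binom.sf(k_errors - 1, n_sampled, tolerable_rate)

monetary_errors = np.array([120.0, 35.0, 410.0, 60.0, 95.0, 15.0, 250.0, 80.0, 130.0])
mean_err = monetary_errors.mean()
sem = monetary_errors.std(ddof=1) / np.sqrt(monetary_errors.size)
upper_95 = mean_err + stats.t.ppf(0.95, df=monetary_errors.size - 1) * sem

print(f"P(observed or more errors | rate = {tolerable_rate}) = {p_value_rate:.3f}")
print(f"mean monetary error = {mean_err:.1f}, one-sided 95% upper bound = {upper_95:.1f}")
```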

  3. Changes in medical errors after implementation of a handoff program.

    Science.gov (United States)

    Starmer, Amy J; Spector, Nancy D; Srivastava, Rajendu; West, Daniel C; Rosenbluth, Glenn; Allen, April D; Noble, Elizabeth L; Tse, Lisa L; Dalal, Anuj K; Keohane, Carol A; Lipsitz, Stuart R; Rothschild, Jeffrey M; Wien, Matthew F; Yoon, Catherine S; Zigmont, Katherine R; Wilson, Karen M; O'Toole, Jennifer K; Solan, Lauren G; Aylor, Megan; Bismilla, Zia; Coffey, Maitreya; Mahant, Sanjay; Blankenburg, Rebecca L; Destino, Lauren A; Everhart, Jennifer L; Patel, Shilpa J; Bale, James F; Spackman, Jaime B; Stevenson, Adam T; Calaman, Sharon; Cole, F Sessions; Balmer, Dorene F; Hepps, Jennifer H; Lopreiato, Joseph O; Yu, Clifton E; Sectish, Theodore C; Landrigan, Christopher P

    2014-11-06

    Miscommunications are a leading cause of serious medical errors. Data from multicenter studies assessing programs designed to improve handoff of information about patient care are lacking. We conducted a prospective intervention study of a resident handoff-improvement program in nine hospitals, measuring rates of medical errors, preventable adverse events, and miscommunications, as well as resident workflow. The intervention included a mnemonic to standardize oral and written handoffs, handoff and communication training, a faculty development and observation program, and a sustainability campaign. Error rates were measured through active surveillance. Handoffs were assessed by means of evaluation of printed handoff documents and audio recordings. Workflow was assessed through time-motion observations. The primary outcome had two components: medical errors and preventable adverse events. In 10,740 patient admissions, the medical-error rate decreased by 23% from the preintervention period to the postintervention period (24.5 vs. 18.8 per 100 admissions, P<0.001), and the rate of preventable adverse events decreased by 30% (4.7 vs. 3.3 events per 100 admissions, P<0.001). The rate of nonpreventable adverse events did not change significantly (3.0 and 2.8 events per 100 admissions, P=0.79). Site-level analyses showed significant error reductions at six of nine sites. Across sites, significant increases were observed in the inclusion of all prespecified key elements in written documents and oral communication during handoff (nine written and five oral elements; P<0.001 for all 14 comparisons). There were no significant changes from the preintervention period to the postintervention period in the duration of oral handoffs (2.4 and 2.5 minutes per patient, respectively; P=0.55) or in resident workflow, including patient-family contact and computer time. Implementation of the handoff program was associated with reductions in medical errors and in preventable adverse events

  4. Imagery of Errors in Typing

    Science.gov (United States)

    Rieger, Martina; Martinez, Fanny; Wenke, Dorit

    2011-01-01

    Using a typing task we investigated whether insufficient imagination of errors and error corrections is related to duration differences between execution and imagination. In Experiment 1 spontaneous error imagination was investigated, whereas in Experiment 2 participants were specifically instructed to imagine errors. Further, in Experiment 2 we…

  5. Error-Free Software

    Science.gov (United States)

    1989-01-01

    001 is an integrated tool suited for automatically developing ultra reliable models, simulations and software systems. Developed and marketed by Hamilton Technologies, Inc. (HTI), it has been applied in engineering, manufacturing, banking and software tools development. The software provides the ability to simplify the complex. A system developed with 001 can be a prototype or fully developed with production quality code. It is free of interface errors, consistent, logically complete and has no data or control flow errors. Systems can be designed, developed and maintained with maximum productivity. Margaret Hamilton, President of Hamilton Technologies, also directed the research and development of USE.IT, an earlier product which was the first computer aided software engineering product in the industry to concentrate on automatically supporting the development of an ultrareliable system throughout its life cycle. Both products originated in NASA technology developed under a Johnson Space Center contract.

  6. Minimum Tracking Error Volatility

    OpenAIRE

    Luca RICCETTI

    2010-01-01

    Investors assign part of their funds to asset managers that are given the task of beating a benchmark. The risk management department usually imposes a maximum value of the tracking error volatility (TEV) in order to keep the risk of the portfolio near to that of the selected benchmark. However, risk management does not establish a rule on TEV which enables us to understand whether the asset manager is really active or not and, in practice, asset managers sometimes follow passively the corres...

  7. Error-correction coding

    Science.gov (United States)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.

  8. Satellite Photometric Error Determination

    Science.gov (United States)

    2015-10-18

    Tamara E. Payne, Philip J. Castro, Stephen A. Gregory (Applied Optimization). The authors advocate the adoption of new techniques based on in-frame photometric calibrations enabled by newly available all-sky star catalogs that contain highly ... filter systems will likely be supplanted by the Sloan-based filter systems. The Johnson photometric system is a set of filters in the optical

  9. Video Error Correction Using Steganography

    Science.gov (United States)

    Robie, David L.; Mersereau, Russell M.

    2002-12-01

    The transmission of any data is always subject to corruption due to errors, but video transmission, because of its real time nature must deal with these errors without retransmission of the corrupted data. The error can be handled using forward error correction in the encoder or error concealment techniques in the decoder. This MPEG-2 compliant codec uses data hiding to transmit error correction information and several error concealment techniques in the decoder. The decoder resynchronizes more quickly with fewer errors than traditional resynchronization techniques. It also allows for perfect recovery of differentially encoded DCT-DC components and motion vectors. This provides for a much higher quality picture in an error-prone environment while creating an almost imperceptible degradation of the picture in an error-free environment.

  11. Large-scale simulations of error-prone quantum computation devices

    International Nuclear Information System (INIS)

    Trieu, Doan Binh

    2009-01-01

    The theoretical concepts of quantum computation in the idealized and undisturbed case are well understood. However, in practice, all quantum computation devices do suffer from decoherence effects as well as from operational imprecisions. This work assesses the power of error-prone quantum computation devices using large-scale numerical simulations on parallel supercomputers. We present the Juelich Massively Parallel Ideal Quantum Computer Simulator (JUMPIQCS), that simulates a generic quantum computer on gate level. It comprises an error model for decoherence and operational errors. The robustness of various algorithms in the presence of noise has been analyzed. The simulation results show that for large system sizes and long computations it is imperative to actively correct errors by means of quantum error correction. We implemented the 5-, 7-, and 9-qubit quantum error correction codes. Our simulations confirm that using error-prone correction circuits with non-fault-tolerant quantum error correction will always fail, because more errors are introduced than being corrected. Fault-tolerant methods can overcome this problem, provided that the single qubit error rate is below a certain threshold. We incorporated fault-tolerant quantum error correction techniques into JUMPIQCS using Steane's 7-qubit code and determined this threshold numerically. Using the depolarizing channel as the source of decoherence, we find a threshold error rate of (5.2 ± 0.2) × 10⁻⁶. For Gaussian distributed operational over-rotations the threshold lies at a standard deviation of 0.0431 ± 0.0002. We can conclude that quantum error correction is especially well suited for the correction of operational imprecisions and systematic over-rotations. For realistic simulations of specific quantum computation devices we need to extend the generic model to dynamic simulations, i.e. time-dependent Hamiltonian simulations of realistic hardware models. We focus on today's most advanced technology, i
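    The threshold idea referred to here can be illustrated on a far smaller scale than the thesis's Steane-code simulations: the sketch below Monte-Carlos a classical 3-bit repetition code under independent bit-flip noise, where encoding only helps below the break-even error rate of 0.5. It is a didactic stand-in, not a quantum error correction simulation.

```python
# Logical vs. physical error rate for a 3-bit repetition code under bit flips.
import numpy as np

rng = np.random.default_rng(6)

def logical_error_rate(p, trials=200_000):
    flips = rng.random((trials, 3)) < p          # independent bit flips on 3 copies
    majority_wrong = flips.sum(axis=1) >= 2      # majority vote fails if >= 2 flips
    return majority_wrong.mean()

for p in [0.01, 0.1, 0.3, 0.5, 0.6]:
    p_logical = logical_error_rate(p)
    analytic = 3 * p**2 * (1 - p) + p**3
    print(f"p = {p:4.2f}   simulated p_L = {p_logical:.4f}   analytic = {analytic:.4f}")
```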

  13. Medication administration errors in Eastern Saudi Arabia

    International Nuclear Information System (INIS)

    Mir Sadat-Ali

    2010-01-01

    To assess the prevalence and characteristics of medication errors (ME) in patients admitted to King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia. Medication errors were documented by nurses and physicians on standard reporting forms (Hospital-Based Incident Reports). The study was carried out in King Fahd University Hospital, Alkhobar, Kingdom of Saudi Arabia, and all incident reports were collected during the period from January 2008 to December 2009. The incident reports were analyzed for age, gender, nationality, nursing unit, and the time when the ME was reported. The data were analyzed and statistically significant differences between groups were determined by Student's t-test; p-values of <0.05 with a 95% confidence interval were considered significant. There were 38 ME reported for the study period. The youngest patient was 5 days old and the oldest 70 years old. There were 31 Saudi and 7 non-Saudi patients involved. The most common error was missed medication, which was seen in 15 (39.5%) patients. Fifteen (39.5%) of the errors occurred in 2 units (pediatric medicine, and obstetrics and gynecology). Nineteen (50%) of the errors occurred during the 3-11 pm shift. Our study shows that the prevalence of ME in our institution is low in comparison with the world literature. This could be due to underreporting of the errors, and we believe that ME reporting should be made less punitive so that ME can be studied and preventive measures implemented (Author).

  14. A Method for Calculating the Mean Orbits of Meteor Streams

    Science.gov (United States)

    Voloshchuk, Yu. I.; Kashcheev, B. L.

    An examination of the published catalogs of orbits of meteor streams and of a large number of works devoted to the selection of streams, their analysis and interpretation, showed that elements of stream orbits are calculated, as a rule, as arithmetical (sometimes weighted) sample means. On the basis of these means, a search for parent bodies, a study of the evolution of swarms generating these streams, an analysis of one-dimensional and multidimensional distributions of these elements, etc., are performed. We show that systematic errors in the estimates of elements of the mean orbits are present in each of the catalogs. These errors are caused by the formal averaging of orbital elements over the sample, while ignoring the fact that they represent not only correlated, but dependent quantities, with interrelations that are in most cases nonlinear. Numerous examples are given of such inaccuracies, in particular, the cases where the "mean orbit of the stream" recorded by ground-based techniques does not cross the Earth's orbit. We suggest a computation algorithm in which the averaging over the sample is carried out at the initial stage of the calculation of the mean orbit, and only for the variables required for subsequent calculations. After this, the known astrometric formulas are used to sequentially calculate all other parameters of the stream, considered now as a standard orbit. Variance analysis is used to estimate the errors in orbital elements of the streams, in the case that their orbits are obtained by averaging the orbital elements of meteoroids forming the stream, without taking into account their interdependence. The results obtained in this analysis indicate the behavior of systematic errors in the elements of orbits of meteor streams. As an example, the effect of the incorrect computation method on the distribution of elements of the stream orbits close to the orbits of asteroids of the Apollo, Aten, and Amor groups (AAA asteroids) is examined.
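
    A toy numerical illustration (not from the paper) of the bias introduced by formally averaging dependent elements: the perihelion distance q = a(1 - e) computed from the averaged semi-major axis and eccentricity generally differs from the average of the individual perihelion distances whenever the two elements are correlated.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stream members: correlated semi-major axes (AU) and
    # eccentricities; the numbers are purely illustrative.
    a = rng.normal(2.5, 0.6, size=500)
    e = np.clip(0.9 - 0.1 * (a - 2.5) + rng.normal(0.0, 0.05, size=500), 0.0, 0.99)

    q = a * (1.0 - e)                          # perihelion distance of each member

    q_of_means = a.mean() * (1.0 - e.mean())   # q from formally averaged elements
    mean_of_q = q.mean()                       # average of the individual q values

    print(f"q(mean a, mean e)    = {q_of_means:.4f} AU")
    print(f"mean of individual q = {mean_of_q:.4f} AU")
    # The two disagree by roughly -cov(a, e): the kind of systematic bias the
    # authors attribute to formal element-by-element averaging.
    ```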

  15. Error-related brain activity and error awareness in an error classification paradigm.

    Science.gov (United States)

    Di Gregorio, Francesco; Steinhauser, Marco; Maier, Martin E

    2016-10-01

    Error-related brain activity has been linked to error detection enabling adaptive behavioral adjustments. However, it is still unclear which role error awareness plays in this process. Here, we show that the error-related negativity (Ne/ERN), an event-related potential reflecting early error monitoring, is dissociable from the degree of error awareness. Participants responded to a target while ignoring two different incongruent distractors. After responding, they indicated whether they had committed an error, and if so, whether they had responded to one or to the other distractor. This error classification paradigm allowed distinguishing between partially aware errors (i.e., errors that were noticed but misclassified) and fully aware errors (i.e., errors that were correctly classified). The Ne/ERN was larger for partially aware errors than for fully aware errors. Whereas this speaks against the idea that the Ne/ERN foreshadows the degree of error awareness, it confirms the prediction of a computational model, which relates the Ne/ERN to post-response conflict. This model predicts that stronger distractor processing - a prerequisite of error classification in our paradigm - leads to lower post-response conflict and thus a smaller Ne/ERN. This implies that the relationship between Ne/ERN and error awareness depends on how error awareness is related to response conflict in a specific task. Our results further indicate that the Ne/ERN but not the degree of error awareness determines adaptive performance adjustments. Taken together, we conclude that the Ne/ERN is dissociable from error awareness and foreshadows adaptive performance adjustments. Our results suggest that the relationship between the Ne/ERN and error awareness is correlative and mediated by response conflict. Copyright © 2016 Elsevier Inc. All rights reserved.

  16. Conjugate descent formulation of backpropagation error in feedforward neural networks

    Directory of Open Access Journals (Sweden)

    NK Sharma

    2009-06-01

    Full Text Available The feedforward neural network architecture uses backpropagation learning to determine optimal weights between different interconnected layers. This learning procedure uses a gradient descent technique applied to a sum-of-squares error function for the given input-output pattern. It employs an iterative procedure to minimise the error function for a given set of patterns, by adjusting the weights of the network. The first derivatives of the error with respect to the weights identify the local error surface in the descent direction. Hence the network exhibits a different local error surface for every different pattern presented to it, and weights are iteratively modified in order to minimise the current local error. The determination of an optimal weight vector is possible only when the total minimum error (the mean of the minimum local errors for all patterns from the training set) may be minimised. In this paper, we present a general mathematical formulation for the second derivative of the error function with respect to the weights (which represents a conjugate descent) for arbitrary feedforward neural network topologies, and we use this derivative information to obtain the optimal weight vector. The local error is backpropagated among the units of hidden layers via the second order derivative of the error with respect to the weights of the hidden and output layers independently and also in combination. The new total minimum error point may be evaluated with the help of the current total minimum error and the current minimised local error. The weight modification process is performed twice: once with respect to the present local error and once more with respect to the current total or mean error. We present some numerical evidence that our proposed method yields better network weights than those determined via a conventional gradient descent approach.
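
    For orientation, the following sketch implements the conventional first-order baseline that the paper improves upon: plain gradient descent on a sum-of-squares error for a small one-hidden-layer feedforward network. It is an illustrative baseline only, not the second-derivative (conjugate descent) scheme proposed in the article.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Toy pattern set: learn XOR with a 2-3-1 feedforward network.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [0]], dtype=float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    W1 = rng.normal(0.0, 1.0, (3, 3))   # 2 inputs + bias -> 3 hidden units
    W2 = rng.normal(0.0, 1.0, (4, 1))   # 3 hidden + bias -> 1 output
    eta = 0.5                           # learning rate
    ones = np.ones((X.shape[0], 1))

    for epoch in range(20000):
        H = sigmoid(np.hstack([X, ones]) @ W1)     # forward pass: hidden layer
        Y = sigmoid(np.hstack([H, ones]) @ W2)     # forward pass: output layer

        dY = (Y - T) * Y * (1 - Y)                 # output-layer delta
        dH = (dY @ W2[:3].T) * H * (1 - H)         # hidden-layer delta (backpropagated)

        W2 -= eta * np.hstack([H, ones]).T @ dY    # first-derivative weight updates
        W1 -= eta * np.hstack([X, ones]).T @ dH

    print("final sum-of-squares error:", round(float(0.5 * np.sum((Y - T) ** 2)), 4))
    print("network outputs:", Y.ravel().round(2))
    ```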

  17. Standard cross-section data

    International Nuclear Information System (INIS)

    Carlson, A.D.

    1984-01-01

    The accuracy of neutron cross-section measurement is limited by the uncertainty in the standard cross-section and the errors associated with using it. Any improvement in the standard immediately improves all cross-section measurements which have been made relative to that standard. Light element, capture and fission standards are discussed. (U.K.)
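
    A brief numerical illustration (with invented numbers) of why improving a standard propagates to every relative measurement: for a cross section obtained as a ratio R to the standard, sigma = R * sigma_std, the relative uncertainties add in quadrature, so tightening the standard's uncertainty directly tightens the result.

    ```python
    import math

    # Hypothetical measurement: cross section obtained as a ratio to a standard.
    R, dR = 0.85, 0.85 * 0.015             # measured ratio, 1.5% relative uncertainty
    sigma_std = 4.2                        # standard cross section (arbitrary units)
    dstd_old = 4.2 * 0.03                  # old standard: 3% uncertainty
    dstd_new = 4.2 * 0.01                  # improved standard: 1% uncertainty

    def ratio_result(R, dR, s, ds):
        """Propagate relative uncertainties in quadrature for sigma = R * s."""
        sigma = R * s
        rel = math.hypot(dR / R, ds / s)
        return sigma, sigma * rel

    for label, ds in [("old standard", dstd_old), ("improved standard", dstd_new)]:
        sigma, dsigma = ratio_result(R, dR, sigma_std, ds)
        print(f"{label}: sigma = {sigma:.3f} +/- {dsigma:.3f} "
              f"({100 * dsigma / sigma:.2f}% relative)")
    ```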

  18. A Relative View on Tracking Error

    NARCIS (Netherlands)

    W.G.P.M. Hallerbach (Winfried); I. Pouchkarev (Igor)

    2005-01-01

    When delegating investment decisions to a professional manager, investors often anchor their mandate to a specific benchmark. The manager's exposure to risk is controlled by means of a tracking error volatility constraint. It depends on market conditions whether this constraint is
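
    For context, tracking error volatility is conventionally computed as the standard deviation of the active returns (portfolio minus benchmark), often annualized; the snippet below is a generic illustration with made-up return series, not the relative framework developed in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    # Hypothetical monthly returns for a portfolio and its benchmark.
    benchmark = rng.normal(0.006, 0.04, size=36)
    portfolio = benchmark + rng.normal(0.001, 0.01, size=36)   # small active bets

    active = portfolio - benchmark
    te_monthly = active.std(ddof=1)          # tracking error volatility (monthly)
    te_annual = te_monthly * np.sqrt(12)     # annualized under an i.i.d. assumption

    print(f"monthly tracking error:    {te_monthly:.4f}")
    print(f"annualized tracking error: {te_annual:.4f}")
    ```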

  19. Comparison of computer workstation with light box for detecting setup errors from portal images

    International Nuclear Information System (INIS)

    Boxwala, Aziz A.; Chaney, Edward L.; Fritsch, Daniel S.; Raghavan, Suraj; Coffey, Christopher S.; Major, Stacey A.; Muller, Keith E.

    1999-01-01

    Purpose: Observer studies were conducted to test the hypothesis that radiation oncologists using a computer workstation for portal image analysis can detect setup errors at least as accurately as when following standard clinical practice of inspecting portal films on a light box. Methods and Materials: In a controlled observer study, nine radiation oncologists used a computer workstation, called PortFolio, to detect setup errors in 40 realistic digitally reconstructed portal radiograph (DRPR) images. PortFolio is a prototype workstation for radiation oncologists to display and inspect digital portal images for setup errors. PortFolio includes tools for image enhancement; alignment of crosshairs, field edges, and anatomic structures on reference and acquired images; measurement of distances and angles; and viewing registered images superimposed on one another. The test DRPRs contained known in-plane translation or rotation errors in the placement of the fields over target regions in the pelvis and head. Test images used in the study were also printed on film for observers to view on a light box and interpret using standard clinical practice. The mean accuracy for error detection for each approach was measured and the results were compared using repeated measures analysis of variance (ANOVA) with the Geisser-Greenhouse test statistic. Results: The results indicate that radiation oncologists participating in this study could detect and quantify in-plane rotation and translation errors more accurately with PortFolio compared to standard clinical practice. Conclusions: Based on the results of this limited study, it is reasonable to conclude that workstations similar to PortFolio can be used efficaciously in clinical practice

  20. The effect of errors in charged particle beams

    International Nuclear Information System (INIS)

    Carey, D.C.

    1987-01-01

    Residual errors in a charged particle optical system determine how well the performance of the system conforms to the theory on which it is based. Mathematically possible optical modes can sometimes be eliminated as requiring precisions not attainable. Other plans may require introduction of means of correction for the occurrence of various errors. Error types include misalignments, magnet fabrication precision limitations, and magnet current regulation errors. A thorough analysis of a beam optical system requires computer simulation of all these effects. A unified scheme for the simulation of errors and their correction is discussed

  1. Minimum Error Entropy Classification

    CERN Document Server

    Marques de Sá, Joaquim P; Santos, Jorge M F; Alexandre, Luís A

    2013-01-01

    This book explains the minimum error entropy (MEE) concept applied to data classification machines. Theoretical results on the inner workings of the MEE concept, in its application to solving a variety of classification problems, are presented in the wider realm of risk functionals. Researchers and practitioners also find in the book a detailed presentation of practical data classifiers using MEE. These include multi-layer perceptrons, recurrent neural networks, complex-valued neural networks, modular neural networks, and decision trees. A clustering algorithm using a MEE-like concept is also presented. Examples, tests, evaluation experiments and comparisons with similar machines using classic approaches complement the descriptions.

  2. Unitary Application of the Quantum Error Correction Codes

    International Nuclear Information System (INIS)

    You Bo; Xu Ke; Wu Xiaohua

    2012-01-01

    To apply the perfect code to transmit quantum information over a noisy channel, the standard protocol contains four steps: encoding, the noise channel, the error-correction operation, and decoding. In the present work, we show that this protocol can be simplified. The error-correction operation is not necessary if the decoding is realized by the so-called complete unitary transformation. We also offer a quantum circuit, which can correct arbitrary single-qubit errors.
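
    The idea that the correction operation can be absorbed into a "complete unitary" decoding can be illustrated on the much simpler 3-qubit bit-flip code (an illustration of the principle only; the paper works with the perfect five-qubit code): decoding with two CNOTs followed by a Toffoli restores the data qubit for any single bit-flip error, with no separate syndrome-driven correction step.

    ```python
    import numpy as np

    def bit(i, q):                    # value of qubit q (0 = leftmost) in basis index i
        return (i >> (2 - q)) & 1

    def permutation_gate(f):
        """8x8 unitary that maps each computational basis state |i> to |f(i)>."""
        U = np.zeros((8, 8))
        for i in range(8):
            U[f(i), i] = 1.0
        return U

    def cnot(control, target):
        return permutation_gate(lambda i: i ^ (bit(i, control) << (2 - target)))

    def toffoli(c1, c2, target):
        return permutation_gate(lambda i: i ^ ((bit(i, c1) & bit(i, c2)) << (2 - target)))

    def x_on(q):
        return permutation_gate(lambda i: i ^ (1 << (2 - q)))

    # Arbitrary data qubit |psi> = a|0> + b|1>, with two ancillas prepared in |0>.
    a, b = 0.6, 0.8j
    psi = np.array([a, b])
    state0 = np.kron(psi, np.kron([1.0, 0.0], [1.0, 0.0]))

    # Both encoding CNOTs share the same control, so their order is immaterial.
    encode = cnot(0, 2) @ cnot(0, 1)
    # "Complete unitary" decoding: the Toffoli absorbs the correction step.
    decode = toffoli(1, 2, 0) @ cnot(0, 2) @ cnot(0, 1)

    for label, E in [("no error", np.eye(8)), ("X on qubit 0", x_on(0)),
                     ("X on qubit 1", x_on(1)), ("X on qubit 2", x_on(2))]:
        state = decode @ E @ encode @ state0
        m = state.reshape(2, 4)              # axis 0 = data qubit, axis 1 = ancillas
        rho0 = m @ m.conj().T                # reduced density matrix of the data qubit
        fidelity = np.real(np.conj(psi) @ rho0 @ psi)
        print(f"{label:13s} fidelity of recovered data qubit: {fidelity:.6f}")
    ```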

  3. Investigating Medication Errors in Educational Health Centers of Kermanshah

    Directory of Open Access Journals (Sweden)

    Mohsen Mohammadi

    2015-08-01

    Full Text Available Background and objectives: Medication errors can be a threat to the safety of patients. Preventing medication errors requires reporting and investigating such errors. The present study was conducted with the purpose of investigating medication errors in the educational health centers of Kermanshah. Material and Methods: The present research is an applied, descriptive-analytical study conducted as a survey. The Error Report form of the Ministry of Health and Medical Education was used for data collection. The population of the study included all the personnel (nurses, doctors, paramedics) of the educational health centers of Kermanshah. Among them, those who reported the committed errors were selected as the sample of the study. The data analysis was done using descriptive statistics and the chi-square test in SPSS version 18. Results: The findings of the study showed that most errors were related to not using medication properly, the fewest errors were related to improper dose, and the majority of errors occurred in the morning. The most frequent reason for errors was staff negligence and the least frequent was lack of knowledge. Conclusion: The health care system should create an environment for detecting and reporting errors by the personnel, recognize the related factors causing errors, train the personnel, and create a good working environment and standard workload.

  4. Errors from Image Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wood, William Monford [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    Presenting a systematic study of the standard analysis of rod-pinch radiographs for obtaining quantitative measurements of areal mass densities, and making suggestions for improving the methodology of obtaining quantitative information from radiographed objects.

  5. First order error corrections in common introductory physics experiments

    Science.gov (United States)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students and not much thought is given to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better and more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.

  6. Prevalence and risk factors of undercorrected refractive errors among Singaporean Malay adults: the Singapore Malay Eye Study.

    Science.gov (United States)

    Rosman, Mohamad; Wong, Tien Y; Tay, Wan-Ting; Tong, Louis; Saw, Seang-Mei

    2009-08-01

    To describe the prevalence and the risk factors of undercorrected refractive error in an adult urban Malay population. This population-based, cross-sectional study was conducted in Singapore in 3280 Malay adults, aged 40 to 80 years. All individuals were examined at a centralized clinic and underwent standardized interviews and assessment of refractive errors and presenting and best corrected visual acuities. Distance presenting visual acuity was monocularly measured by using a logarithm of the minimum angle of resolution (logMAR) number chart at a distance of 4 m, with the participants wearing their "walk-in" optical corrections (spectacles or contact lenses), if any. Refraction was determined by subjective refraction by trained, certified study optometrists. Best corrected visual acuity was monocularly assessed and recorded in logMAR scores using the same test protocol as was used for presenting visual acuity. Undercorrected refractive error was defined as an improvement of at least 0.2 logMAR (2 lines equivalent) in the best corrected visual acuity compared with the presenting visual acuity in the better eye. The mean age of the subjects included in our study was 58 +/- 11 years, and 52% of the subjects were women. The prevalence rate of undercorrected refractive error among Singaporean Malay adults in our study (n = 3115) was 20.4% (age-standardized prevalence rate, 18.3%). More of the women had undercorrected refractive error than the men (21.8% vs. 18.8%, P = 0.04). Undercorrected refractive error was also more common in subjects older than 50 years than in subjects aged 40 to 49 years (22.6% vs. 14.3%, P Malay adults with refractive errors was higher than that of the Singaporean Chinese adults with refractive errors. Undercorrected refractive error is a significant cause of correctable visual impairment among Singaporean Malay adults, affecting one in five persons.

  7. Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes

    Science.gov (United States)

    Zavorsky, Gerald S.

    2010-01-01

    Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
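
    A small worked sketch of the quantities described above, using invented duplicate measurements: the within-subject standard deviation is estimated from the differences between paired trials, and the repeatability is 2.77 times that value (2.77 is approximately 1.96 x sqrt(2)).

    ```python
    import numpy as np

    # Hypothetical duplicate measurements (two trials per person) of some magnitude.
    trial1 = np.array([12.1, 14.3, 11.8, 15.0, 13.2, 12.9])
    trial2 = np.array([12.4, 13.9, 12.1, 15.4, 12.8, 13.3])

    # With duplicates, each subject's within-subject variance estimate is d_i**2 / 2,
    # where d_i is the difference between that subject's two trials.
    d = trial1 - trial2
    within_subject_sd = np.sqrt(np.mean(d ** 2) / 2.0)

    repeatability = 2.77 * within_subject_sd   # 95% limit for a repeat difference

    print(f"within-subject SD: {within_subject_sd:.3f}")
    print(f"repeatability:     {repeatability:.3f}")
    ```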

  8. Moderating Argos location errors in animal tracking data

    Science.gov (United States)

    Douglas, David C.; Weinziert, Rolf; Davidson, Sarah C.; Kays, Roland; Wikelski, Martin; Bohrer, Gil

    2012-01-01

    1. The Argos System is used worldwide to satellite-track free-ranging animals, but location errors can range from tens of metres to hundreds of kilometres. Low-quality locations (Argos classes A, 0, B and Z) dominate animal tracking data. Standard-quality animal tracking locations (Argos classes 3, 2 and 1) have larger errors than those reported in Argos manuals.

  9. Measurement Errors and Uncertainties Theory and Practice

    CERN Document Server

    Rabinovich, Semyon G

    2006-01-01

    Measurement Errors and Uncertainties addresses the most important problems that physicists and engineers encounter when estimating errors and uncertainty. Building from the fundamentals of measurement theory, the author develops the theory of accuracy of measurements and offers a wealth of practical recommendations and examples of applications. This new edition covers a wide range of subjects, including: - Basic concepts of metrology - Measuring instruments characterization, standardization and calibration - Estimation of errors and uncertainty of single and multiple measurements - Modern probability-based methods of estimating measurement uncertainty. With this new edition, the author completes the development of the new theory of indirect measurements. This theory provides more accurate and efficient methods for processing indirect measurement data. It eliminates the need to calculate the correlation coefficient - a stumbling block in measurement data processing - and offers for the first time a way to obtain...

  10. MO-F-BRA-04: Voxel-Based Statistical Analysis of Deformable Image Registration Error via a Finite Element Method.

    Science.gov (United States)

    Li, S; Lu, M; Kim, J; Glide-Hurst, C; Chetty, I; Zhong, H

    2012-06-01

    Purpose: Clinical implementation of adaptive treatment planning is limited by the lack of quantitative tools to assess deformable image registration errors (R-ERR). The purpose of this study was to develop a method, using finite element modeling (FEM), to estimate registration errors based on mechanical changes resulting from them. Methods: An experimental platform to quantify the correlation between registration errors and their mechanical consequences was developed as follows: diaphragm deformation was simulated on the CT images in patients with lung cancer using a finite element method (FEM). The simulated displacement vector fields (F-DVF) were used to warp each CT image to generate a FEM image. B-Spline based (Elastix) registrations were performed from reference to FEM images to generate a registration DVF (R-DVF). The F-DVF was subtracted from R-DVF. The magnitude of the difference vector was defined as the registration error, which is a consequence of mechanically unbalanced energy (UE), computed using 'in-house-developed' FEM software. A nonlinear regression model was used based on imaging voxel data and the analysis considered clustered voxel data within images. Results: A regression model analysis showed that UE was significantly correlated with registration error, DVF and the product of registration error and DVF, respectively, with R^2 = 0.73 (R = 0.854). The association was verified independently using 40 tracked landmarks. A linear function between the means of UE values and R-DVF*R-ERR has been established. The mean registration error (N=8) was 0.9 mm. 85.4% of voxels fit this model within one standard deviation. Conclusions: An encouraging relationship between UE and registration error has been found. These experimental results suggest the feasibility of UE as a valuable tool for evaluating registration errors, thus supporting 4D and adaptive radiotherapy. The research was supported by NIH/NCI R01CA140341. © 2012 American Association of Physicists in

  11. Measurements of stem diameter: implications for individual- and stand-level errors.

    Science.gov (United States)

    Paul, Keryn I; Larmour, John S; Roxburgh, Stephen H; England, Jacqueline R; Davies, Micah J; Luck, Hamish D

    2017-08-01

    Stem diameter is one of the most common measurements made to assess the growth of woody vegetation, and the commercial and environmental benefits that it provides (e.g. wood or biomass products, carbon sequestration, landscape remediation). Yet inconsistency in its measurement is a continuing source of error in estimates of stand-scale measures such as basal area, biomass, and volume. Here we assessed errors in stem diameter measurement through repeated measurements of individual trees and shrubs of varying size and form (i.e. single- and multi-stemmed) across a range of contrasting stands, from complex mixed-species plantings to commercial single-species plantations. We compared a standard diameter tape with a Stepped Diameter Gauge (SDG) for time efficiency and measurement error. Measurement errors in diameter were slightly (but significantly) influenced by size and form of the tree or shrub, and stem height at which the measurement was made. Compared to standard tape measurement, the mean systematic error with SDG measurement was only -0.17 cm, but varied between -0.10 and -0.52 cm. Similarly, random error was relatively large, with standard deviations (and percentage coefficients of variation) averaging only 0.36 cm (and 3.8%), but varying between 0.14 and 0.61 cm (and 1.9 and 7.1%). However, at the stand scale, sampling errors (i.e. how well individual trees or shrubs selected for measurement of diameter represented the true stand population in terms of the average and distribution of diameter) generally had at least a tenfold greater influence on random errors in basal area estimates than errors in diameter measurements. This supports the use of diameter measurement tools that have high efficiency, such as the SDG. Use of the SDG almost halved the time required for measurements compared to the diameter tape. Based on these findings, recommendations include the following: (i) use of a tape to maximise accuracy when developing allometric models, or when

  12. Decoding of DBEC-TBED Reed-Solomon codes. [Double-Byte-Error-Correcting, Triple-Byte-Error-Detecting

    Science.gov (United States)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1987-01-01

    A problem in designing semiconductor memories is to provide some measure of error control without requiring excessive coding overhead or decoding time. In LSI and VLSI technology, memories are often organized on a multiple bit (or byte) per chip basis. For example, some 256K-bit DRAMs are organized in 32K x 8-bit bytes. Byte-oriented codes such as Reed-Solomon (RS) codes can provide efficient low-overhead error control for such memories. However, the standard iterative algorithm for decoding RS codes is too slow for these applications. The paper presents a special decoding technique for double-byte-error-correcting, triple-byte-error-detecting RS codes which is capable of high-speed operation. This technique is designed to find the error locations and the error values directly from the syndrome without having to use the iterative algorithm to find the error locator polynomial.

  13. Overview of error-tolerant cockpit research

    Science.gov (United States)

    Abbott, Kathy

    1990-01-01

    The objectives of research in intelligent cockpit aids and intelligent error-tolerant systems are stated. In intelligent cockpit aids research, the objective is to provide increased aid and support to the flight crew of civil transport aircraft through the use of artificial intelligence techniques combined with traditional automation. In intelligent error-tolerant systems, the objective is to develop and evaluate cockpit systems that provide flight crews with safe and effective ways and means to manage aircraft systems, plan and replan flights, and respond to contingencies. A subsystems fault management functional diagram is given. All information is in viewgraph form.

  14. Error-Detecting Identification Codes for Algebra Students.

    Science.gov (United States)

    Sutherland, David C.

    1990-01-01

    Discusses common error-detecting identification codes using linear algebra terminology to provide an interesting application of algebra. Presents examples from the International Standard Book Number, the Universal Product Code, bank identification numbers, and the ZIP code bar code. (YP)
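
    As a concrete instance of the codes the article surveys, the sketch below validates an ISBN-10: the weighted digit sum with weights 10 down to 1 must be divisible by 11 ('X' stands for the value 10), so any single-digit error changes the checksum and is detected.

    ```python
    def isbn10_is_valid(isbn: str) -> bool:
        """ISBN-10 check: the weighted digit sum (weights 10 down to 1) must be
        divisible by 11; the letter 'X' stands for the value 10."""
        chars = [c for c in isbn if c not in "- "]
        if len(chars) != 10:
            return False
        digits = [10 if c.upper() == "X" else int(c) for c in chars]
        total = sum(w * d for w, d in zip(range(10, 0, -1), digits))
        return total % 11 == 0

    print(isbn10_is_valid("0-306-40615-2"))   # a well-known valid ISBN-10 -> True
    print(isbn10_is_valid("0-306-40615-3"))   # single-digit error -> False
    ```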

  15. Error forecasting schemes of error correction at receiver

    International Nuclear Information System (INIS)

    Bhunia, C.T.

    2007-08-01

    To combat error in computer communication networks, ARQ (Automatic Repeat Request) techniques are used. Recently Chakraborty has proposed a simple technique called the packet combining scheme in which error is corrected at the receiver from the erroneous copies. Packet Combining (PC) scheme fails: (i) when bit error locations in erroneous copies are the same and (ii) when multiple bit errors occur. Both these have been addressed recently by two schemes known as Packet Reversed Packet Combining (PRPC) Scheme, and Modified Packet Combining (MPC) Scheme respectively. In the letter, two error forecasting correction schemes are reported, which in combination with PRPC offer higher throughput. (author)

  16. Abnormal error monitoring in math-anxious individuals: evidence from error-related brain potentials.

    Directory of Open Access Journals (Sweden)

    Macarena Suárez-Pellicioni

    Full Text Available This study used event-related brain potentials to investigate whether math anxiety is related to abnormal error monitoring processing. Seventeen high math-anxious (HMA) and seventeen low math-anxious (LMA) individuals were presented with a numerical and a classical Stroop task. Groups did not differ in terms of trait or state anxiety. We found enhanced error-related negativity (ERN) in the HMA group when subjects committed an error on the numerical Stroop task, but not on the classical Stroop task. Groups did not differ in terms of the correct-related negativity component (CRN), the error positivity component (Pe), classical behavioral measures or post-error measures. The amplitude of the ERN was negatively related to participants' math anxiety scores, showing a more negative amplitude as the score increased. Moreover, using standardized low resolution electromagnetic tomography (sLORETA) we found greater activation of the insula in errors on a numerical task as compared to errors in a non-numerical task only for the HMA group. The results were interpreted according to the motivational significance theory of the ERN.

  17. Evaluating a medical error taxonomy.

    OpenAIRE

    Brixey, Juliana; Johnson, Todd R.; Zhang, Jiajie

    2002-01-01

    Healthcare has been slow in using human factors principles to reduce medical errors. The Center for Devices and Radiological Health (CDRH) recognizes that a lack of attention to human factors during product development may lead to errors that have the potential for patient injury, or even death. In response to the need for reducing medication errors, the National Coordinating Council for Medication Errors Reporting and Prevention (NCC MERP) released the NCC MERP taxonomy that provides a stand...

  18. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  19. Error Patterns in Problem Solving.

    Science.gov (United States)

    Babbitt, Beatrice C.

    Although many common problem-solving errors within the realm of school mathematics have been previously identified, a compilation of such errors is not readily available within learning disabilities textbooks, mathematics education texts, or teacher's manuals for school mathematics texts. Using data on error frequencies drawn from both the Fourth…

  20. Analysis of Errors in a Special Perturbations Satellite Orbit Propagator

    Energy Technology Data Exchange (ETDEWEB)

    Beckerman, M.; Jones, J.P.

    1999-02-01

    We performed an analysis of error densities for the Special Perturbations orbit propagator using data for 29 satellites in orbits of interest to Space Shuttle and International Space Station collision avoidance. We find that the along-track errors predominate. These errors increase monotonically over each 36-hour prediction interval. The predicted positions in the along-track direction progressively either leap ahead of or lag behind the actual positions. Unlike the along-track errors the radial and cross-track errors oscillate about their nearly zero mean values. As the number of observations per fit interval decline the along-track prediction errors, and amplitudes of the radial and cross-track errors, increase.

  1. Role of memory errors in quantum repeaters

    International Nuclear Information System (INIS)

    Hartmann, L.; Kraus, B.; Briegel, H.-J.; Duer, W.

    2007-01-01

    We investigate the influence of memory errors in the quantum repeater scheme for long-range quantum communication. We show that the communication distance is limited in the standard operation mode due to memory errors resulting from unavoidable waiting times for classical signals. We show how to overcome these limitations by (i) improving local memory and (ii) introducing two operational modes of the quantum repeater. In both operational modes, the repeater is run blindly, i.e., without waiting for classical signals to arrive. In the first scheme, entanglement purification protocols based on one-way classical communication are used, allowing communication over arbitrary distances. However, the error thresholds for noise in local control operations are very stringent. The second scheme makes use of entanglement purification protocols with two-way classical communication and inherits the favorable error thresholds of the repeater run in standard mode. One can increase the possible communication distance by an order of magnitude with reasonable overhead in physical resources. We outline the architecture of a quantum repeater that can possibly ensure intercontinental quantum communication

  2. Generalizing human error rates: A taxonomic approach

    International Nuclear Information System (INIS)

    Buffardi, L.; Fleishman, E.; Allen, J.

    1989-01-01

    It is well established that human error plays a major role in malfunctioning of complex, technological systems and in accidents associated with their operation. Estimates of the rate of human error in the nuclear industry range from 20-65% of all system failures. In response to this, the Nuclear Regulatory Commission has developed a variety of techniques for estimating human error probabilities for nuclear power plant personnel. Most of these techniques require the specification of the range of human error probabilities for various tasks. Unfortunately, very little objective performance data on error probabilities exist for nuclear environments. Thus, when human reliability estimates are required, for example in computer simulation modeling of system reliability, only subjective estimates (usually based on experts' best guesses) can be provided. The objective of the current research is to provide guidelines for the selection of human error probabilities based on actual performance data taken in other complex environments and applying them to nuclear settings. A key feature of this research is the application of a comprehensive taxonomic approach to nuclear and non-nuclear tasks to evaluate their similarities and differences, thus providing a basis for generalizing human error estimates across tasks. In recent years significant developments have occurred in classifying and describing tasks. Initial goals of the current research are to: (1) identify alternative taxonomic schemes that can be applied to tasks, and (2) describe nuclear tasks in terms of these schemes. Three standardized taxonomic schemes (Ability Requirements Approach, Generalized Information-Processing Approach, Task Characteristics Approach) are identified, modified, and evaluated for their suitability in comparing nuclear and non-nuclear power plant tasks. An agenda for future research and its relevance to nuclear power plant safety is also discussed

  3. Errors in causal inference: an organizational schema for systematic error and random error.

    Science.gov (United States)

    Suzuki, Etsuji; Tsuda, Toshihide; Mitsuhashi, Toshiharu; Mansournia, Mohammad Ali; Yamamoto, Eiji

    2016-11-01

    To provide an organizational schema for systematic error and random error in estimating causal measures, aimed at clarifying the concept of errors from the perspective of causal inference. We propose to divide systematic error into structural error and analytic error. With regard to random error, our schema shows its four major sources: nondeterministic counterfactuals, sampling variability, a mechanism that generates exposure events and measurement variability. Structural error is defined from the perspective of counterfactual reasoning and divided into nonexchangeability bias (which comprises confounding bias and selection bias) and measurement bias. Directed acyclic graphs are useful to illustrate this kind of error. Nonexchangeability bias implies a lack of "exchangeability" between the selected exposed and unexposed groups. A lack of exchangeability is not a primary concern of measurement bias, justifying its separation from confounding bias and selection bias. Many forms of analytic errors result from the small-sample properties of the estimator used and vanish asymptotically. Analytic error also results from wrong (misspecified) statistical models and inappropriate statistical methods. Our organizational schema is helpful for understanding the relationship between systematic error and random error from a previously less investigated aspect, enabling us to better understand the relationship between accuracy, validity, and precision. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Dual Numbers Approach in Multiaxis Machines Error Modeling

    Directory of Open Access Journals (Sweden)

    Jaroslav Hrdina

    2014-01-01

    Full Text Available Multiaxis machine error modeling is set in the context of modern differential geometry and linear algebra. We apply special classes of matrices over dual numbers and propose a generalization of this concept by means of general Weil algebras. We show that the classification of the geometric errors follows directly from the algebraic properties of the matrices over dual numbers, and thus that the calculus over the dual numbers is the proper tool for the methodology of multiaxis machine error modeling.
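
    A minimal sketch of the algebra the paper builds on (dual numbers only, not the authors' Weil-algebra generalization): a dual number a + b*eps with eps^2 = 0 carries a first-order error in its eps slot, since f(a + b*eps) = f(a) + f'(a)*b*eps.

    ```python
    class Dual:
        """Dual number a + b*eps with eps**2 = 0; b carries a first-order error."""
        def __init__(self, a, b=0.0):
            self.a, self.b = a, b

        def __add__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            return Dual(self.a + other.a, self.b + other.b)

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # (a1 + b1*eps)(a2 + b2*eps) = a1*a2 + (a1*b2 + b1*a2)*eps
            return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

        __radd__ = __add__
        __rmul__ = __mul__

        def __repr__(self):
            return f"{self.a} + {self.b}*eps"

    # Nominal axis position 2.0 with a small geometric error 0.01 in the eps slot.
    x = Dual(2.0, 0.01)

    # A toy kinematic expression: position = 3*x*x + 5*x + 1.
    p = 3 * x * x + 5 * x + 1
    print(p)   # real part: value at x = 2; eps part: approx (6*2 + 5) * 0.01 = 0.17
    ```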

  5. Prevalence and type of errors in dual-energy X-ray absorptiometry

    Energy Technology Data Exchange (ETDEWEB)

    Messina, Carmelo; Bandirali, Michele; D'Alonzo, Nathascja Katia [Universita degli Studi di Milano, Scuola di Specializzazione in Radiodiagnostica, Milano (Italy); Sconfienza, Luca Maria; Sardanelli, Francesco [IRCCS Policlinico San Donato, Unita di Radiologia, San Donato Milanese (Italy); Universita degli Studi di Milano, Dipartimento di Scienze Biomediche per la Salute, San Donato Milanese (Italy); Di Leo, Giovanni; Papini, Giacomo Davide Edoardo [IRCCS Policlinico San Donato, Unita di Radiologia, San Donato Milanese (Italy); Ulivieri, Fabio Massimo [IRCCS Fondazione Ca' Granda Ospedale Maggiore Policlinico, Mineralometria Ossea Computerizzata e Ambulatorio Malattie Metabolismo Minerale e Osseo, Servizio di Medicina Nucleare, Milano (Italy)

    2015-05-01

    Pitfalls in dual-energy X-ray absorptiometry (DXA) are common. Our aim was to assess the rate and type of errors in DXA examinations/reports, evaluating a consecutive series of DXA images of patients examined elsewhere and later presenting to our institution for a follow-up DXA. After ethics committee approval, a radiologist retrospectively reviewed all DXA images provided by patients presenting at our institution for a new DXA. Errors were categorized as patient positioning (PP), data analysis (DA), artefacts and/or demographics. Of 2,476 patients, 1,198 had no previous DXA, while 793 had a previous DXA performed in our institution. The remaining 485 (20%) patients entered the study (38 men and 447 women; mean age ± standard deviation, 68 ± 9 years). Previous DXA examinations were performed at a total of 37 centres. Of 485 reports, 451 (93%) had at least one error, out of a total of 558 errors distributed as follows: 441 (79%) were DA, 66 (12%) PP, 39 (7%) artefacts and 12 (2%) demographics. About 20% of patients did not undergo DXA at the same institution as previously. More than 90% of DXA examinations presented at least one error, mainly of DA. International Society for Clinical Densitometry guidelines are very poorly adopted. (orig.)

  6. Controlling errors in unidosis carts

    Directory of Open Access Journals (Sweden)

    Inmaculada Díaz Fernández

    2010-01-01

    Full Text Available Objective: To identify errors in the unidosis system carts. Method: For two months, the Pharmacy Service monitored medication either returned or missing from the unidosis carts, both in the pharmacy and in the wards. Results: Unrevised unidosis carts showed a 0.9% medication error rate (264 errors) versus 0.6% (154 errors) in unidosis carts that had previously been revised. Of the errors in unrevised carts, 70.83% were mainly caused when setting up the unidosis carts; the rest were due to a lack of stock or unavailability (21.6%), errors in the transcription of medical orders (6.81%), or boxes that had not been emptied previously (0.76%). The errors found in the units correspond to errors in the transcription of the treatment (3.46%), non-receipt of the unidosis copy (23.14%), the patient not taking the medication (14.36%) or being discharged without medication (12.77%), medication not provided by nurses (14.09%), medication withdrawn from the stocks of the unit (14.62%), and errors of the pharmacy service (17.56%). Conclusions: It is concluded that unidosis carts need to be revised and that a computerized prescription system is needed to avoid errors in transcription. Discussion: A high percentage of medication errors is caused by human error. If unidosis carts are reviewed before being sent to the hospitalization units, the error rate diminishes to 0.3%.

  7. Prioritising interventions against medication errors

    DEFF Research Database (Denmark)

    Lisby, Marianne; Pape-Larsen, Louise; Sørensen, Ann Lykkegaard

    Abstract. Authors: Lisby M, Larsen LP, Soerensen AL, Nielsen LP, Mainz J. Title: Prioritising interventions against medication errors – the importance of a definition. Objective: To develop and test a restricted definition of medication errors across health care settings in Denmark. Methods: Medication errors constitute a major quality and safety problem in modern healthcare. However, far from all are clinically important. The prevalence of medication errors ranges from 2-75%, indicating a global problem in defining and measuring these [1]. New cut-off levels focusing on the clinical impact of medication errors are therefore needed. Development of definition: A definition of medication errors including an index of error types for each stage in the medication process was developed from existing terminology and through a modified Delphi-process in 2008. The Delphi panel consisted of 25 interdisciplinary...

  8. Social aspects of clinical errors.

    Science.gov (United States)

    Richman, Joel; Mason, Tom; Mason-Whitehead, Elizabeth; McIntosh, Annette; Mercer, Dave

    2009-08-01

    Clinical errors, whether committed by doctors, nurses or other professions allied to healthcare, remain a sensitive issue requiring open debate and policy formulation in order to reduce them. The literature suggests that the issues underpinning errors made by healthcare professionals involve concerns about patient safety, professional disclosure, apology, litigation, compensation, processes of recording and policy development to enhance quality service. Anecdotally, we are aware of narratives of minor errors, which may well have been covered up and remain officially undisclosed whilst the major errors resulting in damage and death to patients alarm both professionals and public with resultant litigation and compensation. This paper attempts to unravel some of these issues by highlighting the historical nature of clinical errors and drawing parallels to contemporary times by outlining the 'compensation culture'. We then provide an overview of what constitutes a clinical error and review the healthcare professional strategies for managing such errors.

  9. Relationship between Recent Flight Experience and Pilot Error General Aviation Accidents

    Science.gov (United States)

    Nilsson, Sarah J.

    Aviation insurance agents and fixed-base operation (FBO) owners use recent flight experience, as implied by the 90-day rule, to measure pilot proficiency in physical airplane skills, and to assess the likelihood of a pilot error accident. The generally accepted premise is that more experience in a recent timeframe predicts less of a propensity for an accident, all other factors excluded. Some of these aviation industry stakeholders measure pilot proficiency solely by using time flown within the past 90, 60, or even 30 days, not accounting for extensive research showing aeronautical decision-making and situational awareness training decrease the likelihood of a pilot error accident. In an effort to reduce the pilot error accident rate, the Federal Aviation Administration (FAA) has seen the need to shift pilot training emphasis from proficiency in physical airplane skills to aeronautical decision-making and situational awareness skills. However, current pilot training standards still focus more on the former than on the latter. The relationship between pilot error accidents and recent flight experience implied by the FAA's 90-day rule has not been rigorously assessed using empirical data. The intent of this research was to relate recent flight experience, in terms of time flown in the past 90 days, to pilot error accidents. A quantitative ex post facto approach, focusing on private pilots of single-engine general aviation (GA) fixed-wing aircraft, was used to analyze National Transportation Safety Board (NTSB) accident investigation archival data. The data were analyzed using t-tests and binary logistic regression. T-tests between the mean number of hours of recent flight experience of tricycle gear pilots involved in pilot error accidents (TPE) and non-pilot error accidents (TNPE), t(202) = -.200, p = .842, and conventional gear pilots involved in pilot error accidents (CPE) and non-pilot error accidents (CNPE), t(111) = -.271, p = .787, indicate there is no

  10. Using total quality management approach to improve patient safety by preventing medication error incidences*.

    Science.gov (United States)

    Yousef, Nadin; Yousef, Farah

    2017-09-04

    Whereas one of the predominant causes of medication errors is drug administration error, a previous study related to our investigations and reviews estimated that the incidence of medication errors constituted 6.7 out of 100 administered medication doses. Therefore, we aimed, by using the six sigma approach, to propose a way to reduce these errors to less than 1 out of 100 administered medication doses by improving healthcare professional education and producing clearer handwritten prescriptions. The study was held in a General Government Hospital. First, we systematically studied the current medication use process. Second, we used the six sigma approach by utilizing the five-step DMAIC process (Define, Measure, Analyze, Improve, Control) to find out the real reasons behind such errors. This was to figure out a useful solution to avoid medication error incidences in daily healthcare professional practice. A data sheet was used as the data tool and Pareto diagrams were used as the analyzing tool. In our investigation, we identified the real cause behind administered medication errors. The Pareto diagrams used in our study showed that the fault percentage in the administration phase was 24.8%, while the percentage of errors related to the prescribing phase was 42.8%, 1.7-fold higher. This means that the mistakes in the prescribing phase, especially poor handwritten prescriptions, whose percentage in this phase was 17.6%, are responsible for the consequent mistakes in this treatment process later on. Therefore, we proposed in this study an effective low-cost strategy based on the behavior of healthcare workers, in the form of Guideline Recommendations to be followed by the physicians. This method can act as a precaution to decrease errors in the prescribing phase, which may lead to decreasing the administered medication error incidences to less than 1%. This improvement in behavior can be efficient in improving handwritten prescriptions and decreasing the consequent errors related to administered

  11. Forecast Combination under Heavy-Tailed Errors

    Directory of Open Access Journals (Sweden)

    Gang Cheng

    2015-11-01

    Full Text Available Forecast combination has been proven to be a very important technique for obtaining accurate predictions in various applications in economics, finance, marketing and many other areas. In many applications, forecast errors exhibit heavy-tailed behaviors for various reasons. Unfortunately, to our knowledge, little has been done to obtain reliable forecast combinations for such situations. The familiar forecast combination methods, such as the simple average, least squares regression, or those based on the variance-covariance of the forecasts, may perform very poorly because outliers tend to occur and make the weights of these methods unstable, leading to non-robust forecasts. To address this problem, in this paper we propose two nonparametric forecast combination methods. One is specially proposed for situations in which the forecast errors are strongly believed to have heavy tails that can be modeled by a scaled Student's t-distribution; the other is designed for relatively more general situations when there is a lack of strong or consistent evidence on the tail behavior of the forecast errors due to a shortage of data and/or an evolving data-generating process. Adaptive risk bounds for both methods are developed. They show that the resulting combined forecasts yield near-optimal mean forecast errors relative to the candidate forecasts. Simulations and a real example demonstrate their superior performance in that they indeed tend to have significantly smaller prediction errors than the previous combination methods in the presence of forecast outliers.
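
    A toy simulation (not one of the authors' proposed methods) of why heavy-tailed forecast errors matter: when individual forecast errors follow a Student's t-distribution with two degrees of freedom, the median combination is, in most runs, far less affected by outlying forecasts than the simple average.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)

    n_periods, n_forecasters = 500, 9
    truth = np.cumsum(rng.normal(0, 1, n_periods))            # target series

    # Each forecaster's error is heavy-tailed: scaled Student's t with 2 dof.
    errors = 0.8 * rng.standard_t(df=2, size=(n_periods, n_forecasters))
    forecasts = truth[:, None] + errors

    mean_combo = forecasts.mean(axis=1)
    median_combo = np.median(forecasts, axis=1)

    def rmse(pred):
        return np.sqrt(np.mean((pred - truth) ** 2))

    print(f"RMSE, simple average combination: {rmse(mean_combo):.3f}")
    print(f"RMSE, median combination:         {rmse(median_combo):.3f}")
    ```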

  12. The Distance Standard Deviation

    OpenAIRE

    Edelmann, Dominic; Richards, Donald; Vogel, Daniel

    2017-01-01

    The distance standard deviation, which arises in distance correlation analysis of multivariate data, is studied as a measure of spread. New representations for the distance standard deviation are obtained in terms of Gini's mean difference and in terms of the moments of spacings of order statistics. Inequalities for the distance variance are derived, proving that the distance standard deviation is bounded above by the classical standard deviation and by Gini's mean difference. Further, it is ...
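
    For readers unfamiliar with the quantity, the sketch below computes the sample distance standard deviation of a univariate sample from the double-centred pairwise distance matrix (V-statistic form), alongside the classical standard deviation and Gini's mean difference for comparison.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(10.0, 2.0, size=200)
    n = len(x)

    d = np.abs(x[:, None] - x[None, :])                        # pairwise distances
    A = d - d.mean(axis=0) - d.mean(axis=1, keepdims=True) + d.mean()

    distance_sd = np.sqrt((A ** 2).mean())     # sqrt of (1/n^2) * sum of A_ij^2
    classical_sd = x.std()                     # population-style (1/n) standard deviation
    gini_md = d.sum() / (n * (n - 1))          # mean |x_i - x_j| over distinct pairs

    print(f"distance SD:          {distance_sd:.4f}")
    print(f"classical SD:         {classical_sd:.4f}")
    print(f"Gini mean difference: {gini_md:.4f}")
    ```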

  13. The error performance analysis over cyclic redundancy check codes

    Science.gov (United States)

    Yoon, Hee B.

    1991-06-01

    Burst errors are generated in digital communication networks by various unpredictable conditions, which occur at high error rates for short durations and can impact services. To completely describe a burst error one has to know the bit pattern. This is impossible in practice on working systems. Therefore, under the memoryless binary symmetric channel (MBSC) assumptions, performance evaluation or estimation schemes for digital signal 1 (DS1) transmission systems carrying live traffic are an interesting and important problem. This study presents some analytical methods leading to efficient algorithms for detecting burst errors using cyclic redundancy check (CRC) codes. The definition of a burst error is introduced using three different models. Among the three burst error models, the mathematical model is used in this study. A probability density function of the burst-error length b is proposed. The performance of CRC-n codes is evaluated and analyzed using this density function through the use of a computer simulation model within the CRC block burst error. The simulation results show that the mean block burst error tends to approach the pattern of the burst errors generated by random bit errors.
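
    For reference, the snippet below is a generic bit-by-bit CRC-16 computation (CCITT generator polynomial 0x1021), used here only to illustrate how a short burst error changes the check value; it is not the DS1 performance model analysed in the paper.

    ```python
    def crc16_ccitt(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
        """Bitwise CRC-16 with the CCITT generator polynomial."""
        reg = init
        for byte in data:
            reg ^= byte << 8
            for _ in range(8):
                if reg & 0x8000:
                    reg = ((reg << 1) ^ poly) & 0xFFFF
                else:
                    reg = (reg << 1) & 0xFFFF
        return reg

    frame = bytearray(b"example DS1 payload bytes")
    good_crc = crc16_ccitt(bytes(frame))

    # Inject a short burst error: flip three adjacent bits in one byte.
    frame[5] ^= 0b00011100
    print(f"original CRC:    {good_crc:#06x}")
    print(f"CRC after burst: {crc16_ccitt(bytes(frame)):#06x}")
    print("error detected:", crc16_ccitt(bytes(frame)) != good_crc)
    ```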

  14. Error Correcting Codes -34 ...

    Indian Academy of Sciences (India)

    information and coding theory. A large scale relay computer had failed to deliver the expected results due to a hardware fault. Hamming, one of the active proponents of computer usage, was determined to find an efficient means by which computers could detect and correct their own faults. A mathematician by train-.
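
    The classic construction Hamming arrived at can be sketched in a few lines: the (7,4) code places parity bits at positions 1, 2 and 4, and recomputing the three parity checks on a received word reads out the position of any single flipped bit directly (a textbook illustration, not taken from the article).

    ```python
    def hamming74_encode(d):
        """Encode 4 data bits as a 7-bit codeword (positions 1..7),
        with parity bits at positions 1, 2 and 4."""
        d1, d2, d3, d4 = d
        p1 = d1 ^ d2 ^ d4        # covers positions 1, 3, 5, 7
        p2 = d1 ^ d3 ^ d4        # covers positions 2, 3, 6, 7
        p4 = d2 ^ d3 ^ d4        # covers positions 4, 5, 6, 7
        return [p1, p2, d1, p4, d2, d3, d4]

    def hamming74_correct(c):
        """Return the corrected codeword; the syndrome is the error position."""
        c = list(c)
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]
        pos = s1 + 2 * s2 + 4 * s4           # 0 means "no error detected"
        if pos:
            c[pos - 1] ^= 1
        return c

    data = [1, 0, 1, 1]
    codeword = hamming74_encode(data)

    # Every single-bit error is corrected back to the original codeword.
    for i in range(7):
        corrupted = list(codeword)
        corrupted[i] ^= 1
        assert hamming74_correct(corrupted) == codeword
    print("all 7 single-bit errors corrected:", codeword)
    ```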

  15. SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?

    International Nuclear Information System (INIS)

    Wegener, S; Herzog, B; Sauer, O

    2016-01-01

    Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms with the advantage that it can be used throughout the whole treatment. Drawback is that it does not indicate the source of the error.

  16. SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?

    Energy Technology Data Exchange (ETDEWEB)

    Wegener, S; Herzog, B; Sauer, O [University of Wuerzburg, Wuerzburg (Germany)

    2016-06-15

    Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left undetected errors. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms with the advantage that it can be used throughout the whole treatment. Drawback is that it does not indicate the source of the error.

  17. Spent fuel bundle counter sequence error manual - BRUCE NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore, the card file contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously.

  18. Spent fuel bundle counter sequence error manual - DARLINGTON NGS

    International Nuclear Information System (INIS)

    Nicholson, L.E.

    1992-01-01

    The Spent Fuel Bundle Counter (SFBC) is used to count the number and type of spent fuel transfers that occur into or out of controlled areas at CANDU reactor sites. However, if the transfers are executed in a non-standard manner or the SFBC is malfunctioning, the transfers are recorded as sequence errors. Each sequence error message typically contains adequate information to determine the cause of the message. This manual provides a guide to interpreting the various sequence error messages that can occur and suggests the probable cause or causes of the sequence errors. Each likely sequence error is presented on a 'card' in Appendix A. Note that it would be impractical to generate a sequence error card file with entries for all possible combinations of faults. Therefore, the card file contains sequences with only one fault at a time. Some exceptions have been included, however, where experience has indicated that several faults can occur simultaneously.

  19. Errors in abdominal computed tomography

    International Nuclear Information System (INIS)

    Stephens, S.; Marting, I.; Dixon, A.K.

    1989-01-01

    Sixty-nine patients are presented in whom a substantial error was made on the initial abdominal computed tomography report. Certain features of these errors have been analysed. In 30 (43.5%) a lesion was simply not recognised (error of observation); in 39 (56.5%) the wrong conclusions were drawn about the nature of normal or abnormal structures (error of interpretation). The 39 errors of interpretation were more complex; in 7 patients an abnormal structure was noted but interpreted as normal, whereas in 4 a normal structure was thought to represent a lesion. Other interpretive errors included those where the wrong cause for a lesion had been ascribed (24 patients), and those where the abnormality was substantially under-reported (4 patients). Various features of these errors are presented and discussed. Errors were made just as often in relation to small and large lesions. Consultants made as many errors as senior registrar radiologists. It is likely that dual reporting is the best method of avoiding such errors and, indeed, this is widely practised in our unit. (Author). 9 refs.; 5 figs.; 1 tab

  20. Relative Error Evaluation to Typical Open Global dem Datasets in Shanxi Plateau of China

    Science.gov (United States)

    Zhao, S.; Zhang, S.; Cheng, W.

    2018-04-01

    Produced from radar data or stereo remote sensing image pairs, global DEM datasets are one of the most important types of DEM data. Relative error reflects the quality of the surface created from DEM data, and therefore matters for geomorphological and hydrological applications of DEM data. Taking the Shanxi Plateau of China as the study area, this research evaluated the relative error of typical open global DEM datasets including Shuttle Radar Terrain Mission (SRTM) data with 1 arc second resolution (SRTM1), SRTM data with 3 arc second resolution (SRTM3), ASTER global DEM data in the second version (GDEM-v2) and ALOS World 3D-30m (AW3D) data. Through processing and selection, more than 300,000 ICESat/GLA14 points were used as the GCP data, and the vertical error was computed and compared among the four typical global DEM datasets. Then, more than 2,600,000 ICESat/GLA14 point pairs were acquired using a distance threshold between 100 m and 500 m. Meanwhile, the horizontal distance between every point pair was computed, so the relative error was obtained as slope values computed from the vertical error difference and the horizontal distance of the point pairs. Finally, a false slope ratio (FSR) index was computed by analyzing the difference between DEM and ICESat/GLA14 values for every point pair. Both the relative error and the FSR index were compared by category for the four DEM datasets under different slope classes. Research results show: Overall, AW3D has the lowest relative error values in mean error, mean absolute error, root mean square error and standard deviation error; it is followed by the SRTM1 data, whose values are a little higher than those of AW3D; the SRTM3 and GDEM-v2 data have the highest relative error values, and the values for the two datasets are similar. Considering different slope conditions, all four DEM datasets perform better in flat areas and worse in sloping regions; AW3D has the best performance in all the slope classes, a little better than SRTM1; with slope increasing
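
    The point-pair construction described above can be sketched in a few lines: the vertical error is the DEM elevation minus the ICESat/GLA14 elevation at each point, and the relative error is the slope of the difference of vertical errors over the horizontal distance of the pair. The sketch below uses small invented arrays and assumed function names; it is not the study's processing chain and omits the FSR index.

    ```python
    import numpy as np

    def relative_error_slopes(dem_pairs, icesat_pairs, distances_m):
        """Relative error for point pairs, in the spirit described above.

        dem_pairs, icesat_pairs: arrays of shape (n_pairs, 2) with DEM and
        ICESat/GLA14 elevations for the two points of each pair (metres).
        distances_m: horizontal distance between the two points of each pair.
        Returns the 'error slope' = difference of vertical errors / distance.
        """
        vertical_error = dem_pairs - icesat_pairs            # per-point vertical error
        d_error = vertical_error[:, 0] - vertical_error[:, 1]
        return d_error / distances_m

    # Illustrative pairs only (not the study's data)
    dem = np.array([[812.3, 815.1], [640.2, 642.9]])
    ice = np.array([[811.7, 815.8], [639.8, 642.1]])
    dist = np.array([180.0, 240.0])
    slopes = relative_error_slopes(dem, ice, dist)
    print("mean abs error slope:", np.mean(np.abs(slopes)))
    print("RMSE of error slopes:", np.sqrt(np.mean(slopes**2)))
    ```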

  1. 75 FR 15371 - Time Error Correction Reliability Standard

    Science.gov (United States)

    2010-03-29

    ... Electric Reliability Council of Texas (ERCOT) manages the flow of electric power to 22 million Texas customers. As the independent system operator for the region, ERCOT schedules power on an electric grid that... Coordinating Council (WECC) is responsible for coordinating and promoting bulk electric system reliability in...

  2. On the Estimation of Standard Errors in Cognitive Diagnosis Models

    Science.gov (United States)

    Philipp, Michel; Strobl, Carolin; de la Torre, Jimmy; Zeileis, Achim

    2018-01-01

    Cognitive diagnosis models (CDMs) are an increasingly popular method to assess mastery or nonmastery of a set of fine-grained abilities in educational or psychological assessments. Several inference techniques are available to quantify the uncertainty of model parameter estimates, to compare different versions of CDMs, or to check model…

  3. The Mean as Balance Point

    Science.gov (United States)

    O'Dell, Robin S.

    2012-01-01

    There are two primary interpretations of the mean: as a leveler of data (Uccellini 1996, pp. 113-114) and as a balance point of a data set. Typically, both interpretations of the mean are ignored in elementary school and middle school curricula. They are replaced with a rote emphasis on calculation using the standard algorithm. When students are…
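
    A small numeric illustration of the balance-point interpretation mentioned above: the signed deviations of a data set about its mean always sum to zero, so the mean is the point at which the data "balance". The numbers below are arbitrary.

    ```python
    # Minimal illustration of the "balance point" interpretation of the mean:
    # the signed deviations of the data about the mean always sum to zero.
    data = [3, 5, 6, 10, 16]
    mean = sum(data) / len(data)
    deviations = [x - mean for x in data]
    print("mean:", mean)                           # 8.0
    print("deviations:", deviations)               # [-5.0, -3.0, -2.0, 2.0, 8.0]
    print("sum of deviations:", sum(deviations))   # 0.0 -> the mean balances the data
    ```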

  4. Death Certification Errors and the Effect on Mortality Statistics.

    Science.gov (United States)

    McGivern, Lauri; Shulman, Leanne; Carney, Jan K; Shapiro, Steven; Bundock, Elizabeth

    Errors in cause and manner of death on death certificates are common and affect families, mortality statistics, and public health research. The primary objective of this study was to characterize errors in the cause and manner of death on death certificates completed by non-Medical Examiners. A secondary objective was to determine the effects of errors on national mortality statistics. We retrospectively compared 601 death certificates completed between July 1, 2015, and January 31, 2016, from the Vermont Electronic Death Registration System with clinical summaries from medical records. Medical Examiners, blinded to original certificates, reviewed summaries, generated mock certificates, and compared mock certificates with original certificates. They then graded errors using a scale from 1 to 4 (higher numbers indicated increased impact on interpretation of the cause) to determine the prevalence of minor and major errors. They also compared International Classification of Diseases, 10th Revision (ICD-10) codes on original certificates with those on mock certificates. Of 601 original death certificates, 319 (53%) had errors; 305 (51%) had major errors; and 59 (10%) had minor errors. We found no significant differences by certifier type (physician vs nonphysician). We did find significant differences in major errors in place of death ( P statistics. Surveillance and certifier education must expand beyond local and state efforts. Simplifying and standardizing underlying literal text for cause of death may improve accuracy, decrease coding errors, and improve national mortality statistics.

  5. Reducing number entry errors: solving a widespread, serious problem.

    Science.gov (United States)

    Thimbleby, Harold; Cairns, Paul

    2010-10-06

    Number entry is ubiquitous: it is required in many fields including science, healthcare, education, government, mathematics and finance. People entering numbers can be expected to make errors, but shockingly few systems make any effort to detect, block or otherwise manage errors. Worse, errors may be ignored but processed in arbitrary ways, with unintended results. A standard class of error (defined in the paper) is an 'out by 10 error', which is easily made by miskeying a decimal point or a zero. In safety-critical domains, such as drug delivery, out by 10 errors generally have adverse consequences. Here, we expose the extent of the problem of numeric errors in a very wide range of systems. An analysis of better error management is presented: under reasonable assumptions, we show that the probability of out by 10 errors can be halved by better user interface design. We provide a demonstration user interface to show that the approach is practical. To kill an error is as good a service as, and sometimes even better than, the establishing of a new truth or fact. (Charles Darwin 1879 [2008], p. 229).
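
    One possible interface-level guard against the 'out by 10' class of error described above is to compare an entered value with an expected or previously recorded value and flag entries that sit close to ten times or one tenth of it. The sketch below is an illustrative assumption about how such a check could look, not the interface evaluated in the paper; the function name, tolerance, and example doses are invented.

    ```python
    def out_by_ten_check(entered, expected, tol=0.25):
        """Flag entries that look like 'out by 10' slips relative to an expected value.

        Returns 'ok', 'too high x10?' or 'too low x10?' when the entered value is
        within `tol` (relative) of ten times or one tenth of the expected value.
        """
        if expected <= 0 or entered <= 0:
            return "cannot check"
        ratio = entered / expected
        if abs(ratio - 10) / 10 < tol:
            return "too high x10?"
        if abs(ratio - 0.1) / 0.1 < tol:
            return "too low x10?"
        return "ok"

    # Hypothetical drug-dose entries checked against the prescribed dose
    print(out_by_ten_check(entered=50.0, expected=5.0))   # 'too high x10?' (misplaced decimal)
    print(out_by_ten_check(entered=0.5, expected=5.0))    # 'too low x10?'
    print(out_by_ten_check(entered=5.0, expected=5.0))    # 'ok'
    ```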

  6. Multicenter Assessment of Gram Stain Error Rates.

    Science.gov (United States)

    Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert

    2016-06-01

    Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.

  7. Advanced error diagnostics of the CMAQ and Chimere modelling systems within the AQMEII3 model evaluation framework

    Directory of Open Access Journals (Sweden)

    E. Solazzo

    2017-09-01

    Full Text Available The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone by two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed to build up a diagnostic methodology for model evaluation, is pursued here and novel diagnostic methods are proposed. In addition to evaluating the base case simulation in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescale and the fields that the two models are most sensitive to. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ∼ 1.5 days account for 70–85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10–20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network

  8. Advanced error diagnostics of the CMAQ and Chimere modelling systems within the AQMEII3 model evaluation framework

    Science.gov (United States)

    Solazzo, Efisio; Hogrefe, Christian; Colette, Augustin; Garcia-Vivanco, Marta; Galmarini, Stefano

    2017-09-01

    The work here complements the overview analysis of the modelling systems participating in the third phase of the Air Quality Model Evaluation International Initiative (AQMEII3) by focusing on the performance for hourly surface ozone by two modelling systems, Chimere for Europe and CMAQ for North America. The evaluation strategy outlined in the course of the three phases of the AQMEII activity, aimed to build up a diagnostic methodology for model evaluation, is pursued here and novel diagnostic methods are proposed. In addition to evaluating the base case simulation in which all model components are configured in their standard mode, the analysis also makes use of sensitivity simulations in which the models have been applied by altering and/or zeroing lateral boundary conditions, emissions of anthropogenic precursors, and ozone dry deposition. To help understand the causes of model deficiencies, the error components (bias, variance, and covariance) of the base case and of the sensitivity runs are analysed in conjunction with timescale considerations and error modelling using the available error fields of temperature, wind speed, and NOx concentration. The results reveal the effectiveness and diagnostic power of the methods devised (which remains the main scope of this study), allowing the detection of the timescale and the fields that the two models are most sensitive to. The representation of planetary boundary layer (PBL) dynamics is pivotal to both models. In particular, (i) the fluctuations slower than ˜ 1.5 days account for 70-85 % of the mean square error of the full (undecomposed) ozone time series; (ii) a recursive, systematic error with daily periodicity is detected, responsible for 10-20 % of the quadratic total error; (iii) errors in representing the timing of the daily transition between stability regimes in the PBL are responsible for a covariance error as large as 9 ppb (as much as the standard deviation of the network-average ozone observations in
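
    The bias, variance, and covariance error components referred to above can be illustrated with a common decomposition of the mean square error of a modelled time series against observations, MSE = (mean_m - mean_o)^2 + (std_m - std_o)^2 + 2*std_m*std_o*(1 - r). The sketch below applies it to synthetic ozone-like series; it is a generic illustration under that assumed decomposition, not the diagnostic code used in the study.

    ```python
    import numpy as np

    def mse_components(model, obs):
        """Decompose MSE into bias^2, a variance term and a covariance term.

        MSE = (mean_m - mean_o)^2 + (std_m - std_o)^2 + 2*std_m*std_o*(1 - r),
        a standard decomposition for paired time series.
        """
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        bias2 = (model.mean() - obs.mean()) ** 2
        var_term = (model.std() - obs.std()) ** 2
        r = np.corrcoef(model, obs)[0, 1]
        cov_term = 2.0 * model.std() * obs.std() * (1.0 - r)
        return bias2, var_term, cov_term

    # Synthetic hourly ozone-like series, for illustration only
    rng = np.random.default_rng(0)
    t = np.arange(24 * 30)
    obs = 30 + 15 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 4, t.size)
    mod = 33 + 12 * np.sin(2 * np.pi * (t - 2) / 24) + rng.normal(0, 4, t.size)

    b2, v, c = mse_components(mod, obs)
    print("components:", b2, v, c)
    print("MSE check:", b2 + v + c, "vs", np.mean((mod - obs) ** 2))
    ```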

  9. Dopamine reward prediction error coding.

    Science.gov (United States)

    Schultz, Wolfram

    2016-03-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware.

  10. Volatility Mean Reversion and the Market Price of Volatility Risk

    NARCIS (Netherlands)

    Boswijk, H.P.

    2001-01-01

    This paper analyzes sources of derivative pricing errors in a stochastic volatility model estimated on stock return data. It is shown that such pricing errors may reflect the existence of a market price of volatility risk, but also may be caused by estimation errors due to a slow mean reversion in

  11. Architecture design for soft errors

    CERN Document Server

    Mukherjee, Shubu

    2008-01-01

    This book provides a comprehensive description of the architectural techniques to tackle the soft error problem. It covers the new methodologies for quantitative analysis of soft errors as well as novel, cost-effective architectural techniques to mitigate them. To provide readers with a better grasp of the broader problem definition and solution space, this book also delves into the physics of soft errors and reviews current circuit and software mitigation techniques.

  12. Dopamine reward prediction error coding

    OpenAIRE

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less...

  13. Systematic literature review of hospital medication administration errors in children

    Directory of Open Access Journals (Sweden)

    Ameer A

    2015-11-01

    Full Text Available Ahmed Ameer,1 Soraya Dhillon,1 Mark J Peters,2 Maisoon Ghaleb1; 1Department of Pharmacy, School of Life and Medical Sciences, University of Hertfordshire, Hatfield, UK; 2Paediatric Intensive Care Unit, Great Ormond Street Hospital, London, UK. Objective: Medication administration is the last step in the medication process. If errors are detected at this step, it can act as a safety net to prevent unintended harm to patients. However, medication administration errors (MAEs) during this process have been documented and are thought to be preventable. In pediatric medicine, doses are usually administered based on the child's weight or body surface area. This in turn increases the risk of drug miscalculations and therefore MAEs. The aim of this review is to report MAEs occurring in pediatric inpatients. Methods: Twelve bibliographic databases were searched for studies published between January 2000 and February 2015 using “medication administration errors”, “hospital”, and “children” related terminologies. Handsearching of relevant publications was also carried out. A second reviewer screened articles for eligibility and quality in accordance with the inclusion/exclusion criteria. Key findings: A total of 44 studies were systematically reviewed. MAEs were generally defined as a deviation of the dose given from that prescribed; this included omitted doses and administration at the wrong time. Hospital MAEs in children accounted for a mean of 50% of all reported medication error reports (n=12,588). MAEs were also identified in a mean of 29% of doses observed (n=8,894). The most prevalent types of MAEs related to preparation, infusion rate, dose, and time. This review has identified five types of interventions to reduce hospital MAEs in children: barcode medicine administration, electronic prescribing, education, use of smart pumps, and standard concentration. Conclusion: This review has identified a wide variation in the prevalence of hospital MAEs in children. This is attributed to

  14. Calibration of Flick standards

    International Nuclear Information System (INIS)

    Thalmann, Ruedi; Spiller, Jürg; Küng, Alain; Jusko, Otto

    2012-01-01

    Flick standards or magnification standards are widely used for an efficient and functional calibration of the sensitivity of form measuring instruments. The results of a recent measurement comparison were found to be partially unsatisfactory and revealed problems related to the calibration of these standards. In this paper the influence factors for the calibration of Flick standards using roundness measurement instruments are discussed in detail, in particular the bandwidth of the measurement chain, residual form errors of the device under test, profile distortions due to the diameter of the probing element and questions related to the definition of the measurand. The different contributions are estimated using simulations and are experimentally verified. Alternative methods to calibrate Flick standards are also investigated. Finally, the practical limitations of Flick standard calibration are shown and the usability of Flick standards both to calibrate the sensitivity of roundness instruments and to check the filter function of such instruments is analysed. (paper)

  15. Error and discrepancy in radiology: inevitable or avoidable?

    Science.gov (United States)

    Brady, Adrian P

    2017-02-01

    Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and system-based. Possible strategies to minimise error are considered, along with the means of dealing with perceived underperformance when it is identified. The inevitability of imperfection is explained, while the importance of striving to minimise such imperfection is emphasised. • Discrepancies between radiology reports and subsequent patient outcomes are not inevitably errors. • Radiologist reporting performance cannot be perfect, and some errors are inevitable. • Error or discrepancy in radiology reporting does not equate to negligence. • Radiologist errors occur for many reasons, both human- and system-derived. • Strategies exist to minimise error causes and to learn from errors made.

  16. Shared dosimetry error in epidemiological dose-response analyses

    International Nuclear Information System (INIS)

    Stram, Daniel O.; Preston, Dale L.; Sokolnikov, Mikhail; Napier, Bruce; Kopecky, Kenneth J.; Boice, John; Beck, Harold; Till, John; Bouville, Andre; Zeeb, Hajo

    2015-01-01

    Radiation dose reconstruction systems for large-scale epidemiological studies are sophisticated both in providing estimates of dose and in representing dosimetry uncertainty. For example, a computer program was used by the Hanford Thyroid Disease Study to provide 100 realizations of possible dose to study participants. The variation in realizations reflected the range of possible dose for each cohort member consistent with the data on dose determinants in the cohort. Another example is the Mayak Worker Dosimetry System 2013, which estimates both external and internal exposures and provides multiple realizations of 'possible' dose history to workers given dose determinants. This paper takes up the problem of dealing with complex dosimetry systems that provide multiple realizations of dose in an epidemiologic analysis. In this paper we derive expected scores and the information matrix for a model used widely in radiation epidemiology, namely the linear excess relative risk (ERR) model that allows for a linear dose response (risk in relation to radiation) and distinguishes between modifiers of background rates and of the excess risk due to exposure. We show that treating the mean dose for each individual (calculated by averaging over the realizations) as if it were the true dose (ignoring both shared and unshared dosimetry errors) gives asymptotically unbiased estimates (i.e. the score has expectation zero) and valid tests of the null hypothesis that the ERR slope β is zero. Although the score is unbiased, the information matrix (and hence the standard errors of the estimate of β) is biased for β≠0 when ignoring errors in dose estimates, and we show how to adjust the information matrix to remove this bias, using the multiple realizations of dose. The use of these methods in the context of several studies, including the Mayak Worker Cohort and the U.S. Atomic Veterans Study, is discussed.
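
    A minimal sketch of the quantity discussed above, the per-individual mean dose obtained by averaging over the multiple realizations supplied by a dosimetry system, with the spread across realizations retained as a handle on dosimetric uncertainty. The array shapes and dose values below are invented for illustration and do not come from any of the studies named.

    ```python
    import numpy as np

    # Hypothetical dose realizations: rows = cohort members, columns = 100 Monte
    # Carlo realizations of "possible" dose produced by a dosimetry system.
    rng = np.random.default_rng(1)
    n_people, n_realizations = 5, 100
    true_dose = rng.lognormal(mean=0.0, sigma=0.5, size=n_people)
    realizations = true_dose[:, None] * rng.lognormal(0.0, 0.3, (n_people, n_realizations))

    # Mean dose per person (the dose covariate in the risk model) and the spread
    # across realizations (one summary of the dosimetric uncertainty).
    mean_dose = realizations.mean(axis=1)
    dose_sd = realizations.std(axis=1, ddof=1)
    for i, (m, s) in enumerate(zip(mean_dose, dose_sd)):
        print(f"person {i}: mean dose = {m:.3f}, SD over realizations = {s:.3f}")
    ```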

  17. CTER—Rapid estimation of CTF parameters with error assessment

    Energy Technology Data Exchange (ETDEWEB)

    Penczek, Pawel A., E-mail: Pawel.A.Penczek@uth.tmc.edu [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Fang, Jia [Department of Biochemistry and Molecular Biology, The University of Texas Medical School, 6431 Fannin MSB 6.220, Houston, TX 77054 (United States); Li, Xueming; Cheng, Yifan [The Keck Advanced Microscopy Laboratory, Department of Biochemistry and Biophysics, University of California, San Francisco, CA 94158 (United States); Loerke, Justus; Spahn, Christian M.T. [Institut für Medizinische Physik und Biophysik, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin (Germany)

    2014-05-01

    In structural electron microscopy, the accurate estimation of the Contrast Transfer Function (CTF) parameters, particularly defocus and astigmatism, is of utmost importance both for initial evaluation of micrograph quality and for subsequent structure determination. Due to increases in the rate of data collection on modern microscopes equipped with new generation cameras, it is also important that the CTF estimation can be done rapidly and with minimal user intervention. Finally, in order to minimize the need for manual screening of the micrographs by a user, it is necessary to provide an assessment of the errors of the fitted parameter values. In this work we introduce CTER, a CTF parameter estimation method distinguished by its computational efficiency. The efficiency of the method makes it suitable for high-throughput EM data collection, and enables the use of a statistical resampling technique, bootstrap, that yields standard deviations of estimated defocus and astigmatism amplitude and angle, thus facilitating the automation of the process of screening out inferior micrograph data. Furthermore, CTER also outputs the spatial frequency limit imposed by reciprocal space aliasing of the discrete form of the CTF and the finite window size. We demonstrate the efficiency and accuracy of CTER using a data set collected on a 300 kV Tecnai Polara (FEI) using the K2 Summit DED camera in super-resolution counting mode. Using CTER we obtained a structure of the 80S ribosome whose large subunit had a resolution of 4.03 Å without, and 3.85 Å with, inclusion of astigmatism parameters. - Highlights: • We describe methodology for estimation of CTF parameters with error assessment. • Error estimates provide means for automated elimination of inferior micrographs. • High computational efficiency allows real-time monitoring of EM data quality. • Accurate CTF estimation yields structure of the 80S human ribosome at 3.85 Å.
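
    The resampling idea mentioned above, attaching a standard deviation to a fitted quantity by re-estimating it on bootstrap samples, can be sketched generically as below. This is not the CTER implementation; the function name and the example defocus values are assumptions made purely for illustration.

    ```python
    import numpy as np

    def bootstrap_sd(fit_fn, data, n_boot=500, seed=0):
        """Bootstrap standard deviation of a scalar parameter estimate.

        fit_fn: callable mapping a data array to a parameter estimate
        data:   1-D array of measurements (e.g. per-segment defocus estimates)
        """
        rng = np.random.default_rng(seed)
        data = np.asarray(data, float)
        estimates = np.empty(n_boot)
        for b in range(n_boot):
            sample = rng.choice(data, size=data.size, replace=True)
            estimates[b] = fit_fn(sample)
        return estimates.std(ddof=1)

    # Illustrative use: spread of a mean-defocus estimate (values are made up)
    defocus_estimates_um = np.array([1.92, 1.88, 1.95, 1.90, 1.87, 1.93, 1.91, 1.89])
    sd = bootstrap_sd(np.mean, defocus_estimates_um)
    print(f"defocus = {defocus_estimates_um.mean():.3f} um +/- {sd:.3f} um (bootstrap SD)")
    ```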

  18. Identifying Error in AUV Communication

    National Research Council Canada - National Science Library

    Coleman, Joseph; Merrill, Kaylani; O'Rourke, Michael; Rajala, Andrew G; Edwards, Dean B

    2006-01-01

    Mine Countermeasures (MCM) involving Autonomous Underwater Vehicles (AUVs) are especially susceptible to error, given the constraints on underwater acoustic communication and the inconstancy of the underwater communication channel...

  19. Human Errors in Decision Making

    OpenAIRE

    Mohamad, Shahriari; Aliandrina, Dessy; Feng, Yan

    2005-01-01

    The aim of this paper was to identify human errors in the decision-making process. The study focused on the research question of which human errors can potentially cause decision failure during the evaluation of alternatives in the decision-making process. Two case studies were selected from the literature and analyzed to find the human errors that contribute to decision failure. The analysis of human errors was then linked with mental models in the evaluation-of-alternatives step. The results o...

  20. Finding beam focus errors automatically

    International Nuclear Information System (INIS)

    Lee, M.J.; Clearwater, S.H.; Kleban, S.D.

    1987-01-01

    An automated method for finding beam focus errors using an optimization program called COMFORT-PLUS is described, along with the steps involved in finding the correction factors. The method has been used to find the beam focus errors for two damping rings at the SLAC Linear Collider. The program is to be used as an off-line program to analyze actual measured data for any SLC system. A limitation on the application of this procedure is found to be that it depends on the magnitude of the machine errors. Another is that the program is not totally automated, since the user must decide a priori where to look for errors.

  1. Heuristic errors in clinical reasoning.

    Science.gov (United States)

    Rylander, Melanie; Guerrasio, Jeannette

    2016-08-01

    Errors in clinical reasoning contribute to patient morbidity and mortality. The purpose of this study was to determine the types of heuristic errors made by third-year medical students and first-year residents. This study surveyed approximately 150 clinical educators, inquiring about the types of heuristic errors they observed in third-year medical students and first-year residents. Anchoring and premature closure were the two most common errors observed amongst third-year medical students and first-year residents. There was no difference in the types of errors observed in the two groups. Errors in clinical reasoning contribute to patient morbidity and mortality. Clinical educators perceived that both third-year medical students and first-year residents committed similar heuristic errors, implying that additional medical knowledge and clinical experience do not affect the types of heuristic errors made. Further work is needed to help identify methods that can be used to reduce heuristic errors early in a clinician's education. © 2015 John Wiley & Sons Ltd.

  2. A Hybrid Unequal Error Protection / Unequal Error Resilience ...

    African Journals Online (AJOL)

    The quality layers are then assigned an Unequal Error Resilience to synchronization loss by unequally allocating the number of headers available for synchronization to them. Following that Unequal Error Protection against channel noise is provided to the layers by the use of Rate Compatible Punctured Convolutional ...

  3. EPIC: an Error Propagation/Inquiry Code

    International Nuclear Information System (INIS)

    Baker, A.L.

    1985-01-01

    The use of a computer program EPIC (Error Propagation/Inquiry Code) will be discussed. EPIC calculates the variance of a materials balance closed about a materials balance area (MBA) in a processing plant operated under steady-state conditions. It was designed for use in evaluating the significance of inventory differences in the Department of Energy (DOE) nuclear plants. EPIC rapidly estimates the variance of a materials balance using average plant operating data. The intent is to learn as much as possible about problem areas in a process with simple straightforward calculations assuming a process is running in a steady-state mode. EPIC is designed to be used by plant personnel or others with little computer background. However, the user should be knowledgeable about measurement errors in the system being evaluated and have a limited knowledge of how error terms are combined in error propagation analyses. EPIC contains six variance equations; the appropriate equation is used to calculate the variance at each measurement point. After all of these variances are calculated, the total variance for the MBA is calculated using a simple algebraic sum of variances. The EPIC code runs on any computer that accepts a standard form of the BASIC language. 2 refs., 1 fig., 6 tabs
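
    A minimal sketch of the final aggregation step described above, assuming a variance has already been assigned to each measurement point: the materials-balance variance is the algebraic sum of the per-point variances (independent errors assumed). The per-point model used here, variance = (relative SD x amount)^2, is an illustrative simplification, not one of EPIC's six variance equations, and the amounts are invented.

    ```python
    # Sum of per-point variances for a materials balance (independent errors assumed).
    measurement_points = {
        "feed":      {"amount_kg": 120.0, "rel_sd": 0.010},
        "product":   {"amount_kg": 118.5, "rel_sd": 0.008},
        "waste":     {"amount_kg": 1.2,   "rel_sd": 0.050},
        "inventory": {"amount_kg": 0.4,   "rel_sd": 0.100},
    }

    variances = {name: (p["rel_sd"] * p["amount_kg"]) ** 2
                 for name, p in measurement_points.items()}
    total_variance = sum(variances.values())
    print("per-point variances (kg^2):", variances)
    print("materials-balance SD (kg):", total_variance ** 0.5)
    ```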

  4. Error studies for SNS Linac. Part 1: Transverse errors

    International Nuclear Information System (INIS)

    Crandall, K.R.

    1998-01-01

    The SNS linac consists of a radio-frequency quadrupole (RFQ), a drift-tube linac (DTL), a coupled-cavity drift-tube linac (CCDTL) and a coupled-cavity linac (CCL). The RFQ and DTL are operated at 402.5 MHz; the CCDTL and CCL are operated at 805 MHz. Between the RFQ and DTL is a medium-energy beam-transport system (MEBT). This error study is concerned with the DTL, CCDTL and CCL, and each will be analyzed separately. In fact, the CCL is divided into two sections, and each of these will be analyzed separately. The types of errors considered here are those that affect the transverse characteristics of the beam. The errors that cause the beam center to be displaced from the linac axis are quad displacements and quad tilts. The errors that cause mismatches are quad gradient errors and quad rotations (roll).

  5. Subspace K-means clustering.

    Science.gov (United States)

    Timmerman, Marieke E; Ceulemans, Eva; De Roover, Kim; Van Leeuwen, Karla

    2013-12-01

    To achieve an insightful clustering of multivariate data, we propose subspace K-means. Its central idea is to model the centroids and cluster residuals in reduced spaces, which allows for dealing with a wide range of cluster types and yields rich interpretations of the clusters. We review the existing related clustering methods, including deterministic, stochastic, and unsupervised learning approaches. To evaluate subspace K-means, we performed a comparative simulation study, in which we manipulated the overlap of subspaces, the between-cluster variance, and the error variance. The study shows that the subspace K-means algorithm is sensitive to local minima but that the problem can be reasonably dealt with by using partitions of various cluster procedures as a starting point for the algorithm. Subspace K-means performs very well in recovering the true clustering across all conditions considered and appears to be superior to its competitor methods: K-means, reduced K-means, factorial K-means, mixtures of factor analyzers (MFA), and MCLUST. The best competitor method, MFA, showed a performance similar to that of subspace K-means in easy conditions but deteriorated in more difficult ones. Using data from a study on parental behavior, we show that subspace K-means analysis provides a rich insight into the cluster characteristics, in terms of both the relative positions of the clusters (via the centroids) and the shape of the clusters (via the within-cluster residuals).

  6. MEAN OF MEDIAN ABSOLUTE DERIVATION TECHNIQUE MEAN ...

    African Journals Online (AJOL)

    eobe

    development of a mean of median absolute derivation technique based on ... noise mean to estimate the speckle noise variance. Noise mean property ... Foraging Optimization,” International Journal of Advanced ...

  7. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    Science.gov (United States)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  8. The computation of equating errors in international surveys in education.

    Science.gov (United States)

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.

  9. Error begat error: design error analysis and prevention in social infrastructure projects.

    Science.gov (United States)

    Love, Peter E D; Lopez, Robert; Edwards, David J; Goh, Yang M

    2012-09-01

    Design errors contribute significantly to cost and schedule growth in social infrastructure projects and to engineering failures, which can result in accidents and loss of life. Despite considerable research that has addressed error causation in construction projects, design errors still remain prevalent. This paper identifies the underlying conditions that contribute to design errors in social infrastructure projects (e.g. hospitals, education, law and order type buildings). A systemic model of error causation is put forward and subsequently used to develop a learning framework for design error prevention. The research suggests that a multitude of strategies should be adopted in concert to prevent design errors from occurring and so ensure that safety and project performance are improved. Copyright © 2011. Published by Elsevier Ltd.

  10. Monte Carlo simulation of expert judgments on human errors in chemical analysis--a case study of ICP-MS.

    Science.gov (United States)

    Kuselman, Ilya; Pennecchi, Francesca; Epstein, Malka; Fajgelj, Ales; Ellison, Stephen L R

    2014-12-01

    Monte Carlo simulation of expert judgments on human errors in a chemical analysis was used for determination of distributions of the error quantification scores (scores of likelihood and severity, and scores of effectiveness of a laboratory quality system in prevention of the errors). The simulation was based on modeling of an expert behavior: confident, reasonably doubting and irresolute expert judgments were taken into account by means of different probability mass functions (pmfs). As a case study, 36 scenarios of human errors which may occur in elemental analysis of geological samples by ICP-MS were examined. Characteristics of the score distributions for three pmfs of an expert behavior were compared. Variability of the scores, as standard deviation of the simulated score values from the distribution mean, was used for assessment of the score robustness. A range of the score values, calculated directly from elicited data and simulated by a Monte Carlo method for different pmfs, was also discussed from the robustness point of view. It was shown that robustness of the scores, obtained in the case study, can be assessed as satisfactory for the quality risk management and improvement of a laboratory quality system against human errors. Copyright © 2014 Elsevier B.V. All rights reserved.
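
    The simulation idea described above can be sketched by drawing scores from an assumed probability mass function (pmf) for each expert behaviour and summarizing the resulting distribution; the standard deviation of the simulated scores then serves as the robustness indicator. The pmfs below are invented for illustration and are not the values elicited in the case study.

    ```python
    import numpy as np

    # Illustrative pmfs over an ordinal 1-5 score for three expert behaviours
    # (probabilities are invented, not those elicited in the study).
    scores = np.array([1, 2, 3, 4, 5])
    pmfs = {
        "confident":           [0.02, 0.05, 0.08, 0.70, 0.15],
        "reasonably_doubting": [0.05, 0.10, 0.20, 0.45, 0.20],
        "irresolute":          [0.15, 0.20, 0.30, 0.20, 0.15],
    }

    rng = np.random.default_rng(42)
    for behaviour, p in pmfs.items():
        draws = rng.choice(scores, size=10_000, p=p)
        # SD of the simulated scores is used here as the robustness indicator
        print(f"{behaviour:>20}: mean = {draws.mean():.2f}, SD = {draws.std(ddof=1):.2f}")
    ```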

  11. Performance Evaluation of Five Turbidity Sensors in Three Primary Standards

    Science.gov (United States)

    Snazelle, Teri T.

    2015-10-28

    Open-File Report 2015-1172 is temporarily unavailable. Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards: formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated in turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value. The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard
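
    The percent-error definition quoted above, the true (not absolute) difference between the measured turbidity and the standard value divided by the standard value, is sketched below with hypothetical readings; the numbers are not from the report.

    ```python
    def percent_error(measured, standard_value):
        """Signed percent error: (measured - standard) / standard * 100."""
        return (measured - standard_value) / standard_value * 100.0

    # Hypothetical sensor readings against nominal standard values (NTU)
    readings = {40: 41.1, 100: 103.5, 400: 389.0, 800: 822.0, 1000: 1012.0}
    errors = [percent_error(m, s) for s, m in readings.items()]
    for (s, m), e in zip(readings.items(), errors):
        print(f"{s:>5} NTU standard: measured {m:>7.1f} NTU -> {e:+.2f} %")
    print("average percent error:", sum(errors) / len(errors))
    ```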

  12. Dual Processing and Diagnostic Errors

    Science.gov (United States)

    Norman, Geoff

    2009-01-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual Process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical,…

  13. Barriers to medical error reporting

    Directory of Open Access Journals (Sweden)

    Jalal Poorolajal

    2015-01-01

    Full Text Available Background: This study was conducted to explore the prevalence of medical error underreporting and associated barriers. Methods: This cross-sectional study was performed from September to December 2012. Five hospitals, affiliated with Hamadan University of Medical Sciences, in Hamedan, Iran, were investigated. A self-administered questionnaire was used for data collection. Participants consisted of physicians, nurses, midwives, residents, interns, and staff of the radiology and laboratory departments. Results: Overall, 50.26% of subjects had committed but not reported medical errors. The main reasons mentioned for underreporting were lack of an effective medical error reporting system (60.0%), lack of a proper reporting form (51.8%), lack of peer support for a person who has committed an error (56.0%), and lack of personal attention to the importance of medical errors (62.9%). The rate of committing medical errors was higher in men (71.4%), at the age of 40-50 years (67.6%), among less-experienced personnel (58.7%), at the educational level of MSc (87.5%), and among staff of the radiology department (88.9%). Conclusions: This study outlined the main barriers to reporting medical errors and associated factors that may be helpful for healthcare organizations in improving medical error reporting as an essential component for patient safety enhancement.

  14. A theory of human error

    Science.gov (United States)

    Mcruer, D. T.; Clement, W. F.; Allen, R. W.

    1981-01-01

    Human errors tend to be treated in terms of clinical and anecdotal descriptions, from which remedial measures are difficult to derive. Correction of the sources of human error requires an attempt to reconstruct underlying and contributing causes of error from the circumstantial causes cited in official investigative reports. A comprehensive analytical theory of the cause-effect relationships governing propagation of human error is indispensable to a reconstruction of the underlying and contributing causes. A validated analytical theory of the input-output behavior of human operators involving manual control, communication, supervisory, and monitoring tasks which are relevant to aviation, maritime, automotive, and process control operations is highlighted. This theory of behavior, both appropriate and inappropriate, provides an insightful basis for investigating, classifying, and quantifying the needed cause-effect relationships governing propagation of human error.

  15. Correcting AUC for Measurement Error.

    Science.gov (United States)

    Rosner, Bernard; Tworoger, Shelley; Qiu, Weiliang

    2015-12-01

    Diagnostic biomarkers are used frequently in epidemiologic and clinical work. The ability of a diagnostic biomarker to discriminate between subjects who develop disease (cases) and subjects who do not (controls) is often measured by the area under the receiver operating characteristic curve (AUC). The diagnostic biomarkers are usually measured with error. Ignoring measurement error can cause biased estimation of AUC, which results in misleading interpretation of the efficacy of a diagnostic biomarker. Several methods have been proposed to correct AUC for measurement error, most of which required the normality assumption for the distributions of diagnostic biomarkers. In this article, we propose a new method to correct AUC for measurement error and derive approximate confidence limits for the corrected AUC. The proposed method does not require the normality assumption. Both real data analyses and simulation studies show good performance of the proposed measurement error correction method.

  16. Cognitive aspect of diagnostic errors.

    Science.gov (United States)

    Phua, Dong Haur; Tan, Nigel C K

    2013-01-01

    Diagnostic errors can result in tangible harm to patients. Despite our advances in medicine, the mental processes required to make a diagnosis exhibit shortcomings, causing diagnostic errors. Cognitive factors are found to be an important cause of diagnostic errors. With new understanding from psychology and social sciences, clinical medicine is now beginning to appreciate that our clinical reasoning can take the form of analytical reasoning or heuristics. Different factors like cognitive biases and affective influences can also impel unwary clinicians to make diagnostic errors. Various strategies have been proposed to reduce the effect of cognitive biases and affective influences when clinicians make diagnoses; however, evidence for the efficacy of these methods is still sparse. This paper aims to introduce the reader to the cognitive aspect of diagnostic errors, in the hope that clinicians can use this knowledge to improve diagnostic accuracy and patient outcomes.

  17. Minimum Probability of Error-Based Equalization Algorithms for Fading Channels

    Directory of Open Access Journals (Sweden)

    Janos Levendovszky

    2007-06-01

    Full Text Available Novel channel equalizer algorithms are introduced for wireless communication systems to combat channel distortions resulting from multipath propagation. The novel algorithms are based on newly derived bounds on the probability of error (PE) and guarantee better performance than the traditional zero forcing (ZF) or minimum mean square error (MMSE) algorithms. The new equalization methods require channel state information which is obtained by a fast adaptive channel identification algorithm. As a result, the combined convergence time needed for channel identification and PE minimization still remains smaller than the convergence time of traditional adaptive algorithms, yielding real-time equalization. The performance of the new algorithms is tested by extensive simulations on standard mobile channels.
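
    For context, the two traditional baselines named above can be written down directly for a known channel: a zero-forcing (ZF) and a linear minimum mean square error (MMSE) equalizer built from the channel convolution matrix. The sketch below is a generic textbook construction under assumed channel taps, noise variance, and equalizer span; it is not the PE-minimizing algorithms proposed in the paper.

    ```python
    import numpy as np

    def conv_matrix(h, n_sym):
        """(n_sym + L - 1) x n_sym convolution matrix for channel taps h."""
        L = len(h)
        H = np.zeros((n_sym + L - 1, n_sym))
        for k in range(n_sym):
            H[k:k + L, k] = h
        return H

    h = np.array([1.0, 0.5, 0.2])   # assumed multipath channel impulse response
    sigma2 = 0.05                   # assumed noise variance (unit-power symbols)
    n_sym = 8                       # block length / equalizer span
    H = conv_matrix(h, n_sym)
    d = n_sym // 2                  # decision delay
    e_d = np.zeros(n_sym); e_d[d] = 1.0

    # Classical linear equalizers used as baselines in the abstract:
    w_mmse = np.linalg.solve(H @ H.T + sigma2 * np.eye(H.shape[0]), H @ e_d)
    w_zf = np.linalg.pinv(H.T) @ e_d   # zero-forcing: forces w^T H = e_d^T

    # Combined response; ideally close to a unit spike at the decision delay
    print("MMSE combined response:", np.round(w_mmse @ H, 3))
    print("ZF   combined response:", np.round(w_zf @ H, 3))
    ```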

  18. Generalized additive models and Lucilia sericata growth: assessing confidence intervals and error rates in forensic entomology.

    Science.gov (United States)

    Tarone, Aaron M; Foran, David R

    2008-07-01

    Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.

  19. Mismeasurement and the resonance of strong confounders: correlated errors.

    Science.gov (United States)

    Marshall, J R; Hastrup, J L; Ross, J S

    1999-07-01

    Confounding in epidemiology, and the limits of standard methods of control for an imperfectly measured confounder, have been understood for some time. However, most treatments of this problem are based on the assumption that errors of measurement in confounding and confounded variables are independent. This paper considers the situation in which a strong risk factor (confounder) and an inconsequential but suspected risk factor (confounded) are each measured with errors that are correlated; the situation appears especially likely to occur in the field of nutritional epidemiology. Error correlation appears to add little to measurement error as a source of bias in estimating the impact of a strong risk factor: it can add to, diminish, or reverse the bias induced by measurement error in estimating the impact of the inconsequential risk factor. Correlation of measurement errors can add to the difficulty involved in evaluating structures in which confounding and measurement error are present. In its presence, observed correlations among risk factors can be greater than, less than, or even opposite to the true correlations. Interpretation of multivariate epidemiologic structures in which confounding is likely requires evaluation of measurement error structures, including correlations among measurement errors.
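
    A small simulation in the spirit of the scenario described above: a strong risk factor Z drives the outcome, an inconsequential factor X is correlated with Z, and both are observed with measurement errors that are themselves correlated. All coefficients, error variances, and the error correlation are invented for illustration; the point is only to show how the adjusted coefficient for X can be distorted.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000

    # True structure: Z is a strong risk factor, X has no effect but is correlated with Z.
    z = rng.normal(size=n)
    x = 0.6 * z + rng.normal(scale=0.8, size=n)
    y = 1.0 * z + rng.normal(size=n)            # outcome depends on Z only

    # Measurement errors on X and Z that are themselves correlated
    cov_err = np.array([[0.5, 0.25], [0.25, 0.5]])
    errs = rng.multivariate_normal([0.0, 0.0], cov_err, size=n)
    x_obs, z_obs = x + errs[:, 0], z + errs[:, 1]

    def adjusted_coefs(y, x1, x2):
        """OLS coefficients of y on [1, x1, x2]; returns (b_x1, b_x2)."""
        X = np.column_stack([np.ones_like(x1), x1, x2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta[1], beta[2]

    print("true exposures:       b_X = %.3f, b_Z = %.3f" % adjusted_coefs(y, x, z))
    print("error-prone, correlated: b_X = %.3f, b_Z = %.3f" % adjusted_coefs(y, x_obs, z_obs))
    # The inconsequential X picks up a spurious adjusted effect once Z is mismeasured
    # and the errors are correlated; sign and size depend on the error structure.
    ```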

  20. Preanalytical Blood Sampling Errors in Clinical Settings

    International Nuclear Information System (INIS)

    Zehra, N.; Malik, A. H.; Arshad, Q.; Sarwar, S.; Aslam, S.

    2016-01-01

    Background: Blood sampling is one of the common procedures done in every ward for disease diagnosis and prognosis. Hundreds of samples are collected daily from different wards, but a lack of appropriate knowledge of blood sampling among paramedical staff and accidental errors make the samples inappropriate for testing. Thus the need to avoid these errors for better results still remains. We carried out this research with the aim of determining the common errors during blood sampling, finding the factors responsible, and proposing ways to reduce these errors. Methods: A cross-sectional descriptive study was carried out at the Military and Combined Military Hospital Rawalpindi during February and March 2014. A Venous Blood Sampling questionnaire (VBSQ) was filled by the staff on a voluntary basis in front of the researchers. The staff was briefed on the purpose of the survey before filling the questionnaire. The sample size was 228. Results were analysed using SPSS-21. Results: When asked in the questionnaire, around 61.6 percent of the paramedical staff stated that they cleaned the vein by moving the alcohol swab from inward to outwards, while 20.8 percent of the staff reported that they felt the vein after disinfection. Contrary to WHO guidelines, 89.6 percent stated that they had a habit of placing blood in the test tube by holding it in the other hand, whereas this should actually be done after inserting it into the stand. Although 86 percent thought that they had ample knowledge regarding the blood sampling process, they did not practice it properly. Conclusion: Preanalytical blood sampling errors are common in our setup. Although 86 percent of participants thought that they had adequate knowledge regarding blood sampling, most of them were not adhering to standard protocols. There is a need for continued education and refresher courses. (author)

  1. Adaptive behaviors of experts in following standard protocol in trauma management: implications for developing flexible guidelines.

    Science.gov (United States)

    Vankipuram, Mithra; Ghaemmaghami, Vafa; Patel, Vimla L

    2012-01-01

    Critical care environments are complex and dynamic. To adapt to such environments, clinicians may be required to alter their workflows, resulting in deviations from standard procedures. In this work, deviations from standards in trauma critical care are studied. Thirty trauma cases were observed in a Level 1 trauma center. Activities tracked were compared to the Advanced Trauma Life Support standard to determine (i) whether deviations had occurred, (ii) the type of deviations and (iii) whether deviations were initiated by individuals or collaboratively by the team. Results show that expert clinicians deviated to innovate, while deviations by novices resulted mostly in error. Experts' well-developed knowledge allows for flexibility and adaptiveness in dealing with standards, resulting in innovative deviations while minimizing errors. Providing an informatics solution in such a setting would mean that standard protocols would have to be flexible enough to "learn" from new knowledge, yet provide strong support for trainees.

  2. Learning and coping strategies versus standard education in cardiac rehabilitation

    DEFF Research Database (Denmark)

    Tayyari Dehbarez, Nasrin; Lynggaard, Vibeke; May, Ole

    2015-01-01

    Background Learning and coping education strategies (LC) was implemented to enhance patient attendance in the cardiac rehabilitation programme. This study assessed the cost-utility of LC compared to standard education (standard) as part of a rehabilitation programme for patients with ischemic heart...... disease and heart failure. Methods The study was conducted alongside a randomised controlled trial with 825 patients who were allocated to LC or standard rehabilitation and followed for 5 months. The LC approach was identical to the standard approach in terms of physical training and education...... to estimate the net benefit of the LC and to illustrate cost effectiveness acceptability curves. The statistical analysis was based on means and bootstrapped standard errors. Results An additional cost of DKK 6,043 (95 % CI −5,697; 17,783) and a QALY gain of 0.005 (95 % CI −0.001; 0.012) was estimated for LC...
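
    As a rough illustration of the analysis described (arm means with bootstrapped standard errors feeding a net-benefit calculation), the sketch below uses entirely hypothetical cost and QALY data; the arm sizes, distributions and willingness-to-pay threshold are assumptions, not values from the trial.

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical per-patient costs (DKK) and QALYs for the two arms.
        cost_lc, cost_std = rng.gamma(2.0, 9000.0, 400), rng.gamma(2.0, 6000.0, 425)
        qaly_lc, qaly_std = rng.normal(0.32, 0.05, 400), rng.normal(0.315, 0.05, 425)

        def boot_diff_se(a, b, n_boot=5000):
            """Difference in means (a minus b) and its bootstrapped standard error."""
            diffs = np.empty(n_boot)
            for i in range(n_boot):
                diffs[i] = (rng.choice(a, size=a.size).mean()
                            - rng.choice(b, size=b.size).mean())
            return a.mean() - b.mean(), diffs.std(ddof=1)

        d_cost, se_cost = boot_diff_se(cost_lc, cost_std)
        d_qaly, se_qaly = boot_diff_se(qaly_lc, qaly_std)
        print(f"incremental cost: {d_cost:,.0f} DKK (bootstrap SE {se_cost:,.0f})")
        print(f"incremental QALY: {d_qaly:.4f} (bootstrap SE {se_qaly:.4f})")

        # Incremental net monetary benefit at a willingness-to-pay threshold.
        wtp = 300_000  # DKK per QALY gained (hypothetical)
        print(f"incremental net benefit: {wtp * d_qaly - d_cost:,.0f} DKK")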

  3. The Standard Model

    International Nuclear Information System (INIS)

    Sutton, Christine

    1994-01-01

    The initial evidence from Fermilab for the long awaited sixth ('top') quark puts another rivet in the already firm structure of today's Standard Model of physics. Analysis of the Fermilab CDF data gives a top mass of 174 GeV with an error of ten per cent either way. This falls within the mass band predicted by the sum total of world Standard Model data and underlines our understanding of physics in terms of six quarks and six leptons. In this specially commissioned overview, physics writer Christine Sutton explains the Standard Model

  4. Unit of measurement used and parent medication dosing errors.

    Science.gov (United States)

    Yin, H Shonna; Dreyer, Benard P; Ugboaja, Donna C; Sanchez, Dayana C; Paul, Ian M; Moreira, Hannah A; Rodriguez, Luis; Mendelsohn, Alan L

    2014-08-01

    Adopting the milliliter as the preferred unit of measurement has been suggested as a strategy to improve the clarity of medication instructions; teaspoon and tablespoon units may inadvertently endorse nonstandard kitchen spoon use. We examined the association between unit used and parent medication errors and whether nonstandard instruments mediate this relationship. Cross-sectional analysis of baseline data from a larger study of provider communication and medication errors. English- or Spanish-speaking parents (n = 287) whose children were prescribed liquid medications in 2 emergency departments were enrolled. Medication error defined as: error in knowledge of prescribed dose, error in observed dose measurement (compared to intended or prescribed dose); >20% deviation threshold for error. Multiple logistic regression performed adjusting for parent age, language, country, race/ethnicity, socioeconomic status, education, health literacy (Short Test of Functional Health Literacy in Adults); child age, chronic disease; site. Medication errors were common: 39.4% of parents made an error in measurement of the intended dose, 41.1% made an error in the prescribed dose. Furthermore, 16.7% used a nonstandard instrument. Compared with parents who used milliliter-only, parents who used teaspoon or tablespoon units had twice the odds of making an error with the intended (42.5% vs 27.6%, P = .02; adjusted odds ratio=2.3; 95% confidence interval, 1.2-4.4) and prescribed (45.1% vs 31.4%, P = .04; adjusted odds ratio=1.9; 95% confidence interval, 1.03-3.5) dose; associations greater for parents with low health literacy and non-English speakers. Nonstandard instrument use partially mediated teaspoon and tablespoon-associated measurement errors. Findings support a milliliter-only standard to reduce medication errors. Copyright © 2014 by the American Academy of Pediatrics.

  5. Prediction of monthly mean daily global solar radiation using ...

    Indian Academy of Sciences (India)

    a 4-layer MLFF network was developed and the average value of the mean absolute percentage error ... and sunshine hours to estimate the monthly mean ... work. The outputs of the layers are computed using equations (1) and (2).

  6. The effect of monetary punishment on error evaluation in a Go/No-go task.

    Science.gov (United States)

    Maruo, Yuya; Sommer, Werner; Masaki, Hiroaki

    2017-10-01

    Little is known about the effects of the motivational significance of errors in Go/No-go tasks. We investigated the impact of monetary punishment on the error-related negativity (ERN) and error positivity (Pe) for both overt errors and partial errors, that is, no-go trials without overt responses but with covert muscle activities. We compared high and low punishment conditions where errors were penalized with 50 or 5 yen, respectively, and a control condition without monetary consequences for errors. Because we hypothesized that the partial-error ERN might overlap with the no-go N2, we compared ERPs between correct rejections (i.e., successful no-go trials) and partial errors in no-go trials. We also expected that Pe amplitudes should increase with the severity of the penalty for errors. Mean error rates were significantly lower in the high punishment than in the control condition. Monetary punishment did not influence the overt-error ERN and partial-error ERN in no-go trials. The ERN in no-go trials did not differ between partial errors and overt errors; in addition, ERPs for correct rejections in no-go trials without partial errors were of the same size as in go trials. Therefore the overt-error ERN and the partial-error ERN may share similar error monitoring processes. Monetary punishment increased Pe amplitudes for overt errors, suggesting enhanced error evaluation processes. For partial errors an early Pe was observed, presumably representing inhibition processes. Interestingly, even partial errors elicited the Pe, suggesting that covert erroneous activities could be detected in Go/No-go tasks. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  7. Modeling error distributions of growth curve models through Bayesian methods.

    Science.gov (United States)

    Zhang, Zhiyong

    2016-06-01

    Growth curve models are widely used in the social and behavioral sciences. However, typical growth curve models often assume that the errors are normally distributed, although non-normal data may be even more common than normal data. In order to avoid possible statistical inference problems from blindly assuming normality, a general Bayesian framework is proposed to flexibly model normal and non-normal data through the explicit specification of the error distributions. A simulation study shows that when the distribution of the error is correctly specified, one can avoid the loss in the efficiency of standard error estimates. A real example on the analysis of mathematical ability growth data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 is used to show the application of the proposed methods. Instructions and code on how to conduct growth curve analysis with both normal and non-normal error distributions using the MCMC procedure of SAS are provided.
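
    The efficiency point can be illustrated outside SAS with a simple frequentist analogue: fit the same straight-line growth model to heavy-tailed data under a normal and a Student-t error specification and compare the approximate standard errors of the slope. This is only a sketch of the idea, not the authors' Bayesian MCMC implementation; the simulated data, the fixed degrees of freedom and the use of the BFGS inverse Hessian as a rough covariance are all assumptions.

        import numpy as np
        from scipy import optimize, stats

        rng = np.random.default_rng(2)
        t_pts = np.tile(np.arange(5), 100)  # 100 subjects observed on 5 occasions
        y = 2.0 + 0.5 * t_pts + stats.t.rvs(df=3, scale=1.0, size=t_pts.size, random_state=rng)

        def neg_loglik(params, dist):
            """Negative log-likelihood of a straight-line model with the given error law."""
            b0, b1, log_scale = params
            resid = y - (b0 + b1 * t_pts)
            scale = np.exp(log_scale)
            if dist == "normal":
                return -stats.norm.logpdf(resid, scale=scale).sum()
            return -stats.t.logpdf(resid, df=3, scale=scale).sum()

        for dist in ("normal", "t"):
            res = optimize.minimize(neg_loglik, x0=[0.0, 0.0, 0.0], args=(dist,), method="BFGS")
            # For BFGS the inverse-Hessian approximation gives a rough covariance matrix.
            se_slope = np.sqrt(res.hess_inv[1, 1])
            print(f"{dist:>6} errors: slope = {res.x[1]:.3f}, approx. SE = {se_slope:.3f}")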

  8. Operator error and emotions. Operator error and emotions - a major cause of human failure

    Energy Technology Data Exchange (ETDEWEB)

    Patterson, B.K. [Human Factors Practical Incorporated (Canada); Bradley, M. [Univ. of New Brunswick, Saint John, New Brunswick (Canada); Artiss, W.G. [Human Factors Practical (Canada)

    2000-07-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  9. Operator error and emotions. Operator error and emotions - a major cause of human failure

    International Nuclear Information System (INIS)

    Patterson, B.K.; Bradley, M.; Artiss, W.G.

    2000-01-01

    This paper proposes the idea that a large proportion of the incidents attributed to operator and maintenance error in a nuclear or industrial plant are actually founded in our human emotions. Basic psychological theory of emotions is briefly presented and then the authors present situations and instances that can cause emotions to swell and lead to operator and maintenance error. Since emotional information is not recorded in industrial incident reports, the challenge is extended to industry, to review incident source documents for cases of emotional involvement and to develop means to collect emotion related information in future root cause analysis investigations. Training must then be provided to operators and maintainers to enable them to know one's emotions, manage emotions, motivate one's self, recognize emotions in others and handle relationships. Effective training will reduce the instances of human error based in emotions and enable a cooperative, productive environment in which to work. (author)

  10. Telemetry Standards, RCC Standard 106-17. Chapter 8. Digital Data Bus Acquisition Formatting Standard

    Science.gov (United States)

    2017-07-01

    Incorrect word count/message and illegal mode codes are not considered bus errors. 8.6.2 Source Signal: the source of data is a signal conforming to ... Acronyms: FCS, frame check sequence; HDDR, high-density digital recording; MIL-STD, Military Standard; msb, most significant bit; PCM, pulse code modulation.

  11. Errors, error detection, error correction and hippocampal-region damage: data and theories.

    Science.gov (United States)

    MacKay, Donald G; Johnson, Laura W

    2013-11-01

    This review and perspective article outlines 15 observational constraints on theories of errors, error detection, and error correction, and their relation to hippocampal-region (HR) damage. The core observations come from 10 studies with H.M., an amnesic with cerebellar and HR damage but virtually no neocortical damage. Three studies examined the detection of errors planted in visual scenes (e.g., a bird flying in a fish bowl in a school classroom) and sentences (e.g., I helped themselves to the birthday cake). In all three experiments, H.M. detected reliably fewer errors than carefully matched memory-normal controls. Other studies examined the detection and correction of self-produced errors, with controls for comprehension of the instructions, impaired visual acuity, temporal factors, motoric slowing, forgetting, excessive memory load, lack of motivation, and deficits in visual scanning or attention. In these studies, H.M. corrected reliably fewer errors than memory-normal and cerebellar controls, and his uncorrected errors in speech, object naming, and reading aloud exhibited two consistent features: omission and anomaly. For example, in sentence production tasks, H.M. omitted one or more words in uncorrected encoding errors that rendered his sentences anomalous (incoherent, incomplete, or ungrammatical) reliably more often than controls. Besides explaining these core findings, the theoretical principles discussed here explain H.M.'s retrograde amnesia for once familiar episodic and semantic information; his anterograde amnesia for novel information; his deficits in visual cognition, sentence comprehension, sentence production, sentence reading, and object naming; and effects of aging on his ability to read isolated low frequency words aloud. These theoretical principles also explain a wide range of other data on error detection and correction and generate new predictions for future test. Copyright © 2013 Elsevier Ltd. All rights reserved.

  12. Mejora de la calidad de las medidas de ozono mediante un fotómetro UV de referencia Improvement of the quality of the ozone measurements by means of a standard reference photometer

    Directory of Open Access Journals (Sweden)

    R. Fernández Patier

    2001-11-01

    Full Text Available Directive 92/72/EC refers to the analysis method described in UNE 77-221:2000 and requires that ozone analyzers be calibrated against a UV reference photometer or a transfer standard. To develop a procedure that ensures the quality and traceability of ozone measurements in Spain, the Atmospheric Pollution Area decided to establish a NIST UV reference photometer as the ozone standard. Taking into account the procedures used by the EPA and NIST, a verification procedure was developed consisting of 6 comparisons of the transfer standard against the NIST UV reference photometer, carried out on different days and covering at least 5 different ozone concentrations. Each comparison always starts and ends with an ozone concentration of 0 ppb, and a linear regression is obtained from each comparison. Once the 6 comparisons have been completed, the calibration line is obtained and the uncertainty associated with the transfer standard is calculated. Seventeen transfer standards were verified, of which 11 were UV photometers, 2 were ozone generators and 4 were ozone generators in dilution benches. The results show that the uncertainties of the ozone generators are, in general, larger than those of the UV photometers, so the latter are recommended as transfer standards. Using transfer standards for the calibration of ozone analyzers guarantees both the quality and the traceability of the data generated.

  13. Human errors in NPP operations

    International Nuclear Information System (INIS)

    Sheng Jufang

    1993-01-01

    Based on the operational experiences of nuclear power plants (NPPs), the importance of studying human performance problems is described. Statistical analysis of the significance and frequency of various root causes and error modes from a large number of human-error-related events demonstrates that defects in operation/maintenance procedures, working place factors, communication and training practices are the primary root causes, while omission, transposition and quantitative mistakes are the most frequent error modes. Recommendations about domestic research on human performance problems in NPPs are suggested

  14. Linear network error correction coding

    CERN Document Server

    Guang, Xuan

    2014-01-01

    There are two main approaches in the theory of network error correction coding. In this SpringerBrief, the authors summarize some of the most important contributions following the classic approach, which represents messages by sequences, similar to algebraic coding, and also briefly discuss the main results following the other approach, which uses the theory of rank metric codes for network error correction, representing messages by subspaces. This book starts by establishing the basic linear network error correction (LNEC) model and then characterizes two equivalent descriptions. Distances an

  15. Computer-socket manufacturing error: How much before it is clinically apparent?

    Science.gov (United States)

    Sanders, Joan E.; Severance, Michael R.; Allyn, Kathryn J.

    2015-01-01

    The purpose of this research was to pursue quality standards for computer-manufacturing of prosthetic sockets for people with transtibial limb loss. Thirty-three duplicates of study participants’ normally used sockets were fabricated using central fabrication facilities. Socket-manufacturing errors were compared with clinical assessments of socket fit. Of the 33 sockets tested, 23 were deemed clinically to need modification. All 13 sockets with mean radial error (MRE) greater than 0.25 mm were clinically unacceptable, and 11 of those were deemed in need of sizing reduction. Of the remaining 20 sockets, 5 sockets with an interquartile range (IQR) greater than 0.40 mm were deemed globally or regionally oversized and in need of modification. Of the remaining 15 sockets, 5 sockets with closed contours of elevated surface normal angle error (SNAE) were deemed clinically to need shape modification at those closed contour locations. The remaining 10 sockets were deemed clinically acceptable and not in need of modification. MRE, IQR, and SNAE may serve as effective metrics to characterize the quality of computer-manufactured prosthetic sockets, helping facilitate the development of quality standards for the socket manufacturing industry. PMID:22773260

  16. Perancangan Fasilitas Kerja untuk Mereduksi Human Error

    Directory of Open Access Journals (Sweden)

    Harmein Nasution

    2012-01-01

    Full Text Available Work equipment and environments that are not designed ergonomically can cause physical exhaustion in workers. As a result of that physical exhaustion, many defects can occur in the production lines due to human error, and musculoskeletal complaints can also arise. To overcome those effects, we analyzed the workers' posture using methods based on the SNQ (Standard Nordic Questionnaire), PLIBEL, QEC (Quick Exposure Check) and biomechanics. Moreover, we applied those methods to design rolling machines and the egrek grip ergonomically, so that the defects on those production lines can be minimized.

  17. Effects of variable transformations on errors in FORM results

    International Nuclear Information System (INIS)

    Qin Quan; Lin Daojin; Mei Gang; Chen Hao

    2006-01-01

    On the basis of studies on the second partial derivatives of the variable transformation functions for nine different non-normal variables, the paper comprehensively discusses the effects of the transformation on FORM results and shows that the signs and magnitudes of the errors in FORM results depend on the distributions of the basic variables, on whether the basic variables represent resistances or actions, and on the design point locations in the standard normal space. The transformations of exponential or Gamma resistance variables can generate +24% errors in the FORM failure probability, and the transformation of Frechet action variables could generate -31% errors

  18. Standards, the users perspective

    International Nuclear Information System (INIS)

    Nason, W.D.

    1993-01-01

    The term standard has little meaning until put into the proper context. What is being standardized? What are the standard conditions to be applied? The list of questions that arise goes on and on. In this presentation, answers to these questions are considered in the interest of providing a basic understanding of what might be useful to the electrical power industry in the way of standards and what the limitations on application of them would be as well. 16 figs

  19. The error analysis of field size variation in pelvis region by using immobilization device

    International Nuclear Information System (INIS)

    Kim, Ki Hwan; Kang, No Hyun; Kim, Dong Wuk; Kim, Jun Sang; Jang, Ji Young; Kim, Jae Sung; Kim, Yong Eun; Cho, Moon June

    2000-01-01

    In radiotherapy, surrounding normal tissue may be irradiated because the field size becomes inconsistent when the patient's position changes during treatment. In this study we analyze the errors reduced by using an immobilization device, measured with an Electronic Portal Imaging Device (EPID). Twenty-one patients were treated in the pelvic region with 10 MV X-rays from Aug. 1998 to Aug. 1999 at Chungnam National University Hospital. All patients were treated in the supine position and were separated into two groups: 11 patients without a device and 10 patients with a styrofoam immobilization device. Errors were measured in the anterior direction for the x and y axes and in the lateral direction for the z and y axes, by matching the simulation film to the EPID image. For the group without an immobilization device, the mean deviations of the x and y axes were 0.19 mm and 0.48 mm, respectively; the standard deviations of the systematic deviation were 2.38 mm and 2.19 mm, and of the random deviation 1.92 mm and 1.29 mm, respectively. The mean deviations of the z and y axes were -3.61 mm and 2.07 mm, respectively; the standard deviations of the systematic deviation were 3.20 mm and 2.29 mm, and of the random deviation 2.73 mm and 1.62 mm, respectively. For the immobilization device group, the mean deviations of the x and y axes were 0.71 mm and -1.07 mm, respectively; the standard deviations of the systematic deviation were 1.80 mm and 2.26 mm, and of the random deviation 1.56 mm and 1.27 mm, respectively. The mean deviations of the z and y axes were -1.76 mm and 1.08 mm, respectively; the standard deviations of the systematic deviation were 1.87 mm and 2.83 mm, and of the random deviation 1.68 mm and 1.65 mm, respectively. Because of reducing random and systematic error
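
    One common way to summarize such portal-imaging data is to take the standard deviation of the per-patient mean deviations as the systematic component and the root-mean-square of the per-patient standard deviations as the random component. The sketch below assumes that convention and uses invented deviations; it is not the actual analysis from this study.

        import numpy as np

        rng = np.random.default_rng(3)

        # Hypothetical setup deviations (mm): rows = patients, columns = repeat EPID images.
        n_patients, n_images = 10, 8
        per_patient_offset = rng.normal(0.0, 2.0, size=(n_patients, 1))
        deviations = per_patient_offset + rng.normal(0.0, 1.5, size=(n_patients, n_images))

        patient_means = deviations.mean(axis=1)
        patient_sds = deviations.std(axis=1, ddof=1)

        group_mean = patient_means.mean()                  # overall mean deviation
        sigma_systematic = patient_means.std(ddof=1)       # SD of per-patient means
        sigma_random = np.sqrt((patient_sds ** 2).mean())  # RMS of per-patient SDs

        print(f"mean deviation       : {group_mean:.2f} mm")
        print(f"systematic deviation : {sigma_systematic:.2f} mm (SD)")
        print(f"random deviation     : {sigma_random:.2f} mm (SD)")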

  20. Error field considerations for BPX

    International Nuclear Information System (INIS)

    LaHaye, R.J.

    1992-01-01

    Irregularities in the position of poloidal and/or toroidal field coils in tokamaks produce resonant toroidal asymmetries in the vacuum magnetic fields. Otherwise stable tokamak discharges become non-linearly unstable to disruptive locked modes when subjected to low-level error fields. Because of the field errors, magnetic islands are produced which would not otherwise occur in tearing-mode-stable configurations; a concomitant reduction of the total confinement can result. Poloidal and toroidal asymmetries arise in the heat flux to the divertor target. In this paper, the field errors from perturbed BPX coils are used in a field line tracing code of the BPX equilibrium to study these deleterious effects. Limits on coil irregularities for device design and fabrication are computed along with possible correcting coils for reducing such field errors

  1. The uncorrected refractive error challenge

    Directory of Open Access Journals (Sweden)

    Kovin Naidoo

    2016-11-01

    Full Text Available Refractive error affects people of all ages, socio-economic status and ethnic groups. The most recent statistics estimate that, worldwide, 32.4 million people are blind and 191 million people have vision impairment. Vision impairment has been defined based on distance visual acuity only, and uncorrected distance refractive error (mainly myopia) is the single biggest cause of worldwide vision impairment. However, when we also consider near visual impairment, it is clear that even more people are affected. From research it was estimated that the number of people with vision impairment due to uncorrected distance refractive error was 107.8 million [1], and the number of people affected by uncorrected near refractive error was 517 million, giving a total of 624.8 million people.

  2. Comprehensive Error Rate Testing (CERT)

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) implemented the Comprehensive Error Rate Testing (CERT) program to measure improper payments in the Medicare...

  3. Numerical optimization with computational errors

    CERN Document Server

    Zaslavski, Alexander J

    2016-01-01

    This book studies the approximate solutions of optimization problems in the presence of computational errors. A number of results are presented on the convergence behavior of algorithms in a Hilbert space; these algorithms are examined taking into account computational errors. The author illustrates that algorithms generate a good approximate solution if computational errors are bounded from above by a small positive constant. Known computational errors are examined with the aim of determining an approximate solution. Researchers and students interested in optimization theory and its applications will find this book instructive and informative. This monograph contains 16 chapters, including chapters devoted to the subgradient projection algorithm, the mirror descent algorithm, the gradient projection algorithm, Weiszfeld’s method, constrained convex minimization problems, the convergence of a proximal point method in a Hilbert space, the continuous subgradient method, penalty methods and Newton’s meth...

  4. Dual processing and diagnostic errors.

    Science.gov (United States)

    Norman, Geoff

    2009-09-01

    In this paper, I review evidence from two theories in psychology relevant to diagnosis and diagnostic errors. "Dual process" theories of thinking, frequently mentioned with respect to diagnostic error, propose that categorization decisions can be made with either a fast, unconscious, contextual process called System 1 or a slow, analytical, conscious, and conceptual process called System 2. Exemplar theories of categorization propose that many category decisions in everyday life are made by unconscious matching to a particular example in memory, and these remain available and retrievable individually. I then review studies of clinical reasoning based on these theories, and show that the two processes are equally effective; System 1, despite its reliance on idiosyncratic, individual experience, is no more prone to cognitive bias or diagnostic error than System 2. Further, I review evidence that instructions directed at encouraging the clinician to explicitly use both strategies can lead to a consistent reduction in error rates.

  5. Nuclear standardization development study

    International Nuclear Information System (INIS)

    Pan Jianjun

    2010-01-01

    The nuclear industry is an important part of national security and national economic development and a key area of new energy supported by the government. Nuclear standardization is an important force for nuclear industry development, a fundamental guarantee of safe nuclear production, and a valuable means of bringing China's nuclear industry technology to the world market. Nuclear standardization now faces a new development opportunity; it should implement strategies in standard system building, foreign standard research, company standard building, and talent development to meet the requirements of nuclear industry development. (author)

  6. Error correcting coding for OTN

    DEFF Research Database (Denmark)

    Justesen, Jørn; Larsen, Knud J.; Pedersen, Lars A.

    2010-01-01

    Forward error correction codes for 100 Gb/s optical transmission are currently receiving much attention from transport network operators and technology providers. We discuss the performance of hard decision decoding using product type codes that cover a single OTN frame or a small number...... of such frames. In particular we argue that a three-error correcting BCH is the best choice for the component code in such systems....

  7. Negligence, genuine error, and litigation

    OpenAIRE

    Sohn DH

    2013-01-01

    David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or of system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort syst...

  8. Eliminating US hospital medical errors.

    Science.gov (United States)

    Kumar, Sameer; Steinebach, Marc

    2008-01-01

    Healthcare costs in the USA have continued to rise steadily since the 1980s. Medical errors are one of the major causes of deaths and injuries of thousands of patients every year, contributing to soaring healthcare costs. The purpose of this study is to examine what has been done to deal with the medical-error problem in the last two decades and to present a closed-loop mistake-proof operation system for surgery processes that would likely eliminate preventable medical errors. The design method used is a combination of creating a service blueprint, implementing the six sigma DMAIC cycle, developing cause-and-effect diagrams as well as devising poka-yokes in order to develop a robust surgery operation process for a typical US hospital. In the improve phase of the six sigma DMAIC cycle, a number of poka-yoke techniques are introduced to prevent typical medical errors (identified through cause-and-effect diagrams) that may occur in surgery operation processes in US hospitals. It is the authors' assertion that implementing the new service blueprint along with the poka-yokes will likely improve the current medical error rate to the six-sigma level. Additionally, designing as many redundancies as possible into the delivery of care will help reduce medical errors. Primary healthcare providers should strongly consider investing in adequate doctor and nurse staffing, and improving their education related to the quality of service delivery, to minimize clinical errors. This will lead to higher fixed costs, especially in the shorter time frame. This paper focuses on the additional attention needed to make a sound technical and business case for implementing six sigma tools to eliminate medical errors, which will enable hospital managers to increase their hospital's profitability in the long run and also ensure patient safety.

  9. Does semantic impairment explain surface dyslexia? VLSM evidence for a double dissociation between regularization errors in reading and semantic errors in picture naming

    Directory of Open Access Journals (Sweden)

    Sara Pillay

    2014-04-01

    Full Text Available The correlation between semantic deficits and exception word regularization errors ("surface dyslexia") in semantic dementia has been taken as strong evidence for involvement of semantic codes in exception word pronunciation. Rare cases with semantic deficits but no exception word reading deficit have been explained as due to individual differences in reading strategy, but this account is hotly debated. Semantic dementia is a diffuse process that always includes semantic impairment, making lesion localization difficult and independent assessment of semantic deficits and reading errors impossible. We addressed this problem using voxel-based lesion symptom mapping in 38 patients with left hemisphere stroke. Patients were all right-handed, native English speakers and at least 6 months from stroke onset. Patients performed an oral reading task that included 80 exception words (words with inconsistent orthographic-phonologic correspondence, e.g., pint, plaid, glove). Regularization errors were defined as plausible but incorrect pronunciations based on application of spelling-sound correspondence rules (e.g., 'plaid' pronounced as "played"). Two additional tests examined explicit semantic knowledge and retrieval. The first measured semantic substitution errors during naming of 80 standard line drawings of objects. This error type is generally presumed to arise at the level of concept selection. The second test (semantic matching) required patients to match a printed sample word (e.g., bus) with one of two alternative choice words (e.g., car, taxi) on the basis of greater similarity of meaning. Lesions were labeled on high-resolution T1 MRI volumes using a semi-automated segmentation method, followed by diffeomorphic registration to a template. VLSM used an ANCOVA approach to remove variance due to age, education, and total lesion volume. Regularization errors during reading were correlated with damage in the posterior half of the middle temporal gyrus and

  10. [Medical errors: inevitable but preventable].

    Science.gov (United States)

    Giard, R W

    2001-10-27

    Medical errors are increasingly reported in the lay press. Studies have shown dramatic error rates of 10 percent or even higher. From a methodological point of view, studying the frequency and causes of medical errors is far from simple. Clinical decisions on diagnostic or therapeutic interventions are always taken within a clinical context. Reviewing outcomes of interventions without taking into account both the intentions and the arguments for a particular action will limit the conclusions from a study on the rate and preventability of errors. The interpretation of the preventability of medical errors is fraught with difficulties and probably highly subjective. Blaming the doctor personally does not do justice to the actual situation and especially the organisational framework. Attention to and improvement of the organisational aspects of error are far more important than litigating against the person. To err is and will remain human, and if we want to reduce the incidence of faults we must be able to learn from our mistakes. That requires an open attitude towards medical mistakes, a continuous effort in their detection, a sound analysis and, where feasible, the institution of preventive measures.

  11. Quantum error correction for beginners

    International Nuclear Information System (INIS)

    Devitt, Simon J; Nemoto, Kae; Munro, William J

    2013-01-01

    Quantum error correction (QEC) and fault-tolerant quantum computation represent one of the most vital theoretical aspects of quantum information processing. It was well known from the early developments of this exciting field that the fragility of coherent quantum systems would be a catastrophic obstacle to the development of large-scale quantum computers. The introduction of quantum error correction in 1995 showed that active techniques could be employed to mitigate this fatal problem. However, quantum error correction and fault-tolerant computation is now a much larger field and many new codes, techniques, and methodologies have been developed to implement error correction for large-scale quantum algorithms. In response, we have attempted to summarize the basic aspects of quantum error correction and fault-tolerance, not as a detailed guide, but rather as a basic introduction. The development in this area has been so pronounced that many in the field of quantum information, specifically researchers who are new to quantum information or people focused on the many other important issues in quantum computation, have found it difficult to keep up with the general formalisms and methodologies employed in this area. Rather than introducing these concepts from a rigorous mathematical and computer science framework, we instead examine error correction and fault-tolerance largely through detailed examples, which are more relevant to experimentalists today and in the near future. (review article)

  12. Medical Error and Moral Luck.

    Science.gov (United States)

    Hubbeling, Dieneke

    2016-09-01

    This paper addresses the concept of moral luck. Moral luck is discussed in the context of medical error, especially an error of omission that occurs frequently, but only rarely has adverse consequences. As an example, a failure to compare the label on a syringe with the drug chart results in the wrong medication being administered and the patient dies. However, this error may have previously occurred many times with no tragic consequences. Discussions on moral luck can highlight conflicting intuitions. Should perpetrators receive a harsher punishment because of an adverse outcome, or should they be dealt with in the same way as colleagues who have acted similarly, but with no adverse effects? An additional element to the discussion, specifically with medical errors, is that according to the evidence currently available, punishing individual practitioners does not seem to be effective in preventing future errors. The following discussion, using relevant philosophical and empirical evidence, posits a possible solution for the moral luck conundrum in the context of medical error: namely, making a distinction between the duty to make amends and assigning blame. Blame should be assigned on the basis of actual behavior, while the duty to make amends is dependent on the outcome.

  13. An investigation of error correcting techniques for OMV and AXAF

    Science.gov (United States)

    Ingels, Frank; Fryer, John

    1991-01-01

    The original objectives of this project were to build a test system for the NASA 255/223 Reed/Solomon encoding/decoding chip set and circuit board. This test system was then to be interfaced with a convolutional system at MSFC to examine the performance of the concatenated codes. After considerable work, it was discovered that the convolutional system could not function as needed. This report documents the design, construction, and testing of the test apparatus for the R/S chip set. The approach taken was to verify the error correcting behavior of the chip set by injecting known error patterns onto data and observing the results. Error sequences were generated using pseudo-random number generator programs, with Poisson time distribution between errors and Gaussian burst lengths. Sample means, variances, and numbers of un-correctable errors were calculated for each data set before testing.
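
    A present-day re-creation of that error-injection step might look like the sketch below: exponential gaps between bursts (a Poisson arrival process), Gaussian-distributed burst lengths, and a count of how many corrupted bytes fall into each RS(255,223) codeword. All parameters are invented and the Reed-Solomon decoder itself is not modeled.

        import numpy as np

        rng = np.random.default_rng(4)

        def burst_error_mask(n_bytes, mean_gap=200.0, burst_mean=4.0, burst_sd=2.0):
            """Boolean mask of corrupted byte positions: exponential inter-burst gaps
            (Poisson arrivals) and Gaussian-distributed burst lengths."""
            mask = np.zeros(n_bytes, dtype=bool)
            pos = 0
            while True:
                pos += int(rng.exponential(mean_gap)) + 1      # gap until the next burst
                if pos >= n_bytes:
                    break
                length = max(1, int(round(rng.normal(burst_mean, burst_sd))))
                mask[pos:pos + length] = True
                pos += length
            return mask

        # Inject errors into random data and tally errors per RS(255,223) codeword.
        data = rng.integers(0, 256, size=255 * 100, dtype=np.uint8)
        mask = burst_error_mask(data.size)
        corrupted = np.where(mask, data ^ rng.integers(1, 256, size=data.size, dtype=np.uint8), data)

        errors_per_codeword = mask.reshape(-1, 255).sum(axis=1)
        t_correctable = (255 - 223) // 2   # RS(255,223) corrects up to 16 byte errors per codeword
        print(f"corrupted bytes          : {(corrupted != data).sum()} of {data.size}")
        print(f"mean errors per codeword : {errors_per_codeword.mean():.2f}")
        print(f"variance                 : {errors_per_codeword.var(ddof=1):.2f}")
        print(f"uncorrectable codewords  : {(errors_per_codeword > t_correctable).sum()}")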

  14. Bias correction by use of errors-in-variables regression models in studies with K-X-ray fluorescence bone lead measurements.

    Science.gov (United States)

    Lamadrid-Figueroa, Héctor; Téllez-Rojo, Martha M; Angeles, Gustavo; Hernández-Ávila, Mauricio; Hu, Howard

    2011-01-01

    In-vivo measurement of bone lead by means of K-X-ray fluorescence (KXRF) is the preferred biological marker of chronic exposure to lead. Unfortunately, considerable measurement error associated with KXRF estimations can introduce bias in estimates of the effect of bone lead when this variable is included as the exposure in a regression model. Estimates of uncertainty reported by the KXRF instrument reflect the variance of the measurement error and, although they can be used to correct the measurement error bias, they are seldom used in epidemiological statistical analyses. Errors-in-variables regression (EIV) allows for correction of bias caused by measurement error in predictor variables, based on knowledge of the reliability of such variables. The authors propose a way to obtain reliability coefficients for bone lead measurements from uncertainty data reported by the KXRF instrument and compare, by the use of Monte Carlo simulations, results obtained using EIV regression models vs. those obtained by standard procedures. Results of the simulations show that Ordinary Least Squares (OLS) regression models provide severely biased estimates of effect, and that EIV provides nearly unbiased estimates. Although EIV effect estimates are more imprecise, their mean squared error is much smaller than that of OLS estimates. In conclusion, EIV is a better alternative than OLS to estimate the effect of bone lead when measured by KXRF. Copyright © 2010 Elsevier Inc. All rights reserved.
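
    The attenuation problem and a simple reliability-based correction (a basic regression-calibration variant of the errors-in-variables idea, not the authors' exact models) can be demonstrated with a short Monte Carlo sketch; the variances and the true slope below are invented.

        import numpy as np

        rng = np.random.default_rng(5)

        def one_replication(n=500, beta=0.8, var_true=1.0, var_meas=0.6):
            """Simulate an exposure measured with error; return naive and corrected slopes."""
            true_x = rng.normal(0.0, np.sqrt(var_true), n)
            observed_x = true_x + rng.normal(0.0, np.sqrt(var_meas), n)  # KXRF-like error
            y = beta * true_x + rng.normal(0.0, 1.0, n)                  # health outcome

            slope_naive = np.cov(observed_x, y)[0, 1] / observed_x.var(ddof=1)
            reliability = var_true / (var_true + var_meas)  # assumed known from uncertainty data
            return slope_naive, slope_naive / reliability

        results = np.array([one_replication() for _ in range(2000)])
        print("true slope            : 0.800")
        print(f"naive OLS   mean (SD) : {results[:, 0].mean():.3f} ({results[:, 0].std(ddof=1):.3f})")
        print(f"corrected   mean (SD) : {results[:, 1].mean():.3f} ({results[:, 1].std(ddof=1):.3f})")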

  15. Laser tracker error determination using a network measurement

    International Nuclear Information System (INIS)

    Hughes, Ben; Forbes, Alistair; Lewis, Andrew; Sun, Wenjuan; Veal, Dan; Nasr, Karim

    2011-01-01

    We report on a fast, easily implemented method to determine all the geometrical alignment errors of a laser tracker, to high precision. The technique requires no specialist equipment and can be performed in less than an hour. The technique is based on the determination of parameters of a geometric model of the laser tracker, using measurements of a set of fixed target locations, from multiple locations of the tracker. After fitting of the model parameters to the observed data, the model can be used to perform error correction of the raw laser tracker data or to derive correction parameters in the format of the tracker manufacturer's internal error map. In addition to determination of the model parameters, the method also determines the uncertainties and correlations associated with the parameters. We have tested the technique on a commercial laser tracker in the following way. We disabled the tracker's internal error compensation, and used a five-position, fifteen-target network to estimate all the geometric errors of the instrument. Using the error map generated from this network test, the tracker was able to pass a full performance validation test, conducted according to a recognized specification standard (ASME B89.4.19-2006). We conclude that the error correction determined from the network test is as effective as the manufacturer's own error correction methodologies

  16. Implementing parallel spreadsheet models for health policy decisions: The impact of unintentional errors on model projections.

    Science.gov (United States)

    Bailey, Stephanie L; Bono, Rose S; Nash, Denis; Kimmel, April D

    2018-01-01

    Spreadsheet software is increasingly used to implement systems science models informing health policy decisions, both in academia and in practice where technical capacity may be limited. However, spreadsheet models are prone to unintentional errors that may not always be identified using standard error-checking techniques. Our objective was to illustrate, through a methodologic case study analysis, the impact of unintentional errors on model projections by implementing parallel model versions. We leveraged a real-world need to revise an existing spreadsheet model designed to inform HIV policy. We developed three parallel versions of a previously validated spreadsheet-based model; versions differed by the spreadsheet cell-referencing approach (named single cells; column/row references; named matrices). For each version, we implemented three model revisions (re-entry into care; guideline-concordant treatment initiation; immediate treatment initiation). After standard error-checking, we identified unintentional errors by comparing model output across the three versions. Concordant model output across all versions was considered error-free. We calculated the impact of unintentional errors as the percentage difference in model projections between model versions with and without unintentional errors, using +/-5% difference to define a material error. We identified 58 original and 4,331 propagated unintentional errors across all model versions and revisions. Over 40% (24/58) of original unintentional errors occurred in the column/row reference model version; most (23/24) were due to incorrect cell references. Overall, >20% of model spreadsheet cells had material unintentional errors. When examining error impact along the HIV care continuum, the percentage difference between versions with and without unintentional errors ranged from +3% to +16% (named single cells), +26% to +76% (column/row reference), and 0% (named matrices). Standard error-checking techniques may not
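
    The comparison logic itself is simple to express in code. The sketch below uses hypothetical output names and values, and the same +/-5% materiality threshold as the study, to flag outputs that differ materially between two parallel implementations.

        # Compare the outputs of two parallel implementations of the same model and
        # flag any output whose values differ by more than a materiality threshold.
        def percent_difference(a: float, b: float) -> float:
            return 100.0 * (a - b) / b if b != 0 else float("inf")

        def compare_versions(outputs_a: dict, outputs_b: dict, threshold: float = 5.0) -> list:
            """Names (and % differences) of outputs that differ materially between versions."""
            flagged = []
            for name in outputs_a:
                diff = percent_difference(outputs_a[name], outputs_b[name])
                if abs(diff) > threshold:
                    flagged.append((name, round(diff, 1)))
            return flagged

        # Hypothetical projections along a care continuum from two parallel versions.
        version_named_cells = {"diagnosed": 8200, "in_care": 6100, "on_treatment": 4900}
        version_row_refs = {"diagnosed": 8200, "in_care": 7700, "on_treatment": 4900}

        print(compare_versions(version_named_cells, version_row_refs))
        # -> [('in_care', -20.8)], a material discrepancy to investigate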

  17. Dynamical Predictability of Monthly Means.

    Science.gov (United States)

    Shukla, J.

    1981-12-01

    We have attempted to determine the theoretical upper limit of dynamical predictability of monthly means for prescribed nonfluctuating external forcings. We have extended the concept of 'classical' predictability, which primarily refers to the lack of predictability due mainly to the instabilities of synoptic-scale disturbances, to the predictability of time averages, which are determined by the predictability of low-frequency planetary waves. We have carried out 60-day integrations of a global general circulation model with nine different initial conditions but identical boundary conditions of sea surface temperature, snow, sea ice and soil moisture. Three of these initial conditions are the observed atmospheric conditions on 1 January of 1975, 1976 and 1977. The other six initial conditions are obtained by superimposing over the observed initial conditions a random perturbation comparable to the errors of observation. The root-mean-square (rms) error of the random perturbations at all the grid points and all the model levels is 3 m s^-1 in the u and v components of wind. The rms vector wind error between the observed initial conditions is >15 m s^-1. It is hypothesized that, for a given averaging period, if the rms error among the time averages predicted from largely different initial conditions becomes comparable to the rms error among the time averages predicted from randomly perturbed initial conditions, the time averages are dynamically unpredictable. We have carried out an analysis of variance to compare the variability among the three groups, due to largely different initial conditions, and within each group, due to random perturbations. It is found that the variances among the first 30-day means, predicted from largely different initial conditions, are significantly different from the variances due to random perturbations in the initial conditions, whereas the variances among 30-day means for days 31-60 are not distinguishable from the variances due to random initial
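
    The analysis-of-variance idea, comparing the spread among means from largely different initial conditions with the spread from randomly perturbed ones, can be sketched with a one-way ANOVA on synthetic numbers; the group structure and noise levels below are invented for illustration and are not output from the circulation model.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(6)

        # Hypothetical 30-day-mean forecasts: three groups (largely different initial
        # states), each with three members started from randomly perturbed states.
        signal_sd, noise_sd = 2.0, 0.7
        group_centres = rng.normal(0.0, signal_sd, 3)
        days_1_30 = [centre + rng.normal(0.0, noise_sd, 3) for centre in group_centres]

        # Days 31-60: the groups have largely forgotten their initial conditions.
        days_31_60 = [rng.normal(0.0, noise_sd, 3) for _ in range(3)]

        for label, groups in (("days 1-30 ", days_1_30), ("days 31-60", days_31_60)):
            f_stat, p_val = stats.f_oneway(*groups)
            print(f"{label}: F = {f_stat:6.2f}, p = {p_val:.3f}")
        # A large F (small p) for days 1-30 suggests the time means are still predictable;
        # an F near 1 for days 31-60 suggests the spread is indistinguishable from noise.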

  18. Evaluación de lubricante para apoyos de ventilador en planta termoeléctrica con empleo de norma ISO 7902. // Evaluation of lubricant for fan supports in thermoelectric plant by means of ISO standard 7902.

    Directory of Open Access Journals (Sweden)

    A. García Toll

    2007-05-01

    Full Text Available During the repair of a boiler flue-gas recirculation fan in a thermoelectric plant, the dimensions of the ring-lubricated plain bearings that support the rotor were modified. It was therefore necessary to analyze whether the same lubricant used before the repair could still be employed. The proposed oil was evaluated using the standard ISO 7902, "Hydrodynamic plain journal bearings under steady-state conditions - Circular cylindrical bearings", which allows the working capacity of a plain bearing to be verified under hydrodynamic lubrication conditions for a selected lubricant, given the geometry and load system of the bearing application. The paper presents a practical example of lubricant evaluation, including an analysis of lubricant film thickness, for a typical plain journal bearing application. Key words: plain bearings, lubricant, load capacity, ISO Standard 7902.

  19. Wind and load forecast error model for multiple geographically distributed forecasts

    Energy Technology Data Exchange (ETDEWEB)

    Makarov, Yuri V.; Reyes-Spindola, Jorge F.; Samaan, Nader; Diao, Ruisheng; Hafen, Ryan P. [Pacific Northwest National Laboratory, Richland, WA (United States)

    2010-07-01

    The impact of wind and load forecast errors on power grid operations is frequently evaluated by conducting multi-variant studies, where these errors are simulated repeatedly as random processes based on their known statistical characteristics. To simulate these errors correctly, we need to reflect their distributions (which do not necessarily follow a known distribution law), standard deviations, and auto- and cross-correlations. For instance, load and wind forecast errors can be closely correlated in different zones of the system. This paper introduces a new methodology for generating multiple cross-correlated random processes to produce forecast error time-domain curves based on a transition probability matrix computed from an empirical error distribution function. The matrix will be used to generate new error time series with statistical features similar to observed errors. We present the derivation of the method and some experimental results obtained by generating new error forecasts together with their statistics. (orig.)
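
    The core of the method, as described, is a transition probability matrix estimated from an empirical error distribution. A stripped-down sketch for a single error series might look like the following; the multi-zone cross-correlation machinery of the paper is not reproduced, and the "observed" series here is synthetic.

        import numpy as np

        rng = np.random.default_rng(7)

        # Synthetic autocorrelated forecast errors standing in for observed data.
        observed = np.zeros(5000)
        for t in range(1, observed.size):
            observed[t] = 0.8 * observed[t - 1] + rng.normal(0.0, 1.0)

        # Discretize into bins and estimate the empirical transition probability matrix.
        n_bins = 20
        edges = np.quantile(observed, np.linspace(0.0, 1.0, n_bins + 1))
        states = np.clip(np.digitize(observed, edges[1:-1]), 0, n_bins - 1)
        counts = np.zeros((n_bins, n_bins))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1
        transition = counts / counts.sum(axis=1, keepdims=True)

        # Generate a new error series with a similar distribution and autocorrelation.
        centres = 0.5 * (edges[:-1] + edges[1:])
        state = states[0]
        simulated = np.empty(5000)
        for t in range(simulated.size):
            state = rng.choice(n_bins, p=transition[state])
            simulated[t] = centres[state]

        def lag1_autocorr(x):
            return np.corrcoef(x[:-1], x[1:])[0, 1]

        print(f"std   observed / simulated: {observed.std():.2f} / {simulated.std():.2f}")
        print(f"lag-1 observed / simulated: {lag1_autocorr(observed):.2f} / {lag1_autocorr(simulated):.2f}")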

  20. Detecting errors in micro and trace analysis by using statistics

    DEFF Research Database (Denmark)

    Heydorn, K.

    1993-01-01

    By assigning a standard deviation to each step in an analytical method it is possible to predict the standard deviation of each analytical result obtained by this method. If the actual variability of replicate analytical results agrees with the expected, the analytical method is said...... to be in statistical control. Significant deviations between analytical results from different laboratories reveal the presence of systematic errors, and agreement between different laboratories indicate the absence of systematic errors. This statistical approach, referred to as the analysis of precision, was applied...
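
    One way to operationalize this analysis of precision is to propagate the per-step standard deviations into a predicted standard deviation for a single result, then test replicate results against that prediction with a chi-square statistic. The step names, uncertainties and replicate values below are invented placeholders, not data from the paper.

        import numpy as np
        from scipy import stats

        # Relative standard deviations assigned to each step of a (hypothetical) method.
        step_rsd = {"sampling": 0.020, "dissolution": 0.010, "irradiation": 0.015, "counting": 0.025}
        predicted_rsd = np.sqrt(sum(v ** 2 for v in step_rsd.values()))
        print(f"predicted relative SD of a single result: {100 * predicted_rsd:.1f} %")

        # Replicate results for the same sample (hypothetical values, arbitrary units).
        replicates = np.array([10.12, 10.45, 9.88, 10.30, 10.05, 10.21])
        mean = replicates.mean()
        predicted_sd = predicted_rsd * mean

        # Chi-square check: does the observed scatter agree with the predicted SD?
        chi2 = ((replicates - mean) ** 2).sum() / predicted_sd ** 2
        dof = replicates.size - 1
        p_value = stats.chi2.sf(chi2, dof)
        verdict = "in statistical control" if p_value > 0.05 else "excess variability, systematic error suspected"
        print(f"chi-square = {chi2:.2f} on {dof} d.f., p = {p_value:.2f} -> {verdict}")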

  1. The dance of meaning

    DEFF Research Database (Denmark)

    Rasmussen, Ole Elstrup

    2005-01-01

    competence, qualifications, sense making, reasoning, meaning, intentionality, interpersonal relationship...

  2. Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom.

    Science.gov (United States)

    Onishi, Hideo; Matsutake, Yuki; Kawashima, Hiroki; Matsutomo, Norikazu; Amijima, Hizuru

    2011-01-01

    In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z-score imaging system (eZIS) and a 3D-SSP system with respect to errors of anatomical standardization, using 3D digital brain phantom images. We developed a 3D digital brain phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25° or more of head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value. Meanwhile, one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, 3D-SSP is desirable. In a statistical image analysis, the image must always be reconfirmed after anatomical standardization.

  3. Comparative study of anatomical normalization errors in SPM and 3D-SSP using digital brain phantom

    International Nuclear Information System (INIS)

    Onishi, Hideo; Matsutomo, Norikazu; Matsutake, Yuki; Kawashima, Hiroki; Amijima, Hizuru

    2011-01-01

    In single photon emission computed tomography (SPECT) cerebral blood flow studies, two major algorithms are widely used: statistical parametric mapping (SPM) and three-dimensional stereotactic surface projections (3D-SSP). The aim of this study is to compare an SPM algorithm-based easy Z-score imaging system (eZIS) and a 3D-SSP system with respect to errors of anatomical standardization, using 3D digital brain phantom images. We developed a 3D digital brain phantom based on MR images to simulate the effects of head tilt, perfusion defective region size, and count value reduction rate on the SPECT images. This digital phantom was used to compare the errors of anatomical standardization by the eZIS and the 3D-SSP algorithms. While the eZIS allowed accurate standardization of the images of the phantom simulating a head in rotation, lateroflexion, anteflexion, or retroflexion without angle dependency, the standardization by 3D-SSP was not accurate enough at approximately 25 deg or more of head tilt. When the simulated head contained perfusion defective regions, one of the 3D-SSP images showed an error of 6.9% from the true value. Meanwhile, one of the eZIS images showed an error as large as 63.4%, revealing a significant underestimation. When required to evaluate regions with decreased perfusion due to such causes as hemodynamic cerebral ischemia, 3D-SSP is desirable. In a statistical image analysis, the image must always be reconfirmed after anatomical standardization. (author)

  4. Error and discrepancy in radiology: inevitable or avoidable?

    OpenAIRE

    Brady, Adrian P.

    2016-01-01

    Abstract: Errors and discrepancies in radiology practice are uncomfortably common, with an estimated day-to-day rate of 3-5% of studies reported, and much higher rates reported in many targeted studies. Nonetheless, the meaning of the terms "error" and "discrepancy" and the relationship to medical negligence are frequently misunderstood. This review outlines the incidence of such events, the ways they can be categorized to aid understanding, and potential contributing factors, both human- and ...

  5. Rectifying calibration error of Goldmann applanation tonometer is easy!

    Directory of Open Access Journals (Sweden)

    Nikhil S Choudhari

    2014-01-01

    Full Text Available Purpose: The Goldmann applanation tonometer (GAT) is the current gold standard tonometer. However, its calibration error is common and can go unnoticed in clinics. Repair by the company has limitations. The purpose of this report is to describe a self-taught technique of rectifying the calibration error of GAT. Materials and Methods: Twenty-nine slit-lamp-mounted Haag-Streit Goldmann tonometers (Model AT 900 C/M; Haag-Streit, Switzerland) were included in this cross-sectional interventional pilot study. The technique of rectification of calibration error of the tonometer involved cleaning and lubrication of the instrument, followed by alignment of weights when lubrication alone didn't suffice. We followed the South East Asia Glaucoma Interest Group's definition of calibration error tolerance (acceptable GAT calibration error within ±2, ±3 and ±4 mm Hg at the 0, 20 and 60-mm Hg testing levels, respectively). Results: Twelve out of 29 (41.3%) GATs were out of calibration. The range of positive and negative calibration error at the clinically most important 20-mm Hg testing level was 0.5 to 20 mm Hg and -0.5 to -18 mm Hg, respectively. Cleaning and lubrication alone sufficed to rectify the calibration error of 11 (91.6%) faulty instruments. Only one (8.3%) faulty GAT required alignment of the counter-weight. Conclusions: Rectification of calibration error of GAT is possible in-house. Cleaning and lubrication of GAT can be carried out even by eye care professionals and may suffice to rectify calibration error in the majority of faulty instruments. Such an exercise may drastically reduce the downtime of the gold standard tonometer.

  6. The Geometric Mean Value Theorem

    Science.gov (United States)

    de Camargo, André Pierro

    2018-01-01

    In a previous article published in the "American Mathematical Monthly," Tucker ("Amer Math Monthly." 1997; 104(3): 231-240) severely criticized the Mean Value Theorem and, unfortunately, the majority of calculus textbooks also do not help to improve its reputation. The standard argument for proving it seems to be applying…

  7. Predictors of Errors of Novice Java Programmers

    Science.gov (United States)

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  8. Learning time-dependent noise to reduce logical errors: real time error rate estimation in quantum error correction

    Science.gov (United States)

    Huo, Ming-Xia; Li, Ying

    2017-12-01

    Quantum error correction is important to quantum information processing, as it allows us to reliably process information encoded in quantum error correction codes. Efficient quantum error correction benefits from knowledge of the error rates. We propose a protocol for monitoring error rates in real time without interrupting the quantum error correction. No adaptation of the quantum error correction code or its implementation circuit is required. The protocol can be directly applied to the most advanced quantum error correction techniques, e.g. the surface code. A Gaussian process algorithm is used to estimate and predict error rates based on error correction data from the past. We find that, using these estimated error rates, the probability of error correction failures can be significantly reduced by a factor increasing with the code distance.
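
    As a rough illustration of the estimation step described above, the sketch below fits a Gaussian process regression to a short series of error-rate estimates and extrapolates the drifting rate together with its uncertainty. It is a minimal sketch using scikit-learn and invented example data, not the authors' protocol or code.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Times (rounds of error correction) at which error rates were estimated
    t = np.linspace(0, 100, 25).reshape(-1, 1)
    # Hypothetical observations: a slowly drifting rate plus estimation noise
    true_rate = 0.010 + 0.002 * np.sin(t.ravel() / 15.0)
    observed = true_rate + rng.normal(0.0, 5e-4, size=t.shape[0])

    # RBF kernel captures the smooth drift; WhiteKernel absorbs estimation noise
    kernel = 1.0 * RBF(length_scale=20.0) + WhiteKernel(noise_level=1e-6)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t, observed)

    # Predict (and extrapolate) the time-dependent error rate with uncertainty
    t_new = np.linspace(0, 120, 61).reshape(-1, 1)
    mean, std = gp.predict(t_new, return_std=True)
    print(f"predicted rate at t=120: {mean[-1]:.4f} +/- {std[-1]:.4f}")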

  9. EG type radioactive calibration standards

    International Nuclear Information System (INIS)

    1980-01-01

    EG standards are standards with a radioactive substance deposited as a solution on filtration paper and, after drying, sealed into a plastic disc- or cylinder-shaped casing. They serve for the official testing of X-ray and gamma spectrometers and as test sources. The table shows the types of radionuclides used, the nominal values of activity and a total error of determination not exceeding ±4%. The activity of the standards is calculated from the charge and the specific activity of the standard solution used for their preparation. Tightness and surface contamination are measured for each standard. The manufacturer, UVVVR Praha, guarantees the given values of activity and total error of determination. (M.D.)

  10. The probability and the management of human error

    International Nuclear Information System (INIS)

    Dufey, R.B.; Saull, J.W.

    2004-01-01

    Embedded within modern technological systems, human error is the largest, and indeed dominant, contributor to accident causation. The consequences dominate the risk profiles for nuclear power and for many other technologies. We need to quantify the probability of human error for the system as an integral contribution within the overall system failure, as it is generally not separable or predictable for actual events. We also need to provide a means to manage and effectively reduce the failure (error) rate. The fact that humans learn from their mistakes allows a new determination of the dynamic probability and human failure (error) rate in technological systems. The result is consistent with, and derived from, the available world data for modern technological systems. Comparisons are made to actual data from large technological systems and recent catastrophes. Best estimate values and relationships can be derived for both the human error rate and the probability. We describe the potential for new approaches to the management of human error and safety indicators, based on the principles of error state exclusion and of the systematic effect of learning. A new equation is given for the probability of human error (λ) that combines the influences of early inexperience, learning from experience (ε) and stochastic occurrences with a finite minimum rate: λ = 5×10⁻⁵ + ((1/ε) - 5×10⁻⁵) exp(-3ε). The future failure rate is entirely determined by the experience: thus the past defines the future
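
    A minimal numerical reading of that equation is sketched below, under the assumption that ε is the accumulated experience in the non-dimensional units used by the authors; the experience values chosen here are illustrative only.

    import math

    def human_error_rate(experience, minimum_rate=5e-5, k=3.0):
        """Learning-curve error rate: decays from roughly 1/experience toward a finite minimum."""
        return minimum_rate + (1.0 / experience - minimum_rate) * math.exp(-k * experience)

    for eps in (0.1, 0.5, 1.0, 2.0, 5.0):
        print(f"experience = {eps:>4}: rate = {human_error_rate(eps):.6f}")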

  11. Primary 4πβ-γ coincidence system for standardization of radionuclides by means of plastic scintillators; Sistema primario por coincidencias 4πβ-γ para a padronizacao de radionuclideos empregando cintiladores plasticos

    Energy Technology Data Exchange (ETDEWEB)

    Baccarelli, Aida Maria

    2003-07-01

    The present work describes a 4π(α,β)-γ coincidence system for the absolute measurement of radionuclide activity, using a plastic scintillator in 4π geometry for charged-particle detection and a NaI(Tl) crystal for gamma-ray detection. Several shapes and dimensions of the plastic scintillator have been tried in order to obtain the best system configuration. Radionuclides which decay by alpha emission, β⁻, β⁺ and electron capture have been standardized. The results showed excellent agreement with another conventional primary system which makes use of a 4π proportional counter for X-ray and charged-particle detection. The system developed in the present work has some advantages when compared with conventional systems, namely, it does not need metal coating on the films used as radioactive source holders. Compared to liquid scintillators, it showed the advantage of not needing to be kept in the dark for more than 24 h to allow the phosphorescence induced by ambient light to decay. Therefore it can be set to count immediately after the sources are placed inside it. (author)

  12. Taking serial correlation into account in tests of the mean

    International Nuclear Information System (INIS)

    Zwiers, F.W.; Storch, H. von

    1993-01-01

    The comparison of means derived from samples of noisy data is a standard part of climatology. When the data are not serially correlated, the appropriate statistical tool for this task is usually the conventional Student's t-test. However, data frequently are serially correlated in climatological applications, with the result that the t-test in its standard form is not applicable. The usual solution to this problem is to scale the t-statistic by a factor which depends upon the equivalent sample size n_e. We show, by means of simulations, that the revised t-test is often conservative (the actual significance level is smaller than the specified significance level) when the equivalent sample size is known. However, in most practical cases the equivalent sample size is not known. Then the test becomes liberal (the actual significance level is greater than the specified significance level). This systematic error becomes small when the true equivalent sample size is large (greater than approximately 30). We re-examine the difficulties inherent in difference-of-means tests when there is serial dependence. We provide guidelines for the application of the 'usual' t-test and propose two alternative tests which substantially improve upon the 'usual' t-test when samples are small. (orig.)
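
    One common form of the scaling described above replaces each sample size n by an equivalent sample size n_e = n(1 - r1)/(1 + r1), where r1 is the lag-1 autocorrelation (an AR(1) approximation to the serial dependence). The sketch below implements that variant with synthetic AR(1) data; it is an illustrative assumption, not the authors' exact procedure or their proposed alternative tests.

    import numpy as np
    from scipy import stats

    def lag1_autocorr(x):
        x = x - x.mean()
        return float(np.sum(x[:-1] * x[1:]) / np.sum(x * x))

    def equivalent_sample_size(x):
        r1 = max(lag1_autocorr(x), 0.0)          # guard against negative estimates
        return len(x) * (1.0 - r1) / (1.0 + r1)

    def t_test_serial(x, y):
        # Difference-of-means t-test with n replaced by the equivalent sample size
        ne_x, ne_y = equivalent_sample_size(x), equivalent_sample_size(y)
        se = np.sqrt(x.var(ddof=1) / ne_x + y.var(ddof=1) / ne_y)
        t = (x.mean() - y.mean()) / se
        dof = ne_x + ne_y - 2
        return t, 2.0 * stats.t.sf(abs(t), dof)

    def ar1(n, phi, rng):
        # Synthetic serially correlated sample (AR(1) process)
        x = np.zeros(n)
        eps = rng.normal(size=n)
        for i in range(1, n):
            x[i] = phi * x[i - 1] + eps[i]
        return x

    rng = np.random.default_rng(1)
    x, y = ar1(100, 0.6, rng), ar1(100, 0.6, rng)    # equal means: should rarely reject
    print("t = %.2f, p = %.3f" % t_test_serial(x, y))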

  13. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    International Nuclear Information System (INIS)

    Rota Kops, Elena; Herzog, Hans

    2013-01-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled

  14. Errors in MR-based attenuation correction for brain imaging with PET/MR scanners

    Science.gov (United States)

    Rota Kops, Elena; Herzog, Hans

    2013-02-01

    Aim: Attenuation correction of PET data acquired by hybrid MR/PET scanners remains a challenge, even if several methods for brain and whole-body measurements have been developed recently. A template-based attenuation correction for brain imaging proposed by our group is easy to handle and delivers reliable attenuation maps in a short time. However, some potential error sources are analyzed in this study. We investigated the choice of template reference head among all the available data (error A), and possible skull anomalies of the specific patient, such as discontinuities due to surgery (error B). Materials and methods: An anatomical MR measurement and a 2-bed-position transmission scan covering the whole head and neck region were performed in eight normal subjects (4 females, 4 males). Error A: Taking alternatively one of the eight heads as reference, eight different templates were created by nonlinearly registering the images to the reference and calculating the average. Eight patients (4 females, 4 males; 4 with brain lesions, 4 w/o brain lesions) were measured in the Siemens BrainPET/MR scanner. The eight templates were used to generate the patients' attenuation maps required for reconstruction. ROI and VOI atlas-based comparisons were performed employing all the reconstructed images. Error B: CT-based attenuation maps of two volunteers were manipulated by manually inserting several skull lesions and filling a nasal cavity. The corresponding attenuation coefficients were substituted with the water's coefficient (0.096/cm). Results: Error A: The mean SUVs over the eight template pairs for all eight patients and all VOIs did not differ significantly from each other. Standard deviations up to 1.24% were found. Error B: After reconstruction of the volunteers' BrainPET data with the CT-based attenuation maps without and with skull anomalies, a VOI-atlas analysis was performed revealing very little influence of the skull lesions (less than 3%), while the filled nasal

  15. Attitudes of Mashhad Public Hospital's Nurses and Midwives toward the Causes and Rates of Medical Errors Reporting.

    Science.gov (United States)

    Mobarakabadi, Sedigheh Sedigh; Ebrahimipour, Hosein; Najar, Ali Vafaie; Janghorban, Roksana; Azarkish, Fatemeh

    2017-03-01

    Patient safety is one of the main objectives of healthcare services; however, medical errors are a prevalent potential occurrence for patients in treatment systems. Medical errors lead to increased patient mortality and to challenges such as prolonged inpatient stays and increased costs. Controlling medical errors is very important because these errors, besides being costly, threaten patient safety. The aim was to evaluate the attitudes of nurses and midwives toward the causes and rates of medical error reporting. It was a cross-sectional observational study. The study population was 140 midwives and nurses employed in Mashhad Public Hospitals. Data collection was done with the Goldstone 2001 revised questionnaire. SPSS 11.5 software was used for data analysis. Descriptive statistics (mean, standard deviation and relative frequency distribution) were used to summarize the data, and the results were presented as tables and charts; the chi-square test was used for the inferential analysis. Most midwives and nurses (39.4%) were in the age range of 25 to 34 years and the lowest percentage (2.2%) in the range of 55-59 years. The highest average number of medical errors was related to employees with three to four years of work experience, while the lowest was related to those with one to two years of work experience. The highest average number of medical errors occurred during the evening shift and the lowest during the night shift. Three main causes of medical errors were considered: illegible physician prescription orders, similarity of names between different drugs, and nurse fatigue. The most important causes of medical errors from the viewpoint of nurses and midwives are illegible physician's orders, drug name similarity, nurse fatigue and damaged labels or packaging of the drug, respectively. Head nurse feedback, peer

  16. The use of adaptive radiation therapy to reduce setup error: a prospective clinical study

    International Nuclear Information System (INIS)

    Yan Di; Wong, John; Vicini, Frank; Robertson, John; Horwitz, Eric; Brabbins, Donald; Cook, Carla; Gustafson, Gary; Stromberg, Jannifer; Martinez, Alvaro

    1996-01-01

    Purpose: Adaptive Radiation Therapy (ART) is a closed-loop feedback process where each patient's treatment is adaptively optimized according to the individual variation information measured during the course of treatment. The process aims to maximize the benefits of treatment for the individual patient. A prospective study is currently being conducted to test the feasibility and effectiveness of ART for clinical use. The present study is limited to compensating the effects of systematic setup error. Methods and Materials: The study includes 20 patients treated on a linear accelerator equipped with a computer-controlled multileaf collimator (MLC) and an electronic portal imaging device (EPID). Alpha cradles are used to immobilize those patients treated for disease in the thoracic and abdominal regions, and thermal plastic masks for the head and neck. Portal images are acquired daily. Setup error of each treatment field is quantified off-line every day. As determined from an earlier retrospective study of different clinical sites, the measured setup variations from the first 4 to 9 days are used to estimate the systematic setup error and the standard deviation of random setup error for each field. Setup adjustment is made if the estimated systematic setup error of the treatment field is larger than or equal to 2 mm. Instead of the conventional approach of repositioning the patient, setup correction is implemented by reshaping the MLC to compensate for the estimated systematic error. The entire process from analysis of portal images to the implementation of the modified MLC field is performed via a computer network. Systematic and random setup errors of the treatment after adjustment are compared with those prior to adjustment. Finally, the frequency distributions of block overlap cumulated throughout the treatment course are evaluated. Results: Sixty-seven percent of all treatment fields were reshaped to compensate for the estimated systematic errors. At the time of this writing

  17. The cost of human error intervention

    International Nuclear Information System (INIS)

    Bennett, C.T.; Banks, W.W.; Jones, E.D.

    1994-03-01

    DOE has directed that cost-benefit analyses be conducted as part of the review process for all new DOE orders. This new policy will have the effect of ensuring that DOE analysts can justify the implementation costs of the orders that they develop. We would like to argue that a cost-benefit analysis is merely one phase of a complete risk management program -- one that would more than likely start with a probabilistic risk assessment. The safety community defines risk as the probability of failure times the severity of consequence. An engineering definition of failure can be considered in terms of physical performance, as in mean-time-between-failure; or, it can be thought of in terms of human performance, as in probability of human error. The severity of consequence of a failure can be measured along any one of a number of dimensions -- economic, political, or social. Clearly, an analysis along one dimension cannot be directly compared to another, but a set of cost-benefit analyses, based on a series of cost dimensions, can be extremely useful to managers who must prioritize their resources. Over the last two years, DOE has been developing a series of human factors orders, directed at lowering the probability of human error -- or at least changing the distribution of those errors. The following discussion presents a series of cost-benefit analyses using historical events in the nuclear industry. However, we would first like to discuss some of the analytic cautions that must be considered when we deal with human error

  18. Less Truth Than Error: Massachusetts Teacher Tests

    Directory of Open Access Journals (Sweden)

    Walt Haney

    1999-02-01

    Full Text Available Scores on the Massachusetts Teacher Tests of reading and writing are highly unreliable. The tests' margin of error is roughly two to three times the range found on well-developed tests. A person retaking the MTT several times could have huge fluctuations in their scores even if their skill level did not change significantly. In fact, the 9 to 17 point margin of error calculated for the tests represents more than 10 percent of the grading scale (assumed to be 0 to 100). The large margin of error means there is both a high false-pass rate and a high false-failure rate. For example, a person who received a score of 72 on the writing test could have scored an 89 or a 55 simply because of the unreliability of the test. Since adults' reading and writing skills do not change a great deal over several months, this range of scores on the same test should not be possible. While this test is being touted as an accurate assessment of a person's fitness to be a teacher, one would expect the scores to accurately reflect a test-taker's verbal ability level. In addition to the large margin of error, the MTT contain questionable content that makes them poor tools for measuring test-takers' reading and writing skills. The content and lack of correlation between the reading and writing scores reduce the meaningfulness, or validity, of the tests. The validity is affected not just by the content, but by a host of factors, such as the conditions under which tests were administered and how they were scored. Interviews with a small sample of test-takers confirmed published reports concerning problems with the content and administration.
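
    For readers unfamiliar with how such a score margin of error arises, the classical test theory calculation is sketched below: the standard error of measurement is SEM = SD × sqrt(1 - reliability), and an approximate 95% band is ±1.96 SEM. The reliability and SD values used here are illustrative assumptions, not figures taken from the Massachusetts tests.

    import math

    def score_margin(sd, reliability, z=1.96):
        # Standard error of measurement and the corresponding 95% score band
        sem = sd * math.sqrt(1.0 - reliability)
        return z * sem

    for rel in (0.70, 0.80, 0.90):
        print(f"reliability = {rel:.2f}: 95% margin ~ +/-{score_margin(10.0, rel):.1f} points (SD = 10)")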

  19. Error review: Can this improve reporting performance?

    International Nuclear Information System (INIS)

    Tudor, Gareth R.; Finlay, David B.

    2001-01-01

    AIM: This study aimed to assess whether error review can improve radiologists' reporting performance. MATERIALS AND METHODS: Ten Consultant Radiologists reported 50 plain radiographs for which the diagnoses were established. Eighteen of the radiographs were normal, 32 showed an abnormality. The radiologists were shown their errors and then re-reported the series of radiographs after an interval of 4-5 months. The accuracy of the reports against the established diagnoses was assessed. The chi-square test was used to calculate the difference between the viewings. RESULTS: On re-reporting the radiographs, seven radiologists improved their accuracy score, two had a lower score and one radiologist showed no score difference. Mean accuracy pre-education was 82.2% (range 78-92%) and post-education was 88% (range 76-96%). Individually, two of the radiologists showed a statistically significant improvement post-education (P < 0.01, P < 0.05). Assessing the group as a whole, there was a trend for improvement post-education but this did not reach statistical significance. Assessing only the radiographs where errors were made on the initial viewing, for the group as a whole there was a 63% improvement post-education. CONCLUSION: We suggest that radiologists benefit from error review, although there was not a statistically significant improvement for the series of radiographs in total. This is partly explained by the fact that some radiologists gave incorrect responses post-education that had initially been correct, thus masking the effect of the educational intervention.

  20. Redundant measurements for controlling errors

    International Nuclear Information System (INIS)

    Ehinger, M.H.; Crawford, J.M.; Madeen, M.L.

    1979-07-01

    Current federal regulations for nuclear materials control require consideration of operating data as part of the quality control program and limits of error propagation. Recent work at the BNFP has revealed that operating data are subject to a number of measurement problems which are very difficult to detect and even more difficult to correct in a timely manner. Thus error estimates based on operational data reflect those problems. During the FY 1978 and FY 1979 R and D demonstration runs at the BNFP, redundant measurement techniques were shown to be effective in detecting these problems to allow corrective action. The net effect is a reduction in measurement errors and a significant increase in measurement sensitivity. Results show that normal operation process control measurements, in conjunction with routine accountability measurements, are sensitive problem indicators when incorporated in a redundant measurement program

  1. Negligence, genuine error, and litigation

    Directory of Open Access Journals (Sweden)

    Sohn DH

    2013-02-01

    Full Text Available David H Sohn, Department of Orthopedic Surgery, University of Toledo Medical Center, Toledo, OH, USA. Abstract: Not all medical injuries are the result of negligence. In fact, most medical injuries are the result either of the inherent risk in the practice of medicine, or due to system errors, which cannot be prevented simply through fear of disciplinary action. This paper will discuss the differences between adverse events, negligence, and system errors; the current medical malpractice tort system in the United States; and review current and future solutions, including medical malpractice reform, alternative dispute resolution, health courts, and no-fault compensation systems. The current political environment favors investigation of non-cap tort reform remedies; investment into more rational oversight systems, such as health courts or no-fault systems may reap both quantitative and qualitative benefits for a less costly and safer health system. Keywords: medical malpractice, tort reform, no fault compensation, alternative dispute resolution, system errors

  2. Sensation seeking and error processing.

    Science.gov (United States)

    Zheng, Ya; Sheng, Wenbin; Xu, Jing; Zhang, Yuanyuan

    2014-09-01

    Sensation seeking is defined by a strong need for varied, novel, complex, and intense stimulation, and a willingness to take risks for such experience. Several theories propose that the insensitivity to negative consequences incurred by risks is one of the hallmarks of sensation-seeking behaviors. In this study, we investigated the time course of error processing in sensation seeking by recording event-related potentials (ERPs) while high and low sensation seekers performed an Eriksen flanker task. Whereas there were no group differences in ERPs to correct trials, sensation seeking was associated with a blunted error-related negativity (ERN), which was female-specific. Further, different subdimensions of sensation seeking were related to ERN amplitude differently. These findings indicate that the relationship between sensation seeking and error processing is sex-specific. Copyright © 2014 Society for Psychophysiological Research.

  3. Errors of Inference Due to Errors of Measurement.

    Science.gov (United States)

    Linn, Robert L.; Werts, Charles E.

    Failure to consider errors of measurement when using partial correlation or analysis of covariance techniques can result in erroneous conclusions. Certain aspects of this problem are discussed and particular attention is given to issues raised in a recent article by Brewar, Campbell, and Crano. (Author)

  4. Reward positivity: Reward prediction error or salience prediction error?

    Science.gov (United States)

    Heydari, Sepideh; Holroyd, Clay B

    2016-08-01

    The reward positivity is a component of the human ERP elicited by feedback stimuli in trial-and-error learning and guessing tasks. A prominent theory holds that the reward positivity reflects a reward prediction error signal that is sensitive to outcome valence, being larger for unexpected positive events relative to unexpected negative events (Holroyd & Coles, 2002). Although the theory has found substantial empirical support, most of these studies have utilized either monetary or performance feedback to test the hypothesis. However, in apparent contradiction to the theory, a recent study found that unexpected physical punishments also elicit the reward positivity (Talmi, Atkinson, & El-Deredy, 2013). The authors of this report argued that the reward positivity reflects a salience prediction error rather than a reward prediction error. To investigate this finding further, in the present study participants navigated a virtual T maze and received feedback on each trial under two conditions. In a reward condition, the feedback indicated that they would either receive a monetary reward or not and in a punishment condition the feedback indicated that they would receive a small shock or not. We found that the feedback stimuli elicited a typical reward positivity in the reward condition and an apparently delayed reward positivity in the punishment condition. Importantly, this signal was more positive to the stimuli that predicted the omission of a possible punishment relative to stimuli that predicted a forthcoming punishment, which is inconsistent with the salience hypothesis. © 2016 Society for Psychophysiological Research.

  5. Radiology errors: are we learning from our mistakes?

    International Nuclear Information System (INIS)

    Mankad, K.; Hoey, E.T.D.; Jones, J.B.; Tirukonda, P.; Smith, J.T.

    2009-01-01

    Aim: To question practising radiologists and radiology trainees at a large international meeting in an attempt to survey individuals about error reporting. Materials and methods: Radiologists attending the 2007 Radiological Society of North America (RSNA) annual meeting were approached to fill in a written questionnaire. Participants were questioned as to their grade, country in which they practised, and subspecialty interest. They were asked whether they kept a personal log of their errors (with an error defined as 'a mistake that has management implications for the patient'), how many errors they had made in the preceding 12 months, and the types of errors that had occurred. They were also asked whether their local department held regular discrepancy/errors meetings, how many they had attended in the preceding 12 months, and the perceived atmosphere at these meetings (on a qualitative scale). Results: A total of 301 radiologists with a wide range of specialty interests from 32 countries agreed to take part. One hundred and sixty-six of 301 (55%) responders were consultant/attending grade. One hundred and thirty-five of 301 (45%) were residents/fellows. Fifty-nine of 301 (20%) responders kept a personal record of their errors. The number of errors made per person per year ranged from none (2%) to 16 or more (7%). The majority (91%) reported making between one and 15 errors/year. Overcalls (40%), undercalls (25%), and interpretation errors (15%) were the predominant error types. One hundred and seventy-eight of 301 (59%) participants stated that their department held regular errors meetings. One hundred and twenty-seven of 301 (42%) had attended three or more meetings in the preceding year. The majority (55%) who had attended errors meetings described the atmosphere as 'educational.' Only a small minority (2%) described the atmosphere as 'poor' meaning non-educational and/or blameful. Conclusion: Despite the undeniable importance of learning from errors

  6. ERROR HANDLING IN INTEGRATION WORKFLOWS

    Directory of Open Access Journals (Sweden)

    Alexey M. Nazarenko

    2017-01-01

    Full Text Available Simulation experiments performed while solving multidisciplinary engineering and scientific problems require the joint usage of multiple software tools. Further, when following a preset plan of experiment or searching for optimum solutions, the same sequence of calculations is run multiple times with various simulation parameters, input data, or conditions, while the overall workflow does not change. Automation of simulations like these requires implementing a workflow where tool execution and data exchange are usually controlled by a special type of software, an integration environment or platform. The result is an integration workflow (a platform-dependent implementation of some computing workflow) which, in the context of automation, is a composition of weakly coupled (in terms of communication intensity) typical subtasks. These compositions can then be decomposed back into a few workflow patterns (types of subtask interaction). The patterns, in their turn, can be interpreted as higher-level subtasks. This paper considers execution control and data exchange rules that should be imposed by the integration environment in the case of an error encountered by some integrated software tool. An error is defined as any abnormal behavior of a tool that invalidates its result data, thus disrupting the data flow within the integration workflow. The main requirement on the error handling mechanism implemented by the integration environment is to prevent abnormal termination of the entire workflow in case of missing intermediate results data. Error handling rules are formulated on the basic pattern level and on the level of a composite task that can combine several basic patterns as next-level subtasks. The cases where workflow behavior may be different, depending on the user's purposes, when an error takes place, and possible error handling options that can be specified by the user are also noted in the work.
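
    As an illustration of the basic requirement stated above (an error invalidates one tool's output without aborting the whole workflow), the sketch below wraps each tool call, records failures, and lets dependent steps be skipped rather than crash. It is a hypothetical minimal example, not the integration platform discussed in the article.

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class StepResult:
        name: str
        ok: bool
        data: Optional[object] = None
        error: Optional[str] = None

    def run_step(name: str, tool: Callable, upstream: StepResult) -> StepResult:
        if not upstream.ok:                      # missing input: propagate the gap, do not crash
            return StepResult(name, False, error=f"skipped: no data from {upstream.name}")
        try:
            return StepResult(name, True, data=tool(upstream.data))
        except Exception as exc:                 # tool misbehaved: invalidate only its own output
            return StepResult(name, False, error=str(exc))

    # Hypothetical tools standing in for integrated simulation software
    solver = lambda x: x * 2
    postproc = lambda x: 1.0 / (x - 8)           # fails when its input equals 8

    source = StepResult("source", True, data=4)
    s1 = run_step("solver", solver, source)
    s2 = run_step("postproc", postproc, s1)      # raises ZeroDivisionError, caught by the wrapper
    print([(r.name, r.ok, r.error) for r in (s1, s2)])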

  7. Analysis of Medication Error Reports

    Energy Technology Data Exchange (ETDEWEB)

    Whitney, Paul D.; Young, Jonathan; Santell, John; Hicks, Rodney; Posse, Christian; Fecht, Barbara A.

    2004-11-15

    In medicine, as in many areas of research, technological innovation and the shift from paper based information to electronic records has created a climate of ever increasing availability of raw data. There has been, however, a corresponding lag in our abilities to analyze this overwhelming mass of data, and classic forms of statistical analysis may not allow researchers to interact with data in the most productive way. This is true in the emerging area of patient safety improvement. Traditionally, a majority of the analysis of error and incident reports has been carried out based on an approach of data comparison, and starts with a specific question which needs to be answered. Newer data analysis tools have been developed which allow the researcher to not only ask specific questions but also to “mine” data: approach an area of interest without preconceived questions, and explore the information dynamically, allowing questions to be formulated based on patterns brought up by the data itself. Since 1991, United States Pharmacopeia (USP) has been collecting data on medication errors through voluntary reporting programs. USP’s MEDMARXsm reporting program is the largest national medication error database and currently contains well over 600,000 records. Traditionally, USP has conducted an annual quantitative analysis of data derived from “pick-lists” (i.e., items selected from a list of items) without an in-depth analysis of free-text fields. In this paper, the application of text analysis and data analysis tools used by Battelle to analyze the medication error reports already analyzed in the traditional way by USP is described. New insights and findings were revealed including the value of language normalization and the distribution of error incidents by day of the week. The motivation for this effort is to gain additional insight into the nature of medication errors to support improvements in medication safety.

  8. Medication errors: definitions and classification

    Science.gov (United States)

    Aronson, Jeffrey K

    2009-01-01

    To understand medication errors and to identify preventive strategies, we need to classify them and define the terms that describe them. The four main approaches to defining technical terms consider etymology, usage, previous definitions, and the Ramsey–Lewis method (based on an understanding of theory and practice). A medication error is ‘a failure in the treatment process that leads to, or has the potential to lead to, harm to the patient’. Prescribing faults, a subset of medication errors, should be distinguished from prescription errors. A prescribing fault is ‘a failure in the prescribing [decision-making] process that leads to, or has the potential to lead to, harm to the patient’. The converse of this, ‘balanced prescribing’ is ‘the use of a medicine that is appropriate to the patient's condition and, within the limits created by the uncertainty that attends therapeutic decisions, in a dosage regimen that optimizes the balance of benefit to harm’. This excludes all forms of prescribing faults, such as irrational, inappropriate, and ineffective prescribing, underprescribing and overprescribing. A prescription error is ‘a failure in the prescription writing process that results in a wrong instruction about one or more of the normal features of a prescription’. The ‘normal features’ include the identity of the recipient, the identity of the drug, the formulation, dose, route, timing, frequency, and duration of administration. Medication errors can be classified, invoking psychological theory, as knowledge-based mistakes, rule-based mistakes, action-based slips, and memory-based lapses. This classification informs preventive strategies. PMID:19594526

  9. Human Error and Organizational Management

    Directory of Open Access Journals (Sweden)

    Alecxandrina DEACONU

    2009-01-01

    Full Text Available The concern for performance is a topic that raises interest in the business environment but also in other areas that – even if they seem distant from this world – are aware of, interested in, or conditioned by economic development. As individual performance is very much influenced by the human resource, we chose to analyze in this paper the mechanisms that generate – consciously or not – human error nowadays. Moreover, the extremely tense Romanian context, where failure is rather a rule than an exception, made us investigate the phenomenon of generating a human error and the ways to diminish its effects.

  10. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    Science.gov (United States)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future) ZEUS sites to simulate arrival time data between each source and ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
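
    The Monte Carlo element described above can be sketched in a few lines: arrival times for a hypothetical planar station layout are perturbed with N(0, 20 µs) noise and the source is re-located by generic nonlinear least squares, so the spread of the retrieved positions gives a location-error estimate. This stands in for, and is not, the Iterative Oblate retrieval or the ZEAL code; the station coordinates and propagation model are simplified assumptions.

    import numpy as np
    from scipy.optimize import least_squares

    C = 3.0e8                                    # propagation speed, m/s (simplified)
    stations = np.array([[0.0, 0.0], [2.0e6, 0.0], [0.0, 2.0e6], [2.0e6, 2.0e6]])
    true_src = np.array([7.0e5, 1.2e6])

    def arrival_times(src):
        return np.linalg.norm(stations - src, axis=1) / C

    def residuals(src, observed):
        t = arrival_times(src)
        return (t - t[0]) - (observed - observed[0])   # arrival-time differences

    rng = np.random.default_rng(42)
    errors_km = []
    for _ in range(100):                         # 100 noisy realizations for one source
        noisy = arrival_times(true_src) + rng.normal(0.0, 20e-6, size=len(stations))
        fit = least_squares(residuals, x0=np.array([1.0e6, 1.0e6]), args=(noisy,))
        errors_km.append(np.linalg.norm(fit.x - true_src) / 1.0e3)

    print(f"mean location error: {np.mean(errors_km):.1f} km, std: {np.std(errors_km):.1f} km")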

  11. Preventing statistical errors in scientific journals.

    NARCIS (Netherlands)

    Nuijten, M.B.

    2016-01-01

    There is evidence for a high prevalence of statistical reporting errors in psychology and other scientific fields. These errors display a systematic preference for statistically significant results, distorting the scientific literature. There are several possible causes for this systematic error

  12. Comparison of subset-based local and FE-based global digital image correlation: Theoretical error analysis and validation

    KAUST Repository

    Pan, B.

    2016-03-22

    Subset-based local and finite-element-based (FE-based) global digital image correlation (DIC) approaches are the two primary image matching algorithms widely used for full-field displacement mapping. Very recently, the performances of these different DIC approaches have been experimentally investigated using numerical and real-world experimental tests. The results have shown that in typical cases, where the subset (element) size is no less than a few pixels and the local deformation within a subset (element) can be well approximated by the adopted shape functions, the subset-based local DIC outperforms FE-based global DIC approaches because the former provides slightly smaller root-mean-square errors and offers much higher computation efficiency. Here we investigate the theoretical origin and lay a solid theoretical basis for the previous comparison. We assume that systematic errors due to imperfect intensity interpolation and undermatched shape functions are negligibly small, and perform a theoretical analysis of the random errors or standard deviation (SD) errors in the displacements measured by two local DIC approaches (i.e., a subset-based local DIC and an element-based local DIC) and two FE-based global DIC approaches (i.e., Q4-DIC and Q8-DIC). The equations that govern the random errors in the displacements measured by these local and global DIC approaches are theoretically derived. The correctness of the theoretically predicted SD errors is validated through numerical translation tests under various noise levels. We demonstrate that the SD errors induced by the Q4-element-based local DIC, the global Q4-DIC and the global Q8-DIC are 4, 1.8-2.2 and 1.2-1.6 times greater, respectively, than that associated with the subset-based local DIC, which is consistent with our conclusions from previous work. © 2016 Elsevier Ltd. All rights reserved.

  13. Errors in laboratory medicine: practical lessons to improve patient safety.

    Science.gov (United States)

    Howanitz, Peter J

    2005-10-01

    , specimen acceptability, proficiency testing, critical value reporting, blood product wastage, and blood culture contamination. Error rate benchmarks for these performance measures were cited and recommendations for improving patient safety presented. Not only has each of the 8 performance measures proven practical, useful, and important for patient care, taken together, they also fulfill regulatory requirements. All laboratories should consider implementing these performance measures and standardizing their own scientific designs, data analysis, and error reduction strategies according to findings from these published studies.

  14. Measurement errors in voice-key naming latency for Hiragana.

    Science.gov (United States)

    Yamada, Jun; Tamaoka, Katsuo

    2003-12-01

    This study makes explicit the limitations and possibilities of voice-key naming latency research on single hiragana symbols (a Japanese syllabic script) by examining three sets of voice-key naming data against Sakuma, Fushimi, and Tatsumi's 1997 speech-analyzer voice-waveform data. Analysis showed that voice-key measurement errors can be substantial in standard procedures as they may conceal the true effects of significant variables involved in hiragana-naming behavior. While one can avoid voice-key measurement errors to some extent by applying Sakuma, et al.'s deltas and by excluding initial phonemes which induce measurement errors, such errors may be ignored when test items are words and other higher-level linguistic materials.

  15. [Darwinism and the meaning of "meaning"].

    Science.gov (United States)

    Castrodeza, Carlos

    2009-01-01

    The problem of the meaning of life is herewith contemplated from a Darwinian perspective. It is argued how factors such as existential depression, the concern about the meaning of "meaning," the problem of evil, death as the end of our personal identity, happiness as an unachievable goal, etc. may well have an adaptive dimension "controlled" neither by ourselves nor obscure third parties (conspiracy theories) but "simply" by our genes (replicators in general) so that little if anything is to be done to find a radical remedy for the human condition.

  16. Disclosing harmful medical errors to patients: tackling three tough cases.

    Science.gov (United States)

    Gallagher, Thomas H; Bell, Sigall K; Smith, Kelly M; Mello, Michelle M; McDonald, Timothy B

    2009-09-01

    A gap exists between recommendations to disclose errors to patients and current practice. This gap may reflect important, yet unanswered questions about implementing disclosure principles. We explore some of these unanswered questions by presenting three real cases that pose challenging disclosure dilemmas. The first case involves a pancreas transplant that failed due to the pancreas graft being discarded, an error that was not disclosed partly because the family did not ask clarifying questions. Relying on patient or family questions to determine the content of disclosure is problematic. We propose a standard of materiality that can help clinicians to decide what information to disclose. The second case involves a fatal diagnostic error that the patient's widower was unaware had happened. The error was not disclosed out of concern that disclosure would cause the widower more harm than good. This case highlights how institutions can overlook patients' and families' needs following errors and emphasizes that benevolent deception has little role in disclosure. Institutions should consider whether involving neutral third parties could make disclosures more patient centered. The third case presents an intraoperative cardiac arrest due to a large air embolism where uncertainty around the clinical event was high and complicated the disclosure. Uncertainty is common to many medical errors but should not deter open conversations with patients and families about what is and is not known about the event. Continued discussion within the medical profession about applying disclosure principles to real-world cases can help to better meet patients' and families' needs following medical errors.

  17. Prepopulated radiology report templates: a prospective analysis of error rate and turnaround time.

    Science.gov (United States)

    Hawkins, C M; Hall, S; Hardin, J; Salisbury, S; Towbin, A J

    2012-08-01

    Current speech recognition software allows exam-specific standard reports to be prepopulated into the dictation field based on the radiology information system procedure code. While it is thought that prepopulating reports can decrease the time required to dictate a study and the overall number of errors in the final report, this hypothesis has not been studied in a clinical setting. A prospective study was performed. During the first week, radiologists dictated all studies using prepopulated standard reports. During the second week, all studies were dictated after prepopulated reports had been disabled. Final radiology reports were evaluated for 11 different types of errors. Each error within a report was classified individually. The median time required to dictate an exam was compared between the 2 weeks. There were 12,387 reports dictated during the study, of which, 1,173 randomly distributed reports were analyzed for errors. There was no difference in the number of errors per report between the 2 weeks; however, radiologists overwhelmingly preferred using a standard report both weeks. Grammatical errors were by far the most common error type, followed by missense errors and errors of omission. There was no significant difference in the median dictation time when comparing studies performed each week. The use of prepopulated reports does not alone affect the error rate or dictation time of radiology reports. While it is a useful feature for radiologists, it must be coupled with other strategies in order to decrease errors.

  18. Economic impact of medication error: a systematic review.

    Science.gov (United States)

    Walsh, Elaine K; Hansen, Christina Raae; Sahm, Laura J; Kearney, Patricia M; Doherty, Edel; Bradley, Colin P

    2017-05-01

    Medication error is a significant source of morbidity and mortality among patients. Clinical and cost-effectiveness evidence are required for the implementation of quality of care interventions. Reduction of error-related cost is a key potential benefit of interventions addressing medication error. The aim of this review was to describe and quantify the economic burden associated with medication error. PubMed, Cochrane, Embase, CINAHL, EconLit, ABI/INFORM, Business Source Complete were searched. Studies published 2004-2016 assessing the economic impact of medication error were included. Cost values were expressed in Euro 2015. A narrative synthesis was performed. A total of 4572 articles were identified from database searching, and 16 were included in the review. One study met all applicable quality criteria. Fifteen studies expressed economic impact in monetary terms. Mean cost per error per study ranged from €2.58 to €111 727.08. Healthcare costs were used to measure economic impact in 15 of the included studies with one study measuring litigation costs. Four studies included costs incurred in primary care with the remaining 12 measuring hospital costs. Five studies looked at general medication error in a general population with 11 studies reporting the economic impact of an individual type of medication error or error within a specific patient population. Considerable variability existed between studies in terms of financial cost, patients, settings and errors included. Many were of poor quality. Assessment of economic impact was conducted predominantly in the hospital setting with little assessment of primary care impact. Limited parameters were used to establish economic impact. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Medication errors in pediatric inpatients

    DEFF Research Database (Denmark)

    Rishoej, Rikke Mie; Almarsdóttir, Anna Birna; Christesen, Henrik Thybo

    2017-01-01

    The aim was to describe medication errors (MEs) in hospitalized children reported to the national mandatory reporting and learning system, the Danish Patient Safety Database (DPSD). MEs were extracted from DPSD from the 5-year period of 2010–2014. We included reports from public hospitals on pati... safety in pediatric inpatients...

  20. Learner Corpora without Error Tagging

    Directory of Open Access Journals (Sweden)

    Rastelli, Stefano

    2009-01-01

    Full Text Available The article explores the possibility of adopting a form-to-function perspective when annotating learner corpora in order to get deeper insights about systematic features of interlanguage. A split between forms and functions (or categories) is desirable in order to avoid the "comparative fallacy" and because – especially in basic varieties – forms may precede functions (e.g., what resembles a "noun" might have a different function, or a function may show up in unexpected forms). In the computer-aided error analysis tradition, all items produced by learners are traced to a grid of error tags which is based on the categories of the target language. By contrast, we believe it is possible to record and make retrievable both words and sequences of characters independently of their functional-grammatical label in the target language. For this purpose, at the University of Pavia we adapted a probabilistic POS tagger designed for L1 to L2 data. Despite the criticism that this operation can raise, we found that it is better to work with "virtual categories" rather than with errors. The article outlines the theoretical background of the project and shows some examples in which some potential of SLA-oriented (non error-based) tagging will possibly be made clearer.